Empirical Research

Fraction Errors in a Digital Mathematics Environment: Latent Class and Transition Analysis

Sarah Marina Karamarkovich*, Teomara Rutherford

Journal of Numerical Cognition, 2019, Vol. 5(2), 158–188, https://doi.org/10.5964/jnc.v5i2.150

Received: 2017-10-11. Accepted: 2018-08-15. Published (VoR): 2019-08-22.

*Corresponding author at: 2310 Stinson Dr, Raleigh, NC 27695, USA. E-mail: skessle@ncsu.edu

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Student struggles with fractions are well documented, and due to fractions’ importance to later mathematics achievement, identification of the errors students make when solving fraction problems is an area of interest for both researchers and teachers. Within this study, we examine data on student fraction problem errors in pre- and post-quizzes in a digital mathematics environment. Students (n = 1,431) were grouped by prevalence of error types using latent class analysis. Three different classes of error profiles were identified in the pre-quiz data. A latent transition analysis was then used to determine if class membership and class structure changed from pre- to post-quiz. In both pre- and post-quiz, there was a class of students who appeared to be guessing and a class of students who performed well. One class structure was consistent with the idea that early fraction learners rely heavily on whole number principles. Identification of co-occurrence of and changes to fraction errors has implications for curricular design and pedagogical decisions, especially in light of movements toward personalized learning systems.

Keywords: fraction errors, elementary mathematics, latent class analysis, latent transition analysis

Educators have long been concerned with students’ struggles with fractions (Brown & Quinn, 2006). An analysis of the 1990 National Assessment of Educational Progress revealed that less than half of 12th graders were successful with fractions, percentages, decimals, and simple algebra (Mullis, Dossey, Owen, & Phillips, 1991). These rates have not improved in more recent years across a number of U.S. states (e.g., Bailey et al., 2015; Higgins, 2008; Kim, Schneider, Engec, & Siskind, 2006; NMAP, 2008; Torbeyns, Schneider, Xin, & Siegler, 2015). Struggle with fractions is especially concerning, because fraction knowledge is important for later success in mathematics (NMAP, 2008). In a study by Siegler and colleagues (2012), fraction and whole-number division knowledge were the strongest predictors of both algebra achievement and overall mathematics achievement five to six years later, even after controlling for a host of demographic and prior achievement factors. Similarly, Bailey, Hoard, Nugent, and Geary (2012) found that fraction achievement in sixth grade predicted general mathematics achievement in seventh grade, also after controlling for demographic and prior achievement. Additionally, fraction knowledge has been shown to be a stronger predictor of future mathematics achievement than whole number knowledge (e.g., Bailey et al., 2012; Booth & Newton, 2012; Siegler et al., 2012). This supports the hypothesis that fraction skills are among the early math skills more predictive of later mathematics achievement (e.g., Bailey et al., 2012).

With fractions critical to later mathematics achievement, it is important to understand some of the reasons students struggle with them. One way to do this is to examine the common errors students make when solving fraction and other mathematics problems (Bottge, Ma, Gassaway, Butler, & Toland, 2014; Voza, 2011). These error analyses can help researchers and educators address fraction misunderstandings by providing a target for mathematics interventions. With this paper, we extend the analysis of fraction errors to a digital mathematics environment—a type of instructional tool that is increasingly prevalent in the modern classroom (Buckingham, 2013). We identify which types of errors students make within the Spatial Temporal (ST) Math elementary mathematics learning software, examine whether students can be grouped according to these errors and their co-occurrence, and investigate how these groupings change after students have been exposed to fraction instruction within the platform.

Why Do Students Struggle With Fractions?

Prior research has offered a number of reasons why students struggle with fractions: fractions are multifaceted (e.g., Pantziara & Philippou, 2012), fraction algorithms are far more complex than those for natural numbers (e.g., Hiebert, 1992; Stafylidou & Vosniadou, 2004), and students often experience a gap between conceptual and procedural knowledge (e.g., Bailey, Siegler, & Geary, 2014; Jordan et al., 2013; Jordan, Resnick, Rodrigues, Hansen, & Dyson, 2017). The following sections explain how these factors contribute to student difficulty with fraction understanding.

Fractions Are Multifaceted

Fractions are thought to have five subconstructs: part-whole, ratio, quotient, operator, and measure—also known as magnitude (Charalambous & Pitta-Pantazi, 2007; Pantziara & Philippou, 2012). Each of these subconstructs represents a different aspect or way to use fractions. See Table 1 for descriptions of each construct.

Table 1

Sub-Constructs of Fractions

Sub-construct | Definition (Charalambous & Pitta-Pantazi, 2007) | Example with 2/3
Part-whole | an object is partitioned and the fraction compares the number of parts to the total number of partitions | two out of three slices of pizza
Ratio | comparison of two different quantities | two slices of cheese pizza and three slices of pepperoni
Operator | function applied to a number, object, or set | two-thirds of a pizza, regardless of the number of slices
Quotient | division | splitting two pizzas between three people
Measure/magnitude | where a fraction is placed on a number line (or how big the fraction is) | does not fit easily into the pizza example

Although many children may understand fractions as a part-whole relation, this concept alone is not enough for complete understanding of fractions (Lamon, 2012; Pantziara & Philippou, 2012). Charalambous and Pitta-Pantazi (2007) included knowledge of a fraction’s measure as necessary for developing proficiency in fractions and noted that performance on fraction measure tasks is poor compared to performance on tasks that represent the other constructs. One reason may be that a fraction’s two-dimensional form makes it harder for students to grasp the one-dimensional concept of measure (sometimes referred to as magnitude, e.g., Booth, Newton, & Twiss-Garrity, 2014). Needing to understand how two numbers (the numerator and denominator) condense into a single number, or one-dimensional point on the number line, makes it harder for students to perform simple procedures, such as ordering and comparing fractions (see the integrated theory of numerical development; Rinne, Ye, & Jordan, 2017; Siegler, Thompson, & Schneider, 2011). As in Table 1, consider 2/3 and compare it to 5/6. It is difficult to immediately know which fraction is larger. Seeing that both five and six are larger than two and three, students may believe that 5/6 is larger. However, students are taught that the bigger the denominator, the smaller the part; this may lead students to say 5/6 is the smaller number because it has the bigger denominator. It is not until we look at the relationship between two and three (for 2/3) and five and six (for 5/6) that we can be sure that 5/6 is larger.
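To make this comparison concrete, the sketch below (a Python illustration we add here, not part of the original study; the compare helper is our own) rewrites both fractions over a common denominator, mirroring the relational reasoning described above:

```python
import math
from fractions import Fraction

def compare(a: Fraction, b: Fraction) -> str:
    """Compare two fractions by rewriting them over a common denominator."""
    lcm = math.lcm(a.denominator, b.denominator)  # requires Python 3.9+
    na = a.numerator * (lcm // a.denominator)     # 2/3 -> 4/6
    nb = b.numerator * (lcm // b.denominator)     # 5/6 -> 5/6
    if na == nb:
        return f"{a} = {b}"
    return f"{a} < {b}" if na < nb else f"{a} > {b}"

print(compare(Fraction(2, 3), Fraction(5, 6)))  # "2/3 < 5/6", since 4/6 < 5/6
```

Note that comparing the rescaled numerators (4 versus 5) is exactly the relational step that whole-number reasoning skips.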

Fraction Procedures Are More Complex Than Natural Numbers

The algorithms used for addition, subtraction, multiplication, and division are also more complex for fractions than for natural numbers. When adding or subtracting natural numbers, one can simply combine the digits of the same place value and regroup if necessary. However, when adding or subtracting fractions, one must first find the least common denominator, transform the fractions so they share it, and then add or subtract the numerators. Table 2 further describes how arithmetic operations (and their algorithmic symbols) differ between fractions and natural numbers.

Table 2

Comparison of Fractions and Natural Numbers

Procedure | Natural Numbers | Fractions | Fraction Arithmetic
Form | takes the form ab | takes the form a/b |
Ordering | ordering depends on comparing similar place values | ordering depends on the relationship between numerator and denominator |
Quantities | discrete | discrete and continuous |
Addition/Subtraction | combine digits of the same place value | use equivalent fractions with a common denominator, then combine the numerators | a/b + c/d = da/db + bc/db = (da + bc)/db
Multiplication | multiplication makes the number bigger | multiplication makes the number bigger or smaller depending on the fractions | a/b × c/d = ac/bd
Division | division makes the number smaller | division makes the number bigger or smaller depending on the fractions | a/b ÷ c/d = a/b × d/c = ad/bc

Note. Differences as stated in prior research.
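As a concrete rendering of the addition algorithm in Table 2, the following Python sketch (our illustration; the function name and the final reduction step are our own choices) finds the least common denominator, rescales the numerators, and combines them:

```python
import math

def add_fractions(a: int, b: int, c: int, d: int) -> tuple[int, int]:
    """Compute a/b + c/d via the least common denominator, reduced."""
    lcd = math.lcm(b, d)                       # least common denominator
    total = a * (lcd // b) + c * (lcd // d)    # rescale and combine numerators
    g = math.gcd(total, lcd)                   # reduce the result
    return total // g, lcd // g

print(add_fractions(1, 2, 3, 4))  # 1/2 + 3/4 = 2/4 + 3/4 = 5/4 -> (5, 4)
```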

The typical order of mathematics instruction may compound student difficulties in understanding fraction complexity. In most curricula, students learn natural numbers and then fractions (e.g., National Governors Association Center for Best Practices, 2010). Although learning fractions after learning natural numbers is a logical sequence, students often have a hard time reconciling the schema of natural numbers with that of fractions (Hiebert, 1992; Mack, 1995; Stafylidou & Vosniadou, 2004). This may be especially problematic because algorithmic symbols for fractions do not have the same meaning as the algorithmic symbols for whole numbers (see Table 2). The change in the procedures associated with these symbols can be confusing for children, who need to modify their schema for the symbols (e.g., the procedure for adding natural numbers compared to the procedure for adding fractions; Hiebert, 1992, p. 294). Thus, it is not just the concepts of fractions that are complex; their procedures are complex as well and often do not correspond to the procedures used with whole numbers.

Conceptual Versus Procedural Knowledge

Each of the aforementioned challenges to understanding fractions highlights the difficulties students may have in gaining conceptual knowledge of fractions. Conceptual and procedural knowledge have a bidirectional relationship but are separate entities that contribute to fraction success (Baroody & Ginsburg, 1986; Rittle-Johnson & Alibali, 1999; Siegler & Stern, 1998). Thus, researchers sometimes split fraction knowledge into concepts and procedures (e.g., Bailey, Siegler, & Geary, 2014; Jordan et al., 2013; Jordan et al., 2017). Conceptual knowledge is the “explicit or implicit understanding of the principles that govern a domain and of the interrelations between pieces of knowledge in a domain” (Rittle-Johnson & Alibali, 1999, p. 175). By comparison, procedural knowledge is “the knowledge of how to perform mathematical tasks…meant to generate the right answer to a given type of problem” (Hallett et al., 2012, p. 470). Although both conceptual and procedural knowledge are important to fraction achievement, reliance on procedural knowledge alone is unlikely to result in true understanding and advanced achievement—students who rely primarily on procedural knowledge may apply it incorrectly because they lack the conceptual knowledge that tells them when and why to use specific procedures (Hallett et al., 2012).

What Types of Errors Do Students Make?

It is important to understand both why students struggle with fractions and how they struggle. To understand the how, researchers have examined the errors students make when solving fraction problems (e.g., Ashlock, 2001; Bottge et al., 2014; Brown & Quinn, 2006; Malone & Fuchs, 2017). To do this, researchers give participants a test or activity and observe the errors made. Most tests given in this manner have been free response (e.g., 1/2 + 3/4 = ______), although some have included multiple choice (e.g., Brown & Quinn, 2006). Although this procedure is used by most fraction error researchers, methods differ in which skills are tested and how errors are categorized. For example, some researchers have focused on specific fraction knowledge, like ordering or adding/subtracting fractions (Bottge et al., 2014; Malone & Fuchs, 2017), whereas others have focused on a broad range of fraction knowledge (Ashlock, 2001; Brown & Quinn, 2006). Additionally, some researchers define categories of errors a priori and then examine tests to see how often students make these errors (Bottge et al., 2014; Malone & Fuchs, 2017), whereas other researchers first give tests and then code and define errors from student answers (Ashlock, 2001; Brown & Quinn, 2006). Regardless of the method used, two broad categories of fraction errors have emerged: general misunderstanding of the concepts of fractions and improper application of whole number rules to fraction problems (e.g., Ashlock, 2001; Bottge et al., 2014; Brown & Quinn, 2006; Malone & Fuchs, 2017). These errors and their origins are further discussed in the context of the results of this study (see the Results sub-heading Fraction Errors).

Motivation for Current Study

There is an extant body of work identifying student errors in fraction problems and examining why students might make these errors (e.g., Ashlock, 2001; Brown & Quinn, 2006). Although these studies have contributed to our understanding of the structure of student struggles with fractions, they have largely viewed student errors in isolation, without consideration of the instructional context. Work examining student fraction errors using assessments embedded in authentic educational contexts might provide information more immediately applicable to the classroom. Additionally, previous research has used student errors to enhance instruction (e.g., Ashlock, 2001), but often focuses on individual errors and what they mean. However, it is likely that students who make one type of error also make another type of error. By classifying error profiles, teachers may be able to more efficiently address a combination of errors. Finally, examining profiles of errors both before and after an instructional event may reveal both typical patterns of change and those errors and patterns that may be resistant to instruction. This can lead to refinement in instruction aimed at common profile groups and the identification of smaller groups of students for more targeted intervention. Similar research has been conducted to identify which algebraic errors persist throughout instruction, but this approach has not yet been extended to fractions (Booth, Barbieri, et al., 2014).

Current Study

Using data from within the digital mathematics software ST Math, we identify common fraction errors, examine co-occurrence of errors to define student error profiles, and investigate the changes in profiles from before to after instruction. Specifically, we ask:

  1. What errors are made when students solve fraction problems within ST Math?

  2. How can students be grouped into classes by the errors they tend to make together? How do student characteristics differ between these classes?

  3. How do these classes change from pre-quiz to post-quiz?

We expect that certain errors will be more prevalent in the pre-quiz due to a naïve understanding of fractions, such as the ratio error or whole number ordering (Ashlock, 2001; Malone & Fuchs, 2017). We also expect errors related to fraction magnitude to be persistent into the post-quiz due to the construct’s complexity (Booth, Barbieri, et al., 2014; Rinne et al., 2017; Siegler et al., 2011).

Method

Context: ST Math

ST Math, created by MIND Research Institute (MIND), is interactive instructional software for computers and tablets based on theory suggesting that the ability to visualize mathematics concepts leads to better conceptual knowledge and performance (see Geary, 1995; National Research Council, 2005; Shaw & Peterson, 2000). ST Math has previously been shown to result in small improvements in math achievement, especially on topics involving number sense (Rutherford et al., 2014; Schenke, Rutherford, & Farkas, 2014). ST Math is currently used in 45 states with over one million students and is designed to align with both the Common Core and relevant state standards, including those in Florida, the site of this study.

Within ST Math, students progress through a number of objectives focused on specific math concepts. The ST Math content follows a hierarchical pattern: objective, sub-objective, game, level, puzzle. Within the objective/sub-objective, there are a variety of games that use the same imagery and design throughout their levels. Each of these games contains between one and 10 levels that increase in difficulty. Within each level, students complete interactive puzzles, which are the delivery method for the mathematics content. For each level, the student has between one and three lives; if they answer more puzzles incorrectly than they have lives, they are removed from the level and can choose to replay it or replay a previously passed level. Before students begin an objective, they must complete a five-question multiple-choice pre-quiz on that objective’s content. After demonstrating mastery of the content within an objective by successfully completing all levels within it, the student then completes a five-question multiple-choice post-quiz that mirrors the pre-quiz question-for-question in topic but uses different specific examples and numbers. The pre- and post-quizzes have either three or four answer choices for each question (see Table 5 and Figure 1 for examples).

The third-grade curriculum for the 2015-2016 Florida version of ST Math had 23 required objectives and eight optional objectives. These objectives covered content such as place values, addition and subtraction, multiplication and division, and fractions. The three objectives covering fraction content were part of the required curriculum and began approximately 65% of the way through the curriculum.

Participants

Participants were third graders from a school district in central Florida participating in an NSF-funded project relating gameplay within ST Math to student achievement and motivation. Students within the district played ST Math as part of their normal instruction during the 2015-2016 school year. The sample (N = 4,290) was limited to third graders with pre- and post-quiz data from the three fraction objectives (n = 2,187) who also had district-provided demographic and achievement data (n = 1,431). The 1,431 students attended 75 different schools; 54% were male, 60% were White, 16% were Hispanic, 13% were Black, 6% were Asian, and 46% qualified for free or reduced lunch (see Table 3). The analysis sample differed from the larger sample of third graders at statistically significant levels (ps < .05) on most demographics: it included fewer students with disabilities, fewer students who qualified for free/reduced lunch, fewer English Language Learners, and fewer Black students, but more gifted students, Asian students, and White students. Because the sample was limited to students who had completed all three fraction objectives, the reduced sample may have a lower percentage of students using ST Math for remedial reasons.

Table 3

Demographics

Variable Percent of Sample (n = 1,431) Percent of Total (N = 4,290)
Male 54 52
Student with Disability 10 17*
Free/Reduced Lunch 46 58*
English Language Learner 11 13*
Gifted 19 13*
Race
Asian 6 4*
Black 13 21*
Hispanic 16 18*
White 60 53*
Other 5 4

Note. Differences between total and analysis samples were determined using chi-squared tests. Both the analysis sample and the total sample included students from the same 75 schools.

*p < .05.

Measures of Fraction Errors

As implemented in Florida during the 2015-2016 school year, the ST Math curriculum aligned with the Florida Standards, which introduced fractions in the third grade (Mathematics Florida Standards, 2014; MIND Research Institute, 2017). The three fraction objectives within the ST Math third grade curriculum during this year were Fraction Concepts, Fractions on the Number Line, and Comparing Fractions. The pre- and post-quizzes for these objectives tested fraction concepts and algorithms. Table 4 describes the types of questions within each objective’s quizzes.

Table 4

ST Math Fraction Objectives for Third Graders

Quiz Question Types Example Question
Fraction Concepts (64% completion rate)
  1. match a fraction figure to a written fraction

  2. match whole numbers to their equivalent fractions (e.g., 3 = 3/1)

(Example screenshot showing the first question type)
Fractions on the Number Line (60% completion rate)
  1. match a point on a number line to a fraction

  2. match three fractions to points on the number line

  3. match a fraction to a point on the number line (possible points labeled with letters)

(Example screenshot showing the second question type)
Comparing Fractions (54% completion rate)
  1. match a fraction figure to a written fraction

  2. match three fractions to points on the number line

  3. identify the correct statement of magnitude comparisons

    1. identify the correct fraction sentence

    2. select the appropriate fraction to complete the number sentence

(Example screenshot showing question type 3b)

Procedure

Error Coding

Data were the answer choices on the pre- and post-quizzes for the fraction objectives. Only incorrect answer choices were included in the analysis of errors. Overall, there were 86 possible incorrect answer choices across the six quizzes (pre- and post-quizzes for three objectives). Error coding was completed in three key steps. First, answer choices were qualitatively coded using a priori and a posteriori codes. The a priori codes were taken from previous research on fraction errors (e.g., Ashlock, 2001; Bottge et al., 2014; Brown & Quinn, 2006; Malone & Fuchs, 2017). When errors did not align with a priori codes, they were given an a posteriori code based on the presumed logic used to select that answer. Second, after the first round of coding was finished, the authors collaborated with personnel at MIND to confirm or refine the researchers’ a posteriori codes. In designing the quiz questions, MIND often crafted incorrect answer options to represent common errors they had observed students make in prior versions of ST Math; these common errors from MIND’s observations were thus reflected in the final codes. Third, error code names were refined to follow consistent naming conventions. Even after the three rounds of coding, there were still some answer choices that did not fit a known or postulated error pattern. These answer choices were deemed filler (or random) choices. For example, in a question asking what fraction represented a circle with five parts and one part shaded (i.e., 1/5), the answer option 2/5 did not follow any identifiable error logic. This answer choice was thus coded as “filler.” From the 86 incorrect answer choices, there emerged eight distinct error types plus the filler choice, for a total of nine error categories.

Data Cleaning

To make the error codes usable as data, several steps were followed in the statistical analysis software Stata (Version 14; StataCorp, 2015). Variables were created representing the proportion of times the student made each specific error out of the total errors the student made. In this manner, if a student made the complement error twice and the filler error three times, they would have a total of five errors. The proportion for the complement error would be 0.4 (2/5) and the proportion for the filler error would be 0.6 (3/5). All other errors would have proportions of zero because the student did not make those errors. The last step was creating an ordinal scale that would be compatible with latent class and transition analyses (Goodman, 2002):

  1. the student did not make the error;

  2. the student made the error between 0% and 25% of the time (0 < x ≤ 0.25);

  3. the student made the error between 25% and 50% of the time (0.25 < x ≤ 0.50);

  4. the student made the error between 50% and 75% of the time (0.50 < x ≤ 0.75);

  5. the student made the error between 75% and 100% of the time (0.75 < x ≤ 1).

Error proportions were used in analyses instead of the number of errors made in order to control for the number of errors students made overall. Using raw counts would bias results by weighting heavily on overall student performance, and the relative frequency of each error would be lost.
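A minimal sketch of this transformation, written in Python for illustration (the study performed these steps in Stata 14; the column names and toy counts here are hypothetical):

```python
import pandas as pd

# Hypothetical input: one row per student, one column per error type,
# holding counts of how often the student made each error.
counts = pd.DataFrame({"complement": [2, 0], "filler": [3, 1], "reciprocal": [0, 3]})

# Proportion of each error out of the student's total errors.
totals = counts.sum(axis=1)
props = counts.div(totals, axis=0).fillna(0)  # students with no errors get 0

# Ordinal scale used for the latent class analysis:
# 1 = error not made; 2-5 = increasing proportion bins.
bins = [-float("inf"), 0, 0.25, 0.50, 0.75, 1.0]
ordinal = props.apply(lambda col: pd.cut(col, bins=bins, labels=[1, 2, 3, 4, 5]))

print(props.loc[0].tolist())    # [0.4, 0.6, 0.0]: complement 2/5, filler 3/5
print(ordinal.loc[0].tolist())  # [3, 4, 1] on the five-point ordinal scale
```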

Latent Class Analysis

Latent class analysis (LCA) is a mixture modeling method used with categorical variables. It attributes the relationship between variables to an unobserved, latent variable (Collins & Lanza, 2009; Goodman, 2002; Nylund, Asparouhov, & Muthén, 2007). The goal of LCA is to group people into classes based on their observed variables. These classes represent the underlying categorical latent variable. This procedure is similar to factor analysis; however, it provides a person-centered approach instead of a variable-centered approach. Thus, the classes are created to group similar participants together, rather than similar variables (Collins & Lanza, 2009; Goodman, 2002). The person-centered approach allows for generalizations of the patterns of behavior. In this study, LCA was used to determine fraction error patterns. Models with k versus k + 1 classes were tested iteratively to determine best fit. To determine best fit, sample size adjusted Bayesian Information Criterion (adjusted BIC), Bayes Factor, Entropy, and the bootstrap likelihood ratio test (BLRT) were used (Nylund et al., 2007). After conducting the LCA, class membership was regressed on demographics and game-play variables to further understand the composition of each class and external variables that predicted class membership. For this step, logistic regressions were used and were clustered at the teacher level to account for nesting within classrooms.
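To illustrate the mechanics of LCA, the sketch below implements a bare-bones EM estimator for a latent class model with binary indicators. This is a simplified stand-in: the study fit models with five-category ordinal indicators in Mplus, and details such as starting values, convergence checks, and multiple random restarts are omitted here.

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """EM for a latent class model with binary indicators X (n students x p items)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class proportions
    theta = rng.uniform(0.25, 0.75, (n_classes, p))   # P(indicator = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each student.
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class proportions and item probabilities.
        pi = post.mean(axis=0)
        theta = (post.T @ X) / post.sum(axis=0)[:, None]
        theta = theta.clip(1e-6, 1 - 1e-6)
    return pi, theta, post

# Toy usage: 1 = student made the error at least once, 0 = did not.
X = np.random.default_rng(1).integers(0, 2, size=(100, 9)).astype(float)
pi, theta, post = fit_lca(X, n_classes=3)
classes = post.argmax(axis=1)  # modal class assignment for each student
```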

Latent Transition Analysis

To determine if class membership changed between time points, such as between a pre- and post-quiz, latent transition analysis (LTA) was used (Rindskopf, 2010). Rindskopf (2010) defines LTA

as a statistical model in which (i) latent categorical constructs are defined at two or more time points, (ii) parameters are included that assess initial status and transition probabilities from time i to i + 1..., and (iii) observed variables are imperfect indicators of the hypothesized latent variables. (p. 199)

LTA regresses the class variable at the second time point onto the first time point to determine the likelihood of the classes remaining the same. LTA was used to determine if classes remained the same between pre- and post-quiz. The Mplus software (version 7; Muthén & Muthén, 1998-2015) was used to run both the LCA and LTA.
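As a simplified illustration of what the transition parameters capture, the sketch below cross-tabulates modal class assignments at two time points into a row-normalized transition matrix. This is only an approximation for exposition; the actual LTA estimates transition probabilities jointly with the measurement model rather than from hard assignments.

```python
import numpy as np

def transition_matrix(pre, post, n_classes):
    """Row-normalized cross-tabulation: P(post-quiz class | pre-quiz class)."""
    T = np.zeros((n_classes, n_classes))
    for a, b in zip(pre, post):
        T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

# Toy assignments for six students (class indices 0, 1, 2).
pre = np.array([0, 0, 1, 2, 2, 2])
post = np.array([0, 1, 1, 2, 2, 2])
print(transition_matrix(pre, post, 3))  # each row sums to 1
```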

Logistic regressions were used to determine how students differed between the classes, as was done with pre-quiz classes. Additionally, logistic regressions were used to predict the movement from pre- to post-quiz class. All logistic regressions were clustered at the teacher level to account for classroom similarities.
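For readers who want to reproduce the general pattern of these class-membership regressions, the sketch below uses statsmodels in Python on simulated stand-in data (the predictors, group labels, and sample size are placeholders, not the study's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 2)))   # e.g., pre-quiz average, gifted flag
y = rng.integers(0, 2, size=n)                 # 1 = member of the focal class
teacher = rng.integers(0, 40, size=n)          # clustering variable

# Logistic regression with teacher-clustered standard errors.
result = sm.Logit(y, X).fit(cov_type="cluster",
                            cov_kwds={"groups": teacher}, disp=False)
print(np.exp(result.params))  # exponentiated coefficients = odds ratios
```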

Results

Fraction Errors

Three of the nine error codes were developed a priori—illogical sizing/spacing (Ashlock, 2001), ratio (Ashlock, 2001), and whole number ordering (Malone & Fuchs, 2017)—and the other six were a posteriori—complement, filler, incomplete information, reciprocal, reducing fractions to whole numbers, and same numerator/denominator ordering. Table 5 summarizes the types of questions and the errors possible within each question type. Overall, students made an average of seven errors on pre-quizzes and four errors on post-quizzes (out of 15 questions each for both pre- and post-quizzes; see Table 6). Table 7 details the proportion of times each error was made and how many students made it. See Figure 1 for examples of errors.

Complement

The complement error occurred when the student chose the complementary fraction to the correct answer choice. For example, if the question asked what fraction represents the shaded part of a shape and the answer was 2/3, the complement would be 1/3. On average, students made this error 11% of the time in pre-quizzes and 9% of the time in post-quizzes. About 29% of students made the complement error in pre-quizzes and 23% in the post-quizzes.

Filler

The filler error encompassed answer choices that did not appear to follow a known error logic. Consider the example in Figure 1. The correct answer was 0/1, and 1/1 was coded as representative of the reducing fractions to whole numbers error. However, the options 1-0 and 0-1 were coded as filler because they did not fit with any other error and they addressed whole number subtraction more so than fractions. For the pre-quiz, there were six questions that contained at least one filler option, with eight answer choices coded as filler. For the post-quiz, there were four questions that contained at least one filler option, with six answer choices coded as filler. The filler error was made about 11% of the time on pre-quizzes and 7% of the time on post-quizzes. Approximately 49% of students made this error at least once on pre-quizzes and 55% on post-quizzes.

Illogical Sizing/Spacing

This error represented misunderstanding that fraction parts must have equal sizes and consistent spacing (Ashlock, 2001). The error manifested in two ways. The first was when students were presented with figures divided into unequal parts and were asked to match the figure to a fraction. Some students’ answers included the number of (unequal) parts in the denominator, indicating a failure to recognize that the parts of a fraction must be the same size. The second was when students were presented with number lines upon which fractions were placed with incorrect spacing. Students made this error, on average, 9% of the time on pre-quizzes and 10% of the time on post-quizzes. Twenty-nine percent of students made the illogical sizing/spacing error at least once on pre-quizzes and 35% of students made it at least once on post-quizzes.

Table 5

Summary of Question Types and Their Errors

Question Type Number of Questions Errors Possible
Matching written fractions to visual models 8
  • complement

  • illogical sizing/spacing

  • incomplete information

  • ratio

  • reciprocal

  • reducing fractions to whole numbers

Making equivalent fractions 4
  • complement

  • incomplete information

  • reciprocal

  • reducing fractions to whole numbers

Placing fraction(s) on a number line 12
  • complement

  • illogical sizing/spacing

  • incomplete information

  • reciprocal

  • whole number ordering

Comparing fractions 6
  • same numerator/denominator ordering

  • whole number ordering

Note. Question types are from the three fraction objectives.

Incomplete Information

This error had two different characterizations. The first was when the answer was partially correct but missing a critical part that would make it correct (e.g., if there was a mark placed at one third of a number line but the number line was zero to two, the student might have answered 1/3 when in reality the answer was 2/3). The second was when the answer matched only part of the question. For example, if the student was asked to identify the mixed number 1 3/4, a student making this error would answer only 3/4 or only 1, missing the totality of the original prompt. Students made this error, on average, 19% of the time on pre-quizzes and 8% of the time on post-quizzes. Sixty-seven percent of students made the incomplete information error at least once on pre-quizzes and 35% of students made it at least once on post-quizzes.

Ratio

Ratio errors were made when the student selected the fraction as the shaded parts over the unshaded parts instead of the shaded parts over all of the parts (Ashlock, 2001). Thus, if a circle was split into four equal parts with one of the parts shaded, a student making this error would select 1/3 instead of 1/4. On average, students made this error 7% of the time in pre-quizzes and 2% on post-quizzes. About 7% of students made the ratio error in pre-quizzes and 6% of students made this error in the post-quizzes.

Reciprocal

The reciprocal error occurred when the student flipped the numerator and denominator. For example, if the question asked what fraction represents the shaded part of a shape and the answer was 2/3, a student making the reciprocal error would select 3/2. Students made this error 15% of the time on pre-quizzes and 10% of the time on post-quizzes. Roughly half of the students made this error at least once on pre-quizzes (49%) and post-quizzes (45%).

Reducing Fractions to Whole Numbers

Unlike whole numbers, fractions can be equivalent both to other fractions and to whole numbers (Bottge et al., 2014). This error occurred when a student did not correctly identify fractions that were equivalent to whole numbers. For matching written fractions to visual models, a student might answer that a circle with one section of five shaded is equivalent to the fraction 5/5. For matching whole numbers to equivalent fractions, a student making this error might incorrectly state that three is equal to 3/3. Answer choices that corresponded to these errors all had fractions that could be reduced to a whole number. The fractions in these answer choices followed one of three forms: (1) a/a, where the fraction could be reduced to one; (2) 0/a, where the fraction could be reduced to zero; or (3) a/1, where the fraction could be reduced to a. Students made the reducing fractions error 17% of the time on pre-quizzes and 10% of the time on post-quizzes. Approximately 66% of students made this error on the pre-quizzes and 44% made it on the post-quizzes.
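The logic behind several of these coded errors can be expressed compactly. The sketch below (our illustration only, not MIND's item-writing code) generates the distractor a student committing each error would choose; note that Python's Fraction reduces automatically, which is acceptable for these examples:

```python
from fractions import Fraction

def complement_error(correct: Fraction) -> Fraction:
    """Complement error: choose the unshaded share, e.g., 2/3 -> 1/3."""
    return Fraction(correct.denominator - correct.numerator, correct.denominator)

def reciprocal_error(correct: Fraction) -> Fraction:
    """Reciprocal error: flip numerator and denominator, e.g., 2/3 -> 3/2."""
    return Fraction(correct.denominator, correct.numerator)

def ratio_error(shaded: int, total: int) -> Fraction:
    """Ratio error: shaded over unshaded instead of shaded over total."""
    return Fraction(shaded, total - shaded)

print(complement_error(Fraction(2, 3)))  # 1/3
print(reciprocal_error(Fraction(2, 3)))  # 3/2
print(ratio_error(1, 4))                 # 1/3 instead of the correct 1/4
```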

Figure 1

Example of errors in the quizzes. The answer choice highlighted in each question represents the labeled error. For the filler question, both C and D were coded as filler.

Table 6

Total Errors Made in Quizzes

M SD Minimum Maximum
Pre-Quiz 6.621 2.757 0 13
Post-Quiz 4.135 2.542 0 14
Table 7

Summary of Error Types

Error | Description | Quiz | Mean Proportion | Min | Max | Max Times Error Was Made | Number of Students Who Made This Error
Complement | Complement of correct fraction (e.g., correct: 2/3, incorrect: 1/3) | Pre | 0.11 | 0.00 | 1.00 | 3 | 421
 | | Post | 0.09 | 0.00 | 0.67 | 2 | 327
Filler | Incorrect answer but no obvious error | Pre | 0.11 | 0.00 | 0.71 | 4 | 706
 | | Post | 0.07 | 0.00 | 0.50 | 3 | 843
Illogical size/spacing (a) | Unequal parts of a shape or illogical spacing on the number line | Pre | 0.09 | 0.00 | 0.75 | 3 | 422
 | | Post | 0.10 | 0.00 | 0.75 | 3 | 503
Incomplete information | E.g., leaving out the whole number in a mixed fraction | Pre | 0.19 | 0.00 | 1.00 | 4 | 855
 | | Post | 0.08 | 0.00 | 0.67 | 2 | 338
Ratio (a) | The fraction is the shaded parts over the unshaded parts | Pre | 0.07 | 0.00 | 1.00 | 1 | 99
 | | Post | 0.02 | 0.00 | 0.67 | 2 | 82
Reciprocal | Reciprocal of correct fraction (e.g., correct: 2/3, incorrect: 3/2) | Pre | 0.15 | 0.00 | 0.75 | 3 | 705
 | | Post | 0.10 | 0.00 | 0.60 | 3 | 639
Reduction to whole number | Misconception of what fractions are equivalent to whole numbers | Pre | 0.17 | 0.00 | 0.80 | 4 | 938
 | | Post | 0.10 | 0.00 | 0.60 | 3 | 635
Same numerator/denominator ordering | Ordering incorrectly when the fractions had the same numerator or denominator | Pre | 0.12 | 0.00 | 0.33 | 1 | 498
 | | Post | 0.08 | 0.00 | 0.50 | 2 | 399
Whole number ordering (a) | Ordering only the numerators or denominators | Pre | 0.26 | 0.00 | 0.60 | 6 | 1,282
 | | Post | 0.12 | 0.00 | 0.57 | 4 | 822

Note. Mean, Min, and Max refer to the proportion of a student’s total errors accounted for by the error type. n = 1,431.

(a) A priori code.

Appendix A includes a table displaying correlations between error types.

Same Numerator/Denominator Ordering Error

This ordering error was possible only when students were asked to compare fractions with the same numerator or denominator in the comparing fraction questions. Within these questions, the student would only have to compare the numerator or denominator, but still made an error. For example, a student may say 1/3 > 2/3 (same denominator). There was one question in the pre-quiz and two questions in the post-quiz for which this error was possible. Students made the same numerator/denominator ordering error 12% of the time on pre-quizzes and 8% of the time on post-quizzes. Approximately 35% of students made this error on the pre-quizzes and 28% made it on the post-quizzes.

Whole Number Ordering

The whole number ordering error was made when the student ordered fractions based on their numerators or denominators alone, without apparent consideration of the relationship between the two (Malone & Fuchs, 2017). For example, a student might order 1/2, 1/3, and 1/4 in that order, because two is less than three and three is less than four. In this case, they are ordering based only on the denominators and not considering each fraction as one number defined by the numerator/denominator relationship. Students made the whole number ordering error, on average, 26% of the time on pre-quizzes and 12% of the time on post-quizzes. Almost all students made this error at least once on pre-quizzes (90%) and over half of students made it at least once on post-quizzes (57%). Whole number ordering was the most frequently made error on both the pre- and post-quizzes.

Pre-Quiz Latent Class Analysis

As noted above, nine types of errors could be made in the third-grade quizzes—complement, filler, illogical size/spacing, incomplete information, ratio, reciprocal, reducing to whole numbers, same numerator/denominator ordering, and whole number ordering. Analyses were run in Mplus using the proportion of times each of the nine errors were made to identify categorical latent variables.

Fit Criteria

A comparison of models with one to five classes is found in Table 8. No model displayed the best fit across every fit statistic. Although the two-class model had the highest Bayes Factor (above 10, indicating strong evidence; Kass & Raftery, 1995), it did not contain obvious patterns. Conversely, the four-class model had the lowest adjusted BIC but did not have enough distinction between the classes. The three-class model had the second lowest adjusted BIC value of the models and was the most interpretable; therefore, the three-class model was determined to be the best fitting.

Table 8

Table of Third Grade Pre-Quiz LCA Values

Class Sample Size Adjusted BIC BLRT p-value Bayes Factor
1 21746.513
2 21493.295 < 0.001 > 10
3 21430.938 < 0.001 < 1
4 21422.483 < 0.001 < 1
5 21454.988 < 0.001 < 1

Note. Entropy for three-class model is 0.543. The three-class model was chosen due to low sample size adjusted BIC and interpretability of the model.
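For reference, the sample-size-adjusted BIC reported in Table 8 is commonly computed as below (a sketch assuming the Sclove, 1987, formulation used by Mplus; the log-likelihood and parameter count would come from each fitted model):

```python
import math

def adjusted_bic(log_lik: float, n_params: int, n: int) -> float:
    """Sample-size-adjusted BIC; smaller values indicate better fit.

    Replaces the usual BIC sample size n with (n + 2) / 24.
    """
    return -2 * log_lik + n_params * math.log((n + 2) / 24)
```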

Pre-Quiz Error Classes

Figure 2 shows the structure of fraction errors in each class. The classes were examined holistically and named based on the prominent pattern of errors. The naming followed one of two patterns. In the first, the class was named after the error that students in that class made most often. The second pattern was used when no error was made significantly more often than the others; in that case, the name represented the distribution of all errors. The classes for the pre-quiz errors were the Distributed Errors Class, Whole Number Ordering (WNO) Class, and Few Errors Class. Students were distributed across the pre-quiz classes at 30%, 40%, and 30%, respectively.

Figure 2

Pre-quiz latent class model. Each color represents how often students made the error. The y-axis represents how many students in that class made a specific error a certain percentage of time (e.g., in the Few Errors Class, over 86% of the students did not make the complement error and about 12% made the complement error 25%-50% of the time).

Distributed errors class

The first class had the widest distribution of errors, meaning that students in it made the greatest variety of error types of all of the classes. The majority of the error types were made at least once, with six of the nine errors being made at least 25% of the time. About 30% of the students (n = 432) were placed in the Distributed Errors Class.

Whole number ordering error class

The students in the WNO Class primarily made the whole number ordering error. Although these students also made other errors, the majority of students made the WNO error at least 25% of the time but rarely made other errors this often. Approximately 40% of the students (n = 572) were placed in the WNO Class.

Few errors class

Students in this class largely did not make errors, with the partial exception of the whole number ordering error. This class is nevertheless distinct from the WNO Class because errors were made so infrequently—over 70% of the students in this class did not make six of the nine errors. Of the sample, 30% (n = 427) were placed in the Few Errors Class.

Predicting Class Membership

To determine who was in each class, separate logistic regressions were conducted for each class (0/1 whether the student was a member of that class). In logistic regression, an odds ratio greater than one indicates an increase in the likelihood of membership, whereas an odds ratio less than one indicates a decrease; for odds ratios below one, the difference between one and the odds ratio can be interpreted as the proportional decrease in odds. Students’ average pre-quiz score emerged as a statistically significant predictor of membership for each class. For each point increase in overall average pre-quiz score, students had a 1.12-fold increase in the odds of being classified into the Few Errors Class. However, for each point increase in average pre-quiz score, student odds of being classified in the Distributed Errors Class decreased by 0.06-fold (OR = 0.94). Similarly, for each point increase in average pre-quiz score, student odds of being classified in the WNO Class decreased by 0.02-fold (OR = 0.98). Therefore, students who had higher pre-quiz averages tended to be in the Few Errors Class, whereas students who had lower pre-quiz averages tended to be in the Distributed Errors or WNO Class. No other predictors or demographics were statistically significantly related to class membership for either the Few Errors or the Distributed Errors Class. However, if a student was classified as gifted, the odds of them being in the WNO Class decreased by 0.51-fold (OR = 0.49). See Table 9 for logistic regression statistics.

Table 9

Logistic Regressions for Pre-Quiz Class Membership

Variable Distributed Errors (Odds-Ratio, Z-score) WNO Error (Odds-Ratio, Z-score) Few Errors (Odds-Ratio, Z-score)
Male 1.24 1.65 0.80 -2.00 0.90 -0.70
Content Progress 1.00 -0.09 1.00 0.05 1.00 0.09
Average pre-quiz 0.94*** -15.38*** 0.98*** -7.42*** 1.12*** 17.60***
Disability 0.87 -0.60 1.13 0.62 0.90 -0.41
Free Lunch 0.98 -0.13 1.24 1.53 0.68 -2.31
ELL 1.10 0.44 1.25 1.14 0.60 -1.57
Gifted 1.35 1.60 0.49*** -4.02*** 1.37 1.40
Race
Asian 1.33 1.08 0.82 0.75 0.84 -0.54
Black 1.13 0.59 1.26 1.39 0.70 -1.53
Hispanic 0.93 -0.35 1.02 0.13 1.09 0.38
Other 1.22 0.70 1.24 0.76 0.63 -1.37
Constant 11.60*** 1.95 0.00***

Note. Odds-ratios and Z-scores are provided. Reference groups: White, non-English Learner, not eligible for free/reduced lunch. Clustered by teacher. n = 1,431.

***p < .001.

Latent Transition Analysis

For the latent transition analysis, a post-quiz model was constrained to three classes for two reasons. First, the three-class model had the lowest sample size adjusted BIC value (see Appendix B). Second, it allowed the number of classes to remain constant across the pre- and post-quiz models. The three-class post-quiz model was estimated by regressing the post-quiz model onto the pre-quiz model. This regression determined the thresholds for the post-quiz classes. The thresholds determine how often each error is made in each class, i.e., what the prominent pattern of errors is for each class. Additionally, the regression determines the transition probabilities based on the models, i.e., how likely a student is to remain in a given class for pre-quiz and post-quiz.

Post-Quiz Model

Figure 3 shows the structure of fraction errors in each class for the post-quiz errors. As with the pre-quiz classes, colors indicate the percent of times, on average, students in that class made the particular error out of their total errors made, and the y-axis represents how many students in that class made the particular error the specified proportion of the time. In the same manner as for the pre-quiz latent class analysis, the post-quiz classes were examined holistically and named based on the prominent pattern of errors. The post-quiz error classes were the Reciprocal Error Class, Distributed Errors Class, and Few Errors Class.

Figure 3

Post-quiz latent class model. Each color represents how often students made the error. The y-axis represents how many students in that class made a specific error a certain percentage of the time (e.g., in the Reciprocal Class, over 90% of the students did not make the complement error and about 8% made the complement error 0%-25% of the time).

Reciprocal error class

Students in this class made errors that were largely distributed across the types, as is illustrated by the relatively high amount of blue in the bars (the error was made between 0% and 25% of the time). However, over 80% of the students made the reciprocal error at least once, in stark contrast to the percentage of students who made this error in the other two post-quiz classes. Approximately 30% of students (n = 437) were in the Reciprocal Error Class.

Distributed errors class

Similar to the pre-quiz Distributed Errors Class, the post-quiz Distributed Errors Class comprised students who made a variety of errors, with the reducing fractions to whole numbers and whole number ordering errors made most often. Students in this class also made the remaining errors more often than did students in the other classes. Of the sample, 29% (n = 415) were in this class.

Few errors class

Like the pre-quiz Few Errors Class, students in the post-quiz Few Errors Class made the fewest errors. This class differs from its pre-quiz counterpart in that most errors were made less often, especially with regard to the relatively small number of whole number ordering errors. Approximately 40% of students (n = 579) were in this post-quiz Few Errors Class.

Differences in the Classes

As with the pre-quiz, logistic regressions were run to determine which demographic and game-play variables predicted class membership. A student had higher odds (1.09-fold increase) of being placed in the Few Errors Class if they had a higher pre-quiz average. Conversely, higher-performing students were less likely (0.05-fold decrease) to be placed in the Distributed Errors Class and the Reciprocal Error Class (0.03-fold decrease). Students were more likely to be placed in the Few Errors Class if they were identified as gifted (2.14-fold increase) but less likely to be placed in the Reciprocal Error Class (0.52-fold decrease). See Table 10 for logistic regression statistics. Additionally, see Appendix C for average proportion of times errors were made by each class.

Table 10

Logistic Regressions for Post-Quiz Class Membership

Variable Reciprocal Error (Odds-Ratio, Z-score) Distributed Errors (Odds-Ratio, Z-score) Few Errors (Odds-Ratio, Z-score)
Male 0.99 -0.05 0.90 -0.77 1.00 0.04
Content Progress 1.00 0.70 1.00 -1.37 1.00 0.90
Average pre-quiz 0.97*** -8.63*** 0.95*** -12.16*** 1.09*** 15.87***
Disability 1.04 0.20 0.87 -0.65 1.06 0.24
Free Lunch 1.04 0.30 1.18 1.13 0.75 -1.78
ELL 1.56 2.14 1.02 0.08 0.53 -2.33
Gifted 0.48*** -3.74*** 0.79 -1.20 2.14*** 3.43***
Race
Asian 0.92 -0.28 0.91 -0.33 1.15 0.48
Black 1.47 2.13 1.09 0.46 0.59 -2.49
Hispanic 0.93 -0.39 1.02 0.12 1.07 0.35
Other 0.91 -0.31 1.92 2.23 0.54 -1.90
Constant 1.60 10.47*** 0.00***

Note. Odds-ratios and Z-scores are provided. Reference groups: White, non-English Learner, not eligible for free/reduced lunch. Clustered by teacher. N = 1,431.

***p < .001.

Transition Probabilities

Students who were in the pre-quiz Few Errors Class, the class that was highest performing at pre-quiz, moved only to the Reciprocal Error and Few Errors post-quiz classes, and they were almost entirely placed into the post-quiz Few Errors Class (99.8%; only one of the 506 transitioned to the Reciprocal Error Class). Students who were in the pre-quiz Distributed Errors Class, the class that was lowest performing at pre-quiz, moved to each of the three post-quiz classes. Primarily, these students were placed in the post-quiz Distributed Errors Class (56%), followed by the post-quiz Reciprocal Error Class (27%), with only 16% placed in the post-quiz Few Errors Class. Lastly, the students in the pre-quiz Whole Number Ordering Class followed a similar transition pattern to those in the pre-quiz Distributed Errors Class. Of the WNO Class, 64% moved to the post-quiz Reciprocal Error Class, 35% moved to the Distributed Errors Class, and fewer than 1% moved to the Few Errors Class. See Table 11 for a summary of the latent transition probabilities.

Table 11

Latent Transition Probabilities

Pre-Quiz Classes Post-Quiz Classes
Reciprocal Error Distributed Errors Few Errors
Distributed Errors 0.274 0.561 0.163
WNO Error 0.642 0.352 0.006
Few Errors 0.002 0.000 0.998

Note. Cells represent the probability that a student would be in the post-quiz class given that they were in the designated pre-quiz class. For example, if a student started in the Few Errors Class, there was a 0% chance they would be in the post-quiz Distributed Errors Class.

Logistic regressions were run to better understand which students transitioned between classes. Because all but one student who started in the pre-quiz Few Errors Class remained in the post-quiz Few Errors Class, logistic regressions were only run on students who started in either the Distributed Errors or WNO Class. Among these six transition possibilities, statistically significant predictors were identified only for movement into the post-quiz Few Errors Class. If a student started in the pre-quiz Distributed Errors Class, the odds of them moving into the post-quiz Few Errors Class increased if they had a higher pre-quiz average or were classified as gifted (1.04-fold and 3.58-fold increases, respectively). Being classified as gifted also increased the odds of being placed in the post-quiz Few Errors Class for students who started in the WNO Class (44.86-fold increase). Being an English Language Learner likewise increased the odds of transitioning from the WNO Class to the Few Errors Class (11.30-fold increase). See Appendix D for the full logistic regression models.

Discussion

This study had three main questions:

  1. What errors are made when students solve fraction problems?

  2. How can students be classed by the errors they tend to make together?

  3. How do these classes change from pre-quiz to post-quiz?

Research Question 1: Fraction Errors in ST Math

Within this study, we expand upon previous fraction error analysis in both the context and content of our questions and in the identification of our errors. First, breaking from the tradition of using open-response researcher-administered questions (e.g., Ashlock, 2001; Bottge et al., 2014; Brown & Quinn, 2006; Malone & Fuchs, 2017), we used a multiple-choice test provided in the course of students’ actual instruction. Although the multiple-choice nature of our instrument limited the scope of errors students could make, it allowed us to examine errors in situ within an authentic educational context. Additionally, if designed carefully and with common errors in mind, such tests may make error identification more efficient—if researchers construct multiple-choice answers with specific errors in mind, they can infer the error from the answer choice instead of coding each written answer. However, because students were required to pick an answer choice, selecting a random wrong (or right) answer was more likely to occur. It is therefore possible that students may have picked an answer choice that matched an error code without actually following the logic behind that error, resulting in more noise. This may be especially true of students new to fraction problems, who, on open-ended questions, might not provide any answer at all. By forcing an answer choice, students who are new to fractions may make multiple errors. Indeed, this may have been borne out in the propensity of lower-performing students to fall within our pre-quiz Distributed Errors Class.

As our second contribution to the nature of error identification, we described new categories of errors. Of the nine errors coded, six were a posteriori in that they were not previously described in research on fraction errors—complement, filler, incomplete information, reciprocal, reducing fractions to whole numbers, and same numerator/denominator ordering. These a posteriori errors were made just as often as, if not more often than, those assigned a priori codes from prior research. It remains to be seen whether these errors would be found outside of the context of ST Math or in non-multiple-choice examinations.

As the multiple-choice nature of the ST Math quiz questions may have influenced the errors that arose, so too may have the visual nature of the questions. The focus within ST Math on visual-spatial instruction is reflected in a number of quiz questions that used visual representations, such as shaded figures or number lines. These questions may elicit different errors than those seen in prior studies, which, other than writing fractions based on visual models, relied primarily on questions that were symbolic (e.g., 1/2 + 3/4 = ____; Ashlock, 2001; Bottge et al., 2014; Brown & Quinn, 2006; Malone & Fuchs, 2017).

Research Question 2: Models of Struggling Students

Three classes of errors were identified using LCA at pre-quiz—Few Errors, Distributed Errors, and Whole Number Ordering Error. Students in the Few Errors Class tended to have the highest pre-quiz averages. The students in this class made few errors, and when they did make errors, tended to make the same error the majority of the time. Although the LCA identified this group as a class, many of the students had little in common regarding the type of errors they made—instead, they were joined together merely by their propensity to make few errors. It is intuitive that high performers would make few errors overall and that these errors may not be consistent among the high performers. There are two possibilities we offer for why these students did not make similar types of errors. First, it may be that even high-performing students experience moments of carelessness, and therefore the distribution of errors these students make is the result of a mostly random process. Alternately, it may be that there are sub-groups within this class that cluster on their tendency to make certain types of errors, but our instrument was not sensitive or extensive enough to capture these differences.

In the Whole Number Ordering Error Class, fewer than five percent of students did not make the whole number ordering error, and almost 60% made it at least half of the time. The high rate of this error—ordering only numerators or denominators without considering the relationship between the two—is likely due to a naïve understanding of fractions and an improper reliance on knowledge of whole numbers and their ordering schema (Malone & Fuchs, 2017). Although other errors were made by students in this class, none were as concentrated as the whole number ordering error. Students in the WNO Class tended to score lower on the pre-quizzes and were less likely to be labeled as gifted.

Lastly, the pre-quiz Distributed Errors Class comprised students who made a wide range of errors but did not make most errors more than 25% of the time. The seemingly random nature of their errors may indicate that most of the errors are due to a general lack of fraction knowledge and unfamiliarity with fraction problems. This conclusion is further supported by students in this class having the lowest pre-quiz averages, indicating a lower level of fraction knowledge. Students with little fraction knowledge may not make errors that indicate one or a few types of conceptual or procedural misunderstanding, but instead exhibit a pattern more indicative of guessing. These students may be the ones who would leave open-ended questions blank, and who therefore would not be attributed a specific error or classified into an error profile on the types of exams frequently used in prior fraction error research.

Research Question 3: Class Membership Changes Pre to Post

Both class membership and class structure differed between pre- and post-quizzes. This was expected, because the total number of errors made decreased as the students learned from ST Math. Two of the three pre-quiz classes remained in the post-quiz: the Distributed Errors Class and the Few Errors Class. Interestingly, the percentage of students in the Few Errors Class increased by 10% from pre- to post-quiz. The class also changed slightly in its composition of errors, as depicted in Figure 4, wherein the pre-quiz error distribution is shown in the left bars and the post-quiz in the right.

Figure 4

Transitions of the Distributed Errors and Few Errors classes. The left bars represent the pre-quiz class and the right bars represent the post-quiz class.

The most notable change from pre- to post-quiz is that the proportion of errors increased for the filler and illogical size/spacing categories in both classes, and for the complement error in the Distributed Errors Class. Students who make the illogical size/spacing error have some aspect of the question correct: they understand that 1/5 is one of five parts (for example) but do not recognize that the parts must be equal, or they correctly order fractions but do not properly space them on the number line. Thus, students who make this error may have a partially developed understanding of fractions. Alternatively, students who selected answer options in line with this error may have been confused at encountering what can be viewed as a tricky option. Questions that elicit this error are necessarily complex; changing the spacing of fractions on the number line may have taxed students' working memory. The filler error may have been more prevalent in the post-quiz classes because students were not experiencing a specific misconception but instead likely made careless errors. That is, although these students may have applied certain misconceptions in the past, those misconceptions were suppressed, and the students relied on a different strategy, in this case choosing an answer option that does not fit under a specific misconception (see Booth, Barbieri, et al., 2014, describing the arithmetic error for an example of an error not representing a misconception). This interpretation is consistent with overlapping waves theory, which holds that children's strategies fluctuate as they learn (Siegler, 1996).

Only movement into the Few Errors Class could be predicted by student characteristics. Students who had higher pre-quiz averages and were labeled as gifted were more likely to move from the pre-quiz Distributed Errors Class to the post-quiz Few Errors Class. Gifted students, as well as English Language Learners, were also more likely to move from the Whole Number Ordering Error Class to the Few Errors Class. Movement between the classes from pre- to post-quiz followed a pattern similar to the one found with Resnick et al.'s (2016) fraction magnitude growth trajectories. They found three growth classes: consistently accurate, inaccurate with growth, and inaccurate with minimal growth. Although our project did not specifically examine growth, we found that students who made few errors in the pre-quiz continued to make few errors in the post-quiz (similar to the consistently accurate class), that some students moved from one of the lower performing classes (Distributed Errors and WNO) to the post-quiz Few Errors Class (similar to the inaccurate with growth class), and that some students were never in a Few Errors class at either time point (similar to the inaccurate with minimal growth class).
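
As a brief aid to interpreting the transition models reported in Appendix D, the odds ratios come from logistic regressions of the form

\log\left(\frac{p}{1 - p}\right) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k, \qquad OR_j = e^{\beta_j},

where p is the probability of making a given class transition. For example, the odds ratio of 3.58 for gifted status in Table D.1 indicates that gifted students' odds of moving from the Distributed Errors Class to the Few Errors Class were roughly 3.58 times the odds for non-gifted students, holding the other covariates constant.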

Limitations

As noted, although the multiple-choice nature of the questions allowed us to collect authentic educational data from a large number of students, it also constrained the possible errors to those already provided as answer options within the software. It also allowed the number of opportunities to make a given error to vary. For example, the same numerator/denominator ordering error could only be made on one specific type of question: comparing fractions. Similarly, the number of times an error could be made varied between the pre- and post-quiz, preventing a true measure of difference, although this difference was small (ranging from zero to two possible errors). Additionally, although we grounded our coding decisions both in prior research on fraction errors and in our understanding of mathematics and mathematics education, we lacked insight into each student's thought process as they solved the problems. Future studies may rectify this shortcoming through methods such as cognitive interviews to complement error coding.

Implications and Future Directions

Our results are immediately applicable to ST Math, the digital platform from which the data came. First, we can work with the platform developers to reduce the number of filler items in ST Math quizzes and to provide a variety of error options with which to identify student misconceptions. These same guiding principles can be used by other test and platform developers to allow for more fine-grained data collection and identification of opportunities to assist student learning. Even after instruction, students demonstrated difficulty with the proper placement of the numerator and denominator (the reciprocal error). It may be that this is a difficult concept for all students, which would explain the persistence of the error at post-quiz. Alternatively, it could mean that ST Math is not presenting content in a way that teaches this idea as well as it does other fraction concepts. Examination of the curriculum and of the puzzles that cover material related to these concepts can help to answer this question. Experimental studies altering this material and examining the resulting error patterns can inform future iterations of both ST Math and other elementary fraction curricula.

Our work can also be applied to the realm of personalized or individualized instruction (e.g., learner profiles; Pane, Steiner, Baird, & Hamilton, 2015). As typically conceptualized, personalized learning aims to tailor instruction to each student's needs, meeting that student "where they are." This may be optimally accomplished by a computer program, such as ST Math, that can require mastery of a skill before a student moves on to more advanced skills. However, technology cannot work in isolation; knowledgeable teachers are necessary to support student learning (Anderson, Corbett, Koedinger, & Pelletier, 1995; Hew & Brush, 2007). By identifying typical patterns of student errors and grouping students based on the classes that define these patterns, teachers may be able to efficiently offer a type of personalized learning outside of the digital environment. At pre-test, a number of students displayed naïve schemas about fractions, as demonstrated by the clustering of their errors around whole number ordering. The classroom needs of these students may differ from those of students who demonstrated what we view as a more general unfamiliarity with fraction problems, as seen in the Distributed Errors Class at pre- and post-quiz.
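
Because teachers or platforms acting on these profiles would need to place a new student into a class, it is worth noting that, once a latent class model has been fitted, assignment reduces to Bayes' rule. The sketch below illustrates this with a minimal Python example; the three-class setup, the prevalences, and the error probabilities are hypothetical placeholders (not the parameters estimated in this study), and only three error types are shown for brevity.

import numpy as np

# Hypothetical class prevalences (pi_c) for three illustrative classes.
prevalence = np.array([0.50, 0.30, 0.20])  # Distributed, WNO, Few Errors

# Hypothetical P(error type j | class c); rows = classes, columns = error types.
error_prob = np.array([
    [0.17, 0.29, 0.19],  # Distributed Errors
    [0.12, 0.31, 0.14],  # Whole Number Ordering Error
    [0.04, 0.18, 0.04],  # Few Errors
])

def posterior_class(errors):
    """Posterior P(class | 0/1 error indicators) via Bayes' rule,
    assuming errors are independent within class (the LCA assumption)."""
    errors = np.asarray(errors)
    # Per-class likelihood: product over error types of p or (1 - p).
    likelihood = np.prod(np.where(errors == 1, error_prob, 1.0 - error_prob), axis=1)
    joint = prevalence * likelihood
    return joint / joint.sum()

# A student who made only the second error type:
print(posterior_class([0, 1, 0]))  # three posterior probabilities summing to 1

In practice, the parameter values would come from the fitted model (e.g., as estimated in software such as Mplus), with one indicator per error type, and a teacher-facing report could surface the most probable class alongside its posterior probability.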

Conclusion

We set out to better understand third-grade students' errors with fractions in a digital learning environment using multiple-choice tests of fraction concepts and algorithms. We identified new types of errors, such as the incomplete information and reciprocal errors. We then examined the co-occurrence of these errors using latent class analysis, identifying three patterns of student errors at pre-quiz: one that demonstrated a naïve fraction schema, one that contained high performers without consistent errors, and one that included the majority of the students and likely reflected unfamiliarity with fraction problems. We found that, although some patterns (such as distributed or few errors) remained after instruction, others changed. Our work contributes to the growing field of research on fractions, adding to the understanding of the complexity of fraction knowledge and offering insights into the particulars of student struggles. These insights can contribute to the design of assessment and instruction and, ultimately, to improved fraction and mathematics achievement.

Funding

Support for this research was provided in part by the National Science Foundation under Grant No. 1544273. This material is also based upon work supported by a National Science Foundation Graduate Research Fellowship under Grant No. DGE-1746939.

Competing Interests

The authors have declared that no competing interests exist.

Acknowledgments

The authors would like to thank MIND Research Institute and the school district for participating in this study.

Author Note

This research was completed to satisfy the first author's thesis requirements. Substantial sections of that thesis have been reproduced in the current paper, and the empirical data were reported in the thesis as well. The thesis is available through North Carolina State University's Theses and Dissertations Repository at http://www.lib.ncsu.edu/resolver/1840.20/33652.

Data Availability

Researchers who wish to examine the data can contact the authors to discuss what opportunities and constraints may exist.

References

  • Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. Journal of the Learning Sciences, 4(2), 167-207. https://doi.org/10.1207/s15327809jls0402_2

  • Ashlock, R. B. (2001). Error patterns in computation: Using error patterns to improve instruction. Upper Saddle River, NJ, USA: Prentice Hall.

  • Bailey, D. H., Hoard, M. K., Nugent, L., & Geary, D. C. (2012). Competence with fractions predicts gains in mathematics achievement. Journal of Experimental Child Psychology, 113, 447-455. https://doi.org/10.1016/j.jecp.2012.06.004

  • Bailey, D. H., Siegler, R. S., & Geary, D. C. (2014). Early predictors of middle school fraction knowledge. Developmental Science, 17(5), 775-785. https://doi.org/10.1111/desc.12155

  • Bailey, D. H., Zhou, X., Zhang, Y., Cui, J., Fuchs, L. S., Jordan, N. C., . . . Siegler, R. S. (2015). Development of fraction concepts and procedures in U.S. and Chinese children. Journal of Experimental Child Psychology, 129, 68-83. https://doi.org/10.1016/j.jecp.2014.08.006

  • Baroody, A. J., & Ginsburg, H. P. (1986). The relationship between initial meaningful and mechanical knowledge of arithmetic. In J. Hiebert (Ed.), Conceptual and procedural knowledge: The case of mathematics (pp. 75-112). Hillsdale, NJ, USA: Lawrence Erlbaum.

  • Booth, J. L., Barbieri, C., Eyer, F., & Paré-Blagoev, E. J. (2014). Persistent and pernicious errors in algebraic problem solving. The Journal of Problem Solving, 7, 10-23. https://doi.org/10.7771/1932-6246.1161

  • Booth, J. L., & Newton, K. J. (2012). Fractions: Could they really be the gatekeeper’s doorman? Contemporary Educational Psychology, 37, 247-253. https://doi.org/10.1016/j.cedpsych.2012.07.001

  • Booth, J. L., Newton, K. J., & Twiss-Garrity, L. K. (2014). The impact of fraction magnitude knowledge on algebra performance and learning. Journal of Experimental Child Psychology, 118, 110-118. https://doi.org/10.1016/j.jecp.2013.09.001

  • Bottge, B. A., Ma, X., Gassaway, L., Butler, M., & Toland, M. D. (2014). Detecting and correcting fractions computation error patterns. Exceptional Children, 80(2), 237-255. https://doi.org/10.1177/001440291408000207

  • Brown, G., & Quinn, R. J. (2006). Algebra students’ difficulty with fractions: An error analysis. Australian Mathematics Teacher, 62(4), 28-40.

  • Buckingham, D. (2013). Beyond technology: Children’s learning in the age of digital culture. Hoboken, NJ, USA: John Wiley & Sons.

  • Charalambous, C. Y., & Pitta-Pantazi, D. (2007). Drawing on a theoretical model to study students’ understandings of fractions. Educational Studies in Mathematics, 64(3), 293-316. https://doi.org/10.1007/s10649-006-9036-2

  • Collins, L. M., & Lanza, S. T. (2009). Latent class and latent transition analysis: With applications in the social, behavioral, and health sciences. Hoboken, NJ, USA: John Wiley & Sons.

  • Geary, D. C. (1995). Reflections of evolution and culture in children’s cognition: Implications for mathematical development and instruction. The American Psychologist, 50, 24-37. https://doi.org/10.1037/0003-066X.50.1.24

  • Goodman, L. A. (2002). Latent class analysis: The empirical study of latent types, latent variables, and latent structures. In J. A. Hagenaars & A. L. McCutcheon (Eds.), Applied latent class analysis (pp. 3-55). New York, NY, USA: Cambridge University Press.

  • Hallett, D., Nunes, T., Bryant, P., & Thorpe, C. M. (2012). Individual differences in conceptual and procedural fraction understanding: The role of abilities and school experience. Journal of Experimental Child Psychology, 113, 469-486. https://doi.org/10.1016/j.jecp.2012.07.009

  • Hew, K. F., & Brush, T. (2007). Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educational Technology Research and Development, 55(3), 223-252. https://doi.org/10.1007/s11423-006-9022-5

  • Hiebert, J. (1992). Mathematical, cognitive, and instructional analyses of decimal fractions. In G. Leinhardt, R. Putnam, & R. A. Hattrup (Eds.), Analysis of arithmetic for mathematics teaching (pp. 283-322). Hove, United Kingdom: Psychology Press.

  • Higgins, L. (2008). Algebra 1 stumping high school freshman: Class of 2011 confronts tougher state requirements. McLean, VA, USA: Gannett.

  • Jordan, N. C., Hansen, N., Fuchs, L. S., Siegler, R. S., Gersten, R., & Micklos, D. (2013). Developmental predictors of fraction concepts and procedures. Journal of Experimental Child Psychology, 116(1), 45-58. https://doi.org/10.1016/j.jecp.2013.02.001

  • Jordan, N. C., Resnick, I., Rodrigues, J., Hansen, N., & Dyson, N. (2017). Delaware longitudinal study of fraction learning: Implications for helping children with mathematics difficulties. Journal of Learning Disabilities, 50, 621-630. https://doi.org/10.1177/0022219416662033

  • Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773-795. https://doi.org/10.1080/01621459.1995.10476572

  • Kim, D., Schneider, C., Engec, N., & Siskind, T. (2006). South Carolina end-of-course examination program 2003-04 Algebra I: Mathematics for the Technologies state report. Columbia, SC, USA: South Carolina Department of Education.

  • Lamon, S. J. (2012). Teaching fractions and ratios for understanding: Essential content knowledge and instructional strategies for teachers. New York, NY, USA: Routledge.

  • Mack, N. K. (1995). Confounding whole-number and fraction concepts when building on informal knowledge. Journal for Research in Mathematics Education, 26(5), 422-441. https://doi.org/10.2307/749431

  • Malone, A. S., & Fuchs, L. S. (2017). Error patterns in ordering fractions among at-risk fourth-grade students. Journal of Learning Disabilities, 50, 337-352. https://doi.org/10.1177/0022219416629647

  • Mathematics Florida Standards. (2014). MAFS: Mathematics Standards. Retrieved from http://www.fldoe.org/core/fileparse.php/5390/urlt/0081015-mathfs.pdf

  • MIND Research Institute. (2017). Mind Research Institute: Learn math visually. Retrieved from http://mindresearch.org

  • Mullis, I. V., Dossey, J. A., Owen, E. H., & Phillips, G. W. (1991). The state of mathematics achievement: NAEP's 1990 assessment of the nation and the trial assessment of the states (Report No. 21-ST-04). Washington, DC, USA: Educational Testing Service.

  • Muthén, L. K., & Muthén, B. O. (1998-2015). Mplus User’s Guide (7th ed.). Los Angeles, CA, USA: Muthén & Muthén.

  • National Governors Association Center for Best Practices. (2010). Common Core State Standards for Mathematics. Washington, DC, USA: Council of Chief State School Officers.

  • National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. Washington, DC, USA: U.S. Department of Education. Retrieved from http://www2.ed.gov/about/bdscomm/list/mathpanel/report/final-report.pdf

  • National Research Council. (2005). How students learn: Mathematics in the classroom. In M. S. Donovan & J. D. Bransford (Eds.), Committee on how people learn: A targeted report for teachers (Division of Behavioral and Social Sciences and Education). Washington, DC, USA: The National Academies Press.

  • Nylund, K. L., Asparouhov, T., & Muthén, B. O. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling, 14(4), 535-569. https://doi.org/10.1080/10705510701575396

  • Pane, J. F., Steiner, E. D., Baird, M. D., & Hamilton, L. S. (2015). Continued progress: Promising evidence on personalized learning (RAND Research Reports, RR-1365-BMGF). https://doi.org/10.7249/RR1365

  • Pantziara, M., & Philippou, G. (2012). Levels of students’ “conception” of fractions. Educational Studies in Mathematics, 79, 61-83. https://doi.org/10.1007/s10649-011-9338-x

  • Resnick, I., Jordan, N. C., Hansen, N., Rajan, V., Rodrigues, J., Siegler, R. S., & Fuchs, L. S. (2016). Developmental growth trajectories in understanding of fraction magnitude from fourth through sixth grade. Developmental Psychology, 52(5), 746-757. https://doi.org/10.1037/dev0000102

  • Rindskopf, D. (2010). Latent transition analysis. In G. R. Hancock & R. O. Mueller (Eds.), The reviewer's guide to quantitative methods in the social sciences (pp. 199-208). New York, NY, USA: Routledge.

  • Rinne, L. F., Ye, A., & Jordan, N. C. (2017). Development of fraction comparison strategies: A latent transition analysis. Developmental Psychology, 53(4), 713-730. https://doi.org/10.1037/dev0000275

  • Rittle-Johnson, B., & Alibali, M. W. (1999). Conceptual and procedural knowledge of mathematics: Does one lead to the other? Journal of Educational Psychology, 91(1), 175-189. https://doi.org/10.1037/0022-0663.91.1.175

  • Rutherford, T., Farkas, G., Duncan, G., Burchinal, M., Kibrick, M., Graham, J., . . . Martinez, M. E. (2014). A randomized trial of an elementary school mathematics software intervention: Spatial-Temporal Math. Journal of Research on Educational Effectiveness, 7, 358-383. https://doi.org/10.1080/19345747.2013.856978

  • Schenke, K., Rutherford, T., & Farkas, G. (2014). Alignment of game design features and state mathematics standards: Do results reflect intentions? Computers & Education, 76, 215-224. https://doi.org/10.1016/j.compedu.2014.03.019

  • Shaw, G., & Peterson, M. (2000). Keeping Mozart in mind. San Diego, CA, USA: Academic Press.

  • Siegler, R. S. (1996). Emerging minds: The process of change in children’s thinking. New York, NY, USA: Oxford University Press.

  • Siegler, R. S., Duncan, G. J., Davis-Kean, P. E., Duckworth, K., Claessens, A., Engle, M., . . . Chen, M. (2012). Early predictors of high school mathematics achievement. Psychological Science, 23(7), 691-697. https://doi.org/10.1177/0956797612440101

  • Siegler, R. S., & Stern, E. (1998). Conscious and unconscious strategy discoveries: A microgenetic analysis. Journal of Experimental Psychology: General, 127(4), 377-397. https://doi.org/10.1037/0096-3445.127.4.377

  • Siegler, R. S., Thompson, C. A., & Schneider, M. (2011). An integrated theory of whole number and fractions development. Cognitive Psychology, 62(4), 273-296. https://doi.org/10.1016/j.cogpsych.2011.03.001

  • Stafylidou, S., & Vosniadou, S. (2004). The development of students’ understanding of the numerical value of fractions. Learning and Instruction, 14, 503-518. https://doi.org/10.1016/j.learninstruc.2004.06.015

  • StataCorp. (2015). Stata statistical software: Release 14. College Station, TX, USA: StataCorp LP.

  • Torbeyns, J., Schneider, M., Xin, Z., & Siegler, R. S. (2015). Bridging the gap: Fraction understanding is central to mathematics achievement in students from three different continents. Learning and Instruction, 37, 5-13. https://doi.org/10.1016/j.learninstruc.2014.03.002

  • Voza, L. (2011). Winning the “Hundred Years’ War.” Teaching Children Mathematics, 18(1), 32-37. https://doi.org/10.5951/teacchilmath.18.1.0032

Appendices

Appendix A

Table A.1

Correlations Between Error Types

Error type   1   2   3   4   5   6   7   8   9   10   11   12   13   14   15   16   17
Pre-quiz
1. Complement
2. Filler .06*
3. Illogical Size/Spacing .05 .15***
4. Incomplete Information -.06 -.05* -.06*
5. Reciprocal .10** .10** .05* -.03
6. Reduce to Whole Number -.16*** .18*** .12*** .10** -.12***
7. Ratio .08** .21*** .09** .13*** -.00 .07**
8. Same Num/Denom Ordering .06* .18*** .13*** .11*** .10** .11*** .11***
9. Whole Number Ordering .03 .06* -.08** .01 .12*** .16*** .02 .15***
Post-quiz
10. Complement .08** .12*** .08** -.00 .08** .08** .06* .15*** .14***
11. Filler .04 .17*** .12*** .08** .11*** .15*** .06* .16*** .27*** .08**
12. Illogical Size/Spacing .05* .10** .09** -.01 .04 .08** .04 .08** .16*** .08** .14***
13. Incomplete Information .09** .17*** .08** .08** .09** .09** .05* .10** .14*** .08** .13*** .01
14. Reciprocal .06** .11*** .03 .04 .22*** .02 .07** .12*** .13*** .09** .08** .03 .06*
15. Reduce to Whole Number .07* .08** .07** .03 -.01 .21*** .01 .07* .09** .01 .08** .07* .03 -.23***
16. Ratio .03 -.00 .03 .01 .02 .02 .07** .03 -.01 .03 .08** .03 .07 .06 .05
17. Same Num/Denom Ordering .07** .19*** .11*** .03 .13*** .09** .13*** .22*** .14*** .15*** .19*** .06* .16*** .18*** .07** .05
18. Whole Number Ordering .10** .13*** .12*** .03 .13*** .12*** .10** .21*** .27*** .21*** .27*** -.11*** .14*** .13*** .11*** .06* .33***

Note. Column numbers correspond to the row numbers of the error types; rows 1–9 are pre-quiz errors and rows 10–18 are post-quiz errors.

*p < .05. **p < .01. ***p < .001.

Appendix B

Table B.1

Table of Third Grade Post-Quiz LCA Values

Class   Sample-Size-Adjusted BIC   BLRT p-value   Bayes Factor
1       20187.229
2       19757.500                  < 0.001        > 10
3       19693.396                  < 0.001        < 1
4       19714.993                  < 0.001        < 1
5       19741.642                  < 0.001        < 1

Note. Entropy for the three-class model is 0.664. The three-class model was chosen due to its low sample-size-adjusted BIC and the interpretability of the model.
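
As a point of reference for the Bayes factor column: when BIC is used to approximate each model's marginal likelihood, the Bayes factor comparing a k-class solution (A) to a (k+1)-class solution (B) can be approximated as (Kass & Raftery, 1995)

BF_{A,B} \approx \exp\left[\tfrac{1}{2}\left(BIC_B - BIC_A\right)\right],

with values above 10 commonly read as strong evidence for the smaller model. This is the generic approximation; the tabled values were produced by the estimation software and may reflect a related but not identical computation.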

Appendix C

Table C.1

Table of Average Proportion of Times Errors Were Made by Each Class

Pre-Quiz
Error type   Distributed Errors   Whole Number Ordering Error   Few Errors
Complement 0.05 0.19 0.08
Filler 0.17 0.12 0.04
Illogical size/spacing 0.15 0.07 0.05
Incomplete information 0.22 0.18 0.17
Reciprocal 0.10 0.23 0.10
Reduction to whole number 0.32 0.11 0.10
Ratio 0.13 0.07 0.01
Same numerator/ denominator ordering error 0.19 0.14 0.04
Whole number ordering 0.29 0.31 0.18
Post-Quiz
Error type   Reciprocal Error   Distributed Errors   Few Errors
Complement 0.13 0.12 0.03
Filler 0.19 0.21 0.07
Illogical size/spacing 0.11 0.13 0.08
Incomplete information 0.13 0.12 0.02
Reciprocal 0.20 0.07 0.05
Reduction to whole number 0.00 0.22 0.09
Ratio 0.03 0.03 0.01
Same numerator/ denominator ordering error 0.13 0.14 0.01
Whole number ordering 0.17 0.22 0.04

Appendix D

Table D.1

Logistic Regressions for Movement out of the Distributed Errors Class

Variable Distributed Errors → Reciprocal Error
Distributed Errors → Distributed Errors
Distributed Errors → Few Errors
Odds-Ratio Z-Score Odds-Ratio Z-Score Odds-Ratio Z-Score
Male 0.94 -0.27 0.95 -0.23 1.23 0.70
Content Progress 1.00 0.60 1.00 -1.86 1.00 1.55
Average pre-quiz 0.99 -1.12 0.99 -1.35 1.04*** 3.10***
Disability 0.86 -0.37 0.81 -0.61 1.86 1.38
Free Lunch 0.56 -2.24 1.46 1.76 1.12 0.39
ELL 2.04 1.66 0.94 -0.18 0.29 -2.28
Gifted 0.47 -2.16 0.74 -1.05 3.58*** 3.58***
Race
Asian 0.44 -1.46 1.14 0.32 2.19 1.52
Black 1.27 0.68 1.32 0.85 0.33 -2.36
Hispanic 0.92 -0.23 1.01 0.03 1.08 0.19
Other 1.02 0.04 1.58 0.83 0.35 -1.25
Constant 0.52 6.13 0.00

Note. Odds ratios and Z-scores are provided. Reference groups: White, non-English Learner, not eligible for free/reduced lunch. Clustered by teacher. n = 1,431. Variables marked as omitted perfectly predicted failure (i.e., no such students were in the post-quiz class).

***p < .001.

Table D.2

Logistic Regressions for Movement out of the Whole Number Ordering Error Class

Variable WNO Error → Reciprocal Error
WNO Error → Distributed Errors
WNO Error → Few Errors
Odds-Ratio Z-Score Odds-Ratio Z-Score Odds-Ratio Z-Score
Male 1.36 1.50 0.75 -1.40 0.33 -0.65
Content Progress 1.00 0.21 1.00 -0.19 1.00 -0.73
Average pre-quiz 1.01 1.42 0.99 -1.46 1.01 0.23
Disability 1.10 0.30 0.91 -0.29 1.00 omitted
Free Lunch 1.15 0.61 0.88 -0.57 0.41 -1.23
ELL 1.17 0.47 0.81 -0.65 11.30*** 3.49***
Gifted 0.80 -0.71 1.06 0.17 44.86*** 5.79***
Race
Asian 1.01 1.30 0.52 -1.22 1.00 omitted
Black 1.35 1.08 0.74 -1.07 1.00 omitted
Hispanic 0.85 -0.55 1.15 0.49 2.94 1.37
Other 0.58 -1.18 1.81 1.29 1.00 omitted
Constant 0.63 1.56 0.00

Note. Odds ratios and Z-scores are provided. Reference groups: White, non-English Learner, not eligible for free/reduced lunch. Clustered by teacher. n = 1,431. Variables marked as omitted perfectly predicted failure (i.e., no such students were in the post-quiz class).

***p < .001.