Children’s early math knowledge before kindergarten is a strong predictor of later academic achievement (Claessens & Engel, 2013; Rittle-Johnson et al., 2017; Watts et al., 2014). Math is hierarchical by nature, with earlier concepts and practices (e.g., arithmetic) foundational for later ones (e.g., algebra). Examining what it means for preschool-age children to understand mathematics topics presses for clarification of what mathematical understanding means and entails. For example, although explicit indicators of understanding are useful, it is important to capture implicit understanding as well. Young children are typically less able than older children to provide clear verbal explanations, including justifications, so identifying implicit indicators of understanding is particularly important.
Patterning is a domain of early math that, like numeracy, is a positive predictor of later math achievement (Fyfe et al., 2017; Rittle-Johnson et al., 2017). Patterning knowledge encompasses the ability to notice and use predictable sequences (Rittle-Johnson et al., 2015). Identifying, extending, and describing patterns in objects and numbers are core to mathematical thinking (Charles & Carmel, 2005; Sarama & Clements, 2004; Steen, 1988). For example, both counting and arithmetic principles describe generalizations of predictable sequences. Preschool children are especially adept at learning repeating patterns, or linearly arranged sequences with a unit of repeat (e.g., ABABAB). Working with repeating patterns provides early opportunities to identify and describe predictable sequences, without requiring numeracy knowledge. The goal of the current paper is to consider potential operationalizations of repeating patterning understanding, as well as expand the toolkit for measuring repeating patterning knowledge.
Defining and Measuring Repeating Patterning Understanding
Similar to the larger field of mathematical cognition, researchers on children’s repeating patterning often use the terms understanding and knowledge interchangeably. For example, some prominent researchers have used the term understanding (Kidd et al., 2014; Papic et al., 2011; Pasnak et al., 2016), while others have used the term knowledge (Rittle-Johnson et al., 2015, 2017; Starkey et al., 2004), and others have used both understanding and knowledge in the same paper (Rittle-Johnson et al., 2013). Furthermore, additional terms such as abilities (Mulligan et al., 2020; Wijns et al., 2019a, 2019b, 2020, 2021, 2023), skills (Fyfe et al., 2015, 2017, 2019; Mulligan et al., 2020; Rittle-Johnson et al., 2019; Zippert et al., 2019), and competencies (Wijns et al., 2019a, 2019b) are used in the repeating patterning literature. We will use the term knowledge as an umbrella term and discuss different ways we might define understanding.
Children first learn to work with simple alternating AB patterns, such as red-blue, and then learn to work with patterns with three or four elements that repeat, called the unit of repeat or pattern unit (e.g., ABB and AABB patterns). The easiest patterning tasks are complete (add the missing item to a pattern, also called repair) and duplicate (make an exact replica of a model pattern, sometimes called copy). More complex patterning tasks include extend (continue an existing pattern), abstract (recreate a model pattern using a different set of materials, also called transfer or translate), and identifying the unit of repeat (Clements & Sarama, 2009; Papic et al., 2011; Rittle-Johnson et al., 2015; Wijns et al., 2019b). Preschool children often have the lowest accuracy on identifying the unit of repeat (e.g., Clements & Sarama, 2009; Rittle-Johnson et al., 2013; Wijns et al., 2019b).
How might we define and operationalize repeating patterning understanding? The primary definition suggested by past educational and cognitive research is recognizing and using the unit of repeat, which is the general principle underlying repeating patterns. Such a definition is in line with Crooks and Alibali (2014), who noted that general principles include rules, definitions, and aspects of domain structure. However, operationalizing this definition of understanding within repeating patterns is challenging.
One approach to measuring children’s understanding of repeating patterns has been to develop tasks that tap into children’s ability to identify the unit of repeat. Explicit measures ask children to re-create or mark the smallest unit of repeat. For example, children are shown a block tower pattern with three instances of the unit of repeat and asked to create the smallest tower that shows only the repeating unit (Clements & Sarama, 2009; Collins & Laski, 2015; Junker et al., 2025; Rittle-Johnson et al., 2013). An implicit measure asks children to re-create a pattern with the same number of items from memory (Fyfe et al., 2015; Papic et al., 2011; Rittle-Johnson et al., 2013; Wijns et al., 2019a, 2019b). Children’s verbal reports suggested that identifying the unit of repeat and how many times it repeated facilitated performance on this memory task (Papic et al., 2011), in line with measures of encoding of task structure, such as re-creation of chess pieces on a chess board (Chi et al., 1994). In general, 4-year-old children do poorly on these unit of repeat tasks. Limited measurement validation work indicates that items requiring identification of the unit of repeat can have adequate inter-item correlations and infit and outfit statistics alongside other repeating patterning items (Lüken, 2012; Rittle-Johnson et al., 2013; Tian & Huang, 2021). However, there are multiple reasons that children may not succeed on these items. Children may fail to understand the task instructions or have other general cognitive difficulties, such as working memory limitations or language comprehension difficulties. In support of this claim, children’s working memory capacity was related to their accuracy on an explicit measure of identifying the unit of repeat, but not to their accuracy for duplicating and extending patterns (Collins & Laski, 2015).
A second approach is to consider children’s solution procedures and classify children who use unit of repeat procedures as understanding repeating patterns. There has long been a concern that young children succeed on repeating patterning tasks using comparison-based procedures, such as one-to-one appearance matching of individual elements in the pattern, which does not require attention to the unit of repeat (Economopoulos, 1998). However, research on repeating patterning has rarely directly measured what procedures children used to complete the tasks, relying instead on the analysis of children’s response errors (Bojorque et al., 2021; Borriello et al., 2022; Collins & Laski, 2015; Junker et al., 2025; Rittle-Johnson et al., 2013). One exception used clinical interviews with a range of patterning tasks, with children prompted to describe how they solved each item (Lüken & Sauzet, 2021). Based on children’s nonverbal behavior and verbal reports, most children ages 3-6 used multiple types of procedures. The most sophisticated procedures, unit of repeat procedures, involved explicitly naming or showing the unit of repeat (e.g., taking all elements of the unit out of the box at once or grouping them together before arranging) and were rarely used (e.g., on 8% of trials by 5-year-olds; Lüken & Sauzet, 2021). Such procedures are thought to reflect relational thinking – thinking of “expressions and equations in their entirety rather than as a process to be carried out step by step” (Carpenter et al., 2005, p. 51; as cited in Junker et al., 2025, p. 32). More common procedures that could produce a correct answer were comparison-based (e.g., on 32% of trials by 5-year-olds), such as comparing and matching elements in one-to-one correspondence (called a recursive procedure by Junker et al., 2025).
Children generally used less sophisticated procedures with more difficult patterning tasks, such as abstracting and unit isolation tasks, and more sophisticated procedures with less difficult patterning tasks, such as duplicating. These findings suggest that all patterning tasks used in the research literature can be solved correctly with procedures that do not require explicit attention to the unit of repeat.
A third potential operationalization of children’s understanding of repeating patterns is success with abstracting (also called transfer or translating). Recreating a model pattern using a different set of materials is often more difficult than duplicating or extending patterns, and it may require greater attention to the underlying structure of the model pattern rather than just its surface features (Clements & Sarama, 2009; Collins & Laski, 2015; Rittle-Johnson et al., 2013; Warren & Cooper, 2006). Although recent research indicates that success on abstracting does not require explicit attention to the unit of repeat and that these items can be solved correctly using comparison-based procedures (Junker et al., 2025; Lüken & Sauzet, 2021), abstracting does at least require that children not simply match identical elements in the pattern.
We explore a fourth possibility, which is operationalizing repeating patterning understanding as spontaneous use of correct procedure(s) across multiple tasks and problem features. This definition aligns with Bisanz and LeFevre’s (1992) suggestion that understanding can be defined and measured as “using an appropriate procedure spontaneously on one particular task” or “a similar procedure on a variety of related tasks” (p. 117). Importantly, compared to other mathematical domains, such as counting, where the application of correct procedures could be “due simply to memorization of a correct procedure” (Crooks & Alibali, 2014, p. 368), preschool children are typically not taught or shown specific procedures for solving patterning tasks in many countries, including the U.S. and Türkiye (formerly known as Turkey). Therefore, we contend that concerns about memorization of correct procedures without understanding are typically not an issue for repeating patterning tasks. At the same time, rather than requiring that students use a unit of repeat procedure, we could consider all correct procedures as acceptable, even if they are less sophisticated, such as comparison-based procedures. There is not an established threshold for what constitutes “a similar procedure on a variety of related tasks” (Bisanz & LeFevre, 1992, p. 117). In line with established patterning research, we operationalize patterning understanding through successful performance on specific task types, recognizing that explicit measurement of cognitive procedures is beyond the scope of this work. At a minimum, we suggest it should be successful performance on at least two types of tasks with at least two different units of repeat.
The Current Study
In the current study, we explored different potential operationalizations of repeating patterning understanding. As part of this effort, we utilized Mark Wilson’s construct modeling approach to measurement development (Wilson, 2023) to develop and test a construct map, which is a representation of the continuum of knowledge through which people are thought to progress. The construct map guides the development of a comprehensive assessment, and both the construct map and the assessment are evaluated using item-response theory (IRT) models. This construct map provided insights into different operationalizations of understanding. An additional advantage of IRT models, in contrast to using more traditional total scores, is that the difficulty of the items is considered when generating ability scores, allowing us to assess construct validity for our measure by comparing the IRT output with our hypothesized construct map. This is the approach we have used in the current paper.
Table 1 presents a previous construct map for repeating patterning based on four types of constructed-response items (Rittle-Johnson et al., 2013). Based on this construct map, understanding could be defined as success in (a) identifying pattern unit items, (b) abstracting pattern items, or (c) at least 2 patterning tasks (levels), such as duplicating and extending. In all cases, children should succeed with at least 2 different units of repeat; children first succeed with alternating AB patterns, but understanding should require flexibility when working with other units of repeat (e.g., ABB, ABC). We also consider additional pattern tasks and new potential levels to inform an expanded construct map as a result of the current study.
Table 1
Established Construct Map for Repeating Patterns
| Level | Skill | Sample task |
|---|---|---|
| Level 4: Pattern unit recognition | Identifies the pattern unit | “What is the smallest tower you could make and still keep the same pattern as this?” |
| Level 3: Pattern abstraction | Translates patterns into new patterns with same structural rule | “I made a pattern with these blocks. Please make the same kind of pattern here, using these cubes” (using new colors and shapes). |
| Level 2: Pattern extension | Extends patterns at least one pattern unit | “I made a pattern with these blocks. Finish my pattern here the way I would.” |
| Level 1: Pattern duplication | Duplicates pattern | “I made a pattern with these blocks. Please make the same kind of pattern here.” |
Note. From Rittle-Johnson et al. (2013), who originally adapted it from Clements and Sarama (2009).
Importantly, repeating patterning tasks have most often been constructed-response items, with children creating their responses using physical objects (e.g., Clements & Sarama, 2009; Papic et al., 2011; Rittle-Johnson et al., 2013, 2015, 2017). However, having children create their responses using physical objects can be time-consuming and cumbersome. Selected-response tasks make patterning assessments faster and easier to administer, allow for online data collection, and allow for systematic creation of distractor response options. One repeating patterning assessment has used selected-response items for one of its four tasks (e.g., Wijns et al., 2019a), and an assessment of growing patterns (more complex patterns in which a sequence changes by the same rule each unit) has used only selected-response items for complete or extend tasks (Pasnak et al., 2016; Wijns et al., 2019a). In this study, we focus on the development of an Early Patterning Assessment (EPA) – Repeating Patterns (Rittle-Johnson et al., 2020), which includes primarily selected-response items that can be administered in person or online. In Study 1, we report on four rounds of data collection with 4- to 7-year-old children in the United States and, in Study 2, we report on data collection with 4- to 7-year-old children in Türkiye. One open question is how to define and measure repeating patterning understanding using selected-response items.
Study 1
Method
Participants
Data were collected from 270 children across four rounds of data collection in public and private preschools in a Southeastern urban metropolitan area of the United States between Fall 2019 and Spring 2022. However, three participants withdrew assent during the session, and 24 additional participants were dropped for the following reasons: 12 for parental interference, six for technical problems, two for receiving developmental/cognitive special education services, one for not completing the full assessment, one for not paying attention, and two for participating twice because of researcher error. The final participants (N = 243) were 52% girls and ranged in age from 3.93 to 7.56 years (M = 5.41 years, SD = 0.80). Half of the participants (51%) were attending kindergarten, 39% were attending preschool, and the remaining 9% were not in school, were homeschooled, or attended daycare. Almost all participants spoke English at home (96%), with 7% speaking at least one additional language. Based on information from 220 participants, the majority were White/Caucasian (83%), 5% were Biracial, 5% were Asian, 4% were Black/African American, and 3% were Hispanic/Latino. Of those who were asked and responded, the majority did not receive financial assistance for the child’s education (88%) and did not receive special education services (88%). Table 2 presents the format, sample size, and grade distribution for each round of data collection.
Table 2
Sample by Round of Data Collection
| Round | Format | N | Grade |
|---|---|---|---|
| Round 1 (Fall 2019) | In Person | 47 | 100% Kindergarten |
| Round 2 (Fall 2020) | Online | 96 | 38% Kindergarten, 58% Pre-K and 3% Othera |
| Round 3 (Spring 2021) | Online | 64 | 6% Kindergarten, 59% Pre-K and 28% Othera |
| Round 4 (Spring 2022) | In Person | 36 | 100% Kindergarten |
aOther category includes children who were not in school, homeschooled, or attended daycare.
Measure
The “Early Patterning Assessment (EPA) – Repeating” (Rittle-Johnson et al., 2020) was administered in each round of data collection, with minor adjustments to the assessment in each round. An online version of the assessment was created from the paper version during the COVID-19 pandemic.
Children’s patterning knowledge was assessed using four multiple-choice tasks and one open-ended task (see sample items in Figure 1). Complete, extend, and abstract items had three response choices, while ID-if-pattern items required a yes or no answer. ID-if-pattern was a new task type we developed to assess whether children applied the definition of a repeating pattern (i.e., a sequence that repeats) to judge whether a sequence is a pattern. Responses to ID-if-pattern items that were patterns (i.e., the correct answer was “yes”) were scored as part of the measure, while items that were not patterns (i.e., the correct answer was “no”) served as distractor items. ID-Unit was the only task that was not selected-response; it asked children to circle the part that repeats. It was included only in Round 4, which was conducted in person, because of the technical challenges of administering these items online. Each item was scored as “1” if the answer was correct and “0” if the answer was incorrect.
Figure 1
Sample of Early Pattern Assessment Items
Note. The first four items are from the online version of the assessment. Identifying the unit of repeat items were presented only on paper in Round 4, and participants were asked to circle the part that repeats; these were not selected-response items.
The assessment consisted of 31 items across rounds of data collection. The distribution of items by round and item statistics are provided in Table 3. Five of these items were administered in all rounds (one ID-if-pattern, two extend, and two abstract items).
Table 3
Description of and Summary Statistics for Study 1
| ItemType_Unit | Rounds | Proportion Correct (SD) | Item Total Correlation | Item Difficulty (SE) |
|---|---|---|---|---|
| Completing_AB | 1, 2, 4 | 0.90 (0.30) | 0.41 | -2.51 (0.27) |
| Completing_ABCD | 1 | 0.91 (0.28) | 0.38 | -2.40 (0.56) |
| Completing_ABCC | 2, 4 | 0.81 (0.39) | 0.41 | -1.77 (0.25) |
| Completing_ABC | 1, 2, 4 | 0.82 (0.38) | 0.47 | -1.76 (0.22) |
| Completing_ABB | 1, 2, 4 | 0.78 (0.42) | 0.47 | -1.44 (0.21) |
| ID-if-Pattern_ABB | 1, 2, 3, 4 | 0.74 (0.44) | 0.41 | -1.28 (0.17) |
| ID-if-Pattern_ABC | 1, 3, 4 | 0.75 (0.43) | 0.40 | -1.23 (0.22) |
| Extend_AABB | 1, 3 | 0.73 (0.45) | 0.51 | -1.17 (0.24) |
| Extend_AAB | 2, 3, 4 | 0.69 (0.46) | 0.54 | -1.06 (0.18) |
| ID-if-Pattern_AABB | 2, 3 | 0.66 (0.48) | 0.44 | -0.98 (0.19) |
| ID-if-Pattern_ABC2 | 3 | 0.64 (0.48) | 0.25 | -0.91 (0.30) |
| Extend_ABCD | 1, 2, 3, 4 | 0.67 (0.47) | 0.40 | -0.89 (0.16) |
| Abstract_AB | 1, 2, 3, 4 | 0.65 (0.48) | 0.53 | -0.85 (0.18) |
| Extend_ABCC | 3 | 0.72 (0.45) | 0.60 | -0.80 (0.37) |
| ID-if-Pattern_ABCD | 3 | 0.61 (0.49) | 0.28 | -0.75 (0.29) |
| Abstract_AABC | 2, 3, 4 | 0.62 (0.49) | 0.48 | -0.71 (0.18) |
| Abstract_ABC | 3 | 0.70 (0.46) | 0.29 | -0.67 (0.36) |
| Extend_ABC | 2, 3, 4 | 0.61 (0.49) | 0.31 | -0.66 (0.17) |
| Extend_AB | 1, 2, 3, 4 | 0.62 (0.49) | 0.54 | -0.63 (0.16) |
| Abstract_ABBB | 3 | 0.68 (0.47) | 0.67 | -0.56 (0.36) |
| Abstract_AAB | 1, 2, 3, 4 | 0.61 (0.49) | 0.63 | -0.53 (0.16) |
| Abstract_ABCD | 1, 2, 3, 4 | 0.60 (0.49) | 0.24 | -0.49 (0.16) |
| Extend_AABB2 | 3 | 0.55 (0.50) | 0.40 | -0.44 (0.29) |
| Extend_AABC | 3 | 0.55 (0.50) | 0.50 | -0.44 (0.29) |
| Abstract_ABB2 | 3 | 0.48 (0.50) | 0.64 | -0.14 (0.29) |
| Abstract_ABB | 3 | 0.46 (0.50) | 0.21 | -0.05 (0.29) |
| Abstract_ABCC | 3 | 0.45 (0.50) | 0.57 | 0.01 (0.29) |
| ID-Unit_ABC | 4 | 0.56 (0.50) | 0.62 | 0.08 (0.39) |
| ID-Unit_AB | 4 | 0.31 (0.47) | 0.24 | 1.35 (0.41) |
| ID-Unit_AABB | 4 | 0.28 (0.45) | 0.54 | 1.51 (0.42) |
| ID-Unit_ABCD | 4 | 0.25 (0.44) | 0.50 | 1.68 (0.44) |
Note. Items with 2s in their name are labeled as such because two items of the same pattern type were included.
Procedure
Two rounds of data were collected in person by a trained researcher using paper materials in a quiet place at the children’s school. Two additional rounds were collected online through synchronous Zoom meetings, using Open Lab, an online platform. In Round 3, children were randomly assigned to a control group or an experimental group, in which a box frame surrounded the pattern unit on some items. However, the results did not differ by condition (Yildirim et al., 2024), so we collapsed the data across conditions. The study lasted approximately 20 minutes in each round. For online data collection, parents were present and cautioned not to interfere with the research.
Each assessment began with an example pattern and a researcher explained “Look at this pattern. This is a repeating pattern because it has a part that repeats.” Next, the researcher asked the child to complete a practice ID-if-pattern item. Each child selected yes or no (it is/is not a pattern) and the researcher gave feedback, “This is not a pattern because there is no part that repeats.” The remaining items were completed in a fixed order, without feedback.
Analytic Strategy
We calculated item-total correlations and the proportion correct for each item using base R functions (e.g., cor and mean). We then analyzed the data using a Rasch model, a one-parameter item response theory (IRT) model. Rasch models were used to identify the difficulty level of different types of items within a construct modeling approach (Wilson, 2023), allowing us to assess construct validity by comparing the IRT results to our previous construct map (Table 1). Rasch models estimate the probability that a participant answers an item correctly from the participant’s ability and the item’s difficulty (Rasch, 1980). The unidimensional dichotomous Rasch model was fit using the irtoys and ltm packages in R version 4.4.2 (2024-10-31). Logits (log-odds units) were used as the measurement scale, with higher logits representing more difficult items and lower logits representing easier items.
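For readers less familiar with the Rasch model, its core equation can be sketched in a few lines. The snippet below is an illustrative Python sketch, not the authors’ R analysis scripts; the ability value of 0 is hypothetical, and the two difficulty values are taken from Table 3.

```python
import math

def rasch_p_correct(theta, b):
    """Rasch model: probability that a respondent with ability `theta`
    (in logits) answers an item of difficulty `b` (in logits) correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A hypothetical child of average ability (theta = 0) on two Table 3 items:
p_easy = rasch_p_correct(0.0, -2.51)  # Completing_AB, the easiest item
p_hard = rasch_p_correct(0.0, 1.68)   # ID-Unit_ABCD, the hardest item
print(round(p_easy, 2), round(p_hard, 2))  # → 0.92 0.16
```

Because ability and difficulty sit on the same logit scale, an item whose difficulty equals a respondent’s ability yields a 50% chance of success, which is the logic behind reading the Wright Map.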
A Wright Map was generated using the WrightMap package to visually display the relationship between respondent abilities (on the left) and item difficulties (on the right). Item difficulty statistics were summarized. The complete R scripts, including the custom Wright Map function, are available for reproducibility at the following links:
U.S. dataset: https://osf.io/w5xbv/
U.S. Rasch Model R code: https://rpubs.com/Patterning/raschUS
U.S. Wright Map R code: https://rpubs.com/Patterning/wrightmapus
Results and Discussion
The reliability of the repeating pattern assessment across the four rounds of data collection was acceptable (α = 0.73, using multiple imputation to account for missing data across rounds). Thus, it was a reliable measure for assessing 4- to 7-year-old children’s understanding of repeating patterns in the United States. Table 3 presents the proportion correct, item-total correlations, and item difficulty organized by easier to more difficult based on item difficulty.
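As background on the reliability statistic reported above, Cronbach’s alpha for a complete respondents-by-items matrix of dichotomous scores can be computed as below. This is a minimal Python sketch with made-up demonstration data; it omits the multiple-imputation step the authors used to handle missing data across rounds.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a complete respondents-by-items score matrix
    (here 0/1 item scores), using population variances."""
    k = len(scores[0])  # number of items

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [pvar([row[j] for row in scores]) for j in range(k)]
    total_var = pvar([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical matrix: 4 respondents x 3 items (not the study's data)
demo = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(cronbach_alpha(demo))  # → 0.75
```

Alpha rises as item scores covary relative to their individual variances, which is why it is read as internal-consistency reliability.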
The Wright Map is displayed in Figure 2. This map illustrates respondent abilities and item difficulties on the same logit scale, labeled with pattern task type (colors) and specific pattern type (names). Item difficulties ranged from approximately -2.51 logits for the easiest item to 1.68 logits for the most difficult, with an overall mean difficulty of -1.32 logits (SD = 0.27). Completing items were the easiest task, with the AB completing item the easiest overall. There was a fair amount of overlap among ID-if-pattern, extend, and abstract items. ID-Unit was the most difficult task. The complexity of the pattern unit (e.g., AB, ABC, AAB) did not seem to systematically impact item difficulty, although the AB item was a bit easier than other items for completing and abstracting, but not for extending.
Figure 2
Wright Map of Participant Abilities and Item Difficulty for Study 1 (U.S. Sample)
We also assessed construct validity by comparing the Wright Map to our previous construct map in Table 1. This comparison provided mixed evidence for construct validity. Consistent with our ordering of tasks in Table 1, ID-Unit items were the most difficult, while complete items were the easiest. However, although extend and abstract items were easier than ID-Unit items, they overlapped in difficulty with each other and with the ID-if-pattern items, an item type not included in our previous construct map. Overall, the evidence suggests that although some aspects of the construct are upheld, notable inconsistencies remain that call for further refinement of the construct. Thus, in the general discussion, we introduce a revised construct map that integrates these ideas with the findings of Study 2.
Characterizing Children’s Knowledge
As shown in Figure 2, the density graph of person ability estimates (left) compared to item difficulty estimates (right) demonstrates that almost all participants could successfully answer completing items. ID-if-pattern, extend, and abstract items fell within the range of respondent ability estimates, such that participants with low to moderate ability could answer them successfully. Finally, only participants with high ability were able to answer ID-Unit items successfully. Thus, there was a good distribution of items to capture the varied abilities in this sample. Notably, including ID-Unit items was important for capturing differences in ability at the upper range.
Study 2
Patterning is a more explicit and consistent part of early childhood education in Türkiye than in the United States. According to the Turkish Ministry of National Education (2013, p. 22), early childhood education emphasizes patterning as a key learning objective. The curriculum specifies that children should be able to create patterns using materials and demonstrate pattern recognition skills. The key indicators for this objective include: (1) forming a pattern by observing a model, (2) identifying the rule in a pattern with up to three elements, (3) recognizing and completing a missing element in a pattern, and (4) creating an original pattern using objects. These competencies are expected to be developed through hands-on activities that encourage children to analyze and extend patterns in meaningful ways. Turkish schools are expected to follow these standards. In contrast, in the U.S., patterning is not part of the Common Core State Standards for Kindergarten (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2011). Although patterning is included in the National Association for the Education of Young Children’s (NAEYC) curriculum standards for preschool (e.g., Standard 2.F.08: “Children have chances to recognize and name repeating patterns”, NAEYC, 2022, p. 28), adoption of these standards is voluntary and not widespread.
Method
Participants
Data were collected from 111 Turkish children. However, four participants were dropped: two because of parental interference, one because they withdrew assent after the 19th item, and one because of technical problems with their device. The final sample therefore included 107 Turkish participants between 3.92 and 7.08 years old (M = 5.46, SD = 0.84; 53.3% girls). Based on a parent survey, the majority of participants attended school (12% attended preschool, 48% attended kindergarten, and 21% attended first grade), while 5% of parents reported that their child did not attend school. The majority of participants had college-educated parents (64.8% of mothers and 57.1% of fathers).
Measure
The online version of the EPA-Repeating was adapted into Turkish, using the same items as in Round 3 of data collection in Study 1. However, changes were made to the demo and practice items to verbally emphasize that a repeating pattern follows a rule. Specifically, at the beginning of the assessment, the researcher said, “Look at this pattern. This is a repeating pattern because it has a part that repeats. So, it has a rule. Blue-Blue-Red is the repeating part” (italics indicate adapted and additional language). After the practice ID-if-pattern item, the researcher gave feedback: “Your answer is correct/not correct. It’s not a pattern because it doesn’t have a part that repeats. So there’s no rule.” Children’s patterning knowledge was assessed using three multiple-choice tasks (see sample items in Figure 1): ID-if-pattern (5 items), extend (7 items), and abstract (7 items), administered in a fixed order. See Table 4 for item-level summary statistics.
Table 4
Description of and Summary Statistics for Study 2
| ItemType_Unit | Proportion Correct (SD) | Item Total Correlation | Item Difficulty (SE) |
|---|---|---|---|
| Extend_AABB | 0.84 (0.04) | 0.43 | -2.16 (0.29) |
| Extend_AAB | 0.79 (0.04) | 0.52 | -1.72 (0.26) |
| ID-if-Pattern_ABB | 0.79 (0.04) | 0.39 | -1.79 (0.26) |
| Extend_AABB2 | 0.79 (0.04) | 0.47 | -1.79 (0.26) |
| ID-if-Pattern_AABB | 0.77 (0.04) | 0.45 | -1.58 (0.25) |
| Extend_AB | 0.77 (0.04) | 0.49 | -1.58 (0.25) |
| Abstract_AB | 0.77 (0.04) | 0.46 | -1.58 (0.25) |
| Abstract_AABC | 0.76 (0.04) | 0.30 | -1.52 (0.25) |
| Abstract_ABB2 | 0.76 (0.04) | 0.50 | -1.52 (0.25) |
| ID-if-Pattern_ABC2 | 0.75 (0.04) | 0.29 | -1.46 (0.25) |
| Extend_ABC | 0.74 (0.04) | 0.24 | -1.40 (0.25) |
| Extend_AABC | 0.74 (0.04) | 0.56 | -1.40 (0.25) |
| Abstract_ABCD | 0.71 (0.04) | 0.23 | -1.22 (0.24) |
| ID-if-Pattern_ABC | 0.66 (0.05) | 0.34 | -0.94 (0.23) |
| Abstract_ABB | 0.66 (0.05) | 0.58 | -0.94 (0.23) |
| Abstract_AAB | 0.65 (0.05) | 0.57 | -0.88 (0.23) |
| Extend_ABCD | 0.61 (0.05) | 0.34 | -0.62 (0.23) |
| ID-if-Pattern_ABCD | 0.60 (0.05) | 0.04 | -0.57 (0.23) |
| Abstract_ABCC | 0.58 (0.05) | 0.59 | -0.47 (0.23) |
Note. All 19 items in Study 2 are included in Round 3 of Study 1. Items with 2s in their name are labeled as such because two items of the same pattern type were included.
Procedure
Data were collected online through synchronous Zoom meetings using the Open Lab platform between November 2021 and December 2022 using the same procedure as Round 3 of Study 1, including random assignment to a control or frame condition. We combined the two conditions because there were no differences by condition (Yildirim et al., 2024). The study lasted approximately 20 minutes.
Analytic Strategy
As in Study 1, we calculated item-total correlations and the proportion correct in R, and item difficulty statistics were summarized. The unidimensional dichotomous Rasch model was fit using the TAM package in R version 4.4.2 (2024-10-31). A Wright Map was generated using the WrightMap package to visually display the relationship between respondent abilities (on the left) and item difficulties (on the right) in logits (log-odds units), allowing us to assess construct validity by comparing the IRT results to our previous construct map (Table 1). The complete R scripts, including the custom Wright Map function, are available for reproducibility at the following links:
Türkiye dataset: https://osf.io/w5xbv/
Türkiye Rasch Model R code: https://rpubs.com/Patterning/raschTR
Türkiye Wright Map R Code: https://rpubs.com/Patterning/wrightmaptr
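The modeling steps described above can be sketched in a few lines of R. This is a minimal sketch, not the published script (the full, reproducible code is at the links above); `resp` is an assumed object name for a 0/1 item-response matrix with one column per item.

```r
# Minimal sketch of the Rasch analysis (assumed data object: `resp`,
# a 0/1 item-response matrix, respondents in rows, items in columns).
library(TAM)
library(WrightMap)

# Fit a unidimensional dichotomous Rasch model via marginal maximum likelihood
mod <- TAM::tam.mml(resp)

# Person ability estimates (weighted likelihood estimates) in logits
abilities <- TAM::tam.wle(mod)$theta

# Item difficulty estimates in logits
difficulties <- mod$xsi$xsi

# Wright Map: respondent abilities (left) against item difficulties (right)
WrightMap::wrightMap(abilities, difficulties, label.items = colnames(resp))
```

The published analyses used a custom Wright Map function rather than the default `wrightMap()` call shown here.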
Results and Discussion
The reliability of the repeating pattern assessment was good (α = 0.83); thus, it was a reliable measure of 4- to 7-year-old children’s understanding of repeating patterns in Türkiye. Table 4 presents the proportion correct, item-total correlations, and item difficulty for Study 2, ordered from easiest to most difficult based on item difficulty.
The Wright Map for Study 2 is displayed in Figure 3. The map shows respondent abilities and item difficulties on the same logit scale, labeled by pattern task type (colors) and specific pattern type (names). Item difficulties ranged from -2.16 logits for the easiest items to -0.47 logits for the most difficult items, with an overall mean difficulty of -1.32 logits (SD = 0.46). For example, Extend_AABB was the easiest item (-2.16 logits) and Abstract_ABCC the hardest (-0.47 logits). As in Study 1, there was considerable overlap among the ID-if-pattern, extend, and abstract tasks, inconsistent with the ordering of tasks in our previous construct map. Abstract AB remained the easiest of the abstract items, whereas Extend AB was more challenging; thus, although the AB pattern unit was slightly easier on the abstract task, this advantage did not appear on the extend task, suggesting that AB units of repeat were not necessarily easier than other pattern units. More generally, the complexity of the pattern unit (e.g., AB, ABC, AAB) did not appear to systematically affect item difficulty.
Figure 3
Wright Map of Participant Abilities and Item Difficulty for Türkiye Data
Characterizing Children’s Knowledge
See Figure 3 for the density graph of person ability estimates (left) compared to item difficulty estimates (right). All items were well within the range of respondent ability estimates. Findings were similar if we excluded participants who attended primary school. Importantly, respondents’ abilities were clustered in the middle and upper portions of the scale, while item difficulties were primarily located lower on the scale. This mismatch suggests that the test items were not sufficiently challenging for the sample. Unfortunately, the most difficult items from Study 1, ID-Unit items, were not included in the EPA-Repeating online version, so those items were not given to Turkish participants.
General Discussion
Overall, the current study provided evidence for the value of the Early Patterning Assessment – Repeating, a new, selected-response assessment that can be administered online or in-person, and evidence for the relative difficulty of different patterning tasks. There was substantial overlap in the difficulty of several patterning tasks (e.g., extend and abstract tasks). Analysis of the Wright Maps and task difficulties provided insights into potential operationalizations of repeating patterning understanding. We discuss our results with attention to the type of pattern task and implications for potential operationalizations of repeating patterning understanding.
Pattern Difficulty
The Wright Map analyses revealed important findings about the difficulty of different patterning tasks on the EPA-Repeating. Overall, we found some evidence for construct validity, indicated by the similarities between the task difficulties in our Wright Maps and our previous construct map (Table 1). Therefore, Table 5 proposes a revised construct map for children’s understanding of repeating patterns, expanded from Rittle-Johnson et al. (2013) to integrate evidence from our Wright Maps in Studies 1 and 2. Matching previous work (Clements & Sarama, 2009; Lüken & Sauzet, 2021), we found complete items to be the easiest task, although this task was measured only in Study 1 and thus only with U.S. participants. Because our measure did not include any duplicate tasks, we relied on past findings to place the duplicate task as an additional easier task on the revised construct map (Clements & Sarama, 2009; Lüken & Sauzet, 2021). Previous work using constructed responses has found a clear distinction between extend and abstract tasks as moderate- and higher-difficulty tasks, respectively (e.g., see Table 1). In our two studies using a selected-response format, however, the difficulty rankings of these tasks overlapped and indicated moderate difficulty. Our results also indicated that a new task, ID-if-pattern, overlapped in difficulty ranking with the extend and abstract tasks, although we had anticipated it would be an easier task. This new task does not seem to add value in assessing patterning knowledge beyond more commonly used tasks and may be removed in the future. Finally, although included only in the last round of data collection in the U.S., explicit identification of the unit of repeat was the most difficult task, matching previous work (e.g., Lüken & Sauzet, 2021; Rittle-Johnson et al., 2013).
Table 5
Revised Construct Map for Understanding of Repeating Patterns
| Level | Sample Task | Skill | Evidence |
|---|---|---|---|
| Level 3 | Identifying the unit of repeat (ID-Unit) | Identifies the pattern unit | Study 1 as most difficult task |
| Level 2 | Abstract the pattern | Identifies or creates a pattern with the same structural rule but different specific items | Study 1 & 2 as easy or moderate difficulty, overlapping with extend |
| | Extend the pattern | Extends pattern | Study 1 & 2 as easy to moderate difficulty, overlapping with abstracting |
| Level 1 | Complete the pattern | Completes patterns by filling in a missing element | Study 1 as the easiest task |
| | Duplicate the pattern | Duplicates patterns | Not included in Study 1 or 2 |
Note. Revised and expanded from Rittle-Johnson et al. (2013) (see Table 1).
Contrary to our expectations, there was very limited evidence that AB patterns were easier than more complex pattern units, and there were no systematic differences in the difficulty of items based on the pattern unit. Thus, our construct map does not consider pattern unit complexity. Previous research suggests that AB patterns may be particularly accessible for 2- and 3-year-old children (Clements & Sarama, 2009) and thus may be important to consider with younger samples.
The EPA-Repeating, especially without ID-Unit items, is not sufficient to distinguish between older and higher-performing children. This finding is consistent with prior research highlighting the need for more challenging tasks to better assess advanced abilities (Yitzhak et al., 2016). Future assessments could incorporate more complex and diverse items, such as multi-step or novel patterning tasks, to better target the upper range of abilities and address the ceiling effects observed in some samples. Prior research has demonstrated that knowledge of repeating patterns is foundational for later math success (Fyfe et al., 2017; Rittle-Johnson et al., 2017; Wijns et al., 2019a; Zippert et al., 2019), and engaging students in ID-if-pattern, extend, and abstract tasks could support the development of algebraic reasoning and broader mathematical skills (Clements & Sarama, 2009; Papic et al., 2011). To further challenge students, educators may consider introducing tasks that require higher-order reasoning, such as identifying more complex or recursive patterns, or growing patterns, which increase or decrease by a set amount and are more challenging for children (e.g., Wijns et al., 2019a). Future researchers and educators may also consider spontaneous focusing on patterns (SFOP) as a component of patterning understanding, such as asking a child to build with blocks and observing whether they create a repeating pattern. Children who demonstrated SFOP in a building task performed better on patterning and broader math tasks than children who randomly arranged the blocks (Wijns et al., 2020). Integrating a measure of SFOP would help assess children’s understanding of repeating patterns, in line with claims for a metacognitive component of patterning understanding.
Operationalizing Patterning Understanding
What are the implications of these findings for operationalizing patterning understanding? We consider the three operationalizations proposed in the introduction and how each aligns with our results. A fourth operationalization, using unit-of-repeat procedures, could not be considered in the current study. Past research using strategy reports has found that a unit-of-repeat strategy was rarely used (e.g., on 8% of trials by 5-year-olds; Lüken & Sauzet, 2021), so this operationalization sets a very high bar for children.
Operationalization as Identifying the Unit of Repeat
Children’s ability to identify the unit of repeat (ID-Unit) was the most difficult task in our measure (albeit limited to one round of data collection in Study 1). Unlike the other pattern tasks, which all used a selected-response format, the ID-Unit task was an explicit measure in which we asked participants to mark the smallest part of the pattern that repeats (a unit isolation task). This task was much more difficult than the other tasks and thus could be a viable way to define repeating pattern understanding. However, children may have a more implicit understanding of repeating patterns that is not tapped by this task. Although viable, we suspect that this definition would underestimate children’s understanding.
Operationalization as Success With Pattern Abstraction
Our findings argue against operationalization as success with abstract items, given that the abstract task was similar in difficulty to the extend and ID-if-pattern tasks. Others have also found that abstract items were not harder than extend items for repeating patterns (Junker et al., 2025; Wijns et al., 2019a). While Junker and colleagues used constructed-response items, Wijns and colleagues used a four-option selected-response format for extend items and a constructed-response format for abstract and ID-Unit items. This result challenges previous findings suggesting that abstract tasks are inherently more difficult because of the cognitive demands of generalizing patterns to novel materials (Clements & Sarama, 2009; Rittle-Johnson et al., 2013).
Abstract items in our assessment relied on a selected-response format, and it could be that selecting, rather than constructing, the same kind of pattern elicits different strategies; selecting may rely more on comparison-based procedures. Further, strategy analyses of constructed-response versions of an abstract task indicate that children can succeed at abstract tasks without attending to the unit of repeat (Junker et al., 2025; Lüken & Sauzet, 2021). Thus, recent evidence suggests that success on abstract items is likely not a strong operationalization of pattern understanding.
Operationalization as Spontaneous Use of Correct Procedure(s) Across Multiple Tasks
Spontaneous use of correct procedure(s) across multiple tasks and problem features is a promising operationalization of repeating patterning understanding. This definition aligns with Bisanz and LeFevre’s (1992) suggestion that understanding can be defined and measured as “using an appropriate procedure spontaneously on one particular task” or “a similar procedure on a variety of related tasks” (p. 117). Based on the current findings, repeating patterning understanding could be defined as success on at least one moderately difficult task (extend, abstract, or ID-if-pattern) across at least two different pattern units (e.g., AB and ABB); the second success could come from another moderately difficult task or from an easier task (complete or duplicate the pattern). Alternatively, given that knowledge growth is not all-or-none, children’s ability estimates on measures that include a variety of tasks and pattern units, such as our EPA-Repeating, may provide a good measure of understanding. Children with ability estimates at the lowest level of the Wright Map would indicate poor understanding, with understanding increasing with increasing ability estimates. Few children in the current studies had poor understanding.
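The proposed criterion can be made concrete as a simple scoring rule. The following is a hypothetical R sketch, not part of the EPA-Repeating scoring materials; item names follow the convention in Table 4 (task_unit), and the Complete/Duplicate names are illustrative.

```r
# Hypothetical scoring rule for the proposed operationalization:
# at least one success on a moderately difficult task (extend, abstract,
# ID-if-pattern), with successes spanning at least two pattern units;
# the second success may come from an easier task (complete, duplicate).
shows_understanding <- function(scores) {
  # `scores` is a named 0/1 vector, e.g., c(Extend_AB = 1, Abstract_ABB = 1)
  moderate <- grepl("^(Extend|Abstract|ID-if-Pattern)_", names(scores))
  easier   <- grepl("^(Complete|Duplicate)_", names(scores))  # illustrative names
  passed   <- scores == 1 & (moderate | easier)
  # Pattern unit is the suffix after the underscore (e.g., "AB", "ABB")
  units <- sub("^[^_]+_", "", names(scores)[passed])
  any(scores == 1 & moderate) && length(unique(units)) >= 2
}

shows_understanding(c(Extend_AB = 1, Abstract_ABB = 1, Extend_ABC = 0))  # TRUE
shows_understanding(c(Extend_AB = 1, Abstract_AB = 1))                   # FALSE
```

In practice a cutoff like this would sit alongside, not replace, the continuous ability estimates from the IRT model described in the next paragraph of text above.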
Limitations and Future Directions
Several limitations warrant discussion. First, the selected-response question format might affect the cognitive demand and difficulty of tasks as well as children’s strategies. Previous patterning instruments used constructed-response items, whereas our assessment used a selected-response multiple-choice format, which may elicit a greater tendency to use comparison-based procedures. Second, the ID-Unit and complete-the-pattern tasks were not included in Study 2’s version of the assessment, and our assessment did not include a duplicate-the-pattern task, so conclusions should be drawn with caution, and future research including all patterning tasks is warranted. Third, the unexpectedly similar difficulty of ID-if-pattern, extend, and abstract items highlights the need for further research on the difficulty of pattern tasks with different response formats as part of careful assessment design. Fourth, our results indicated that the EPA-Repeating consists of relatively easy items, allowing it to distinguish among low-performing students but not high-performing students. In addition to revising the assessment to include more difficult tasks, it is important to emphasize that the EPA-Repeating was designed and developed for 4- and 5-year-old children in the United States; the 6- and 7-year-old children who participated were older than the assessment’s target age.
Finally, we must consider the instructional focus on patterning in early childhood education. Based on curriculum standards and the inclusion of primary school children, the Turkish participants may have received more instructional focus on patterning activities in their curriculum and classroom practice than the U.S. participants. This exposure could have improved their abilities, making the assessment too easy for Turkish kindergarten and primary school children.
Conclusions
We suggest operationalizing children’s understanding of repeating patterns as demonstrating spontaneous use of correct procedure(s) across multiple tasks and pattern units. The person scores from our IRT analyses demonstrate a continuum from low to high understanding. Our findings provide insights for informing instructional practices and assessment tools.