Empirical Research

Revisiting and Refining Relations Between Nonsymbolic Ratio Processing and Symbolic Math Achievement

Yunji Park*1, Percival G. Matthews1

Journal of Numerical Cognition, 2021, Vol. 7(3), 328–350, https://doi.org/10.5964/jnc.6927

Received: 2020-07-30. Accepted: 2021-03-05. Published (VoR): 2021-11-30.

Handling Editors: Mojtaba Soltanlou, University of Surrey, Guildford, UK; Krzysztof Cipora, Loughborough University, Loughborough, UK

*Corresponding author at: Department of Educational Psychology, University of Wisconsin-Madison, 1025 W. Johnson Street, Madison, WI 53706-1796, USA. Tel: +708 374 5461. E-mail: ypark246@wisc.edu

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In their 2016 Psychological Science article, Matthews, Lewis, and Hubbard (2016, https://doi.org/10.1177/0956797615617799) leveled a challenge against the prevailing theory that fractions—as opposed to whole numbers—are incompatible with humans’ primitive nonsymbolic number sense. Their ratio processing system (RPS) account holds that humans possess a primitive system that confers the ability to process nonsymbolic ratio magnitudes. Perhaps the most striking finding from Matthews et al. was that ratio processing ability predicted symbolic fractions knowledge and algebraic competence. The purpose of the current study was to replicate Matthews et al.’s novel results and to extend the study by including a control measure of fluid intelligence and an additional nonsymbolic magnitude format as predictors of multiple symbolic math outcomes. Ninety-nine college students completed three comparison tasks deciding which of two nonsymbolic ratios was numerically larger, along with three simple magnitude comparison tasks in corresponding formats that served as controls. The formats included were lines, circles, and dots. We found that RPS acuity predicted fractions knowledge for three university math placement exam subtests when controlling for simple magnitude acuities and inhibitory control. However, this predictive power of the RPS measure appeared to stem primarily from acuity in the line-ratio format, and that predictive power was attenuated with the inclusion of fluid intelligence. These findings may help refine theories positing the RPS as a domain-specific foundation for building fractions knowledge and related higher mathematics.

Keywords: nonsymbolic ratio magnitudes, number sense, ratio processing system, fractions knowledge, higher mathematics, individual differences, replication

Numerous studies have demonstrated the importance of fractions knowledge for mastery of algebra and more advanced higher mathematics (e.g., Bailey, Hoard, Nugent, & Geary, 2012; Booth & Newton, 2012; Siegler et al., 2012; Siegler, Fazio, Bailey, & Zhou, 2013). Despite the importance of fractions knowledge, however, it has been reported as one of the most difficult mathematical topics for children to grasp (e.g., Lesh, Post, & Behr, 1987; Ni & Di Zhou, 2005; Vamvakoussi & Vosniadou, 2004, 2010). Indeed, fractions knowledge often proves problematic even for high school students (Lortie-Forgues, Tian, & Siegler, 2015) and adults (Stigler, Givvin, & Thompson, 2010).

One prevalent account explaining these widespread struggles with fractions suggests that fractions are difficult because they cannot be grounded in a primitive nonsymbolic number sense, the approximate number system (ANS) (Bonato, Fabbri, Umiltà, & Zorzi, 2007; Feigenson, Dehaene, & Spelke, 2004; Ni & Di Zhou, 2005; Wynn, 1995). On this argument, because the ANS is dedicated to processing numerosities (i.e., sets of discrete objects), it is well suited to serve as a foundation for natural conceptual analogs to numerosities—the whole numbers (Dehaene, 2011; Feigenson et al., 2004). In contrast, these researchers have argued that innate constraints of the ANS make it inappropriate for processing fractions, concluding that fractions learning (in contrast to whole number learning) cannot be supported by phylogenetically ancient protonumerical abilities. Ni and Di Zhou (2005) referred to such theories as innate constraints accounts, and Matthews, Lewis, and Hubbard (2016) was framed largely as a response to such accounts with regard to fractions. Here, we replicated a novel finding of Matthews et al. (2016) and extended the study to put this controversial account of symbolic fraction acquisition to the test.

Processing Nonsymbolic Ratio Magnitudes

However, growing evidence suggests that there may be a primitive nonsymbolic number sense that is different from the ANS (Jacob, Vallentin, & Nieder, 2012; Lewis, Matthews, & Hubbard, 2015; McCrink & Wynn, 2007; Sidney, Thompson, Matthews, & Hubbard, 2017). This primitive ability is dedicated to processing nonsymbolic ratios (e.g., ratios instantiated by juxtaposing two line segments; Figure 1). Multiple studies have shown that human adults (Bonn & Cantlon, 2017; Jacob & Nieder, 2009a; Matthews & Chesney, 2015; Meng, Matthews, & Toomarian, 2019), infants (McCrink & Wynn, 2007) and even some nonhuman species (Bastos & Taylor, 2020; Drucker, Rossa, & Brannon, 2016; Vallentin & Nieder, 2010) are capable of representing ratio magnitudes when presented nonsymbolically. Against this backdrop, Matthews et al. (2016) leveled a challenge against the innate constraints account that fraction learning is hard because it goes beyond the limits of our basic representational capacities (e.g., Feigenson et al., 2004). They proposed that a nonsymbolic ratio processing system (RPS) might help children acquire symbolic fractions knowledge efficiently (see also Jacob et al., 2012; Lewis et al., 2015; Matthews & Chesney, 2015). They suggested that the RPS might potentially serve as a neurocognitive foundation for learning symbolic fractions much as the ANS supports symbolic whole number learning. They further argued that the RPS may provide an additional perceptual route that can expand cognitive primitive accounts of numerical cognition to apply to fractions—and perhaps to all real numbers (Sidney et al., 2017; see also Gallistel & Gelman, 2000).

Figure 1

Example Stimuli for Simple Magnitude (Top) and Ratio Magnitude (Bottom) Comparison Tasks

Note. Line, circle, and dot formats were presented.

Several studies have provided evidence consistent with the hypothesis that the RPS and symbolic fraction representations are compatible. One line of studies has demonstrated that both adults and children are capable of rapidly translating magnitudes across nonsymbolic and symbolic formats (Binzak, Matthews, & Hubbard, 2019; Kalra, Binzak, Matthews, & Hubbard, 2020; Matthews & Chesney, 2015; Matthews & Lewis, 2017; Meert, Grégoire, Seron, & Noël, 2013). In these studies, adults and children completed cross-format comparison tasks, whereby they determined the larger of a nonsymbolic and a symbolic ratio. The nonsymbolic ratios were presented in several formats, including lines (Kalra et al., 2020) and circles or dots (Matthews & Chesney, 2015). Regardless of nonsymbolic format, participants’ responses were rapid and generally accurate. In fact, cross-format comparisons are typically completed faster than within-format comparisons of symbolic fractions. Furthermore, these comparisons showed a distance effect whereby performance improves as the distance between the two magnitudes increases (Moyer & Landauer, 1967), suggesting that both symbolic and nonsymbolic ratios were represented as analog magnitudes and that those analog magnitudes were compatible enough to facilitate rapid comparison.

A few neuroimaging studies have similarly demonstrated processing of nonsymbolic and symbolic ratios at the neural level (Ischebeck, Schocke, & Delazer, 2009; Jacob & Nieder, 2009a; Jacob et al., 2012; Mock et al., 2018). In these studies, adults performed symbolic fraction comparisons (Ischebeck et al., 2009; Jacob & Nieder, 2009a), nonsymbolic ratio comparisons (Jacob & Nieder, 2009b), or both (Mock et al., 2018). Researchers have found that the posterior parietal cortex, especially the intraparietal sulcus (IPS), is recruited for both nonsymbolic and symbolic fraction processing, which indicates a possible shared processing region.

The RPS and the Acquisition of Symbolic Fractions Knowledge

One important aspect of the RPS account is that it posits that RPS acuity can help support the acquisition of fractions knowledge and other downstream mathematics, such as algebra. Specifically, Matthews et al. (2016) hypothesized 1) that both formal and informal learning help generate links between nonsymbolic ratios and their corresponding symbolic fractions, 2) that individual differences in RPS acuity might moderate the effects of instruction, and 3) that the RPS, which has been shown to operate even when it serves as a task-irrelevant dimension (Jacob & Nieder, 2009a; Matthews & Lewis, 2017), exerts its effects on learning even when it is not an explicit pedagogical focus.

On this hypothesis, RPS ability should be associated with fractions knowledge, and perhaps even with higher mathematics such as algebra, which requires an understanding of relational magnitude. To test this hypothesis, Matthews et al. (2016) investigated possible relations between the RPS and symbolic math achievement scores. The authors used a series of ratio magnitude comparison tasks to construct a composite measure of RPS acuity and also used simple dot and line comparison tasks to measure acuities for number of dots and line lengths as controls. In a novel finding, they observed that RPS acuity predicted symbolic numerical measures, including fractions knowledge and algebra assessment scores measured at college entry. These relations were significant even after controlling for inhibitory control, number acuity, and line-length acuity.

To date, however, this novel result has not been replicated. Although two studies with children also showed similar relations between nonsymbolic ratio comparison performance and symbolic fractions ability (Hansen et al., 2015; Möhring, Newcombe, Levine, & Frick, 2015), even these closest existing reports were quite different from Matthews et al. (2016) in at least three important ways: 1) Neither study measured RPS acuity explicitly, 2) they were conducted with child participants rather than adults, and 3) as a result, neither assessed relations between the RPS and advanced mathematics, such as algebra.

With the present study, we aimed to replicate Matthews et al.’s novel results using some identical tasks, a similar protocol, and a sample drawn from roughly the same population (i.e., students from the same introductory courses at the same university). At the same time, the current study aimed to refine and extend the results in three ways. First, we included an additional domain-general control measure, fluid intelligence, along with the inhibitory control measure from the original study. Fluid intelligence, the ability to solve novel and abstract problems, is known to be related to mathematical attainment and higher-order mathematics (Preusse, Elke, Deshpande, Krueger, & Wartenburger, 2011; Primi, Ferrão, & Almeida, 2010). Thus, including this measure imposes a tougher test of the extent to which RPS acuity can explain unique variance in higher-order mathematics. Second, whereas Matthews et al. examined only algebra subtest scores from the university placement exam, we acquired additional subtests of higher-order math achievement from the same exam: trigonometry and math fundamentals, which tested a combination of basic arithmetic, algebra, and geometry skills. Although the math fundamentals subtest featured some algebra items, they were confined to linear equations, whereas the algebra subtest consisted of more advanced material such as nonlinear equations and complex functions. Like the algebra subtest, both math fundamentals and trigonometry are more distal outcomes than fractions comparisons. Although there is no preexisting evidence to guide our hypotheses, our theory suggested the possibility that RPS acuity may be predictive of these two additional measures. Thus, we sought to explore how the RPS might predict these other tests of higher-order mathematics.
Third, we added ratio and simple magnitude comparisons for circle stimuli (see Matthews & Chesney, 2015; Meng et al., 2019) as a new format of nonsymbolic comparison task along with the line and dot formats from the original study. We added circles for three reasons: 1) Unlike dot arrays, they have no obvious whole number analogs; 2) unlike lines, they are not easily partitioned such that count-based strategies are plausible; and 3) despite their use in other RPS studies (e.g., Matthews & Chesney, 2015; Meng et al., 2019; Park, Viegut, & Matthews, 2020), their relations to symbolic mathematics performance have yet to be investigated.

In an analytical extension, we investigated the comparative predictive power of ratio processing ability in each separate format. Prior work has shown that RPS acuities differ depending on format (Park et al., 2020). Moreover, other work suggests that relations between acuity for magnitude in a specific format and math achievement may be specific to particular subdomains of mathematics (Lourenco, Bonny, Fernandez, & Rao, 2012; Odic et al., 2016; Park & Cho, 2017). However, Matthews et al. combined performance across different ratio formats into a composite RPS measure that obscured potential differences by format. In the current study, we entered acuity in each format as a separate predictor to determine whether RPS acuity in each format differentially relates to mathematical achievement. Moreover, prior work has shown that perceptual acuity for magnitudes can vary substantially by format (e.g., Odic, 2017; Odic, Libertus, Feigenson, & Halberda, 2013; Starr & Brannon, 2015). For example, discrimination of continuous magnitudes is typically more accurate than discrimination of numerosity (Odic, 2017; Odic et al., 2013; Park & Cho, 2017; Starr & Brannon, 2015); Odic (2017) found that acuity for discriminating line segments was highest, followed by acuities for discriminating area and numerosity. Based on this prior research, we hypothesized that our participants would demonstrate the highest acuity for line-based stimuli, followed by circles and then dots. Finally, we expected that ratio comparisons would be more difficult than simple magnitude comparisons, as previously found by Matthews et al. (2016).

Method

Participants

Ninety-nine undergraduate students from a large Midwestern university (85 Female; Mage = 20.12, SD = 1.14) participated for course credit.

Measures

Because this study was a conceptual replication of Matthews et al. (2016), we used several of the same measures from Matthews et al. and added a few more. For the chief predictors, we used three nonsymbolic ratio comparison tasks: the dot and line ratio tasks from Matthews et al., administered as separate formats, and circle ratios adapted from Meng, Matthews, and Toomarian (2019). We also included other cognitive tasks as covariate predictors. First, we used simple magnitude comparison tasks in dot, line, and circle formats to control for the ability to process the absolute magnitudes of the components of ratios (in contrast to their relative magnitudes). Each of these simple-format tasks has been studied for decades (e.g., Krueger, 1984; Stevens, 1957; Teghtsoonian, 1965). We also included a flanker task to account for differences in inhibitory control and Raven’s Standard Progressive Matrices as a measure of fluid intelligence.

Three of the outcome measures were identical to those in Matthews et al. (2016)—symbolic fractions comparisons, a fractions knowledge assessment (FKA), and Algebra scores from participants’ university placement examinations. We also included two additional placement examination subtest scores—Trigonometry and Math fundamentals (all measures are listed in Table 1). Each measure is described in more detail below. Task materials, data collected, and the R scripts are available online via the Open Science Framework (https://osf.io/c75xy).

Table 1

The List of Measures Used in the Current Study

Variable Type / Measure | Conducted in Matthews et al. | Computerized Taska
Independent Variables
Line ratio comparison | Yes | Yes
Dot ratio comparison | Yes | Yes
Circle ratio comparison | No | Yes
Simple line comparison | Yes | Yes
Simple dot comparison | Yes | Yes
Simple circle comparison | No | Yes
Raven’s Progressive Matrices | No | No
Dependent Variables
Symbolic fractions comparison | Yes | Yes
Fractions Knowledge Assessment | Yes | No
Algebra (placement exam) | Yes | No
Math fundamentals (placement exam) | No | No
Trigonometry (placement exam) | No | No

a All computerized tasks were presented on 1,920 × 1,080 resolution screens using E-Prime software (Schneider, Eschman, & Zuccolotto, 2002).

Nonsymbolic Comparisons (RPS Acuity and Simple Magnitude Control Tasks)

Nonsymbolic comparison tasks were blocked by type (i.e., ratio or simple magnitude) and format (i.e., dot, line or circle stimuli). For all nonsymbolic comparisons, participants were simultaneously presented with two stimuli and instructed to choose the larger one. Participants indicated their choices via key press—pressing “j” for right and “f” for left. Each trial began with a fixation cross for 200 ms, immediately followed by brief presentation of two comparison stimuli (Figure 1). Per Matthews et al. (2016), nonsymbolic ratio comparisons were presented for 1,500 ms before disappearing, and simple magnitude control trials were presented for only 750 ms. Trials did not advance until participants responded. Each ratio block started with 10 practice trials followed by 40 experimental trials, and each simple magnitude control block started with five practice trials followed by 40 experimental trials.

Task difficulty varied from trial to trial and was operationalized as the ratio distance, or the ratio between compared stimuli in a trial. Difficulty increased as ratio distance approached 1:1. Note that for ratio comparison trials, ratio distance was a ratio of ratios. The ranges of line and dot ratio distances were varied by format per Matthews et al. (Table 2). For circle stimuli, we adopted the distances for lines from Matthews et al., as our prior piloting demonstrated that participants had similar discrimination abilities in that format.

Table 2

Ratio Bins Indicating Difficulty in Each Task and Format

Format | Ratio Comparison (Maximum / Minimum) | Magnitude Comparison (Maximum / Minimum)
Line | 8:7 / 1:2 | 15:14 / 12:11
Circle | 8:7 / 1:2 | 8:7 / 2:1
Dot | 6:5 / 1:3 | 8:7 / 2:1

Line-Ratio Comparisons

Line ratio stimuli were constructed by juxtaposing white and black line segments with jitter per Matthews et al. (2016). White segments ranged from approximately 24 to 128 pixels long, and black segments ranged from approximately 30 to 254 pixels. We followed Matthews et al.’s controls to minimize the likelihood that participants would choose based on irrelevant dimensions, such as the physical lengths of individual components or overall raw size, rather than on ratio magnitude. Hence, we controlled stimuli such that the larger ratio had the longer summed length (numerator plus denominator) in half of all trials, and the larger ratio had the shorter summed length in the other half of trials.

Simple Line Comparisons

Individual black line segments appeared on each side of the screen. Segments ranged from approximately 64 to 162 pixels in length. The two lines were always jittered relative to each other so that participants would be encouraged to consider the entire lengths of each line as opposed to merely focusing on the tops of the lines as would be possible if they were aligned at the bottom.

Circle-Ratio Comparisons

Stimuli were constructed of white circles in the numerator/top position and black circles in the denominator/bottom position. The size of white circles ranged from approximately 2,826 to 12,070 square pixels, and the size of black circles ranged from approximately 3,847 to 18,617 square pixels. We controlled summed areas such that the larger ratio had a larger summed area in half of all trials, and the larger ratio had smaller summed area in the other half of trials.

Simple Circle Comparisons

Two black circles were presented, one on each side of the screen. The size of the circles ranged from approximately 1,661 to 5,539 square pixels.

Dot-Ratio Comparisons

Ratio stimuli were constructed from juxtaposed pairs of white dot arrays against black backgrounds (numerators) and black dot arrays against white backgrounds (denominators). The number of dots in the numerators ranged from 11 to 67, and the number of dots in the denominators ranged from 30 to 118. We controlled the summed numerosities (i.e., the summed number of white and black dots) such that in half of all trials, the larger ratio featured a greater summed number of dots, and in the other half, the larger ratio had a smaller number of summed dots.

Simple Dot Comparisons

An array of black dots against a rectangular gray background appeared on each side of the screen. The number of dots in arrays ranged from 50 to 200 to preclude the possibility of counting given the rapid rate of response typical for such tasks (i.e., <1,000 ms). In half of the trials, the summed area of dots was constant across the two arrays, and in the other half, the dot size was constant across two arrays. Thus, in the first case, dot size was anticorrelated with numerosity, and in the other case, the cumulative area and density were correlated with numerosity.

Symbolic Fractions Comparisons

Participants selected the larger of two symbolic fractions via keypress. All fractions stimuli were irreducible and composed of single-digit numerators and denominators. We used the same 30 pairs used by Matthews et al. (2016), which excluded fractions pairs sharing common components (e.g., 3/5 vs. 3/6) to minimize dependency on componential strategies (i.e., judgments based solely on numerator or solely on denominator comparisons, rather than on overall fraction magnitude). On each trial, a fixation cross appeared for 200 ms followed by presentation of comparison stimuli until the participant responded or until the trial timed out at 5,000 ms. Symbolic comparison blocks started with five practice trials followed by 30 experimental trials. The side on which the larger fraction appeared (left/right) was counterbalanced across trials. Order of trial presentation was randomized.

Flanker Task

Our version of this measure of inhibitory control was identical to that from Matthews et al. (2016). Participants were asked to decide which direction the center arrow among five was pointing. On each trial, a fixation cross was presented for 500 ms followed by an array of five evenly spaced arrows, which appeared for up to 800 ms or until the participant’s response. Participants first received 12 practice trials followed by 80 test trials. Half of experimental trials were congruent, in which all stimuli pointed the same direction, and the other half of the trials were incongruent, in which the center stimulus pointed the opposite direction from the four flanking arrows. We used the congruity-based difference in RTs (RTincongruent − RTcongruent) in our analyses.

Fractions Knowledge Assessment

The FKA was a 38-item pencil-and-paper test constructed by Matthews et al. (2016). Items were culled from key national and international assessments (e.g., the National Assessment of Educational Progress and the Trends in International Mathematics and Science Study) and from assessments developed by psychology and math education researchers (Carpenter, Corbitt, & National Council of Teachers of Mathematics, 1981; Hallett, Nunes, Bryant, & Thorpe, 2012). Items were intended to measure aspects of conceptual knowledge and of fraction arithmetic procedures. The conceptual knowledge items covered ordering of fractions, density, and how fractions operations affect magnitudes plotted on a number line (see sample items in Appendix). Participants had 20 minutes to complete the test. All participants were able to finish the test within the time limit.

College Mathematics Placement Exams

We obtained scores from three subtests of the math placement exam taken by all incoming freshmen: Advanced Algebra (AALG), Math fundamentals (MFND), and Trigonometry & analytic geometry (TAG). The exams were taken by all freshmen once admitted to the University for placement purposes and have been subject to years of validation work by the university testing services. As noted above, Math fundamentals tested a combination of basic arithmetic, algebra, and geometry skills. The subtests were composed of 30, 25, and 20 items, respectively. The internal consistency reliability of each test (Cronbach’s α) was .89 for MFND, .88 for AALG, and .85 for TAG. The mean normalized assessment score for each test is 500 with a standard deviation of 100.

Raven’s Progressive Matrices

Raven’s is a widely used standardized test measuring fluid intelligence (Raven, Raven, & Court, 1998). The test is composed of five sets of 12 items each. Each item requires analyzing a pattern of figures and reasoning about what figure would complete the pattern if placed in the blank. As the test progresses, the items become more complex. Participants were given 20 minutes to finish the test. Each item was worth 1 point, with a total possible raw score of 60. This test has been used across a wide range of populations, with test-retest reliabilities of .83–.93 (Raven, 2000; Raven et al., 1998). We used the raw score in our analyses.

Experimental Procedure

The experiment was divided into two sessions, each on a different day (Mgap = 5.25 days, SD = 2.8). In session 1, participants completed all comparison and flanker tasks on computers. First, participants completed the nonsymbolic comparison tasks. The order of format was always dots, lines, and then circles. In each format, participants completed the simple magnitude control tasks first followed by the ratio magnitude comparison task. Next, participants completed symbolic fractions comparison followed by the flanker test. In session 2, the participants completed the FKA followed by Raven’s Progressive Matrices.

Missing Cases and Outlier Removal

Data for simple line comparisons from one participant, simple circle comparisons from another, and the FKA from a third were not collected due to experimenter error. FKA and Raven’s scores were also unavailable for six participants who failed to return for session 2, and we were unable to secure placement exam scores for one participant. The data for these participants were otherwise intact and were included in the analyses whenever possible. However, because regressions were run using listwise deletion, participants with missing data elements were removed entirely from the regressions requiring those elements. We indicate the sample size included in each analysis in the corresponding tables for reference.

For all computerized tasks, trials with reaction times (RTs) shorter than 250 ms and trials with RTs more than 3 SD from a participant’s mean RT were trimmed. These steps resulted in the loss of 1.60% to 3.91% of the data for each computerized task. Additionally, for all comparison tasks, we excluded data from participants who scored beyond 3 standard deviations from the group mean for that task and from participants who scored below chance level. This step resulted in the exclusion of four participants’ ratio comparison task data (2 from circle-ratio, 1 from dot-ratio, 1 from line-ratio, 3 from symbolic fractions comparison). This resulted in a trimmed analytic sample of 80 participants. Note, we used this trimmed sample in the hierarchical regressions, except those predicting symbolic fractions comparisons, so that the sample would not vary depending on the number of predictors in each step. For models predicting symbolic fractions comparison, an additional three participants were excluded for below-chance performance, resulting in an analytic sample of 77 participants.
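As a concrete illustration of this trimming rule, the following Python sketch applies the two steps to hypothetical trial-level data (not the study’s data): anticipatory responses faster than 250 ms are dropped first, and then trials beyond 3 SD of each participant’s own mean RT.

```python
import pandas as pd

# Hypothetical trial-level data (not the study's): one row per trial, RT in ms.
df = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2],
    "rt":      [610, 240, 655, 3000, 580, 505, 620, 150],
})

# Step 1: drop anticipatory responses faster than 250 ms.
df = df[df["rt"] >= 250]

# Step 2: drop trials more than 3 SD from each participant's own mean RT.
z = df.groupby("subject")["rt"].transform(lambda x: (x - x.mean()) / x.std())
trimmed = df[z.abs() <= 3]
```

Note that the second cutoff is computed per participant, so a slow-but-typical trial for one participant is not penalized by another participant’s faster distribution.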

Weber Fraction Analysis

We used task accuracy as the measure of acuity in most of our analyses. However, to compare acuity across different comparison tasks, we computed internal Weber fractions (hereafter, w) rather than accuracy as measures of acuity. In comparison tasks, w represents the smallest discriminable difference expressed as a ratio between the two magnitudes (e.g., Halberda, Mazzocco, & Feigenson, 2008; Pica, Lemer, Izard, & Dehaene, 2004), with smaller w indicating better discrimination acuity. Because the Weber fraction is invariant to the specific ratio difficulties used to measure it, w allows direct comparison across tasks. By contrast, it is inappropriate to compare discriminability across tasks using accuracy when the tasks use different ratio distances for comparisons.

The model for Weber’s Law assumes that the internal representation of a magnitude can be described as a Gaussian function. For instance, in the case of dot comparisons, if we use $n_1$ and $n_2$ to denote the number of dots in each array, the difference between the two internal representations can be expressed as a new Gaussian with a mean of $(n_1 - n_2)$ and a standard deviation of $w\sqrt{n_1^2 + n_2^2}$; the Gaussian error function (erfc; Eq. 1) then gives the degree of overlap between the two representations. For each individual, w can be calculated using the Levenberg-Marquardt algorithm for a nonlinear least squares fit of mean accuracy as a function of magnitude ratio (Eq. 2).

Eq. 1. Gaussian Error Function

$$\mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}}\,dt \tag{1}$$

Eq. 2. Expected Percentage Correct

$$\text{Percentage correct} = 1 - \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{n_{1} - n_{2}}{\sqrt{2}\,w\sqrt{n_{1}^{2} + n_{2}^{2}}}\right) \tag{2}$$
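The fit in Eq. 2 can be sketched in Python with SciPy’s `curve_fit`, whose default unconstrained solver is of the Levenberg-Marquardt type. The per-bin magnitudes and accuracies below are hypothetical values for illustration, not the study’s data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def expected_accuracy(pair, w):
    """Eq. 2: predicted proportion correct for comparing magnitudes
    n1 > n2, given an internal Weber fraction w."""
    n1, n2 = pair
    return 1 - 0.5 * erfc((n1 - n2) / (np.sqrt(2) * w * np.sqrt(n1**2 + n2**2)))

# Hypothetical per-bin data (not the study's): larger magnitude, smaller
# magnitude, and mean observed accuracy at each difficulty bin.
n1 = np.array([8., 7., 6., 5., 4., 3.])
n2 = np.array([7., 6., 5., 4., 3., 2.])
observed = np.array([0.65, 0.67, 0.70, 0.73, 0.79, 0.87])

# Nonlinear least squares fit of w to the accuracy-by-ratio curve.
(w_hat,), _ = curve_fit(expected_accuracy, (n1, n2), observed, p0=[0.3])
```

Fitting accuracy as a function of the magnitude pair in this way yields one w per participant and task, which is what allows acuity to be compared across tasks with different difficulty bins.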

We calculated w for each individual participant in each nonsymbolic comparison task. Prior to analysis, we excluded three participants’ ws from circle ratio comparisons and one participant’s w from dot ratio comparisons due to extremely large values (>10, which would imply difficulty discriminating between 10 and 110 dots). We took such extreme values as indicators of noncompliance (see Odic [2017] for a similar approach). We also excluded participants’ w data that fell beyond 3 standard deviations from the group mean for each task. This step resulted in the exclusion of several w data points distributed across tasks (3 from dot ratio, 5 from circle ratio, 1 from line ratio, 1 from dot, and 1 from line comparisons).

Statistical Power

We conducted two different types of analysis: the first compared acuities across tasks using w as the dependent variable; the second replicated Matthews et al.’s (2016) regressions, using acuity indexed by accuracy as the independent variable. Because replication was our main goal, we powered the study for the analyses using accuracy as the chief independent variable. We used the “pwr” package in R to calculate statistical power based on the effect sizes of .18–.25 observed in the hierarchical models predicting symbolic fractions comparison (f2 = .18), FKA (f2 = .25), and Algebra (f2 = .20) in Matthews et al. (2016) and α = .05 (Figure 2). Given these conditions, our initial recruitment of 99 participants would have yielded power of .96 for FKA, .91 for Algebra, and .85 for symbolic fractions comparison. After cleaning, our regression framework with the trimmed samples using comparison task accuracy as predictors had power of .90 (n = 80) for FKA, .82 (n = 80) for Algebra, and .72 (n = 77) for symbolic fractions to detect the effects listed above. For the supplementary analysis with models using ws as predictors, power was .87 (n = 74) for FKA, .78 (n = 74) for Algebra, and .68 (n = 71) for symbolic fractions comparison (see Table S4, Supplementary Materials).
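This style of power calculation can be reproduced from the noncentral F distribution. The Python sketch below mirrors the logic of R’s pwr.f2.test; the numerator df u and denominator df v passed in are illustrative assumptions, not values reported in the text.

```python
from scipy.stats import f as f_dist, ncf

def f2_power(f2, u, v, alpha=0.05):
    """Power of the F test for an R^2 change with Cohen's effect size f2,
    u numerator df, and v denominator df (mirrors R's pwr::pwr.f2.test)."""
    ncp = f2 * (u + v + 1)                 # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, u, v)   # critical value under H0
    return 1 - ncf.cdf(f_crit, u, v, ncp)  # P(reject H0 | H1 true)

# Illustrative call only: u = 3 predictors added at the step of interest and
# v = 72 residual df are assumptions for demonstration.
power = f2_power(f2=0.25, u=3, v=72)
```

Because the noncentrality parameter grows with both the effect size and the residual degrees of freedom, trimming the sample (reducing v) lowers power exactly as reported for the post-cleaning analyses.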

Figure 2

Expected Power (β) Changes Depending on the Sample Size

Results

RPS and Simple Magnitude Acuity

To compare acuity across magnitude types and formats, we fit linear mixed effects regression models to account for within-subject correlation using the “lmer” function of the lme4 package in R (Bates, Mächler, Bolker, & Walker, 2015). We regressed acuity (ws) against task (2 levels, ratio = 0, simple magnitude = 1) and format (3 levels, dot = 0, circle = 1, line = 2). We also included a task × format interaction term in the model to check whether observed format differences depended on whether participants were comparing ratios or making simple magnitude comparisons. To facilitate evaluation of our hypotheses for how ws would vary with task and format (i.e., ratio > simple magnitude and dot > circle > line), we used a backward difference coding scheme to compare adjacent levels of variables (i.e., coded so that the mean of a given level was compared with the mean of the immediately prior level). We estimated fixed effects for all predictors with random intercepts. Results from these regressions are presented in Table 3.

Table 3

Linear Mixed Effects Regressing ws Against Tasks and Formats

Regressor β t p
Intercept .18 31.00 < .001**
Simple magnitude −.21 −27.03 < .001**
Circle—Dot −.07 −7.31 < .001**
Line—Circle −.02 −2.32 .021*
|Simple—Ratio magnitude|*|Circle—Dot| .02 1.27 .209
|Simple—Ratio magnitude|*|Line—Circle| .02 1.05 .296

*p < .05. **p < .01.

Participants showed significantly higher acuity (lower ws) for simple magnitude comparisons than for ratio comparisons (β = −.21, p < .001; Figure 3, Table 3). Moreover, there was a significant format effect. Consistent with our hypotheses, acuity was higher for lines than for circles (β = −.02, p = .021) and higher for circles than for dot arrays (β = −.07, p < .001). There were no significant interactions between tasks and formats, suggesting that the main effects were additive (see Figure 3).
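The original models were fit with lme4’s lmer in R. The following Python sketch (statsmodels’ MixedLM on simulated data) illustrates the same specification, including the backward difference coding of format via patsy’s Diff contrast; the simulated effect sizes roughly follow Table 3, but the data and variable names are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate ws with the qualitative pattern reported in Table 3:
# simple-magnitude tasks easier than ratio tasks, and acuity
# improving from dot to circle to line.
rng = np.random.default_rng(1)
rows = []
for subj in range(40):
    base = rng.normal(0.18, 0.03)  # random intercept per participant
    for task in ("ratio", "simple"):
        for i, fmt in enumerate(("dot", "circle", "line")):
            w = base - 0.21 * (task == "simple") - 0.03 * i + rng.normal(0, 0.02)
            rows.append({"subj": subj, "task": task, "fmt": fmt, "w": w})
df = pd.DataFrame(rows)

# Order the format levels so backward-difference (Diff) coding compares
# circle with dot and line with circle, as in the reported analysis.
df["fmt"] = pd.Categorical(df["fmt"], categories=["dot", "circle", "line"])

model = smf.mixedlm("w ~ C(task, Treatment('ratio')) * C(fmt, Diff)",
                    df, groups="subj")
result = model.fit()
print(result.summary())
```

The fitted fixed effects comprise the intercept, the task effect, the two adjacent-format contrasts, and the two task × format interaction terms, matching the six rows of Table 3.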

Figure 3

Task and Format Differences in ws

Note. Lower Weber fractions indicate higher acuity. Simple magnitude acuity was sharper than ratio magnitude acuity, as all ws in the right panel are lower than those in the left panel. Acuity also varied by format, increasing (i.e., w decreasing) from dot to circle to line format.

Relations Between Nonsymbolic Comparisons and Symbolic Math Outcomes

Revisiting Matthews et al. (2016)

We first replicated the analyses from Matthews et al. (2016). We calculated composite RPS acuity by taking the mean of line ratio and dot ratio accuracies per Matthews et al. Zero-order correlations showed that RPS composite scores were significantly correlated with three of the five symbolic outcome measures (FKA: r = .26, p = .019, Math fundamentals: r = .28, p = .014, Trigonometry: r = .23, p = .038) but not with symbolic fraction comparison (r = .16, p = .16) or Algebra (r = .20, p = .069) (Table 4).
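The composite and its zero-order correlations can be sketched as follows (illustrative Python on simulated accuracies; the original analyses were run in R, and the simulated outcome is ours):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 80
line_acc = rng.uniform(0.6, 1.0, n)          # line ratio comparison accuracy
dot_acc = rng.uniform(0.5, 0.9, n)           # dot ratio comparison accuracy
fka = 10 * line_acc + rng.normal(0, 1, n)    # simulated FKA-like outcome

# Composite RPS acuity: mean of line ratio and dot ratio accuracies,
# per Matthews et al. (2016)
rps_composite = (line_acc + dot_acc) / 2
r, p = pearsonr(rps_composite, fka)
```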

Table 4

Bivariate Correlation Among Accuracies From Comparison Tasks and Tests (n = 80, Except for FF: n = 77)

Measure Line Ratio Circle Ratio Dot Ratio Line Circle Dot FF Inhibition FKA ALG MF Trig Raven’s
RPS Composite .80** .54** .90** .44** .30** .42** .16 −.07 .26* .20 .28* .23* .33**
Line Ratio .40** .46** .23* .03 .24* .16 −.07 .32** .28* .35** .29** .32**
Circle Ratio .51** .18 .05 .21 −.02 −.07 .16 .00 .07 .06 .08
Dot Ratio .48** .42** .44** .12 −.05 .16 .10 .15 .13 .26*
Line .47** .34** −.01 −.10 .03 .11 .08 .01 .17
Circle .28* .07 −.03 −.09 .03 −.07 −.01 −.07
Dot −.17 −.01 .04 .25* .15 .16 .22*
FF −.02 .37** .23* .39** .18 .16
Inhibition −.05 −.04 −.12 −.07 −.18
FKA .51** .54** .46** .32**
ALG .82** .72** .42**
MF .68** .43**
Trig .30**

Note. FF = fractions comparison; FKA = fractional knowledge; ALG = algebra; MF = math fundamental; Trig = Trigonometry.

*p < .05. **p < .01.

Next, we conducted a series of two-stage hierarchical linear regressions (Table 5), one for each of the symbolic outcomes (i.e., symbolic fractions comparison, FKA, Algebra, Math fundamentals, Trigonometry). In the first stage, we entered the control variables simple dot acuity, simple line acuity, and flanker performance. In the second stage, we added composite RPS acuity. To facilitate the interpretation of effect size across variables, we reported standardized coefficients in all hierarchical regression models.
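Each two-stage model amounts to comparing nested regressions, with the R² change at Stage 2 attributable to the RPS composite. An illustrative Python sketch on simulated, standardized data (variable names and effect sizes are ours) is below.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 80
df = pd.DataFrame({
    "line": rng.normal(size=n),         # simple line acuity (control)
    "dot": rng.normal(size=n),          # simple dot acuity (control)
    "inhibition": rng.normal(size=n),   # flanker performance (control)
    "rps": rng.normal(size=n),          # composite RPS acuity
})
# Simulated outcome in which only the RPS composite carries signal
df["fka"] = 0.3 * df["rps"] + rng.normal(size=n)

# Standardize all variables so the coefficients are standardized betas,
# as reported in the hierarchical regression tables
z = (df - df.mean()) / df.std(ddof=0)

step1 = smf.ols("fka ~ line + dot + inhibition", z).fit()
step2 = smf.ols("fka ~ line + dot + inhibition + rps", z).fit()
delta_r2 = step2.rsquared - step1.rsquared  # R^2 change from adding RPS
```

With a single predictor added at Stage 2, this ΔR² equals the squared semipartial correlation (sr²) reported for the RPS composite.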

Table 5

Results From the Linear Regression Predicting From Each Format of Ratio Comparison Tasks

Regressor Fractions Comparison
Fractions Knowledge
Algebra
Math Fundamentals
Trigonometry
β p sr2 β p sr2 β p sr2 β p sr2 β p sr2
Step 1 R2 = .03 R2 = .004 R2 = .07 R2 = .04 R2 = .03
Line .057 .643 .003 .017 .890 .000 .019 .873 .000 .015 .904 .000 −.059 .629 .003
Dot −.194 .118 .033 .030 .808 .001 .248 .039* .054 .142 .241 .018 .179 .140 .028
Inhibition −.017 .882 .000 −.048 .676 .002 −.038 .734 .001 −.120 .291 .014 −.076 .503 .006
Step 2 ΔR2 = .07 ΔR2 = .08 ΔR2 = .01 ΔR2 = .05 ΔR2 = .04
Line −.046 .723 .002 −.090 .476 .006 −.022 .864 .000 −.076 .544 .004 −.139 .276 .015
Dot −.281 .028* .063 −.069 .580 .004 .211 .094 .025 .058 .639 .003 .106 .399 .009
Inhibition −.010 .933 .000 −.038 .735 .001 −.034 .761 .001 −.111 .317 .012 −.069 .541 .005
RPS composite .298 .027* .064 .327 .014* .078 .124 .344 .011 .277 .035* .056 .244 .064 .043

Note. Line = Line acuity; Dot = Dot acuity.

*p < .05.

Consistent with Matthews et al., we found that RPS composite acuity significantly predicted symbolic fractions comparison (β = .298, p = .027) and FKA performance (β = .327, p = .014) even after controlling for inhibition and simple magnitude acuity. We further found that RPS composite acuity significantly predicted math fundamental scores (β = .277, p = .035). However, unlike Matthews et al., we found RPS composite acuity failed to predict Algebra placement exam scores (β = .124, p = .344). It also did not predict trigonometry scores (β = .244, p = .064).

These results noted, use of an RPS composite does not allow insight into whether RPS acuities in different formats are differentially predictive of math outcomes. Thus, we expanded on Matthews et al. (2016) by disaggregating RPS acuity by format and controlling for fluid intelligence using Raven’s Standard Progressive Matrices. We conducted analyses parallel to those above, but this time we included acuity in each format as a separate predictor. Although we used accuracy as the indicator of acuity for the analyses reported below, we also conducted supplementary analyses using ws as an alternate measure of acuity. The results with ws were consistent with those reported using accuracy (see Supplementary Materials).

Bivariate correlations showed that line ratio acuity was significantly correlated with four of five symbolic math outcomes (FKA: r = .32, p = .004, Algebra: r = .28, p = .013, Math fundamentals: r = .35, p = .001, Trig: r = .29, p = .009) and with Raven’s scores (r = .32, p = .004). In contrast, dot and circle ratio acuities were not correlated with any of the five symbolic outcomes (Table 4). Raven’s scores were correlated with all math outcomes (FKA: r = .32, p = .004, Algebra: r = .42, p < .001, Math fundamentals: r = .43, p < .001, Trig: r = .30, p = .007) except for symbolic fractions comparison (r = .16, p = .16). Because our regressions used trimmed samples, we also conducted a supplementary bivariate correlation analysis without list-wise deletion (Table S2, Supplementary Materials), and the results showed similar correlations across variables.

Finally, we performed a new set of three-stage hierarchical linear regressions that extended Matthews et al. (2016) (Tables 6, 7, 8, 9, and 10). In Stage 1, we entered flanker scores and all simple magnitude acuities; in Stage 2, we entered ratio magnitude acuities; and in Stage 3, we added fluid intelligence. Below, we present results from regressions using comparison accuracies; supplementary analyses conducted with Weber fractions yielded similar results (Table S4 a–e, Supplementary Materials).

Table 6

Results From the Hierarchical Regression Analyses Predicting Scores on Symbolic Fractions Comparison (n = 77)

Regressor Step 1 (R2 = .05)
Step 2 (ΔR2 = .07)
Step 3 (ΔR2 = .02)
β p sr2 β p sr2 β p sr2
Line −.004 .299 .000 −.091 .523 .005 −.108 .448 .007
Circle .137 .894 .014 .107 .445 .007 .156 .279 .015
Dot −.213 .091 .039 −.291 .026* .066 −.321 .015* .078
Inhibition −.021 .900 .000 −.016 .887 .000 .011 .926 .000
Line Ratio .204 .131 .030 .166 .225 .019
Circle Ratio −.140 .310 .013 −.103 .460 .007
Dot Ratio .225 .190 .022 .177 .309 .013
Raven’s .178 .168 .024

Note. Line = Line acuity; Circle = Circle acuity; Dot = Dot acuity; Line Ratio = Line ratio acuity; Circle Ratio = Circle ratio acuity; Dot Ratio = Dot ratio acuity.

*p < .05.

Table 7

Results From the Hierarchical Regression Analyses Predicting Scores on Fractions Knowledge (n = 80)

Regressor Step 1 (R2 = .02)
Step 2 (ΔR2 = .10)
Step 3 (ΔR2 = .05)
β p sr2 β p sr2 β p sr2
Line .078 .563 .004 −.009 .945 .000 −.032 .813 .001
Circle −.145 .269 .016 −.128 .344 .011 −.060 .658 .002
Dot .049 .689 .002 −.036 .777 .001 −.072 .565 .004
Inhibition −.047 .686 .002 −.030 .790 .001 .014 .900 .000
Line Ratio .282 .034* .057 .226 .086 .035
Circle Ratio .018 .894 .000 .066 .623 .003
Dot Ratio .090 .590 .004 .023 .888 .000
Raven’s .258 .039* .052

Note. Line = Line acuity; Circle = Circle acuity; Dot = Dot acuity; Line Ratio = Line ratio acuity; Circle Ratio = Circle ratio acuity; Dot Ratio = Dot ratio acuity.

*p < .05.

Table 8

Results From the Hierarchical Regression Analyses Predicting Scores on Algebra (n = 80)

Regressor Step 1 (R2 = .07)
Step 2 (ΔR2= .07)
Step 3 (ΔR2 = .11)
β p sr2 β p sr2 β p sr2
Line .045 .734 .001 .025 .851 .000 −.007 .956 .000
Circle −.061 .633 .003 −.016 .906 .000 .083 .520 .004
Dot .257 .036* .057 .243 .053 .046 .192 .109 .028
Inhibition −.037 .740 .001 −.030 .785 .001 .034 .750 .001
Line Ratio .305 .020* .067 .223 .075 .035
Circle Ratio −.132 .325 .012 −.062 .626 .002
Dot Ratio −.087 .595 .003 −.184 .245 .015
Raven’s .375 .002** .110

Note. Line = Line acuity; Circle = Circle acuity; Dot = Dot acuity; Line Ratio = Line ratio acuity; Circle Ratio = Circle ratio acuity; Dot Ratio = Dot ratio acuity.

*p < .05. ** p < .01.

Table 9

Results From the Hierarchical Regression Analyses Predicting Scores on Math Fundamentals (n = 80)

Regressor Step 1 (R2 = .06)
Step 2 (ΔR2 = .10)
Step 3 (ΔR2 = .08)
β p sr2 β p sr2 β p sr2
Line .083 .529 .005 .012 .931 .000 −.017 .897 .000
Circle −.163 .206 .020 −.143 .275 .014 −.058 .653 .002
Dot .164 .178 .023 .099 .420 .008 .054 .652 .002
Inhibition −.119 .297 .014 −.108 .326 .011 −.052 .625 .003
Line Ratio .344 .008** .086 .273 .031* .052
Circle Ratio −.129 .327 .011 −.069 .599 .003
Dot Ratio .065 .687 .002 −.018 .906 .000
Raven’s .327 .007** .084

Note. Line = Line acuity; Circle = Circle acuity; Dot = Dot acuity; Line Ratio = Line ratio acuity; Circle Ratio = Circle ratio acuity; Dot Ratio = Dot ratio acuity.

*p < .05. ** p < .01.

Table 10

Results From the Hierarchical Regression Analyses Predicting Scores on Trigonometry (n = 80)

Regressor Step 1 (R2 = .04)
Step 2 (ΔR2 = .07)
Step 3 (ΔR2 = .04)
β p sr2 β p sr2 β p sr2
Line −.037 .782 .000 −.097 .478 .006 −.012 .391 .009
Circle −.052 .691 .002 −.032 .815 .001 .027 .844 .000
Dot .186 .131 .030 .131 .302 .013 .100 .428 .008
Inhibition −.076 .509 .006 −.066 .561 .004 −.028 .807 .001
Line Ratio .293 .028* .062 .245 .067 .042
Circle Ratio −.094 .491 .006 −.052 .700 .002
Dot Ratio .045 .786 .001 −.012 .943 .000
Raven’s .223 .076 .039

Note. Line = Line acuity; Circle = Circle acuity; Dot = Dot acuity; Line Ratio = Line ratio acuity; Circle Ratio = Circle ratio acuity; Dot Ratio = Dot ratio acuity.

*p < .05.

Dot acuity was the only significant predictor among the controls entered in Stage 1; it significantly predicted Algebra performance (β = .257, p = .036). Overall, inhibitory control and magnitude acuities in the models explained between 2% and 7% of the variance in math achievement in each of the three subdomains.

When all ratio acuities were entered in Stage 2, they explained an additional 7–10% of the variance in the models. Dot acuity was no longer a significant predictor of symbolic math outcomes. However, line ratio acuity emerged as a significant predictor for all symbolic math outcomes except symbolic fractions comparison: it significantly predicted FKA (β = .282, p = .034), Algebra (β = .305, p = .020), Math fundamentals (β = .344, p = .008), and Trigonometry (β = .293, p = .028) (Figure 4). No other ratio acuities were significant. Thus, it seems that the predictive power of the RPS composite was driven largely by ratios presented in the line format.

Figure 4

Correlation Between Accuracies of Line Ratio Comparison Task and Different Math Achievement

When Raven’s scores were added in the final step, they explained an additional 2–11% of the variance in the models. Fluid intelligence significantly predicted math achievement in three subdomains: FKA (β = .258, p = .039), Algebra (β = .375, p = .002), and Math fundamentals (β = .327, p = .007), but it was not predictive of symbolic fractions comparison (p = .168) or Trigonometry (p = .076). Because of the strong effect of intelligence and shared variance, line ratio acuity was rendered nonsignificant for most outcomes, but it remained a significant predictor of Math fundamentals (β = .273, p = .031).

Finally, we note the unexpected finding that symbolic fractions comparison was inversely correlated with acuity for simple dot comparisons. Upon further analysis, this correlation appears to be coincidental; indeed, we found that it was only present for the sample after it was trimmed for listwise deletion in the regressions. When supplemental bivariate correlations were conducted without list-wise deletion, the correlation disappeared (r = −.04, n = 88). Similarly, when we conducted supplemental hierarchical regressions without Raven’s—which allowed the inclusion of seven more participants from whom we failed to collect Raven’s scores—the relation was once again nonsignificant (Table S5, Supplementary Materials).

Discussion

The current research was a partial replication and extension of Matthews et al. (2016), which previously found an association between nonsymbolic ratio processing ability and symbolic numerical abilities, including symbolic fractions comparison, general fractions knowledge, and Algebra. We extended the prior work by including a new stimulus format (circle stimuli) and general intelligence as additional predictors and by including two additional symbolic math outcomes (Math fundamentals and Trigonometry). Our findings confirmed and refined some of the previously observed links between the RPS and symbolic math abilities, but also failed to replicate some of the original findings. We discuss the nuances and possible implications of these findings below.

The Links Between RPS Acuity and Symbolic Math Outcomes

Consistent with Matthews et al., when we operationalized RPS acuity as a composite of line and dot ratio performance, we found that composite acuity predicted symbolic fractions comparison and general fractions knowledge. This was true even when controlling for simple magnitude acuities and inhibitory control. On the other hand, the relations between composite RPS acuity and Algebra failed to replicate. However, the RPS composite was predictive of Math fundamentals which also tested some basic algebra concepts.

When we disaggregated the composite to check the predictive power of each format, we found that the effects of the RPS composite were largely driven by performance in the line ratio format. Prior to the addition of general intelligence in the third stage of our hierarchical regressions, the line ratio format predicted performance on four of five outcome measures—the FKA, Algebra, Math fundamentals, and Trigonometry. Indeed, a standard deviation improvement on line ratio comparisons was associated with anywhere from one-fourth to one-third of a standard deviation improvement on these outcomes. The current findings both corroborate and refine Matthews et al.’s prior results showing that nonsymbolic ratio processing ability was predictive of symbolic math performance, with predictive power confined to the line ratio format.

It is unclear why the line format was the most predictive. Although it was reasonable to expect that acuity would be higher for line ratios than for circle or dot ratios based on prior research (e.g., Odic, 2017; Odic et al., 2013), we had no a priori expectations that the line format would prove more predictive than the others. We speculate that this may have to do with the relative simplicity of the line format compared to the other two. The visual complexity of dot arrays is well documented (e.g., Gebuis & Reynvoet, 2012; Leibovich & Henik, 2013; Newcombe, Levine, & Mix, 2015). As a result of this complexity, non-numerical features add noise to the numerosity-based signal of ratio magnitude. Although circle ratios are ostensibly simpler than dot ratios, it has been demonstrated that participants can use either area or circumference as an index of circle size (Teghtsoonian, 1965). In contrast, by confining attention to a single dimension, the line format may allow participants relatively easy access to ratio information without unnecessary visual input. Hence, it may be the case that line ratio discrimination offers a cleaner measure of individual differences in participants’ ratio processing acuity compared to other formats. More research is necessary to evaluate this speculative account.

It is striking that the low-level perceptual ability to discriminate line ratios—an ability which has been found even among rhesus macaques (Drucker et al., 2016; Vallentin & Nieder, 2008)—was predictive of higher order symbolic math abilities. This is perhaps more noteworthy in light of the unexpected finding that line ratio comparison failed to predict symbolic fraction performance. After all, nonsymbolic ratio discrimination and symbolic fraction comparison shared several features: 1) both tasks were computerized, 2) both were alternative forced choice comparisons, and 3) both assessed analogous rational number magnitudes. Given this, it seems reasonable to expect more shared variance between line ratio comparison and symbolic fraction comparison than between ratio comparison and any other outcome. On the other hand, recent work by Bhatia et al. (2020) suggests that we should not expect a positive relation between nonsymbolic ratio discrimination and symbolic fraction comparison. Using match-to-sample tasks, the authors found that ratio matching tasks exhibited distance effects whereas fraction matching tasks did not. They interpreted this finding as underscoring the role that strategies—and strategy-inducing foils—can play in symbolic comparisons as opposed to nonsymbolic comparisons. Bhatia et al. hypothesized that the differential role played by explicit strategies in nonsymbolic and symbolic comparisons should render results from the two tasks largely independent. Although we found no correlation, per Bhatia et al.’s predictions, more systematic work is necessary to adjudicate between these competing hypotheses.

Beyond Magnitude

Why would ratio processing ability predict higher order mathematics? RPS theorists (Jacob et al., 2012; Lewis et al., 2015; Matthews et al., 2016) have hypothesized that the ability to process nonsymbolic ratio magnitude might serve as a cognitive primitive that imbues symbolic fractions with meaning. According to Matthews and Chesney (2015), “…we might eventually come to teach what a fraction symbol like 1/3 represents in much the same way that we teach young children what the symbol 4 represents or what a ‘dog’ or a ‘cat’ is” (p. 52). However, we argue that there are two rather large problems with this account given the current evidence. First, according to the RPS-as-cognitive-primitive hypothesis, symbolic fraction comparison should have been predicted by RPS acuity, but it was not. Second, the account fails to accommodate the complexity of the higher order mathematics abilities we measured. Even the simplest of them—the FKA—involves multiple arithmetic operations. Beyond these operations, each college placement exam adds use of variables, math-specific vocabulary, and multi-step problems. Cognitive primitive accounts that focus on numerical magnitude do not explain why understanding the size of fractions (or whole numbers for that matter) should confer proficiency with this added complexity. Thus, it is worth considering that more than magnitude per se is at play in the relations we found.

One possibility is that the most effective aspect of nonsymbolic ratio processing lies less in the ability to accurately map from a given nonsymbolic ratio to a specific symbolic fraction and more in the ability to focus on the relations between ratio components. That is, it may be that performance on RPS tasks effectively measures participants’ abilities to attend to the multiplicative relation between components. If this is the case, then this sort of nonsymbolic relational reasoning may fuel the development of more general relational reasoning, even if some perceptual bias results in an inaccurate mapping between fraction symbols and their nonsymbolic analogs.

Two pieces of evidence are consistent with this account. First, Matthews and Chesney (2015) did find consistent biases in cross-format comparisons of symbolic fractions and nonsymbolic ratios, whereby participants overestimated the magnitudes of nonsymbolic stimuli. This emerged despite participants exhibiting extremely well-behaved sigmoid response patterns when considering how stimulus choice depended upon inter-stimulus distance. Notably, these biases were found with circle and dot ratios which were not predictive in our study. However, recent work has found similar, but smaller biases for comparisons of line ratios with symbolic fractions (Binzak et al., 2019). To the extent that these biases exist widely, it suggests that the predictive power of ratio processing may not necessarily lie in the ability to accurately ground symbolic fraction magnitudes in their nonsymbolic analogs.

This potential individual difference in attending to the multiplicative relation between ratio components may parallel the construct of spontaneous focusing on relational information (SFOR) described in recent work (McMullen, Hannula-Sormunen, Laakkonen, & Lehtinen, 2015; McMullen, Hannula-Sormunen, & Lehtinen, 2014). SFOR can be roughly defined as the unguided focusing of attention on relational quantitative aspects of the environment and the tendency to make use of these relations in action. It may be that individual differences in SFOR play a role in our ratio comparison tasks and contribute to the establishment of links between nonsymbolic ratios and their symbolic analogs. This speculative explanation is consistent with prior findings that SFOR uniquely predicts rational number knowledge and even algebra scores both concurrently and longitudinally (McMullen, Hannula-Sormunen, & Lehtinen, 2017) and that it is distinct from spontaneous attention to quantitative information per se (McMullen et al., 2014). However, RPS acuity and SFOR have yet to be measured concurrently in the same participants. Given the potential connection, future studies should explore the relations between SFOR and RPS acuity.

Second, there appears to be important shared variance between line ratio processing ability and performance on our measure of fluid intelligence—Raven’s Standard Progressive Matrices. As detailed above, we observed a significant effect of fluid intelligence on FKA, Math Fundamentals, and Algebra scores. Moreover, adding Raven’s scores to our models rendered line ratio nonsignificant for all but Math Fundamentals. Here it is important to note both 1) that Raven’s is often characterized as a test of relational reasoning (e.g., Carpenter, Just, & Shell, 1990; Crone et al., 2009; Waltz et al., 1999) and 2) that nonsymbolic ratio processing has been described as inherently relational (e.g., Bonn & Cantlon, 2017; Lewis et al., 2015; Matthews & Ellis, 2018; Matthews & Lewis, 2017). Thus, Raven’s and line-ratio tasks may both serve as indices of a sort of relational reasoning that applies beyond consideration of magnitude and drives mathematical competence. Of course, the two are not identical, as they were correlated only at the .33 level, and line ratio remained significant for Math Fundamentals even after controlling for Raven’s performance. Further study with diverse criterion measures of general intelligence and relational reasoning is needed to explore the dynamics between domain general relational reasoning and the RPS.

Limitations and Future Directions

We would like to note two important limitations of our study. First, we failed to find the expected relations between multiple predictors and symbolic fraction comparison: line, circle, and dot ratio performance failed to predict symbolic fractions performance, and performance on dot comparisons was actually negatively correlated with symbolic fractions comparisons. We have no strong explanation for these results. It is well known that there can be considerable variability in the strategies that people attempt when comparing symbolic fractions, and these vary depending upon attributes of the fractions compared (Morales, Dartnell, & Gómez, 2020; Obersteiner, Alibali, & Marupudi, 2020; Obersteiner, Van Dooren, Van Hoof, & Verschaffel, 2013). However, we used the same stimuli as Matthews et al., who seemed to encounter no problems on this front. Because we included only 30 symbolic fraction comparison trials, our data are not suited for exploring this result further. Use of a larger number of trials with more systematic attention to fraction composition would allow for much more confidence in the results.

Second, although the current study confirms an association between perceptually based ratio processing abilities and symbolic math outcomes, our design was not adequate for testing proposed mechanisms connecting the two. One of the most interesting predictions of RPS-based theories is that ratio processing ability might be effectively leveraged to improve intuitions about symbolic fractions, thereby improving math performance. Our study cannot speak to this issue empirically, and to our knowledge neither can other existing RPS studies, beyond nods to the idea that number line estimation may in some way leverage RPS ability (e.g., Matthews & Hubbard, 2017; Sidney et al., 2017). Therefore, future studies should investigate these relations across age cohorts using cross-sectional and—ideally—longitudinal designs to answer such developmental questions. Recent evidence demonstrates that even preschoolers show individual differences in RPS acuity (Park et al., 2020), but researchers have yet to examine the relations between RPS acuity and general symbolic math ability among children. Additionally, if studies of the RPS are ever to yield information about its practical potential to enhance children’s developing mathematical competence, then they must also go beyond exploration of individual differences to examine RPS-based interventions.

Conclusion

The current study replicated Matthews et al.’s (2016) findings that nonsymbolic ratio magnitude perception is associated with symbolic math abilities. We further refined Matthews et al.’s results by demonstrating that the predictive power of nonsymbolic ratio processing was specific to a particular format—line ratios. However, this novel association between primitive perceptual ability and higher-order mathematics still needs to be further unpacked. Beyond the magnitude feature of nonsymbolic ratio processing, we speculate that a substantial portion of its explanatory power stems from its links to relational reasoning. Future research should investigate the extent to which RPS ability is related to both domain-general and spatial- or math-specific relational reasoning and the extent to which each of these is responsible for explaining performance in higher-order mathematics. Overall, the present findings stand to help frame new questions and refine existing theories positing the RPS as a domain-specific foundation for fractional knowledge and related higher mathematics (Lewis et al., 2015).

Funding

The authors have no funding to report.

Acknowledgments

The authors have no additional (i.e., non-financial) support to report.

Competing Interests

The authors have declared that no competing interests exist.

Data Availability

For this article, a dataset is freely available (Park & Matthews, 2020).

Supplementary Materials

The Supplementary Materials contain the following additional information (for access see Index of Supplementary Materials below):

  1. Analysis of the relations between ratio acuities measured by Weber fractions (ws) and math abilities.

  2. Bivariate correlation tables relating performance among various comparison tasks and math tasks, as well as inhibitory control. Separate tables were constructed, alternatively using accuracy and ws as the performance metric. Tables were also constructed with and without the use of listwise deletion.

  3. Results from the hierarchical regression analyses predicting math abilities from ws.

  4. Results from the hierarchical regression analyses predicting symbolic fraction comparison without including Ravens as a covariate.

Task materials, data collected, and the R scripts for analysis are available via the Open Science Framework (https://osf.io/c75xy).

Index of Supplementary Materials

  • Park, Y., & Matthews, P. G. (2020). Supplementary materials to "Revisiting and refining relations between nonsymbolic ratio processing and symbolic math achievement" [Research data, task materials, and code]. OSF. https://osf.io/c75xy

  • Park, Y., & Matthews, P. G. (2021). Supplementary materials to "Revisiting and refining relations between nonsymbolic ratio processing and symbolic math achievement" [Additional information]. PsychOpen GOLD. https://doi.org/10.23668/psycharchives.5159

References

  • Bailey, D. H., Hoard, M. K., Nugent, L., & Geary, D. C. (2012). Competence with fractions predicts gains in mathematics achievement. Journal of Experimental Child Psychology, 113(3), 447-455. https://doi.org/10.1016/j.jecp.2012.06.004

  • Bastos, A. P. M., & Taylor, A. H. (2020). Kea show three signatures of domain-general statistical inference. Nature Communications, 11(1), 1-8. https://doi.org/10.1038/s41467-020-14695-1

  • Bates, D., Mächler, M., Bolker, B. M., & Walker, S. C. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1-48. https://doi.org/10.18637/jss.v067.i01

  • Bhatia, P., Delem, M., Leone, J., Boissin, E., Cheylus, A., Gardes, M., & Prado, J. (2020). The ratio processing system and its role in fraction understanding: Evidence from a match-to-sample task in children and adults with and without dyscalculia. Quarterly Journal of Experimental Psychology, 73(12), 2158-2176. https://doi.org/10.1177/1747021820940631

  • Binzak, J. V., Matthews, P. G., & Hubbard, E. M. (2019). On common ground: Evidence for an association between fractions and the ratios they represent [Preprint, data, and analyses]. https://doi.org/10.17605/OSF.IO/R3F7G

  • Bonato, M., Fabbri, S., Umiltà, C., & Zorzi, M. (2007). The mental representation of numerical fractions: Real or integer? Journal of Experimental Psychology: Human Perception and Performance, 33(6), 1410-1419. https://doi.org/10.1037/0096-1523.33.6.1410

  • Bonn, C. D., & Cantlon, J. F. (2017). Spontaneous, modality-general abstraction of a ratio scale. Cognition, 169, 36-45. https://doi.org/10.1016/j.cognition.2017.07.012

  • Booth, J. L., & Newton, K. J. (2012). Fractions: Could they really be the gatekeeper’s doorman? Contemporary Educational Psychology, 37(4), 247-253. https://doi.org/10.1016/j.cedpsych.2012.07.001

  • Carpenter, P. A., Just, M. A., & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven progressive matrices test. Psychological Review, 97(3), 404-431. https://doi.org/10.1037/0033-295X.97.3.404

  • Carpenter, T. P., Corbitt, M. K., & National Council of Teachers of Mathematics. (1981). Results from the second mathematics assessment of the National Assessment of Educational Progress. National Council of Teachers of Mathematics. Retrieved from https://eric.ed.gov/?id=ED204107

  • Crone, E. A., Wendelken, C., Van Leijenhorst, L., Honomichl, R. D., Christoff, K., & Bunge, S. A. (2009). Neurocognitive development of relational reasoning. Developmental Science, 12(1), 55-66. https://doi.org/10.1111/j.1467-7687.2008.00743.x

  • Dehaene, S. (2011). The number sense: How the mind creates mathematics. New York, NY, USA: Oxford University Press.

  • Drucker, C. B., Rossa, M. A., & Brannon, E. M. (2016). Comparison of discrete ratios by rhesus macaques (Macaca mulatta). Animal Cognition, 19(1), 75-89. https://doi.org/10.1007/s10071-015-0914-9

  • Feigenson, L., Dehaene, S., & Spelke, E. (2004). Core systems of number. Trends in Cognitive Sciences, 8(7), 307-314. https://doi.org/10.1016/j.tics.2004.05.002

  • Gallistel, C. R., & Gelman, R. (2000). Non-verbal numerical cognition: From reals to integers. Trends in Cognitive Sciences, 4(2), 59-65. https://doi.org/10.1016/S1364-6613(99

  • Gebuis, T., & Reynvoet, B. (2012). The interplay between nonsymbolic number and its continuous visual properties. Journal of Experimental Psychology: General, 141(4), 642-648. https://doi.org/10.1037/a0026218

  • Halberda, J., Mazzocco, M. M. M., & Feigenson, L. (2008). Individual differences in non-verbal number acuity correlate with maths achievement. Nature, 455(7213), 665-668. https://doi.org/10.1038/nature07246

  • Hallett, D., Nunes, T., Bryant, P., & Thorpe, C. M. (2012). Individual differences in conceptual and procedural fraction understanding: The role of abilities and school experience. Journal of Experimental Child Psychology, 113(4), 469-486. https://doi.org/10.1016/j.jecp.2012.07.009

  • Hansen, N., Jordan, N. C., Fernandez, E., Siegler, R. S., Fuchs, L., Gersten, R., & Micklos, D. (2015). General and math-specific predictors of sixth-graders’ knowledge of fractions. Cognitive Development, 35, 34-49. https://doi.org/10.1016/j.cogdev.2015.02.001

  • Ischebeck, A., Schocke, M., & Delazer, M. (2009). The processing and representation of fractions within the brain: An fMRI investigation. NeuroImage, 47(1), 403-413. https://doi.org/10.1016/j.neuroimage.2009.03.041

  • Jacob, S. N., & Nieder, A. (2009a). Notation-independent representation of fractions in the human parietal cortex. Journal of Neuroscience, 29(14), 4652-4657. https://doi.org/10.1523/JNEUROSCI.0651-09.2009

  • Jacob, S. N., & Nieder, A. (2009b). Tuning to non-symbolic proportions in the human frontoparietal cortex. European Journal of Neuroscience, 30(7), 1432-1442. https://doi.org/10.1111/j.1460-9568.2009.06932.x

  • Jacob, S. N., Vallentin, D., & Nieder, A. (2012). Relating magnitudes: The brain’s code for proportions. Trends in Cognitive Sciences, 16(3), 157-166. https://doi.org/10.1016/j.tics.2012.02.002

  • Kalra, P. B., Binzak, J. V., Matthews, P. G., & Hubbard, E. M. (2020). Symbolic fractions elicit an analog magnitude representation in school-age children. Journal of Experimental Child Psychology, 195, Article 104844. https://doi.org/10.1016/j.jecp.2020.104844

  • Krueger, L. E. (1984). Perceived numerosity: A comparison of magnitude production, magnitude estimation, and discrimination judgments. Perception & Psychophysics, 35(6), 536-542. https://doi.org/10.3758/BF03205949

  • Leibovich, T., & Henik, A. (2013). Magnitude processing in non-symbolic stimuli. Frontiers in Psychology, 4, Article 375. https://doi.org/10.3389/fpsyg.2013.00375

  • Lesh, R., Post, T., & Behr, M. (1987). Representations and translations among representations in mathematics learning and problem solving. In C. Janvier (Ed.), Problems of representation in the teaching and learning of mathematics (pp. 33-40). Hillsdale, NJ, USA: Lawrence Erlbaum.

  • Lewis, M. R., Matthews, P. G., & Hubbard, E. M. (2015). Chapter 6—Neurocognitive architectures and the nonsymbolic foundations of fractions understanding. In D. B. Berch, D. C. Geary, & K. M. Koepke (Eds.), Development of mathematical cognition (pp. 403-413). https://doi.org/10.1016/B978-0-12-801871-2.00006-X

  • Lortie-Forgues, H., Tian, J., & Siegler, R. S. (2015). Why is learning fraction and decimal arithmetic so difficult? Developmental Review, 38, 201-221. https://doi.org/10.1016/j.dr.2015.07.008

  • Lourenco, S. F., Bonny, J. W., Fernandez, E. P., & Rao, S. (2012). Nonsymbolic number and cumulative area representations contribute shared and unique variance to symbolic math competence. Proceedings of the National Academy of Sciences of the United States of America, 109(46), 18737-18742. https://doi.org/10.1073/pnas.1207212109

  • Matthews, P. G., & Chesney, D. L. (2015). Fractions as percepts? Exploring cross-format distance effects for fractional magnitudes. Cognitive Psychology, 78, 28-56. https://doi.org/10.1016/j.cogpsych.2015.01.006

  • Matthews, P. G., & Ellis, A. B. (2018). Natural alternatives to natural number: The case of ratio. Journal of Numerical Cognition, 4(1), 19-58. https://doi.org/10.5964/jnc.v4i1.97

  • Matthews, P. G., & Hubbard, E. M. (2017). Making space for spatial proportions. Journal of Learning Disabilities, 50(6), 644-647. https://doi.org/10.1177/0022219416679133

  • Matthews, P. G., & Lewis, M. R. (2017). Fractions we cannot ignore: The nonsymbolic ratio congruity effect. Cognitive Science, 41(6), 1656-1674. https://doi.org/10.1111/cogs.12419

  • Matthews, P. G., Lewis, M. R., & Hubbard, E. M. (2016). Individual differences in nonsymbolic ratio processing predict symbolic math performance. Psychological Science, 27(2), 191-202. https://doi.org/10.1177/0956797615617799

  • McCrink, K., & Wynn, K. (2007). Ratio abstraction by 6-month-old infants. Psychological Science, 18(8), 740-745. https://doi.org/10.1111/j.1467-9280.2007.01969.x

  • McMullen, J., Hannula-Sormunen, M. M., Laakkonen, E., & Lehtinen, E. (2015). Spontaneous focusing on quantitative relations as a predictor of the development of rational number conceptual knowledge. Journal of Educational Psychology, 108(6), 857-868. https://doi.org/10.1037/edu0000094

  • McMullen, J., Hannula-Sormunen, M. M., & Lehtinen, E. (2014). Spontaneous focusing on quantitative relations in the development of children’s fraction knowledge. Cognition and Instruction, 32(2), 198-218. https://doi.org/10.1080/07370008.2014.887085

  • McMullen, J., Hannula-Sormunen, M. M., & Lehtinen, E. (2017). Spontaneous focusing on quantitative relations as a predictor of rational number and algebra knowledge. Contemporary Educational Psychology, 51, 356-365. https://doi.org/10.1016/j.cedpsych.2017.09.007

  • Meert, G., Grégoire, J., Seron, X., & Noël, M. P. (2013). The processing of symbolic and nonsymbolic ratios in school-age children. PLoS One, 8(11), 1-11. https://doi.org/10.1371/journal.pone.0082002

  • Meng, R., Matthews, P. G., & Toomarian, E. Y. (2019). The relational SNARC: Spatial representation of nonsymbolic ratios. Cognitive Science, 43(8), Article e12778. https://doi.org/10.1111/cogs.12778

  • Mock, J., Huber, S., Bloechle, J., Dietrich, J. F., Bahnmueller, J., Rennig, J., . . . Moeller, K. (2018). Magnitude processing of symbolic and non-symbolic proportions: An fMRI study. Behavioral and Brain Functions, 14(1), 1-19. https://doi.org/10.1186/s12993-018-0141-z

  • Möhring, W., Newcombe, N. S., Levine, S. C., & Frick, A. (2015). Spatial proportional reasoning is associated with formal knowledge about fractions. Journal of Cognition and Development, 17(1), 67-84. https://doi.org/10.1080/15248372.2014.996289

  • Morales, N., Dartnell, P., & Gómez, D. M. (2020). A study on congruency effects and numerical distance in fraction comparison by expert undergraduate students. Frontiers in Psychology, 11, Article 1190. https://doi.org/10.3389/fpsyg.2020.01190

  • Moyer, R. S., & Landauer, T. K. (1967). Time required for judgements of numerical inequality. Nature, 215, 1519-1520. https://doi.org/10.1038/2151519a

  • Newcombe, N. S., Levine, S. C., & Mix, K. S. (2015). Thinking about quantity: The intertwined development of spatial and numerical cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 6(6), 491-505. https://doi.org/10.1002/wcs.1369

  • Ni, Y., & Zhou, Y.-D. (2005). Teaching and learning fraction and rational numbers: The origins and implications of whole number bias. Educational Psychologist, 40(1), 27-52. https://doi.org/10.1207/s15326985ep4001_3

  • Obersteiner, A., Alibali, M. W., & Marupudi, V. (2020). Complex fraction comparisons and the natural number bias: The role of benchmarks. Learning and Instruction, 67, Article 101307. https://doi.org/10.1016/j.learninstruc.2020.101307

  • Obersteiner, A., Van Dooren, W., Van Hoof, J., & Verschaffel, L. (2013). The natural number bias and magnitude representation in fraction comparison by expert mathematicians. Learning and Instruction, 28, 64-72. https://doi.org/10.1016/j.learninstruc.2013.05.003

  • Odic, D. (2017). Children’s intuitive sense of number develops independently of their perception of area, density, length, and time. Developmental Science, 21(2), Article e12533. https://doi.org/10.1111/desc.12533

  • Odic, D., Libertus, M. E., Feigenson, L., & Halberda, J. (2013). Developmental change in the acuity of approximate number and area representations. Developmental Psychology, 49(6), 1103-1112. https://doi.org/10.1037/a0029472

  • Odic, D., Lisboa, J. V., Eisinger, R., Olivera, M. G., Maiche, A., & Halberda, J. (2016). Approximate number and approximate time discrimination each correlate with school math abilities in young children. Acta Psychologica, 163, 17-26. https://doi.org/10.1016/j.actpsy.2015.10.010

  • Park, Y., & Cho, S. (2017). Developmental changes in the relationship between magnitude acuities and mathematical achievement in elementary school children. Educational Psychology, 37(7), 873-887. https://doi.org/10.1080/01443410.2015.1127332

  • Park, Y., Viegut, A. A., & Matthews, P. G. (2020). More than the sum of its parts: Exploring the development of ratio magnitude vs. simple magnitude perception. Developmental Science, 24(3), Article e13043. https://doi.org/10.1111/desc.13043

  • Pica, P., Lemer, C., Izard, V., & Dehaene, S. (2004). Exact and approximate arithmetic in an Amazonian indigene group. Science, 306(5695), 499-503. https://doi.org/10.1126/science.1102085

  • Preusse, F., van der Meer, E., Deshpande, G., Krueger, F., & Wartenburger, I. (2011). Fluid intelligence allows flexible recruitment of the parieto-frontal network in analogical reasoning. Frontiers in Human Neuroscience, 5, Article 22. https://doi.org/10.3389/fnhum.2011.00022

  • Primi, R., Ferrão, M. E., & Almeida, L. S. (2010). Fluid intelligence as a predictor of learning: A longitudinal multilevel approach applied to math. Learning and Individual Differences, 20(5), 446-451. https://doi.org/10.1016/j.lindif.2010.05.001

  • Raven, J. (2000). The Raven’s progressive matrices: Change and stability over culture and time. Cognitive Psychology, 41(1), 1-48. https://doi.org/10.1006/cogp.1999.0735

  • Raven, J. C., Raven, J. E., & Court, J. H. (1998). Manual for Raven’s progressive matrices and vocabulary scales. Oxford, United Kingdom: Oxford Psychologists Press.

  • Schneider, W., Eschman, A., & Zuccolotto, A. (2002). E-Prime reference guide. Pittsburgh, PA, USA: Psychology Software Tools.

  • Sidney, P. G., Thompson, C. A., Matthews, P. G., & Hubbard, E. M. (2017). From continuous magnitudes to symbolic numbers: The centrality of ratio. The Behavioral and Brain Sciences, 40, Article e190. https://doi.org/10.1017/S0140525X16002284

  • Siegler, R. S., Duncan, G. J., Davis-Kean, P. E., Duckworth, K., Claessens, A., Engel, M., . . . Chen, M. (2012). Early predictors of high school mathematics achievement. Psychological Science, 23(7), 691-697. https://doi.org/10.1177/0956797612440101

  • Siegler, R. S., Fazio, L. K., Bailey, D. H., & Zhou, X. (2013). Fractions: The new frontier for theories of numerical development. Trends in Cognitive Sciences, 17(1), 13-19. https://doi.org/10.1016/j.tics.2012.11.004

  • Starr, A., & Brannon, E. M. (2015). Developmental continuity in the link between sensitivity to numerosity and physical size. Journal of Numerical Cognition, 1(1), 7-20. https://doi.org/10.5964/jnc.v1i1.2

  • Stevens, S. S. (1957). On the psychophysical law. Psychological Review, 64(3), 153-181. https://doi.org/10.1037/h0046162

  • Stigler, J., Givvin, K., & Thompson, B. (2010). What community college developmental mathematics students understand about mathematics. MathAMATYC Educator, 1(3), 4-16.

  • Teghtsoonian, M. (1965). The judgment of size. The American Journal of Psychology, 78(3), 392-402. https://doi.org/10.2307/1420573

  • Vallentin, D., & Nieder, A. (2008). Behavioral and prefrontal representation of spatial proportions in the monkey. Current Biology, 18(18), 1420-1425. https://doi.org/10.1016/j.cub.2008.08.042

  • Vallentin, D., & Nieder, A. (2010). Representations of visual proportions in the primate posterior parietal and prefrontal cortices. European Journal of Neuroscience, 32(8), 1380-1387. https://doi.org/10.1111/j.1460-9568.2010.07427.x

  • Vamvakoussi, X., & Vosniadou, S. (2004). Understanding the structure of the set of rational numbers: A conceptual change approach. Learning and Instruction, 14(5), 453-467. https://doi.org/10.1016/j.learninstruc.2004.06.013

  • Vamvakoussi, X., & Vosniadou, S. (2010). How many decimals are there between two fractions? Aspects of secondary school students’ understanding of rational numbers and their notation. Cognition and Instruction, 28(2), 181-209. https://doi.org/10.1080/07370001003676603

  • Waltz, J. A., Knowlton, B. J., Holyoak, K. J., Boone, K. B., Mishkin, F. S., De Menezes Santos, M., . . . Miller, B. L. (1999). A system for relational reasoning in human prefrontal cortex. Psychological Science, 10(2), 119-125. https://doi.org/10.1111/1467-9280.00118

  • Wynn, K. (1995). Infants possess a system of numerical knowledge. Current Directions in Psychological Science, 4(6), 172-177. https://doi.org/10.1111/1467-8721.ep10772615

Appendix: Sample Items of Fraction Knowledge Assessment

Figure A.1

Items for Testing Conceptual Knowledge of Fractions

Figure A.2

Items for Testing Fractions Arithmetic Procedures