Prelude
When we proposed this Special Issue to John Towse, then Editor-in-Chief of the Journal of Numerical Cognition (JNC), back in May 2019, and first spread the news via Twitter and during the second Mathematical Cognition and Learning Society (MCLS) meeting in Ottawa, the world was a different place. It was long before any of us had heard of COVID-19. The MCLS was enjoying a fantastic in-person meeting, and we were all looking forward to MCLS 2020.
As guest editors, we had multiple ideas on how to make the most of the Special Issue in terms of adopting Open Science practices, ensuring methodological rigor, and using Open Science infrastructure to streamline the process. We were delighted to see so much enthusiasm, encouragement, and support from colleagues when they heard about the initiative, and to receive multiple pre-submission abstracts. Even though the world has changed a lot since then, the time has been challenging for everyone involved, and our own roads towards this moment have sometimes been bumpy, we are here, extremely happy and proud to present the Special Issue on Direct and Conceptual Replication in Numerical Cognition. It comprises 12 replication papers and three commentaries by authors of the replicated papers. The contributions cover most areas of numerical and mathematical cognition, spanning from hard-core basic cognitive effects through developmental studies to interventions (cf. Figure 1 and Table 1). We are even more delighted that all our ideas (e.g., data and material sharing, Open Review Reports) on how to get the most out of the Special Issue finally found their way into it. At the same time, we are glad to see that JNC has evolved since we proposed the Special Issue. For instance, by adopting Open Science badges, the journal now gives even more visible credit for following these practices than it did before. It seems that Open Science, including replication attempts, is becoming more established and valued in our field.
Figure 1
Graphical Summary of Contributions to the Special Issue
Note. ES = Effect Size.
Table 1
Detailed Summary of the Contributions to the Special Issue
| Study | Topic | Participants | Study being replicated | Type of replication | Effect(s) replicated | Effect size compared to original study |
|---|---|---|---|---|---|---|
| Artemenko et al. (2021) | Mathematics anxiety | Adults | Hembree (1990); side observations of multiple studies | Conceptual # | Partially | NA |
| Bahnmueller et al. (2021) | Multi-digit number processing | Adults | Bahnmueller et al. (2015) | Conceptual | Yes | Comparable; in some cases smaller, in some larger |
| Di Lonardo Burr et al. (2021) | Math story problems | Adults | Mattarella-Micke & Beilock (2010) | Conceptual | No | No effects of interest observed |
| Campbell et al. (2021) | Spatial biases in mental arithmetic | Adults | Mathieu et al. (2016) ‡ | Conceptual | No | No effects of interest observed |
| Chesney et al. (2021) | Approximate arithmetic training | Adults | Park & Brannon (2013, 2014) | Conceptual | Partially | Smaller |
| Ellis, Ahmed, et al. (2021) (EF and math) | Math skills and executive function | Children | Schmitt et al. (2017) ‡ | Conceptual | Partially | Comparable, in some cases larger |
| Ellis, Susperreguy, et al. (2021) (7 datasets) | Number line estimation and math skills | Children | Schneider et al. (2018) ‡ | Conceptual # | Yes | Comparable, minimally smaller |
| Grimes et al. (2021) | Training effects | Children | Dias (2016) | Conceptual | Yes | Smaller |
| Park & Matthews (2021) | Nonsymbolic ratio processing and math skills | Adults | Matthews et al. (2016) | Conceptual | Yes | Comparable; smaller when domain-general factors accounted for |
| Reynvoet et al. (2021) | Relations between number sense and math | Adults | Multiple previous studies | Conceptual # | Yes | NA |
| Rousselle & Vossius (2021) | Development of number concepts | Children | Wynn (1992) | Conceptual | Partially | NA |
| Steiner et al. (2021) | Language effects on multi-digit number processing | Children | Zuber et al. (2009) | Direct | Yes | NA |

Note. # = studies aimed at replicating effect sizes reported in meta-analyses or multiple prior studies. ‡ = a commentary by these authors is available.
Replicability in Numerical Cognition
The replication crisis is an ongoing topic in the social sciences, including psychology (Ioannidis, 2005). The inability to replicate studies has serious consequences: theories can be grounded in unreproducible experimental work, not to mention the personal struggles of early career researchers (ECRs), whose careers are threatened by problems with replicating earlier findings. There are several reasons for failed replications. Selective reporting of significant findings, optional stopping of data collection whenever a result turns significant, rounding down of p-values, non-publication of non-significant results (the file drawer effect), storytelling based on exploratory analyses, manipulation of outliers (for a review see Schimmack, 2020), and the high prevalence of underpowered studies (Button et al., 2013; Maxwell, 2004) are all claimed to have filled the literature with non-replicable and potentially false-positive findings. It is also possible that the base rate fallacy is an important contributor to the replication crisis: due to the incompleteness or weakness of our theories, most hypotheses being tested are false, so a positive test result is more likely to represent a false positive than a true effect (Bird, 2020).
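To see why, consider the positive predictive value (PPV) of a significant result, as popularized by Ioannidis (2005); the numbers below are purely illustrative assumptions on our part, not estimates from Bird (2020). If a proportion $\pi$ of tested hypotheses is true, power is $1-\beta$, and the significance level is $\alpha$, then

$$\mathrm{PPV} = \frac{(1-\beta)\,\pi}{(1-\beta)\,\pi + \alpha\,(1-\pi)}.$$

For example, with $\pi = .05$, $1-\beta = .50$, and $\alpha = .05$, we get $\mathrm{PPV} = .025 / (.025 + .0475) \approx .34$; under these assumptions, roughly two out of three significant results would be false positives.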
Whatever its reasons, the replicability crisis seems to be a real issue. The first open empirical study of reproducibility in psychology, the Reproducibility Project (Open Science Collaboration, 2015), found that out of 100 empirical studies, only 36 were successfully replicated. Another striking finding of this initiative was that even in the case of successful replications, the effect sizes observed in the replications were considerably smaller than those reported in the primary studies. Therefore, in agreement with several researchers, we believe that replication is essential, as it reduces bias in science and increases public trust (for a discussion see Zwaan et al., 2018).
Replications should have their place in journals, and their authors deserve due credit for their efforts. Surprisingly, over the last 100 years only 1.6% of all psychology publications have been replication studies (Makel et al., 2012). Fortunately, recent special issues on replication in journals such as Perspectives on Psychological Science; Cognitive, Affective, & Behavioral Neuroscience; and Social Psychology have offered support for replication in different domains of psychology. We are happy to join this movement with JNC.
We are not aware of any comparably comprehensive initiative in the field of numerical and mathematical cognition. Even though numerous papers have aimed to replicate earlier findings (e.g., Colling et al., 2020), some of which, gladly, have been published in JNC (e.g., Ganley et al., 2021), we hope that this Special Issue will further foster replication efforts in our field.
In this Special Issue, we concentrate on direct and conceptual replications (Simons, 2014). While direct replications provide evidence for the robustness of an observed effect itself, conceptual replications play a crucial role in theory development, as they verify that a certain observation does not depend solely on a specific experimental setup, such as the characteristics of the sample or the design (for a more extensive replication taxonomy see Hüffmeier et al., 2016). As a field, we aspire to inform learning and educational practice. It is therefore important to be confident that our findings hold under a variety of setups, so that they can feasibly be brought into the classroom. For these reasons, conceptual replications can also be seen as important robustness tests, which should be conducted before scaling up takes place.
Overview of Contributions: Replicability in Numerical Cognition
An overview of the papers is presented in Figure 1 and, in more detail, in Table 1. Fortunately, studies testing adults and studies testing children are both well represented. Another striking observation is that the vast majority of contributions were conceptual replications, which go beyond providing a robustness check of the effects under scrutiny; on top of that, the authors often also aimed at extending the findings. In three cases, marked with “#” in Table 1, the replications focused on reproducing effect sizes previously reported across several relevant studies rather than in a single study. The majority of studies at least partly replicated the effects of interest from the original studies, and we also observed that the effect sizes in the replications were either similar to or smaller than those in the original studies.
Having seen the overview, one may ask: where do we stand as a field when it comes to the replicability crisis? The answer is far from trivial. The picture drawn by the Special Issue looks quite optimistic; nevertheless, we would be careful about drawing firm conclusions. Our sample of replication studies is fairly small (N = 12), and the studies to be replicated were not selected randomly. Importantly, direct replications were heavily underrepresented in the Special Issue, and we would be happy to see more of them in the future. In general, the Special Issue neither opens the replicability discussion in the field of numerical and mathematical cognition, as there were notable efforts even before the replication crisis started (see e.g., Wood et al., 2006), nor closes it. However, we hope that it brings the discussion further into the mainstream and reminds colleagues that replication efforts can and do find their place in the published literature, and that they are as important as novel studies.
Reflections on Open Science Practices
Open Data and Materials
When submitting manuscripts, the authors were required to explicitly address the issue of data and materials sharing. In cases where data sharing was not possible, we encouraged sharing synthetic data (Quintana, 2020) and careful visualization. We are glad to see that the authors followed these recommendations very carefully, and data and materials are available for all papers. We hope that the ideas and recommendations we prepared for the Special Issue (available at https://osf.io/xjbrq/), along with the references therein, helped with organizing such materials; we mention them here in the hope that they can be useful for future authors as well. With all reservations due to the observational nature of the data, bibliometric studies show a considerable citation advantage for papers whose data were shared (Colavizza et al., 2020). We hope that our authors benefit from the work they put into sharing their data, in terms of both citations and potential future collaborations stemming from this effort.
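For readers unfamiliar with synthetic data, the following minimal Python sketch (our own simplified illustration, not the procedure described by Quintana, 2020; all variable names and values are hypothetical) shows one simple way to generate a shareable dataset that preserves the means and covariance structure of the original data:

```python
# Minimal, hypothetical sketch: generate a synthetic dataset that preserves
# the means and covariance structure of an original (non-shareable) dataset.
# Variable names and values are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2021)

# Stand-in for the real data; in practice this would be loaded from file.
original = pd.DataFrame({
    "math_score": rng.normal(100, 15, size=200),
    "math_anxiety": rng.normal(50, 10, size=200),
})

# Draw a synthetic sample from a multivariate normal distribution with the
# same means and covariance matrix as the original data. This preserves
# linear relations (e.g., correlations) but no individual-level information.
synthetic = pd.DataFrame(
    rng.multivariate_normal(original.mean(), original.cov(), size=len(original)),
    columns=original.columns,
)

synthetic.to_csv("synthetic_data.csv", index=False)  # safe to share publicly
```

The multivariate normal approximation is, of course, a simplification; dedicated synthetic-data tools additionally handle categorical variables and non-normal distributions.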
Sharing Preprints and Author Accepted Manuscripts
We decided to encourage the authors to share preprints of their papers online, which some of them did. Bibliometric studies show that papers whose preprints were shared gain some citation advantage (Fu & Hughey, 2019). As JNC publishes papers on a volume basis and does not offer advance online publication, we uploaded the Author Accepted Manuscripts to an openly available OSF repository to foster the dissemination of papers as soon as they had been accepted. We also encouraged the authors to share the link to the repository, and we ourselves announced all acceptances on social media. These Author Accepted Manuscripts have been downloaded over 350 times, long before the papers became available directly in JNC. This is a massive underestimate of the true interest in these papers, since, as mentioned above, several of them were posted on preprint servers by the authors themselves and gained hundreds of downloads there. Notably, JNC has since initiated sharing of Author Accepted Manuscripts via the dedicated PsychArchives repository, which clearly states that a paper has been accepted for publication in JNC; several authors took this opportunity and uploaded their work there as well. Now that such a service exists, our approach may look somewhat messy and redundant; however, the PsychArchives option was introduced only once the Special Issue was already under way. We hereby encourage future JNC contributors to take advantage of sharing their preprints as well as their Author Accepted Manuscripts via PsychArchives.
Open Review Reports
One of our ideas was to introduce Open Peer Review Reports, that is, to share the history of exchanges between the reviewers, the authors, and the editors. This practice has been adopted (at least as an optional choice) by several journals, such as Royal Society Open Science, PeerJ, Annals of the New York Academy of Sciences, and eLife. Even though, as with everything, such a practice has both advantages and downsides (for a thorough discussion see Ford, 2013; Ross-Hellauer, 2017; Ross-Hellauer et al., 2017; Schmidt et al., 2018), we believed that its advantages outweigh the drawbacks. Firstly, it ensures transparency, which we consider a value in its own right: interested readers can see which points were questioned and how the authors addressed the critiques. Secondly, we believe that it can be particularly beneficial to trainees and ECRs in their roles as readers, authors, and reviewers. As readers, they can gain insights into how the ideas developed and how the reviewers’ input shaped the final research product; it could even be good practice to discuss such reports in research seminars. As authors, trainees and ECRs may feel insecure about whether they are able to write a decent paper, especially when most of their experience with the literature comes from reading published papers, without access to the tedious process of creating such a paper, including peer review. Seeing the number of changes in the written work of others, including senior researchers in the field, might give them more realistic insights into the process. Reading reviews of published papers can also help trainees and ECRs adjust their expectations of how to read reviews of their own papers: the ability to interpret reviews and respond to them seems to be part of the hidden curriculum of academia. Most of us learned this from our mentors, but not all of us had such opportunities, and researchers from underrepresented countries and communities might struggle with these issues even more, so we believe that resources such as Open Review Reports are a good step towards democratizing science. Finally, Open Review Reports can also be helpful for ECRs willing to improve their skills as reviewers: they provide insights into what reviews might look like, and what reviewers should and should not do while reviewing papers. Again, such skills are often not directly trained, and most of us learned them from our mentors and more senior colleagues. Hopefully, this material will extend the available pool of examples and might be particularly beneficial to colleagues from underrepresented countries and communities.
In this Special Issue, Open Review Reports were optional and organized in an opt-in mode: everyone involved in a given submission (i.e., the authors and all reviewers) had to agree, and the reviewers remained anonymous. We are grateful to all the authors and reviewers who agreed for the peer-review history to be shared, and we equally acknowledge the work of those who did not wish to opt in, for whatever reason. Open Review Reports are available for a total of nine papers and are linked to each corresponding paper. We are also grateful to the Editorial Board of JNC, which agreed to implement this initiative even though it is not part of the Journal’s policy. We hope that this successful trial helps the Editorial Board make an informed decision on whether and how to implement Open Review Reports (Ford, 2013; Ross-Hellauer, 2017; Ross-Hellauer et al., 2017; Schmidt et al., 2018).
A Big Thank You
Last but not least, this Special Issue is not our achievement alone. It would not be here without the great support of three Editors-in-Chief of JNC over the past two years: John Towse, who trusted us as two ECRs and provided extremely valuable feedback and support; Wim Fias, the interim Editor-in-Chief, who provided us with continuing support, shared his experience and expertise, and was very open to our ideas and initiatives; and André Knops, the current Editor-in-Chief, who kept supporting us as most of the work wrapped up. We thank the publisher for their openness and for resolving the numerous issues that arose along the way. We are grateful to all of the reviewers, who volunteered their time and expertise to evaluate the submissions and provide constructive feedback to the authors. We are also very grateful to all of the authors who submitted their work, and to the authors of the original papers who agreed to contribute commentaries. Science is a collective effort in pursuit of truth; we hope that this Special Issue is a record of such a pursuit.