Abstract
Students with disabilities experience differential levels of achievement in mathematics when compared with their nondisabled peers. Identifying and implementing evidence-based practices (EBPs) is essential to increase their mathematics achievement. However, an argument is re-emerging that calls into question the effectiveness of well-known EBPs, such as explicit/direct instruction. This argument is based on opinion and conjecture, rather than student outcome measures. In this commentary, we use research on explicit and direct instruction with mathematics to (a) highlight the confusion over theoretical implications, (b) call for researchers to reduce personal bias within research, and (c) emphasize the need for improving outcomes of students with disabilities that affect mathematics learning.
A rising number of students in the United States are facing enduring and consequential difficulties with mathematics. An average of one in four fourth graders performed below basic in math on the 2022 National Assessment of Educational Progress (NAEP; U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, 2022), compared with the all-time low of one in five in 2019 (U.S. Department of Education, 2019). This decline in mathematical proficiency is the largest since the initial assessments in 1990 for both fourth- and eighth-grade students. Moreover, disparities in math achievement persist based on socioeconomic status, race, and disability. These differences in outcomes have been significant since 1996 (U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, 2022) and were further exacerbated by the COVID-19 pandemic, which had a greater impact on math outcomes than on reading, particularly for Black students, Latino/a students, and students with disabilities, as forecasted by the Northwest Evaluation Association (NWEA) in Fall 2020 (Kuhfeld et al., 2020).
The alarming findings from the 2022 NAEP should increase the urgency for all stakeholders to examine and improve math instruction. It is essential that all students gain proficiency in mathematics, achieving key standards that, in turn, improve postschool outcomes in employment, education, and independent living.
In the United States, both state and federal laws mandate the use of evidence-based practices (EBPs). EBPs are activities, approaches, interventions, and strategies, derived from research measuring school, teacher, and student performance outcomes, that maximize students' potential in learning (Cook et al., 2015). Highlighting the need for improving student performance, the Individuals with Disabilities Education Improvement Act of 2004 (IDEA, 2004) mandates the use of EBPs to improve the outcomes of students with disabilities. Consistent with prior legislation through the No Child Left Behind Act (NCLB, 2002), the Every Student Succeeds Act (ESSA, 2015) calls on local education agencies to use EBPs that best improve the academic outcomes of all students. Given the intent of ESSA to encourage evidence-based decision-making as standard practice, there is a critical need for researchers to engage in scholarship that will support student success in all skills for all students, including mathematics for students with disabilities.
Schools are responsible for providing students with disabilities a free and appropriate public education in the least restrictive environment (LRE) at no cost to the child’s family. Students receiving special education services have goals on their Individualized Education Programs (IEPs) that are meant to increase academic achievement outcomes and/or functional performance and require specially designed instruction tailored to individual needs. Student progress toward these IEP goals is typically assessed using quantitative data collected through progress monitoring. In line with ESSA and IDEA, EBPs are supported by a large quantity of methodologically rigorous research (Spencer et al., 2012) and used to improve identified areas of student needs through specially designed instruction. Student progress toward IEP goals is of critical importance, as it is used to determine the degree of educational benefit received by students. Therefore, the impact of specially designed instruction using EBPs on IEP goal progress must be frequently assessed and communicated to parents.
Purpose
The purpose of this article is to respond to a recent systematic review of the literature that characterized explicit instruction, an EBP for teaching mathematics to students with disabilities, as a "dehumanizing," "medicalized," and "ableist" approach (Tan et al., 2022, p. 896). We begin by highlighting essential theoretical and methodological limitations as well as potential sources of bias that prevent reliable conclusions. Next, we summarize evidence from meta-analyses supporting the effectiveness of explicit instruction to improve mathematics outcomes for students with disabilities. Finally, we conclude with recommendations for education researchers.
Examining “A Critical Review of Educator and Disability Research in Mathematics Education”
Recently, an argument that calls into question the effectiveness of explicit instruction has re-emerged. Specifically, "A Critical Review of Educator and Disability Research in Mathematics Education: A Decade of Dehumanizing Waves and Humanizing Wakes" (Tan et al., 2022) exemplifies this argument.
Tan and colleagues (2022) are fair in their critique that research in mathematics instruction for students with disabilities is centered primarily on initial stages of learning (i.e., acquisition; Haring & Eaton, 1978). For example, a recent systematic review of experimental studies published between 1975 and 2018 that taught mathematics to students with intellectual disability found few that even measured maintenance (Park et al., 2020). We agree that research on academic instruction for students with disabilities should include measures of maintenance.
We also agree with Tan and colleagues that all students with IEPs deserve specially designed instruction to build the habits of mind needed to independently use mathematical practices in their everyday lives, such as meta-cognitive and other self-regulating capacities. BIPOC students with disabilities experience worse outcomes in mathematics than their white peers (Stevens & Schulte, 2017; Wei et al., 2013). To that end, we concur with Tan et al. on the need for researchers and teacher educators to attend to issues of intersectionality in evaluating and recommending instructional practices.
Disparities in educational outcomes have been attributed to unequal distributions of access and opportunity (Burns et al., 2019). We applaud Tan et al. (2022) for calling out segregation of students with disabilities during mathematics instruction. It is our belief that all students deserve equal opportunities to access instruction on the mathematics standards of their grade level. For students with disabilities, this instruction must be delivered using EBPs identified from methodologically rigorous causal research. A widely acknowledged issue in the pool of research evidence from which EBPs are identified is racial and ethnic representation. As Steinbrenner et al. (2022) recently argued, the lack of reporting of race and ethnicity and the overrepresentation of white participants limit our understanding of how well identified EBPs generalize across racial and ethnic groups.
The field needs an intellectual space where these complex issues can be addressed to determine where alliances may be formed in mathematics between general and special education to improve mathematical learning experiences and outcomes for all students. However, instead of providing support and ideas, Tan et al. (2022) used their analysis to call out a specific EBP, explicit instruction, as “dehumanizing,” “ableist,” and “racist” rather than identifying practices that may enable students with disabilities to gain the mathematics proficiency needed to understand the world around them—a necessary responsibility for which schools are held accountable (Portz & Beauchamp, 2022).
To improve the validity of research surrounding students with disabilities, we call for the field to use and advocate for sound theoretical and methodological decisions across education disciplines. Below we highlight theoretical limitations and methodological errors in the methods, analyses, and conclusions of Tan et al. (2022). We aim to show how bias, rather than rigor of research, can drive conclusions that steer educators away from EBPs and supports that some students with disabilities need to have opportunities for and access to effective, evidence-based mathematics instruction.
Theoretical Limitations
Empirical research should describe and follow a process that results in transparency and replicability in the methods while reducing potential sources of bias (Page et al., 2021). Hence, an essential step in the literature review process is to thoroughly operationalize the constructs of interest in the research. Precise definitions reduce ambiguity and subjectivity, enhancing the reliability and trustworthiness of the research findings (American Education Research Association [AERA], 2006). Sometimes researchers present definitions in a footnote (e.g., Tan et al., 2022) or not at all, leaving the main text devoid of precision and targeted arguments. The vague definitions of direct instruction in Tan et al. (2022) included brief lists of words, such as "tell-show-try-apply" and "model-lead-test" (p. 21); however, they did not include a thorough definition of direct instruction. This lack of precision will likely lead to misinterpretations of the authors' main points. In an article where the authors call out direct instruction as "ableist" and "dehumanizing," they include a footnote in reduced font at the end of the article stating:

Direct instruction is not in and of itself ableist in all contexts; however, when only one pedagogical approach is applied to all disabled students regardless of their understanding and/or the mathematics at hand, we see this as ableist. Nondisabled students are not given the same limitations on how they can learn mathematics. (Tan et al., 2022, p. 29)
In other words, when tailored to individual students' needs, they contend direct instruction is not "dehumanizing" or "ableist." We prefer the term explicit instruction.
Explicit instruction includes the following essential components: (a) focus on important elements of the skill, (b) segmentation of those skills, (c) a high degree of student engagement throughout the lesson, (d) faded supports to build student independence, and (e) purposeful practice (Hughes et al., 2017). In practice, there are few viable, effective alternatives to explicit instruction that will significantly improve student learning outcomes. One approach often touted as an alternative is constructivism (Kroesbergen et al., 2004).
Constructivism holds that people learn information through interactive experience. Importantly, constructivism was initially a theory of how people learn but was later adopted as a theory of instruction (Elliott et al., 2000). In a critique of instructional research, however, Mercer et al. (1994) clarified that constructivist approaches actually include a majority of explicit instruction features, such as using teacher scripts, goal setting, modeling the target strategy, using mnemonics, using teacher think-alouds, engaging in guided interactive dialogue (scaffolding), verbal rehearsal, monitoring progress, and providing teacher feedback. In other words, constructivism works through explicit instruction, even though the two are often presented as a false dichotomy, as was the case in Tan et al. (2022).
In a randomized control trial study in mathematics across 39 schools, Agodini and Harris (2016) analyzed the effects of different curricula on student achievement. They concluded that, generally, students who learned mathematics through curriculum materials defined more by explicit instruction principles outscored students who learned mathematics through curriculum materials defined more by constructivist instruction principles. These results held consistent over multiple years. Furthermore, when teachers held stronger supportive beliefs about constructivism while teaching from a more constructivist curriculum, students' scores were even lower. Therefore, the greater the adherence to explicit instruction components in mathematics, the higher the potential for student performance. Morgan et al. (2015) reached similar conclusions, finding that teacher-directed instruction led to better outcomes for students who were less proficient in mathematics and was at least equal to student-centered approaches for students who were achieving on or above grade level. Thus, when put into action, all students, from academically or intellectually gifted to low-achieving, benefit from effective explicit instruction.
Methodological Limitations
Having clear and reproducible methods is essential when reporting findings from reviews (Pigott & Polanin, 2020). The title of Tan et al.'s (2022) paper indicates it is a critical review, yet the method states that the authors used a conceptual review process, though the reference given (Kennedy, 2007) does not provide a definition of or guidelines for such a process. Different reviews of the literature have different goals in mind. However, when the focus of instruction is on students who do not score at least proficient in content performance, achievement must be considered. Purposefully, Tan et al. (2022) excluded studies "that used mathematics achievement data as the sole source of connection to mathematics education" (p. 10). This exclusion is misguided and represents a meaningful limitation. Achievement data are a key student outcome for students identified with disabilities. The assessment protocol for identifying students as eligible for special education services is predicated on achievement data. Relatedly, evaluation of the effectiveness of instruction requires quantitative analysis of student mathematical behavior. Thus, we are not certain how a thorough review of special education research can exist without reviewing studies focused primarily, or even solely, on achievement data.
Instead, the articles that Tan et al. (2022) selected for inclusion varied considerably in methodological design, from case studies (e.g., Xin et al., 2016) to teacher surveys (e.g., Bailey et al., 2015) to panel studies (e.g., Hostins & Jordão, 2015) and exploratory designs (e.g., Karvonen et al., 2013). The only reference to methodological inclusion criteria was "original empirical studies." Although different methodologies form a valuable part of the research process, such as case studies that provide insights into a single individual's life, findings from many of these designs inherently should not be generalized, making it difficult to draw conclusions based on their outcomes.
In conducting their search for eligible articles, the authors did not adhere to their own review protocol, let alone Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA guidelines) (Page et al., 2021). Tan et al. (2022) stated, We added two additional studies (Eriksson, 2008a, 2008b) through a hand search of the
While an additional hand search may be warranted to identify appropriate articles from high-impact journals, we find it curious that the authors missed these studies in their electronic search, since the
We find it difficult to replicate the authors' search results, a key element of high-quality reviews (AERA, 2006; Pigott & Polanin, 2020). We recommend researchers use precision in their analyses and be clear about which studies they review and why those studies were selected. Hand searches are not atypical in systematic reviews. However, the potential for bias increases when researchers hand select articles that push a predetermined narrative rather than following the evidence gleaned from a systematic search. When researchers selectively include or exclude studies or change critical keywords in the search process to deliberately eliminate studies that do not support their predetermined conclusions, their study samples are biased, leading to flawed and/or incorrect conclusions.
Sources of Potential Bias
Many scientists hold preconceptions about the results of their study before completing it. However, how the study is conducted can determine whether the researchers' biases affect the outcome. The use of reliable methods is crucial for producing high-quality empirical research that can be both replicated and interpreted appropriately. In the Tan et al. (2022) article, the authors stated:

We do not present interrater reliability for this study, as we reached a consensus on all coding categories through multiple research meetings. Because we aimed to understand this research literature and did not have preexisting categories for coding, our collaborative process focused on collective development and meaning making around each code. This meant that our analysis process was biased and circumscribed to our positioning as disability studies education scholars during, for example, the examination of each article's methodology. (p. 28)
The authors expand further on their own bias, stating:

Another limitation is that all article inclusion decisions were primarily made by the first author during the early stages of the process. This could have primed subsequent collective conversations among the writing team. We also recognize that we may have excluded relevant research for this review. (Tan et al., 2022, p. 28)
This example illustrates that the authors are aware of their methodological mistakes and their own personal biases; these significant shortcomings preclude the review from being considered a high-quality systematic review (Hammersley, 2020). PRISMA guidelines for meta-analyses include risk of bias in five of their 27 review items (Page et al., 2021). Such errors require more than an acknowledgment and should have triggered a restart of the research. We urge scientists, editors, and reviewers to use PRISMA and What Works Clearinghouse guidelines to thoroughly evaluate their procedures and reduce the influence of their own biases. Manipulating or disregarding high-quality research to further a personal bias, even in conceptual reviews, represents a distortion of evidence rather than an exploration of what is known or not known about a topic (Kennedy, 2007).
Effectiveness of Explicit Instruction
The effectiveness of explicit instruction for improving mathematics outcomes for students with disabilities is well documented. The intent of explicit instruction is to help students acquire, retain, and generalize information through scaffolded supports: students first work directly with someone who has the requisite knowledge, namely a teacher, who then fades that support until the student can use the skill or strategy independently in multiple settings and contexts, when appropriate. Because there is great variability in the specific learning needs of students with disabilities, this gradual release of instruction toward student independence reveals a continuum of learning that must be flexible.
The rate of fading instruction for students with disabilities varies considerably due to essential factors, particularly differences in the cognitive profiles and abilities that impact performance. For example, many students with disabilities, such as those with low vision or who are blind or deaf or hard of hearing, have typical cognitive processing (e.g., working memory, information processing, and adaptive reasoning). Hence, these students can learn concepts to function independently more quickly and require fewer instructional scaffolds than their peers in disability categories characterized by extensive communication and support needs, such as students with intellectual disability and those considered developmentally delayed. The latter students require more substantial instructional support delivered through explicit instruction for extended periods to improve their academic, social, and emotional outcomes (Friend, 2018). Still, some students with disabilities demonstrate academic giftedness and may require less guidance to grasp academic concepts (Tofel-Grehl & Callahan, 2017). Teachers must consider the heterogeneity in these students' cognitive and adaptive capacities to make the most informed educational decisions about selecting and applying EBPs.
Principles of explicit instruction align with everyday life experiences students may have when receiving on-the-job training in the workforce to supplement limited background knowledge or skill in an area. For example, how-to books and videos have been used for years and by millions to learn how to accomplish everyday tasks, from household repairs to applying an instructional approach. These videos usually align with some principles of explicit instruction in that they provide models, examples, and guidance to complete the task. In addition, many occupations use an apprentice model to train employees to perform essential job duties (e.g., electricians, plumbers, and doctors). When used effectively, explicit instruction makes content manageable by sequencing and breaking down content at a level aligned with an individual's needs (Archer & Hughes, 2011). It is in fact never intended to be "applied to all students regardless of their understanding and/or the mathematics at hand" (Tan et al., 2022, p. 29). Moreover, explicit instruction is often used alongside other instructional approaches, such as cognitive strategies (e.g., cognitive mnemonics) and multimodal instruction (e.g., Concrete–Representational–Abstract and Virtual–Representational–Abstract; Bouck et al., 2018; Peltier et al., 2020).
Empirical evidence from meta-analyses of mathematics interventions for students with disabilities has consistently supported explicit instruction as an effective practice. In 1998, Swanson and Hoskyn completed a meta-analysis of 180 interventions for students with learning disabilities (LD) and found that direct instruction and cognitively guided strategy instruction were associated with positive outcomes across all content areas. In 2003, Kroesbergen and Van Luit conducted a meta-analysis of 58 mathematics interventions for elementary students with disabilities and found that direct instruction yielded among the highest effect sizes, compared with mediated instruction. In 2009, Gersten et al. found significant effects of explicit instruction in their multiple-regression meta-analysis. They highlighted that elements of explicit instruction are found in many interventions alongside other instructional components, such as visual representations and student verbalizations. They concluded by confirming that explicit instruction is an important tool in the teaching of mathematics to students with LD. In a follow-up practice guide, Fuchs et al. (2021) also supported explicit instruction under a broader term, systematic instruction.
In an analysis of 328 studies from 1966 to 2016, Stockard et al. (2018) reported positive outcomes with Direct Instruction across multiple content areas, including mathematics. Lein et al. (2020) studied 33 high-quality problem-solving interventions and found that a large percentage incorporated explicit instruction, revealing positive effects. Most recently, Myers et al. (2022) found that the mathematics problem-solving approaches with the highest effect sizes incorporated components of explicit instruction, such as modeling. Thus, for students with disabilities, explicit instruction encourages problem solving, further leveling the field of learning between students with disabilities and those without. Importantly, consortia of experts in mathematics have corroborated the findings of meta-analyses supporting the effectiveness of explicit instruction for the mathematics learning of students with disabilities. As stated by the National Mathematics Advisory Panel (2008), "Explicit instruction with students who have mathematical difficulties has shown consistently positive effects on performance with word problems and computation" (p. xxiii).
Although explicit instruction includes important tools for teaching and learning most content areas, including mathematics, it is not the only highly effective approach to teaching mathematics to students with mathematics difficulties. Other approaches and strategies have evidenced success across research studies, such as cognitive instruction (e.g., SRSD; Cuenca-Carlino et al., 2016), manipulative-based instructional sequences (e.g., the concrete–representational–abstract sequence of instruction; Bouck et al., 2018; Peltier et al., 2020), heuristics (e.g., Freeman-Green et al., 2015; Jitendra et al., 2016; Myers et al., 2022), incremental rehearsal (Zaslofsky et al., 2016), systematic instruction (e.g., Spooner et al., 2019), and spaced learning (e.g., Kang, 2016). Importantly, however, many of these use explicit instruction as a tenet of their design and delivery. In summary, explicit instruction consistently improves students' math outcomes.
Drawing Conclusions
Explicit instruction is a demonstrably effective set of tools for teaching students with disabilities, but it is not without criticism and certainly not the only proven, successful approach for helping students gain proficiency and become more independent learners. As is expected of all researchers, we welcome and consider alternative approaches. However, drawing meaningful inferences about the dehumanizing nature of interventions based on explicit instruction will require researchers to provide empirical data that support this position, such as evidence derived from appropriately designed studies that refutes the efficacy of interventions delivered through explicit instruction. Critics of explicit instruction, such as Tan et al. (2022), have used a conceptual review (Kennedy, 2007) to disparage an EBP without providing sufficient or reliable evidence. It is imperative for researchers to continue to investigate and evaluate teaching methods while also striving for better outcomes for all students, particularly students with disabilities. Ultimately, improving performance in mathematics is empowering, not dehumanizing.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
