Abstract
Developed in the late 1990s by the Centre for Applied Special Technology, the pedagogical framework known as “Universal Design for Learning” (UDL) has drawn increasing investment from K-12 and post-secondary institutions. Promoters of UDL often frame the approach as “based in neuroscience” and as an “evidence-based approach” to instructional design in teaching and learning. While the rhetoric is promising, no rigorous published research has demonstrated improved outcomes from an educational intervention designed with UDL principles in mind. Furthermore, the community of practice around UDL appears hostile to questions about the rigor of the analyses used to promote UDL interventions. Studies of UDL approaches do not follow best practices in research design, and often solicit anecdotes rather than testing the effectiveness of the approach. The purpose of this policy research note is to survey the state of the art in research on UDL and to clarify the origins of the pedagogical theory. Because the effectiveness of this theory has not been demonstrated, there are no grounds for framing UDL implementation plans as “evidence-based” decisions. Further, the reluctance of UDL advocates to rigorously study the effectiveness of their intervention raises important questions about their confidence in the theory. For these reasons, the only evidence-based conclusion that can be drawn about UDL is that further study is required, as its core claims remain unproven. Institutions at any educational level should proceed with caution before devoting significant resources to the implementation of UDL.
A wise man, therefore, proportions his belief to the evidence (Hume, 2007: 97).
David Hume’s aphorism, found in the Enquiry Concerning Human Understanding, captures a principle that should guide educational policy as much as philosophy: belief, and the investment that follows from it, ought to be proportioned to the available evidence.
It is with an eye to the importance of evidence in guiding pedagogical practice that I would like to reflect in this policy research note on the latest trend in popular pedagogy: Universal Design for Learning (UDL). An output of the Centre for Applied Special Technology (an educational research and development organization based in Wakefield, Massachusetts), UDL advocates for a revolution in pedagogy, including the widespread—if not full—integration of technology, multiple modes of representation and assessment, and a recognition of learner and context variability (Meyer et al., 2014; Rose and Meyer, 2002). As will be discussed below, while the call to action is compelling, the policy changes called for by UDL advocates lack an evidentiary basis of success in prior applications. Much of the literature on UDL has been self-published by the Centre for Applied Special Technology, and the small proportion that has passed through peer review has almost entirely focused on instructors’ ability to adopt UDL practices rather than on effectiveness from the student perspective. Lacking evidence of effectiveness, continued investment in UDL involves the uptake of disproven ideas from learning styles theory, the over-scaffolding of complex skills, and the promotion of expensive digital technologies despite evidence suggesting the inferiority of screen-based reading.
My aim in this policy research note is to call attention to the seriously problematic dearth of evidence to justify UDL’s application in educational institutions. Given the widespread and growing institutional investment in the framework, the absence of rigorous evidence of its effectiveness should give policymakers pause.
Universal Design for Learning: A brief introduction
As mentioned above, the UDL framework was developed by an organization named the Centre for Applied Special Technology. As two of the centre’s founders note in Teaching Every Student in the Digital Age, the framework extends the architectural concept of universal design—the call for built environments accessible to all users—into the domain of pedagogy (Rose and Meyer, 2002).
The UDL framework advocates for multiple means of engagement, representation, and action/expression, each broken down further into specific objectives (Meyer et al., 2014: 111). The framework claims that three separate networks in the brain (recognition, strategic, and affective) should be considered while planning lessons. By focusing not on the individual needs of students but on expected population-level variability, “we don’t even need to know our particular students to plan for the range of variability in a given dimension” (Meyer et al., 2014: 110). The practical application of UDL, then, involves standard practices of goal-setting, effective and appropriate assessment, and consideration of method and materials. What separates UDL from generic lesson planning would be: (1) the hypothesis that there are three specific learning networks that map onto areas of the brain; and (2) the argument that every student should be provided with multiple means of engaging with learning, supported by technologies that can take over the skills not being directly assessed in a particular case.
Where’s the evidence?
UDL is consciously framed as a scientific approach, and advocates often state that the three networks of learning were identified through “advances in neuroscience” (Meyer et al., 2014: 5). More recent works published in-house at the Centre for Applied Special Technology refer back to the original Rose and Meyer presentation of the three networks (Rose and Meyer, 2002: 11–40). Rather than citing scientific articles, as might be expected given the strong claims of being grounded in neuroscience, the authors present images altered from an introductory neuroscience textbook. While there is nothing scientifically invalid about casually observed correlations forming the basis of a hypothesis, presenting that hypothesis as settled neuroscience—without citation of supporting empirical studies—overstates the evidentiary record.
Further questions arise when looking at specific claims of UDL. The claim that students know best and can self-select into activities that best suit their learning style has been resoundingly rejected as a neuromyth (Kirschner and van Merriënboer, 2013; see also Rose and Meyer, 2002: 148–9). While UDL’s general advocacy of scaffolding skill development follows the scholarly consensus, the specific argument that digital scaffolding can isolate specific skills to be assessed removes the opportunity for students to develop an understanding of how skills are applied in complex contexts. Finally, the constant advocacy for screen-based classrooms ignores research demonstrating reduced comprehension when students read on screens versus paper materials (Clinton, 2019; Halamish and Elbaz, 2020).
Does it work?
Skeptics of any innovation can often be silenced when presented with strong evidence of success. In the case of UDL, this evidence is neither presented nor is its collection even prioritized by its advocates. First, there is a problem of research design, as studies involving UDL often fail to control for and discuss demographic data, comparative pre- or post-intervention metrics, control groups, experimental design, or testing of alternatives (Davies et al., 2012; Ok et al., 2017). A major barrier emerges from this lack of clarity about what exactly UDL entails in practice: because the specifics of the framework lie in hypotheses about brain processes and digital integration, it is difficult to determine whether a lesson studied as an example of UDL really differs from generic good lesson planning.
As Willingham (2012) argues, almost any pedagogical fad can produce anecdotal evidence shared in the form of heartfelt testimonials. While the mere existence of testimonials does not imply that there is anything fraudulent afoot, the continual reliance on anecdotes and testimonials (rather than evidence) in the UDL literature produced by the Centre for Applied Special Technology should raise questions. The problematic relationship to evidence can be captured in two vignettes. First, the 2019 report of New Hampshire’s UDL program includes only one small section discussing students, presented through sample lesson plans. While we are told that the lessons are “universally designed,” it is impossible to determine the specific UDL influence in “mining” the chocolate chips out of cookies or “choos[ing] to explore the work of an artist, a poet or a musician” (CAST, 2019: 7). The teacher-facing data in the report is rather surface-level as well, but relying solely on anecdotes that could easily have emerged from a non-UDL lesson plan demonstrates a shockingly shallow study of the student experience.
The second vignette comes from Beerwart’s (2018) study of UDL, and speaks to her experience in a seminar on the topic of UDL run by the pay-per-module Graduate School of Education at Harvard University. “During one particular session,” Beerwart (2018: 5–6) writes:

a question was asked about the effectiveness of the framework … The program member in the session seemed to balk at the idea of people wanting data, and the question was never answered. There was a mention of qualitative studies “being conducted,” yet no results were produced, nor were details supplied on where to find that qualitative information.
Conclusion
It is not my intention in this policy research note to state definitively that the theory of UDL is invalid. Because we lack evidence about its effectiveness, it would be premature to reject UDL, just as it is premature to promote it as an evidence-based practice. The only defensible conclusion at present is that further, rigorous study is required before institutions commit significant resources to implementation.
Disclaimer
The views and opinions expressed in this article are the work of the author and do not necessarily reflect an official position of the Algonquin & Lakeshore CDSB.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The author wishes to acknowledge the generous support of the Social Sciences and Humanities Research Council of Canada.
