Abstract
As part of the 125th issue of Teachers College Record, this commentary provides an overview of technological innovation with a focus on the emergence of machine learning and artificial intelligence (AI). The presence and use of AI is a pressing contemporary topic in education, raising questions about the information and perspective(s) AI might privilege, as well as the evolving ethical concerns related to the blurred and increasingly indistinguishable boundaries of human and nonhuman entities and practices.
In 1900, the first volume of Teachers College Record was published with a focus “devoted to the practical problems involved in the professional training of teachers” (Russell, p. iii). In the 124 years since, Teachers College Record has evolved, expanding its reach to address social justice issues and educational policies through “empirical, critical, methodological, and theoretical disciplinary and interdisciplinary scholarship that emphasizes the intersections between education research and other fields, raises new questions, and develops innovative approaches to our understandings of pressing issues” (Teachers College Record, 2024). No doubt, the journal embraces thought-provoking research to encourage academic conversation, to inspire pedagogical reform, and to support social change.
This commentary, which is included in the 125th issue of Teachers College Record, historically considers human innovation and concerns, as well as meaning making, ethics, and policies in light of advances in artificial intelligence (AI). This commentary begins with an overview of AI and recent policies, followed by a brief exploration of innovation in 1900, the year of Teachers College Record’s inaugural volume. The discussion also includes the 20th-century emergence of AI, a Teachers College Record article related to machine learning, as well as current-day concerns and foci related to AI and education and Teachers College Record’s role as a current and future space for ontological, epistemological, methodological, and pedagogical debate and exploration.
Why AI?
Among pressing contemporary topics in education, AI continues to surface and to resurface with what seem to be spates of energy to reject it, accept it, and possibly regulate it. Although the Organisation for Economic Co-operation and Development (OECD) offered AI Principles in 2019—standards intended to promote and to maintain human rights in the face of innovation—AI regulation has feverishly become a public concern in the past year. Globally, the push-pull tension with AI has resulted in a mélange of commentaries on regulations in countries on six continents—Africa (e.g., Musoni, 2023), Asia (e.g., Potkin & Mukherjee, 2023), Australia (e.g., Taylor, 2023), Europe (e.g., Heikkilä, 2023), North America (e.g., Sanger & Kang, 2023), and South America (e.g., Maffioli, 2023)—and the OECD (2024) continues to feature a live repository of global AI policy initiatives. Additionally, the European Union has made headlines with its “political agreement” (Satariano, 2023a, para. 2) on the A.I. Act, a law intended to regulate AI to reduce the risk of harm in the abuse of power, the perpetuation of biases, and the opaqueness of AI’s presence (Satariano, 2023a, 2023b).
Unsurprisingly, social concerns abound. Soon after the launch of ChatGPT in November 2022, AI was perceived as “already threatening to upend how we draft everyday communications,” which, when applied to the lobbying process, could compromise democracy (Sanders & Schneier, 2023, para. 1). In January 2023, the New York City Department of Education restricted the use of ChatGPT (Rosenblatt, 2023)—only to rescind the ban four months later, with David Banks, New York City Schools chancellor, acknowledging that the integration of ChatGPT in education should include a discussion of ethical complexities as well as employment possibilities (Banks, 2023). In other words, there is no shortage of dialogue about AI—its capacities, its dangers, and its oversight—which directly and peripherally involves education and educational policies. Given the predictability of human nature (even in the face of unpredictable technology), before moving forward with discussions of AI, it is important to consider the evolution of technology and related human responses.
Revolutionary Waves and Technological Advancements
The year 1900, when the first Teachers College Record volume was published, marked a time of significant precursors to technological booms and discoveries. In the Western world, the Second Industrial Revolution, which spanned the late 1800s and early 1900s, included changes in transportation, energy, and communication (Kiger, 2023). In particular, in 1900, innovations included Max Planck’s work that paved the way for quantum physics (WGBH, 1998); the Wright brothers’ first glider that informed their iterations of the airplane (Smithsonian, n.d.); and Guglielmo Marconi’s telegraph-based patent for “simultaneous transmissions on different frequencies” (Massachusetts Institute of Technology, n.d., para. 3) that advanced a path for wireless communication.
Before the middle of the 20th century, the invention and advancement of the digital computer between 1937 and 1942 (Iowa State University Department of Electrical and Computer Engineering, n.d.) soon inspired the idea of machine learning. In 1946, Arthur L. Samuel, an American computer scientist, conceived the idea of a computer using its experience playing checkers to modify future moves (Lee, 2023). Samuel’s work became a reality in the 1950s, when he held a televised debut of machine learning on February 24, 1956 (Samuel, 1959); soon thereafter, in 1959, Samuel coined the term “machine learning” (Press, 2016; Samuel, 1959). Interestingly, this early form of artificial intelligence and others that followed were connected to game play. Around this time, Alan Turing (1950), a British mathematician, logician, and computer scientist, asked, “Can machines think?” (p. 433). After providing counterarguments to nine possible objections, Turing acknowledged not only that human thinking is tied to human capacities, education, and experience, but also that computers had tremendous potential to “compete with men in all purely intellectual fields” (p. 460).
Machine learning continued as an important topic of discussion, and in the second half of the 20th century, Teachers College Record featured an article by Oleg K. Tikhomirov (1974)—a professor of psychology at Lomonosov Moscow State University—in which he assessed arguments about the connection between human cognition and computer output. Tikhomirov explained that “thought is not just problem-solving; it is problem-forming activity as well” (p. 366), and he challenged the extant theories of substitution (i.e., a computer could completely replace a person’s “mental work,” p. 359) and theories of addition (i.e., computers enhance a person’s ability to process information). Contending that “sameness of man and machine heuristics is nothing but illusion” (p. 361), Tikhomirov deeply considered how humans think—and the emerging and evolving emotions embedded in their decision making—which differed from the then-prescribed approach of computers processing information. Furthermore, as Tikhomirov discussed AI’s popularity and the perspective that “there already exist no limitations on the bringing together of program possibilities and human abilities” (p. 370), he contemplated how human–computer interactions can lead to changes in how humans communicate and solve problems.
Now, as we enter the Fifth Industrial Revolution (Nosta, 2023), discovery and invention seem to have both a speed and a capacity to capitalize on the amalgamation of human and computer abilities that Tikhomirov referenced 50 years ago:

Currently, there is an unprecedented synergy between human and machine intelligence. This revolution is set to be further amplified by the groundbreaking multimodal capabilities of generative pre-trained transformers (GPT), which add sensory dimensions to artificial intelligence, thereby enriching the human experience in ways previously unimaginable. (Nosta, 2023, para. 3)
For educators, education researchers, education policy makers, and other education stakeholders (e.g., students, parents), these advancements can present, promote, and/or perpetuate new and systemic opportunities and challenges, thereby underscoring the importance of careful, critical thinking about the information that might be presented continually as unvarnished fact.
A Focus on Human Thinking
Whereas the 20th century included debates with computer thinking and machine learning as their fulcrum, the advancement of AI turns the focus onto humans: To what extent are people, in general, and students, in particular, thinking critically about the information encountered daily? In other words, if computers can think, then what ideas—and whose ideas—are propagated, manipulated, and regurgitated? And if AI is supporting unique outputs (e.g., via ChatGPT, DALL·E, DeepDream, AIVA [Artificial Intelligence Visual Artist]), then how does that process affect the quality and veracity of the information generated, and according to whose perspective(s)?
Some scholars have begun to conceptualize ways that learners interact with digital texts, reworking them through AI generators, simultaneously producing and consuming new texts. At the 2023 Literacy Research Association (LRA) conference, Donna E. Alvermann 1 —a University of Georgia-appointed distinguished research professor of language and literacy education—suggested that “readers of digitally remixed texts that travel/mingle with synthetic texts are at risk of encountering deepfakes in core disciplines.” 2 Alvermann’s insightful point provokes questions about shifts in normative conditions: namely, if, how, and when might readers of synthetic texts become inured to the evolving sets of conditions they face? And how, if at all, do these conditions inform (and perhaps reform) pedagogy and practice? Relatedly, Abrams and Hanghøj (2023) have explored a model for critical AI literacies, and they have contemplated the affordances and constraints of AI’s role in education research.
Underpinning these discussions are lingering and evolving ethical concerns, including, as Baker et al. (2023) acknowledged, AI’s role in shaping and reshaping historical, social, and educational knowledge, concepts, and experiences. More specifically, in their recent Teachers College Record article, Baker et al. (2023) examined chatbot cases to address ethical issues related to ontological, pedagogical, and social shifts, especially given that “the exclusivity of historic features attributed to the human are being reshaped in uncertain and multiple ways” (p. 78). Here, too, in relation to meaning making and education, some might turn to Barad’s (2003) posthuman perspectives to interrogate “the differential categories of ‘human’ and ‘nonhuman’ . . . [and] the practices through which these differential boundaries are stabilized and destabilized” (p. 808). Indeed, in many ways with AI, extant boundaries might be pushed, perforated, and even erased, possibly even fully blurring definitions of human and nonhuman entities and practices.
In closing, it seems appropriate to return to the age-old question about checks and balances. American mathematician Norbert Wiener, whose work contributed to the early development of AI (National Academy of Sciences, 1992), aptly addressed the issue of safety and protection. In response to a question about social planning, he responded as follows:

“Who guards the guardians?” has always been a serious problem. Certainly, the sort of public opinion that can be swayed one way or the other by spurious propaganda and advertising methods cannot be counted on to protect us from our protectors. (Wiener, 1959, p. 41)
As with any research—and with that on AI in particular—educators, education researchers, education policy makers, and other education stakeholders need to remain vigilant about how, what, when, where, and why AI plays a role in the practices and the spaces they study, as well as in their research approaches and agendas (e.g., methodologies, philosophies, literature searches).
This is not intended to be alarmist; rather, such vigilance is warranted because the more that human and nonhuman boundaries become obscured (as they inherently do in quotidian actions, such as traveling via car, bus, or train, which have AI-related control and safety systems), the more humans become habituated to capitulating to, and perhaps blindly complying with, what will become new norms. Despite a recent Pew Research Center finding that 81% of adults in the United States are “largely concerned and confused about how their data is being used” by companies (McClain et al., 2023, p. 3), only 6% of respondents never skip reading a company’s privacy policies (i.e., they always read the policies; see McClain et al., 2023, p. 71); this percentage has declined since Auxier et al.’s (2019) study, which indicated that only 9% of U.S. adults always read a company’s privacy policies before accepting the terms and conditions online. These and related findings are indicative of the ways of being that most adults have adopted in digital spaces.

To protect the ethical sanctity of research, educators, education researchers, education policy makers, and other education stakeholders need to consider continuously and reflexively the boundaries of their work and of their practice, and hold the field to a standard that supports balanced and transparent thinking and scholarship. As a publication mainstay, Teachers College Record continues to serve as an important forum for such scholarship to challenge, to support, and to advance ontological, epistemological, methodological, and pedagogical discussions as we grapple with the seen, the unseen, and the unforeseen in education in and beyond the Fifth Industrial Revolution.
Footnotes
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
