Abstract
Artificial intelligence (AI) is a transformative force in education. Realizing the full potential of AI in education requires a multidisciplinary and holistic AI literacy framework that can inform research, practice, and policy. This novel ED-AI Lit framework includes six components: Knowledge, Evaluation, Collaboration, Contextualization, Autonomy, and Ethics.
Tweet
To realize the full potential of artificial intelligence (AI) in education, we propose a multidisciplinary and holistic AI literacy framework that is theory-based and translates into concrete recommendations for policy and practice.
Key Points
Realizing the full potential of AI in education requires a multidisciplinary and holistic AI literacy framework
AI literacy components include Knowledge, Evaluation, Collaboration, Contextualization, Autonomy, and Ethics
AI literacy standards need to be developed and integrated across academic areas in K-16
Policies should require transparency and openness in the development of educational AI tools, as well as collaboration among researchers, industry partners, and education stakeholders
Ethical considerations should be explicitly addressed during the development and implementation of all AI literacy tools and training
In today's rapidly evolving digital landscape, AI has emerged as a transformative force, reshaping many aspects of modern life. The AI market is projected to grow explosively over the next decade, reaching a value of roughly two trillion U.S. dollars by 2030 (Next Move Strategy Consulting, 2023). Younger generations find themselves on the cusp of an era that will be profoundly transformed by AI, one that will leave an enduring imprint on their ways of thinking and their career development. Indeed, a 2022 survey of U.S. professionals found that 29% of respondents from the Generation Z cohort used generative AI products (Fishbowl, 2022).
Given these rapid developments, AI is also critically reshaping the very fabric of education. AI has already begun to have a widespread impact on educational spaces, ranging from the decision-making dynamics of educators to the transformation of classroom tasks and learning experiences (Allen et al., 2021, 2022; Chen et al., 2020; Crossley et al., 2021). Teachers can use generative AI to create assessments, lesson plans, and even to provide early feedback on students’ assignments; conversely, students can now generate essays or other assignments with minimal effort, undermining genuine learning and the very purpose of education (Alexander et al., 2017).
With AI's integration into various facets of life becoming increasingly inescapable, education is poised to embrace this revolution, necessitating a recalibration of curricula that accounts for the strong impact of AI on student learning (McCarthy et al., 2022). In addition to classroom practices, the deluge of AI-generated information has severely exacerbated the misinformation crisis (Kendeou & Johnson, in press), making discerning fact from fiction a complex and difficult task for educators and students alike (Lameras & Arnab, 2021). Students without AI knowledge risk misinterpreting AI-generated content, overlooking the potential of AI tools in enhancing their learning, and remaining ill-equipped to critically evaluate the digital content they encounter daily.
It has now become essential for students to develop knowledge and skills about how AI works and is deployed—that is, AI literacy.
This paper draws on the extant literature on learning, comprehension, and AI to propose the ED-AI Lit framework for AI literacy.
The ED-AI Lit Framework for AI Literacy
AI literacy has recently emerged as a pivotal competency, akin to traditional literacies such as digital literacy (Alexander et al., 2017; Ng et al., 2021; Spante et al., 2018) and media literacy (Hobbs, 2013; Mihailidis & Viotty, 2017).
As AI's ubiquity drives its integration into education, new challenges arise. There is a need for a thoughtful integration of AI activities in K-16 education, and efforts are underway to develop tools, curricula, and research methodologies to foster AI literacy in K-12 education (Auxier et al., 2020). Broadening participation in AI is crucial to ensure inclusivity and address the historical underrepresentation of specific demographics in AI.
We propose a multidimensional framework for AI literacy that draws on the extant literature on the science of learning, discourse comprehension, and AI. The components of the framework include students needing: (a) knowledge of how AI systems work, (b) the ability to critically evaluate AI technologies and the information they produce, (c) skills to collaborate effectively with AI systems and peers, (d) the capacity to contextualize AI applications across domains, (e) autonomy in their interactions with AI, and (f) awareness of the ethical implications of AI.
By collaborating with AI in the context of authentic classroom tasks, students will develop a deep understanding of the inner workings of AI and, perhaps most critically, how these technologies impact the way they learn and communicate with others. In turn, developing AI literacy that demystifies AI systems and promotes critical thinking will mitigate potential harm caused by the misuse or misunderstanding of generative (and other forms of) AI, contributing to a more equitable and inclusive AI ecosystem. The next sections provide more information about the components of the framework, highlighting actionable steps that can be taken by educators. The article ends with policy recommendations related to AI literacy in educational spaces.
Knowledge
Knowledge is a foundational component of the AI literacy framework. The importance of prior knowledge in the context of AI literacy can be reinforced by drawing upon research from diverse educational domains that emphasize the critical role of prior knowledge for learning. Prior research supports the idea that prior knowledge serves as a linchpin for comprehension, facilitating the acquisition and assimilation of new information (Alexander et al., 1994; McCarthy & McNamara, 2021). For example, individual differences in prior knowledge impact students’ ability to make inferences about texts and thus their ability to develop a deep understanding of complex concepts (Kintsch, 1988, 1998). These findings highlight the generalizability of the importance of prior knowledge in learning and comprehension (Kendeou et al., 2016; McCarthy & McNamara, 2021). Emphasizing the role of prior knowledge in AI literacy, through these established theories and research, advocates for the integration of foundational understanding as a fundamental component in AI literacy education, enabling learners to effectively engage with and comprehend AI technologies.
The knowledge component involves both knowledge about AI as well as other relevant knowledge that can be used to support learning about AI. First, knowledge about AI involves providing students with a comprehensive grasp of how AI technologies operate, including their underlying principles, algorithms, and applications. However, the knowledge component also implies that the learning of AI should be integrated into learning contexts in which students have domain knowledge already. By incorporating AI literacy into other domains (e.g., literature, math, and history), students can better connect their knowledge of AI to knowledge they already have, thus deepening their understanding of AI across academic areas and contexts. This component is thus essential in empowering individuals to make informed decisions, critically assess, and comprehend AI technologies in various contexts.
Educators play a crucial role in integrating the knowledge component into their classroom practices and curriculum. To instill this foundational knowledge, educators can design lessons and activities that introduce fundamental concepts of AI. This may include explaining the basic principles of machine learning, algorithms, and how AI systems function. Implementing interactive learning experiences, such as hands-on projects or simulations, can be valuable in demonstrating these complex concepts in an accessible manner. Additionally, incorporating discussions on real-world AI applications and ethical considerations can enrich students’ understanding of the broader societal impact of AI technologies.
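One hands-on project of the kind suggested above could let students build a tiny classifier themselves, so that "learning from examples" stops being an abstraction. The sketch below is a hypothetical classroom exercise, not a tool from the literature: a 1-nearest-neighbor classifier with invented data, chosen because it needs no libraries and its decision rule is fully inspectable.

```python
# Classroom sketch: a 1-nearest-neighbor classifier built from scratch.
# The data and the pass/fail scenario are invented for illustration.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train_examples, new_point):
    """Label a new point with the label of its nearest training example."""
    nearest = min(train_examples, key=lambda ex: distance(ex[0], new_point))
    return nearest[1]

# Toy training set of (features, label) pairs: [study_hours, sleep_hours]
train = [
    ([8, 7], "pass"),
    ([7, 8], "pass"),
    ([2, 4], "fail"),
    ([1, 5], "fail"),
]

print(predict(train, [6, 7]))  # nearest neighbor is ([7, 8], "pass")
```

Because every prediction can be traced to a specific training example, students can also see how the model inherits whatever patterns (or flaws) exist in its examples, a point that sets up later discussion of bias.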
Integrating AI literacy across different subjects and disciplines can also foster a multidisciplinary understanding of AI, emphasizing its application in science, mathematics, social studies, and even the humanities. This interdisciplinary approach can emphasize the multifaceted nature of AI and its diverse implications across various fields. Encouraging collaboration among educators across different subjects can further reinforce the integration of AI knowledge throughout the curriculum, creating a more comprehensive educational experience.
Evaluation
The skill of critically evaluating AI technologies is essential for AI literacy. The Evaluation component thus mirrors the importance placed on critical thinking and analytical abilities. This component includes the processes necessary for students to assess and critique AI technologies, highlighting their strengths, limitations, and potential biases. Drawing from established theories in critical thinking and evaluation (Ennis, 1987), this component emphasizes the significance of fostering evaluative skills. Such skills empower students to analyze, discern, and appraise the mechanisms, implications, and ethical dimensions of AI technologies.
The Evaluation component not only encompasses the assessment of AI technologies but extends to evaluating information sources (Britt et al., 2019). A wealth of false information is available across a variety of sources on the internet. Source evaluation relies on a complex set of skills that require the reader to process source features (e.g., author and venue) to make decisions about the quality and accuracy of information in texts (Braasch et al., 2018). Attending to source information is crucial to the successful integration of information across documents (Britt et al., 2013; Perfetti et al., 1999; Rouet & Britt, 2011). However, in the absence of domain expertise (and thus, disciplinary knowledge), individuals do not readily engage in these source evaluation processes (Wineburg, 1991), and the effects of source evaluation training have been mixed (e.g., Wiley et al., 2009).
Educators can cultivate evaluative skills in students by incorporating critical thinking prompts and activities into classroom activities. For instance, teachers can design activities that ask students to assess the credibility and reliability of sources presenting AI-related information. Collaborative discussions and debates regarding the ethical implications of AI can further enrich students’ evaluation skills, fostering their ability to approach AI technologies with a critical lens.
New assessment strategies will be essential in evaluating students’ aptitude in critical evaluation of AI technologies. Educators can implement assessments that gauge students’ ability to evaluate the credibility of AI-related information. Projects or case studies that require students to critically analyze AI technologies can also serve as an assessment tool. By evaluating students’ competence in distinguishing credible information from biased or unreliable sources, educators can effectively measure their proficiency in critical evaluation.
Collaboration
The Collaboration component of the framework underscores the importance of nurturing students’ skills in effective communication and interaction with AI technologies and peers. This component aligns with established educational theories emphasizing the development of collaboration skills (Roschelle & Teasley, 1995). It emphasizes the capacity to work collectively with AI systems and individuals, fostering an understanding of collaborative workspaces and interactions. This collaboration skill set is instrumental in preparing students to engage meaningfully in AI-driven societal and professional domains, stressing the importance of cooperative learning and cocreation in the AI landscape.
The focus on collaboration extends beyond interpersonal interaction to cooperative engagement with AI systems, necessitating an understanding of the interaction between humans and machines. Students need to comprehend how collaborative activities with AI are transforming traditional practices across various domains, ranging from medical diagnosis to business analytics. This collaborative skill set not only prepares students for future professional environments but also equips them to adapt to a world increasingly reliant on AI-driven systems.
Educators play a pivotal role in cultivating collaboration skills among students by structuring classroom environments that encourage group projects and team-based activities. For instance, students may engage an AI chatbot in dialog scenarios about a particular topic in the curriculum. When teaching about persuasive writing, the chatbot may simulate a debate with the student, where the chatbot takes on the role of a debate partner. This exercise would allow students to practice persuasive communication with AI, aligning with the collaboration component of the framework.
Assessment methods aimed at evaluating students’ collaboration skills could include team-based projects that require working with AI systems, where effective communication, delegation of tasks, and group coordination are assessed. Assignments promoting interactions with AI platforms, conducting group discussions on the ethical implications of AI, or group problem-solving activities can also serve as evaluative tools. By assessing students’ ability to engage in effective collaboration, educators can measure their proficiency in this domain.
Contextualization
Contextualization emphasizes the critical importance of the real-world application of AI. This component draws from existing research, which demonstrates that learning and understanding are significantly enhanced when individuals can apply acquired knowledge to real-world contexts (Vygotsky, 1978). This concept aligns with the framework's emphasis on contextualization, indicating that deeper AI comprehension occurs when students can practically apply AI knowledge in various settings, such as scientific, social, or everyday scenarios.
Incorporating contextualization into AI literacy enables students to explore how AI functions and applies across different domains, encouraging a holistic understanding of AI's role in different learning contexts. This approach is vital for fostering critical thinking and problem-solving skills, as indicated by prior research in the learning sciences (Chi et al., 1981). Research on the transfer of learning affirms the significance of understanding how AI concepts can be applied across different contexts, emphasizing that such knowledge application leads to a deeper and more flexible understanding of AI technologies.
For educators, integrating contextualization into AI literacy involves creating learning environments that enable students to apply their knowledge of AI technologies in various contexts. Educators should design activities that prompt students to apply AI concepts in different scenarios, fostering a comprehensive understanding of how AI operates across diverse domains. This approach assists in cultivating students’ abilities to critically assess, adapt, and implement AI technologies in real-world situations. For instance, educators can introduce case studies or scenarios that reflect practical AI applications in various fields, such as healthcare, finance, or environmental conservation. For example, students might explore how AI algorithms are utilized in predicting climate change or understanding AI-based medical diagnostics. This engagement allows them to understand the multifaceted nature of AI and its diverse applications across multiple disciplines.
Additionally, educators can leverage interdisciplinary projects that require students to collaborate across subjects and integrate AI knowledge into different domains. For instance, in a social studies class, students could analyze the ethical implications of AI in society, while in a science class, they might explore the scientific underpinnings of AI and its technological advancements. Encouraging such cross-disciplinary collaboration supports students in recognizing the contextual relevance of AI in varied fields, fostering a deeper understanding of AI technologies within different disciplines.
Autonomy
Autonomy in interacting with AI, where students independently make informed decisions and take responsible actions, is a vital aspect of AI literacy. Research in educational psychology and the learning sciences, drawing from theories on self-determination (Deci & Ryan, 2000) and self-regulated learning (Zimmerman, 2000) frequently highlights the importance of fostering student autonomy in decision-making processes. Autonomy thus stands as a significant component of the framework, emphasizing the importance of students’ capacity for self-determination and decision-making while engaging with AI. Autonomy encourages students to take ownership of their learning, providing them with a sense of control over their actions, and supporting their ability to make informed decisions. When students are granted autonomy, their intrinsic motivation, engagement, and self-regulation are positively impacted, leading to more profound learning experiences (Patall et al., 2008).
In the context of AI literacy, fostering autonomy involves empowering students to explore and make informed choices about their interactions with AI systems. This includes designing learning environments where students are encouraged to investigate and experiment with AI technologies independently. For instance, educators can introduce projects or activities that allow students to explore various AI tools and platforms, providing them with the freedom to select the tools they find interesting and engaging. Additionally, open-ended problem-solving scenarios related to AI applications allow students to navigate diverse choices, stimulating critical thinking and decision-making skills.
Educators can provide avenues for autonomous learning by facilitating open discussions on real-world AI applications. They might encourage students to actively seek out AI-related problems within their communities, allowing them to propose and develop AI solutions or recommendations. This strategy helps students connect AI literacy to real-life scenarios, reinforcing their sense of autonomy by enabling them to tackle authentic problems in their local context. Educators might also facilitate regular peer discussions where students share and compare the different approaches they have taken in AI-related tasks. The aim is to encourage students to reflect on their methods and decisions, promoting autonomy in their learning journey and reinforcing their understanding of AI literacy concepts.
Ethics
Ethics forms the foundational cornerstone of the AI literacy framework, permeating every facet of understanding AI technology. It is essential to recognize that ethical considerations significantly influence the design, development, implementation, and impact of AI systems. From the initial phases of AI design and development to their final output, ethical underpinnings guide decisions and ensure that AI technologies are aligned with societal values and norms. Educators play a pivotal role in instilling the significance of ethics, stressing its omnipresence across AI literacy domains.
Ethical considerations in AI encompass multifaceted dimensions, reflecting the profound impact of biases and ethical concerns in AI systems. Extensive research and scholarly discourse have highlighted embedded biases within AI algorithms (Baker & Hawn, 2021; Hutchinson & Mitchell, 2019). Biases often originate from the datasets used to train AI systems, perpetuating inequities in decision-making processes. Such biases, rooted in data, tend to reinforce societal prejudices and discrimination, leading to social disparities. Prior research has emphasized issues surrounding the interpretability and limitations of AI systems, as well as ethical AI calls for greater transparency and accountability in AI design (Bender et al., 2021; Hutchinson & Mitchell, 2019). These perspectives underscore the critical nature of AI ethics education, guiding students to comprehend the ethical intricacies of AI technology and recognize the consequences of biased algorithms on diverse social groups (Bender & Friedman, 2018; Hutchinson & Mitchell, 2019). Educators should lead discussions around these issues, guiding students to understand how biases manifest in AI, their ethical implications, and strategies to mitigate these biases for more equitable and fair AI use.
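To make the point above concrete for students, an educator could demonstrate how a skewed historical dataset produces a skewed "learned" rule. The sketch below is a deliberately simplified, invented example (not a real system): a model that predicts each group's most common past outcome, so identical applicants from different groups receive different decisions.

```python
# Simplified illustration of training-data bias propagating into
# an AI system's decisions. All records are invented.
from collections import Counter

# Historical decisions (group, outcome), skewed against group "B".
history = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "approve"),
]

def train_majority_rule(records):
    """'Learn' a rule: predict each group's most common past outcome."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_rule(history)
print(model)  # {'A': 'approve', 'B': 'deny'}: the past skew becomes the rule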
In AI literacy, educators should emphasize the ethical considerations involved in AI technology from the outset. This may involve discussing real-world examples of ethical dilemmas raised by AI systems. For instance, students can explore situations where AI algorithms exhibit biases or unfairness in decision-making processes, leading to disparities or discrimination. By critically examining these cases, educators guide students in comprehending the far-reaching ethical implications of AI in various domains. Moreover, integrating case studies into the curriculum allows students to analyze the ethical dimensions in different contexts, such as healthcare, law, or business, enabling a more comprehensive understanding of the ethical complexities in AI applications.
This comprehensive emphasis on ethics is fundamental in cultivating AI literacy among students. By addressing the ethical dimensions associated with AI technologies at every stage, educators can equip students with the critical thinking skills necessary to evaluate, reflect upon, and respond to the ethical challenges presented by AI. This approach ensures that students are well-prepared to navigate and shape the ethical landscape of the AI-driven world.
Policy Recommendations
As noted earlier, AI is advancing rapidly, outpacing AI-related policies. Recent efforts at the executive and legislative branches in the United States (e.g., The White House Office of Science and Technology Policy, 2022; US Department of Education, 2023) and around the world (e.g., European Commission, Directorate-General for Education, 2022) aim to guide the use of AI throughout all sectors of society, including education. The White House executive order on AI (The White House, 2023) is the first of its kind from the U.S. government, calling for safety assessments and equity among other priorities. The need is urgent for education-specific policies designed to enhance AI literacy for students and teachers in K-12 and postsecondary settings. Following the proposed AI literacy framework, these policies can focus on (a) the development of AI literacy standards in K-12, (b) regulation and ethics frameworks for the development of transparent and open AI tools, and (c) strategies for access and standardization of AI literacy training for educators.
Develop and Integrate AI Literacy Standards
AI literacy standards should be developed and integrated across the curriculum in K-12 education. Doing so is essential in empowering individuals to make informed decisions, critically assess, and comprehend AI technologies in various domains and contexts. This effort necessitates a state-level charge, akin to the Common Core State Standards (NGA, 2010) and Next Generation Science Standards (NGSS, 2013) initiatives, to help ensure that all students are college and career-ready in AI literacy no later than the end of high school. These standards need to center students, be research and evidence-based, safeguard and advance equity, advance learning outcomes, and mitigate potential risks of AI integration in the learning process. Most importantly, these standards need to be integrated across all academic areas (e.g., Arts, Computer Science, English Language Arts, STEM, and Social Studies) rather than compartmentalized or treated as a stand-alone academic area. As advocated in the framework, integrating AI literacy across areas can foster a multidisciplinary yet contextual understanding of AI.
Transparency and Openness in the Development of AI Literacy Tools
Responsible use of AI technologies is perhaps one of the most important priorities for policymaking. Policymakers should develop a human-centered regulatory and ethics framework that ensures transparency and openness of AI models used in education for learning and decision-making. Specific to AI literacy, such frameworks need to enable the evaluation of the qualities of AI models and their alignment with goals for teaching and learning before any recommendation for educational adoption is made. This includes creating widely accepted standards to evaluate the appropriateness of AI training data for intended use cases (e.g., data cards; Pushkarna et al., 2022). Decision makers will need to understand how AI models work, so they can better anticipate limitations, problems, and risks. Such transparency and openness will not only ensure the development and adoption of high-quality, fair, and unbiased tools, but also help build trust among education stakeholders. Most importantly, such policy needs to enable and require educators to be closely involved in the design of AI literacy tools.
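To illustrate what such documentation standards might capture, the sketch below shows a hypothetical, minimal "data card" for an educational dataset, loosely inspired by the data cards concept cited above. The field names and contents are illustrative assumptions, not a published schema.

```python
# Hypothetical minimal "data card" for an educational AI dataset.
# Field names and values are illustrative, not a standard schema.

data_card = {
    "dataset_name": "example-student-essays",       # invented dataset
    "intended_use": "formative writing feedback",
    "not_intended_for": ["grading high-stakes exams"],
    "population": "grade 6-8 students, U.S. English",
    "known_gaps": ["few samples from English learners"],
    "collection_consent": "guardian opt-in",
}

def flag_out_of_scope(card, proposed_use):
    """Return True if a proposed use is explicitly out of scope."""
    return proposed_use in card["not_intended_for"]

print(flag_out_of_scope(data_card, "grading high-stakes exams"))  # True
```

Even a simple structured record like this gives adoption committees something concrete to check before recommending a tool, which is the kind of pre-adoption evaluation the policy framework calls for.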
In this context, funding sustainable partnerships that include research institutions, industry, and departments of education is urgently needed. In doing so, these stakeholders must work together with policymakers to strike the right balance when it comes to issues of data openness and privacy (Draschkow et al., 2023). At the same time, policymakers should increase the strength of privacy and data collection rules for children and enforce existing COPPA regulations that are already in place (Reid Chassiakos et al., 2016).
Develop AI Literacy for (Pre-Service and In-Service) Educators
Given how localized and varied pre-service educator training is in the U.S., policies are needed to identify and mandate basic AI literacy training in pre-service teacher preparation programs, as well as professional development training for in-service teachers. In doing so, and consistent with the development of AI literacy standards, AI literacy needs to be integrated across academic content areas rather than treated as a stand-alone subject.
Concluding Remarks
Overall, the ED-AI Lit framework, as delineated through its components of Knowledge, Evaluation, Collaboration, Contextualization, Autonomy, and Ethics, represents a multidimensional, comprehensive approach that we believe is crucial for effective AI literacy training. This framework emphasizes that AI literacy should move beyond mere factual knowledge about AI, extending into broader educational domains that encourage students to reflect on the interconnectedness of AI in everyday life. The framework therefore stresses the importance of developing a deep understanding of how AI systems function, critically evaluating their implications, and fostering collaborative relationships between individuals and AI.
Further, this framework underscores the significance of contextualizing AI across diverse subjects and disciplines, enabling learners to understand its practical applications in various domains. It acknowledges the need for individuals to exercise autonomy in their interactions with AI technologies, reflecting on the ethical considerations and addressing biases, fairness, and transparency in AI systems. AI literacy, as depicted by ED-AI Lit, is not solely about amassing knowledge but about how AI shapes our experiences, learning, and comprehension of the world, thereby providing a holistic approach to AI literacy.
Finally, drawing on the framework leads to a set of policy recommendations that center on the development of AI literacy standards in K-12, the development of regulation and ethics frameworks for transparent and open AI education tools, and strategies for access and standardization of AI literacy training for educators. We call for broad collaboration among key stakeholders that centers educators and students in the development and implementation of these recommendations. We also recognize the need for a strong regulatory system at both state and federal levels that facilitates the necessary industry collaborations but also holds industry accountable (akin to how academic institutions ethically regulate research conducted by their employees). The ethical challenges AI tools pose for K-12 are not trivial; unless there is strong political will, these policies will fall short of addressing concerns about privacy, autonomy, surveillance, bias, and discrimination.
Footnotes
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Writing of this article was funded in part by the Guy Bond Chair in Reading and Distinguished McKnight University Professorship to P. Kendeou, as well as the Bonnie Westby Huebner Chair in Education & Technology to L. Allen from the University of Minnesota.
