Abstract
The integration of Artificial Intelligence (AI) into K-12 education holds significant potential for teacher empowerment, yet existing research often neglects the heterogeneity of teachers and the diverse pathways through which empowerment unfolds. This study investigates how AI empowers teachers in China’s K-12 sector, responding to national AI initiatives and addressing the overlooked diversity of teacher experiences. Using a qualitative multi-case design, five teachers were purposefully selected to capture variation in teaching experience, technical proficiency, subject specialization, geography, and pedagogical philosophy. Data were gathered through interviews, classroom observations, and AI usage logs over four months. Analysis reveals that empowerment is not linear but emerges through interconnected processes of efficiency-driven adoption, pedagogical innovation, and ethical reflection. The proposed three-tiered model underscores the situated and differentiated nature of AI integration, shaped by individual characteristics and contextual factors. These findings offer practical implications for policymakers and school leaders, highlighting the need for differentiated support ranging from technical training to ethical guidance. By foregrounding teacher agency and contextual variation, the study advances international discourse on AI in education and emphasizes that the transformative potential of AI depends as much on teachers’ values and professional judgment as on technological affordances.
Keywords
Introduction
The integration of Artificial Intelligence (AI) into education has become a global phenomenon, reshaping teaching practices and professional development across diverse contexts. Scholars note that AI can enhance instructional efficiency, facilitate personalized learning, and support teachers’ ongoing growth (Shang et al., 2025; Holmes et al., 2022; Li et al., 2025). In China, these global developments align with national priorities such as the Opinions on deepening the implementation of the “Artificial Intelligence Plus” action, which emphasizes integrating AI into all elements and processes of education to shift teaching from knowledge transmission to competency enhancement (State Council of the PRC, 2025). Yet, despite increasing interest, existing research often conceptualizes teacher empowerment through AI as a uniform process, overlooking the heterogeneity among teachers in terms of experience, subject expertise, digital proficiency, and cultural background (Huang et al., 2025; Zhang & Long, 2025). This gap is significant, as evidence suggests that teachers’ diverse characteristics and contexts critically shape how AI is adopted and integrated (Liu et al., 2025a; Holmes et al., 2022). Addressing this issue is necessary to move beyond generic models and toward a more differentiated understanding of empowerment. This study therefore explores the diverse pathways of AI empowerment among Chinese K-12 teachers. Specifically, it asks: RQ1: What differentiated pathways of AI empowerment emerge among teachers in China’s K-12 context? RQ2: How are these pathways shaped by individual characteristics and local contexts?
To frame these questions, the study draws on a three-tiered conceptual model, including technology adoption, pedagogical innovation, and ethical reconstruction, which positions teacher empowerment as multi-layered and non-linear, rather than a single trajectory. This framework highlights both opportunities for innovation and the ethical tensions of AI integration. Methodologically, the research employs a qualitative multi-case design, selecting five teachers with maximum variation in experience, subject specialization, technological proficiency, geographical location, and pedagogical philosophy. This sampling strategy ensures a nuanced comparison of empowerment pathways and illuminates the contextual dynamics shaping teachers’ engagement with AI.
Literature Review
Global Trends of AI in Education and Teacher Empowerment
The integration of AI into education has accelerated worldwide, driven by advances in machine learning, natural language processing, and data analytics. Scholars argue that AI can enhance instructional efficiency, foster adaptive learning environments, and support personalized instruction at scale (Holmes et al., 2022; Roll & Wylie, 2016). Global reports emphasize AI’s potential to empower teachers by automating administrative tasks and providing actionable feedback, allowing more focus on higher-order pedagogical work (Miao et al., 2021; UNESCO, 2025).
In China, these global currents intersect with national strategies aimed at digital transformation of education. Chinese scholars highlight the importance of aligning AI use with teacher professional identity and the national modernization agenda (Jiao, 2023; Yu, 2018). Yu (2018) anticipates AI “teacher roles” such as instructional assistant, learning analyst, and life coach, suggesting that human–machine coexistence will define future schools. Similarly, Jiao (2023) stresses AI’s role in enabling digital transformation of schooling, raising fundamental questions about “what students should learn” and “how teachers should teach.” These perspectives underscore the dual nature of AI integration, as both a technological innovation and a systemic educational reform.
AI and Teacher Professional Development: Opportunities and Challenges
Teacher professional development (TPD) has long been central to educational reform. With AI integration, scholars identify both new opportunities and pressing challenges. On the opportunity side, AI can provide intelligent diagnostics, personalized online training, and collaborative virtual professional learning communities (Huang et al., 2025; Zhang & Long, 2025). These functions contribute to scalable and differentiated TPD, potentially overcoming limitations of traditional one-size-fits-all models. Internationally, systematic reviews also find AI enhances teacher reflection and instructional design by providing evidence-based insights (Bauer et al., 2025; Li et al., 2025).
However, significant challenges remain. Liu et al. (2025a) observe that while competence frameworks exist, Chinese teachers often lack systematic pathways for AI competence development, making professional growth fragmented. Zhou (2025) critiques the risk of an “identity crisis,” in which technology overshadows teachers’ subjectivity and reduces their work to technical labor. Similar concerns emerge in Western debates: Holmes et al. (2022) and Karakuş et al. (2025) warn of automation bias and ethical tensions when AI mediates classroom decisions. These critiques reveal tensions between empowerment and disempowerment, suggesting that AI-enhanced TPD must carefully balance efficiency gains with the preservation of teacher agency.
Generative AI and Pedagogical Innovation
The recent rise of generative AI (GAI), exemplified by ChatGPT, has expanded debates on teaching innovation. Research shows that GAI offers powerful tools for lesson planning, content generation, and adaptive feedback, but its adoption varies by discipline and teacher readiness (Alammari, 2024; Su, 2025; Xia & Lai, 2025). For instance, Xia and Lai (2025) illustrate that while GAI supports text analysis in Chinese composition teaching, it struggles with creative lesson design, requiring hybrid “teacher + GAI” models. Su (2025) similarly identifies practical applications in chemistry instruction, ranging from resource preparation to assessment, while cautioning about quality and oversight.
Concurrently, scholars highlight both promise and caution. Zawacki-Richter et al. (2019) identify key domains where AI can support higher education teaching but stress the need for robust pedagogy. Ganjoo et al. (2024) and Yan et al. (2024) outline opportunities for GAI to transform assessment and personalized learning but warn against risks of misinformation and overreliance. Within China, researchers argue that while GAI enables collaborative innovation, it may also erode teachers’ creativity, producing a “technological toxicity” that reduces autonomy (Hou & Wang, 2025; Liu et al., 2025a). Together, these findings suggest that pedagogical innovation with GAI is contingent on designing collaborative, complementary models where human judgment remains central.
Ethical and Theoretical Considerations in AI Empowerment
Beyond technical applications, a growing body of work highlights the ethical and philosophical implications of AI in education. Holmes et al. (2022) propose a community-wide ethical framework to govern AI deployment in schools, emphasizing transparency, fairness, and accountability. Similarly, researchers in Spain argue that professional development for K-12 teachers should foreground ethical concerns and agency, helping teachers navigate uncertainty, resistance, and transition while fostering individual and collective agency around ethical issues (Mouta et al., 2025).
Chinese scholarship echoes these concerns, framing AI not only as an opportunity but also as a source of existential risks. Zhou (2025) warns of homogenization and “identity crisis,” while Shang et al. (2025) argue that GAI transforms educational ecosystems by restructuring information flows and teacher roles. Hong and Zhu (2025) employ phenomenology to reveal hidden risks: diminished teacher intentionality, blurred authority, and classroom disorder. These debates highlight that AI empowerment is not value-neutral; it entails ethical reconstruction of the teacher role.
From a theoretical perspective, several models emerge. Huang et al. (2025) propose an AI-enhanced teacher professional development (AIeTPD) model that incorporates teacher agency and developmental stages. Researchers from Hong Kong SAR propose a human-centered teaching and learning framework that uses generative AI to support self-regulated learning and catalyze changes in educational practice (Kong & Yang, 2024). Based on structural equation modeling of data from 304 preservice English teachers, Karataş and Ataç (2024) identified varying proficiency levels across TPACK and AI–TPACK dimensions, found a strong correlation between traditional TPACK and AI–TPACK skills, and affirmed the feasibility of developing both simultaneously within teacher training curricula. Together, these frameworks suggest that AI empowerment unfolds along differentiated pathways, shaped by contextual and ethical considerations.
Synthesis and Research Gap
In summary, existing scholarship highlights the transformative potential of AI in education by enabling teacher professional development, supporting pedagogical innovation, and raising ethical and philosophical debates across global and Chinese contexts. While these studies establish a valuable foundation, they also reveal critical limitations. Much of the current literature emphasizes the technological affordances of AI, often treating teachers as a homogeneous group and overlooking the heterogeneity of their professional needs, subject backgrounds, and contextual realities. Similarly, although theoretical models such as AI-enhanced professional development frameworks (Huang et al., 2025) provide structural insights, empirical research on differentiated empowerment pathways remains scarce.
These gaps suggest the need for studies that move beyond generalized claims to examine how AI empowers teachers in diverse, situated ways. Specifically, little is known about the micro-level processes through which individual K-12 teachers in China experience and negotiate AI-enabled empowerment. Addressing this gap, the present study investigates differentiated empowerment pathways through a multi-case design, thereby contributing to both theoretical refinement and practical guidance for AI-integrated teacher development.
Research Methodology
Research Design
This study employs a qualitative multiple-case study design (Yin, 2018) to examine how diverse factors mediate AI-enabled teacher empowerment in China’s stratified K-12 system. Grounded in the premise that empowerment pathways are shaped by dynamic interactions between teachers and their contexts (Huang et al., 2025), this study adopted purposeful maximum variation sampling to capture a wide spectrum of heterogeneity (Patton, 2015). The sample included teachers at different stages of their professional careers, from novices with only two years of experience to veterans with over two decades in the classroom. Participants also varied in their technical proficiency, ranging from minimal digital competence to expert-level use. Disciplinary backgrounds spanned STEM and humanities fields as well as ideologically oriented subjects, while geographic diversity was ensured by including both metropolitan schools and peripheral, resource-constrained regions. Finally, variation in pedagogical orientation was reflected in approaches that ranged from efficiency-driven instruction to more critically reflective forms of praxis.
To operationalize contextual complexity, each case was conceptualized as a bounded ecosystem (Stake, 2013) where the appropriation of AI was shaped by the intersection of institutional mandates, disciplinary epistemologies, and career-stage development. Institutional pressures differed significantly, from municipal reforms emphasizing technological modernization to equity-driven interventions in under-resourced areas. Disciplinary orientations likewise conditioned engagement with AI, with positivist traditions in science leading to distinct practices compared to constructivist approaches in the humanities. Career stage further mediated these dynamics: while early-career teachers often used AI experimentally, more experienced colleagues integrated it within established pedagogical repertoires. To trace such non-linear empowerment trajectories, the study employed process tracing (Mahoney, 2015), supplemented by constant comparison with existing frameworks to refine conceptual insights. Rather than pursuing statistical representativeness, the research aimed for analytical generalizability (Firestone, 1993) through thick contextualization of each case.
Participants and Sampling
The study engaged five K-12 teachers purposefully selected to embody maximum variation across predetermined dimensions, with each case representing a unique confluence of professional identity and contextual constraints. First, Jenny (pseudonym), a senior Beijing-based Chinese language teacher with 25 years’ experience, was included not only because she belongs to the underrepresented 41–50 age cohort in AI adoption literature, but also due to her current role in Tibet through China’s Aid-Tibet Teacher Program. This dual positioning captures tensions between urban technological affordances and rural infrastructural limitations, while her advocacy for standardized-yet-personalized pedagogy exemplifies veteran teachers’ negotiation of policy mandates in resource-scarce settings.
Subsequently, Lauren (pseudonym), a mid-career (13 years) Chinese language teacher in Beijing, was chosen to contrast Jenny’s equity focus; specifically, Lauren’s student-centered approach and leadership in school-wide AI initiatives reflect metropolitan educators’ responses to Haidian District’s “AI+ Teaching” reforms. Meanwhile, Mark (pseudonym), a mathematics teacher with computer science training, embodies hybrid expertise at the technology-pedagogy interface; by contrast, Lily (pseudonym), a junior science teacher (5 years), reveals how early-career practitioners leverage AI for inquiry-based learning despite limited technical fluency. Finally, Henry (pseudonym), a politics teacher with legal academic credentials, provides a critical counterpoint; whereas most participants prioritized efficiency gains, his insistence on preserving “cognitive struggle” and interrogating AI’s ideological accuracy highlights ethical tensions in ideological education.
Collectively, these cases map onto China’s core–periphery educational axis: three teachers operate in Beijing’s innovation ecosystem (Lauren, Mark, Lily), while Jenny and Henry, though institutionally Beijing-affiliated, engage contrasting peripheries (Tibet’s equity challenges and ideological frontiers, respectively).
Profiles of the Five Case Study Teachers
a. Wenxinyiyan is the Chinese name for Baidu’s large language model and generative AI chatbot, also branded in English as ERNIE Bot.
Data Collection
Data collection spanned March to July 2024 and employed triangulated methods (Creswell & Poth, 2018) to capture multidimensional facets of AI-enabled teacher empowerment, with semi-structured interviews serving as the primary data source complemented by artifact analysis and usage logs. Specifically, 1–3 rounds of 20–90 minute interviews were conducted per participant (totaling 13 sessions), each guided by a protocol derived from China’s Teacher Digital Literacy Framework (Wu et al., 2023) and addressing four thematic clusters: (a) professional background and pedagogical identity, (b) generative AI adoption trajectories, (c) perceived affordances/constraints, and (d) contextual support needs. Notably, the protocol’s flexibility allowed probing unanticipated themes, such as Jenny’s articulation of standardized-personalized hybridization in Tibetan classrooms or Henry’s critique of AI’s “ideological hallucinations”, while ensuring cross-case comparability through core question consistency.
Concurrently, digital artifacts were collected to triangulate self-reported practices with material outputs, including AI-generated lesson plans, student feedback reports, and cross-disciplinary project designs. For instance, Mark’s development of Excel-based grading tools and Lauren’s Wenxinyiyan-edited meeting minutes provided tangible evidence of efficiency-pedagogy tensions. Additionally, participants maintained usage logs documenting frequency, duration, and perceived utility of AI tools across four months; these logs were structured using Likert scales for efficiency gains (1 = minimal to 5 = substantial) and open-field annotations for contextual barriers (e.g., Jenny’s intermittent connectivity issues in Tibet).
To contextualize institutional dynamics, policy documents and school-based AI implementation guidelines (e.g., Haidian District’s Smart Education framework) were analyzed, thereby situating individual practices within broader reform ecosystems. All interviews were audio-recorded and transcribed verbatim, with member-checking ensuring accuracy, while observational notes captured nonverbal cues during artifact demonstration sessions.
Data Analysis
All interviews were transcribed verbatim and analyzed using thematic analysis (Braun & Clarke, 2006). Data analysis followed a hybrid inductive-deductive approach (Fereday & Muir-Cochrane, 2006) structured in three iterative phases, with the process exemplified through Lauren’s case to demonstrate methodological rigor. Initially, open coding of interview transcripts and observational notes identified in-vivo codes preserving teachers’ original expressions (e.g., Lauren’s description of AI-generated comments as “linguistically incoherent”). Subsequently, axial coding integrated these codes with deductive categories from the empowerment framework (Technology Adoption/Pedagogical Innovation/Ethical Reconstruction), while thematic mapping synthesized patterns across cases through constant comparison (Corbin & Strauss, 2015).
Analytical Progression: Lauren’s AI Adoption Barriers and Breakthroughs
a. Class Manager (班级小管家) is a highly popular all-in-one WeChat mini-program designed to help teachers, parents, and students manage class activities and communication efficiently. It functions as a digital classroom assistant, deeply integrated into the WeChat ecosystem, which makes it extremely convenient for users in China.
Crucially, this process revealed how Lauren’s breakthrough in using Class Manager (班级小管家) for templated comments, despite its limitations, represented a strategic adaptation within institutional constraints, whereby she leveraged AI’s efficiency while preserving agency through selective manual editing. Moreover, negative case analysis (e.g., her unexpected success with Wenxinyiyan for research summaries) refined the category “Contextual Affordance Recognition”, suggesting that tool effectiveness depends on task alignment rather than inherent capability.
The data analysis extended beyond thematic coding to engage in a multi-layered interpretive process, integrating both within-case and cross-case analysis (Yin, 2018) to construct robust explanations for the diverse empowerment pathways observed. This process commenced with the hybrid thematic coding detailed in Table 2 (open-axial-thematic), which served to segment and categorize the data. However, the core of the analysis involved iterative cycling between the empirical data and emerging theoretical insights to discern patterns, mechanisms, and contradictions.
For each individual case, we constructed a detailed narrative account that chronicled the teacher’s AI engagement journey, contextualizing their choices within their specific institutional and professional settings. This within-case analysis was pivotal for understanding the idiosyncratic logic behind each teacher’s actions. For instance, Lauren’s initial resistance to AI-generated feedback was not merely a technical failure but a pedagogical identity struggle; her narrative revealed a deep-seated belief that personalized comments were a core component of her craft, which impersonal AI outputs threatened to erode. Subsequently, we employed explanation building (Yin, 2018) to propose and refine tentative hypotheses about what empowered or constrained each teacher. In Lauren’s case, the tentative hypothesis—Teachers prioritize AI tools that augment rather than replace their pedagogical voice—was tested against her entire dataset, including her eventual adoption of Class Manager for templated tasks, which supported and refined this explanation.
The cross-case analysis followed, seeking patterns that cut across the individual narratives. We utilized pattern matching to compare the empirical patterns with those predicted by our initial theoretical framework (the three-tiered model). Simultaneously, we conducted a thematic synthesis to identify higher-order themes that encapsulated the experiences of multiple participants. Critically, we also searched for crucial exceptions and negative cases that challenged our emerging theories. For example, while the pattern of “efficiency gains” was common (namely among Lily, Lauren, and Mark), Henry’s case served as a negative case for the assumption that efficiency is always desirable, thereby strengthening the theory by defining its boundaries.
Illustrative Data Analysis Progression for Lauren’s Case
Analytical trustworthiness was enhanced through: (1) Peer debriefing resolving coding discrepancies (e.g., whether Mark’s tool customization constituted Pedagogical Innovation or Technology Adoption); (2) Triangulation across interviews, observations, and digital logs; (3) Inviting participants to verify transcripts and interpretations for member checking. Ethical approval was obtained from the host institution. All participants provided informed consent, and pseudonyms were used to protect identities.
Results
Drawing on the multi-case qualitative analysis, the findings reveal distinct yet interconnected pathways of AI-enabled teacher empowerment, which are synthesized in a three-tiered model (Figure 1). The Technology Adoption Layer highlights efficiency-driven uses such as automated feedback and resource generation (e.g., Lily, Lauren). The Pedagogical Innovation Layer emphasizes the redesign of teaching practices through hybrid AI–disciplinary approaches and the balancing of standardization with personalization (e.g., Mark, Jenny). The Ethical Reconstruction Layer foregrounds critical reflection on educational values, including concerns about factual accuracy, authenticity, and the preservation of students’ cognitive struggle (e.g., Henry). The bidirectional arrows indicate recursive and non-linear transitions across layers, suggesting that teachers may move fluidly between layers rather than progressing linearly.

Figure 1. Three-tiered model of AI empowerment pathways
Technology Adoption Layer: Efficiency Gains and Initial Engagement
The first layer of the model, Technology Adoption, reflects teachers’ initial engagement with AI as a tool for efficiency and routine task support. Teachers in this category primarily utilized AI to streamline lesson preparation, generate teaching resources, and provide automated feedback. While such adoption represents an important entry point into AI integration, its scope remained constrained by technical fluency, especially in prompt design and tool customization.
Lily, a primary science teacher with five years of experience, epitomizes this layer. She frequently relied on AI tools to generate practice tasks and inquiry prompts for her students, especially within her extracurricular “Intelligent Creation” club. As she explained: “AI saves me a lot of time on routine work, like creating worksheets or coming up with small project ideas. But when I try to design more open-ended tasks, the suggestions are often too generic, and I still need to adjust them myself.” (Lily, Interview)
Here, AI served as an assistant rather than a co-designer, enabling Lily to allocate more time to hands-on experiments and classroom discussions. However, the dependence on system-generated outputs also highlighted her limited mastery of prompt engineering, which constrained the sophistication of AI-supported teaching materials.
Similarly, Lauren, a Chinese Language teacher and research coordinator, integrated AI primarily for lesson planning and communication tasks. She noted that AI “reduced the repetitive burden of drafting parent notices and generating reading comprehension questions,” yet she expressed skepticism about relying on AI for tasks involving literary nuance. Her case underscores how adoption was shaped not only by technical skills but also by disciplinary epistemologies: whereas science education lends itself to structured tasks and factual prompts, language arts demanded more interpretive sensitivity, limiting AI’s role to logistical rather than conceptual support.
Both cases demonstrate that technology adoption is not trivial but foundational, aligning with prior findings that efficiency gains often mark the earliest stage of teacher-AI interaction (Holmes et al., 2022; Huang et al., 2025). Importantly, Lily and Lauren illustrate how initial adoption is simultaneously empowering and constraining—empowering by reducing workload and opening time for student-centered teaching, yet constraining due to limited skills and AI’s generic output quality. This layer is therefore characterized by pragmatic utility rather than transformative pedagogy, forming the baseline upon which deeper innovation may or may not emerge.
Pedagogical Innovation Layer: Redesigning Teaching Practices
The second layer of the empowerment model, Pedagogical Innovation, captures teachers’ movement beyond efficiency gains toward redesigning their instructional practices with AI as a co-creative partner. Unlike the adoption stage, where AI was primarily a time-saving assistant, teachers in this layer integrated AI into their pedagogical vision, reconfiguring how knowledge was presented, scaffolded, and personalized for students.
Mark, a mathematics teacher with a background in computer science, exemplifies this trajectory. His dual expertise allowed him to experiment with hybridizing AI tools with existing digital platforms, such as GeoGebra and custom-built Excel scripts. During classroom observations, he demonstrated how AI could generate alternative solution pathways to mathematical problems, which he then incorporated into discussions to encourage divergent thinking. As he explained: “AI doesn’t just give me the answers—it helps me see multiple ways of solving the same problem. I use that to show students that math is not just about one formula, but about flexible thinking.” (Mark, Interview)
Here, AI became a pedagogical collaborator, enriching students’ mathematical experience by diversifying problem-solving approaches. This resonates with prior scholarship that highlights AI’s potential to support epistemic diversity in teaching (Zhang & Long, 2025).
Jenny, a senior teacher with 25 years of experience and currently serving in Tibet as part of a national education initiative, approached AI innovation differently. Confronted with resource limitations in her local school, she leveraged AI to standardize lesson structures while embedding opportunities for personalization. For example, she used AI to generate multiple levels of reading comprehension exercises for her students, aligning them with national curriculum standards but tailoring difficulty to varied student profiles. Jenny emphasized that such innovation was not only a matter of efficiency but also of equity: “Out here, resources are limited. AI helps me give each student something closer to what they need, while still meeting the national standards. It’s like building a bridge between fairness and individuality.” (Jenny, Interview)
Her case illustrates the sociocultural dimension of AI empowerment—how technological innovation is situated within broader concerns of educational equity and regional development (Holmes et al., 2022; Liu et al., 2025b).
Together, Mark and Jenny demonstrate that pedagogical innovation with AI does not follow a single trajectory. Instead, it is context-contingent: Mark’s technology-driven experimentation stemmed from his disciplinary and technical expertise, while Jenny’s adaptation was guided by the imperative of educational fairness in an underserved region. Both cases underscore that innovation emerges when teachers reinterpret AI not as an external tool but as an integral component of pedagogical design, reshaping classroom practices to align with their values and contexts.
Ethical Reconstruction Layer: Critical Engagement and Value Reflection
The third layer, Ethical Reconstruction, represents the most complex form of AI empowerment observed in this study. At this stage, teachers move beyond technical adoption or pedagogical redesign to engage with AI as a catalyst for critical reflection on educational values, epistemic authority, and the purpose of schooling. Rather than asking what AI can do for teaching, teachers in this layer interrogate what teaching ought to remain beyond AI’s reach.
Henry, a young politics teacher with a law background, exemplifies this orientation. In interviews, he expressed unease with the uncritical integration of AI into social science teaching, particularly in a subject where interpretation, debate, and ambiguity are pedagogical assets rather than obstacles. As he noted: “AI gives polished answers, but politics is not about polished answers—it’s about questions that don’t have easy solutions. If I let AI answer for students, they miss the struggle of thinking through contradictions themselves.” (Henry, Interview)
In classroom observations, Henry resisted over-reliance on AI-generated content. Instead, he occasionally used AI outputs as provocations, for example asking students to fact-check or deconstruct a machine-generated explanation of constitutional law. This practice transformed AI from a knowledge provider into a pedagogical foil, sharpening students’ critical thinking skills by highlighting both the strengths and limitations of algorithmic reasoning.
Henry’s case surfaces important tensions in AI adoption for education. On the one hand, AI promises accuracy, efficiency, and breadth of knowledge. On the other hand, its epistemic opacity (the “black box” problem) and tendency to generate overconfident but flawed answers raise concerns about its compatibility with educational goals centered on critical reasoning, intellectual autonomy, and democratic citizenship (Edwards et al., 2018; Holmes et al., 2022). Henry’s insistence on preserving what he called “students’ cognitive struggle” positions him not merely as a user of AI but as a guardian of educational integrity.
This critical stance resonates with recent warnings in both Chinese and international scholarship about the risks of pedagogical outsourcing and the erosion of teacher and student agency (Hou & Wang, 2025; Liu et al., 2025a). By deliberately constraining AI’s role, Henry exemplifies how teacher empowerment may involve ethical boundary-setting as much as technical skill or pedagogical creativity. His case suggests that the highest form of empowerment may lie not in maximizing AI’s integration but in selectively resisting or repurposing it to foreground humanistic values.
Taken together, the Ethical Reconstruction layer underscores that AI in education is not a neutral technological enhancement but a site of ideological contestation. Teachers like Henry remind us that empowerment must be conceptualized not only in terms of efficiency and innovation but also in terms of values, ethics, and the enduring role of human judgment in education.
Summary of Findings
The cross-case analysis revealed that AI empowerment among Chinese K-12 teachers unfolds along heterogeneous and non-linear trajectories, structured into three interrelated layers: Technology Adoption, Pedagogical Innovation, and Ethical Reconstruction. These layers do not represent a strictly sequential progression but rather distinct pathways shaped by teachers’ disciplinary contexts, professional backgrounds, and personal philosophies.
At the Technology Adoption Layer, teachers like Lily and Lauren leveraged AI primarily for efficiency gains, using it to automate repetitive tasks such as worksheet generation, reading exercises, or parent communication. While this adoption reduced workload and freed time for student-centered teaching, its impact was limited by technical fluency and the generic quality of AI outputs.
At the Pedagogical Innovation Layer, teachers such as Mark and Jenny moved beyond efficiency to reconfigure instructional practices. Mark integrated AI with mathematics tools to design hybridized teaching resources, while Jenny envisioned standardized yet personalized lesson designs in a resource-constrained Tibetan context. These cases demonstrate how AI can catalyze curricular creativity and instructional redesign, albeit unevenly across contexts and subjects.
At the Ethical Reconstruction Layer, represented by Henry, AI functioned less as a tool and more as a provocation for ethical inquiry and value reflection. His selective resistance to AI’s authoritative outputs foregrounded the enduring importance of critical thinking, intellectual struggle, and the preservation of human agency in teaching. This layer highlights empowerment not as uncritical integration but as the ability to negotiate AI’s role in alignment with educational values.
Overall, the findings underscore that AI empowerment is situated and differentiated: while some teachers are empowered through technical support or pedagogical enhancement, others derive empowerment from critical distance and ethical boundary-setting. This layered model expands current understandings of teacher–AI interaction by showing that empowerment is not monolithic but multi-dimensional, context-dependent, and value-laden. Such findings invite policymakers and school leaders to design differentiated support strategies: technical training for adopters, pedagogical co-design opportunities for innovators, and ethical guidelines for critical practitioners. More broadly, the results suggest that teacher empowerment in the age of AI is as much about professional agency and ethical discernment as it is about technological competence.
Discussion
Reframing Teacher Empowerment in the Age of AI
This study demonstrates that teacher empowerment in the age of artificial intelligence (AI) must be reconceptualized as a multi-dimensional and context-sensitive process, rather than a linear trajectory of adoption. Prior frameworks such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Teo, 2011; Venkatesh, 2022) emphasize access and use, assuming a uniform progression from awareness to acceptance. Our evidence challenges this by showing that teachers engage with AI in divergent ways: as adopters pursuing efficiency, as innovators redesigning pedagogy, and as critical actors negotiating ethical boundaries.
The proposed three-tiered model—Technology Adoption, Pedagogical Innovation, and Ethical Reconstruction—captures this diversity. Lily and Lauren’s pragmatic adoption reflects empowerment through efficiency gains, while Mark and Jenny illustrate how AI stimulates instructional redesign. Henry, in contrast, demonstrates that empowerment may also emerge through resistance and ethical reflection, preserving human agency in learning. This reconceptualization aligns with scholarship stressing heterogeneity and contextualization in digital innovation (Holmes et al., 2022), as well as Chinese research advocating plural pathways for teacher development (Huang et al., 2025; Zhang & Long, 2025). By situating empowerment within both technical and ethical domains, the study foregrounds teacher agency as decisive for AI’s future in education.
Differentiated Pathways and Professional Agency
Findings also reveal that empowerment unfolds through differentiated pathways shaped by professional identity, disciplinary culture, and institutional context. AI functions as a situated catalyst rather than a universal progression (Ertmer & Ottenbreit-Leftwich, 2014). Early-career teachers such as Lily and Henry used AI for scaffolding or ethical positioning, while mid-career teachers like Lauren and Mark employed it for pedagogical reinvention. Jenny’s case highlights how AI can address structural inequities, reframing empowerment in terms of educational justice.
These cases confirm that agency is not mere adaptability but the capacity to negotiate, resist, or strategically appropriate AI in pursuit of educational aims. This resonates with Chinese debates warning against over-technologizing teacher development and emphasizing reflexive agency (Hou & Wang, 2025; Zhou, 2025). The model thus demonstrates that empowerment does not progress linearly from adoption to ethics but is contingent upon teachers’ situated judgments. Policies and professional development must therefore move beyond generic digital training toward context-responsive systems that recognize plural pathways of empowerment.
The Ethical-Political Dimension of AI Empowerment
The third layer, Ethical Reconstruction, highlights AI as a catalyst for reflecting on the purposes of education. Henry’s skepticism toward AI-generated content, particularly regarding factual accuracy and student cognitive struggle, aligns with critiques warning against uncritical adoption of algorithmic systems (Williamson & Eynon, 2020). At stake is not only reliability but also the political economy of knowledge production, as AI may embed corporate logics into education and shift authority away from teachers (Mouta et al., 2025). Henry’s resistance exemplifies how empowerment entails reclaiming teachers’ ethical role as gatekeepers of learning.
In China, where state-driven initiatives frame AI as central to modernization, these dilemmas are acute. While policy emphasizes efficiency and innovation, it risks overshadowing ethical concerns. Chinese scholars similarly warn against reducing teachers to technological executors and call for cultivating reflexivity to safeguard values (Hou & Wang, 2025; Zhou, 2025). Henry’s case illustrates how teachers can transform AI from a threat into a provocation sustaining critical inquiry. Thus, empowerment must be seen as ethical-political agency, enabling teachers to deliberate what should be preserved or transformed in the digital era.
Synthesizing the Three Layers: Toward a Multi-Pathway Understanding
Overall, the findings confirm that AI empowerment is not a linear or universal trajectory but a constellation of intersecting pathways. The three-tiered model captures distinct yet overlapping dimensions of teacher agency—from efficiency-seeking to pedagogical innovation to ethical reflection. Importantly, these pathways are fluid and contingent. Lily and Lauren’s efficiency-oriented practices may plateau or evolve, depending on support and capacity. Jenny and Mark’s innovations highlight how socio-cultural and institutional positioning shapes AI use, while Henry shows that ethical reflection can be a primary entry point rather than an advanced stage.
This perspective challenges deterministic narratives of AI adoption and underscores the risks of reducing integration to a purely technical issue. Empowerment is instead situated, relational, and value-laden, shaped by identity, philosophy, and teaching environment. The model highlights how efficiency, innovation, and ethics coexist and sometimes conflict, demanding differentiated and adaptive support systems. Such an understanding is vital for designing teacher development strategies that respect diversity while fostering reflexive engagement.
Implications
The findings carry important implications for policy, practice, and research in AI-enhanced teaching. By framing empowerment through the three-tiered model—Technology Adoption, Pedagogical Innovation, and Ethical Reconstruction—the study emphasizes the multi-pathway and non-linear nature of teacher engagement, underscoring the need for context-sensitive rather than uniform solutions.
At the policy level, national initiatives often assume linear progress from adoption to innovation (Teo, 2011; Venkatesh, 2022), yet findings reveal diverse routes shaped by subject, institution, and values. Policymakers should avoid prescriptive benchmarks of “AI competence” and instead foster ecosystems with multiple entry points, from basic literacy workshops to advanced pedagogical labs and ethics-focused forums. Safeguarding teachers’ professional agency is critical to prevent AI integration from devolving into technocratic control.
At the practice level, empowerment involves affective and ethical dimensions beyond technical skills. For Lily and Lauren, efficiency gains must be paired with scaffolds for creativity; Jenny’s case highlights equity concerns in under-resourced contexts; Mark demonstrates the value of cross-disciplinary experimentation; and Henry’s reflections stress interrogating AI’s impact on cognition and civic values (Williamson & Eynon, 2020). Professional learning communities should thus address not only “how” but also “why” and “to what ends” AI is used. Moreover, AI reshapes professional identity: teachers alternately adopt, innovate, or resist, redefining what it means to teach in digitally mediated contexts. Schools must therefore support reflective practice and identity negotiation.
At the research level, the study calls for nuanced explorations of teachers’ heterogeneous pathways. Existing work emphasizes adoption metrics or innovation, often neglecting ethical and identity-related domains. Future studies should adopt longitudinal, multi-sited, and comparative designs to examine evolving relationships across contexts, and integrate students’ perspectives to capture how AI-mediated learning affects empowerment.
Taken together, these implications demand a reorientation of AI-in-education discourse. AI should be viewed not as a neutral tool but as a socio-technical phenomenon that simultaneously enhances, challenges, and reshapes teaching. For policymakers, this means designing inclusive and flexible systems; for practitioners, cultivating collaborative and reflective spaces; and for researchers, pursuing inquiries that capture the fluid, contested nature of AI integration. Ultimately, AI’s promise will not be realized by technology itself but by teachers’ agency in negotiating its use. Recognizing their diverse pathways of empowerment is essential for ensuring AI becomes a catalyst for meaningful transformation rather than standardization or control.
Conclusion
This study examined how artificial intelligence (AI) empowers K-12 teachers in China by tracing the heterogeneous ways in which educators with diverse professional backgrounds engage with emerging technologies. Through a qualitative multi-case study of five teachers, the findings conceptualized AI empowerment as a three-tiered, non-linear model: the Technology Adoption Layer oriented toward efficiency gains, the Pedagogical Innovation Layer involving redesign of instructional practices, and the Ethical Reconstruction Layer where AI catalyzed reflection on educational values.
The study contributes to the literature in three ways. First, it demonstrates that teacher empowerment with AI is not monolithic, but unfolds along varied trajectories shaped by career stage, disciplinary expertise, and institutional context. Second, it underscores the interplay of technical, pedagogical, and ethical dimensions, extending prevailing models of teacher professional development. Third, by situating the inquiry within China’s rapidly evolving policy landscape, the research highlights how national initiatives interact with local realities to shape teachers’ agency.
At the same time, the study has limitations. The small sample size and relatively short data collection period limit generalizability and the ability to capture long-term transformations. Moreover, student perspectives were not systematically incorporated, which constrains understanding of how AI-mediated teacher empowerment translates into learning outcomes.
Despite these limitations, the study underscores that AI can act both as a tool and a reflective mirror: enabling efficiency, prompting pedagogical innovation, and provoking ethical reconsideration. For policymakers and school leaders, the findings suggest the need for differentiated support strategies that recognize teachers’ diverse starting points and evolving needs. Ultimately, AI should be understood not as a singular solution but as a situated catalyst that interacts with teacher agency to shape the future of education.
Footnotes
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research is supported by the Beijing Social Science Fund 2023 key project, “Artificial Intelligence Generated Content and Teacher Development” (No. 23JYA004).
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.