Abstract
Understanding motivational drivers behind prospective teachers’ artificial intelligence (AI) integration intentions is critical. While prior models such as the technology acceptance model overlook motivational dynamics, this study used expectancy-value theory to assess how expectancy, attainment, utility, intrinsic value, and cost shape intentions. Data from 454 prospective teachers were analyzed via structural equation modeling using the Behavioral Intention Scale and the Questionnaire of Artificial Intelligence Use Motives. Utility value (β = .29, p < .001) was the strongest predictor, followed by cost (β = −.27), intrinsic value (β = .25), attainment (β = .21), and expectancy (β = .10). The model explained 62% of variance in behavioral intention. Control variables, including gender, class level, and AI usage frequency, significantly influenced intentions. Findings suggested that teacher education programs should enhance AI’s perceived utility, address implementation costs, and strengthen expectancy through training to foster adoption. Emphasizing AI’s practical relevance within supportive environments can bridge its potential with classroom integration.
Keywords
Introduction
Artificial intelligence (AI), characterized by the ability of machines to perform tasks requiring human-like intelligence, holds transformative potential for education. Its capacity to enable personalized learning, adapt to individual student needs, and provide data-driven insights for educators underscores its growing adoption in educational settings (Balcı, 2024; Lampou, 2023; Xu & Ouyang, 2022). However, it is essential to acknowledge ongoing scholarly debates regarding AI’s effectiveness in achieving these outcomes. Critics raise concerns about the empirical rigor and reliability of existing evidence, noting that many studies lack rigorous experimental designs, use small sample sizes, and suffer from methodologic transparency issues (Chen, 2025; Şanlı, 2025). Beyond methodologic limitations, questions persist about whether AI genuinely enables individualized learning or merely creates the illusion of personalization through superficial adaptations that fail to capture critical dimensions of learning, such as creativity and critical thinking (Konstantinidis, 2025). Furthermore, algorithmic bias and equity concerns underscore how AI systems may perpetuate historical inequalities embedded in educational data, resulting in discriminatory outcomes based on gender, race, or socioeconomic status (Al-Zahrani, 2024; Mokoena & Seeletse, 2025). Despite these debates, the field’s growing investment in AI—reflected in policy initiatives and institutional adoption—makes it imperative to understand what drives educators toward or away from AI integration. Regardless of definitive evidence on AI’s effectiveness, implementing these technologies fundamentally depends on educators’ willingness to adopt them, underscoring the critical importance of examining the motivational drivers behind prospective teachers’ behavioral intentions toward AI use.
To better understand these motivational drivers, a systematic review of the literature revealed that existing studies predominantly employ frameworks such as the technology acceptance model (TAM) and the unified theory of acceptance and use of technology (UTAUT) to explain AI adoption (Ayanwale et al., 2022; Ma & Lei, 2024; Scherer, Howard et al., 2019; Zhang et al., 2023). Although these models highlight critical factors such as perceived usefulness, ease of use, and social influence, they often overlook intrinsic motivational constructs, including educators’ values, self-concept, and cost-benefit evaluations (Cheng et al., 2020; Ranellucci et al., 2020). For instance, Ma and Lei (2024) identified AI literacy and subjective norms as predictors of behavioral intention. However, they did not explore how educators’ intrinsic interest or perceived importance of AI mastery shapes their decisions. This narrow focus limits the holistic understanding of AI adoption, particularly among prospective teachers who play a pivotal role in future classrooms.
Addressing this limitation, the expectancy-value theory (EVT; Eccles & Wigfield, 2020, 2023) offers a complementary framework to address this gap. EVT posits that individuals’ choices and persistence in tasks are driven by their expectancy of success and the value they assign to the task, including attainment (personal importance), utility (practical benefits), intrinsic interest (enjoyment), and perceived costs. Despite its relevance, EVT remains underexplored in the field of AI adoption research. Previous attempts to integrate EVT with TAMs, such as those by Ranellucci et al. (2020), have faced methodologic challenges in adapting EVT scales to technological contexts, underscoring the need for context-specific instruments. Although Yurt and Kaşarcı (2024) developed a validated Questionnaire of Artificial Intelligence Use Motives (QAIUM), no study has applied EVT holistically to investigate prospective teachers’ intentions, a critical oversight given their unique role in bridging AI innovation with pedagogic practice.
Building on this theoretical foundation, this study addressed the theoretical and methodologic gaps by systematically analyzing how the core constructs of EVT—expectancy, attainment, utility, intrinsic value, and cost—collectively shape prospective teachers’ intentions to adopt AI. By synthesizing the literature on motivation and technology integration, this work advances a nuanced framework that bridges the divide between practical acceptance factors and psychological drivers. It also responds to recent calls for research that examines both the “how” and “why” of AI adoption, particularly in teacher education programs (Chan & Zhou, 2023).
In summary, although TAM and UTAUT have laid the foundation for understanding AI adoption, this study pioneers the application of EVT to unravel the motivational complexities underlying prospective teachers’ intentions. Doing so provides actionable insights for designing teacher education curricula and policies that foster sustainable AI integration in education.
Understanding these motivational dynamics is particularly critical in contexts where AI integration depends more on individual agency than on institutionalized preparation. In Türkiye, teacher education programs are regulated by the Council of Higher Education and follow national curriculum frameworks established by the Ministry of National Education. While these frameworks increasingly emphasize the integration of digital technologies into teaching practices, the extent to which emerging technologies such as AI are explicitly embedded into teacher training curricula remains limited and largely depends on individual institutions’ initiatives (Aljemely, 2024; Tan et al., 2025; United Nations Educational, Scientific and Cultural Organization [UNESCO], 2024). The Turkish Teacher Competencies Framework, published by the Ministry of National Education (Millî Eğitim Bakanlığı [MEB], 2017), emphasizes technological pedagogic competence as a core professional responsibility, requiring teachers to use digital tools to enhance learning outcomes effectively. However, formal AI-specific training is not yet systematically integrated into preservice teacher education programs. Consequently, prospective teachers’ decisions to adopt AI tools in their future practice often rely more on personal motivation and perceived value rather than on institutionalized professional preparation. This situation may create tension between policy expectations and practical realities. While technological competence is defined as a professional responsibility in official frameworks, adopting emerging AI tools largely depends on individual initiative and intrinsic motivation rather than systematic institutional support. This context—characterized by national policy frameworks emphasizing digital competence alongside limited institutionalized AI training—is not unique to Türkiye. 
Similar patterns have been documented across diverse educational systems, including emerging economies in Southeast Asia, Latin America, and Sub-Saharan Africa, where policy ambitions for technology integration often outpace infrastructural and curricular readiness (Mokoena & Seeletse, 2025; UNESCO, 2023). Even in more developed contexts, such as the European Union and North America, AI integration in teacher education remains fragmented and institution dependent, with significant variation in preparedness across regions (European Commission, 2020; Trust et al., 2023). These parallels suggest that findings from the Turkish context may offer valuable insights for understanding AI adoption dynamics in educational systems facing similar structural challenges—namely contexts where technology integration depends heavily on individual teacher agency rather than on systematic institutional support. This context makes understanding motivational factors, such as expectancy beliefs and value perceptions, particularly crucial for promoting intentional and effective AI integration across such settings. Moreover, examining motivation through EVT in a context with limited formal AI preparation allowed us to isolate intrinsic psychological drivers from external institutional supports, offering clearer theoretical insights into the foundational mechanisms of technology adoption. These insights may be particularly relevant for policymakers and teacher educators in contexts transitioning from general digital literacy initiatives to more specialized efforts involving AI integration. It underscores why an EVT-based investigation is both timely and necessary.
Research Model and Hypotheses
Relationship Between Expectancy and Behavioral Intention
EVT posits that individuals’ behavioral intentions are shaped by their beliefs about their capabilities (expectancy) and the perceived value of a task (Eccles & Wigfield, 2020, 2023). In AI adoption, expectancy refers to prospective teachers’ confidence in effectively integrating AI tools into their teaching practices. It is important to clarify that expectancy in this context encompasses multiple dimensions of confidence. Beyond immediate implementation capabilities, expectancy includes prospective teachers’ beliefs in their capacity to continuously learn and adapt to evolving AI technologies over time. These beliefs encompass their confidence in achieving a one-time successful integration and their belief in sustaining ongoing professional development as AI tools continue to advance. Thus, our conceptualization of expectancy addresses both present competence and future-oriented adaptability—a particularly critical distinction given AI’s rapid evolution in educational contexts. Although existing studies under the TAM and UTAUT frameworks have explored self-efficacy (e.g., Lee et al., 2022; Sanusi, Ayanwale & Chiu, 2024), these models often conflate current self-efficacy with future-oriented expectancy, thereby limiting their ability to capture the motivational dynamics specific to emerging technologies such as AI. This distinction is critical because AI adoption requires competence and confidence in adapting to future technological advancements.
Prior research has predominantly focused on perceived ease of use and usefulness (Ma & Lei, 2024; Scherer, Howard et al., 2019), neglecting the role of expectancy as a standalone motivational driver. For instance, although Sanusi, Ayanwale & Tolorunleke (2024) highlighted the impact of self-efficacy on AI learning intentions, their work did not isolate the unique contribution of expectancy. This gap was addressed in our study by operationalizing expectancy as a distinct construct, grounded in EVT’s theoretical separation of belief in future capabilities. Hypothesis 1 (H1) is formulated to address this gap:
H1: Expectancy positively predicts prospective teachers’ behavioral intention to use AI in education.
Relationship Between Value and Behavioral Intention
Attainment, a core construct of EVT, refers to the perceived importance of excelling in a task and aligning with an individual’s self-concept and professional identity (Wigfield & Eccles, 2000). In the context of AI adoption, attainment value reflects the extent to which prospective teachers view mastery of AI tools as integral to their professional competence and career success. Although direct empirical evidence linking attainment value to AI adoption remains limited, theoretical foundations from EVT and related studies on technology integration provide a robust basis for this hypothesis.
EVT posits that individuals are likelier to engage in tasks they perceive as central to their self-concept (Eccles & Wigfield, 2020). In educational technology research, studies have demonstrated that teachers’ perceptions of task importance—such as mastering digital tools—positively influence their motivation to adopt innovations (Chan & Zhou, 2023; Cheng et al., 2020). For instance, Cheng et al. (2020) found that teachers who viewed technology proficiency as critical to their professional identity exhibited stronger intentions to integrate it into their practice. Similarly, Chan and Zhou (2023) highlighted that perceived alignment between technology mastery and career goals significantly predicted behavioral intention.
Applied to AI adoption, this suggests that prospective teachers who associate AI proficiency with their professional identity—for example, viewing it as essential for modern teaching practices, student engagement, or career advancement—are more likely to prioritize its integration. This argument aligns with broader EVT principles and is contextualized within the unique demands of AI-driven education, where technological fluency is increasingly tied to pedagogic effectiveness (Lampou, 2023). Thus, building on EVT’s theoretical framework and analogous findings in technology adoption research, this study operationalized attainment value as follows:
H2: Attainment value positively predicts prospective teachers’ behavioral intention to use AI in education.
Utility value refers to a task’s practical worth in achieving specific goals, providing a pragmatic reason for task engagement (Eccles & Wigfield, 2002). In the context of AI adoption among prospective teachers, utility value is based on the perceived benefits of using AI technologies in achieving educational objectives and enhancing professional development. Empirical research has shown that when individuals recognize the practical benefits of a task, their motivation to engage with it increases (Eccles & Wigfield, 2024). UTAUT also emphasizes the importance of performance expectancy, conceptually similar to utility value, in determining technology adoption intentions (Scherer, Howard et al., 2019). For prospective teachers, the utility value of AI could include improved teaching efficiency, enhanced student learning outcomes, and better classroom management. Studies have indicated that the recognition of AI’s utility in educational settings strongly influences teachers’ acceptance and integration of these technologies, shaping their intention to employ AI to optimize teaching outcomes (Avidov-Ungar & Forkosh-Baruch, 2018; Cheng et al., 2020; Quadir et al., 2022; Sánchez-Prieto et al., 2021; Siyam, 2019; Vongkulluksn et al., 2018). Thus, we hypothesize the following:
H3: Utility value positively predicts prospective teachers’ behavioral intention to use AI in education.
Intrinsic/interest value refers to the value individuals place on a task when they find it inherently interesting and enjoyable (Eccles & Wigfield, 2002). In the context of AI adoption among prospective teachers, intrinsic value encompasses the perceived enjoyment and interest in working with AI technologies. Research has shown that interest in technological tools and the satisfaction derived from using them significantly impact individuals’ intentions to adopt and integrate those technologies (David & Weinstein, 2024; Raman et al., 2022). Specifically, in educational contexts, teachers’ interest in and enjoyment of working with AI technologies support the integration of AI into teaching practices (Cheng et al., 2020; Ranellucci et al., 2020). Thus, we hypothesize the following:
H4: Intrinsic/interest value positively predicts prospective teachers’ behavioral intention to use AI in education.
Cost refers to the perceived negative aspects of a task, including effort, time investment, and opportunity costs (Eccles & Wigfield, 2002; Flake et al., 2015). In the context of AI adoption among prospective teachers, cost involves the challenges and perceived burdens associated with learning and using AI technologies. Research has indicated that perceived costs significantly impact individuals’ intentions to adopt technologies (Chan & Hu, 2023; Chan & Zhou, 2023; Cheng et al., 2020), and the challenges associated with learning and using technologies such as AI can hinder their integration into teaching practices. Based on theoretical and empirical evidence, the following hypothesis is proposed, as shown in Figure 1:
H5: Perceived cost negatively predicts prospective teachers’ behavioral intention to use AI in education.

Research model.
Method
Participants and Data-Collection Process
For this research, 454 prospective teachers from different cities in Türkiye were recruited through convenience sampling. Convenience sampling was deemed appropriate for this study because it provided efficient access to prospective teachers enrolled in teacher education programs across multiple institutions while ensuring adequate representation across class levels and departments. The sample size (n = 454) exceeds the minimum requirements for structural equation modeling (SEM) analysis. According to established guidelines, SEM typically requires a minimum of 200 cases for reliable parameter estimation (Kline, 2016) or a ratio of 10–20 cases per estimated parameter (Hair et al., 2014). Given that our proposed model included 25 estimated parameters, the current sample size provides sufficient statistical power for robust SEM analysis and supports the stability of the parameter estimates.
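The sample-size guidelines above reduce to simple arithmetic; the following sketch (illustrative only) encodes the two cited rules with the study’s figures:

```python
# Sample-size adequacy check for SEM, using the guidelines cited in the text:
# Kline's (2016) 200-case minimum and Hair et al.'s (2014) 10-20 cases per
# estimated parameter. The parameter count (25) and n (454) come from the study.
N_PARAMETERS = 25
SAMPLE_SIZE = 454

kline_minimum = 200
ratio_lower = 10 * N_PARAMETERS   # lenient bound: 250 cases
ratio_upper = 20 * N_PARAMETERS   # strict bound: 500 cases

print(SAMPLE_SIZE >= kline_minimum)   # True: exceeds Kline's minimum
print(SAMPLE_SIZE >= ratio_lower)     # True: clears the 10x lower bound
print(ratio_lower, ratio_upper)       # 250 500: the 10-20x range
```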
Data collection took place during the 2023–24 academic year, following ethical approval from the Bursa Uludağ University Social Science Ethics Committee (Session No. 2024-03, Decision No. 49). The data-collection process spanned approximately 2 weeks, during which 467 prospective teachers were reached. Participation was voluntary, and informed written consent was obtained from all participants prior to data collection. Surveys were administered face-to-face in classroom settings, with completion requiring approximately 15–20 minutes. Data quality was ensured through preliminary screening for careless responding. After removal of 13 cases that exhibited systematic response patterns (e.g., straight-lining, identical responses across all items), the final valid sample consisted of 454 participants, a retention rate of 97.2%; all retained surveys were included in the final analysis.
Before completing the survey, participants were provided with a clear and accessible definition of AI to ensure consistency in their understanding and responses. AI was defined as “computer systems and applications that can perform tasks typically requiring human intelligence, such as learning, problem solving, language understanding, and decision making. In educational contexts, this includes tools such as ChatGPT, Google Gemini, Grammarly, intelligent tutoring systems, AI-powered translation tools, and other applications that assist with writing, research, assessment, and personalized learning.” This definition was intentionally broad yet educationally relevant, encompassing various AI applications that prospective teachers might encounter in their professional practice while remaining accessible to participants with varying levels of technical expertise. By providing this operational definition at the outset, we aimed to minimize variability in participants’ interpretations of AI and enhance the construct validity of their responses regarding expectancy beliefs, value perceptions, and behavioral intentions.
Table 1 presents the demographic details of the prospective teachers involved in this study. Of the participants, 68.9% (n = 313) were female, and 31.1% (n = 141) were male. In terms of class distribution, 22.5% (n = 102) were freshmen, 25.1% (n = 114) were sophomores, 27.3% (n = 124) were juniors, and 25.1% (n = 114) were seniors. The participants were enrolled in nine different departments, with the lowest number from the Turkish Education Department (n = 42) and the highest from the English Education Department (n = 64).
Demographic profile of the participants (n = 454)
Regarding the use of AI tools, 85.2% (n = 387) of participants reported using AI applications, whereas 14.8% (n = 67) stated that they did not use any AI tools. Among those who used AI applications (n = 387), the most frequently used tool was ChatGPT (57.1%), followed by Claude (11.6%), Google Bard (9.8%), Grammarly (7.0%), Turnitin (6.5%), Copilot (5.9%), and other tools (2.1%). In terms of usage frequency, 10.8% (n = 49) reported rare use, 18.1% (n = 82) used AI tools occasionally, 29.5% (n = 134) used them moderately, 25.3% (n = 115) used them frequently, and 16.3% (n = 74) reported intense use. Participants also indicated various purposes for using AI tools, with the most common being writing (30.8%), followed by research (24.5%), homework preparation (18.2%), translation (13.5%), data analysis (9.5%), and other purposes (3.4%).
Variables
Behavioral Intention. To assess prospective teachers’ intentions to integrate AI into their teaching practices, the Behavioral Intention Scale was employed. This scale, adapted from previous studies (Ayanwale et al., 2022; Chai et al., 2020; Ma & Lei, 2024), measures participants’ likelihood of AI adoption in educational settings. It consists of five positively worded items, such as “I intend to use AI to assist my teaching.” It uses a 5-point Likert-type response format ranging from 1 = completely false to 5 = completely true. Higher scores on this scale indicate a stronger intention to use AI in teaching. The complete set of items for this scale is provided in Appendix A.
AI Use Motives. To evaluate the motivations behind prospective teachers’ use of AI in their academic and educational practices, QAIUM was employed. Initially developed by Yurt and Kaşarcı (2024) for university students, this questionnaire was adapted for teacher candidates to capture their motivations precisely within an educational context. Participants were instructed to respond to the scale items by considering the AI applications they use in their academic activities. This guidance ensured that the scale items reflected motivational factors directly tied to their educational roles and practices. QAIUM comprises two primary dimensions: expectancy (beliefs about success in using AI) and task value, the latter further divided into the subdimensions of attainment value (importance of AI use), utility value (perceived usefulness), intrinsic/interest value (enjoyment of AI), and cost (effort/time tradeoffs). It includes 20 items (e.g., “The ability to use AI effectively is important to me”) rated on a 5-point Likert-type scale (1 = completely false to 5 = completely true). Higher scores indicate stronger expectations of success and greater perceived value of AI in education. All questionnaire items are presented in Appendix A.
Control Variables. In this study, gender, class level, and AI usage frequency were included as control variables due to their potentially confounding effects on AI usage intention. Gender differences in technology adoption have been documented in prior research, with studies indicating that males often exhibit higher self-efficacy and engagement with digital tools than females (Scherer & Siddiq, 2015; Tondeur et al., 2016). Similarly, class level may influence technology integration patterns because advanced students tend to have greater exposure to academic and technological resources, which, in turn, shapes their willingness to adopt AI tools (Aslan & Zhu, 2017; Teo & Milutinović, 2015). AI usage frequency, in contrast, reflects prior experience, which is a critical predictor of future behavioral intentions: Frequent users are more likely to develop sustained engagement due to familiarity and perceived utility (Khosravi et al., 2022; Liu et al., 2019; Zawacki-Richter et al., 2019).
These variables were selected based on their relevance to individual-level technology acceptance and frequent use in prior studies examining teacher technology adoption. The focus on individual-level factors aligns with the study’s theoretical framework, EVT, which emphasizes personal beliefs and motivations as primary drivers of behavioral intentions. Although institutional and contextual variables (e.g., professional-development opportunities, curriculum design, and institutional support) and additional individual factors (e.g., digital literacy and AI-related anxiety) are acknowledged as potentially important, they were not included in the current model due to the study’s scope and design constraints. By controlling for gender, class level, and AI usage frequency, the study aims to isolate the unique effects of the primary predictors under investigation, thereby enhancing the internal validity of the findings. Future research would benefit from incorporating these additional variables within multilevel analytic frameworks to capture the complexity of AI adoption in teacher education.
Data Analysis
Several assumptions were checked before the analysis. Cook’s distance values revealed no influential outliers (max = 0.10). Skewness (−0.28 to 0.29) and kurtosis (−0.88 to −0.40) values fell within ±1, indicating approximately normal distributions (Noar, 2003). Variance inflation factors (<3) showed no collinearity issues among the constructs (Hair et al., 2014). Harman’s single-factor test and confirmatory factor analysis were used to assess common method bias. The first factor explained only 35% of the total variance, and the single-factor model showed poor fit (χ²(275) = 3,404.49, p < .001, χ²/df = 12.38, comparative fit index [CFI] = .70, incremental fit index [IFI] = .70, Tucker–Lewis index [TLI] = .68, root mean square error of approximation [RMSEA] = .16, standardized root mean squared residual [SRMR] = .08), indicating no substantial common method bias (Podsakoff et al., 2003).
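The moment-based skewness and excess-kurtosis screen described above can be sketched as follows; the data here are made up for illustration and are not the study’s responses:

```python
def skewness(xs):
    """Moment-based (population) skewness: m3 / m2**1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    """Moment-based (population) excess kurtosis: m4 / m2**2 - 3."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

# A symmetric, flat set of hypothetical Likert responses: zero skew and
# negative excess kurtosis, the same pattern reported for the study's items.
responses = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
print(round(skewness(responses), 2))         # 0.0
print(round(excess_kurtosis(responses), 2))  # -1.3
```

Both values fall within the ±1 skewness band or below zero for kurtosis, mirroring the screening logic reported for the study’s data.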
The analysis followed Anderson and Gerbing’s (1988) two-stage approach, which involves both measurement and structural models. Model fit criteria were χ²/df (<5), RMSEA (<.10), SRMR (<.08), CFI (>.90), and TLI (>.90) (Hair et al., 2014). Convergent validity was verified using average variance extracted (AVE > .50) and composite reliability (CR > .70). Discriminant validity was tested using the heterotrait–monotrait ratio of correlations (HTMT), with values <.90 considered acceptable (Henseler et al., 2015). IBM SPSS Amos 24.0 software (IBM Corp, Armonk, NY) was used for the analysis.
Results
Measurement Model Analysis
The adequacy of the construct validity of the measurement model was examined using confirmatory factor analysis. The model fit the data well (χ²(248) = 760.42, p < .001, χ²/df = 3.07, CFI = .95, IFI = .95, TLI = .94, RMSEA = .05, SRMR = .04). The measurement model was used to determine the validity and reliability of the constructs, which are presented in Table 2. According to Tabachnick and Fidell (2007), factor loadings should be used to evaluate the reliability of indicators; an item can be considered reliable if its factor loading exceeds .50. The measurement model retained 25 items, with factor loadings ranging from .57 to .92.
Convergent validity and reliability of the constructs
λ = standardized factor loading; α = Cronbach’s alpha; CR = composite reliability; AVE = average variance extracted; R = reverse-scored item.
The internal consistency of each measure is strong, with composite reliability (CR) values >.70. Additionally, each construct’s average variance extracted (AVE) surpasses the .50 threshold: expectancy (CR = .90, AVE = .69), attainment (CR = .91, AVE = .70), utility value (CR = .89, AVE = .67), intrinsic/interest value (CR = .93, AVE = .74), cost (CR = .81, AVE = .52), and behavioral intention to use AI (CR = .93, AVE = .72), indicating satisfactory reliability and convergent validity for each construct (Hancock & Mueller, 2006).
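The CR and AVE statistics reported above follow standard formulas over standardized factor loadings; the sketch below illustrates them with hypothetical loadings, not the study’s:

```python
def average_variance_extracted(loadings):
    # AVE: mean of the squared standardized loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # CR: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # assuming each item's error variance is 1 - loading^2.
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

# Hypothetical five-item construct with uniform loadings of .80:
loadings = [0.80] * 5
print(round(average_variance_extracted(loadings), 2))  # 0.64 (> .50 threshold)
print(round(composite_reliability(loadings), 2))       # 0.9  (> .70 threshold)
```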
In addition, Cronbach’s alpha internal consistency coefficients were calculated as .89 for expectancy, .91 for attainment, .90 for utility value, .93 for intrinsic/interest value, .84 for cost, and .93 for behavioral intention to use AI. The overall scale demonstrated sufficient reliability, with a Cronbach’s alpha coefficient of .94.
The HTMT ratio was also used to test discriminant validity. All construct pairs had HTMT ratios below the .90 benchmark (.64 ≤ HTMT ≤ .82), indicating discriminant validity. Consequently, the model’s constructs were deemed to have both convergent and discriminant validity, and latent scores representing the constructs within the model were calculated and used to assess the structural model.
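The HTMT ratio compares the average heterotrait (cross-construct) item correlation with the geometric mean of the average monotrait (within-construct) correlations (Henseler et al., 2015). A minimal sketch with a hypothetical item-level correlation table:

```python
from itertools import combinations
from math import sqrt

def htmt(corr, items_a, items_b):
    """HTMT: mean heterotrait correlation divided by the geometric mean
    of the mean monotrait correlations. `corr` maps frozenset item pairs
    to their correlation."""
    hetero = [corr[frozenset((a, b))] for a in items_a for b in items_b]
    mono_a = [corr[frozenset(pair)] for pair in combinations(items_a, 2)]
    mono_b = [corr[frozenset(pair)] for pair in combinations(items_b, 2)]
    mean = lambda vals: sum(vals) / len(vals)
    return mean(hetero) / sqrt(mean(mono_a) * mean(mono_b))

# Hypothetical two-item constructs: within-construct r = .60,
# cross-construct r = .30, giving HTMT = .30 / .60 = .50 (< .90 benchmark).
corr = {
    frozenset(("a1", "a2")): 0.60,
    frozenset(("b1", "b2")): 0.60,
    frozenset(("a1", "b1")): 0.30, frozenset(("a1", "b2")): 0.30,
    frozenset(("a2", "b1")): 0.30, frozenset(("a2", "b2")): 0.30,
}
print(round(htmt(corr, ["a1", "a2"], ["b1", "b2"]), 2))  # 0.5
```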
Structural Model Analysis
Before analyzing the structural model, relationships and descriptive statistics of the variables were established, as detailed in Table 3. The correlation matrix revealed positive relationships of expectancy, attainment, utility, and intrinsic values with behavioral intention, and a negative correlation between cost and behavioral intention. Likert-scale means were categorized from very low to very high (1–1.8 = “very low”; 1.8–2.6 = “low”; 2.6–3.4 = “moderate”; 3.4–4.2 = “high”; 4.2–5 = “very high”). Participants reported moderate levels of expectancy, attainment, intrinsic value, and cost, whereas utility value and behavioral intention were rated high.
Correlations matrix and descriptive statistics
M = mean; SD = standard deviation; AIUF = artificial intelligence usage frequency; BI = behavioral intention.
0 = female; 1 = male.
p < .01, n = 454.
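The mean-score banding described above can be expressed as a small helper; the handling of exact boundary values (1.8, 2.6, 3.4, 4.2) is an assumption, since the reported ranges overlap at their endpoints:

```python
def likert_level(mean_score):
    """Map a 1-5 Likert mean to the verbal categories used in the text.
    Boundary values are assigned to the higher band (an assumption)."""
    if not 1.0 <= mean_score <= 5.0:
        raise ValueError("Likert means must lie in [1, 5]")
    if mean_score < 1.8:
        return "very low"
    if mean_score < 2.6:
        return "low"
    if mean_score < 3.4:
        return "moderate"
    if mean_score < 4.2:
        return "high"
    return "very high"

print(likert_level(3.1))  # moderate (e.g., an expectancy-type mean)
print(likert_level(3.9))  # high (e.g., utility or behavioral intention)
```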
The structural model estimates relationships among latent variables, yielding path coefficients and the R² value, which indicates the model’s predictive power (Hair et al., 2014). Bootstrap p values were used to evaluate the path coefficients. Table 4 presents the standardized path coefficients for the hypothesized relationships between the independent variables and the dependent variable: H1, expectancy → behavioral intention (β = .10, p = .032); H2, attainment → behavioral intention (β = .21, p < .001); H3, utility value → behavioral intention (β = .29, p < .001); H4, intrinsic/interest value → behavioral intention (β = .25, p < .001); and H5, cost → behavioral intention (β = −.27, p < .001). All hypotheses tested in Table 4 were supported. According to the standardized path coefficients, the relative importance of the predictors of behavioral intention is, in descending order, utility value, cost, intrinsic/interest value, attainment, and expectancy. Gender (β = .12, p = .007), class level (β = .11, p = .009), and AI usage frequency (β = .20, p < .001), included as control variables, all had significant positive effects on behavioral intention. The model accounted for 62% of the variance in behavioral intention (R² = .62).
Standardized path coefficient for the tested model
EX = expectancy; AT = attainment; UV = utility value; IV = intrinsic/interest value; CO = cost; BI = behavioral intention to use AI in education; AIUF = artificial intelligence usage frequency; SE = standard error; CI = confidence interval.
0 = female, 1 = male.
p < .001.
Discussion
This study employed the EVT framework to investigate the motivational factors influencing prospective teachers’ intentions to integrate AI into education. The research was carried out with the participation of 454 teacher candidates, and SEM was used to test the research hypotheses. The findings revealed significant insights into how various motivational factors impact behavioral intention. We present interpretations of the main findings, theoretical contributions, practical implications, limitations, and future research recommendations.
Interpretations of the Main Findings
Our descriptive analysis revealed that prospective teachers exhibited moderate levels of expectancy, attainment, intrinsic value, and cost, whereas utility value and behavioral intention were rated high. This suggests that while prospective teachers recognize the importance and potential benefits of AI integration in education, they have only moderate confidence in using these tools effectively. Additionally, the perception of AI as intrinsically interesting remains moderate, indicating that engagement with AI is likely driven more by practical considerations than by curiosity or enjoyment.
The high ratings for utility suggest that prospective teachers perceive AI as a valuable resource for improving teaching efficiency and educational outcomes. This aligns with prior research emphasizing the role of perceived usefulness in technology adoption (Quadir et al., 2022; Scherer, Howard, et al., 2019; Siyam, 2019). The fact that behavioral intention is also rated high further supports the idea that utility plays a central role in shaping technology adoption among educators (Eccles & Wigfield, 2002). However, the moderate expectancy levels indicate that while prospective teachers acknowledge AI’s benefits, they are not fully confident in integrating it into their teaching practices. This aligns with previous studies suggesting that self-efficacy and future-oriented expectancy beliefs influence AI adoption (Lee et al., 2022; Sanusi, Ayanwale & Tolorunleke, 2024). Eccles and Wigfield (2023) indicated that expectancy beliefs are shaped by prior experiences and contextual factors, suggesting that limited exposure to AI tools in teacher training programs may contribute to the moderate confidence levels observed.
Similarly, the moderate intrinsic value suggests that prospective teachers do not yet perceive AI as enjoyable or engaging. This finding is consistent with research indicating that intrinsic motivation plays a role in technology adoption but may be secondary to more pragmatic concerns (Cheng et al., 2020; David & Weinstein, 2024). This suggests that while AI is viewed as applicable, it may not yet be seen as an exciting or naturally engaging tool for teaching.
Cost perceptions also were moderate, suggesting that while prospective teachers acknowledge the time and effort required to learn AI tools, they do not view these challenges as insurmountable. Prior research has demonstrated that perceived costs, including time investment and cognitive effort, can serve as barriers to adoption (Chan & Zhou, 2023; Flake et al., 2015). However, the fact that cost ratings were not exceptionally high suggests that while concerns exist, they are not necessarily strong enough to deter adoption.
These findings align with EVT, which suggests that motivation is influenced by expectancy (confidence in future capabilities) and value perceptions (Eccles & Wigfield, 2020, 2023). The moderate expectancy levels suggest that while prospective teachers recognize the benefits of AI, they are not fully confident in integrating it into their teaching. This is consistent with prior research on self-efficacy in technology adoption (Lee et al., 2022; Sanusi, Ayanwale & Tolorunleke, 2024). High utility perceptions suggest that AI is valued for its practical benefits, aligning with TAMs (Ma & Lei, 2024; Sánchez-Prieto et al., 2021). However, moderate attainment and intrinsic value indicate that AI is not yet fully integrated into professional identity or seen as inherently engaging (Cheng et al., 2020). The moderate cost perceptions suggest that while challenges exist, they are not significant barriers, highlighting the need for structured training to enhance confidence and perceived relevance. Future research could investigate how increased exposure to AI in teacher education affects these motivational factors.
Our analysis indicated that expectancy beliefs have significant predictive value for behavioral intention. Prospective teachers’ confidence in their ability to use AI effectively influences their intention to integrate it into their teaching practices. Although the effect size is modest compared with other variables, the significance of this finding aligns with EVT, which posits that individuals are more likely to engage in activities they feel competent in (Eccles & Wigfield, 2023; Wigfield & Eccles, 2000). This finding is consistent with previous research (Camilleri, 2024; Lee et al., 2020; Lee et al., 2022; Li et al., 2022; Ranellucci et al., 2020; Romero-Rodríguez et al., 2023; Sanusi, Ayanwale & Chiu, 2024), highlighting the importance of expectancy beliefs in adopting new technologies in educational settings. Enhancing teachers’ confidence through training and support thus could be crucial in increasing their intention to use AI. Decisions such as participating in an event, volunteering for a task, or organizing future career plans are closely linked to an individual’s expectancy beliefs (Eccles & Wigfield, 2024); this is why people tend to be drawn to areas where they feel competent. Furthermore, the moderate mean expectancy score suggests that while prospective teachers are somewhat confident, they do not yet feel fully proficient in using AI, underscoring the need for additional training and support. Providing resources and materials that guide prospective teachers on how to use AI also would be beneficial; these resources can help them develop competencies and increase their confidence in their abilities. Lastly, opportunities should be provided for prospective teachers to review their experiences and receive feedback on their use of AI. This could help them assess their performance, identify areas for improvement, and ultimately become more effective in using AI in educational settings.
The analysis also provided evidence that attainment value, which reflects the importance individuals place on succeeding at a task, shows a significant positive relationship with behavioral intention. This result implies that prospective teachers who perceive AI integration as essential for their professional success are more likely to use AI. Moreover, the relatively high attainment value score suggests that many participants recognize the importance of AI for their teaching roles. This finding is supported by EVT, which emphasizes perceived importance in motivating behavior. According to EVT, attainment value refers to the significance an individual places on doing well in a task, which is closely linked to their self-concept and goals (Wigfield & Eccles, 2000). Attainment, fundamentally anchored within EVT, denotes the perceived importance of excelling at tasks and the consequential impact on an individual’s self-identity. This strong relationship between attainment value and behavioral intention aligns with Ranellucci et al. (2020), who found that perceived importance has a significant impact on technology adoption in education. When individuals attribute attainment value to a task, their involvement and effort increase, leading to the intention to use generative AI (Chan & Zhou, 2023). This is further corroborated by research indicating that individuals are more motivated to engage in tasks they deem important for their personal and professional development. The recognition of AI’s importance in the teaching profession likely stems from its potential to enhance educational outcomes, streamline administrative tasks, and prepare students for a world driven by technology. Furthermore, the finding that many prospective teachers place high importance on AI integration suggests a growing awareness and acceptance of technology’s role in modern education.
This awareness is critical because it can drive the adoption of AI tools and practices, fostering innovation and improving teaching effectiveness. It also highlights the need for targeted professional development programs that not only build teachers’ technical skills but also reinforce the importance of AI in achieving educational goals.
The third key finding of this study was that utility value emerges as the strongest predictor of behavioral intention. This suggests that prospective teachers’ perceptions of the usefulness of AI in achieving their teaching objectives significantly influence their intention to use AI. The high mean score for utility perceptions underscores the recognition of AI’s practical benefits. According to the EVT, individuals are motivated to engage in behaviors that they believe will help them achieve important goals (Eccles & Wigfield, 2020). This result is consistent with previous findings in the technology-adoption literature (Adelana et al., 2024; Avidov-Ungar & Forkosh-Baruch, 2018; Ayanwale et al., 2022; Ma & Lei, 2024; Quadir et al., 2022; Sánchez-Prieto et al., 2021; Sanusi, Ayanwale & Chiu, 2024; Scherer, Howard et al., 2019; Siyam, 2019; Vongkulluksn et al., 2018; Zhang et al., 2023) that highlight the critical role of perceived usefulness in technology acceptance.
This finding implies that when prospective teachers recognize the tangible benefits of AI, such as enhanced student learning outcomes and more efficient administrative processes, they are more likely to intend to integrate AI into their professional practices. In particular, AI-powered tools can assist teachers in automating repetitive tasks (e.g., grading and lesson planning), providing adaptive learning experiences for students, and offering real-time feedback, all of which contribute to a more effective teaching process (Alan & Yurt, 2024; Huang et al., 2023). Understanding that utility is a significant driving force behind the intention to use AI suggests that teachers who are aware of AI’s benefits in their daily, academic, and social lives may be more inclined to use AI in their future careers. This can positively affect their behavior, leading to more effective and individualized student support. Moreover, the potential for AI to meet students’ individual needs more effectively could result in a more tailored educational experience, enhancing overall student performance and satisfaction. Furthermore, the rapid advancements and emerging applications in the AI field make individuals increasingly aware of how AI can simplify various tasks in daily life. For example, the increasing use of AI-driven personalized learning platforms demonstrates AI’s potential to improve student engagement and learning efficiency, further solidifying teachers’ perceptions of AI’s utility. As individuals keep pace with technological developments, they become more informed about the utility value of AI applications and, consequently, more motivated to use them. This awareness and motivation likely will carry over into their professional lives, making them more inclined to incorporate AI into their teaching practices.
The analysis also showed that intrinsic value significantly predicts behavioral intention, reflecting the inherent enjoyment of and interest in using AI. This suggests that prospective teachers who find AI intrinsically motivating are more likely to incorporate it into their teaching. The relatively high intrinsic value score suggests that participants have substantial interest in and enjoyment of AI. The positive and significant relationship aligns with EVT’s emphasis on intrinsic motivation as a critical driver of engagement (Eccles & Wigfield, 2002). This finding corroborates studies that identified intrinsic motivation as crucial to individuals’ technology use and adoption (e.g., An et al., 2024; Cheng et al., 2020; Khechine et al., 2020; Ranellucci et al., 2020; Shroff & Keyes, 2017). Intrinsic value plays a pivotal role in influencing behavior because individuals with high intrinsic interest in a task are more likely to view engaging in that task as enjoyable and fulfilling (Deci & Ryan, 2000). These individuals tend to exert more effort, persistence, and dedication toward completing the task, perceiving it as a pleasurable activity rather than a mere obligation (Lepper & Malone, 2021). In the context of AI, prospective teachers who enjoy exploring and using AI applications are more inclined to incorporate these tools into their daily and professional lives. This enjoyment can translate into a more enthusiastic and proactive approach to learning about new AI applications, understanding their functionalities, and using them effectively. Moreover, individuals with a high intrinsic value for AI likely will find it easier to integrate these applications into their routines. Their interest and enjoyment make them more open to experimenting with and adopting new AI technologies, which can lead to a smoother and more seamless integration process.
For teachers, this means that those who enjoy using AI are more willing to adopt it and more capable of leveraging its benefits to enhance their teaching practices. This could include using AI to create more engaging and personalized learning experiences for their students, thereby improving educational outcomes.
This study’s fifth and last key finding was that cost perceptions are also a significant predictor of behavioral intention, indicating that a higher perceived cost of integrating AI is associated with a weaker intention to use AI. Moreover, the mean cost score suggests that participants generally perceive the costs associated with AI use to be above average, indicating moderate concerns about the effort and resources required. This finding aligns with EVT, which posits that the perceived cost of a task negatively impacts an individual’s intention to engage in it (Wigfield & Eccles, 2000). In AI integration, perceived costs include the time and effort needed to learn and implement AI tools, the potential for technical difficulties, and the stress associated with adapting to new technologies. When these costs are perceived as high, they can deter prospective teachers from considering the use of AI, overshadowing its potential benefits. Within EVT, the cost component represents the perceived negative aspects of engaging in a task (Flake et al., 2015). These can include tangible costs, such as time, effort, and financial resources, as well as intangible costs, including stress, anxiety, and opportunity costs. High perceived costs can significantly lower an individual’s motivation by making the task seem not worth its demands (Chan & Hu, 2023; Chan & Zhou, 2023; Cheng et al., 2020). Ranellucci et al. (2020) reported that cost components—namely task effort, outside effort, loss of valued alternatives, and emotional cost—reduce prospective teachers’ intention to use technology. For prospective teachers, high perceived costs associated with AI integration could stem from concerns about the additional workload, the steep learning curve of new technologies, and the potential for technical issues. These factors can create a psychological barrier, reducing their willingness to adopt AI tools in their teaching practices.
Prospective teachers who perceive high costs may feel overwhelmed and anxious about integrating AI, which can lead to resistance and a lower likelihood of using these technologies.
Although this study focused on the motivational factors influencing AI adoption, it is essential to acknowledge the broader ethical and societal concerns surrounding the use of AI in educational contexts. First, AI systems may carry inherent ideological biases reflecting the data and assumptions embedded during their development (Noble, 2018; O’Neil, 2016). These biases can perpetuate stereotypes, favor specific cultural perspectives, and/or disadvantage marginalized groups. Such concerns are particularly critical in education, where equitable access to quality instruction is paramount. Educators therefore must approach AI tools critically, evaluating their outputs for potential bias and ensuring that their use does not inadvertently reinforce inequities. Second, the environmental impact of AI technologies, such as large language models, has garnered increasing attention. Training and operating these systems require substantial computational resources, contributing to significant carbon emissions and energy consumption (Strubell et al., 2019). As prospective teachers develop intentions to integrate AI into their practice, teacher education programs should foster awareness of these ethical dimensions, encouraging responsible and reflective use that balances pedagogic benefits with social and environmental responsibilities. Future research should investigate how prospective teachers’ awareness of these issues affects their decisions and practices regarding AI adoption.
Theoretical Contribution
This study makes a significant contribution to the literature by applying the EVT to understand the motivational factors influencing prospective teachers’ intentions to integrate AI into education. With an R² value of .62, the model demonstrates robust explanatory power, accounting for more than half the variance in behavioral intention through these motivational factors. The findings align with EVT, reinforcing the importance of expectancy beliefs and value perceptions in predicting technology adoption intentions. However, although EVT effectively captures the role of motivation in AI adoption, its traditional framework does not fully account for the specific challenges faced in educational settings, such as differences in digital confidence levels, the availability of AI-related instructional resources, and concerns about balancing AI integration with traditional teaching methods. This study provides a comprehensive understanding of how these motivational factors impact behavioral intentions, offering valuable insights for theoretical advancement and practical application in educational settings.
By addressing both the motivational and practical aspects of AI integration, this study offers a comprehensive understanding that can inform the development of effective strategies to support prospective teachers in adopting AI technologies. This approach enhances the theoretical understanding of AI adoption and provides practical recommendations for education policymakers and institutions to create a supportive environment for integrating AI in education. Additionally, EVT focuses primarily on individual motivation but may not fully account for the influence of collaborative decision-making processes and institutional policies that shape AI adoption in schools. Future research could explore how EVT can be expanded or integrated with complementary frameworks, such as the TAM or the UTAUT, to better capture the complex interplay between individual and contextual factors. These insights bridge the gap in the existing literature by combining EVT with recent advancements in AI adoption, presenting a model that can inform future studies on education technology and contribute to more effective AI integration strategies in teaching practices.
Practical Implications
The findings of this study offer actionable strategies for teacher education programs and policymakers to enhance the integration of AI in educational settings. Given that prospective teachers’ confidence in their AI capabilities (expectancy) and their perception of AI’s importance (attainment) were moderate, there is a clear need for structured training initiatives to enhance these areas. AI training programs should be designed with differentiated instruction to address varying levels of digital literacy among prospective teachers. For instance, beginner-level modules could cover fundamental AI concepts and basic tool usage, whereas advanced modules could focus on integrating AI-driven analytics into instructional design. This tiered approach would ensure that both novice and tech-savvy teachers receive training tailored to their needs, promoting more effective adoption of AI. Teacher education curricula should prioritize hands-on workshops and mentorship programs that focus on the practical applications of AI tools, such as personalized lesson planning, automated feedback systems, and data-driven decision making. For instance, embedding AI literacy modules into coursework could bridge the gap between theoretical knowledge and classroom readiness, directly addressing the moderate expectancy levels observed in the study.
To amplify AI adoption, institutions must emphasize its tangible benefits, particularly utility value, which emerged as the strongest predictor of behavioral intention. Demonstrating how AI streamlines administrative tasks—such as grading and attendance tracking—while enhancing student engagement through adaptive learning platforms can foster a positive perception of AI’s role in education. For example, professional-development workshops could feature real-world case studies of teachers successfully integrating AI into their classrooms, showcasing best practices for adapting lessons, assessing students, and managing classroom activities. Practical examples, such as using generative AI for creative assignments or leveraging analytics to identify at-risk students, could be integrated into professional-development programs to showcase AI’s transformative potential.
The relatively high intrinsic value scores suggest that teachers are interested in exploring the possibilities of AI. Capitalizing on this curiosity, institutions could establish innovation hubs where educators experiment with emerging AI tools in low-stakes environments, fostering technical proficiency and creative pedagogic applications. Simultaneously, addressing perceived costs—such as time investment and technical complexity—is critical. The study identified cost perceptions as a significant barrier, reflecting concerns about the effort required to learn AI tools and potential disruptions to existing workflows. To mitigate these challenges, institutions should simplify AI interfaces, provide ongoing technical support, and integrate AI training into scheduled professional-development hours to minimize perceived time burdens. A structured, step-by-step AI competency roadmap, guiding teachers from basic use to advanced applications, could facilitate a smoother transition into AI-enhanced teaching practices. For example, modular training sessions aligning with teachers’ schedules can reduce resistance and encourage gradual adoption.
At a systemic level, policymakers should advocate for national frameworks that standardize AI competency benchmarks for teachers, ensuring alignment with evolving educational technologies. Equitable access to AI resources also must be prioritized to prevent disparities across socioeconomic contexts. By aligning training, resource allocation, and policy with the motivational drivers identified in this study, stakeholders can create a sustainable ecosystem for AI integration, ultimately enhancing teaching efficacy and student outcomes in a technology-driven educational landscape.
Limitations
Although this study offers valuable insights, it has certain limitations that should be addressed in future research. The cross-sectional design limited our ability to infer causality between the motivational factors and behavioral intentions. Longitudinal studies could provide a more dynamic understanding of how these intentions evolve over time. Despite the robust sample size and the statistical power afforded by SEM, the data were drawn exclusively from a single cultural and national context (Türkiye), limiting the generalizability of findings. Future research should include diverse populations to validate and extend these results.
Additionally, the reliance on self-report forms introduces potential biases, such as social desirability bias and inaccurate self-assessment, which may influence the validity of the responses. Participants may respond in ways they perceive as favorable, but these responses may not accurately reflect their true motivations and intentions. More specifically, prospective teachers may have overestimated their intentions to use AI due to perceived expectations within teacher education programs that emphasize technological competence, or they may have inflated their expectancy beliefs to align with professional identity aspirations. Furthermore, the contextual constraints of the study—conducted during a period of rapid AI development and increasing institutional discourse around digital transformation in education—may have primed participants to respond more favorably toward AI integration than they might in more stable or less technology-focused environments. The online survey format, while ensuring broad reach, also may have attracted respondents with higher digital engagement, potentially skewing the sample toward those already predisposed to technology adoption (Bethlehem, 2010; Daikeler et al., 2020). To address these biases, future research could incorporate a combination of self-report measures with observational methods or other objective data-collection techniques. Additionally, employing implicit measures, behavioral tracking of actual AI use, and qualitative interviews can triangulate self-reported intentions with enacted behaviors, providing deeper insights into the authenticity of motivational responses (Podsakoff et al., 2012).
Another limitation concerns the selection of control variables in the model. While gender, class level, and AI usage frequency were included based on their documented influence in technology-acceptance literature and their alignment with individual-level factors, other contextual and institutional variables also could affect behavioral intentions. For instance, professional-development opportunities provided by teacher education programs, AI-related coursework embedded in the curriculum, institutional support structures, and access to technological infrastructure may significantly shape prospective teachers’ intentions to integrate AI into their future practice (e.g., Admiraal et al., 2017). Similarly, individual-level factors such as digital literacy, prior attitudes toward AI, and AI-related anxiety could have been considered as additional control variables (Scherer, Siddiq & Tondeur, 2019; Wang & Cheng, 2021). The exclusion of these institutional, curricular, and psychological variables represents a limitation of this study because they may interact with motivational beliefs in complex ways. This study deliberately focused on psychological predictors grounded in EVT to maintain theoretical coherence and manage the scope of a cross-sectional design. However, future research should adopt multilevel models that incorporate both individual and institutional factors to provide a comprehensive understanding of AI adoption among prospective teachers. Specifically, exploring how institutional support structures, access to AI-related professional development, and curriculum integration moderate the strength of expectancy-value relationships in shaping behavioral intention would provide a more nuanced understanding of the contextual conditions under which motivational factors translate into actual adoption behaviors (Feng et al., 2025; Jeilani & Abubakar, 2025; Schmidt et al., 2025). 
Such investigations could examine, for example, whether strong institutional support amplifies the effect of intrinsic value on behavioral intention or whether limited access to professional development attenuates the influence of utility value. These moderating effects remain unexplored in this study and represent important directions for future inquiry. Researchers can cross-validate the findings using multiple data-collection methods and obtain a more accurate picture of the motivational factors influencing behavioral intentions in different contexts.
Furthermore, while this study acknowledges the ethical implications of AI integration in education—including concerns about data privacy, algorithmic bias, and the potential displacement of human judgment in pedagogic decision making—it did not explicitly examine how prospective teachers are prepared to navigate these ethical challenges. Given the increasing reliance on AI systems that collect and analyze student data, there is a pressing need to embed ethical AI training within teacher preparation programs. Such training should equip prospective teachers with frameworks for critically evaluating AI tools, understanding issues of algorithmic transparency and fairness, and making informed decisions about when and how to deploy AI in ways that prioritize student welfare and equity. Future research should investigate the extent to which current teacher education curricula address these ethical dimensions and explore practical pedagogic approaches for developing ethical AI literacy among prospective educators (Holmes et al., 2022; Selwyn, 2022).
Conclusions
In conclusion, this study highlights the complex interplay of motivational factors influencing prospective teachers’ intentions to integrate AI into education. The model demonstrated substantial predictive power, explaining 62% of the variance in behavioral intention. The significant predictors identified, including utility value (strongest), cost perceptions, intrinsic value, attainment value, and expectancy beliefs (weakest), underscore the multifaceted nature of technology adoption and its varying degrees of influence on AI integration intentions. Notably, although utility and intrinsic value were rated highest among prospective teachers, expectancy and attainment value scored lowest, suggesting a gap between perceived relevance and confidence in successfully implementing AI.
Additionally, control variables including gender, class level, and AI usage frequency significantly influenced behavioral intentions, highlighting the importance of individual characteristics and prior experience. The adverse effect of cost perceptions emphasizes the critical need to address barriers such as time, effort, and resource constraints. By addressing these factors through targeted training (to enhance expectancy beliefs) and support and resource provision (to reduce perceived costs), educational institutions can enhance the readiness and willingness of future teachers to adopt AI technologies. In particular, emphasizing AI’s practical utility and fostering intrinsic motivation through engaging, relevant experiences can lead to more effective and innovative teaching practices, better preparing students for a technology-driven world. Understanding and leveraging these motivational drivers in their order of importance is critical to fostering a supportive environment for AI integration in education.
Appendix A: Questionnaire Items
All items in the questionnaire were presented in English and rated on a 5-point Likert scale (1 = “Completely false” to 5 = “Completely true”) unless otherwise specified. The questionnaire consisted of two main parts: (1) Demographic Information Form and (2) Study Scales.
Conflicting Interests
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
