Abstract
The rapid growth of Artificial Intelligence (AI) in education has sparked both enthusiasm for its potential and concern about academic integrity and human autonomy. Within higher education, these tensions invite deeper reflection on AI literacy and agency among educators and researchers. This paper explores how two early-career researchers and university teaching staff navigate their agency when engaging with Generative AI (GenAI) tools such as ChatGPT in teaching and research contexts. Using a collaborative autoethnographic approach informed by Vygotsky’s cultural-historical theory, we reflect on our experiences of both utilising and refraining from using GenAI tools across academic practices. Through narrative snapshots, we examine how we balanced the potential of GenAI to support teaching practices and professional growth against ethical concerns related to integrity, privacy, and intellectual property. The study highlights how educators exercise reflective and situated agency in an AI-mediated landscape, demonstrating how everyday academic practices contribute to responsible, human-centred engagement with AI in education.
Introduction
The rapid growth of artificial intelligence, particularly generative AI tools such as ChatGPT, in higher education has reshaped the way educators and researchers engage with the production of knowledge. This rapid change, and the accompanying pressure to introduce and use AI in educational contexts, has not proceeded smoothly. While researchers and educators are intrigued by the possibilities that GenAI tools offer for advancing their careers, they are also aware of the issues such usage might carry (Al-Zahrani, 2023). These include biases in AI algorithms, privacy and data security risks, the risk of diminished human interaction, ethical concerns around safety and intellectual property, effects on teaching and learning outcomes, and over-reliance on technology (Aravantinos et al., 2024; Gulson et al., 2022; Jackson, 2025; Kong et al., 2023). Academics must remain in control of their AI use: the user should act as an autonomous agent who navigates towards their own goals rather than being steered by the tool and its algorithms (Kassens-Noor et al., 2024; Laitinen & Sahlgren, 2021). To preserve the values of higher education, it is crucial to maintain human agency in the AI educational landscape.
Given this focus on AI in higher education, more research is needed on how educators and researchers experience these tools. The issue of AI users’ agency, in particular, calls for further research to deepen understanding of how human agency is exercised during the application of AI tools. In line with this concern, it is significant to understand how early-career researchers and teaching staff in higher education navigate the complex process of using Generative AI (GenAI) tools in their research and teaching practice while remaining agentive. Here, the term “AI tools” refers to software applications and models that generate content, such as ChatGPT, which is the focus of this paper.
In this paper, we aim to examine our experiences with AI use as two early-career researchers and teaching staff. We focus on the ways we have embraced or resisted GenAI tools (specifically ChatGPT) and how we have exercised our agency throughout. To this end, we employ a collaborative autoethnographic method, collaboratively constructing and analysing short narratives, or snapshots, of our practices with ChatGPT in research and teaching in higher education. This qualitative method enables us to examine how we have balanced GenAI use and exercised our agency in navigating this process. Through our collaborative reflections, we show the ways we practise choice and control as GenAI users, for example, when we refrain from using the tool in order to protect academic integrity and when we leverage it for efficiency. This paper thus seeks to shed light on the use of AI and the role of human agency.
Literature Review
OpenAI’s introduction of ChatGPT marked a pivotal moment in GenAI. This language model has been trained on extensive datasets comprising diverse text corpora to emulate human language capabilities (Cascella et al., 2023). However, AI in its earliest form, the question of whether machines can think, dates back to Turing (1950). AI research subsequently expanded into a variety of lines such as “machine learning, informatics, knowledge-based system, and pattern recognition” in the 1980s, with AI becoming a general umbrella term for these developments (Bozkurt et al., 2021, p. 1). Since the beginning of the present century, the use of AI in the field of education has undergone important development (Roll & Wylie, 2016). Recent applications of AI in education include identifying key factors affecting learners’ performance and achievement (Koçoğlu et al., 2017; Yağcı, 2022), enhancing student engagement through technologies that improve learning outcomes (Kokku et al., 2018), fostering students’ AI literacy (Kong et al., 2023), and supporting medical and health informatics education to address the evolving needs of the healthcare industry (Sapci & Sapci, 2020).
AI tools have the potential to significantly enhance teaching practices by expanding teachers’ capabilities and promoting evidence-based teaching. Mollick and Mollick (2023) emphasised that, when thoughtfully implemented, AI applications can improve instructional methods and learning outcomes. Additionally, Chiu et al. (2023) pointed out that AI can support teacher professional development by providing adaptive teaching strategies, improving instructional competencies, and offering automated assessments. In pre-service teacher training, AI-based chatbots create personalised learning experiences, helping educators to develop pedagogical skills through meaningful interactions (Lee & Yeo, 2022). Moreover, AI technologies provide personalised lesson plans and diagnostic questions, allowing teachers to focus more on student engagement and less on content creation (Nicolae et al., 2023). Integrating digital capabilities holistically in the classroom can further enhance teachers’ skills in classroom management, cognitive stimulation, and fostering a positive learning environment, contributing to deeper student engagement (Nicolae et al., 2023).
Moving to the application of AI tools in research, the potential and the possible threats to academic writing have drawn researchers’ attention (see Lund et al., 2023; Zawacki-Richter et al., 2019). Uno et al. (2024) suggest that while AI plays a significant role in academic research by swiftly analysing vast datasets, reducing plagiarism, and enhancing literature reviews, it may also hinder critical thinking and stifle academic creativity. Furthermore, AI usage could lead to complacency among researchers and give rise to machine-generated plagiarism, often called AIgiarism. Building on these concerns around AI’s influence on academic integrity and creativity, additional research has examined the challenges posed by AI-generated content in the peer review process. For example, Casal and Kessler (2023) explored the role of human judgement, accuracy, and research ethics in journal reviewers’ ability to differentiate between human-written and AI-generated texts. They found that reviewers, despite using various evaluation criteria, struggled to accurately distinguish between the two. Additionally, while many editors viewed AI tools as ethically permissible for supporting research processes, some disagreed. The study also discussed potential future research directions related to AI tools and academic publishing. Further, AI-enabled technologies have the potential to transform academic research through advancements in data collection, analysis, and writing (Dogru et al., 2023). However, their widespread adoption raises several ethical and legal challenges in research, particularly regarding intellectual property rights (Dogru et al., 2023).
The ethical issues concerning the use of AI in research extend to data biases: AI algorithms have been found to embed and reinforce gender, race, ethnicity, or disability biases that disadvantage minorities (Dogru et al., 2023; Shrestha & Das, 2022). Privacy is a significant concern when using AI in research, since the collection and processing of data for personalised responses increase the risk of data breaches and the exposure of personal information. Safeguarding data privacy and maintaining control over personal information are becoming even more crucial, as AI introduces heightened threats to privacy (Y. Zhang et al., 2021). The enhanced capabilities of AI allow for more powerful analysis of personal data, potentially infringing on individual privacy rights (Kerry, 2020). Further, the issue of authorship requires academic publishers to create policies addressing AI’s role in authorship and intellectual property, though understanding of these guidelines is still limited (Bozkurt, 2024; Lund & Naheem, 2024).
Among these ethical concerns, the issue of human agency during the application of AI stands out. Studies emphasise that agency is essential for motivation, personal development, and psychological well-being (Calvo et al., 2020). Calvo et al. (2020) further argue that responsible AI must prioritise an understanding of, and be designed with, human agency in mind rather than focusing solely on machine autonomy. There is concern that automating the learning process through AI could undermine learners’ agency and hinder the development of essential agentic skills for the future (Darvishi et al., 2024).
While the impact of digital experiences on human agency is complex and inconsistent rather than straightforward, human agency during the application of AI requires further exploration. This issue is the main concern of the present paper, prompted by critical questions about user agency and the balance of power in human–AI interactions as AI becomes integrated into education and research. Therefore, this paper investigates the authors’ experiences as researchers and teachers with AI tools, shedding light on the perspective of AI users and how they navigate the process. It is important to understand which party, human (teacher/researcher) or AI, holds control in decision-making. In other words, it is essential to determine where the balance lies and how the authors of this paper maintain their agency in interactions with AI.
Theoretical Approach
In this paper, we examine our experiences as early-career researchers engaging with AI tools and navigating our agency from a cultural-historical perspective. We aim to explore how educators navigate and maintain agency in the evolving era of AI in education and research. We position human agency at the core of interaction with AI tools, seeing it as influenced by the social environment, which cultural-historical theory regards as the source of human development.
Human agency is a widely debated concept, frequently explored in fields such as sociology, psychology, and philosophy (Archer, 2000; Bandura, 2001; Foucault, 1982; Giddens, 1984). Psychologists have traditionally approached agency within the narrow context of individuals seeking to assert their will in specific social situations (Blunden, 2023). However, as cultural-historical theorists suggest, agency involves more than self-assertion in social contexts. It includes individuals’ capacity to influence and reframe their situations, establish collaborative relationships, and effect change rather than merely adapting (Blunden, 2023). In defining agency, we draw on the perspective of G. Zhang and Daneshfar (2025), who define the “agency of the two narrators [which] entails the purposeful and professional actions within which actors can fight for their aims, plan and enact actions, and promote their development and a sought-after future” (pp. 3–4). Applying this perspective to the present paper, our agency as the two authors entails the professional and ethical actions within which we can make decisions on how to use AI in research and teaching practice in higher education, thus promoting our professional development and conducting research while maintaining academic integrity.
In addition to agency, we regard social situations and tools as two concepts from cultural-historical theory (CHT) that are useful for understanding our interaction with AI artefacts. The social situation (as part of the social environment) captures the dynamic interaction between an individual and their social environment; in this paper, it specifically concerns individuals’ engagement with AI tools. We define this social situation as the interplay between the researchers’ academic knowledge, knowledge of AI, feedback from the educational environment (such as students), and each interaction with AI tools. This situational approach emphasises that agency does not operate in isolation but is shaped by the social situation, a core perspective in Vygotsky’s CHT.
The next aspect of this paper concerns the importance of tools and signs from a cultural-historical perspective, which are recognised as playing a crucial role in human development, shaping individuals’ development within the social environment (Vygotsky, 1997a). In other words, as individuals engage in the social environment, they simultaneously learn to master and apply cultural tools and signs, furthering their development (Daneshfar, 2023). Cultural tools include elements such as “language, different forms of numeration and counting, mnemotechnic techniques, algebraic symbolism, works of art, writing, schemes, diagrams, maps, blueprints, all sorts of conventional signs, etc.” (Vygotsky, 1997b, p. 85). In this paper, we expand Vygotsky’s list to include modern tools such as GenAI. Tools range from the simple (a spoon) to complex systems with sophisticated capabilities. Regardless of complexity, however, all tools and signs share a common feature: mediation in the process of human development (Veresov, 2014). CHT holds that the role of cultural signs and mediation in human psychological development is to reorganise the higher functions (Fleer & Veresov, 2018); AI tools, especially generative AI, could offer new forms of such mediation.
Method—a Collaborative Autoethnography
As stated earlier, we aim to investigate our research and teaching experiences in connection to the use of AI tools. We selected and utilised a collaborative micro-autoethnography (CMAE), enabling us to gather, collaboratively analyse, and interpret our autobiographical materials (reflections about our AI use) to generate meaningful insights (see Chang et al., 2016; Rangarajan & Daneshfar, 2023; G. Zhang & Daneshfar, 2025).
Micro-ethnography, initially proposed by Smith and Geoffrey (1968), was designed to employ ethnographic methods to study the intricacies of an urban classroom, where “micro” refers to the cultural phenomena’s size rather than the methods. They aimed to highlight the American educational system’s shortcomings through ethnographic methods like interviewing and participant observation, focusing on individual classrooms rather than entire societies (Streeck & Mehus, 2004). Today, micro-ethnography involves capturing short events that allow researchers to examine specific micro-events, uncovering the foundations of social organisation, culture, and interaction at the micro-level of moment-by-moment human activities (McArthur, 2019).
In this paper, our qualitative research approach, a micro-aspect of collaborative autoethnography, includes creating short, focused narratives about the use of AI tools during our research and teaching.
We are both researchers and participants. Data for this paper comprise short, focused narratives about our use of AI tools, both during our PhD studies at an Australian university and in our teaching at the same university. By AI tools, we refer mainly to GenAI tools, specifically ChatGPT. In pursuit of this objective, we followed a series of steps inspired by existing literature (Chang et al., 2016; Hradsky et al., 2022; Pretorius, 2022). To generate data and guide our focused narratives, we decided on key leading questions based on existing literature (see Rangarajan & Daneshfar, 2023; G. Zhang & Daneshfar, 2025). The following questions helped us to generate our narratives on the use of AI tools in our teaching and learning experiences:
• Describe our first encounter with AI tools.
• When and how did this happen?
• What specific tools were involved?
• How have we used AI tools in our research, including during our PhD and for publications?
• What specific tools did we use, and how did they assist or hinder our learning and research processes?
• How have we integrated AI tools into our teaching practices (e.g., as a teaching assistant or in other teaching roles)?
• What specific tools did we use, and how did they influence (from a teaching-learning perspective) our teaching methods and interactions with students?
Then, we each generated preliminary data individually (reflected on our use of AI tools during research and teaching), followed by collaborative discussions and analysis of our narrations during meetings.
Upon preparing the data, the process of individual-collaborative analysis of the narratives started. For data analysis, we followed the steps taken by Rangarajan and Daneshfar (2023) and G. Zhang and Daneshfar (2025). We engaged in preliminary meaning-making by individually re-reading and reviewing our reflections before collaboratively examining them. We began developing themes through individual and collaborative reflections on our stories. Through this process, we ultimately concluded that the data would be best thematised to address the following three questions. Finally, we outlined and documented our interpretations collaboratively.
(1) What are our perceptions of AI tools?
(2) How do we use AI tools in our teaching and research?
(3) Why do we choose to use or not use AI tools?
Discussion of the Narratives
In this section, we analyse and examine our AI-related experiences in research and teaching in higher education. The discussion is guided by our cultural-historical lens, with agency specifically defined as the narrator’s professional and ethical actions in deciding on the use of AI, and by the social environment within which the interaction between the human and the AI tool takes place. We therefore aim to interpret the ways we exercise agency during the dynamic interaction between the individual and AI tools. As discussed in the method section, our AI-related experiences were analysed under the following themes.
(1) What are our perceptions of AI tools?
(2) How do we use AI tools in our teaching and research?
(3) Why do we choose to use or not use AI tools?
In the following sections, we present our discussions and interpretations under each theme (questions), along with samples of our narratives.
What Are Our Perceptions of AI Tools?
In this theme, we draw on our experiences to reflect on our attitudes and perceptions of generative AI (GenAI) tools. Our perceptions are shaped by an evolving understanding of their potential and limitations. The data show that over time, each of us has experienced a change in our perceptions towards the use of AI.
Perceptions of AI in Academic Research
Author 1’s initial perception of GenAI tools in research was marked by strong resistance rooted in unfamiliarity and scepticism. His concerns centred on academic integrity, privacy issues, and the reliability of GenAI outputs. He noted: “My first encounter with ChatGPT as a generative AI tool occurred towards the end of my PhD project. There was a lot of uncertainty surrounding the tool at the time. My initial reaction, given my lack of familiarity with ChatGPT, was to view it as something that could potentially lead to plagiarism in research.”
Author 1’s early reflections also highlight his limited knowledge of the tool and the tool’s inability to meet the depth required for his research. For instance, when he attempted to use ChatGPT to define a key concept within his PhD, he observed superficial results and a lack of theoretical alignment. This further reinforced his reluctance to integrate GenAI into his research.
Similarly, Author 2 approached GenAI with caution during her PhD research, though her resistance stemmed from her relatively solid AI literacy and ethical concerns rather than unfamiliarity. Author 2’s background in AI literacy, gained through her work as a senior research assistant in a large-scale AI literacy project in Hong Kong from 2020 to 2021, had shaped her perspective: “In the very beginning, our team figured out it was the stage of weak AI. That is, for AI to have consciousness as humans, it will be a very long way to go. Therefore, it is not that frightening. Later, in 2022, I attended a presentation from Prof. Sam Seller. He argued that AI could influence humans’ decision-making, and it would be dangerous. At that moment, I questioned whether it is only an algorithm, and whether or not it should be that ‘frightening’.”
As illustrated above, Author 2’s cautious but analytic approach is informed by her working experience with AI technicians and academics. Author 2’s concerns were grounded in AI ethics, privacy risks, and the potential misuse of GenAI in academic settings. Both of us, however, acknowledged the limitations of GenAI in research, including its inability to produce nuanced or contextually relevant outputs, as evidenced by Jackson (2025).
Perceptions of AI in Teaching and Learning
While both of us remained resistant to GenAI in our research, our perceptions of its use in teaching and learning in higher education gradually changed. For Author 2, resistance gave way to cautious acceptance as she integrated GenAI into her teaching practices to help students apply AI in lesson planning and critical evaluation: “It was very interesting that many students told me the AI-generated version was not really helpful, as there were no contexts where learner diversity was taken into account, and it is teacher-centred. These students have more teaching experience. Meanwhile, some education students found it helpful as the structure is neat and has helped them to think of the procedure. . . It was from then I think AI can be helpful for teaching and learning, as long as it is used ethically and appropriately.”
This reflects a turning point in Author 2’s perception, as she began to see the potential of GenAI in teaching, provided it was used ethically and with proper contextual understanding. In addition, it would cultivate students’ AI literacy in terms of responsible use of AI.
In the same vein, Author 1’s perspective also evolved, albeit through a different path. While initially hesitant about GenAI in research, his use of ChatGPT for general tasks, such as drafting emails or social media posts, helped him develop a practical familiarity with the tool: “It is worth noting that my uncertainty about ChatGPT’s output was not as strong when it came to general or everyday tasks. Over time, I began using ChatGPT for day-to-day tasks, ranging from simple ones like drafting emails or social media posts to more significant tasks such as assisting with visa applications or job applications.”
Although Author 1 viewed these tasks as unrelated to his academic work, this daily engagement laid the groundwork for his eventual adoption of GenAI in teaching. Reflecting on student feedback, he began to see how ChatGPT could adapt materials to better meet students’ needs, ensuring clarity and relevance while enhancing the overall learning experience.
Our evolving perceptions of GenAI tools reflect a shift from initial scepticism to gradual acceptance, shaped by ethical concerns, practical uses, and contextual factors, while also demonstrating our active agency in navigating these tools. Author 2’s AI literacy and ethical grounding prompted caution, while Author 1’s engagement with the tools for everyday tasks paved the way for a more nuanced understanding. These shifts demonstrate that perceptions of GenAI are not static but are shaped by ongoing interactions with the technology, reflective practices, and situated uses in research and teaching contexts. It also reflects that we exercised our agency in making decisions on how, when, and to what extent we use AI in higher education. The following sections will further discuss these.
How Do We Use AI Tools in Our Teaching and Research?
This section considers the use of AI tools in two domains: research and teaching. Our reflections reveal that, despite initial resistance or only gradual acceptance over time, we have actively engaged with these tools across our professional journeys.
Using AI in Research
Author 2’s reflections on using AI in her research reveal a cautious but evolving relationship with the tool. Initially, her adoption of GenAI was influenced by observations of peers, many of whom highlighted its capacity to overcome language barriers and produce polished academic writing. As a researcher whose first language is not English, Author 2 raised a question of fairness: despite emerging research celebrating AI’s role in serving equity goals for English-as-second-language learners (Pretorius et al., 2024), it seemed unfair that early-career researchers like her seldom used it. After a period, with these questions in mind, she cautiously started exploring the tool while making every effort to protect her privacy and knowledge property. She recalled: “At that time, my peers, many PhD students, started sharing how AI has changed their life, as it fills the language barriers and provides ‘perfect English’ (see Jackson, 2025, who argued very differently). Same to many of my peers, English is not my native language. I then talked with my previous colleagues and friends with AI background and learned ways to protect my privacy and knowledge property. I started using the paid version in the mid of 2024 while always checking the privacy setting and was still worried about the information leaking. I started getting more information from the tool, ChatGPT. I have been in a lot of worry. It was until recently, after rounds of using it in teaching, I tentatively used it to distinguish synonyms and search literature. If I try to refine some sentences, the key idea/innovative idea will be replaced by a general term. Moreover, I have never fed ChatGPT with more than two sentences.”
This change reflects Author 2’s agency in leveraging GenAI to improve productivity while using it minimally to protect her knowledge property. Author 2’s actions demonstrate a measured approach to integrating ChatGPT into her research practices.
By tentatively engaging with ChatGPT for tasks that enhance academic efficiency, Author 2 reflects a strategic approach to GenAI. Her deliberate testing and eventual application highlight an important dimension of her agency: the ability to navigate between scepticism and utility, critically assessing a tool’s value before integrating it into her workflow. This progression could be an illustration of how external influences (e.g., peer experiences) and internal deliberations (e.g., ethical considerations) shape the trajectory of GenAI adoption in academic research.
Using AI in Teaching
Author 1 reflects that the most useful and engaging part of this practice relates to his teaching at the university. Continuing his earlier snapshot on the use of AI in everyday tasks, he notes: “These tasks allowed me to gain practical experience with the tool and learn how to use it more effectively. This trial-and-error approach later proved beneficial in my teaching, where I was able to apply ChatGPT more confidently.”
This non-career-related application of GenAI tools built Author 1’s understanding of, and a degree of trust in, the tool. He further illustrates: “Throughout my teaching at the university, I have used ChatGPT for tutorials and course preparation. I approached this tool with a more open and exploratory mindset compared to its use in my PhD and research. I integrated ChatGPT to refine PowerPoint presentations, clarify complex concepts, and align my teaching materials with the course content. ChatGPT helped me simplify difficult theoretical concepts, making them more accessible for my students. I used the tool to break down complex terms and provide definitions and examples that were easier for students to understand. Additionally, ChatGPT assisted in generating thought-provoking questions to encourage class discussions and student engagement. However, I was careful to ensure that the content remained aligned with the original material to avoid distorting its meaning.”
Author 1’s reflection highlights his exploratory approach to integrating ChatGPT into his teaching practices. Unlike its application in his PhD research, where scepticism prevailed, his use of ChatGPT for teaching reveals a shift in mindset. This openness could be a demonstration of his agency in leveraging the tool to align with his pedagogical goals.
Author 1’s account of simplifying complex theoretical concepts into accessible content underscores his proactive role in shaping the tool’s functionality to meet the needs of his students. For example, his use of ChatGPT to refine PowerPoint presentations and clarify difficult ideas highlights a deliberate effort to enhance teaching materials. By breaking down theoretical concepts and generating relatable examples, Author 1 exemplifies how GenAI can act as a support, facilitating the translation of abstract ideas into student-friendly formats.
From an agency perspective, this reflection demonstrates Author 1’s capacity to move beyond being a mere consumer of AI-generated content. He positions himself as an active user who critically evaluates the outputs to ensure alignment with the course objectives. This adaptability reflects not just acceptance of GenAI tools but also a sense of control and purposeful use.
Author 1 continues: “In the early stages, I mainly used ChatGPT for basic tasks such as rephrasing, summarising content, and creating clear explanations. For instance, I used the tool to condense complex theoretical ideas into bullet points to make the material more digestible for students. Over time, I expanded my use of ChatGPT to generate more dynamic content, such as discussion prompts, engaging learning activities, and reflective questions. I also used the tool to create online games like Kahoot, which made my classes more interactive and fun. By incorporating ChatGPT, I enhanced the quality and depth of my teaching materials while adding an element of enjoyment to the learning experience.”
In the early stages, Author 1’s use of ChatGPT focused on relatively basic tasks, such as rephrasing and summarising, demonstrating a cautious yet strategic approach. This phase reflects progress in his adoption of GenAI. For instance, distilling complex theoretical ideas into bullet points illustrates a pragmatic and student-centred use of ChatGPT.
Over time, Author 1’s expanded use of ChatGPT reveals a deeper integration of the tool into his teaching methods. His creation of discussion prompts, engaging learning activities, and reflective questions points to a more dynamic and interactive pedagogical approach. This shift from simplifying content to generating innovative teaching strategies signals a shift in his confidence and mastery over the tool’s capabilities. Notably, his development of interactive games like Kahoot showcases an inventive use of GenAI to foster an engaging classroom environment.
In her teaching, Author 2 adopted a proactive approach to integrating ChatGPT, particularly through supporting students’ tasks. By designing an activity that required students to generate, critically evaluate, and refine a lesson plan using ChatGPT, she positioned the tool as a means of facilitating student agency and critical thinking rather than as a replacement for student input and thinking. This approach reflects her agency in teaching students not just how to use GenAI tools but also how to engage with them critically, thus fostering their AI literacy as future educators (Kong et al., 2021; Kong et al., 2023).
My first initiative was integrating AI into assessment. In an education unit, students were asked to plan a lesson. The unit required students to generate a lesson plan through ChatGPT and then critically evaluate and refine it. It was very interesting that many students told me the AI-generated version was not really helpful, as it took no account of context or learner diversity and was teacher-centred. These students had more teaching experience. Meanwhile, some education students found it helpful, as the structure was neat and helped them think through the procedure. In the assessment, students were required to include clear AI declarations, cite both the AI-generated lesson and their own revised version, and reflect on their use of AI. It was from then on that I began to think AI can be helpful for teaching and learning, as long as it is used ethically and appropriately. After that, I have also used AI tools with students for exploring resources and interacting in asynchronous learning. . . The approach I took was to let students know how to use it in a way that suits their learning, but never to overuse it.
Author 2’s snapshots highlight the varying effectiveness of AI-generated content for different learners. As she states above, higher education students with more teaching experience found the AI-generated lesson plans inadequate, while less experienced students appreciated the structured guidance ChatGPT provided. These insights demonstrate Author 2’s agency as an educator in introducing pedagogical innovations when teaching education students.
Additionally, by requiring students to document their use of ChatGPT and critically analyse its outputs, she fosters responsible and reflective use of GenAI among her students. This aligns with her broader approach of encouraging balanced usage—helping students understand how to use AI tools effectively without over-relying on them.
Why Do We Choose to Use or Not Use AI Tools?
This theme explores the underlying reasons behind our decisions to embrace or resist GenAI tools. The data analysis highlights ethical concerns, practical needs, and our exercise of agency. Author 1’s reflections highlight several reasons behind his choice not to rely on ChatGPT during his PhD research.
I chose not to use the tool for my PhD for several reasons. First, by the time I was introduced to ChatGPT, I had already completed the final drafts of my literature review, methodology, data analysis, and findings chapters. I did not need any assistance from it at that point. Although I did experiment with the tool to see how it might apply to parts of my thesis, I found the results unconvincing.
These reasons are rooted in both practical and research-related considerations. By the time he encountered the tool, the majority of his PhD work was complete, which reduced the need for AI assistance. This temporal aspect reflects a practical limitation in adopting new technologies that emerge late in a research process.
For example, I asked ChatGPT to provide a definition of a concept I was researching, based on a specific theoretical framework. The responses it generated were quite superficial and did not align with the depth of understanding I had developed after working on the concept for nearly four years. I knew that the tool could offer a general overview of the concept, but it was unable to deliver the nuanced, theory-specific definition I needed. This experience was one of the main reasons I couldn’t rely on ChatGPT for my PhD work. I also acknowledge that I was a novice user of the tool at the time. With my current experience, I believe I could have used it more effectively in certain areas.
Moreover, Author 1’s attempt to integrate ChatGPT into his research revealed the tool’s limitations relative to his expectations. For instance, his experiment with using ChatGPT to define a theoretical concept exposed its inability to meet the specific demands of the academic research Author 1 was pursuing. Author 1 believed that while the tool could generate broad, general definitions, it lacked the depth and theoretical grounding necessary for his research, which had been refined over years of rigorous study.
This experience reveals a conflict between the capabilities of GenAI and the expectations of scholarly research. Author 1’s reflections illustrate the importance of human expertise and the irreplaceable role of deep, theory-specific knowledge in academic work. Furthermore, his acknowledgement of being a novice user at the time suggests that individual familiarity with the tool influences its perceived utility.
Author 1’s reasons for using GenAI in teaching contrast with his hesitations in research, revealing a more proactive and pragmatic engagement with the tool. As his familiarity with ChatGPT grew, so did his ability to integrate it into his teaching practices. His reflections highlight how student feedback played a pivotal role in shaping his decisions. For example, students compared the original lecture slides with the refined ones created with ChatGPT. This feedback reinforced Author 1’s decision to continue using the tool and adapt its outputs to suit his pedagogical goals.
In the early stages, I mainly used ChatGPT for basic tasks such as rephrasing, summarising content, and creating clear explanations. For instance, I used the tool to condense complex theoretical ideas into bullet points to make the material more digestible for students. However, I was careful to ensure that the content remained aligned with the original material to avoid distorting its meaning. As I became more familiar with the tool, I began integrating ChatGPT more interactively into my teaching practices. For example, after reflecting on student feedback, I realised that they compared the original Moodle slides with the refined ones I presented in class. This feedback helped me see how ChatGPT allowed me to adapt my materials based on the students’ needs, ensuring clarity and relevance in my lessons. By incorporating ChatGPT, I enhanced the quality and depth of my teaching materials while adding an element of enjoyment to the learning experience.
In this context, Author 1’s agency is evident in his capacity to mediate between the tool’s outputs and the specific requirements of his teaching. For instance, while ChatGPT helped refine and simplify teaching materials, Author 1 attempted to ensure that the content remained faithful to the original material, avoiding any distortion of meaning. This deliberate act of alignment reflects his critical awareness and responsibility as an educator, demonstrating that the use of AI in teaching involves more than simply accepting its outputs—it requires thoughtful integration and adaptation.
Author 2’s reflections reveal a multifaceted approach to the use of GenAI tools, characterised by a deep awareness of ethical considerations, a cautious adoption of technology, and a commitment to fostering responsible practices among peers and students. Her decisions regarding the use of AI are underpinned by concerns about data privacy, intellectual property, and the broader implications of AI integration into academic practices.
I also heard that some doctoral students are not aware of the issue of privacy. One told me he sent a whole chapter to ChatGPT, and it got over 90% similarity. I told them to please protect their data, as it can be stolen. I also resisted several invitations from LinkedIn on training AI, as I really do not want to be an accomplice in the “idea-stealing scheme”. I started using the paid version in mid-2024, always checking the privacy settings, and was still worried about information leaking. Australia, where I am located, lags behind in policy initiatives, at least in my institution. On the one hand, I started getting more information from the tool, ChatGPT; on the other hand, I do not feel very at ease. As someone who has worked on AI ethics, I know the loopholes and vulnerabilities of AI applications. This triggers my reluctance to use them; however, sometimes I believe I am right in thinking this way.
A central theme in Author 2’s narrative is her grassroots-level advocacy for ethical AI use. She actively encourages her peers and students to engage critically with AI tools, emphasising the importance of protecting their intellectual work and understanding the potential risks. For instance, she shares an example of a doctoral student unknowingly compromising the privacy of their work by submitting an entire chapter to ChatGPT, resulting in high similarity scores. This incident prompted Author 2 to educate her peers about safeguarding their data.
Author 2’s scepticism about AI’s ethical dimensions is also reflected in her refusal to participate in LinkedIn AI training programmes, which she perceives as potential “idea-stealing schemes”. This resistance demonstrates her agency in questioning the motives of larger AI-driven initiatives, positioning herself as a critical actor rather than a passive adopter of technology. Her reluctance to fully embrace AI, despite its utility, is shaped by her understanding of its vulnerabilities, particularly regarding intellectual property and data security.
In teaching, Author 2 balances the integration of AI tools with a firm stance on responsible usage. She not only uses AI tools herself but also influences students to adopt a restrained and ethical approach. For example, she advised students against over-reliance on AI for writing tasks, warning them about the risks of their work becoming public or losing ownership of their ideas. By setting boundaries on how students interact with AI, Author 2 emphasises the importance of maintaining agency over their creative and intellectual outputs.
Author 2’s reflections also reveal a nuanced tension between the opportunities AI provides and the discomfort it creates. Although she uses ChatGPT for obtaining information, her persistent concerns about data privacy and ethical loopholes lead her to question whether her cautious approach is the “right way”. This ambivalence highlights her agency in AI adoption, where the desire to leverage its benefits is tempered by an acute awareness of its limitations and risks.
Conclusion
This study examines how we, as early-career researchers and educators, have exercised our agency to navigate the challenges and opportunities presented by GenAI tools in our research and teaching practices. Despite coming from different backgrounds and having varying levels of prior AI literacy, we both actively engaged with these tools to enhance our teaching and research. We used AI to simplify complex concepts, refine teaching materials, facilitate interactive classroom experiences, and support students’ critical engagement with AI. In research, we applied generative AI selectively for tasks such as refining language and conducting preliminary literature reviews, making sure to uphold academic integrity, protect our privacy, and safeguard intellectual property. These practices reflect our conscious effort to balance the benefits of AI with its limitations and ethical considerations. Through our pedagogical practices, we not only enhanced teaching and learning but also helped foster students’ critical AI literacy.
Our experiences offer insights for policymakers regarding the need for institutional support that helps educators adapt to rapidly evolving AI technologies. By sharing our journeys, we hope to shed light on the lived realities of navigating this AI landscape and to contribute to more informed and supportive policies that empower educators and students alike.
