Abstract
Model texts have been widely recognized as an effective feedback instrument for enhancing second-language (L2) writing. Recent advances in generative artificial intelligence (GenAI), capable of producing humanlike text in response to user prompts, offer an accessible and efficient alternative for generating customized model texts tailored to L2 learners’ specific goals and proficiency levels. Despite this potential, little empirical research has examined L2 students’ practices and perceptions when leveraging GenAI to create model texts that support their revision. To address this gap, this study adopts a process approach to investigate students’ prompting strategies for eliciting AI-generated model texts, their noticing and incorporation of model text features, and their perceptions of this process. Data were collected from 12 university students’ note-taking sheets, records of interaction with Kimi (a GenAI tool developed in China), written essays, and semi-structured interviews. The findings revealed that, although students employed only two prompts on average, they embedded multiple specifications to elicit model texts from GenAI that aligned with their varied needs. They concentrated primarily on lexical and content-related features of the GenAI-generated model texts and showed relatively high rates of incorporating the noticed features into subsequent revisions. While students appreciated the personalization and flexibility, they encountered challenges in prompting, in critically evaluating the AI-generated content, and in balancing text incorporation with ethical considerations. This study offers pedagogical implications for integrating GenAI into model-texts-as-a-feedback-instrument (MTFI) tasks and for fostering students’ effective and responsible use of GenAI in such practices.