Abstract
While recent research has investigated the effectiveness of generative artificial intelligence (GenAI) in providing feedback on students’ writing, the role of prompt engineering remains underexplored. This study enhanced the specificity and customizability of GenAI prompts by integrating standards-based writing descriptors from China's Standards of English Language Ability (CSE) into prompt design. A total of 164 second-year English majors were randomly assigned to three groups that used different prompt types to generate feedback during a 10-week writing course: an experimental group (n = 76) using CSE-based prompts, a control group (n = 76) using conventional prompts, and a third group (n = 12; within-subjects design) using both prompt types interchangeably. Results indicated that the standards-based prompts led to significantly greater improvement in the quality of students’ revised essays than conventional prompts. Furthermore, students reported more positive perceptions of and experiences with the feedback generated using CSE-based prompts. This study demonstrates that incorporating standards-based descriptors into prompt engineering can effectively enhance GenAI-generated writing feedback.
