Abstract
This study presents a framework for customizing system prompts to design Generative Pre-Trained Transformers (GPTs) for teaching purposes. As GenAI tools gain traction in education, educators must understand the limitations and operational logic of GPTs to create learning experiences that are helpful, honest, and harmless; many educators still use default models without contextual training, which risks harmful outputs. To address this, the study offers a framework that helps educators translate learning theories and objectives into structured prompts and curated knowledge bases, enabling systematic GPT customization. A case study illustrates its application in teaching the rhetorical goals of research communication. The customized GPT outperformed the default model across all pedagogical alignment dimensions, particularly helpfulness and harmlessness, but remained vulnerable in honesty, especially to overgeneralization, fabrication, and bias. The study concludes with mitigation strategies to improve alignment, offering educators an accessible way to leverage their expertise and reduce risk without complex technical interventions.
