Abstract
This article examines the rhetorical and pedagogical dynamics that emerge when a human engages a conversational AI through the interview form. Although machine-generated language can resemble dialogue, the resemblance is only surface deep: the system performs understanding without interiority, intention, or the capacity for recognition. Through a structured series of prompts and responses, the article analyzes how the system simulates empathy, authority, and neutrality, and how these performances shape the human interlocutor’s interpretive labor. The curated exchanges reveal a consistent pattern. The system smooths conflict, reframes ambiguity as something to be resolved, and gravitates toward consensus even when tension or disagreement is pedagogically necessary. These tendencies illuminate the structural limitations of conversational AI in contexts where learning depends on mutuality, risk, and the capacity to stay with the difficult. Rather than treating the AI’s responses as data, the article reads them as rhetorical artifacts that expose the architecture of machine conversation and the human work required to sustain the illusion of dialogue. The analysis concludes with implications for practice, offering adult educators concrete strategies for using structured encounters with conversational AI to support critical AI literacy. These include designing prompts that surface interpretive habits, comparing AI-generated feedback with human feedback to highlight the limits of simulated recognition, and using AI’s tendency toward consensus to teach learners how to engage ambiguity and democratic tension. By foregrounding the rhetorical nature of machine conversation, the article provides a conceptual and practical framework for helping adult learners interpret AI-generated language with discernment and ethical awareness.
