Abstract
Feedback on multilingual writing is a central yet contested feature of English language education, historically guided by narrow, prescriptive norms. Drawing on our experience as researchers and editors in the fields of global Englishes and academic writing, this viewpoint article critiques conventional feedback practices that privilege prescriptive native-speaker standards and marginalise multilingual writers. We argue for a reconceptualisation of feedback as an inclusive, dialogic process that aims to improve rhetorical effectiveness and mirrors the linguistically diverse contexts within which the written language is used. The paper explores how teacher-, peer- and artificial intelligence-generated feedback can reinforce or resist normative bias, and examines the structural challenges that inhibit change, including high-stakes expectations and institutional pressures. We propose that feedback, when guided by global Englishes and translingual principles, and supported by emerging forms of critical digital literacy, can empower writers to participate confidently and equitably in global and academic discourses.
Introduction
Feedback on writing remains one of the most researched, yet contested, aspects of multilingual education. In this article, we draw on our experience as researchers and editors in the fields of global Englishes and academic writing to argue that feedback should be reconceptualised as an inclusive, dialogic process oriented towards rhetorical effectiveness rather than conformity to native-speaker norms.
We first examine how feedback practices have historically been shaped by prescriptive norms and consider how global Englishes perspectives reorient feedback towards rhetorical effectiveness. We then explore the structural and pedagogical challenges involved in shifting towards more inclusive approaches, including the emerging role of generative artificial intelligence (GenAI). Finally, we outline implications for equitable feedback practices and suggest directions for future research and teacher education.
Reconceptualising Feedback: From Norm Enforcement to Rhetorical Empowerment
Feedback on writing has traditionally been conceived as a process of error correction, governed by fixed standards of British or American English (McKinley and Rose, 2018). While such standards have historically functioned as proxies for quality, they often alienate writers whose linguistic repertoires fall outside these rigid norms. This normativity is especially entrenched in academic publishing, which then trickles down to the norms that underpin academic English writing curricula. As editors, we have observed how ‘good English’ continues to be equated with native-speaker usage (McKinley and Rose, 2023), and how feedback to multilingual writers often becomes punitive rather than constructive. Instead of enabling clarity and rhetorical effectiveness, feedback under these assumptions can marginalise multilingual voices.
Global Englishes research (Rose and McKinley, 2025a; Rose et al., 2021) offers a compelling counterpoint to standard language ideology. From this perspective, community-based deviations from so-called standard English are not errors, but rather contextually shaped features of globalised English use (Dobinson et al., 2018; McKinley and Rose, 2018; Tardy et al., 2021). This reconceptualisation underpins recent calls to reframe feedback as more responsive to linguistic diversity across classrooms, peer review and evolving communicative contexts. In a recent reflection, we further positioned multilingual writing as a key site in which standard language ideologies are reproduced, and where rhetorical agency can either be constrained or supported depending on feedback practices (Rose and McKinley, 2025b).
In McKinley and Rose (2018), we critique how the construct of ‘nativeness’ continues to shape academic writing expectations, reinforcing a standard language ideology that marginalises L2 writers. We argue for shifting attention from whether a text aligns with native norms to how effectively it achieves its rhetorical purpose in context. Supporting this position, Dobinson et al. (2018), in their study of lecturer feedback at an Australian university, reveal how standardised rubrics often overlook or misclassify features of World Englishes in student writing, offering little space for the recognition of linguistic diversity. Their findings point to a need for greater transcultural awareness among educators and more nuanced approaches to evaluating student texts. Similarly, Tardy et al. (2021) explore how incorporating global Englishes content into academic writing instruction can raise students’ awareness of variation while also challenging instructors to reflect critically on their own linguistic assumptions. This research highlights that effective feedback must go beyond enforcing correctness. Across this research, rhetorical intent, intelligibility and audience awareness emerge as more salient criteria than strict adherence to prescriptive norms.
This shift is increasingly supported in applied linguistics. Hyland and Jiang (2020) advocate for feedback practices that recognise the communicative value of non-native features, while Canagarajah (2012) argues for valuing translingual practice, in which writers draw strategically on their full linguistic repertoires rather than suppressing them to meet monolingual norms.
Feedback and the Problem of Normativity
The problem of normativity in feedback is especially acute when the default mode of teacher or peer response is corrective. Conventional feedback on L2 writing, particularly in educational contexts, has long centred on identifying and eliminating linguistic error, a practice Lee (2023) critiques as the enduring dominance of ‘written corrective feedback’ (WCF). The very term reinforces an assumption that English writing should conform to static, idealised norms, most often modelled on British or American English. Yet such assumptions fail to reflect the reality of English as a global language, shaped and used by diverse communities across sociolinguistic contexts. This creates an implicit hierarchy in which some forms of English are framed as inherently more legitimate than others.
From a global Englishes perspective, feedback that narrowly emphasises correctness reproduces native-speakerist ideologies, even when given by well-intentioned teachers or peers. Lee (2023) argues that WCF not only conflates language with error, but also promotes a deficit view of multilingual writers, positioning them as deviating from a presumed standard rather than as agentive users of English. In response, she proposes shifting the framing from WCF to feedback on language use, an orientation that treats response to writing as support for meaning-making rather than the policing of form.
This is not to say that grammar correction should be ignored completely in feedback. Instead, global Englishes research emphasises that using English as a lingua franca is not an ‘anything goes policy’ (Jenkins, 2012). The successful use of English as a global language is governed by a need to be intelligible and comprehensible to a diverse global audience. Thus, if the grammaticality of a learner's writing interferes with this central communicative aim, a teacher may wish to address it in their feedback to help develop the writer. Such clarification distinguishes between error correction that facilitates meaning-making and correction that enforces conformity, reinforcing that the problem lies not with grammar attention per se but with its uncritical, prescriptive application.
These critiques and contradictions expose a central problem: feedback practices that default to correctness often obscure rather than clarify what constitutes effective writing in a globalised world. Focusing solely on prescribed norms risks erasing variation, voice and identity in multilingual texts. A global Englishes-informed pedagogy demands greater reflexivity about what is being corrected, why and to what end. Moving towards feedback on language use allows educators to consider not only whether language is accurate, but also whether it is effective for its intended audience and purpose.
From Prescriptive Correction to Inclusive Pedagogy
While critiques of normative feedback are gaining traction, they have yet to be fully translated into classroom practice. Xiao and Lee (2024) argue that despite the theoretical advances of the global Englishes paradigm, the dominance of WCF continues to shape pedagogy in ways that marginalise multilingual writers. Feedback practices grounded in prescriptive norms not only constrain teacher agency but also limit students’ development as creative and confident writers. In response, they propose a global Englishes language teaching (GELT)-informed feedback pedagogy that prioritises language awareness, rhetorical flexibility and communicative effectiveness. Such an approach aligns inclusive pedagogy with principles of empowerment, visibility of diverse linguistic resources and recognition of writer identity.
A core principle of this approach is the recognition that language norms are not universal but context-dependent. While conventional WCF tends to reduce feedback to binary judgements of correctness and error, GELT-oriented feedback shifts the emphasis to supporting writers in navigating varied rhetorical expectations. For example, a teacher might respond to a student's unconventional phrasing by asking about intended audience and meaning before suggesting revisions, prompting a dialogue about rhetorical choices rather than defaulting to correction. In practice, this may involve overlooking article misuse or non-standard collocations in a reflective assignment where meaning is clear and the purpose is exploratory, while foregrounding coherence, stance and audience alignment. By contrast, in a high-stakes genre such as a grant abstract or policy brief, feedback may focus on clarity of reference, information sequencing or potential ambiguity that could impede uptake by an international readership. In both cases, the focus of feedback shifts from eliminating surface-level ‘errors’ to supporting communicative effectiveness in relation to genre and purpose.
This approach goes beyond permissiveness to actively support strategic language choices. It represents a reorientation of feedback towards meaning-making, audience awareness and rhetorical sensitivity (see McKinley, 2022). Rather than correcting deviations, teachers are encouraged to engage with how and why students use language the way they do, offering guidance on how those choices function within specific communicative contexts. Xiao and Lee (2024) show that such feedback better prepares learners for the realities of English use in globalised, multilingual environments, where clarity and pragmatism often matter more than strict adherence to prescriptive norms.
The problem becomes further complicated in contexts where local English varieties are emerging but remain under-codified. Hamid and Baldauf (2013) highlight how, in the Bangladeshi context, feedback practices often rely on vague or unexamined criteria for determining what counts as an error versus a legitimate varietal feature. Teachers, positioned as gatekeepers of language standards, must navigate unclear boundaries between innovation and incorrectness in contexts where policy may prescribe exocentric norms, but classroom realities reflect localised English usage. The authors argue that this ambiguity places teachers in a double bind: expected to enforce standards they may themselves only partially command, while lacking institutional support to recognise and validate emerging local forms.
Equally important is the recognition of feedback as a social and emotional process. The WCF model, with its focus on error correction, has been shown to induce anxiety and erode confidence, especially among multilingual writers striving to meet unrealistic expectations (Lee, 2023). GELT-informed feedback, by contrast, invites a more dialogic and collaborative dynamic in which students are positioned as agentive users of English rather than deficient learners. Developing this orientation in teacher education is critical, as teachers may require structured opportunities to examine their linguistic assumptions, observe model feedback dialogues and practice responsive commenting strategies.
To embed such practices meaningfully, Xiao and Lee (2024) call for systemic changes in teacher education. Teachers must be equipped not only with the linguistic awareness to recognise global English variation but also with the pedagogical tools to provide inclusive, rhetorically focused feedback. This includes developing feedback literacy that is attuned to diversity in language use, genre and student identity. It also involves confronting the lingering influence of native-speakerism in institutional assessment practices, which often pressure teachers to correct rather than guide. In such environments, teacher agency may be constrained, and resisting prescriptive norms can require collective dialogue and institutional support.
Inclusive feedback pedagogy is not about abandoning standards altogether, but rather about interrogating whose standards are being applied, for what purposes and with what consequences for writers.
Generative Artificial Intelligence and the Reinforcement of Normative Bias
The challenges of normativity discussed above are amplified when feedback processes are partially automated.
The rise of GenAI has introduced new possibilities, and new risks, into the feedback landscape. Tools such as ChatGPT, Grammarly and Microsoft Editor are increasingly used to support writing development, offering grammar corrections, rewordings and stylistic enhancements. While these tools may appear neutral, in collaboration with colleagues Seongyong Lee and Jaeho Jeon (Lee et al., 2025), we show that GenAI systems are far from objective. Trained predominantly on standardised corpora of British and American English, such models tend to default to monolingual, native-speaker norms. Without careful intervention, artificial intelligence (AI) feedback risks scaling up the very normativity that global Englishes research seeks to dismantle.
From a GELT perspective, the problem is not simply technical but ideological. In Lee et al. (2025), we argue that GenAI often reinforces linguistic hierarchies by marking non-standard usages as incorrect, even when those usages reflect legitimate varieties or intentional rhetorical choices. For example, a large language model (LLM) may automatically rewrite plural noun forms, verb choices or discourse markers that are widely accepted in lingua franca academic communication, framing them as deficiencies rather than options. When used uncritically, such feedback encourages writers to defer to algorithmic authority rather than to consider audience expectations or intelligibility. However, when prompted to comment on clarity or rhetorical impact rather than correctness, the same tools can support more agentive decision-making by the writer. This has significant implications for multilingual writers. Rather than supporting them in expressing meaning across diverse contexts, AI feedback often pushes them towards a narrow ideal of correctness, suppressing voice and flattening variation. The result is a subtle but powerful re-inscription of native normativity, now automated and operating at scale. Recent work on structured peer feedback further demonstrates how dialogic, identity-sensitive interaction can support writers’ rhetorical development in ways that resist homogenisation and preserve authorial presence (McKinley et al., 2025).
Crucially, the Lee et al. (2025) study does not call for abandoning AI in language education but rather for recalibrating how these tools are used. Their study demonstrates how GenAI can be adapted through prompt engineering and the integration of global Englishes-informed corpora to produce feedback more attuned to local contexts and communicative goals. In one case, an LLM trained to assess student writing from a Korean English perspective provided significantly more nuanced and context-sensitive feedback than models relying on generic standards. Such cases show how teacher agency and prompting literacy can mitigate algorithmic bias and support contextually appropriate voice.
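The contrast between correction-oriented and global Englishes-informed prompting can be sketched in code. The following Python fragment is a hypothetical illustration only: the function names and prompt wording are our own assumptions for exposition, not the prompts used by Lee et al. (2025). It shows how the same tool can be steered either towards enforcing standard norms or towards commenting on intelligibility and rhetorical effect.

```python
# Hypothetical sketch of two prompting strategies for LLM-based writing
# feedback. The wording is illustrative; it does not reproduce the prompt
# set reported in Lee et al. (2025).

def correction_prompt(text: str) -> str:
    """Default, correctness-oriented prompt (the pattern critiqued above)."""
    return (
        "Correct all grammatical errors in the following text so that it "
        "conforms to standard English:\n\n" + text
    )

def gelt_prompt(text: str,
                audience: str = "an international academic readership") -> str:
    """Global Englishes-informed prompt: foreground intelligibility and
    rhetorical effect, and flag only features that impede communication."""
    return (
        "You are giving feedback to a multilingual writer. Do not rewrite "
        "the text to match British or American native-speaker norms. "
        f"Comment on how clearly it communicates to {audience}, note any "
        "passage where meaning may be ambiguous, and explain the likely "
        "rhetorical effect of the writer's choices:\n\n" + text
    )

sample = "This research discuss how feedbacks are given in our context."
print(gelt_prompt(sample))
```

The design point is that the feedback orientation lives in the prompt, not the model: the second template asks for dialogue about audience and ambiguity rather than a corrected text, leaving revision decisions with the writer.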
Still, these adaptations remain the exception rather than the norm. Without explicit attention to linguistic diversity in AI training data, and without frameworks like GELT to inform their use, GenAI tools will continue to reflect dominant ideologies of language. Linguistic bias towards standard English is built into the architectures of the models and the assumptions of the developers; thus, these technological ‘solutions’ to feedback must be used critically by teachers seeking to move beyond mere error correction.
The broader implication is that AI feedback is not separate from the challenges of teacher or peer feedback: it is part of the same ecology. If educators rely uncritically on automated suggestions, they risk amplifying the same deficit discourses that traditional WCF perpetuates. Conversely, if GenAI tools are used as part of a pedagogically grounded, global Englishes-informed approach, they can support writers in navigating diverse rhetorical norms and developing their own voice. To enable this, teacher education programmes will likely need to incorporate opportunities to develop critical digital literacy and basic prompting strategies, ensuring teachers can use AI intentionally rather than deferentially.
To do so, however, educators need not only technological access but also critical digital literacy. Teachers must be prepared to question the linguistic assumptions embedded in the tools they use and to advocate for AI that reflects the pluralistic realities of English today. Surveying teacher beliefs about language variation and legitimacy may also be necessary, as entrenched attitudes often shape how AI feedback is interpreted and applied.
Structural and Systemic Challenges to Change
While pedagogical and technological shifts hold promise for more inclusive feedback, their uptake is constrained by powerful systemic forces. As Hamid and Baldauf (2013) emphasise, the question of what counts as an ‘error’ versus a legitimate varietal feature of English is not just pedagogical or theoretical; it is deeply institutional. Educational policies, editorial practices and assessment regimes often continue to enforce exocentric norms, reproducing a narrow vision of English. Even when educators are sympathetic to more inclusive approaches, they often lack the professional autonomy or institutional support to implement them. Resisting such pressures is frequently easier said than done, particularly when feedback practices are tied to accountability frameworks and standardised outcomes.
Economic inequities further complicate the feedback landscape. L2 writers working in under-resourced institutions or countries often lack access to robust writing support, including tutoring, mentorship or developmental editing. In response, a growing industry of commercial editing services markets itself to these writers, promising ‘native-like’ English. Yet such services often position writers as deficient in their abilities, rather than supporting them to develop their own academic voice (McKinley and Rose, 2023). As editors, we have seen how these services rarely engage with the substance of ideas, instead focusing narrowly on surface-level correction, further entrenching a view of writing as a technical skill rather than a rhetorical act.
These systemic challenges are magnified in high-stakes writing contexts, such as examinations, graduation theses, official reports, grant applications and research publications, where evaluative criteria may be rigidly tied to prescriptive norms. In such situations, writers can face a dilemma: conform to expectations to secure legitimacy, or risk marginalisation by attempting to disrupt them. This tension highlights the need for strategic negotiation, in which writers learn to operate within dominant expectations while gradually expanding what counts as acceptable academic performance.
Importantly, systemic constraints also shape how emerging technologies like GenAI are adopted. Institutions may promote AI tools for efficiency or cost-saving, without considering the biases embedded in their training data. As we discussed in the previous section, without careful prompt engineering, these tools risk scaling up the same hierarchies and exclusions already embedded in human feedback practices.
Beyond classrooms, editorial and peer review systems also play a gatekeeping role, as decisions about clarity, ‘readability’ or ‘appropriateness’ often reflect unexamined standard language ideologies. From our own experience as editors, we have observed that reviewers may conflate linguistic conformity with scholarly merit, inadvertently discouraging rhetorical experimentation or signals of linguistic identity. Engaging reviewers in conversations about linguistic diversity and encouraging criteria that prioritise intelligibility and contribution can help shift these gatekeeping practices over time.
Addressing these systemic challenges requires more than individual shifts in pedagogy or technology. It calls for structural reform to rethink how linguistic standards are defined, how writing is assessed and how institutions support diverse writers. It also demands a revaluation of feedback itself: not as a tool of correction, but as a mechanism for supporting equitable participation in global knowledge production.
Towards Feedback for Equity and Empowerment
If feedback is to serve writers in a multilingual and globalised world, it must move beyond its traditional role as a corrective mechanism and become a tool for empowerment. Across classroom, institutional and technological contexts, we have argued for a reconceptualisation of feedback as a dialogic, situated and inclusive process that enables rather than polices rhetorical agency.
A key principle in this shift is the decentring of prescriptive norms. As Lee (2023) and Xiao and Lee (2024) both argue, multilingual writers benefit far more from feedback that addresses language use and rhetorical effectiveness in context than from feedback that measures their writing against an idealised native-speaker standard.
This reorientation is not a rejection of quality, but a redefinition of it. Informed by global Englishes and translingual perspectives, feedback can become a generative process that affirms linguistic diversity and supports writers in making strategic choices appropriate to their context, goals and audience. Our recent argument has also positioned writing as a critical site in which standard language ideologies are reproduced, emphasising the need for systemic shifts in assessment practices and institutional expectations (Rose and McKinley, 2025b).
In practical terms, such an orientation may involve (a) embedding classroom discussions about audience and rhetorical purpose, (b) providing brief reflective prompts alongside feedback to support decision-making (e.g., ‘Who do you imagine the target audience is here?’; ‘Which of these suggested revisions best reflects your intended meaning, and why?’), (c) integrating peer review activities that focus on rhetorical effects rather than surface correction and (d) offering short exemplars of diverse yet intelligible Englishes to normalise variation.
Such feedback must also be dialogic, inviting negotiation and reflection rather than unidirectional correction. In pedagogical terms, this means cultivating feedback literacy among both teachers and students. Teachers need support in recognising and valuing diverse Englishes, understanding how to provide rhetorically informed responses and resisting institutional pressures to conform to prescriptive norms. Students, in turn, must be encouraged to see feedback as an opportunity for growth, not as a list of faults to be fixed. This calls for a rethinking of classroom power dynamics, positioning feedback as a shared endeavour in meaning-making.
Technology, too, has a role to play, if deployed critically. GenAI tools can reinforce linguistic hierarchies when left unchecked, but they also have the potential to offer context-sensitive feedback when guided by global Englishes-informed frameworks. The challenge lies in designing, adapting and using these tools in ways that support rather than suppress writer identity and stylistic diversity.
Crucially, feedback reform cannot happen in isolation. It must be supported by systemic changes in how writing is taught, assessed and published. Institutions must revisit the language ideologies embedded in curricula, peer review practices and editorial standards. Only then can inclusive feedback principles take root and flourish. To move towards feedback for equity and empowerment is to recognise that writing is never just a technical act: it is a social, rhetorical and political one. In globalised academia, feedback must evolve to reflect that reality. By grounding our practices in the principles of global Englishes, we can begin to reshape feedback as a practice that uplifts, affirms and enables participation in a linguistically diverse world.
Future research might explore how feedback literacy develops longitudinally across programmes, how GenAI can be adapted for equity-oriented feedback, how rubric design can legitimise rhetorical variation and how writers’ identities and voices are shaped through dialogic feedback in multilingual contexts. Research into assessment reform and stakeholder beliefs about language standards could further illuminate how systemic change might be sustained.
Footnotes
Funding
The authors received no financial support for the research, authorship and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
