Abstract
The emergence of digital platforms provides new opportunities to make sense of, and confront, issues central to language teaching and learning. In recent years, for example, technological advancements have allowed learners to receive automated, AI-generated feedback on their spoken language. Such technologies offer exciting opportunities to transform the ways in which learners understand and develop language proficiency. However, AI programming may rely on outdated or problematic understandings of the “native speaker,” an empirical question that merits further attention. To this end, the current study examines how learners respond to the language evaluations provided by ELSA, a popular app that offers automated pronunciation feedback. The results, based on a topic modeling analysis of 51,143 user reviews alongside diary entries, suggest that AI disempowers learners by providing numerical scores grounded in region-specific accents and pronunciation features, thereby oversimplifying the challenges of developing language proficiency. These findings demonstrate that, despite widespread claims of AI objectivity, ELSA's evaluations (re)produce native speakerism and racialized language ideologies as understood by its users. ELSA's evaluations illustrate just one instance of a broader phenomenon: for-profit digital platforms using AI to lend a veneer of objectivity and legitimacy to outdated understandings of language proficiency.