Abstract
Artificial intelligence (AI)–driven assistive technologies are increasingly promoted as tools of accessibility, autonomy, and empowerment for disabled people. Computer vision applications, speech recognition systems, and AI-mediated platforms are widely framed as neutral innovations designed to reduce barriers and enhance independent participation. However, critical scholarship has shown that AI systems are deeply embedded in social, political, and economic power relations, often reproducing normative assumptions about bodies, senses, and communication. Despite growing debates on algorithmic bias, disability remains underexamined within mainstream AI ethics and fairness research. This study investigates how disability is operationalized in AI-based assistive technologies through a qualitative comparative analysis of four widely used systems: Microsoft Seeing AI, Be My Eyes (AI image description), Google Lookout, and Google Live Transcribe. Rather than assessing technical performance, the analysis examines official user manuals, help center documentation, interface structures, and promotional materials as empirical sites where normativity is produced and communicated. Drawing on Critical Disability Studies and Science and Technology Studies, the study analyzes how these technologies frame perception, communication, independence, and assistance. The findings demonstrate that assistive AI systems consistently conceptualize accessibility as a process of translating disabled bodies and senses into formats legible to dominant technological norms. User instructions and interface logics privilege visual, auditory, and linguistic standardization, while marginalizing multi-channel communication, interdependence, and alternative sensory practices. 
This paper introduces a methodological intervention by foregrounding documentation and interface instructions as sites of algorithmic ableism, arguing that accessibility is shaped not only by data and models, but also by everyday design discourse. The study concludes by calling for participatory, relational, and politically grounded approaches to assistive AI design.
