Abstract
This article critically examines algorithmic bias and discrimination in the Indian context, focusing on the intersection of rapid artificial intelligence (AI) adoption and existing societal inequities rooted in historical injustices. It explores how AI systems developed without adequate deliberation and representative input risk perpetuating biases related to caste, religion, gender and socio-economic status, posing significant dangers to marginalised communities. Using a multi-theoretical framework, including algorithmic bias theory, sociotechnical systems theory and feminist theories of structural injustice, the study highlights AI’s entanglement with societal norms and power dynamics. The qualitative research involved in-depth interviews with five experts from academia, law, science and policy to examine AI’s sociotechnical landscape in India. Key findings include the underrepresentation of marginalised communities in AI datasets, the lack of ethical AI guidelines, risks of surveillance misuse and limited awareness of AI ethics among developers and policymakers. A novel insight is the danger of addressing AI bias only reactively, as this could entrench further harm. The study advocates public policy solutions that promote data equity, foster responsibility among AI developers, establish independent oversight and encourage public discourse on AI ethics. Prioritising equity and ethical considerations is crucial to harnessing AI’s potential while protecting the rights and dignity of all citizens.
