Abstract
The use of artificial intelligence (AI) in human resource analytics (HRA) has transformed how organisations handle talent acquisition, assess employee performance and manage their workforce. However, the pervasive challenge of embedded prejudices within AI algorithms poses significant ethical, operational and legal concerns. This article examines the challenges of implementing AI-driven HRA, highlighting how biases in training data, algorithm design and decision-making processes can reinforce systemic discrimination. It underscores the importance of transparency, accountability and inclusivity in AI systems to promote fair and equitable outcomes. Additionally, the study investigates strategies for mitigating biases and enhancing the reliability of AI in HR decision-making. By addressing these challenges, organisations can harness AI’s potential while fostering a fair and inclusive workplace environment.
