Abstract
In the era of Big Data, the development of artificial intelligence (AI) systems presents both opportunities and challenges, particularly concerning privacy and fairness. While differential privacy (DP) has emerged as a robust methodology for preserving privacy in real-world applications, its local variant (LDP) specifically addresses trust issues by removing the need for a trusted central server. Equally critically, fairness audits of AI systems help identify and mitigate discriminatory outcomes in machine learning. Given that the relationship between DP and fairness is inherently multifaceted, this paper offers a detailed empirical examination of how collecting multi-dimensional sensitive attributes under LDP affects fairness in binary classification tasks. Our findings reveal that LDP can slightly improve fairness without substantially degrading model performance, challenging the notion that DP necessarily exacerbates unfairness. We demonstrate these results by evaluating seven state-of-the-art LDP protocols on three benchmark datasets, using established group fairness metrics. Moreover, we propose a novel privacy budget allocation scheme that accounts for the varying domain sizes of sensitive attributes, achieving a superior privacy–utility–fairness trade-off over existing solutions.
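To make the setting concrete, the sketch below shows Generalized Randomized Response (GRR), a standard LDP protocol of the kind evaluated here, together with one hypothetical way a total budget ε could be split across attributes with different domain sizes. The log-weighted split in `split_budget` is an illustrative assumption for this sketch, not the allocation scheme proposed in the paper.

```python
import math
import random

def split_budget(epsilon_total, domain_sizes):
    """Split a total LDP budget across several sensitive attributes.

    Hypothetical heuristic for illustration only: each attribute gets a
    share of epsilon proportional to log(domain size), so attributes with
    larger domains (harder to estimate accurately) receive more budget.
    """
    weights = [math.log(d) for d in domain_sizes]
    total = sum(weights)
    return [epsilon_total * w / total for w in weights]

def grr_perturb(value, domain, epsilon):
    """Generalized Randomized Response (GRR).

    Reports the true value with probability p = e^eps / (e^eps + d - 1),
    and otherwise a uniformly random *other* value from the domain.
    """
    d = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

# Example: two sensitive attributes with domain sizes 2 and 8
# sharing a total budget of epsilon = 4.
domains = [["F", "M"], [f"group_{i}" for i in range(8)]]
budgets = split_budget(4.0, [len(d) for d in domains])
record = ["F", "group_3"]
noisy = [grr_perturb(v, dom, eps)
         for v, dom, eps in zip(record, domains, budgets)]
print(budgets, noisy)
```

By sequential composition, perturbing each attribute with its own share ε_i keeps the overall per-user report ε-LDP for ε = Σ ε_i, which is why how the total budget is divided matters for the resulting utility and fairness.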
