Abstract
The central question of this paper is how to enhance a given set of supervised learning algorithms to meet fairness requirements, ensuring that a sensitive variable does not "unfairly" influence the outcome. To this end, we work with several notions of fairness (Demographic Parity, Equalized Odds, Lack of Disparate Mistreatment), possibly extended to concepts of conditional fairness. For any given notion of fairness, we linearly combine an ensemble of binary and/or continuous basis classifiers or regressors to build an "approximately optimal" solution in terms of both fairness and accuracy. The trade-off between fairness and predictive power is managed through penalised criteria. We rely on post-processing procedures that require no transformation of the data or of the basis training algorithms. Empirical experiments, on both simulated and real data sets, illustrate our approach.
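To make the penalised post-processing idea concrete, the following is a minimal sketch (not the paper's exact algorithm; all function names and the grid-search strategy are illustrative assumptions). It post-processes two pre-trained basis scorers by picking a convex-combination weight that minimises an empirical error penalised by the Demographic Parity gap, without retraining the basis models:

```python
# Hedged sketch: combine two fixed basis scorers to trade off accuracy
# against Demographic Parity, via the penalised criterion
#   error_rate + lam * |P(yhat=1 | s=0) - P(yhat=1 | s=1)|

def dp_gap(preds, sens):
    """Absolute difference in positive-prediction rates across the two
    groups defined by the binary sensitive attribute `sens`."""
    g0 = [p for p, s in zip(preds, sens) if s == 0]
    g1 = [p for p, s in zip(preds, sens) if s == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

def penalised_combination(score_a, score_b, y, sens, lam=1.0, steps=101):
    """Grid-search the mixing weight alpha of two basis scorers,
    minimising classification error plus lam times the DP gap."""
    best = None
    for k in range(steps):
        alpha = k / (steps - 1)
        preds = [1 if alpha * a + (1 - alpha) * b >= 0.5 else 0
                 for a, b in zip(score_a, score_b)]
        err = sum(p != t for p, t in zip(preds, y)) / len(y)
        obj = err + lam * dp_gap(preds, sens)
        if best is None or obj < best[0]:
            best = (obj, alpha, preds)
    return best  # (penalised objective, chosen weight, adjusted predictions)
```

Larger values of `lam` push the selected combination toward fairness at the expense of accuracy; the paper's setting generalises this to arbitrary ensembles of basis predictors and to other fairness notions.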
