Abstract
“Big Data” and data-mined inferences are affecting ever more areas of our lives, and concerns about their possible discriminatory effects are growing. Methods for discrimination-aware and fairness-aware data mining aim to keep decision processes supported by information technology free from unjust grounds. However, these formal approaches alone are not sufficient to solve the problem. In the present article, we describe why discrimination with data can, and typically does, arise through the combined effects of human and machine-based reasoning, and argue that addressing it requires a deeper understanding of the human side of decision-making with data mining. We report results from a large-scale human-subjects experiment that investigated such decision-making, analyzing the reasoning that participants gave during their task of assessing whether a loan request should or would be granted. From these findings, we derive data protection by design strategies for making decision-making discrimination-aware in an accountable way, grounding these requirements in the accountability principle of the European Union General Data Protection Regulation, and outline how their implementations can integrate algorithmic, behavioral, and user interface factors.
