Abstract
Artificial intelligence (AI) is increasingly used in criminal justice systems to support decisions on bail, sentencing, and parole. However, these systems often perpetuate historical biases, particularly racial disparities embedded in their training data, and many existing models lack effective mechanisms for verifying fairness and mitigating bias, producing unequal outcomes for legally protected groups. This research develops a structured machine learning (ML) framework that integrates Fairness Verification Algorithms with mitigation strategies to enhance fairness in criminal risk assessment. The framework employs Extreme Gradient Boosted K-Means Clustering (XGBoost-KMC), trained exclusively on data from White offenders to minimize bias during model development. The dataset comprises 200,000 arraignment records from a metropolitan court system, including demographic and criminal-history variables. Pre-processing removes sensitive attributes such as race and zip code, applies Z-score normalization, and addresses class imbalance with the Synthetic Minority Over-sampling Technique (SMOTE). Principal Component Analysis (PCA) is then used for feature extraction, reducing dimensionality while preserving key predictive information. To mitigate bias, the model applies optimal transport to align the feature distribution of Black offenders with that of White offenders, and it produces calibrated, uncertainty-aware risk forecasts using conformal prediction sets. Fairness Verification Algorithms assess the model's outputs with statistical metrics such as prediction parity and classification parity across demographic groups. Under 5-fold cross-validation, the model achieves an average accuracy above 93%. The results demonstrate that the proposed framework achieves a Pareto improvement, enhancing fairness for disadvantaged groups without compromising predictive accuracy.
This approach provides a scalable and transparent solution for building equitable AI systems in criminal justice applications.
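As a rough illustration of the pre-processing pipeline described above (Z-score normalization, SMOTE oversampling, and PCA), the following Python sketch applies the same three steps to synthetic data. The data, the minimal interpolation-based SMOTE, and the 95% variance threshold are all stand-ins for exposition, not the authors' code, dataset, or settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for arraignment features (the real dataset is not public):
# 1000 records, 6 criminal-history features, imbalanced binary label.
X = rng.normal(size=(1000, 6))
y = (rng.random(1000) < 0.15).astype(int)  # ~15% positive (minority) class

# 1) Z-score normalization.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# 2) Minimal SMOTE-style oversampling: synthesize minority points by
#    interpolating between a minority sample and one of its k nearest
#    minority-class neighbors.
def smote(X_min, n_new, k=5, rng=rng):
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest minority neighbors
    idx = rng.integers(0, len(X_min), n_new)   # base samples
    nbr = nn[idx, rng.integers(0, k, n_new)]   # random neighbor per base
    lam = rng.random((n_new, 1))               # interpolation weights
    return X_min[idx] + lam * (X_min[nbr] - X_min[idx])

X_min = X[y == 1]
n_new = (y == 0).sum() - (y == 1).sum()
X_bal = np.vstack([X, smote(X_min, n_new)])
y_bal = np.concatenate([y, np.ones(n_new, dtype=int)])

# 3) PCA via SVD, keeping enough components for 95% of the variance.
Xc = X_bal - X_bal.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var = S**2 / (S**2).sum()
k = int(np.searchsorted(np.cumsum(var), 0.95)) + 1
X_pca = Xc @ Vt[:k].T

print(X_bal.shape, y_bal.mean(), X_pca.shape)
```

After oversampling, the two classes are exactly balanced, and the PCA projection retains only as many components as the variance threshold requires.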
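The optimal-transport alignment step can be sketched most simply in one dimension, where the optimal monotone map reduces to quantile matching: each point in the source group is mapped to the reference group's value at the same empirical quantile. The two Gaussian group distributions below are hypothetical stand-ins for a single feature, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D feature distributions for two demographic groups (hypothetical).
src = rng.normal(loc=1.0, scale=2.0, size=5000)   # group to be aligned
ref = rng.normal(loc=0.0, scale=1.0, size=5000)   # reference group

def ot_align_1d(src, ref):
    """Monotone optimal-transport map in 1-D: send each source point to the
    reference value at the same empirical (mid-rank) quantile."""
    ranks = np.argsort(np.argsort(src))            # 0..n-1 rank of each point
    q = (ranks + 0.5) / len(src)                   # mid-rank quantiles
    return np.quantile(ref, q)

aligned = ot_align_1d(src, ref)
print(round(aligned.mean(), 2), round(aligned.std(), 2))
```

Because the map is monotone, the within-group ordering of offenders is preserved while the aligned distribution closely matches the reference group's mean and spread. In higher dimensions the same idea requires a full optimal-transport solver rather than per-feature quantile matching.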
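The calibrated, uncertainty-aware forecasts can be illustrated with split conformal prediction: a held-out calibration set fixes a nonconformity threshold so that the resulting prediction sets cover the true label at the target rate. The sigmoid "risk model" and all data below are synthetic stand-ins for the trained classifier's probabilities, and the abstract does not specify the authors' exact conformal procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: a toy risk model p(y=1|x) = sigmoid(2x)
# stands in for the trained classifier's predicted probabilities.
n_cal = 2000
x_cal = rng.normal(size=n_cal)
p_cal = 1 / (1 + np.exp(-2 * x_cal))
y_cal = (rng.random(n_cal) < p_cal).astype(int)

# Nonconformity score: 1 minus the model probability of the true class.
probs = np.column_stack([1 - p_cal, p_cal])
scores = 1 - probs[np.arange(n_cal), y_cal]

# Threshold at the ceil((n+1)(1-alpha))-th smallest score (90% target coverage).
alpha = 0.1
q = np.sort(scores)[int(np.ceil((n_cal + 1) * (1 - alpha))) - 1]

def prediction_set(p1):
    """All classes whose nonconformity score falls within the threshold."""
    return [c for c, p in enumerate([1 - p1, p1]) if 1 - p <= q]

# Empirical coverage on fresh test data.
x_te = rng.normal(size=2000)
p_te = 1 / (1 + np.exp(-2 * x_te))
y_te = (rng.random(2000) < p_te).astype(int)
cover = np.mean([y in prediction_set(p) for y, p in zip(y_te, p_te)])
print(round(cover, 3))
```

Uncertain cases yield the two-element set {0, 1} rather than a forced single label, which is how conformal sets express uncertainty in a risk forecast.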
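The two fairness metrics named above can be made concrete: classification parity compares positive-prediction rates across groups, while prediction parity compares precision (PPV) across groups. The following sketch computes both gaps on a tiny hypothetical example (all values invented for illustration):

```python
import numpy as np

def classification_parity_gap(y_pred, group):
    """Largest gap in positive-prediction rates across groups
    (classification / statistical parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def prediction_parity_gap(y_true, y_pred, group):
    """Largest gap in precision (PPV) across groups (prediction parity)."""
    ppvs = []
    for g in np.unique(group):
        m = (group == g) & (y_pred == 1)
        ppvs.append(y_true[m].mean())
    return max(ppvs) - min(ppvs)

# Tiny worked example with two groups (hypothetical labels and predictions).
y_true = np.array([1, 0, 1, 1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(classification_parity_gap(y_pred, group))      # 0.75 - 0.50 = 0.25
print(prediction_parity_gap(y_true, y_pred, group))  # 1.0 - 2/3 ≈ 0.333
```

A verification pass would flag the model when either gap exceeds a chosen tolerance; a gap of zero on both metrics corresponds to parity across the demographic groups.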
