Abstract
This pilot study evaluates three large language models (Claude, ChatGPT, and Gemini) in parole decision-making by comparing their recommendations against human judicial decisions. Analyzing 150 cases spanning sexual, drug-related, and violent offenses, the study finds alignment rates of 56%–68% between artificial intelligence (AI)-generated and human decisions. The findings reveal a consistent rehabilitative bias in AI recommendations, with statistical analyses indicating robust patterns across offense categories. This research establishes an empirical foundation for understanding AI's potential as a decision-support tool in parole processes while preserving essential human judgment. Although the study focuses on Israel, the findings offer a preliminary basis for considering similar AI integration in other jurisdictions. These results underscore the importance of developing clear criminal justice policies and ethical frameworks to guide the responsible use of AI in parole decisions, ensuring that technological tools support rather than replace human judgment and enhance public trust in the system.
