Abstract
Unchecked overreliance on AI systems in human-AI teaming (HAT) can erode critical human judgment and compromise accountability, especially in high-stakes decisions. This paper presents two integrated studies aimed at preserving balanced human agency in AI-assisted decision-making. First, it introduces the Human-AI-System Concordance (HASC) Matrix, a diagnostic framework for systematically identifying vulnerabilities to overreliance in human-AI collaboration (HAC) scenarios. Second, it empirically evaluates Cognitive Forcing Functions (CFFs), targeted interventions designed to mitigate overreliance and maintain calibrated trust, in a controlled experimental setting. Together, these studies deliver actionable insights for designing balanced HAC environments, highlighting the conditions under which overreliance occurs and demonstrating which cognitive interventions can effectively mitigate it, and how. This comprehensive approach advances understanding of how to design AI systems that better support effective human-centered decision-making.
Keywords
