Abstract
Human-Automated Judgment Learning (HAJL) is a methodology for investigating human interaction with automated judgment systems. It captures the judgment processes of the human and the automated judge, features of the task environment, and the relationships between them. HAJL provides measures of conflict between the judges, compromise by the human judge, adaptation of the human judge to the automated one, and of how well the human judge understands the automated one. HAJL was empirically tested using a simplified air traffic conflict prediction task, in which two between-subjects manipulations were crossed to investigate HAJL's sensitivity to training and design interventions. Statistically significant differences were found: 1) males outperformed females in judgment performance before feedback from the automated judge was available, and the automated judge's subsequent output eliminated this difference; and 2) participants tended to compromise with the automated judge over time. HAJL also identified a trend whereby participants with higher judgment achievement predicted the automated judge's judgments better, yet believed their own judgments were closer to the automated judge's than they actually were.
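The measures named above (conflict, compromise, understanding, and judgment achievement) could in principle be operationalized as simple statistics over trial-by-trial judgments. The sketch below is purely illustrative and does not reproduce HAJL's published, lens-model-based definitions; the function names, inputs, and formulas are assumptions made for exposition.

```python
import numpy as np

def hajl_measures_sketch(human, automated, predicted_auto, criterion):
    """Illustrative (NOT HAJL's published) versions of the abstract's measures.

    human          -- the human judge's judgments, one per trial
    automated      -- the automated judge's judgments, one per trial
    predicted_auto -- the human's predictions of the automated judge's output
    criterion      -- the true task-environment value on each trial
    """
    human = np.asarray(human, dtype=float)
    automated = np.asarray(automated, dtype=float)
    predicted_auto = np.asarray(predicted_auto, dtype=float)
    criterion = np.asarray(criterion, dtype=float)

    # Conflict: mean disagreement between the two judges across trials.
    conflict = np.mean(np.abs(human - automated))

    # Achievement: how well the human judge tracks the criterion.
    achievement = np.corrcoef(human, criterion)[0, 1]

    # Understanding: how well the human predicts the automated judge.
    understanding = np.corrcoef(predicted_auto, automated)[0, 1]

    return {"conflict": conflict,
            "achievement": achievement,
            "understanding": understanding}

def compromise_sketch(initial, revised, automated):
    """Fraction of the initial human-automated gap closed after feedback
    (again a hypothetical operationalization, not the paper's)."""
    initial, revised, automated = (np.asarray(x, dtype=float)
                                   for x in (initial, revised, automated))
    gap = automated - initial        # distance to the automated judgment
    moved = revised - initial        # how far the human actually moved
    mask = gap != 0                  # ignore trials with no disagreement
    return float(np.mean(moved[mask] / gap[mask]))
```

Under these assumed definitions, a compromise value of 0 means the human ignored the automated judge's feedback, while 1 means full agreement after revision.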
