Abstract
Automated procedural and decision aids may in some cases have the paradoxical effect of increasing errors rather than eliminating them. Results of recent research investigating the use of automated systems have indicated the presence of automation bias, a term describing errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing (Mosier & Skitka, in press). This tendency can produce automation commission errors, i.e., errors made when decision makers take inappropriate action because they over-attend to automated information or directives, and automation omission errors, i.e., errors made when decision makers fail to take appropriate action because automated aids do not inform them of an imminent problem or situation.
A wide body of social psychological research has found that many cognitive biases and their resultant errors can be ameliorated by imposing pre-decisional accountability, which sensitizes decision makers to the need to construct compelling justifications for their choices and for how they make them. The extent to which these effects generalize to performance situations has yet to be empirically established. The two studies presented here represent concurrent efforts, with student and “glass cockpit” pilot samples, to determine the effects of accountability pressures on automation bias and on verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves to be “accountable” for their strategies of interaction with the automation were significantly more likely to verify its correct functioning, and committed significantly fewer automation-related errors, than those who did not report this perception.
