Abstract
Humans increasingly use automated decision aids. However, environmental uncertainty means that automated advice can be incorrect, creating the potential for humans to act on incorrect advice or to disregard correct advice. We present a quantitative model of the cognitive process by which humans use automation when deciding whether aircraft would violate requirements for minimum separation. The model closely fitted the performance of 24 participants, who each made 2,400 conflict-detection decisions (conflict vs. nonconflict), either manually (with no assistance) or with the assistance of 90% reliable automation. When the decision aid was correct, conflict-detection accuracy improved, but when the decision aid was incorrect, accuracy and response time were impaired. The model indicated that participants integrated advice into their decision process by inhibiting evidence accumulation toward the task response that was incongruent with that advice, thereby ensuring that decisions could not be made solely on automated advice without first sampling information from the task environment.
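The inhibition mechanism described above can be illustrated with a minimal sketch. This is not the authors' fitted model; it is a hypothetical two-accumulator race in which advice scales down (inhibits) the drift rate of the accumulator for the advice-incongruent response, so a decision still requires evidence sampled from the environment. All parameter names and values here are illustrative assumptions.

```python
import random

def simulate_trial(advice=None, drift_conflict=0.30, drift_nonconflict=0.25,
                   inhibition=0.5, threshold=10.0, noise_sd=1.0,
                   max_steps=10_000, rng=None):
    """Race two noisy evidence accumulators to a threshold.

    advice: None (manual), "conflict", or "nonconflict". Advice does not
    inject evidence directly; it multiplies the drift rate of the
    advice-incongruent accumulator by `inhibition` (< 1), slowing it.
    Returns (response, number_of_steps). Purely illustrative parameters.
    """
    rng = rng or random.Random()
    # Inhibit accumulation toward the response incongruent with the advice.
    dc = drift_conflict * (inhibition if advice == "nonconflict" else 1.0)
    dn = drift_nonconflict * (inhibition if advice == "conflict" else 1.0)
    evidence_conflict = evidence_nonconflict = 0.0
    for step in range(1, max_steps + 1):
        evidence_conflict += dc + rng.gauss(0.0, noise_sd)
        evidence_nonconflict += dn + rng.gauss(0.0, noise_sd)
        if evidence_conflict >= threshold or evidence_nonconflict >= threshold:
            winner = ("conflict" if evidence_conflict >= evidence_nonconflict
                      else "nonconflict")
            return winner, step
    return None, max_steps  # no decision within the deadline
```

Under this assumption, advice biases which response tends to win and how quickly, but a response can never be emitted from the advice alone: both accumulators must still gather noisy environmental evidence before either reaches threshold.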
