Abstract
Increasing automation transparency is theorized to improve human-automation interaction. For example, delivering uncertainty communication, i.e., alerting the human in or on the information loop that an automated system may falter, should improve the interaction. However, this benefit is not always observed. Scan patterns, multitasking performance, and/or the human's cognitive abilities may account for this divergence. Four hundred ninety-two Naval Aviation trainees corrected errors of three automated systems of varying reliability and responded to chat messages in a supervisory control simulation. A subset of participants received uncertainty communication about the least reliable system, while all other participants received generic information. Stepwise regression models suggested that individual differences in attention control predicted automation error correction rates when no uncertainty communication was given. When it was given, however, chat message performance and the amount of visual attention allocated to certain automated systems predicted error correction rates. Clearly, there is nuance to the efficacy of automation transparency.
