Abstract
A recurring worry in recent applied research on the role of humans in highly reliable automated systems has been the fear of "complacency", or the tendency to trust automation too much, with the consequence that faults or abnormal function go undetected. Existing evidence does not support the conclusion that operators are complacent. Rather, it supports the notion that in any complex dynamic system even an operator who is well calibrated with respect to the probability of faults, who shows eutactic behaviour, and who behaves optimally cannot be expected to detect all faults. The question of what strategy should be adopted when monitoring a highly reliable system is discussed.