Abstract
Operations automation is a computer-based system that attempts to encode operator expertise, typically the expertise of personnel in charge of complex dynamic systems. The use of automation to replace operators is becoming a serious option in many systems, some of which are safety critical. One application of operations automation is ‘lights-out’ automation: systems in which no operators ‘tend’ the system, and, if an error occurs, the system remotely pages an operator to evaluate the problem. This paper is predicated on the assumption that even the best-designed automation requires human interaction, since, for the foreseeable future, all automation will inevitably fail at some point. Automated systems that attempt to emulate human expertise are likely to fail even more often, as building robust systems of this type remains a challenging research issue. Thus, identifying and addressing design issues for operations automation is critical to ensuring safe and productive systems. These issues include eliciting and encoding robust expert knowledge, as well as supporting human-automation interaction both with operations personnel when the system reaches its inevitable limits, and with the engineers and software designers who must ensure that the system does not fail the same way more than once. This paper describes two field studies of operations automation fielded at NASA for satellite control. The purpose is to understand how and why such automation fails, and to identify the design issues that facilitate or degrade human-automation interaction.
