Abstract
In human-machine systems, it is often assumed, implicitly or explicitly, that the human must be at the locus of control and that automation should always be subordinate to the human. Whether this assumption is appropriate can never be examined fully as long as the discussion remains qualitative. This paper shows that quantitative models (in particular, probability-theoretic models) serve as powerful tools for investigating whether decisions and/or actions should be automated. The following observations are derived with probabilistic models: (1) Decision and/or action functions may be traded between the human and automation dynamically, in a situation-adaptive manner. (2) There are mathematical conditions under which decisions and/or actions may be automated and human intervention may not be allowed. Typical examples include Go/NoGo decision-making and safety-control systems that must act on insufficient information under time-criticality. This paper argues that situation-adaptive autonomy based on quantitative models may serve as a guideline for implementing adaptive automation and for investigating the automation-invocation problem.
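The kind of probabilistic trade-off the abstract describes can be sketched in a few lines. The following is an illustrative toy model, not taken from the paper: all cost figures, probabilities, and function names are hypothetical assumptions. It compares the expected cost of a Go/NoGo decision made by the human versus by automation, where the human's decision incurs an additional penalty for reaction delay under time-criticality.

```python
# Illustrative toy model (not from the paper): expected-cost comparison
# for deciding who should hold Go/NoGo authority. All numbers and the
# cost structure are hypothetical assumptions for demonstration only.

def expected_cost(p_hazard, p_correct, cost_miss, cost_false_alarm):
    """Expected cost for an agent that judges the situation
    correctly with probability p_correct."""
    # Miss: hazard present, but the agent decides Go.
    miss = p_hazard * (1.0 - p_correct) * cost_miss
    # False alarm: no hazard, but the agent decides NoGo.
    false_alarm = (1.0 - p_hazard) * (1.0 - p_correct) * cost_false_alarm
    return miss + false_alarm

def choose_authority(p_hazard, p_human, p_auto, delay_penalty,
                     cost_miss=100.0, cost_false_alarm=1.0):
    """Situation-adaptive trading of authority: the human decision
    carries an extra delay_penalty under time-criticality."""
    human = expected_cost(p_hazard, p_human,
                          cost_miss, cost_false_alarm) + delay_penalty
    auto = expected_cost(p_hazard, p_auto,
                         cost_miss, cost_false_alarm)
    return "human" if human < auto else "automation"

# With ample time, the (more accurate) human retains authority.
print(choose_authority(0.05, p_human=0.99, p_auto=0.95, delay_penalty=0.0))
# Under time-criticality, the delay cost shifts authority to automation.
print(choose_authority(0.05, p_human=0.99, p_auto=0.95, delay_penalty=1.0))
```

Note how the same parameters yield different authority assignments as the time-criticality term changes, which is the sense in which the allocation is "situation-adaptive" rather than fixed.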
