Abstract
Aided-adversarial decision-making (AADM) refers to military command-and-control decisions made in environments where computerized aids are available to groups of co-located and distributed decision-makers, and where adversarial forces may tamper with or disrupt those aids. It is therefore necessary to understand the extent to which decision-makers rely on or use these decision-aid systems, and the factors affecting that reliance. Researchers have suggested that trust can affect how much people accept and rely on increasingly automated systems. Ongoing research and experimentation in the Center for Multi-source Information Fusion at the University at Buffalo has been addressing these concerns.