Abstract
Effective communication in human-autonomy teams is crucial for managing autonomy and automation malfunctions and failures. This study explores how error type (autonomy vs. automation errors) shapes how human team members communicate about the error, and examines how training interventions (control, trust calibration, or coordination training) influence these behaviors in text-based communication channels. Previously collected data from a Synthetic Task Environment were analyzed using logistic regression. Results indicated that human team members were less likely to communicate with other human team members during autonomy errors (when the synthetic teammate malfunctions) than during automation errors (when the operating system fails, such as display failures), regardless of training type. This suggests that human teammates tend to address autonomy errors directly with the synthetic teammate, which may delay error resolution if the agent cannot correct its own mistakes. These findings clarify how human teammates make decisions when managing unexpected automation and autonomy errors.
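To make the analysis concrete, the sketch below shows the shape of the logistic regression described in the abstract: a binary outcome (whether a human teammate communicates with another human) predicted from error type and a training-condition dummy. All data, effect sizes, and variable names here are simulated assumptions for illustration only, not the study's data; the fit uses plain stochastic gradient ascent rather than any particular statistics package.

```python
import math
import random

random.seed(0)

def simulate(n=400):
    """Simulate trials: outcome 1 = human-to-human message sent, 0 = not.
    Predictors: autonomy (1 = autonomy error, 0 = automation error)
    and a coordination-training dummy. The negative autonomy effect
    below is an assumed value that mirrors the reported direction."""
    rows = []
    for _ in range(n):
        autonomy = float(random.random() < 0.5)
        coord = float(random.random() < 1 / 3)
        logit = 0.5 - 1.2 * autonomy + 0.1 * coord   # assumed true model
        p = 1 / (1 + math.exp(-logit))
        x = (1.0, autonomy, coord)                   # intercept + predictors
        rows.append((x, 1 if random.random() < p else 0))
    return rows

def fit_logistic(rows, lr=0.05, epochs=2000):
    """Fit logistic regression by per-sample gradient ascent
    on the Bernoulli log-likelihood."""
    w = [0.0, 0.0, 0.0]  # intercept, autonomy, coordination training
    for _ in range(epochs):
        for x, y in rows:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1 / (1 + math.exp(-z))
            for i in range(len(w)):
                w[i] += lr * (y - p) * x[i]
    return w

w = fit_logistic(simulate())
# A negative autonomy coefficient corresponds to the reported finding:
# lower odds of human-to-human communication during autonomy errors.
print(w)
```

In this framing, the study's result corresponds to a reliably negative coefficient on the error-type predictor; the training-condition term would be tested for interaction or main effects in the full analysis.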
