Abstract
Operators generalize their trust across all of the autonomous agents they work with, a phenomenon referred to as System Wide Trust (SWT). As a result, the failure of one aid can cause a trust decrement in, and therefore disuse of, all other competent aids within the system. This study explored two possible SWT mitigation strategies: competence transparency and distinct aid appearance. Previous research has shown that transparency and feedback affect trust calibration in systems (Walliser et al., 2016); our appearance manipulation is relatively novel, using gestalt principles of grouping to explore whether heterogeneous-looking aids are more easily differentiated than homogeneous ones. Participants supervised four UAVs that identified targets as enemies or friendlies. Only one of these UAVs was inaccurate (70% recommendation accuracy), which caused an SWT trust decrement for all UAVs. We expected the heterogeneous UAVs with competence transparency to suffer the smallest SWT effect, yet the results showed no difference between conditions. These findings suggest that the System Wide Trust effect is too strong to be affected by our manipulations and that further research on mitigation strategies is required.