Abstract
Subjective measurement of trust in automation is used prolifically because of its high availability and ease of implementation, and it remains close to the state of the art. Yet it is also easy to invalidate a validated survey measure by altering its components, such as changing the wording, altering the number of items, changing the response scale, or translating it into a new language. We conducted an exhaustive review of the Jian et al. trust in automated systems scale as it was used across 1,480 citing works. More than half of these works altered the scale in ways that, on average, lowered its internal validity, and increasing the number of modifications further reduced internal validity. We provide guidance for the future use of subjective scales in trust measurement, with the hope of encouraging adherence to best practices and enabling the continued use of subjective trust measurement in human-centered AI system development efforts.
