Abstract
High-consequence domains such as medical triage for critical casualties often involve decisions for which there is no single right answer, and on which expert decision-makers will disagree. These decisions thus pose a particular challenge for trustworthy and ethical AI in these domains, as simply training AI to make "the right decision" is not possible. In the current research, we offer a framework for aligning AI systems to the key, domain-specific attributes of decision-makers in order to produce trust from human experts. In Study 1, we investigate six possible attributes and find that the degree of alignment predicts trust ratings and delegation decisions in short vignette-based assessments. In Study 2, we extend these results to demonstrate that alignment predicts trust in decisions from two different AI systems across two distinct attributes. The results thus offer a dimensionality-reducing approach to AI trustworthiness in high-consequence domains.
