Abstract
Engineering more trustworthy AI is increasingly important as AI is being applied to more high-consequence work. We argue that trust in AI is often examined too narrowly, and that there is value in applying more naturalistic methods to examine trust in the context of the entire sociotechnical system (i.e., people, technology, and work). We present a case study where we interviewed submariners, and describe how interactions among the people, technology, and work in submarine operations can affect trust in AI. We also discuss the practical benefits of trust as a social contract between end users and developers of AI.