Abstract
This practitioner case study demonstrates the efficacy of including images of human faces in automated-system displays to promote trust and transparency. Previous research found that commercial applications that include a face are perceived as more credible than those in which a face is absent. This paper hypothesizes that, similarly, including faces in the display of a security monitoring and automated access checkpoint system will garner higher perceived trust from the human monitoring it. The study compared three automated security management displays: one showing the face of the individual requesting access, one showing only names, and one showing only personnel titles. The results show that the display with faces yielded higher operator accuracy (p < .04) and higher subjective trust (p < .000163), and trended toward higher measured trust (p < .088). The study potentially has wider impact given the growth of artificial intelligence algorithms. According to researchers, an important aspect of improving operator performance with artificial intelligence algorithms is providing system transparency. Using faces in displays may enhance transparency and therefore lead to better human-machine performance with automated algorithms.
