Abstract
As automated systems enter new environments, some involving high-risk decision making, it is critical to understand the situations in which people will or will not rely on the recommendations of automated decision aids. Theory holds that, in deciding whether to trust automation, people weigh perceived consequence: the cost associated with inappropriate action or inaction against the psychological cost of verifying the aid. This study will address the effect of perceived consequence on attitudes and behavior toward decision aids by exposing participants to different levels of consequence, manipulated through the cost of making a mistake and the cost of verifying the aid. It is expected that as the cost of making a mistake increases and the cost of verifying the automation decreases, trust in and reliance on the decision aid will decrease.
