Abstract
The rapid growth of artificial intelligence (AI) has significant environmental impacts that are rarely communicated. This study leverages the concept of design friction to communicate AI’s environmental costs during user interaction with a system, and examines its effects on trust, perceived trust calibration, and responsible AI use. In an experiment (N = 171), cue-based friction indirectly enhanced trust-related outcomes through the transparency heuristic and perceived social responsibility, whereas action-based friction influenced both trust-related outcomes and responsible AI use through heightened cognitive elaboration but reduced perceptions of user agency. Implications for conceptualizing different forms of design friction and promoting responsible AI design are discussed.