Abstract
Clear explanations about automated vehicle (AV) decisions are critical for enhancing user understanding and reducing uncertainty. However, how users perceive different AV explanations and how they would improve them remains underexplored. Through qualitative interviews, this study explored user responses to four types of AV explanations: no explanation, action explanations (“what”), reasoning explanations (“why”), and combined action and reasoning (“what and why”). Participants highlighted evident differences: the absence of explanations caused anxiety and confusion, while explanations offering reasons or actions alone had strengths and weaknesses. Combined explanations generally provided the best balance by enhancing predictability and transparency, though they occasionally risked information overload. Participants also suggested practical improvements for AV explanations, emphasizing the inclusion of visual cues, clear descriptions of consequences, and delivery through natural conversational speech. These insights underscore the importance of adaptable explanation designs tailored to diverse user preferences and contexts. This research provides user-driven recommendations for designing effective AV explanations, enhancing transparency, and strengthening public trust in automated driving technologies.
