Abstract
E-scooters raise safety concerns, and an AI-driven system might help. Previous research suggests that AI can assist human users in three primary roles: Advisor, Co-pilot, or Guardian. Understanding how presenting these AI roles through various modalities (e.g., visual, auditory, tactile) affects their effectiveness is crucial. However, the effect of presenting AI roles through multiple modalities to e-scooter users remains unknown. Accordingly, this study examined user preferences for human-AI collaboration in e-scooter riding using a national survey. A total of 473 valid responses (mean age = 46.29) were collected. The results indicated no significant differences in preference among the three AI roles. The auditory modality was preferred over both the visual and tactile modalities. Within each modality type, road projection was the most favored visual presentation, an informative agent was preferred for the auditory modality, and the handlebar was preferred for the tactile modality. Overall, these findings can inform the development of future AI-driven micromobility systems.
