Abstract
The advancement of Connected and Autonomous Vehicle (CAV) technology has made complex cooperation among multiple CAVs feasible, especially at non-signalized intersections. However, challenges remain in mixed-autonomy traffic scenarios, where collaboration among CAVs and their interactions with human-driven vehicles (HDVs) must be resolved simultaneously. Accordingly, this study proposes a new multi-agent reinforcement learning algorithm, SG-QMIX, which extends the original QMIX framework with a sequential game theory-based action filter module. This module combines the First-Come-First-Served (FCFS) strategy with a sequential game and its derived equilibrium to mask out unavailable actions: the FCFS strategy simplifies the cooperation of multiple CAVs before they enter the intersection, while the equilibrium of the designed sequential game guides the CAVs in learning interactive strategies with other participants at the junction. Experimental results show that SG-QMIX outperforms existing approaches in episodic return, episodic length, and success rate for cooperative decision-making at an isolated signal-free intersection. Additionally, the trained policy reduces average fuel consumption, maintains traffic efficiency, and enables CAVs to interact with surrounding HDVs rationally and altruistically. Furthermore, we validate the zero-shot transferability of SG-QMIX to an unseen scenario and its applicability to more complex intersections.
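The action-filter idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; the helper names (`fcfs_priority`, `filter_actions`) and the toy Q-values and feasibility mask are hypothetical, assuming only that infeasible actions are masked out before greedy selection over an agent's Q-values.

```python
import numpy as np

def fcfs_priority(arrival_times):
    # Hypothetical helper: rank vehicles by arrival time at the
    # intersection (earlier arrival = higher crossing priority),
    # mirroring the First-Come-First-Served strategy.
    return np.argsort(arrival_times)

def filter_actions(q_values, feasible_mask):
    # Mask out unavailable actions, as an action-filter module might:
    # infeasible actions receive -inf so they are never selected.
    masked = np.where(feasible_mask, q_values, -np.inf)
    return masked, int(np.argmax(masked))

# Toy example: 4 candidate actions; actions 1 and 3 are assumed to be
# ruled out (e.g., by the sequential-game equilibrium).
q = np.array([0.2, 0.9, 0.5, 1.1])
mask = np.array([True, False, True, False])
_, best = filter_actions(q, mask)
# best == 2: the highest-Q action among the feasible ones
```

In a QMIX-style setup, such a filter would be applied per agent before action selection, so learning is restricted to the feasible action set.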
