Abstract
With the continuous expansion of urban metro networks, train delays have become a significant factor affecting metro operational efficiency and passenger travel experience. This paper investigates operational adjustment strategies for metro trains under delay conditions and proposes an improved deep reinforcement learning algorithm, the D3QN (double dueling deep Q-network) algorithm, to tackle the real-time adjustment of metro train schedules when delays occur. The D3QN algorithm enhances practical applicability by refining the selection, simplification, and rationalization of training instances, and it models the interaction relationships among trains more effectively. It combines the double deep Q-network (DDQN) and dueling deep Q-network (Dueling DQN) techniques to improve stability, computational speed, and convergence. A case study on Shenzhen Metro Line 14 demonstrates the efficiency and applicability of the proposed algorithm. Results show that the D3QN algorithm reduces the average total delay of train operations by approximately 34.15% compared with traditional methods (no adjustment, manual adjustment, and the first-in-first-out method). Furthermore, it outperforms both the DQN and DDQN algorithms, reducing delays by approximately 4.1%, cutting convergence iterations by approximately 24.5%, and increasing the convergence reward by 14.62%. The state-variable design of the D3QN demonstrates its versatility and provides an effective solution for real-time metro train schedule optimization under delay conditions.
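To make the D3QN construction concrete, the sketch below illustrates (in generic form, not the authors' implementation) the two components the abstract names: a dueling Q-network head, where Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, a), and the double-DQN target, where the online network selects the next action and the target network evaluates it. The state dimension, action count, and hidden width are hypothetical placeholders; the paper's actual state variables and network architecture are described in the full text.

```python
# Minimal sketch of the two ingredients combined in D3QN
# (double DQN target + dueling architecture); dimensions are
# illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward: torch.Tensor, next_state: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: the online net picks the greedy next action,
    the target net evaluates it, which reduces Q-value overestimation."""
    with torch.no_grad():
        best_actions = online(next_state).argmax(dim=1, keepdim=True)
        next_q = target(next_state).gather(1, best_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```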
