Abstract
Complex traffic scenes pose great challenges to the road safety of automated vehicles (AVs). This paper integrates the decision-making, path-planning, and motion-control modules to ensure autonomous driving performance in high-speed cruising scenarios. First, to guarantee deep exploration in reinforcement learning, a bootstrapped deep Q-network (BDQN) is proposed to address the adaptive decision-making of AVs. An artificial potential field (APF) is introduced into the reward function to improve autonomous obstacle avoidance. Second, because quantifying the multiple performance requirements of AVs under high-speed cruising is complex, an inverse reinforcement learning (IRL) approach is employed to learn path-planning ability from skilled drivers, generating a reference path for executing lane changes. An embedded linear-quadratic regulator (LQR) tracking controller is further developed to output trajectory points that satisfy the vehicle safety constraints and ensure the feasibility of the reference path. By introducing the IRL and LQR modules into the training process of the BDQN, the path-planning and motion-control modules are coupled with decision-making. Finally, simulation results demonstrate that the proposed framework guarantees high-speed cruising performance.
