Abstract
Vehicle motion control at the handling limits, including drifting control, involves highly nonlinear system dynamics and significant uncertainties. Conventional dynamic-model-based approaches often require extensive parameter tuning, since fixed parameters struggle to cope with dynamic environmental changes. To address this, a Model Predictive Control (MPC) framework with parameter self-adaptation via reinforcement learning (RL) is proposed. The RL agent autonomously adjusts the MPC controller parameters based on its learned experience, and is capable of online learning during closed-loop control. The framework is first validated on the high-fidelity simulation platform CarSim, showing that the algorithm achieves stable drifting under various conditions and effectively adapts to dynamic environmental changes. Real-vehicle tests are then conducted on a full-size B-class electric car equipped with rear-wheel-drive and steer-by-wire systems. To the best of our knowledge from the published literature, this is the first time an RL method has been successfully trained and deployed for a real-vehicle drifting task, outperforming the conventional MPC-based algorithm.