Abstract
This paper presents a control strategy for leader-follower coordination of unmanned tracked vehicles (UTVs) that combines active disturbance rejection control (ADRC) with the deep deterministic policy gradient (DDPG) reinforcement learning algorithm. In the proposed framework, DDPG adaptively tunes the ADRC parameters online, enabling robust leader-following performance under challenging conditions such as track slippage and high-frequency measurement noise. Simulation studies on a laboratory vehicle model with varying leader velocities validate the effectiveness of the method. Compared to conventional fixed-parameter ADRC, the adaptive ADRC–DDPG controller achieves substantial performance gains, reducing the integral absolute error by up to 62%, the integral time absolute error by up to 63%, and the integral time square error by up to 88%. These results highlight the potential of the proposed approach to enhance UTV autonomy and adaptability in dynamic environments, representing a promising step toward advanced adaptive control for autonomous ground vehicles.
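To illustrate the core idea of the abstract, the sketch below shows how a DDPG-style actor can retune ADRC parameters online. This is not the paper's implementation: the plant, the disturbance, the parameter ranges, and the `actor` function (a random linear stand-in for a trained DDPG actor network) are all illustrative assumptions. It uses a standard first-order linear ADRC with a bandwidth-parameterized extended state observer (ESO), whose observer and controller bandwidths are updated each step from the tracking state.

```python
import numpy as np

class LinearADRC:
    """First-order linear ADRC: an ESO estimates the plant state and the
    lumped 'total disturbance', which the control law cancels."""
    def __init__(self, wo, wc, b0, dt):
        self.wo, self.wc, self.b0, self.dt = wo, wc, b0, dt
        self.z = np.zeros(2)  # z[0]: output estimate, z[1]: disturbance estimate

    def control(self, r):
        # Proportional control on the estimated output, minus the
        # estimated disturbance (disturbance rejection).
        u0 = self.wc * (r - self.z[0])
        return (u0 - self.z[1]) / self.b0

    def update(self, y, u):
        # Extended state observer with bandwidth-parameterized gains.
        l1, l2 = 2.0 * self.wo, self.wo ** 2
        e = y - self.z[0]
        self.z[0] += self.dt * (self.z[1] + self.b0 * u + l1 * e)
        self.z[1] += self.dt * (l2 * e)

def actor(state, w):
    # Hypothetical DDPG actor: maps the tracking state to ADRC bandwidths.
    # A trained network would replace this fixed random linear map; the
    # tanh squashing and the [10, 50] / [2, 20] ranges are assumptions.
    raw = np.tanh(w @ state)
    wo = 10.0 + 40.0 * (raw[0] + 1.0) / 2.0   # observer bandwidth
    wc = 2.0 + 18.0 * (raw[1] + 1.0) / 2.0    # controller bandwidth
    return wo, wc

# Toy leader-following loop: the follower tracks a constant leader
# velocity r under a sinusoidal slip-like disturbance.
dt, r, y, u = 0.01, 1.0, 0.0, 0.0
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 3))                    # stand-in actor weights
ctrl = LinearADRC(wo=20.0, wc=5.0, b0=1.0, dt=dt)
for k in range(2000):
    state = np.array([r - y, y, 1.0])          # error, output, bias
    ctrl.wo, ctrl.wc = actor(state, w)         # adaptive retuning step
    u = ctrl.control(r)
    y += dt * (-y + u + 0.3 * np.sin(0.02 * k))  # simple first-order plant
    ctrl.update(y, u)
```

Even with untrained (random but bounded) actor weights, the ADRC loop drives the tracking error small; DDPG training would then shape the bandwidth schedule to minimize error criteria such as the IAE, ITAE, and ITSE reported in the abstract.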