Abstract
Magnetorheological (MR) dampers are a highly promising alternative to traditional passive dampers, enabling semi-active control of landing gear systems (LGS) by dynamically adjusting the damping force through current modulation, with the aim of improving vibration attenuation and energy absorption capacity. However, the complexity of landing conditions, such as varying runway surface characteristics, aircraft loading, and landing velocity, places stringent demands on controller robustness. To address this challenge, a hybrid intelligent control framework is developed that integrates Linear Active Disturbance Rejection Control (LADRC) with adaptive deep reinforcement learning (DRL). The LADRC module employs an Extended State Observer (ESO) to compensate for parameter uncertainties and unmodeled dynamics, thereby maintaining control performance under complex operating conditions. To overcome the limitations of fixed-gain LADRC, a dynamic gain-scheduling mechanism based on the Deep Deterministic Policy Gradient (DDPG) algorithm is introduced, enabling real-time optimization of the disturbance rejection parameters through feedback adaptation. To remedy the deficiencies of conventional DDPG, namely inadequate state representation, policy instability, and delayed rewards, an improved cross-temporal information enhancement method is proposed: it augments the state space by converting implicit dynamics into explicit time-series representations, and combines an action-holding strategy with a discounted cumulative reward mechanism to establish an immediate mapping between actions and rewards. Experimental results demonstrate that, compared with the conventional DDPG algorithm, the proposed method converges 42% faster. In generalization tests across multiple landing environments, the energy absorption efficiency and impact attenuation coefficient improve by 22% and 28%, respectively, validating the effectiveness of the proposed multi-timescale dynamic optimization framework.
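To make the architecture summarized above concrete, the sketch below shows a minimal second-order LADRC loop with a bandwidth-parameterized ESO and a hook where a DDPG agent could retune the gains online. This is illustrative only: the abstract gives no implementation details, so the plant order, the parameter names (omega_c, omega_o, b0), and the set_gains interface are assumptions following standard LADRC notation, not the paper's actual code.

```python
import numpy as np

class LADRC:
    """Second-order linear ADRC with an extended state observer (ESO).

    Minimal sketch under assumed standard LADRC notation; the paper's
    plant model, gain values, and DDPG interface are not given in the
    abstract.
    """

    def __init__(self, omega_c, omega_o, b0, dt):
        self.omega_c = omega_c   # controller bandwidth
        self.omega_o = omega_o   # observer bandwidth
        self.b0 = b0             # nominal input gain of the plant
        self.dt = dt
        self.z = np.zeros(3)     # ESO states: [y_hat, ydot_hat, disturbance_hat]

    def set_gains(self, omega_c, omega_o):
        # Hypothetical hook where the DDPG gain-scheduling action would land.
        self.omega_c, self.omega_o = omega_c, omega_o

    def step(self, y, u_prev, r):
        """One control update: forward-Euler ESO, then a PD law on estimates."""
        wo, wc = self.omega_o, self.omega_c
        b1, b2, b3 = 3 * wo, 3 * wo**2, wo**3   # bandwidth-parameterized observer gains
        e = y - self.z[0]
        z1, z2, z3 = self.z
        # Linear ESO for the plant y'' = f + b0*u, where f lumps all disturbances.
        self.z[0] = z1 + self.dt * (z2 + b1 * e)
        self.z[1] = z2 + self.dt * (z3 + b2 * e + self.b0 * u_prev)
        self.z[2] = z3 + self.dt * (b3 * e)
        # PD control on the observer states, canceling the estimated disturbance.
        u0 = wc**2 * (r - self.z[0]) - 2 * wc * self.z[1]
        return (u0 - self.z[2]) / self.b0
```

In the framework described, the DDPG policy would observe the landing-gear response, call set_gains at each scheduling interval, and hold that action over several control steps, consistent with the action-holding strategy the abstract describes.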
