Abstract
High-performance cycling ergometers require precise control systems to ensure smooth operation, dynamic stability, and accurate resistance modulation. Traditional control methods for active magnetic bearings (AMBs) often struggle to manage nonlinear system behavior and external disturbances. This study develops and evaluates a reinforcement learning-based control framework for AMBs to improve stability and responsiveness in cycling ergometers under variable loading conditions. A nonlinear dynamic model of the rotor system is formulated, incorporating magnetic force dynamics, shaft deflection, and disturbances such as pedaling force variation and flywheel imbalance. Two control strategies are implemented and compared: a conventional Proportional-Integral-Derivative (PID) controller and a reinforcement learning controller based on Proximal Policy Optimization (PPO). The PPO controller is trained in simulation, and both controllers are evaluated under transient and steady-state conditions. Results show that the PPO controller provides superior rotor stability, reduced angular deviation, and more robust performance under shock inputs compared to PID control. Additionally, the PPO controller adapts effectively to varying operational scenarios without manual tuning. These findings demonstrate the potential of reinforcement learning for real-time adaptive control of AMBs, offering a promising approach to enhance the performance, reliability, and user experience of next-generation sports training equipment.
