Abstract
In an indirect adaptive neural control scheme, performance can be significantly degraded by an arbitrary choice of the adaptive learning rates of the neural emulator and the neural controller. This paper presents a novel method that uses reinforcement learning to adjust the adaptive learning rates of both the neural network emulator and the neural controller, thereby improving the closed-loop system's performance. Key advantages of the new control algorithm include: (1) real-time optimization of the adaptive learning rates, eliminating the manual tuning typically required by conventional methods, and (2) faster convergence, better disturbance rejection, and more accurate tracking. The efficiency of the reinforcement learning-based rate adjustment is validated through the numerical control of two SISO nonlinear systems. Results demonstrate the improved performance of the proposed neural controller compared with existing methods. The developed approach is also applied to a semi-batch reactor to validate the simulation results.
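The core idea can be illustrated with a minimal sketch: a reinforcement learning agent (here, a simple epsilon-greedy bandit) selects the emulator's adaptive learning rate online, rewarded by the reduction in one-step prediction error. The toy SISO plant, the candidate rate set, and the linear emulator below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(y, u):
    # Toy SISO nonlinear plant (assumed for illustration only).
    return 0.6 * np.sin(y) + u

rates = [0.01, 0.05, 0.1, 0.3]   # candidate adaptive learning rates (actions)
q = np.zeros(len(rates))          # action-value estimate per rate
counts = np.zeros(len(rates))
eps = 0.1                         # exploration probability

w = np.zeros(2)                   # linear emulator: y_hat = w @ [y, u]
y = 0.0
prev_err = None

for t in range(2000):
    u = np.sin(0.05 * t)          # excitation input
    x = np.array([y, u])
    y_next = plant(y, u)
    err = y_next - w @ x          # one-step prediction error

    # RL agent picks the learning rate for this adaptation step.
    a = rng.integers(len(rates)) if rng.random() < eps else int(np.argmax(q))
    w += rates[a] * err * x       # gradient step on squared prediction error

    # Reward: reduction in squared error relative to the previous step.
    if prev_err is not None:
        counts[a] += 1
        q[a] += (prev_err**2 - err**2 - q[a]) / counts[a]
    prev_err = err
    y = y_next

print("preferred learning rate:", rates[int(np.argmax(q))])
print("final |prediction error|:", abs(prev_err))
```

In the full scheme described by the abstract, an analogous mechanism would also tune the controller's learning rate, with the reward tied to closed-loop tracking error rather than emulator prediction error.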
