Control of nonlinear systems presents a significant challenge and has attracted considerable interest within the research community. This study explores reinforcement learning-based control, a method that has shown promising results across a range of applications. The focus is an underactuated double pendulum system, classified as either a pendubot or an acrobot depending on which joint is actuated. The primary goals are to achieve an effective swing-up and to stabilize the system at its upright position. A combined controller is employed, integrating an agent trained with Soft Actor-Critic (SAC), a model-free reinforcement learning method, with a Linear Quadratic Regulator (LQR). Promising results are achieved in simulation. To address the challenges of transferring from simulation to the real world, several techniques are employed, including domain randomization, early termination, and noisy validation, which aim to enhance the robustness of the system in a real-world environment. On real hardware, particularly for the pendubot setup, the controller achieved a limited success rate of 40%. The efficacy of the SAC+LQR controller is quantitatively assessed in both simulated and real-world environments through performance and robustness leaderboards.
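The combined controller described above can be sketched as a simple switching scheme: the SAC policy drives the swing-up, and the LQR takes over once the state is close enough to the upright equilibrium. The following is a minimal, hypothetical illustration; the switching radius, the gain matrix, and the state convention (`[q1, q2, dq1, dq2]`) are illustrative assumptions, not values from the study.

```python
import numpy as np

class SacLqrController:
    """Hybrid controller: a learned SAC policy performs the swing-up and an
    LQR controller stabilizes near the upright position. All parameters here
    are illustrative placeholders, not the study's actual values."""

    def __init__(self, sac_policy, lqr_gain, goal_state, switch_radius=0.5):
        self.sac_policy = sac_policy          # callable: state -> action
        self.K = np.asarray(lqr_gain)         # LQR gain, u = -K (x - x_goal)
        self.x_goal = np.asarray(goal_state)  # upright equilibrium state
        self.switch_radius = switch_radius    # hand-tuned switching threshold

    def _error(self, x):
        # Error to the goal, with joint angles wrapped to (-pi, pi] so the
        # distance is measured to the nearest upright configuration.
        err = np.asarray(x, dtype=float) - self.x_goal
        err[:2] = (err[:2] + np.pi) % (2 * np.pi) - np.pi
        return err

    def act(self, x):
        err = self._error(x)
        if np.linalg.norm(err) < self.switch_radius:
            return -self.K @ err              # near upright: LQR stabilization
        return self.sac_policy(x)             # otherwise: SAC swing-up policy


# Usage with placeholder policy and gain: state is [q1, q2, dq1, dq2].
goal = np.array([np.pi, 0.0, 0.0, 0.0])      # assumed upright configuration
K = np.ones((1, 4))                          # placeholder LQR gain
ctrl = SacLqrController(lambda x: np.array([1.0]), K, goal)
print(ctrl.act(np.zeros(4)))                 # far from upright: SAC acts
print(ctrl.act(goal))                        # at upright: LQR acts
```

In practice the gain `K` would come from solving the Riccati equation for the pendulum linearized about the upright equilibrium, and the switching region is often defined by the LQR's region of attraction rather than a fixed norm threshold.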
Reinforcement Learning-based Control for Swing-up and Stabilization of an Underactuated Double Pendulum System
As a rule, the talks are part of lecture series at the University of Bremen and are not open to the public. If you are interested, please contact the secretariat at sek-ric(at)dfki.de.