Deriving Rewards for Reinforcement Learning from Symbolic Behaviour Descriptions of Bipedal Walking
Daniel Harnack, Christoph Lüth, Lukas Groß, Shivesh Kumar, Frank Kirchner
In: 62nd IEEE Conference on Decision and Control (CDC-2023), 13.12.-15.12.2023, Marina Bay Sands, Singapore, 2023.

Abstract:

Generating physical movement behaviours from their symbolic description is a long-standing challenge in artificial intelligence (AI) and robotics, requiring insights into numerical optimization methods as well as into formalizations from symbolic AI and reasoning. In this paper, a novel approach to finding a reward function from a symbolic description is proposed. The intended system behaviour is modelled as a hybrid automaton, which reduces the system state space and thereby allows more efficient reinforcement learning. The approach is applied to bipedal walking by modelling the walking robot as a hybrid automaton over state-space orthants, and it is used with the compass walker to derive a reward that incentivizes following the hybrid automaton cycle. As a result, training times of reinforcement learning controllers are reduced while the final walking speed is increased. The approach can serve as a blueprint for generating reward functions from symbolic AI and reasoning.
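The core idea of rewarding progress along the orthant cycle of the hybrid automaton can be illustrated with a minimal sketch. The state layout, the orthant sequence, and the bonus/penalty values below are illustrative assumptions for a compass-walker-like system, not the reward function derived in the paper:

import numpy as np

# Hypothetical orthant cycle for a compass-walker-like state
# x = [theta_stance, theta_swing, dtheta_stance, dtheta_swing].
# Each orthant is a sign pattern of the state components; this
# particular cycle is an assumed example, not taken from the paper.
ORTHANT_CYCLE = [
    (+1, -1, -1, +1),
    (+1, +1, -1, +1),
    (-1, +1, -1, -1),
    (-1, -1, -1, -1),
]

def orthant(x, eps=1e-6):
    """Sign pattern of the state, treating near-zero entries as positive."""
    return tuple(1 if xi >= -eps else -1 for xi in x)

def cycle_reward(prev_x, x, step_bonus=1.0, stray_penalty=0.1):
    """Reward term that incentivizes following the orthant cycle.

    +step_bonus    when the state moves from one cycle orthant to the next,
    0              while it stays inside the current cycle orthant,
    -stray_penalty otherwise (leaving the cycle or skipping orthants).
    """
    o_prev, o_now = orthant(prev_x), orthant(x)
    if o_prev in ORTHANT_CYCLE:
        i = ORTHANT_CYCLE.index(o_prev)
        if o_now == o_prev:
            return 0.0
        if o_now == ORTHANT_CYCLE[(i + 1) % len(ORTHANT_CYCLE)]:
            return step_bonus
    return -stray_penalty

# Example: a transition into the next orthant of the cycle earns the bonus.
prev_x = np.array([0.2, -0.3, -0.5, 0.4])
x = np.array([0.2, 0.1, -0.5, 0.4])
print(cycle_reward(prev_x, x))  # 1.0

In practice such a term would be combined with the task reward (e.g. forward velocity) and added to the per-step reward of the reinforcement learning environment.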

Keywords:

Reinforcement Learning, Robotics, Hybrid Systems

Files:

main.pdf

Links:

https://arxiv.org/pdf/2312.10328.pdf

