VeryHuman

Learning and Verifying Complex Behaviours for Humanoid Robots

RH5 Humanoid. Source: DFKI GmbH, Heiner Peters

The validation of systems based on deep learning for use in safety-critical applications is inherently difficult, since their sub-symbolic mode of operation does not provide adequate levels of abstraction for representation and proof of correctness. The VeryHuman project aims to synthesize such levels of abstraction by observing and analysing the upright walking behaviour of a two-legged humanoid robot. The theory to be developed is the starting point both for the definition of an appropriate reward function to optimally control the movements of the humanoid by means of reinforcement learning and for a verifiable abstraction of the corresponding kinematic models, which can be used to validate the behaviour of the robot more easily.

Duration: 01.06.2020 until 31.05.2024
Grant recipient: DFKI GmbH
Sponsor: Federal Ministry of Education and Research
Grant number: 01IW20004
Partner: Cyber-Physical Systems (CPS) Research Department, German Research Center for Artificial Intelligence (DFKI)
Application Field: Assistance- and Rehabilitation Systems
Logistics, Production and Consumer
SAR- & Security Robotics
Space Robotics
Related Projects: D-Rock
Models, methods and tools for the model-based software development of robots (06.2015 - 05.2018)
TransFIT
Flexible interaction for infrastructure establishment by means of teleoperation and direct collaboration; transfer into Industry 4.0 (07.2017 - 06.2021)
Q-Rock
AI-based Qualification of Deliberative Behaviour for a Robotic Construction Kit (08.2018 - 07.2021)
Related Software: HyRoDyn
Hybrid Robot Dynamics
MARS
Machina Arte Robotum Simulans
NDLCom
Node Level Data Link Communication
Rock
Robot Construction Kit

Project details

Overall workflow in the VeryHuman project. Source: DFKI GmbH, photo: Daniel Harnack
Biologically inspired control algorithms for robots have proven very successful. Often, these algorithms use techniques such as reinforcement learning or optimal control to perform sophisticated movement patterns with a robot (e.g., humanoid walking). However, two main challenges exist for these learning-based approaches:
  • First, robust robot hardware along with an accurate simulation of the system is required. For example, the robot can be subject to a large number of holonomic constraints, including internal closed loops and external contacts, which pose challenges to the accuracy of the simulation.
  • Second, control algorithms of this kind can be hard to implement due to a lack of knowledge of rewards and constraints. As an example, consider the upright walking movement of a two-legged humanoid robot. It is not immediately clear how the task of “upright walking” can be specified. We might try to relate different body parts (head above shoulders, shoulders above waist, waist above legs) or use physical stability criteria (centre of pressure, zero moment point, etc.), but do these really specify walking, and what are the non-trivial properties? This leads to the non-trivial task of defining a suitable reward function for (deep) reinforcement learning approaches, or a cost function for optimal control approaches, along with constraints; a minimal illustrative sketch of such a reward function follows this list.
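The following is a minimal sketch, not a project implementation, of how a shaped reward for “upright walking” might combine the posture relations and stability criteria mentioned above. All state fields (head, shoulder and waist heights, centre-of-mass position, support-polygon centre, forward velocity), the weights and the target speed are illustrative assumptions, not part of the RH5 software stack.

# Illustrative sketch only: a hypothetical reward function for "upright walking",
# combining posture relations and a simple balance proxy. All fields, weights and
# thresholds are assumptions made for illustration.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class RobotState:
    head_z: float                            # head height above ground [m]
    shoulder_z: float                        # shoulder height [m]
    waist_z: float                           # waist height [m]
    com_xy: Tuple[float, float]              # horizontal centre-of-mass position [m]
    support_center_xy: Tuple[float, float]   # centre of the support polygon [m]
    forward_velocity: float                  # speed along the walking direction [m/s]

def upright_walking_reward(s: RobotState, target_speed: float = 0.5) -> float:
    """Hypothetical shaped reward: posture ordering + balance + forward progress."""
    # Posture term: reward the vertical ordering head > shoulders > waist.
    posture = float(s.head_z > s.shoulder_z > s.waist_z)

    # Balance term: penalise horizontal distance of the CoM from the support centre.
    dx = s.com_xy[0] - s.support_center_xy[0]
    dy = s.com_xy[1] - s.support_center_xy[1]
    balance = -(dx * dx + dy * dy)

    # Progress term: penalise deviation from the desired walking speed.
    progress = -(s.forward_velocity - target_speed) ** 2

    # Weights are arbitrary illustration values, not tuned project parameters.
    return 1.0 * posture + 2.0 * balance + 0.5 * progress

Even this simple sketch makes the core difficulty visible: each term is only a proxy, and choosing the terms and their weights already presupposes a notion of what “walking” means.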
This project aims at three basic research questions:
  • How can we formulate and prove properties of a complex humanoid robot?
  • How can we efficiently combine reinforcement learning and optimal control-based approaches?
  • How can we make use of symbolic properties to derive a reward function in a deep reinforcement learning or optimal control approach for complex use cases such as humanoid walking?
These three research questions are closely interwoven and are dealt with in three work areas that lead to the overarching goal of the project: a methodology to develop a hybrid of deep (reinforcement) learning and optimization-based control of a robot, together with a corresponding rational reconstruction of its observed and future behaviour. This reconstruction is based on observations of the robot’s movements and general knowledge of physics (rigid body dynamics). The overall approach is shown in Fig. 2. The demonstration scenario includes showing walking for RH5, a complex series-parallel humanoid robot recently developed at DFKI-RIC (see Fig. 1).
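As a small illustration of the verification side of this workflow, the sketch below shows how an abstract property such as “the torso stays upright throughout the motion” could be checked against observed or simulated rollouts. The data layout, the predicate and the height threshold are illustrative assumptions, not the project’s verified kinematic abstraction.

# Minimal sketch, assuming observed rollouts are available as sequences of
# per-timestep torso measurements. The predicate and threshold are illustrative
# assumptions.

from typing import Iterable, NamedTuple

class TorsoSample(NamedTuple):
    head_z: float       # head height above ground [m]
    shoulder_z: float   # shoulder height [m]
    waist_z: float      # waist height [m]

def is_upright(s: TorsoSample, min_head_height: float = 1.2) -> bool:
    """Symbolic property: body parts ordered vertically and head above a threshold."""
    return s.head_z > s.shoulder_z > s.waist_z and s.head_z >= min_head_height

def uprightness_invariant(trajectory: Iterable[TorsoSample]) -> bool:
    """Check that the uprightness property holds in every state of a rollout."""
    return all(is_upright(s) for s in trajectory)

A property expressed in this explicit, checkable form can serve both as a monitor on observed behaviour and as a candidate building block for the reward or cost functions discussed above.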
