
Advanced AI – Robot Learning
The Team "Robot Learning" aims to develop methods and approaches that enable machines to learn from human-machine or machine-to-machine interactions. Machines in this context can either be robotic systems or synthetic agents that interact with their pendants in simulated or real environments. With these approaches, autonomous robots and synthetic agents which operate in complex environments along humans or other systems over long periods of time will continously be able to learn. As a result of the interactions these learning processes are based on, robots will be able to not only improve their own behaviour but also quickly adjust to different challenges within their team of other machines and/or humans. This will allow for sustainable cooperation within a team that optimally utilizes each members’ expertise while also facilitating the exchange of skills and knowledge.
The "Robot Learning" -Team develops machine learning approaches that provide the means for robots to learn complex behavior from interaction on the basis of generalizable behavioural primitives. In human-computer-interaction, the robot’s behaviour can be adapted in ways that increase its predictability for humans, thus making it a more easily acceptable interaction partner. The robot’s predictability for humans is in turn directly linked to the human’s predictability for the system, which is an important aspect of the human-computer-interaction’s safety. The study of machine-to-machine interaction sheds light on comprehensive potential for optimization in the realm of machine modelling as well as on cooperative and competing solution behaviour for complex, structured and unstructured problem areas in robotic interaction.
Team lead: Dr. Patrick Draheim
Intrinsic interactive reinforcement learning: Using error-related potentials
Through negative feedback from humans, the robot learns from its own mistakes.
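The project title above names interactive reinforcement learning driven by error-related potentials (ErrPs), EEG signals elicited when a person observes an erroneous action. The sketch below only illustrates that general idea and is not the team's implementation: a tabular Q-learning agent in a toy corridor world folds a simulated ErrP detection into its reward. The constants (ERRP_PENALTY, DETECTION_ACC) and the stand-in detector are assumptions made for the sake of the example.

# Minimal sketch (assumption, not the team's method): interactive Q-learning
# in which a detected error-related potential is mapped to a negative reward.
# The ErrP detector is simulated here.

import random

N_STATES, N_ACTIONS = 5, 2      # tiny corridor world: move left (0) or right (1)
GOAL = N_STATES - 1
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2
ERRP_PENALTY = -0.5             # hypothetical weight for the human error signal
DETECTION_ACC = 0.8             # assumed accuracy of the ErrP classifier

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def simulated_errp(state, action):
    """Stand-in for an EEG-based ErrP classifier: flags actions that move
    away from the goal, with imperfect detection accuracy."""
    erroneous = (action == 0)   # moving left is always suboptimal here
    return erroneous if random.random() < DETECTION_ACC else not erroneous

for episode in range(500):
    state = 0
    while state != GOAL:
        # epsilon-greedy action selection
        if random.random() < EPS:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])

        next_state = min(max(state + (1 if action == 1 else -1), 0), GOAL)
        reward = 1.0 if next_state == GOAL else 0.0

        # fold the (simulated) human error signal into the reward
        if simulated_errp(state, action):
            reward += ERRP_PENALTY

        # standard Q-learning update
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

print("Learned greedy actions per state:",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])

In a real setup, the simulated detector would be replaced by a classifier operating on EEG recordings, and the penalty could be weighted by the classifier's confidence rather than applied as a fixed value.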