Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction
Su-Kyoung Kim, Elsa Andrea Kirchner, Arne Stefes, Frank Kirchner
In Scientific Reports, volume 7, article number 17562, Dec 2017.

Abstract :

Reinforcement learning (RL) enables robots to learn their optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
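The core idea of the abstract, using noisy binary feedback (here an ErrP detection) as the reward signal for learning a gesture-to-action mapping, can be illustrated with a minimal sketch. This is not the authors' implementation: the bandit-style update, the gesture/action names, and the simulated detector are illustrative assumptions; only the ~90% single-trial detection accuracy is taken from the abstract.

```python
import random

# Illustrative sketch (not the paper's method): a bandit-style learner that
# acquires a gesture -> action mapping from noisy binary feedback, as an
# ErrP-based implicit reward channel might provide.
GESTURES = ["wave", "point", "stop"]
ACTIONS = ["approach", "turn", "halt"]
TRUE_MAP = {"wave": "approach", "point": "turn", "stop": "halt"}  # hypothetical
DETECTOR_ACC = 0.90  # simulated single-trial ErrP detection accuracy (from abstract)

def errp_feedback(gesture, action, rng):
    """Simulated implicit feedback: +1 if no ErrP is detected, -1 otherwise.
    The detector reports the true correctness with probability DETECTOR_ACC."""
    correct = TRUE_MAP[gesture] == action
    observed = correct if rng.random() < DETECTOR_ACC else not correct
    return 1 if observed else -1

def learn(trials=600, epsilon=0.1, alpha=0.3, seed=0):
    """Epsilon-greedy value learning of the gesture -> action mapping."""
    rng = random.Random(seed)
    q = {(g, a): 0.0 for g in GESTURES for a in ACTIONS}
    for _ in range(trials):
        g = rng.choice(GESTURES)
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)          # occasional exploration
        else:
            a = max(ACTIONS, key=lambda x: q[(g, x)])  # greedy choice
        r = errp_feedback(g, a, rng)
        q[(g, a)] += alpha * (r - q[(g, a)])  # running estimate of the reward
    # Report the learned mapping: best action per gesture
    return {g: max(ACTIONS, key=lambda a: q[(g, a)]) for g in GESTURES}

print(learn())
```

Despite the 10% feedback error rate, the correct mapping typically emerges after a few hundred trials, which mirrors the abstract's point that imperfect single-trial ErrP detection can still drive successful learning.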

Files:

Su_Kyoung_Kim_SciRep_2017.pdf

Links:

https://www.nature.com/articles/s41598-017-17682-7


© DFKI GmbH
last updated 28.02.2023