Flexible online adaptation of learning strategy using EEG-based reinforcement signals in real-world robotic applications
In Proceedings of the IEEE International Conference on Robotics and Automation, (ICRA-2020), 31.3.-31.8.2020, Paris, IEEE, pages 4885-4891, Aug/2020.
Flexible adaptation of the learning strategy depending
on online changes of the user's current intents is highly
relevant in human-robot collaboration. In our previous study,
we proposed an intrinsic interactive reinforcement learning
approach for human-robot interaction, in which a robot learns
its action strategy based on intrinsic human feedback
that is generated in the human's brain as a neural signature
of the human's implicit evaluation of the robot's actions. Our
approach has an inherent property that allows robots to adapt
their behavior depending on online changes of the human's
current intents. Such flexible adaptation is possible because robot
learning is updated in real time by the human's online feedback.
In this paper, the adaptivity of robot learning is tested on eight
subjects who change their current control strategy by adding a
new gesture to the previously used gestures. This paper evaluates
the learning progress by analyzing learning phases (before and
after adding a new gesture for control). The results show that
the robot can adapt the previously learned policy depending
on online changes of the user's intents. In particular, learning
progress correlates with the classification performance of
electroencephalograms (EEGs), which are used to measure the
human's implicit evaluation of the robot's actions.
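The abstract describes an interactive reinforcement-learning loop in which the reward signal is decoded from the user's EEG rather than given explicitly. A minimal sketch of such a loop is given below; it is not the paper's implementation, and the binary ErrP-style classifier (simulated here by `eeg_reward` with a fixed accuracy), the gesture/action names, and the tabular one-step Q-update are all illustrative assumptions.

```python
import random

def eeg_reward(action_correct, clf_accuracy=0.8):
    """Hypothetical EEG-based reward: a binary classifier labels each
    robot action as correct or erroneous with limited accuracy,
    mimicking the implicit evaluation decoded from the user's EEG."""
    observed = action_correct if random.random() < clf_accuracy else not action_correct
    return 1.0 if observed else -1.0

def q_update(Q, state, action, reward, alpha=0.1):
    """One-step (bandit-style) Q-value update driven by the EEG reward."""
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward - old)

def train(Q, intended, actions, steps=2000, eps=0.1):
    """Epsilon-greedy interaction loop: the robot acts, the (simulated)
    EEG classifier evaluates the action, and the policy is updated online."""
    for _ in range(steps):
        state = random.choice(list(intended))
        if random.random() < eps:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q.get((state, a), 0.0))
        q_update(Q, state, action, eeg_reward(action == intended[state]))

random.seed(0)
# Toy mapping: each gesture (state) has one intended robot action.
intended = {"gesture_a": "move_left", "gesture_b": "move_right"}
actions = ["move_left", "move_right", "stop"]
Q = {}
train(Q, intended, actions)

# Online adaptation, as tested in the paper's protocol: the user adds a
# new gesture, and the same loop extends the previously learned policy
# without retraining from scratch.
intended["gesture_c"] = "stop"
train(Q, intended, actions)
```

Because the noisy EEG-derived reward still has the correct sign in expectation, the learned Q-values converge to the intended gesture-action mapping, both before and after the new gesture is added.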