New research challenges in robotics, arising from efforts to move robots out of isolated environments and into human spaces with close human-robot interaction, demand new methods for acquiring a deeper understanding of human behavior and intentions. This is especially true in Learning from Demonstration (LfD), which provides an intuitive way to teach robotic systems new behaviors from human movement examples: approaches are needed to recognize relevant movement segments in human motion data. In this way, building blocks of robotic motions can be learned and combined to generate a wide range of behaviors.
This thesis introduces algorithms to detect and annotate human building block movements in recordings of manipulation movements, as well as an approach to determine hierarchical structures in these movements. The velocity-based Multiple Change-point Inference (vMCI) algorithm is presented, which identifies building blocks with a bell-shaped velocity profile using Bayesian inference. The algorithm can be applied online and automatically, without the need for expert knowledge, time-consuming training data generation, or parameter tuning. To annotate the detected building blocks, several standard movement recognition approaches are compared with respect to their performance when only a small number of labeled movement examples is available for training. It is shown that with 1-Nearest Neighbor (1-NN)-based movement classification, annotations can be found quickly and reliably under these conditions. To detect basic movements as well as their concatenation into more complex, labeled actions, the velocity-based Hierarchical Movement Segmentation (vHMS) algorithm is presented, which analyzes human movements hierarchically.
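The core idea behind velocity-based segmentation can be illustrated with a toy sketch. This is not the vMCI algorithm itself, which performs Bayesian inference over multiple change points; the sketch below merely shows, under simplifying assumptions (a single change point, a fixed minimum-jerk-style velocity template, least-squares fitting), how a boundary between two building blocks can be located where both parts of a speed profile best match a bell shape. All function names are illustrative, not taken from the thesis.

```python
import numpy as np

def bell(n):
    """Minimum-jerk-style speed template on n samples, peak-normalized."""
    t = np.linspace(0.0, 1.0, n)
    v = t**2 * (1.0 - t)**2
    return v / v.max()

def segment_cost(v):
    """Squared residual of v against its best-scaled bell template."""
    b = bell(len(v))
    scale = v @ b / (b @ b)  # closed-form least-squares amplitude
    return float(np.sum((v - scale * b) ** 2))

def best_change_point(v, min_len=5):
    """Single change point minimizing the summed bell-fit cost of both parts."""
    costs = {k: segment_cost(v[:k]) + segment_cost(v[k:])
             for k in range(min_len, len(v) - min_len)}
    return min(costs, key=costs.get)

# Synthetic speed signal: two concatenated bell-shaped strokes of
# different duration and amplitude, joined at index 40.
v = np.concatenate([2.0 * bell(40), 1.0 * bell(60)])
k = best_change_point(v)  # recovered boundary, close to 40
```

An exhaustive search over a single boundary suffices here only because the toy signal contains two segments; handling an unknown number of change points online is exactly what the Bayesian formulation of vMCI addresses.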
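The annotation step can likewise be sketched in a few lines. The snippet below is a generic 1-NN classifier over movement segments using a dynamic-time-warping (DTW) distance, a common choice for time series of differing lengths; the thesis does not prescribe this particular distance, and the labels and template signals are invented for illustration.

```python
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_1nn(query, templates):
    """Return the label of the nearest labeled template under DTW."""
    return min(templates, key=lambda lt: dtw(query, lt[1]))[0]

# Two labeled example segments (hypothetical labels and shapes).
t = np.linspace(0.0, 1.0, 30)
templates = [
    ("reach", np.sin(np.pi * t)),     # single positive stroke
    ("retract", -np.sin(np.pi * t)),  # mirrored stroke
]

# A shorter, rescaled variant of the "reach" stroke is still matched
# correctly, since DTW aligns sequences of different lengths.
label = classify_1nn(0.8 * np.sin(np.pi * t[::2]), templates)
```

Because 1-NN needs no training phase beyond storing the labeled examples, it fits the small-training-set setting discussed above: each new segment is compared directly against the few available demonstrations.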
Each of the developed methods is evaluated on movement recordings acquired from several subjects, ranging from simple point-to-point movements to more complex dual-arm object manipulation tasks. Furthermore, the application of the proposed algorithms within a framework for learning new robotic behavior from human demonstration is shown: different throwing motions are transferred to several robotic systems in a nearly automated way and in a reasonable amount of time. Additionally, the application of the segmentation approaches to teleoperated movements recorded with an exoskeleton is presented.
With the presented algorithms, the imitation of human behavior by robotic systems becomes more intuitive, more automated, and more generally applicable. Furthermore, the hierarchical movement segmentation approach opens the door to constructing hierarchical learning approaches that build on human demonstrations to learn complex robotic behavior more effectively.