Learning Intention Aware Online Adaptation of Movement Primitives
In IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3719-3726, 2019.
To operate close to non-experts, future robots require both an intuitive form of instruction accessible to laypersons and the ability to react appropriately to a human co-worker. Instruction by imitation learning with probabilistic movement primitives (ProMPs) captures tasks by learning robot trajectories from demonstrations, including the motion variability. However, appropriate responses to human co-workers during the execution of the learned movements are crucial for fluent task execution, perceived safety, and subjective comfort. To facilitate such responsive behaviors in human-robot interaction, the robot needs to react to its human workspace co-inhabitant online during ProMP execution. To this end, we learn a goal-based intention prediction model from human motions. Using this probabilistic model, we introduce intention-aware online adaptation of ProMPs. We compare two novel approaches: first, online spatial deformation, which avoids collisions by dynamically changing the shape of the ProMP trajectories during execution while staying close to the demonstrated motions; and second, online temporal scaling, which adapts the velocity profile of a ProMP to avoid time-dependent collisions. We evaluate both approaches in experiments with non-expert users. The subjects reported a higher level of perceived safety and felt less disturbed during intention-aware adaptation, in particular during spatial deformation, compared to the robot's non-adaptive behavior.
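To give a flavor of the temporal-scaling idea, here is a minimal 1-D sketch: a ProMP mean trajectory is represented as a weighted sum of Gaussian basis functions over a phase variable, and the phase velocity is reduced while a predicted human occupancy signal is high, so the robot passes through the contested region later without changing the path's shape. This is an illustrative assumption-laden toy, not the paper's implementation; the basis parameterization, the `occupancy` function, and all constants are invented for illustration.

```python
import numpy as np

def rbf_features(z, n_basis=10, width=0.02):
    # Normalized Gaussian basis functions over the phase variable z in [0, 1]
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(z - centers) ** 2 / (2.0 * width))
    return phi / phi.sum()

def promp_mean(z, w):
    # Mean trajectory value at phase z for a learned weight vector w
    return rbf_features(z, n_basis=len(w)) @ w

# Hypothetical learned weights for a 1-D reaching motion (stand-in for
# weights obtained by imitation learning from demonstrations)
w = np.linspace(0.0, 1.0, 10)

# Nominal execution: constant phase velocity
dt = 0.01
z, traj_nominal = 0.0, []
while z < 1.0:
    traj_nominal.append(promp_mean(z, w))
    z += dt

# Hypothetical intention-prediction output: 1.0 while the human is
# predicted to occupy the robot's path, 0.0 otherwise
def occupancy(t):
    return 1.0 if 0.3 < t < 0.6 else 0.0

# Temporal scaling: slow the phase while occupancy is high, which
# stretches the velocity profile but leaves the spatial path unchanged
z, t, traj_scaled = 0.0, 0.0, []
while z < 1.0:
    traj_scaled.append(promp_mean(z, w))
    speed = 1.0 - 0.8 * occupancy(t)  # slow to 20% when occupied
    z += speed * dt
    t += dt
```

The scaled execution visits the same sequence of positions as the nominal one but takes more time steps, which is the essence of avoiding time-dependent collisions; the spatial-deformation alternative would instead add an offset to the trajectory shape itself.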