Assistive robotic manipulators have the potential to help individuals with impairments regain some of their independence in performing Activities of Daily Living. However, interacting with assistive robotic manipulators remains a challenging task for these individuals. In this talk, I will present a robot learning framework that enables a robot to automatically generate a sequence of actions from unstructured spoken language. In a study with 25 participants, the framework distinguished instructions from unrelated conversation and accurately executed the requested actions. I will also present preliminary results from my recent National Science Foundation (NSF) grant on developing a personalized assistive robotic system that assesses cognitive fatigue in persons with paralysis.
Maria Kyrarini is an Assistant Professor of Electrical and Computer Engineering and David Packard Jr. Faculty Fellow at Santa Clara University (SCU). She also leads the Human-Machine Interaction & Innovation (HMI2) research group, which has been supported by federal (NSF) and SCU internal grants. Her primary research interests are in the fields of Robot Learning from Human Demonstrations, Human-Robot Interaction, and Assistive Robotics, with a special focus on enhancing Human Performance.
Prior to SCU, she was a postdoctoral research fellow at the University of Texas at Arlington and the assistant director of the Heracleia Human-Centered Computing Lab. In 2019, Dr. Kyrarini received her Ph.D. in Engineering from the University of Bremen under the supervision of Professor Dr.-Ing. Axel Gräser. Her Ph.D. thesis is titled "Robot Learning from Human Demonstrations for Human-Robot Synergy". Before that, she received her M.Eng. degree in Electrical and Computer Engineering and her M.Sc. degree in Automation Systems, both from the National Technical University of Athens (NTUA), in 2012 and 2014, respectively.