PhD Talk: Few-shot behavior learning

We humans can remember a face after seeing it only once, and we can grasp and identify an object we have never interacted with before. We perform these motor-perceptual tasks with a limited amount of information and experience. In contrast to these capabilities, current state-of-the-art algorithms for computer vision and high-level manipulation rely on deep learning methods that consume a large number of training samples yet achieve only limited generalization. This is especially problematic for robotic systems that are expected to operate in a wide range of environments while performing many different tasks. The pressing question is: which priors and algorithms would allow a robot to learn these motor-perceptual tasks with the same flexibility and limited resources that a human uses? In this talk, I present some of the limitations of deep learning methods and propose a PhD thesis in which I investigate how to learn robot behaviors from less information.

The talks are usually part of lecture series at the University of Bremen and are not openly accessible. If you are interested, please contact the secretariat at sek-ric(at)dfki.de.

last modified on 30.07.2019