There is ample evidence that the brain performs sensory prediction for motor control and perception. Since many species rely heavily on vision, sensory prediction in the visual domain is of special interest for computational modeling. Two studies on this topic will be presented. In the first study, a learning algorithm for visual prediction is developed and tested on a robot camera head. It is inspired by the predictive remapping of the retinal positions of visual receptive fields in the brain. In the second study, a model for grasping extrafoveal (non-fixated) target objects is implemented on a robot setup. This model is based on the premotor theory of attention (Rizzolatti et al., 1994) and includes the additional hypothesis that attention shifts caused by saccade programming imply a prediction of the retinal foveal image after the saccade. The motor commands for arm control are generated using a Gaussian mixture model adapted to the sensorimotor data distribution of the grasping task. Finally, it will be discussed how these modeling approaches could be applied to technical systems.
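The abstract does not specify the exact mixture-model formulation used for arm control; a common way to turn a Gaussian mixture model fitted to sensorimotor data into a controller is Gaussian mixture regression (GMR): fit the GMM on joint (sensory, motor) samples, then condition on the current sensory input to obtain an expected motor command. The sketch below is illustrative only — the toy data, dimensions, and the `predict_motor` helper are assumptions, not the speaker's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy sensorimotor data (illustrative): sensory input x in R^2
# (e.g. retinal target position), motor command y in R^1.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(500, 2))
y = (x[:, 0] + 0.5 * x[:, 1] + 0.01 * rng.standard_normal(500))[:, None]
data = np.hstack([x, y])

# Fit a GMM to the joint (sensory, motor) distribution.
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(data)

def predict_motor(x_query, gmm, dx=2):
    """Condition the joint GMM on the sensory dims [0:dx] (GMR)."""
    resp, cond_means = [], []
    for k in range(gmm.n_components):
        mu_x = gmm.means_[k, :dx]
        mu_y = gmm.means_[k, dx:]
        Sxx = gmm.covariances_[k][:dx, :dx]
        Syx = gmm.covariances_[k][dx:, :dx]
        inv = np.linalg.inv(Sxx)
        diff = x_query - mu_x
        # Component responsibility for x_query (2*pi factor cancels later).
        log_p = -0.5 * diff @ inv @ diff - 0.5 * np.log(np.linalg.det(Sxx))
        resp.append(gmm.weights_[k] * np.exp(log_p))
        # Conditional mean of the motor dims given the sensory input.
        cond_means.append(mu_y + Syx @ inv @ diff)
    resp = np.array(resp) / np.sum(resp)
    return sum(r * m for r, m in zip(resp, cond_means))

pred = predict_motor(np.array([0.5, -0.2]), gmm)
```

With the toy linear mapping above, the conditioned prediction at (0.5, -0.2) should land close to the true value 0.4; the same conditioning step is what lets a single generative model serve as a sensorimotor controller.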
Visual Prediction and Extrafoveal Grasping
Venue: Seminarraum 117, Robert-Hooke-Str. 5, Bremen
As a rule, the talks are part of lecture series at the University of Bremen and are not open to the public. If you are interested in attending, please contact the secretariat at sek-ric(at)dfki.de.