This talk deals with a fundamental problem in robotics and artificial intelligence: how to bridge the gap between semantics and perception. The semantic domain can be described as a set of symbols and their meanings in an ontology. Symbolic representation is the way things in the known world are named and classified (e.g. a book, a tree, a chair). On the symbolic level, task planning can be performed, e.g. "take the cup from the table".
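To make the symbolic level concrete, the following is a minimal STRIPS-style sketch of such a planning step. The predicates (`on`, `holding`, `empty`) and the single `take` action are illustrative assumptions, not the planner used in this work.

```python
# A symbolic state is a set of ground facts (predicate, arguments...).
state = {("on", "cup", "table"), ("empty", "gripper")}

# A STRIPS-style action: preconditions must hold; effects add/delete facts.
take_cup = {
    "pre": {("on", "cup", "table"), ("empty", "gripper")},
    "add": {("holding", "cup")},
    "del": {("on", "cup", "table"), ("empty", "gripper")},
}

def apply_action(state, action):
    """Apply an action if its preconditions are satisfied in the state."""
    if not action["pre"] <= state:  # subset test: all preconditions hold?
        raise ValueError("preconditions not met")
    return (state - action["del"]) | action["add"]

new_state = apply_action(state, take_cup)
print(("holding", "cup") in new_state)  # True
```

A planner searches for a sequence of such actions whose cumulative effects reach a goal state; the abstraction only works once perception has anchored symbols like `cup` to real sensor data.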
In contrast to the symbolic level, robots perceive their environment mainly through exteroceptive sensors, which produce a metric representation of the robot's surroundings. The sensors most commonly used in robotics are (stereo) vision and laser range finders. The problem is that sensor data are usually erroneous and incomplete, since objects are often only partially visible. Moreover, objects that have to be classified vary in size and shape, which makes object recognition even harder.
In this work, a strategy is proposed that makes use of spatial feature descriptors which are perceivable in the geometric domain and also describable in the semantic domain. These features also capture spatial relationships between objects, which provide important information about the scene perceived by the robot. A probabilistic reasoning approach is proposed to identify objects based on shape and spatial relationships and to anchor them to a symbolic knowledge base. The classified objects are then handed to a planner for task planning.
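The probabilistic idea can be sketched as a simple Bayesian fusion step: a shape-based likelihood is combined with a prior derived from an observed spatial relation (e.g. an object resting on a table is more likely a cup than a chair). All class names and probability values below are illustrative assumptions, not results from this work.

```python
# P(observed shape descriptor | class) -- from geometric feature matching.
shape_likelihood = {"cup": 0.6, "book": 0.3, "chair": 0.1}

# P(class | relation on(x, table)) -- prior from the spatial relationship.
relation_prior = {"cup": 0.5, "book": 0.4, "chair": 0.1}

def posterior(likelihood, prior):
    """Fuse likelihood and prior via Bayes' rule, normalising over classes."""
    unnorm = {c: likelihood[c] * prior[c] for c in likelihood}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

post = posterior(shape_likelihood, relation_prior)
best = max(post, key=post.get)  # label anchored to the symbolic knowledge base
print(best)  # cup
```

The most probable class is the symbol anchored to the perceived object and passed on to the planner; ambiguous posteriors could instead trigger further sensing.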