3D Scene Recovery and Spatial Scene Analysis for Unorganized Point Clouds
Markus Eich, Malgorzata Goldhoorn
In Proceedings of the 13th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines (CLAWAR-10), Nagoya, Japan, Aug 31 - Sep 3, 2010.
Understanding the environment is mandatory for any type of autonomous robot, and attaching semantics to self-generated sensor data is one of the most challenging tasks in robotics. While navigation tasks can be performed using purely geometric knowledge, high-level planning and intelligent reasoning are only possible if the gap between semantic and geometric representation is narrowed. In this paper, we introduce our approach for recovering 3D scene information from unorganized point clouds generated by a tilting laser range scanner in a typical indoor environment. This unorganized data has to be analyzed for geometric and recognizable structures so that a robot is able to understand its perception. We discuss how this spatial information, based solely on segmented shapes and their extractable features, can be used for semantic interpretation of the scene. This illustrates how the gap between semantic and spatial representation can be bridged by spatial reasoning, thereby increasing robot autonomy.
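The abstract does not specify the segmentation method, but a common building block for extracting planar shapes (walls, floors, table tops) from an unorganized point cloud is RANSAC plane fitting. The following is a minimal, self-contained sketch of that general idea, not the authors' actual pipeline; all function names and parameter values here are illustrative assumptions.

```python
import math
import random

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def ransac_plane(points, iterations=300, threshold=0.05):
    """Find the dominant plane in an unorganized point cloud.

    Repeatedly fits a plane through 3 random points and keeps the
    hypothesis supported by the most inliers (points within
    `threshold` of the plane). Returns (anchor_point, unit_normal,
    inlier_list). Parameters are illustrative, not tuned values.
    """
    best = (None, None, [])
    for _ in range(iterations):
        p1, p2, p3 = random.sample(points, 3)
        n = cross(sub(p2, p1), sub(p3, p1))
        length = math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
        if length < 1e-9:
            continue  # degenerate (near-collinear) sample
        n = (n[0]/length, n[1]/length, n[2]/length)
        inliers = [p for p in points
                   if abs(n[0]*(p[0]-p1[0]) +
                          n[1]*(p[1]-p1[1]) +
                          n[2]*(p[2]-p1[2])) < threshold]
        if len(inliers) > len(best[2]):
            best = (p1, n, inliers)
    return best

# Synthetic demo: a noisy horizontal "floor" (z ~ 0) plus clutter above it.
random.seed(42)
cloud = [(random.uniform(-1, 1), random.uniform(-1, 1), random.gauss(0, 0.01))
         for _ in range(200)]
cloud += [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.2, 2.0))
          for _ in range(50)]
anchor, normal, inliers = ransac_plane(cloud)
print(len(inliers), round(abs(normal[2]), 3))  # expect ~200 inliers, normal near (0,0,1)
```

In a full pipeline, the inliers of each detected plane would be removed and the search repeated, and the resulting shape features (orientation, extent, height) would feed the kind of spatial reasoning the paper describes.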
Keywords: 3D scene recovery, 3D scene interpretation, point clouds, scene understanding