ART-based fusion of multi-modal perception for robots
Neurocomputing, Elsevier, vol. 107, pp. 11-22, May 2013.
Robotic application scenarios in uncontrolled environments place high demands on mobile robots. This is especially true when human-robot or robot-robot interaction is involved: potential interaction partners must first be identified. To tackle such challenges, robots rely on several sensory systems. In many cases, these systems deliver erroneous data and are processed separately. A possible strategy to improve identification results is to combine the processing results of complementary sensors. However, the relation between sensors is often hard-coded and difficult to learn incrementally when new kinds of objects or events occur. In this paper, we present a new fusion strategy, the Simplified Fusion ARTMAP (SiFuAM), which is very flexible and can therefore be easily adapted to new domains or sensor configurations. Because our approach is based on Adaptive Resonance Theory (ART), it is inherently capable of incremental on-line learning. We show its applicability in different robotic scenarios and on different platforms and give an overview of its performance.
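The abstract states that the approach builds on Adaptive Resonance Theory, which supports incremental on-line learning by matching each input against learned category prototypes and creating a new category when no match passes a vigilance test. As background, the following is a minimal sketch of a single Fuzzy ART module, the standard building block underlying ARTMAP-style architectures; the class name, parameter values, and this simplified single-module form are illustrative assumptions, not the paper's SiFuAM.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART sketch (illustrative; not the paper's SiFuAM)."""

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho      # vigilance: higher values yield finer categories
        self.alpha = alpha  # choice parameter (small positive constant)
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.w = []         # one weight vector per learned category

    @staticmethod
    def _complement_code(x):
        # Complement coding maps x in [0,1]^d to [x, 1-x] in [0,1]^(2d),
        # which keeps the input norm constant and prevents category drift.
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        """Present one sample; return the index of the chosen category."""
        i = self._complement_code(x)
        # Rank existing categories by the choice function
        # T_j = |i ^ w_j| / (alpha + |w_j|), where ^ is fuzzy AND (min).
        order = sorted(
            range(len(self.w)),
            key=lambda j: -np.minimum(i, self.w[j]).sum()
                          / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:  # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        self.w.append(i)           # no category resonates: create a new one
        return len(self.w) - 1
```

With a vigilance of 0.75, inputs near (0.1, 0.1) and (0.9, 0.9) form two separate categories, and a later sample near the first cluster is assigned to the existing category rather than spawning a new one, illustrating incremental learning without retraining.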
Keywords: sensor data fusion, incremental learning, Adaptive Resonance Theory, ART, robotic systems, ARTMAP