INSYS

Interpretable Monitoring Systems

The goal of INSYS, based on the Explainable Artificial Intelligence (XAI) in Machine Learning agenda of the Robotics Working Group and DFKI RIC, is to design and implement complex systems equipped with interpretable mechanisms to assist in the monitoring of space missions. INSYS is a joint project with the Robotics Working Group (AG Robotik) of the University of Bremen.

Duration: 01.12.2020 to 30.06.2023
Grant recipient: German Research Center for Artificial Intelligence GmbH & University of Bremen
Sponsor: Federal Ministry for Economic Affairs and Climate Action
Grant number: 50 RA 2036 and 50 RA 2035, funded by the German Federal Ministry for Economic Affairs and Energy (BMWi)
Partner: University of Bremen
Application Field: Space Robotics
Related Projects: Entern
Environment Modelling and Navigation for Robotic Space-Exploration (10.2014 - 12.2017)
TransTerrA
Semi-autonomous cooperative exploration of planetary surfaces including the installation of a logistic chain as well as consideration of the terrestrial applicability of individual aspects (05.2013 - 12.2017)
TransFIT
Flexible interaction for the establishment of infrastructure by means of teleoperation and direct collaboration; transfer to Industry 4.0 (07.2017 - 12.2021)

Project details

Schematic representation of the INSYS approach for a robust, multimodal robotic system: based on the sensor (s) and actuator (a) values of the robotic system, different ML approaches cascade into the interpretable multimodal DNN and finally into the overall consistency check and monitoring designed for the different subtasks of the robotic system. Within the structure of the overall approach, a distinction is made between the sensor level (sensor data preprocessing and anomaly detection) and the model level (neural networks, their results, and their explainability).

The INSYS project is concerned with the interpretability of learned models and the resulting possibilities for self-monitoring of complex robotic systems working with multimodal data. This is to be accomplished by developing novel XAI approaches for multimodal robotic systems that make the correlation between cause (the input data) and effect (the model output) easier to analyze and to explain.
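
As a concrete illustration of this kind of cause-effect analysis, the following minimal Python sketch computes a gradient-based saliency map for a classifier: the gradient of the predicted score with respect to the input marks the input channels that most influence the model output. The network, input dimensions, and library choice (PyTorch) are illustrative assumptions, not the INSYS implementation.

import torch
import torch.nn as nn

# Hypothetical stand-in for a learned model; the INSYS networks are multimodal DNNs.
model = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 6, requires_grad=True)  # e.g. a vector of fused sensor features

output = model(x)
predicted_class = output.argmax(dim=1).item()
# Gradient of the winning score w.r.t. the input: the "effect" traced back to its "cause".
output[0, predicted_class].backward()
saliency = x.grad.abs().squeeze(0)  # per-input-channel influence on the output
print(saliency)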

On this basis, general correlations within the data can also be analyzed without much prior knowledge. For robotic systems, this means that new situations, anomalies, or malfunctions can be recognized from the data generated by the various sensors and their consistency checks, and the correct functioning of the system can thus be monitored during the execution of various tasks. At the same time, the information obtained can be used to present the analyses of an automatic system to mission control in a comprehensible way, so that it can intervene in the event of errors.
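
One simple form such a consistency check can take at the sensor level is a statistical plausibility test over recent readings. The following Python sketch is an illustrative assumption, not the project's code: it flags a reading as anomalous when it deviates strongly from its recent history.

from collections import deque
import math

class SensorMonitor:
    """Flags readings that are inconsistent with the recent history of one sensor."""

    def __init__(self, window=100, threshold=4.0):
        self.history = deque(maxlen=window)   # sliding window of past readings
        self.threshold = threshold            # allowed deviation in standard deviations

    def check(self, value):
        if len(self.history) >= 10:           # need some history before judging
            mean = sum(self.history) / len(self.history)
            var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-9
            if abs(value - mean) / std > self.threshold:
                return False                  # anomaly: report to monitoring / mission control
        self.history.append(value)
        return True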

In this respect, the monitoring process with regard to the consistency check can be divided into two rough levels of the entire processing chain, from sensing and actuation to deep multimodal neural networks, which the partners pursue individually.

Here, the University of Bremen pursues explainable and interpretable multimodal neural networks at the model level, where the focus is on verifying the output values of the learned models, individually or in combination, against defined environmental conditions.
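
A composite check of this kind could, for example, compare the outputs of several learned models on the same input and treat strong disagreement as a reason to alert mission control. The following Python sketch illustrates this under that assumption; the function name and threshold are hypothetical.

import torch

def outputs_consistent(models, x, max_disagreement=0.2):
    """Returns True if the class probabilities of all models agree within a tolerance."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    # Spread between the most and least confident model for any class.
    spread = (probs.max(dim=0).values - probs.min(dim=0).values).max()
    return spread.item() <= max_disagreement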

DFKI RIC pursues the implementation of monitoring and consistency checking of multimodal robotic systems at the sensor level, where it is checked whether sensor values and/or simple features correspond to the currently expected behavior and to learned expectations.
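
One plausible realization of such a sensor-level check, sketched here as an assumption rather than the project's actual method, is to compare each sensor reading against the expectation of a learned forward model given the current actuator values:

import torch

def reading_matches_expectation(forward_model, actuator_values, sensor_reading, tol=0.5):
    """Checks a sensor reading against the learned expectation for the current actuation."""
    with torch.no_grad():
        expected = forward_model(actuator_values)   # learned expectation of the sensor value
    residual = torch.norm(expected - sensor_reading)
    return residual.item() <= tol                   # True: consistent with expected behavior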

Publications

2023

Terrain Classification Enhanced with Uncertainty for Space Exploration Robots from Proprioceptive Data
Authors: Mariela De Lucas Alvarez, Jichen Guo, Raúl Domínguez, Matias Valdenegro-Toro
In: Journal of LatinX in AI (LXAI) Research (LXAI-2023), 23.7.-29.7.2023, Honolulu, Hawaii, 2023.
