Ground Interaction Models for Increased Autonomy of Planetary Exploration Systems
In: 13th EASN International Conference on Innovation in Aviation & Space for opening New Horizons (13th EASN 2023), 5.9.–8.9.2023, Salerno. IOP Publishing Ltd, series Journal of Physics: Conference Series, 2024.
Planetary exploration of celestial bodies, such as the Moon and Mars, using robotic systems has been common practice for decades.
However, to reach scientifically interesting sites such as craters and sub-surface lava tubes, these systems need high mobility: they must traverse sandy and rugged terrain, avoid or overcome obstacles, and cope with varying slopes.
There exist manifold locomotion concepts, e.g. wheeled rovers, legged walking systems, or hybrids that use legged wheels or wheeled legs. Each has its pros and cons and is therefore better or worse suited to different terrains.
During the exploration of unknown areas, regardless of the type of locomotion, the robots should be able to autonomously characterize the terrain they are traversing and identify whether it is hazardous.
To guarantee mission success, an intelligent system should autonomously navigate around risky areas or, where possible, adapt its locomotion pattern accordingly.
The approach presented in this paper follows the idea of learning meaningful models of robot–ground interaction online.
On the one hand, such models allow the realistic simulation of robot movements on different surfaces, so that mobility characteristics can be predicted as precisely as possible in advance. On the other hand, a digital twin of the real system enables a continuous comparison with the actually measured sensor values and thus the detection of anomalous behavior.
Because the real conditions (gravity, soil composition) cannot be reproduced exactly on Earth, the robots need to be able to learn the ground interaction models online, during traversal, from their own sensor readings.
For a generic approach to learn ground interaction models, six robotic systems developed for planetary exploration will be used.
With three technically different rovers and three technically different walking systems, a broad spectrum for ground interaction will be analyzed.
By collecting performance data on plain ground, loose soil, rubble, and lava floor, a quantitative comparison of walking and roving systems of various sizes and locomotion principles can be carried out, and various ground interaction models can be learned.
Data will be collected using the default locomotion behaviors, but specific probing behaviors that maximize the information gain will also be learned and analyzed.
Deep neural networks will be used to learn efficient models that predict the ground interaction at the respective wheel or leg contacts.
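As a rough illustration of such a learned contact model, the sketch below trains a small multilayer perceptron to map per-contact features to a predicted ground reaction force. The paper does not specify the network architecture, input features, or training scheme; the feature choice (normal load, slip velocity, sinkage), the network size, and the plain-SGD online update used here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ContactModelMLP:
    """Minimal MLP sketch of a learned ground interaction model.

    Per-contact inputs (hypothetical choices): normal load [N],
    slip velocity [m/s], sinkage [m].
    Output: predicted tangential ground reaction force [N].
    """
    def __init__(self, n_in=3, n_hidden=16):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, x):
        # Forward pass: one tanh hidden layer, linear output.
        h = np.tanh(x @ self.W1 + self.b1)
        return (h @ self.W2 + self.b2).ravel()

    def train_step(self, x, y, lr=1e-2):
        # One gradient step on mean squared error (online learning).
        h = np.tanh(x @ self.W1 + self.b1)
        pred = (h @ self.W2 + self.b2).ravel()
        err = (pred - y)[:, None] / len(y)      # dMSE/dpred (up to a factor 2)
        self.W2 -= lr * h.T @ err
        self.b2 -= lr * err.sum(0)
        dh = (err @ self.W2.T) * (1.0 - h**2)   # backprop through tanh
        self.W1 -= lr * x.T @ dh
        self.b1 -= lr * dh.sum(0)
        return float(np.mean((pred - y) ** 2))
```

The online `train_step` is the part that matters for this paper's setting: each batch of new contact measurements gathered during traversal can refine the model without retraining from scratch.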
The learned ground interaction models are then integrated into a real-time physics simulator to improve the simulation of the overall robot behavior by reducing the simulation-reality gap.
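One way to picture this integration is as a contact-force callback queried by the simulator at every time step. The toy sketch below integrates a single 1-D wheel body with semi-implicit Euler and delegates the tangential ground reaction to a pluggable model; the interface, the lunar gravity value, and all parameters are illustrative assumptions, not the actual simulator used in the paper.

```python
import numpy as np

def simulate_traverse(contact_force_fn, v0=0.0, drive_force=5.0,
                      mass=10.0, dt=0.01, steps=100):
    """Sketch: plug a learned contact model into a fixed-step simulation.

    `contact_force_fn(normal_load, slip_velocity)` stands in for the learned
    model queried at each wheel contact. A single 1-D wheel body is
    integrated with semi-implicit Euler; all parameters are illustrative.
    """
    g = 1.62                         # lunar gravity [m/s^2], as an example
    v, v_hist = v0, []
    for _ in range(steps):
        normal_load = mass * g
        # The learned model returns the tangential ground reaction (resistance).
        f_ground = contact_force_fn(normal_load, v)
        a = (drive_force - f_ground) / mass
        v += a * dt                  # semi-implicit Euler: update velocity first
        v_hist.append(v)
    return np.array(v_hist)
```

Swapping `contact_force_fn` between an analytic stand-in (e.g. a Coulomb-like `lambda N, v: 0.3 * N * np.tanh(v)`) and a learned model is exactly the substitution that narrows the simulation-reality gap.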
Running this internal simulation online on the system to predict the nominal robot performance, and comparing the prediction with the actual sensor readings, makes it possible to continuously estimate soil properties and to detect off-nominal conditions, which can then be handled in time through path planning or behavioral adaptation.
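The comparison step above can be sketched as residual monitoring: the difference between predicted and measured sensor values is tracked over a sliding window, and a measurement is flagged when its residual deviates strongly from the recent nominal statistics. The window size, threshold, and warm-up length below are illustrative assumptions.

```python
import numpy as np
from collections import deque

class ResidualMonitor:
    """Sketch of online anomaly detection by comparing the internal
    simulation's predicted sensor values with actual measurements.

    Window size, sigma threshold, and warm-up count are illustrative.
    """
    def __init__(self, window=50, n_sigma=4.0, warmup=10):
        self.residuals = deque(maxlen=window)
        self.n_sigma = n_sigma
        self.warmup = warmup

    def update(self, predicted, measured):
        r = measured - predicted
        # Only flag once enough nominal residuals have been collected.
        if len(self.residuals) >= self.warmup:
            mu = np.mean(self.residuals)
            sigma = np.std(self.residuals) + 1e-9
            anomalous = abs(r - mu) > self.n_sigma * sigma
        else:
            anomalous = False
        if not anomalous:            # only nominal data updates the statistics
            self.residuals.append(r)
        return bool(anomalous)
```

Keeping anomalous residuals out of the running statistics prevents a fault from silently raising the detection threshold; a flagged condition would then trigger replanning or a change of gait.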
The paper describes the concept in more detail, presents the experimental setup for reproducible experiments, gives an overview of the selected robots, and reports first results.