StarTiger Dropter Project: Integrated, Closed-Loop Vision-Aided Navigation with Hazard Detection and Avoidance
In Proceedings of the 9th International ESA Conference on Guidance, Navigation and Control Systems (GNC 2014), Porto, 2-6 June 2014. ESA, June 2014.
The STARTIGER DROPTER project was an R&D activity within the ESA STARTIGER initiative, which aimed at developing and maturing key technologies for future planetary exploration and landing missions within a relatively short time. Within the project, a terrestrial landing test vehicle based on a multi-copter carrier platform was developed. This carrier platform demonstrated the safe lowering and deployment of a rover mockup, emulating the autonomous landing of an exploration rover on a planetary surface. Several flight tests were planned and executed to demonstrate the ability of the vehicle to steer itself towards a safe landing site using autonomous on-board navigation, to stabilise the carrier ship during the rover lowering phase, and to ensure a smooth and highly accurate touchdown and deployment. From images collected during the flights, the capability to track features and produce visual navigation observables was demonstrated, as well as the ability to detect and avoid hazards using lateral divert manoeuvres.
This paper describes the navigation and hazard detection algorithms applied to the descent and landing phases of the mission. It first describes the vision-aided inertial navigation system, including the measurement and dynamic models used in the navigation filter, as well as the sensors and the data processing approach. It then describes the individual models required to produce vision-based observables, such as the feature identification and tracking techniques and the robust homography estimation procedure adopted. The image processing algorithms used to identify hazardous regions in a predominantly flat landing area are also presented. These consist of a combination of texture-based and thresholding algorithms, followed by an efficient maximum search and a decision-making algorithm that determines the best area on which to land the rover.
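To illustrate the robust homography estimation step mentioned above, the following is a minimal sketch in Python/numpy of a RANSAC-wrapped direct linear transform (DLT) over tracked feature correspondences. The function names, iteration count, and inlier threshold are illustrative assumptions, not the flight software's actual implementation.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 H mapping src -> dst
    from N >= 4 point correspondences, via the SVD null space."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to Nx2 points (homogeneous normalisation)."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=2.0, seed=0):
    """Robust estimate: repeatedly fit H to random minimal samples of 4
    correspondences, keep the hypothesis with the most inliers (reprojection
    error below thresh, in pixels), then refit on all inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

The RANSAC wrapper is what makes the estimate robust to mistracked features: a single gross outlier among the correspondences is voted out rather than biasing the least-squares fit.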
The implementation of this system on embedded real-time hardware is presented, as well as the avionics architecture and the interfaces with sensors and other decision-making hardware and software on board the Dropter vehicle. Finally, flight test results using real sensor data and the described software are shown.
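The hazard-mapping and site-selection step described earlier can be sketched as follows: a texture measure (local standard deviation) combined with an intensity threshold flags hazardous pixels, after which a maximum search picks the safe cell farthest from any hazard. This is a simplified stand-in under assumed window sizes and thresholds; the paper's actual algorithms and an efficient (non-brute-force) search are not reproduced here.

```python
import numpy as np

def hazard_map(img, win=3, tex_thresh=0.15, bright_thresh=0.8):
    """Flag a pixel as hazardous when the local texture (std-dev over a
    win x win window) or its brightness exceeds a threshold: rocks and
    shadows stand out as high-texture or extreme-intensity regions in an
    otherwise flat landing area. (Illustrative thresholds.)"""
    h, w = img.shape
    hazard = np.zeros((h, w), dtype=bool)
    r = win // 2
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            hazard[i, j] = patch.std() > tex_thresh or img[i, j] > bright_thresh
    return hazard

def best_landing_cell(hazard):
    """Brute-force maximum search: return the safe cell that maximises the
    distance to the nearest hazardous cell, plus that distance."""
    hz = np.argwhere(hazard)
    best, best_d = None, -1.0
    for cell in np.argwhere(~hazard):
        d = np.min(np.linalg.norm(hz - cell, axis=1)) if len(hz) else np.inf
        if d > best_d:
            best, best_d = tuple(cell), d
    return best, best_d
```

On a synthetic flat scene with one bright rock cluster in a corner, the search selects the opposite corner, which is the behaviour one would expect from a lateral divert decision.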
The paper concludes with a discussion of the results of the flight campaign from the perspectives of the vision-aided navigation and the hazard avoidance algorithms used.