A Deep Learning Framework for Semantic Segmentation of Underwater Environments
Amos Smith, Jeremy Coffelt, Kai Lingemann
In OCEANS 2022 Hampton Roads (OCEANS-2022), Hampton Roads, VA, 17-20 October 2022. MTS / IEEE OES.

Abstract:

Perception tasks such as object classification and segmentation are crucial to underwater robotics missions like bathymetric surveys and infrastructure inspections. Marine robots in these applications typically use a combination of laser scanner, camera, and sonar sensors to generate images and point clouds of the environment. Traditional perception approaches often struggle to overcome water turbidity, light attenuation, marine snow, and other harsh conditions of the underwater world. Deep learning-based perception techniques have proven capable of overcoming such difficulties, but are often limited by the availability of relevant training data. In this paper, we propose a framework that consists of the procedural creation of randomized underwater pipeline environment scenes, the generation of corresponding point clouds with semantic labels, and the training of a 3D segmentation network on the synthetic data. The resulting segmentation network is evaluated on real underwater point cloud data and compared with a traditional baseline approach.
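The paper itself does not include source code. As a rough illustration of the labeled-data-generation step of the framework, the sketch below builds a small synthetic, semantically labeled point cloud of a pipe resting on a flat seafloor; the class IDs, geometry parameters, and Gaussian noise model are illustrative assumptions, not the authors' setup.

# Minimal sketch: synthetic, semantically labeled point cloud of a pipe on a seafloor.
# Class IDs, geometry, and noise model are assumptions for illustration only.
import numpy as np

SEAFLOOR, PIPE = 0, 1  # assumed semantic class IDs

def sample_pipe(n, radius=0.3, length=10.0, rng=None):
    """Sample points on the surface of a horizontal cylinder (the pipe)."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    x = rng.uniform(0.0, length, n)
    y = radius * np.cos(theta)
    z = radius * np.sin(theta) + radius  # rest the pipe on the z = 0 seafloor
    return np.column_stack([x, y, z])

def sample_seafloor(n, length=10.0, width=6.0, rng=None):
    """Sample points on a flat seafloor patch centered under the pipe."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.uniform(0.0, length, n)
    y = rng.uniform(-width / 2.0, width / 2.0, n)
    z = np.zeros(n)
    return np.column_stack([x, y, z])

def make_scene(n_pipe=2000, n_floor=8000, noise_std=0.01, seed=0):
    """Return (points, labels): an Nx3 float array and an N-vector of class IDs."""
    rng = np.random.default_rng(seed)
    pts = np.vstack([sample_pipe(n_pipe, rng=rng), sample_seafloor(n_floor, rng=rng)])
    labels = np.concatenate([np.full(n_pipe, PIPE), np.full(n_floor, SEAFLOOR)])
    pts += rng.normal(0.0, noise_std, pts.shape)  # crude stand-in for sensor noise
    return pts, labels

if __name__ == "__main__":
    points, labels = make_scene()
    print(points.shape, np.bincount(labels))  # (10000, 3) [8000 2000]

In the framework described by the paper, point clouds of this kind would be derived from the procedurally generated pipeline scenes and then serve as training data for the 3D segmentation network.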

Keywords:

underwater robotics, 3D point cloud segmentation, deep learning perception, data simulation

Links:

https://ieeexplore.ieee.org/abstract/document/9977212


© DFKI GmbH
last modified on 27.02.2023