Fusion of Inertial Sensor Suit and Monocular Camera for 3D Human Pelvis Pose Estimation
Mihaela Popescu, Kashmira Shinde, Proneet Kumar Sharma, Lisa Gutzeit, Frank Kirchner
In 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2024), August 26-30, 2024, Pasadena, California, IEEE Xplore, pages 160-167.
Abstract:
In real-world scenarios, robots come closer to humans in many applications, sharing the same workspace or even manipulating the same objects. To ensure safe and intuitive collaboration, it is crucial to have accurate knowledge of the human's 3D position in space, which should be estimated with high precision, high frequency, and low latency. However, individual sensors such as inertial measurement units (IMUs) or cameras cannot meet all requirements for reliable human pose estimation under conditions such as long operating times, large distances, and occlusions. In this study, we highlight the limitations of different visual pose estimation methods and present a fused approach for real-time estimation of the 3D position of the human pelvis, combining machine-learning-based visual pose estimates from a monocular camera with an IMU sensor suit. The multimodal fusion is based on the Invariant Extended Kalman Filter (InEKF) on Lie groups, which fuses drift-free visual poses with high-frequency inertial measurements in a loosely coupled manner. The evaluation is performed on a recorded dataset of multiple subjects performing various experimental scenarios. The results show that the fused approach increases the accuracy and robustness of the estimates, taking a step closer towards smooth human-robot collaboration.
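The paper itself does not include code; as a rough illustration of the loosely coupled scheme the abstract describes, the sketch below alternates high-rate predictions on SE(3) driven by body-frame rates with low-rate corrections from a drift-free visual pose. It is a minimal assumption-laden sketch, not the authors' implementation: the state is pose-only (the published InEKF on Lie groups may track an extended state), body-frame velocity is treated as a direct input rather than integrated from accelerometer data, and the covariance propagation is simplified to first order.

```python
# Minimal sketch of loosely coupled IMU/visual-pose fusion on SE(3).
# All names, noise values, and the pose-only state are illustrative
# assumptions, not the paper's actual InEKF formulation.
import numpy as np
from scipy.linalg import expm, logm

def se3_hat(xi):
    """Map a 6-vector twist (omega, v) to its 4x4 Lie-algebra matrix."""
    omega, v = xi[:3], xi[3:]
    W = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    Xi = np.zeros((4, 4))
    Xi[:3, :3] = W
    Xi[:3, 3] = v
    return Xi

def se3_vee(Xi):
    """Inverse of se3_hat: 4x4 Lie-algebra matrix back to a 6-vector."""
    omega = np.array([Xi[2, 1], Xi[0, 2], Xi[1, 0]])
    return np.concatenate([omega, Xi[:3, 3]])

def predict(X, P, gyro, vel, dt, Q):
    """High-rate propagation with body-frame angular and linear rates.
    Assumes velocity is available from the suit (a simplification of a
    full IMU model, which would integrate accelerations and gravity)."""
    xi = np.concatenate([gyro, vel]) * dt
    X = X @ expm(se3_hat(xi))      # group-level integration on SE(3)
    P = P + Q * dt                 # simplified first-order covariance growth
    return X, P

def update(X, P, Z, R):
    """Low-rate correction with a drift-free visual pose Z in SE(3)."""
    # Invariant-style error: innovation taken in the Lie algebra.
    y = se3_vee(np.real(logm(np.linalg.inv(X) @ Z)))
    H = np.eye(6)                  # full-pose measurement model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    X = X @ expm(se3_hat(K @ y))   # retract correction onto the group
    P = (np.eye(6) - K @ H) @ P
    return X, P

# Usage: predict at IMU rate, correct whenever a camera pose arrives.
X, P = np.eye(4), np.eye(6) * 0.1
Q, R = np.eye(6) * 1e-3, np.eye(6) * 1e-2
for step in range(100):
    gyro, vel = np.zeros(3), np.array([0.0, 0.0, 0.1])  # dummy IMU data
    X, P = predict(X, P, gyro, vel, dt=0.01, Q=Q)
    if step % 10 == 0:             # camera runs at a tenth of the IMU rate
        Z = np.eye(4)
        Z[:3, 3] = [0.0, 0.0, 0.001 * step]              # dummy visual pose
        X, P = update(X, P, Z, R)
```

The loose coupling shows up in the structure: the visual pipeline delivers a finished pose that enters only through `update`, so either modality can drop out without stalling the other, which matches the abstract's motivation of robustness under occlusions and long operating times.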
Keywords:
Visualization; Three-dimensional displays; Accuracy; Pose estimation; Robot vision systems; Sensor fusion; Cameras; Real-time systems; Pelvis; Robots