Talk Details

In-hand object localisation using multi-sensor fusion

With increased interest in employing robotic manipulators not only in industrial settings but also in everyday life, the dexterity and versatility required of robotic tasks have grown. Robots almost always have to manipulate objects of different shapes, sizes, and weights, yet it is not uncommon for them to fail to acquire and maintain a robust grasp. Detecting a change in the grasped object's position and orientation therefore directly affects the success rate of robotic tasks. To address these challenges, this thesis proposes an approach for multi-sensor in-hand object localization.
The approach is based on fusing a vision system, a wrist force/torque sensor, and a tactile sensor to estimate the pose of an object grasped in a two-finger robotic gripper. In this manner, redundant and complementary data are integrated into the robotic system, making it more robust and reliable.
The vision system is widely used for object detection and localization; however, its performance is strongly affected by the robot's surroundings.
The force/torque sensor is used to estimate the object's position in the gripper using wrench theory.
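As a rough illustration of such a wrench-based estimate, the sketch below recovers a single contact point from a measured force/torque pair under the assumption of a frictional point contact with negligible torsional moment; the function name, sensor frame, and the finger-plane constraint are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def contact_point_from_wrench(force, torque, plane_normal, plane_offset):
    """Estimate a contact point r from a measured wrench (f, tau),
    assuming a single point contact so that tau = r x f.

    The constraint tau = r x f only fixes r up to a shift along f,
    so the resulting line of action is intersected with a known
    surface plane n . r = d (e.g. the gripper finger face)."""
    f = np.asarray(force, dtype=float)
    tau = np.asarray(torque, dtype=float)
    f_sq = np.dot(f, f)
    if f_sq < 1e-9:
        raise ValueError("measured force too small to localise the contact")
    # Particular solution of tau = r x f (valid when tau is orthogonal to f)
    r0 = np.cross(f, tau) / f_sq
    # General solution r = r0 + s * f; choose s so that r lies on the plane
    n = np.asarray(plane_normal, dtype=float)
    denom = np.dot(n, f)
    if abs(denom) < 1e-9:
        raise ValueError("force is parallel to the constraint plane")
    s = (plane_offset - np.dot(n, r0)) / denom
    return r0 + s * f
```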
The tactile sensor complements the data from the force/torque sensor by estimating the object's orientation, without re-grasping the object or performing an object surface mapping task. The algorithm estimates the object's edge at the moment of grasp and tracks how it changes as the object slips, in order to estimate the slip angle.
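A minimal sketch of this edge-tracking idea, assuming the tactile sensor delivers a 2-D pressure array: the principal axis of the active taxels is taken as the contact edge, and the slip angle is the change of that axis between the grasp-time frame and the current frame. The threshold and the 180-degree wrapping are assumptions for the sketch.

```python
import numpy as np

def edge_angle(tactile_image, threshold=0.1):
    """Orientation (rad) of the principal axis of the active taxels
    in a 2-D tactile pressure image."""
    rows, cols = np.nonzero(tactile_image > threshold)
    pts = np.column_stack((cols, rows)).astype(float)
    pts -= pts.mean(axis=0)
    # The eigenvector of the covariance with the largest eigenvalue
    # points along the contact edge.
    cov = pts.T @ pts
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    return np.arctan2(major[1], major[0])

def slip_angle(image_at_grasp, image_now):
    """Change of the edge orientation since the moment of grasp (rad)."""
    diff = edge_angle(image_now) - edge_angle(image_at_grasp)
    # A line is symmetric under 180-degree rotation, so wrap to [-pi/2, pi/2)
    return (diff + np.pi / 2) % np.pi - np.pi / 2
```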
Two scenarios are defined and discussed to evaluate the proposed approach for in-hand object localization.
Scenario A:
In the first scenario, the algorithm does not perform object recognition; it only estimates how much the grasped object has slipped or tilted. This scenario is suited to pick-and-place processes where it is important that the object be placed in the same position and orientation in which it was grasped. Therefore, in this scenario only the tilt angle and the occurrence of slip are computed, using the tactile sensor information. The acquired information can then be used as an offset in the motion planner to correct the object's position and orientation.
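How such an offset could be fed back to the planner might look as follows, assuming the slip is modelled as a pure rotation about the gripper's grasp axis; the pose representation (4x4 homogeneous matrices), the default axis, and the function names are illustrative assumptions.

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Homogeneous transform for a rotation by `angle` (rad) about the unit
    vector `axis`, built with Rodrigues' formula."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    return T

def corrected_place_pose(place_pose, slip_angle, grasp_axis=(0.0, 0.0, 1.0)):
    """Compensate the planned place pose for the measured in-hand slip by
    rotating it back by the slip angle about the grasp axis (expressed in
    the local frame of the pose)."""
    return place_pose @ rotation_about_axis(grasp_axis, -slip_angle)
```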
Scenario B:
In the second scenario, the algorithm estimates the position and orientation of the object using its CAD model. The initial estimates of the grasped object's pose are calculated using the tactile sensor and the force/torque sensor, as described above. For each estimate, a point cloud is generated from the CAD model of the object. De Morgan's laws are then applied between the point cloud of the object and that of the robot's gripper to estimate the most plausible pose of the grasped object.
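One way such a set-based consistency check between the two point clouds could be sketched, under the assumption that it amounts to penalising candidate poses whose CAD cloud penetrates the gripper's occupied volume: both clouds are voxelised and the pose with the smallest intersection is kept. The voxel representation and all names are illustrative, not the thesis implementation.

```python
import numpy as np

def voxel_set(points, voxel_size=0.005):
    """Discretise an (N, 3) point cloud into a set of occupied voxel indices."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def best_pose(cad_points, candidate_poses, gripper_points, voxel_size=0.005):
    """Select the candidate 4x4 pose whose transformed CAD point cloud
    overlaps the gripper's occupied volume the least (set intersection as a
    proxy for the boolean consistency check between the two clouds)."""
    gripper_voxels = voxel_set(gripper_points, voxel_size)
    best, best_overlap = None, np.inf
    for T in candidate_poses:
        pts_h = np.c_[cad_points, np.ones(len(cad_points))]   # homogeneous coords
        transformed = (T @ pts_h.T).T[:, :3]
        overlap = len(voxel_set(transformed, voxel_size) & gripper_voxels)
        if overlap < best_overlap:
            best, best_overlap = T, overlap
    return best
```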
From the conducted experiments it can be concluded that vision systems are indeed greatly influenced by the environment, and that the force/torque sensor together with the tactile sensor can provide a reliable estimate of the position and orientation of the object. It should be noted, however, that the resolution and the data frequency of the tactile sensor had a strong impact on the results of the estimation process.

Venue

Room A 1.03, Robert-Hooke-Str. 1 in Bremen

As a rule, the talks are part of lecture series at the University of Bremen and are not open to the public. If you are interested in attending, please contact the secretariat at sek-ric(at)dfki.de.
