Behaviors for Mobile Manipulation

The main goal of the project BesMan is the development of generic dexterous manipulation strategies that are independent of a specific robot morphology. To this end, three key components, namely trajectory planning, whole-body sensor-based reactive control, and dynamic control, are integrated into a modular, robot-independent, and easily reconfigurable software framework that allows components to be reused to describe a variety of complex manipulation behaviors. Furthermore, novel, situation-specific behavior is learned by means of a highly automated machine learning platform, which incorporates an interface to a human operator who demonstrates to the robot how to deal with unforeseen situations. The development of dexterous manipulation procedures is mainly the responsibility of the DFKI RIC, while the University of Bremen develops the machine learning platform.

Duration: 01.05.2012 till 31.07.2016
Grant recipient: DFKI GmbH & University of Bremen
Sponsor: Federal Ministry of Economics and Technology
German Aerospace Center e.V.
Grant number: This project is funded by the German Space Agency (DLR Agentur) with federal funds of the Federal Ministry of Economics and Technology in accordance with the parliamentary resolution of the German Parliament, grant no. 50 RA 1216 (DFKI) and 50 RA 1217 (University of Bremen).
Partner: Robotics Group, University of Bremen
Team: Team IV - Robot Control
Team VII - Sustained Interaction & Learning
Application Field: Space Robotics
Related Projects: LIMES
Learning Intelligent Motions for Kinematically Complex Robots for Exploration in Space (05.2012- 04.2016)
A Semi-Autonomous Free-Climbing Robot for the Exploration of Crater Walls and Bottoms (07.2007- 11.2010)
Intelligent Human-Robot Collaboration (03.2015- 06.2016)
Related Software: MARS
Machina Arte Robotum Simulans
Behavior Optimization and Learning for Robots

Project details

As neuroscience studies show, human manipulation skills are not the result of highly powerful actuation systems or fast sensorimotor loops. Instead, the key lies in task organization, involving sequences of organized action plans at various levels of complexity. We embraced this concept in the project BesMan and defined an 'Action Plan' as a sequence of subtasks that need to be followed in order to accomplish a certain complex task (for instance, building a specific infrastructure on the Moon). The subtasks of the Action Plan can thus be mapped to behavior templates available on the robotic system. The mapping between subtasks and behavior templates is pre-defined; the templates themselves, however, are adaptive to some degree and adjust to the concrete situation when executed.
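
The Action Plan concept above can be sketched in a few lines of Python. This is an illustrative toy model, not the project's actual implementation; all class and method names are assumptions. It shows the essential structure: an ordered list of subtasks, a pre-defined mapping from subtasks to behavior templates, and templates that adapt to the situation passed in at execution time.

```python
# Toy sketch of the "Action Plan" concept: a complex task is decomposed
# into subtasks, each mapped to an adaptive behavior template.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class BehaviorTemplate:
    """A reusable, parametrizable robot behavior."""
    name: str
    execute: Callable[[dict], str]  # adapts to the situation passed in


@dataclass
class ActionPlan:
    """An ordered sequence of subtasks for one complex task."""
    subtasks: List[str]
    mapping: Dict[str, "BehaviorTemplate"]

    def run(self, situation: dict) -> List[str]:
        # Each subtask resolves to a pre-defined template, which
        # adapts its execution to the current situation.
        return [self.mapping[s].execute(situation) for s in self.subtasks]


grasp = BehaviorTemplate("grasp", lambda ctx: f"grasp at {ctx['target']}")
place = BehaviorTemplate("place", lambda ctx: f"place at {ctx['goal']}")

plan = ActionPlan(subtasks=["grasp", "place"],
                  mapping={"grasp": grasp, "place": place})
log = plan.run({"target": (0.4, 0.1), "goal": (0.6, -0.2)})
```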

To achieve this generality, a software framework was developed to describe and control robot manipulation behaviors. To remain independent of particular robot hardware and specific application areas, an embedded domain-specific language (eDSL) is used to describe the particular robot and the controller network that drives it. We thus made use of a) a component-based software framework, b) model-based algorithms for motion and sensor-processing representations, c) an abstract model of the control system, and d) plan management software to describe a sequence of software component networks that generates the desired robot behavior.
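
To make the eDSL idea concrete, the following is a minimal sketch (not the actual BesMan eDSL; all names are hypothetical) of how a behavior can be described declaratively as a network of software components, with hardware-specific details confined to a single leaf component:

```python
# Hypothetical sketch of an embedded DSL for describing a controller
# network independently of concrete robot hardware.
class ControllerNetwork:
    def __init__(self):
        self.nodes, self.edges = [], []

    def add(self, name):
        # Register one software component by name.
        self.nodes.append(name)
        return name

    def connect(self, src, dst):
        # Data flows from src to dst.
        self.edges.append((src, dst))


def describe_reaching_behavior(net):
    """Declarative description of one behavior as a component network."""
    pose = net.add("cartesian_pose_controller")
    wbc = net.add("whole_body_controller")
    robot = net.add("robot_driver")  # the only hardware-specific component
    net.connect(pose, wbc)
    net.connect(wbc, robot)
    return net


net = describe_reaching_behavior(ControllerNetwork())
```

Swapping robots then only means exchanging the `robot_driver` leaf; the behavior description itself is reused unchanged.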

In the area of robot dynamic control, we developed methods to obtain robot dynamic models via experimental identification, i.e., by generating appropriate identification experiments and estimating parameters from the measurements. A software library was developed that first builds a parametrized dynamic model from the known robot geometry. Further components of the library provide methods for the experimental identification of the unknown dynamic parameters: a module that generates identification trajectories (taking into account, for instance, joint limits and maximum velocities and accelerations) and a module that estimates the parameters from the experimental measurements. Once the model is available, the dynamic controller can be implemented.
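
The estimation step can be illustrated with a standard fact about rigid-body dynamics: the joint torques are linear in the unknown inertial and friction parameters, tau = Y(q, qd, qdd) @ theta, so theta can be estimated from measurements by linear least squares. The sketch below uses synthetic data and is not the project's actual library:

```python
# Minimal sketch of dynamic-parameter estimation by least squares.
# The regressor matrix Y would come from the identification trajectory;
# here it is random synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)

theta_true = np.array([1.2, 0.3, 0.05])      # e.g. inertia and friction terms
Y = rng.uniform(-1.0, 1.0, size=(200, 3))    # regressor from a rich trajectory
tau = Y @ theta_true + 0.01 * rng.standard_normal(200)  # noisy torque readings

# Least-squares estimate of the unknown dynamic parameters.
theta_est, *_ = np.linalg.lstsq(Y, tau, rcond=None)
```

This is also why the identification trajectories matter: they must excite the system enough that `Y` is well conditioned.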

In the area of manipulation planning, we created a sampling-based motion planning library that builds on OMPL and, similar to MoveIt!, provides a complete planning pipeline. Additionally, we included functionality for dual-arm operations with Cartesian constraints, self-filtering, and planning in dynamic environments via replanning.
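
The principle behind sampling-based planners such as those in OMPL can be shown with a toy RRT in a 2-D unit square with one circular obstacle. This is only a didactic sketch (point-wise collision checks, no edge checking), not the project's pipeline:

```python
# Minimal RRT: grow a tree from START by steering toward random samples,
# keeping only collision-free nodes, until a node lands near GOAL.
import math
import random

random.seed(1)

OBSTACLE = ((0.5, 0.5), 0.2)   # (center, radius) of a circular obstacle
START, GOAL = (0.1, 0.1), (0.9, 0.9)
STEP, GOAL_TOL = 0.1, 0.1


def collision_free(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) > r


def steer(a, b):
    # Move from a toward b by at most STEP.
    d = math.hypot(b[0] - a[0], b[1] - a[1])
    if d < STEP:
        return b
    return (a[0] + STEP * (b[0] - a[0]) / d,
            a[1] + STEP * (b[1] - a[1]) / d)


def rrt(max_iters=2000):
    nodes, parent = [START], {START: None}
    for i in range(max_iters):
        # Occasionally bias sampling toward the goal to speed convergence.
        sample = GOAL if i % 20 == 0 else (random.random(), random.random())
        nearest = min(nodes, key=lambda n: math.hypot(n[0] - sample[0],
                                                      n[1] - sample[1]))
        new = steer(nearest, sample)
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.hypot(new[0] - GOAL[0], new[1] - GOAL[1]) < GOAL_TOL:
            # Walk back through the parents to recover the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None


path = rrt()
```

Replanning in dynamic environments, as mentioned above, amounts to rerunning such a planner whenever the obstacle set changes.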

In the area of whole-body control, we used a constraint-based whole-body control scheme for executing reactive robot motions; it incorporates the physical constraints of a system, integrates multiple, disparate sensor-based controllers, and optimally utilizes the redundant degrees of freedom of complex systems such as AILA. In particular, we integrated a constraint-based, multi-objective robot controller similar to iTaSC into our ROCK framework and provided the infrastructure to execute action plans based on motion constraints as well as to change the constraints online. The set of active motion constraints and their parametrizations composes a subtask in our action plan that is executed by the robot.
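
A core mechanism behind such redundancy resolution is nullspace projection: a lower-priority objective is projected into the nullspace of the higher-priority task so that it cannot disturb it. The sketch below uses synthetic Jacobians; the iTaSC-style constraint composition used in the project is more general:

```python
# Prioritized task composition via nullspace projection.
import numpy as np

J1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])   # primary task Jacobian (2 constraints, 3 DoF)
dx1 = np.array([0.2, -0.1])        # desired primary task velocity
dq2 = np.array([0.0, 0.0, 0.5])    # secondary objective in joint space

J1_pinv = np.linalg.pinv(J1)
N1 = np.eye(3) - J1_pinv @ J1      # nullspace projector of the primary task
dq = J1_pinv @ dx1 + N1 @ dq2      # combined joint velocity command
```

By construction, `J1 @ dq` equals `dx1` exactly: the secondary motion uses only the redundant degree of freedom left over by the primary constraint.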

In the area of learning, we developed a platform that allows learning robotic manipulation behavior from human demonstrations. The whole procedure is highly automated and fast enough to serve as a solution for quickly learning behavior to solve unforeseen challenges and tasks, which can always occur during the deployment of a robotic system. The learning platform is composed of several components, which deal with the acquisition and preprocessing of human demonstrations, segmenting the demonstrated behavior into basic building blocks, learning movement primitives by imitation learning, refining movement primitives by means of reinforcement learning, and generalizing movement primitives to related tasks.
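
The imitation-learning step can be illustrated by encoding a demonstrated trajectory as a weighted sum of radial basis functions, which is the core idea behind movement-primitive representations such as DMPs (here reduced to plain function approximation for brevity; the demonstration data is synthetic):

```python
# Toy imitation-learning step: fit RBF weights to reproduce a
# demonstrated 1-D trajectory.
import numpy as np

t = np.linspace(0.0, 1.0, 100)
demo = np.sin(np.pi * t)                     # stand-in "human demonstration"

centers = np.linspace(0.0, 1.0, 10)          # RBF centers along the phase
width = 50.0
Phi = np.exp(-width * (t[:, None] - centers[None, :]) ** 2)

# Imitation: least-squares fit of the weights to the demonstration.
weights, *_ = np.linalg.lstsq(Phi, demo, rcond=None)
reproduced = Phi @ weights
```

In the full platform, the fitted weights would then be the starting point for refinement by reinforcement learning and for generalization to related tasks.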

The results of this project were developed as generic solutions that are independent of the robot's morphology and of its application scenario. The transfer of the developed solutions between different robotic systems and between application scenarios (e.g. space and logistics) was demonstrated exemplarily, as can be seen in the videos below.


BESMAN - Third Demonstration - MANTIS robot

Third official demonstration: Execution of an autonomous manipulation task in a space scenario and learning of manipulation behaviour from human demonstration.

BESMAN - Second Demonstration - KUKA iiwa robot

Second official demonstration: Learning reaching behaviour from human demonstration in a logistics scenario.

BesMan: First demonstration – robot system AILA

First official demonstration: Execution of an autonomous manipulation task in the ISS Mockup






last updated 19.10.2016