Advanced AI - Mechanics & Control

The overall goal of the Mechanics and Control team is to

develop dynamic robots that can move and interact in the real world with a grace, agility, and robustness similar to those of humans and animals.

In the field of mechanics, this includes investigating novel principles for robot design, such as parallel and compliant or flexible mechanisms, along with rigorous analysis and mathematical modeling in terms of geometry, kinematics, and dynamics. To achieve this, we aim to develop a thorough understanding of the way dynamic robots interact with their environment and to produce realistic and computationally manageable models of this interaction, for example by considering extreme impact forces, friction, and slip.

In the field of control, we are equally interested in estimating the robot’s state from incomplete and noisy sensor data and in planning and stabilizing dynamic movements in real-world contexts, such as walking, running, jumping, and brachiating. To this end, we exploit both model-based approaches (optimal control, hybrid optimization, whole-body control) and learning-based approaches (reinforcement learning, data-driven methods), as well as combinations of the two. Underactuated robots such as quadrupeds and humanoids are particularly interesting in this regard, as their hybrid dynamics make them challenging to control.

Team lead: Dr.-Ing. Dennis Mronga
Deputy: Dr. Melya Boukheddimi

Recent Research Results

Recently, the Mechanics & Control team of DFKI RIC presented algorithmic advancements and insights in model-based control of legged robotic systems, co-design, and reinforcement learning for underactuated systems.

QP Benchmarking A large variety of model-based control architectures for legged locomotion exist. Many of them, like Model-Predictive Control or Whole-Body Control, rely on optimization, mostly quadratic programming (QP). Designing such a controller requires formulating the QP and selecting a suitable solver, which takes considerable time and expertise. To ease this burden, we recently presented an extensive benchmark at the 2025 IEEE International Conference on Robotics and Automation (ICRA): Benchmarking Different QP Formulations and Solvers for Dynamic Quadrupedal Walking (F. Stark et al.). In this work, we compare dense and sparse QP formulations and multiple solving methods on different hardware architectures, focusing on their computational efficiency.
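To give a flavor of the kind of problem such controllers solve at every time step, here is a minimal sketch (not the benchmark's formulation) of an equality-constrained QP, min ½xᵀHx + gᵀx subject to Ax = b, solved directly via its KKT system. Real whole-body controllers add inequality constraints and dedicated solvers; all matrices below are toy values.

```python
import numpy as np

def solve_eq_qp(H, g, A, b):
    """Solve min 1/2 x^T H x + g^T x  s.t.  A x = b via the KKT system.

    Stationarity (H x + A^T lam = -g) and feasibility (A x = b) are
    stacked into one linear system and solved in closed form.
    """
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]  # primal solution x, multipliers lambda

# Toy task: minimize ||x||^2 subject to x1 + x2 = 1.
H = 2.0 * np.eye(2)
g = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = solve_eq_qp(H, g, A, b)
print(x)  # -> [0.5 0.5]
```

Dense solvers factor a matrix like K directly; sparse formulations instead exploit the block structure that arises when the QP spans a prediction horizon, which is one of the trade-offs the benchmark quantifies.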

Adaptive Model-Based Control One major drawback of model-based controllers is their inherent dependency on an accurate system model. This plant model is often fixed, which limits the controller’s applicability to different tasks. At the 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), we presented an approach for Adaptive Model-Based Control of Quadrupeds via Online System Identification using Kalman Filter (J. Haack et al.). With this method, we were able to identify the mass and the center of mass of a quadrupedal robot online, which improves the tracking performance of the model-based controller when the quadruped carries varying payloads.
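The core idea of treating an unknown model parameter as a Kalman filter state can be sketched in a few lines. The example below is a hedged toy version, not the authors' implementation: it estimates a single unknown mass m from noisy force measurements f = m·a with known accelerations a, modeling m as a constant state with no process noise.

```python
import numpy as np

def kalman_mass_estimate(accels, forces, R=0.1, m0=0.0, P0=10.0):
    """Scalar Kalman filter estimating a constant mass from f = m * a.

    State: the unknown mass m (constant, no process noise).
    Measurement model: f = a * m + noise with variance R.
    """
    m_hat, P = m0, P0
    for a, f in zip(accels, forces):
        S = a * P * a + R            # innovation covariance
        K = P * a / S                # Kalman gain
        m_hat += K * (f - a * m_hat) # correct with the measurement residual
        P = (1.0 - K * a) * P        # shrink the estimate covariance
    return m_hat

# Synthetic data: a 12 kg payload sensed through noisy force readings.
rng = np.random.default_rng(0)
true_mass = 12.0
a = rng.uniform(0.5, 2.0, size=200)
f = true_mass * a + rng.normal(0.0, 0.3, size=200)
est = kalman_mass_estimate(a, f)
print(est)  # converges close to 12.0
```

On a real quadruped the state vector additionally carries the center-of-mass offsets and the measurement model comes from the robot's dynamics, but the predict-correct structure is the same.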

Co-Design for Robots with Parallel Kinematics In modern robotic systems, design and control cannot be regarded independently. Co-design methods provide bi-level formulations to simultaneously optimize robot design and behavior for specific tasks. At the 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), we presented a first co-design formulation that explicitly incorporates parallel coupling constraints into the dynamic model of the robot: Parallel Transmission Aware Co-Design: Enhancing Manipulator Performance Through Actuation-Space Optimization (R. Kumar et al.). By taking advantage of the actuation-space representation, the approach achieves a significant increase in dynamic payload capacity for manipulators with closed loops, compared to co-design implementations for purely serial robots.
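The bi-level structure of co-design can be illustrated with a deliberately simple toy problem (this is not the paper's actuation-space method, and all parameter values are assumptions): an outer loop searches over a design parameter, here a gear ratio, while the inner "behavior" problem evaluates the best achievable payload for that design under a torque limit and a speed requirement.

```python
import numpy as np

# Assumed toy parameters (illustration only, not from the paper)
TAU_MOTOR = 2.0     # motor torque limit [Nm]
OMEGA_MOTOR = 50.0  # motor speed limit [rad/s]
L = 0.5             # link length [m]
G = 9.81            # gravity [m/s^2]
OMEGA_REQ = 3.0     # required joint speed [rad/s]

def inner_payload(r):
    """Inner problem: best payload mass achievable with gear ratio r.

    A design that cannot meet the joint speed requirement is infeasible.
    Otherwise the payload is limited by the geared-up torque holding a
    horizontal link: m = tau_joint / (g * L).
    """
    if OMEGA_MOTOR / r < OMEGA_REQ:
        return -np.inf
    tau_joint = TAU_MOTOR * r
    return tau_joint / (G * L)

# Outer problem: pick the design that maximizes the inner objective.
ratios = np.linspace(1.0, 30.0, 300)
best_r = max(ratios, key=inner_payload)
```

The point of the sketch is only the nesting: every candidate design is scored by solving a behavior problem for it. In the actual method both levels are continuous optimization problems, and the parallel coupling constraints live inside the inner dynamic model.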

Parallel Transmission Aware Co-Design


Enhancing Manipulator Performance Through Actuation-Space Optimization. Authors: Rohit Kumar, Melya Boukheddimi, Dennis Mronga, Shivesh Kumar, and Frank Kirchner 

AI Olympics Competition In the field of robotics, many different approaches, ranging from classical planning and optimal control to reinforcement learning (RL), are developed or borrowed from other fields to achieve reliable control in diverse tasks. However, standardized, real-world benchmarks to assess the strengths and weaknesses of the individual approaches are still lacking. The RealAIGym project developed a set of reproducible robotic hardware platforms to establish a baseline for the application of dynamic control algorithms on real hardware. One of these platforms, the underactuated double pendulum, was used in the AI Olympics with RealAIGym competition, held at the IROS 2024 conference to evaluate different controllers according to their ability to solve a dynamic control problem on the double pendulum system with its chaotic dynamics. The results have been published in the IEEE Robotics and Automation Magazine: Reinforcement Learning for Robust Athletic Intelligence: Lessons from the 2nd 'AI Olympics with RealAIGym' Competition.

RealAIGym: 2nd AI Olympics with RealAIGym Competition


Reinforcement Learning for Robust Athletic Intelligence: Lessons Learned From the Second AI Olympics With RealAIGym Competition. Authors: Felix Wiebe, Niccolò Turcato, Alberto Dalla Libera, Jean Seong Bjorn Choe, Bumkyu Choi, Tim Lukas Faust, Habib Maraqten, Erfan Aghadavoodi, Marco Cali, Alberto Sinigaglia, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres, Jong-kook Kim, Gian Antonio Susto, Shubham Vyas, Dennis Mronga, Boris Belousov, Jan Peters, Frank Kirchner, and Shivesh Kumar

© DFKI GmbH
last updated 16.01.2026