TransFIT
Flexible interaction for infrastructure establishment by means of teleoperation and direct collaboration; transfer into Industry 4.0
M.Sc. Manuel Meder (University of Bremen)
Dr. Georg von Wichert (Siemens AG)
The project TransFIT is part of the space roadmap of the DFKI RIC. The project focuses on the assembly and installation of infrastructure for space applications by humans and robots, either autonomously or in cooperation. The cooperation between humans and robots follows the concept of "sliding autonomy": human control over the robot can be very strong, as during teleoperation; weaker, as in teleoperation with autonomous control of individual components; or limited to supervision, as in "operator in the loop" approaches. The goal of the human-robot interaction is not only task sharing but also the further training of robots to enable more complex autonomous behavior.
Project details
Requirements for the space application
Future space missions will not only send scientific equipment close to astronomical objects, or even onto planets, moons or asteroids using landers and robots, but will also include humans as an important part of the mission on-site. This requires setting up local infrastructure such as stationary camp sites, laboratories or even more complex, larger module-based structures. To avoid unnecessarily endangering astronauts on away missions, the use of robots is an obvious choice. Since robots have only a limited capability to solve complex tasks and only limited flexibility in their behavior, close cooperation and collaboration with the astronaut is necessary, ranging from giving the (semi-)autonomous robots a general objective down to direct and intuitive robot control. In this way, the robot can, on the one hand, interact directly with the human, for example holding or fixing a module while the human assembles a second one; on the other hand, another astronaut can teleoperate the robot from the base camp or a space station in orbit.
An important goal of TransFIT is the development of robot skills that enable the robot to complete complex assembly tasks, such as grasping, holding and joining prefabricated components, autonomously or in collaboration with the human. To this end, the concept of "sliding autonomy" will be implemented as a dynamic change between fully autonomous behavior, semi-autonomous and cooperative behavior (either with the operator in the loop or as a partner), and teleoperation. This also requires simple control software that allows the behavior to be adapted quickly on-site and during a mission.
Additionally, the robot should be able to learn new skills from interacting with the human to further improve its versatility and adaptability to specific requirements. In the final scenario, two robots and at least one human will assemble infrastructure: the robot joining parts autonomously, joining parts in cooperation with a human, and being teleoperated by a human. Furthermore, we will show that the robots' behavior can easily be adapted via an interface for semi-autonomously creating installation instructions and by learning skills from observing human behavior.
Transfer in the context of Industry 4.0
The demand for flexible automation is increasing rapidly. The main driver is the ongoing industrial revolution, especially in high-wage countries like Germany. This revolution is characterized by constantly increasing numbers of product variants, constantly decreasing product life cycles and, as a consequence, constantly decreasing lot sizes. End-to-end automation using classic approaches is not always feasible in this context, leading to extremely low degrees of automation in large phases of production. One of the phases still executed mostly manually is discrete manufacturing: automation using classical approaches is not cost-effective for large numbers of product variants due to the associated high engineering costs. New automation approaches need to be developed and tested. One of the main goals of the TransFIT project is the transfer of technology developed for extraterrestrial applications into industrial assembly applications. In the project, a highly flexible, universal and cooperative assembly cell for the manufacturing of complex components will be built to demonstrate this technology transfer.
To achieve the required universality and flexibility without incurring high engineering costs, future assembly cells have to be able to execute abstract job specifications autonomously and in cooperation with the operator, without the need for detailed programming. These are also requirements for the extraterrestrial assembly and installation of infrastructure. The solutions and interfaces developed in the extraterrestrial scenario should therefore also be applicable in the industrial scenario, where devices with a maximum total weight of 10 kg are to be manufactured in collaboration with a human coworker. The job specification contains assembly steps that, due to the required dexterity, can only be performed by the human, as well as steps that the system can perform with greater precision and repeatability.
The assembly cell has a high degree of autonomy and does not rely on special-purpose tools or sensors. A system-independent semantic description of the process (Bill of Process) and the product (Bill of Materials) is given, together with a semantic description of the assembly cell including its product-independent skills. The system relies on sensor data to determine its current situation. Using reasoning and planning based on the current situation and this explicit knowledge, the system computes a sequence of actions for assembling a given product. These actions are a high-level representation of the capabilities, or skills, of the system. The system and the human coworker carry out the assembly task together: if one of the steps cannot be executed by the system, the human is shown what to do in order to carry out that step.
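The following sketch illustrates how such a matching between process steps and system skills might look. All names (AssemblyStep, Skill, the capability strings) are illustrative assumptions, not the project's actual data model; the sketch shows only the core idea of assigning each Bill-of-Process step either to the system or to the human coworker.

```python
# Minimal sketch: assign Bill-of-Process steps to system skills or the human.
# All names and capability strings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    name: str
    required_capability: str   # e.g. "pick_place", "press_fit", "cable_routing"

@dataclass
class Skill:
    capability: str
    execute: callable          # product-independent skill implementation

def plan_assembly(bill_of_process, system_skills):
    """Assign each step to the system if a matching skill exists,
    otherwise mark it for the human coworker."""
    skill_map = {s.capability: s for s in system_skills}
    plan = []
    for step in bill_of_process:
        skill = skill_map.get(step.required_capability)
        if skill is not None:
            plan.append(("robot", step, skill))
        else:
            plan.append(("human", step, None))  # indicate the step to the human
    return plan
```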
In summary, the following objectives will be targeted in the project:
- Development of hardware and software solutions for a safe human-machine cooperation using a demand-driven sliding autonomy.
- Development of knowledge-based technologies for robot control and environment perception for the application of setting up an infrastructure.
- Development of a semi-autonomous assistance system for intuitive human-machine interaction that supports the astronaut depending on his/her current situation and is based on automated feedback approaches using psychophysiological data.
- Increase of the autonomy of robots based on online-learning for behavior optimization, automatic adaptation to hardware changes and learning from interacting with the human ("operator in the loop" approach).
- Transfer of the developed technologies into the context of industry 4.0 with the aim of an interactive and flexible assembly cell.
Videos
RH5 Manus – Humanoid assistance robot for future space missions
TransFIT: Flexible interaction for infrastructure establishment
Flexible interaction for infrastructure establishment by means of teleoperation and direct collaboration, and transfer into Industry 4.0.
Intrinsic interactive reinforcement learning: using error-related potentials
Thanks to negative human feedback, the robot learns from its own misbehavior
Detailed information on project goals
Intuitive and flexible interaction possibilities
The interaction with the human will be demonstrated under various interaction conditions. A scenario will be developed in which the human can interact and cooperate directly with the robot, or teleoperate it from an extraterrestrial or orbital station. In this scenario, two interaction strategies will be demonstrated: a) direct cooperation with the robot, and b) teleoperation, which makes it possible to apply the human's abilities directly through the robotic system and also allows the human to use the extended capabilities of the robotic system.
A humanoid robot with complex kinematics cannot be efficiently controlled by classical control strategies due to its large number of degrees of freedom. Hence, an intuitive mapping between the human and the robot is necessary for teleoperation. For this, it makes sense to use the full-body exoskeleton developed within the project Recupera Reha (Grant No: 01IM14006A). Within the project TransTerrA (Grant No: 50 RA 1301), an exoskeleton was successfully used as a subsystem for the intuitive control of robotic systems such as CoyoteIII and SherpaTT, and especially of the manipulation arm of the robot SherpaTT. During teleoperation, virtual immersion is necessary to allow the human to immerse in the scenario. The virtual immersion developed and applied in previous projects such as VI-Bot (Grant No: 01IW07003), IMMI (Grant No: 50 RA 1012 and 50 RA 1011), and TransTerrA (Grant No: 50 RA 1301) will be extended in TransFIT. Bilateral state estimation between human and robot will also be extended: not only will aspects such as human workload be taken into consideration, but feedback about the robot's functionality and the correctness of the robot's behavior will also be given to the human. Furthermore, it will be investigated whether a strong immersion of the human in the scenario can improve and simplify the control of the robot. To this end, online analysis of physiological and behavioral data will be used; this data can be collected using different types of sensors. As wearable sensors, the following measurement systems can be used: an eye-tracking system for observing gaze behavior and other mental states (e.g., fatigue, pulse, etc.), EEG electrodes for the analysis of error-related potentials, or IMU sensors for measuring motion patterns and physical condition. As external sensors of the robot, laser scanners or RGB-D cameras can be used for the analysis of motion and behavior. Ideally, a multimodal online analysis combines the numerous input channels so as to be as independent of the environment as possible and applicable both in the extraterrestrial environment and in the industrial context.
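As a rough illustration of how such a multimodal online analysis could be structured, the following sketch concatenates simple per-window features from two hypothetical sensor streams and feeds them into a single classifier. Window sizes, channel counts, feature choices, the synthetic calibration data and the workload label are all assumptions; the project's actual analysis pipeline is not specified here.

```python
# Minimal sketch of multimodal online state estimation: per-window features
# from several sensor channels are concatenated and fed to one classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(eeg_win, gaze_win):
    """Concatenate simple per-channel statistics from both modalities."""
    return np.concatenate([
        eeg_win.mean(axis=0), eeg_win.std(axis=0),    # coarse EEG statistics
        gaze_win.mean(axis=0), gaze_win.std(axis=0),  # gaze-behavior statistics
    ])

# Offline calibration on labeled windows (here: synthetic placeholder data).
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(250, 8)),
                              rng.normal(size=(60, 2))) for _ in range(100)])
y = rng.integers(0, 2, size=100)          # e.g. low vs. high workload
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Online use: classify each incoming window of sensor data.
p_high_load = clf.predict_proba(
    window_features(rng.normal(size=(250, 8)),
                    rng.normal(size=(60, 2)))[None, :])[0, 1]
```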
For human-machine interaction, experience from the project IMMI (Grant No: 50 RA 1012 and 50 RA 1011) will be used to flexibly exploit multimodal data and support various interaction possibilities. Additionally, experience from the projects Moonwalk (EU project; Grant No: 607346) and iMRK (on behalf of Volkswagen) will be used, which enables the human to interact with the system via simple communication channels such as gestures.
The robotic system should be able to automatically and intuitively estimate the context in which the human is currently acting (i.e., a situation-specific model of the human's actions). To this end, sensors developed within the project Moonwalk (EU project; Grant No: 607346) can be used to extract gestures and estimate general patterns of behavior; they will be combined with other sensors to determine the action context, which can then be given to the robot. In a concrete application case such as the installation of infrastructure in an extraterrestrial environment, this can address questions such as whether the human intends to interact with the system, whether unexpected events interrupt the interaction, or in which phase of a previously learned motion sequence the human is currently acting or interacting with the robot. Such knowledge can be used to predict further motions and actions; the robot can, for example, already move to the expected interaction location. The better characterization of the human's motion can also be used for planning and joint coordination of the system.
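One simple way to estimate the current phase of a previously learned motion sequence is a discrete Bayes filter over the phases. The sketch below is a minimal illustration under assumed transition and observation models; the actual phase definitions and sensor likelihoods would come from the learned motion sequences.

```python
# Minimal sketch: track the phase of a known motion sequence with a
# discrete Bayes filter. Transition/observation models are assumptions.
import numpy as np

# Phases of a known assembly motion: reach -> align -> insert -> retract.
transition = np.array([[0.8, 0.2, 0.0, 0.0],   # mostly stay, sometimes advance
                       [0.0, 0.8, 0.2, 0.0],
                       [0.0, 0.0, 0.8, 0.2],
                       [0.0, 0.0, 0.0, 1.0]])

def update(belief, likelihood):
    """One filter step: predict with the phase transitions, then weight by
    the observation likelihood of the current sensor frame."""
    predicted = transition.T @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([1.0, 0.0, 0.0, 0.0])        # start in 'reach'
for likelihood in [np.array([0.7, 0.2, 0.05, 0.05]),
                   np.array([0.3, 0.6, 0.05, 0.05])]:
    belief = update(belief, likelihood)
# argmax(belief) is the most likely phase; the robot can already move toward
# the interaction location expected for the next phase.
```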
Online Learning of Skills from Demonstration
The robot should be able to learn from the interaction with humans. To achieve this, previous work from the project BesMan (Grant No: 50 RA 1216 (DFKI) and 50 RA 1217 (University of Bremen)) will be continued and extended to allow online learning with data obtained from the exoskeleton. Learning will only take place during teleoperation, as a precise execution can then be expected and the motions are compatible with the system's limitations.
Furthermore, the robot should be able to autonomously cope with changes (such as fatigue or altered human-robot interaction, e.g., due to changes in personnel) by adapting its current behavior, without the simulation environment having to be modified. Especially for robots with complex kinematics, behaviors may be learned in simulation without prior human demonstrations.
Robotic systems that adapt their behavior through reinforcement learning require a performance evaluation, given by a reward function; this allows behaviors to be learned that are better suited to the robotic system than an exact copy of the human demonstration. Validation trials on the real system will be selected automatically to minimize the time required to evaluate these behaviors. All data obtained in these trials is used to learn an internal model that predicts how well simulated behaviors transfer to the real system.
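A minimal sketch of such automatic trial selection follows: a Gaussian process maps behavior parameters to the reward observed on the real system, and the next validation trial is chosen where the model is most uncertain. The one-dimensional parameter space, the candidate set and the example values are illustrative assumptions.

```python
# Minimal sketch: pick the next real-world validation trial where the
# transferability model is most uncertain. All values are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

tried_params = np.array([[0.1], [0.5], [0.9]])      # behaviors already run
real_rewards = np.array([0.2, 0.7, 0.3])            # rewards measured on robot

# Internal model: behavior parameters -> reward on the real system.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                              normalize_y=True).fit(tried_params, real_rewards)

candidates = np.linspace(0.0, 1.0, 50)[:, None]     # behaviors from simulation
mean, std = gp.predict(candidates, return_std=True)
next_trial = candidates[np.argmax(std)]             # most informative trial
```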
Simulation environments for learning, control and interaction
The use of physical simulation environments has a number of advantages in the development of robotic systems and their control. Design concepts can be tested at an early stage, allowing adjustments to the design even before the first prototype is produced. In the development of control procedures as well, the possibility of virtual testing saves time and costs. In TransFIT, simulation environments will be exploited in different areas: for decision-making in robot control; for learning, adaptation and optimization of behavior, e.g. to increase robustness; and for interaction, e.g. for the virtual immersion of the operator during teleoperation.
Data from the design serves as the basis for the creation of system models. Measurements taken while the robot interacts with its environment are used to refine these basic models. If the robot is temporarily not required in the mission, dedicated tests can be carried out to improve the predictive power of the simulation.
Simulation tools must also be expanded, improved and utilized to support the development and operation of the robotic systems, or to enable certain procedures such as machine learning. The possibility to simulate robots and their interaction with the environment is indispensable for generating, testing, evaluating and optimizing a large variety of behaviors. One focus is on real-time simulations, which make it possible to develop and test the systems' software. These simulations should also be integrated directly into the robots' control software and used at runtime in order to evaluate the success and the possible consequences of an intended action before it is executed in the real world. In this way, the robustness of the systems' autonomous capabilities can be increased.
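The following sketch indicates how such a runtime check might be wired into the control software: an intended action is rolled out several times in simulation before the real system executes it. The simulate function and its outcome fields are hypothetical placeholders for the project's physics simulation.

```python
# Minimal sketch: screen an intended action in simulation before executing
# it on the real robot. `simulate` and its outcome fields are hypothetical.
def evaluate_action(state, action, simulate, n_rollouts=5):
    """Roll out the action in simulation and report predicted success rate."""
    successes = 0
    for i in range(n_rollouts):
        outcome = simulate(state, action, seed=i)   # perturbed rollout
        if outcome.success and not outcome.collision:
            successes += 1
    return successes / n_rollouts

# The controller would only execute the action on the real system if, e.g.,
# evaluate_action(...) exceeds a confidence threshold such as 0.8.
```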
In addition, the simulation is used for the virtual, immersive representation of the robots in their intended environment, presented to the operator via head-mounted displays or in a stereoscopic surround projection, in order to realize and improve the simultaneous, intuitive control of (several) systems.
Improvement of manipulation skills
The robot's manipulation skills need to be improved within the project TransFIT. As previously mentioned, due to the constantly increasing complexity of robot kinematics, holistic approaches such as whole-body control are required to efficiently control the vast number of degrees of freedom. That is true not only for humanoid robots but also for mobile manipulation systems such as the Mantis robot. A software framework for whole-body control was developed within the project BesMan (Grant No: 50 RA 1216 (DFKI)) and tested as the execution layer for manipulation tasks on several robotic systems. The framework is based on constraint-based control and can be deployed on robots with many degrees of freedom to control simultaneously running parallel subtasks. Building on the results from BesMan, the framework will be developed further to allow the execution of action plans based on semantic task descriptions that are independent of the specific application context (for instance, the involved objects or robotic systems). Especially interesting is the option of combining whole-body control with learning methods in order to enable the (semi-)automatic selection and configuration of the above-mentioned robot subtasks.
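As a minimal illustration of constraint-based whole-body control, the sketch below stacks several parallel task objectives, each defined by a Jacobian and a desired task-space velocity, into one damped least-squares problem and solves for a single joint-velocity command. The dimensions, weights and damping term are assumptions; the BesMan framework itself is not reproduced here.

```python
# Minimal sketch of constraint-based whole-body control: several task
# Jacobians run in parallel; a weighted, damped least-squares solve yields
# one joint-velocity command. Dimensions and weights are illustrative.
import numpy as np

def whole_body_step(tasks, n_joints, damping=1e-4):
    """tasks: list of (jacobian, desired_task_velocity, weight)."""
    # Stack all weighted task equations J q_dot = x_dot into one system.
    A = np.vstack([w * J for J, _, w in tasks])
    b = np.concatenate([w * xd for _, xd, w in tasks])
    # Damped least squares keeps the solution bounded near singularities.
    H = A.T @ A + damping * np.eye(n_joints)
    return np.linalg.solve(H, A.T @ b)

# Example: a 7-DoF arm tracks an end-effector velocity (high weight) while
# a posture task (low weight) pulls the joints toward a rest pose.
J_ee = np.random.default_rng(1).normal(size=(6, 7))
qdot = whole_body_step(
    [(J_ee, np.array([0.1, 0, 0, 0, 0, 0]), 1.0),
     (np.eye(7), np.zeros(7), 0.05)],
    n_joints=7)
```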
On the other hand, fast robot motions involve high accelerations; purely kinematic control reaches its limits there, and the use of a dynamic model becomes unavoidable. This requires the identification of the robot dynamics from experimentally gathered robot data as well as the development of generic software components for using the dynamic model on robots of different morphologies (humanoid robots, industrial robots in the transfer scenario). To identify or learn these dynamic models and to cope with their complexity, the combination of classical identification procedures with learning methods might be a suitable solution. Especially interesting would be to combine their specific advantages: use classical methods to bootstrap the learning methods and thus speed up the learning process and, on the other side, use learning methods for the online adaptation of learned models and exploit their predictive potential. Furthermore, computationally efficient dynamic models would make it possible to consider the robot dynamics in motion planning, where they are currently mostly neglected.
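A hybrid identification scheme of this kind could look as follows: the rigid-body parameters are fitted classically by linear least squares over a regressor matrix, and a learned model captures the remaining residual torques. The regressor Y is assumed to come from the robot's kinematic model; the choice of residual model is an assumption.

```python
# Minimal sketch of hybrid dynamics identification: a classical least-squares
# fit of the rigid-body parameters combined with a learned model of the
# residual torques (friction, flexibilities, unmodeled effects).
import numpy as np
from sklearn.neural_network import MLPRegressor

def identify(Y, tau, states):
    """Y: (N, p) rigid-body regressor rows; tau: (N,) measured torques;
    states: (N, d) joint positions/velocities used as residual features."""
    theta, *_ = np.linalg.lstsq(Y, tau, rcond=None)   # classical step
    residual = tau - Y @ theta
    res_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
    res_model.fit(states, residual)                   # learned correction
    return theta, res_model

def predict_torque(Y_row, state, theta, res_model):
    """Hybrid prediction: rigid-body term plus learned residual."""
    return Y_row @ theta + res_model.predict(state[None, :])[0]
```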
Biologically inspired gripping
The interaction capability of robots with their environment is largely determined by their gripping skills. By gripping and manipulating objects, the environment is affected or changed. The handling itself can be realized through multiple physical effects. How safely an object is gripped is determined by parameters such as the amount of force, form closure and adhesion. The performance of a gripper cannot be optimized for all gripping-relevant parameters at once, so the limits of gripping are mostly pushed by special-purpose solutions. Given the current energy and mass limitations of mobile robot systems, the ambition is to optimize the gripper's dead-weight/payload ratio for better performance. This challenge is to be solved by mechanically self-adaptive finger kinematics and an intelligent gripper control. The mechanically self-adaptive structure enables the gripper to handle objects of different geometries without additional adaptation of the gripping pose by a superior control layer.
Grasping reflexes offer the chance to increase the payload capacity of a system, but also to establish fast, low-level control over the gripper in case of an imminent loss of grip.
Our aim is to develop a modular, self-adaptive gripping system that can be used on various industrial manipulators. The underactuated, mechanically self-adaptive gripping system that was developed for the 2015 SpaceBot Cup will be developed further and additionally upgraded with tactile sensors from the projects SeeGrip and LIMES. Additional proximity sensors on the contact surfaces provide data that is used for motion planning. This approach is a middle ground between visual and tactile servoing.
Additionally, the grasp control of the humanoid robot will be improved by biologically inspired gripping reflexes that adapt the gripping force to prevent a possible loss of the object. Besides the development of practical control algorithms, the manipulation capability will be increased through the use of self-adapting gripping structures.
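A gripping reflex of this kind can be sketched as a fast, local control rule: when the tactile readings indicate that the normal force is about to fall below the slip limit, the grip force is raised immediately, without a round trip through the high-level controller. The thresholds, gains and friction model below are assumptions.

```python
# Minimal sketch of a low-level grasping reflex against imminent grip loss.
# All thresholds, gains and the Coulomb friction model are assumptions.
def reflex_step(tangential_force, normal_force, grip_force,
                friction_coeff=0.5, margin=1.2, gain=2.0, f_max=40.0):
    """One fast control cycle: keep the normal force above the slip limit."""
    required = margin * tangential_force / friction_coeff
    if normal_force < required:                  # incipient slip detected
        grip_force = min(grip_force + gain * (required - normal_force), f_max)
    return grip_force
```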
Increasing sensor intelligence for fine manipulation
The ability to perform fine manipulation requires strong sensory support in the end-effectors. The work in this project will build on previous achievements in the SeeGrip and LIMES projects. The sensors used in these projects were originally employed in a space scenario (the Canadarm of the International Space Station, ISS) and will be used here for the feedback of contact forces. Within this project, it is planned to complement the pure contact information with proximity sensors in order to monitor all stages of manipulation (approach, contact, release). To increase the technology readiness level, it is envisioned to improve the robustness, the space qualification and the local intelligence of the sensing modules. The goal is to set up the processing chain of the sensors in such a way that the preprocessed measurement values can be used directly by manipulation and grasp planning.
Of special interest are the autonomy of the gripper in ensuring a stable grasp as well as the capability to autonomously apply tactile exploration strategies at increased speed.
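The sketch below suggests what such a local preprocessing step on a sensing module might compute: raw taxel pressures are reduced to a few contact features (total force, pressure centroid, contact area) that grasp planning can consume directly. The array layout and threshold are assumptions.

```python
# Minimal sketch of local tactile preprocessing: a raw taxel pressure image
# is reduced to contact features for grasp planning. Shapes are assumptions.
import numpy as np

def contact_features(taxels, threshold=0.05):
    """taxels: (H, W) calibrated pressure image of one sensor pad."""
    contact = taxels > threshold
    total_force = taxels[contact].sum()
    if total_force <= 0:
        return {"in_contact": False}
    ys, xs = np.nonzero(contact)
    weights = taxels[contact] / total_force
    centroid = (float(ys @ weights), float(xs @ weights))  # pressure center
    return {"in_contact": True, "force": float(total_force),
            "centroid": centroid, "area": int(contact.sum())}
```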
Reliable object recognition by combining top-down and bottom-up approaches
Integration of knowledge representation for flexible task planning
Knowledge representation and reasoning techniques are of high importance not only for object recognition but also for planning the robot's actions and for human-robot interaction.
A semantic environment representation, task planning and their connection allow the robot to express its behavior in a way that is meaningful to the human and facilitate intention recognition in both directions, which in turn increases the acceptance of the system.
The task planner needs to take into account the knowledge provided by the robot's environment representation and intention recognition. However, this knowledge is available in very different forms and modalities, e.g., causal, temporal or spatial knowledge, or resource requirements. To achieve the desired robot behavior, all of these knowledge sources should be taken into account. In addition, the robot's planner has to reason about the human's actions as well and react flexibly to unexpected events.
To deal with these requirements, TransFIT will integrate a hierarchical hybrid task planner into the robotic system and extend it to the needs of human-robot interaction and the other requirements of the project. Thanks to its modular hybrid planning approach, the planner can be extended with these additional kinds of knowledge. Furthermore, the planner's hierarchical structure allows parts of a plan to be reused and adapted in case of execution failures or unexpected events. At the same time, the hierarchical planning paradigm reduces the otherwise huge hybrid search space and makes it feasible to generate plans online on the robot.
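A minimal sketch of the hierarchical idea follows: abstract tasks are expanded via methods into subtasks until only primitive actions remain, and after an execution failure only the affected subtree needs to be re-expanded. The method table and task names are illustrative; the project's planner additionally handles the hybrid (e.g., temporal and resource) knowledge discussed above.

```python
# Minimal sketch of hierarchical task decomposition with local plan repair.
# The method table and task names are illustrative assumptions.
methods = {
    "assemble(module)": ["fetch(part_a)", "fetch(part_b)",
                         "join(part_a, part_b)"],
    "fetch(part_a)": ["locate(part_a)", "grasp(part_a)", "transport(part_a)"],
    "fetch(part_b)": ["locate(part_b)", "grasp(part_b)", "transport(part_b)"],
}

def decompose(task):
    """Depth-first expansion of one task into primitive actions."""
    if task not in methods:                 # primitive: execute as-is
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

plan = decompose("assemble(module)")
# On a failure of e.g. "grasp(part_a)", only "fetch(part_a)" needs to be
# re-decomposed instead of replanning the whole assembly.
```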
Technology transfer
One of the main goals of the TransFIT project is the transfer of technology developed for extraterrestrial applications into terrestrial applications. These two application domains have similar requirements. Siemens, with its extensive experience in the manufacturing of small, complex devices, will be responsible for developing a scenario that demonstrates this transfer of technology.
The aim is a bilateral transfer of technology. The goal of the transfer scenario is to demonstrate a highly flexible, universal and cooperative assembly cell for the manufacturing of complex components, for example compact mechanical or electrical devices that, as of today, are only produced manually. The manufacturing of small components is also relevant for the extraterrestrial assembly and installation of infrastructure. Solutions can therefore be transferred from the extraterrestrial application into the industrial application, and also the other way around: approaches from the industrial application can be used in the extraterrestrial application as well.
The following technologies that will be developed primarily for the extraterrestrial scenario will be transferred into the industrial assembly:
- Integration of knowledge representation for flexible task planning: hierarchical planning, together with a semantic representation of the environment and geometrical reasoning will be used in the context of industrial manufacturing
- Sensor-based dynamic motion and grasp planning: transfer to the concrete hardware setup of the transfer demonstrator (2-arm setup with off-the-shelf sensors and grippers) and dual-arm assembly task
- Whole-body control based on semantic task descriptions: transfer to the concrete 2-arm and gripper setup of the transfer demonstrator
- Context-based object detection: the object detection pipeline and grasp pose planning should be transferred to the concrete hardware setup of the transfer demonstrator and applied in the context of industrial assembly.
The following technologies that will be developed primarily for the industrial assembly scenario will be transferred into the extraterrestrial application:
- Skill-based framework for autonomous production systems: the concept of skills introduces a task-independent hierarchy of system functionality. This hierarchy is adequate for both discrete manufacturing and extraterrestrial infrastructure construction. The concept of skill is a central aspect in the hierarchical task planning and in the architecture of the whole system.
- Integrated task and motion planning: sequential steps in an assembly process can have geometric constraints that cannot be satisfied when the steps are planned and executed independently. Task planning therefore has to take into account the motion of the robot and tools over several process steps; see the sketch after this list. This holds for both scenarios.
- Intuitive human-machine interaction for autonomous production systems: an intuitive human-machine interaction is not only relevant in the context of industrial production, but for any type of human-machine interaction. All aspects of the interaction concept that are independent of the industrial assembly context should also be realized in the context of extraterrestrial infrastructure building.
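As a minimal illustration of the integrated task and motion planning idea referenced above, the sketch below accepts a task-level plan only if motion feasibility holds across all consecutive steps. The motion_feasible function stands in for a geometric or motion-planning query and is a hypothetical placeholder.

```python
# Minimal sketch of integrated task and motion planning: a candidate
# sequence of assembly steps is only accepted if the motions chain up
# across consecutive steps, not just per step. `motion_feasible` is a
# hypothetical placeholder for a geometric/motion-planning query.
def feasible_plan(candidate_plans, motion_feasible, start_config):
    """Return the first task-level plan whose steps are jointly reachable."""
    for plan in candidate_plans:
        config = start_config
        for step in plan:
            config = motion_feasible(config, step)  # None if unreachable
            if config is None:
                break                               # geometric dead end
        else:
            return plan                             # all steps chain up
    return None
```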