Our mission is to make robots learn in a developmental fashion, similar to children. Why is that useful? Robots are already present in many areas of our lives, taking over tasks that are too dangerous, too repetitive, or demand higher precision than humans can deliver. However, their range of application is limited, because preprogrammed robots cannot successfully interact with our complex and constantly changing world.
In living beings, this problem is solved by an interplay of learning, self-organization, and innate information. Self-organization, which is ubiquitous in nature, offers promising perspectives for practical applications because it is based on local interactions and typically scales well to large, fault-tolerant systems. In our research, we are building the theoretical basis for self-organized robot control. We study how dynamical systems and information theory can be used to generate sensory-motor coordination in robots.
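As a toy illustration of this idea (not one of our actual controllers), consider a single sensorimotor loop in which a purely local, homeostatic rule adapts one feedback gain. The loop tunes itself to the edge of instability, where rhythmic behavior emerges without any task or reward. All dynamics and constants below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensorimotor loop: sensor s, motor command m = tanh(w * s).
# A purely local homeostatic rule adapts the single gain w so that
# the recent sensor variance matches a target -- no task, no reward.
w, s = 0.0, 0.0
target_var, eta = 0.25, 0.01
var_est = 0.0
trace = []

for t in range(5000):
    m = np.tanh(w * s)                               # motor command from sensor
    s = 0.9 * s + m + 0.01 * rng.standard_normal()   # body/environment dynamics
    var_est = 0.99 * var_est + 0.01 * s**2           # running variance estimate
    w -= eta * (var_est - target_var)                # local homeostatic adaptation
    trace.append(s)

print(f"final gain w = {w:.2f}, sensor std = {np.std(trace[-1000:]):.2f}")
```

Starting from an inactive loop, the rule drives the gain past the stability boundary, an oscillation emerges, and the gain settles where the sensor activity matches the target: behavior arises from a local rule rather than from a preprogrammed plan.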
To make robots successful at learning new skills, they have to extract as much information as possible from experience. Learning a model of the robot's body and its environment can capture the information needed to master unknown future tasks. Classical regression models, however, predict poorly in situations unlike the training data. We investigate how an algorithm can identify the underlying relationships in the data and thereby obtain a model that extrapolates well.
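The following minimal Python sketch illustrates the point, as a generic demonstration rather than our specific method: an unstructured polynomial fit matches the training data but fails far outside it, while a model built from a small dictionary of candidate basis functions containing the true term extrapolates well.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: y = x * cos(x). Train only on x in [-2, 2],
# then test far outside the training range at x = 6.
x_train = rng.uniform(-2, 2, 200)
y_train = x_train * np.cos(x_train) + 0.01 * rng.standard_normal(200)

# Classical regression: an unstructured degree-9 polynomial fit.
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# Structured model: least squares over a small dictionary of
# candidate basis functions that contains the true term x*cos(x).
def features(x):
    return np.stack([x, np.sin(x), np.cos(x), x * np.cos(x)], axis=-1)

coef, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

x_test = 6.0
print("true          :", x_test * np.cos(x_test))
print("polynomial fit:", poly(x_test))          # diverges off the training range
print("structured fit:", features(np.array([x_test])) @ coef)
```

Identifying which structural terms actually underlie the data is the hard part; once they are found, the resulting model remains valid far beyond the experience it was trained on.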
For actually learning, improving, and controlling goal-oriented behavior, we use reinforcement learning and optimal control methods. Our aim is to make reinforcement learning so data-efficient that it can easily be applied to real systems. We currently pursue two complementary routes. One is to combine optimal control methods and planning with reinforcement learning. The other is to learn many tasks at the same time, using hierarchical learners that are intrinsically motivated to explore appropriate goal spaces and to understand relational structures in the environment.
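The first route can be sketched as follows: learn a dynamics model from interaction data and then plan through it, here with a simple random-shooting planner on an invented 1-D point-mass task. The environment, the planner, and all constants are illustrative placeholders, not our actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task: 1-D point mass, state (position, velocity), action = force.
# Goal: reach position 1.0. The true dynamics are unknown to the agent.
def env_step(s, a):
    pos, vel = s
    vel = vel + 0.1 * a
    return np.array([pos + 0.1 * vel, vel])

# 1) Learn a dynamics model from random interaction (linear least squares).
S, A, S_next = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1)
    s2 = env_step(s, a)
    S.append(s); A.append([a]); S_next.append(s2)
    s = s2 if abs(s2[0]) < 5 else np.zeros(2)
X = np.hstack([S, A])
W, *_ = np.linalg.lstsq(X, np.array(S_next), rcond=None)
model = lambda s, a: np.append(s, a) @ W

# 2) Plan through the learned model by random shooting (replanned each step).
def plan(s, horizon=20, n=300):
    seqs = rng.uniform(-1, 1, (n, horizon))
    best, best_a = -np.inf, 0.0
    for seq in seqs:
        sim, ret = s, 0.0
        for a in seq:
            sim = model(sim, a)
            ret -= (sim[0] - 1.0) ** 2   # reward: stay close to the goal
        if ret > best:
            best, best_a = ret, seq[0]
    return best_a

s = np.zeros(2)
for t in range(60):
    s = env_step(s, plan(s))
print("final position:", round(s[0], 3))   # should end up near 1.0
```

Because the model is learned from experience and reused for planning, each real interaction is exploited many times in simulation, which is the core of the data-efficiency argument.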
Machine learning methods are the workhorses behind our recent developments in model learning, reinforcement learning, and representation learning. While investigating state-of-the-art methods, we identified potential for speeding up stochastic gradient descent in practical applications. Furthermore, we investigate unsupervised learning of independent representations.
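For reference, the basic minibatch stochastic gradient descent update that such work builds on looks as follows on a linear least-squares problem; this is the generic algorithm with invented data, not our speed-up technique.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minibatch SGD on linear least squares: minimize ||X w - y||^2 / n.
n, d = 10_000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
lr, batch = 0.05, 64
for step in range(2000):
    idx = rng.integers(0, n, batch)                        # sample a minibatch
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch    # stochastic gradient
    w -= lr * grad                                         # plain SGD update
print("parameter error:", np.linalg.norm(w - w_true))
```

In practice, the speed of this loop hinges on how the step size is chosen, which is exactly where adaptive schemes can help.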
In several research projects, we investigate data-driven approaches to optimal and robust control, with applications in robotics, among others. By combining optimal control, a principled way of decision-making and control, with reinforcement learning for control design, we are tackling various challenges arising in robotic systems.
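On the optimal control side, the classical workhorse is the linear-quadratic regulator (LQR). A minimal sketch, assuming a discrete-time double integrator and SciPy's Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete-time LQR for a 1-D double integrator (position, velocity).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])      # penalize position error more than velocity
R = np.array([[0.01]])       # cheap control

# Solve the discrete algebraic Riccati equation and form the gain K.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

s = np.array([1.0, 0.0])     # start 1 m away from the target
for _ in range(100):
    u = -K @ s               # optimal state feedback
    s = A @ s + B @ u
print("final state:", np.round(s, 4))   # driven toward the origin
```

LQR gives guarantees when an accurate linear model is available; reinforcement learning takes over where models are unknown or nonlinear, which motivates combining the two.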
Deep learning is the tool in our research for obtaining learned representations, fitting functions such as policies or value functions, and learning internal models. While using deep learning techniques for our core focus of autonomous learning, we frequently need to develop new methods. Quite often, we stumble upon unsolved or puzzling problems.
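A representative use is fitting an internal forward model (state, action) → next state with a small multilayer perceptron. The following PyTorch sketch uses synthetic stand-in data; the network size, dimensions, and the "true" dynamics are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small MLP as an internal forward model: (state, action) -> next state.
STATE_DIM, ACT_DIM = 4, 2
model = nn.Sequential(
    nn.Linear(STATE_DIM + ACT_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, STATE_DIM),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for logged robot transitions (s, a, s'); here synthetic data
# from an arbitrary nonlinear "true" dynamics, for illustration only.
s = torch.randn(4096, STATE_DIM)
a = torch.randn(4096, ACT_DIM)
s_next = s + 0.1 * torch.tanh(s @ torch.randn(STATE_DIM, STATE_DIM)) \
           + 0.1 * a @ torch.randn(ACT_DIM, STATE_DIM)

for epoch in range(200):
    pred = model(torch.cat([s, a], dim=-1))
    loss = nn.functional.mse_loss(pred, s_next)   # one-step prediction error
    opt.zero_grad(); loss.backward(); opt.step()
print("final prediction loss:", loss.item())
```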
Our goal is to design a 3D haptic sensing system that covers all surfaces of a humanoid and helps the robot discover its embodiment. Instead of applying dense sensor arrays or monitoring the robot with multiple external cameras, we propose to place as few sensors as possible and to predict the full touch information from their readings.
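One generic way to realize this idea (an illustration, not necessarily our final method) is sparse sensor placement: if touch patterns live in a low-dimensional space, a few well-chosen taxels suffice to reconstruct the full field. The sketch below selects locations by QR column pivoting, a standard heuristic, on invented data.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)

# Illustrative setup: a "skin" with 200 taxel locations, but touch
# patterns live in a low-dimensional space (here, 5 smooth modes).
n_taxels, n_modes, n_samples = 200, 5, 1000
grid = np.linspace(0, 1, n_taxels)
modes = np.stack([np.exp(-((grid - c) ** 2) / 0.02)
                  for c in np.linspace(0.1, 0.9, n_modes)])   # (5, 200)
touch = rng.standard_normal((n_samples, n_modes)) @ modes     # full touch fields

# Choose k sensor locations by QR with column pivoting on the modes
# (a standard sparse-sensor-placement heuristic), with k = n_modes here.
_, _, piv = qr(modes, pivoting=True)
sensors = piv[:n_modes]

# Learn a linear map from the k sensor readings to the full field.
W, *_ = np.linalg.lstsq(touch[:, sensors], touch, rcond=None)

test = rng.standard_normal(n_modes) @ modes       # an unseen touch pattern
recon = test[sensors] @ W
print("sensor locations:", np.sort(sensors))
print("reconstruction error:", np.linalg.norm(recon - test) / np.linalg.norm(test))
```

The interesting research questions start where this toy ends: real skin responses are nonlinear, the low-dimensional structure must itself be learned, and the placement has to respect the robot's 3D geometry.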