Intelligent Systems


2024


Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models
2024 (misc)

Abstract
Humans excel at robust bipedal walking in complex natural environments. In each step, they adequately tune the interaction of biomechanical muscle dynamics and neuronal signals to be robust against uncertainties in ground conditions. However, it is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem considering stability, robustness, and energy efficiency. In computer simulations, energy minimization has been shown to be a successful optimization target, reproducing natural walking with trajectory optimization or reflex-based control methods. However, these methods focus on particular motions at a time and the resulting controllers are limited when compensating for perturbations. In robotics, reinforcement learning (RL) methods recently achieved highly stable (and efficient) locomotion on quadruped systems, but the generation of human-like walking with bipedal biomechanical models has required extensive use of expert data sets. This strong reliance on demonstrations often results in brittle policies and limits the application to new behaviors, especially considering the potential variety of movements for high-dimensional musculoskeletal models in 3D. Achieving natural locomotion with RL without sacrificing its incredible robustness might pave the way for a novel approach to studying human walking in complex natural environments. Videos: https://sites.google.com/view/naturalwalkingrl

link (url) [BibTex]

2022


A Sequential Group VAE for Robot Learning of Haptic Representations

Richardson, B. A., Kuchenbecker, K. J., Martius, G.

pages: 1-11, Workshop paper (8 pages) presented at the CoRL Workshop on Aligning Robot Representations with Humans, Auckland, New Zealand, December 2022 (misc)

Abstract
Haptic representation learning is a difficult task in robotics because information can be gathered only by actively exploring the environment over time, and because different actions elicit different object properties. We propose a Sequential Group VAE that leverages object persistence to learn and update latent general representations of multimodal haptic data. As a robot performs sequences of exploratory procedures on an object, the model accumulates data and learns to distinguish between general object properties, such as size and mass, and trial-to-trial variations, such as initial object position. We demonstrate that after very few observations, the general latent representations are sufficiently refined to accurately encode many haptic object properties.

link (url) Project Page [BibTex]


A Soft Vision-Based Tactile Sensor for Robotic Fingertip Manipulation

Andrussow, I., Sun, H., Kuchenbecker, K. J., Martius, G.

Workshop paper (1 page) presented at the IROS Workshop on Large-Scale Robotic Skin: Perception, Interaction and Control, Kyoto, Japan, October 2022 (misc)

Abstract
For robots to become fully dexterous, their hardware needs to provide rich sensory feedback. High-resolution haptic sensing similar to the human fingertip can enable robots to execute delicate manipulation tasks like picking up small objects, inserting a key into a lock, or handing a cup of coffee to a human. Many tactile sensors have emerged in recent years; one especially promising direction is vision-based tactile sensors due to their low cost, low wiring complexity and high-resolution sensing capabilities. In this work, we build on previous findings to create a soft fingertip-sized tactile sensor. It can sense normal and shear contact forces all around its 3D surface with an average prediction error of 0.05 N, and it localizes contact on its shell with an average prediction error of 0.5 mm. The software of this sensor uses a data-efficient machine-learning pipeline to run in real time on hardware with low computational power like a Raspberry Pi. It provides a maximum data frame rate of 60 Hz via USB.

link (url) Project Page [BibTex]

2021


Sensor arrangement for sensing forces and methods for fabricating a sensor arrangement and parts thereof

Sun, H., Martius, G., Kuchenbecker, K. J.

(PCT/EP2021/050230), Max Planck Institute for Intelligent Systems, Max Planck Ring 4, January 2021 (patent)

Abstract
The invention relates to a vision-based haptic sensor arrangement for sensing forces, to a method for fabricating a top portion of a sensor arrangement, and to a method for fabricating a sensor arrangement.

Project Page [BibTex]


Method for force inference, method for training a feed-forward neural network, force inference module, and sensor arrangement

Sun, H., Martius, G., Kuchenbecker, K. J.

(PCT/EP2021/050231), Max Planck Institute for Intelligent Systems, Max Planck Ring 4, January 2021 (patent)

Abstract
The invention relates to a method for force inference of a sensor arrangement for sensing forces, to a method for training a feed-forward neural network, to a force inference module, and to a sensor arrangement.

Project Page [BibTex]

2020


Method for Force Inference of a Sensor Arrangement, Methods for Training Networks, Force Inference Module and Sensor Arrangement

Sun, H., Martius, G., Lee, H., Spiers, A., Fiene, J.

(PCT/EP2020/083261), Max Planck Institute for Intelligent Systems, Max Planck Ring 4, November 2020 (patent)

Abstract
The present invention relates to a method for force inference of a sensor arrangement, to related methods for training of networks, to a force inference module for performing such methods, and to a sensor arrangement for sensing forces. When developing applications such as robots, sensing of forces applied on a robot hand or another part of a robot such as a leg or a manipulation device is crucial in giving robots increased capabilities to move around and/or manipulate objects. Known implementations for sensor arrangements that can be used in robotic applications in order to have feedback with regard to applied forces are quite expensive and do not have sufficient resolution. Sensor arrangements may be used to measure forces. However, known sensor arrangements need a high density of sensors to provide for a high spatial resolution. It is thus an object of the present invention to provide for a method for force inference of a sensor arrangement and related methods that are different or optimized with regard to the prior art. It is a further object to provide for a force inference module to perform such methods. It is a further object to provide for a sensor arrangement for sensing forces with such a force inference module.

Project Page [BibTex]

2014


Robot Learning by Guided Self-Organization

Martius, G., Der, R., Herrmann, J. M.

In Guided Self-Organization: Inception, 9, pages: 223-260, Emergence, Complexity and Computation, Springer Berlin Heidelberg, 2014 (incollection)

link (url) DOI [BibTex]

2013


Behavior as broken symmetry in embodied self-organizing robots

Der, R., Martius, G.

In Advances in Artificial Life, ECAL 2013, pages: 601-608, MIT Press, 2013 (incollection)

[BibTex]

2011


Tipping the Scales: Guidance and Intrinsically Motivated Behavior

Martius, G., Herrmann, J. M.

In Advances in Artificial Life, ECAL 2011, pages: 506-513, (Editors: Tom Lenaerts and Mario Giacobini and Hugues Bersini and Paul Bourgine and Marco Dorigo and René Doursat), MIT Press, 2011 (incollection)

[BibTex]

2010


LpzRobots: A free and powerful robot simulator

Martius, G., Hesse, F., Güttler, F., Der, R.

http://robot.informatik.uni-leipzig.de/software, 2010 (misc)

[BibTex]


Playful Machines: Tutorial

Der, R., Martius, G.

http://robot.informatik.uni-leipzig.de/tutorial?lang=en, 2010 (misc)

[BibTex]


Taming the Beast: Guided Self-organization of Behavior in Autonomous Robots

Martius, G., Herrmann, J. M.

In From Animals to Animats 11, 6226, pages: 50-61, LNCS, Springer, 2010 (incollection)

link (url) DOI [BibTex]