
Institute Talks

Safe Learning Control for Gaussian Process Models

Talk
  • 25 February 2020 • 14:00 - 15:00
  • Jonas Umlauft
  • MPI-IS Stuttgart, Heisenbergstr. 3, seminar room 2P4

Machine learning allows automated systems to identify structures and physical laws from measured data, which is particularly useful in areas where an analytic derivation of a model is too tedious or not possible. Research in reinforcement learning has led to impressive results and superhuman performance in well-structured tasks and games. However, to this day, data-driven models are rarely employed in the control of safety-critical systems, because the success of a controller based on these models cannot be guaranteed. Therefore, the research presented in this talk analyzes the closed-loop behavior of learning control laws by means of rigorous proofs. More specifically, we propose a control law based on Gaussian process (GP) models, which actively avoids uncertain regions of the state space and favors trajectories along the training data, where the system is well known. We show that this behavior is optimal, as it maximizes the probability of asymptotic stability. Additionally, we consider an event-triggered online learning control law, which safely explores an initially unknown system: it takes new training data only when the uncertainty in the system becomes too large. As the control law requires only a locally precise model, this novel learning strategy is highly data efficient and provides safety guarantees.
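
To make the event-triggered idea concrete, here is a minimal sketch (not the speaker's implementation): the controller queries the GP's predictive standard deviation at the current state and takes a new measurement only when it exceeds a threshold. The toy dynamics, the threshold value, and the use of scikit-learn are illustrative assumptions.

```python
# Minimal sketch of event-triggered GP learning: add a training point only
# where the model is uncertain. Dynamics and threshold are toy assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_dynamics(x):           # unknown system the GP is learning (toy stand-in)
    return np.sin(x)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
X_train, y_train = [], []
sigma_max = 0.1                 # trigger threshold on predictive std. dev.

x = 0.0
for step in range(50):
    if X_train:
        _, sigma = gp.predict(np.array([[x]]), return_std=True)
        triggered = sigma[0] > sigma_max
    else:
        triggered = True        # no data yet: always take a measurement
    if triggered:               # event-triggered: learn only where uncertain
        X_train.append([x]); y_train.append(true_dynamics(x))
        gp.fit(np.array(X_train), np.array(y_train))
    x += 0.2                    # move along a trajectory through the state space
```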

Organizers: Sebastian Trimpe

Learning to Model 3D Human Face Geometry

Talk
  • 20 March 2020 • 11:00 - 12:00
  • Victoria Fernández Abrevaya
  • N3.022 (Aquarium)

In this talk I will present an overview of our recent works that learn deep geometric models of the 3D face from large datasets of scans. Priors for the 3D face are crucial for many applications: to constrain ill-posed problems such as 3D reconstruction from monocular input, for efficient generation and animation of 3D virtual avatars, or even in medical domains such as recognition of craniofacial disorders. Generative models of the face have been widely used for this task, as have deep learning approaches, which have recently emerged as a robust alternative. Barring a few exceptions, most of these data-driven approaches were built either from a relatively limited number of samples (in the case of linear shape models) or with synthetic data augmentation (for deep-learning-based approaches), mainly due to the difficulty of obtaining large-scale and accurate 3D scans of the face. Yet there is a substantial amount of 3D information that can be gathered from publicly available datasets captured over the last decade. I will discuss here our works that tackle the challenges of building rich geometric models out of these large and varied datasets, with the goal of modeling the facial shape, expression (i.e. motion) or geometric details. Concretely, I will talk about (1) an efficient and fully automatic approach for registration of large datasets of 3D faces in motion; (2) deep learning methods for modeling the facial geometry that can disentangle the shape and expression aspects of the face; and (3) a multi-modal learning approach for capturing geometric details from images in the wild, by simultaneously encoding both facial surface normal and natural image information.
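
As background for point (2), the classical way to disentangle shape and expression is an additive linear model in which identity and expression occupy separate subspaces. The sketch below illustrates that structure only; the dimensions, variable names, and random bases are assumptions, not the speaker's models.

```python
# Schematic, NumPy-only illustration of an additive face model with
# separate identity and expression subspaces (3DMM-style).
import numpy as np

n_vertices = 5000                       # illustrative mesh resolution
mean_face = np.zeros(3 * n_vertices)    # mean shape, flattened (x, y, z) per vertex
B_shape = np.random.randn(3 * n_vertices, 80)   # identity basis (e.g. from PCA)
B_expr  = np.random.randn(3 * n_vertices, 30)   # expression basis

def synthesize(shape_coeffs, expr_coeffs):
    """Decode a face mesh from disentangled identity and expression codes."""
    verts = mean_face + B_shape @ shape_coeffs + B_expr @ expr_coeffs
    return verts.reshape(n_vertices, 3)

# Disentanglement means an expression can be transferred to a new identity
# by swapping the identity code while keeping the expression code.
face_a = synthesize(np.random.randn(80), np.random.randn(30))
```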

Organizers: Jinlong Yang

Electro-active Ionic Elastomers

Talk
  • 23 March 2020 • 11:00 - 12:00
  • Prof. Antal Jákli
  • 2P04

Motivated by the low-voltage-driven actuation of ionic electroactive polymers (iEAPs) [1][2], we recently began investigating ionic elastomers. In this talk I will discuss the preparation, physical characterization and electric bending actuation properties of two novel ionic elastomers: ionic polymer electrolyte membranes (iPEMs) [3] and ionic liquid crystal elastomers (iLCEs) [4]. Both materials can be actuated by low-frequency AC or DC voltages of less than 1 V. The bending actuation properties of the iPEMs outperform those of most well-developed iEAPs, and even the first, unoptimized iLCEs are already comparable to them. Ionic liquid crystal elastomers also exhibit superior features, such as alignment-dependent actuation, which offers the possibility of pre-programming actuation patterns during the cross-linking process. Additionally, multiple (thermal, optical and electric) actuation modes are possible. I will also discuss issues with compliant electrodes and possible soft robotic applications.

[1] Y. Bar-Cohen, Electroactive Polymer Actuators as Artificial Muscles: Reality, Potential and Challenges, SPIE Press, Bellingham, 2004.
[2] O. Kim, S. J. Kim, M. J. Park, Chem. Commun. 2018, 54, 4895.
[3] C. P. H. Rajapaksha, C. Feng, C. Piedrahita, J. Cao, V. Kaphle, B. Lüssem, T. Kyu, A. Jákli, Macromol. Rapid Commun. 2020, in press.
[4] C. Feng, C. P. H. Rajapaksha, J. M. Cedillo, C. Piedrahita, J. Cao, V. Kaphle, B. Lussem, T. Kyu, A. I. Jákli, Macromol. Rapid Commun. 2019, 1900299.

Biomechanical models and functional anatomy of the horse body

Talk
  • 23 March 2020 • 12:00 - 12:45
  • Elin Hernlund
  • N3.022 (Aquarium)

“There’s something about the outside of a horse that is good for the inside of a man”, Churchill allegedly said. The horse’s motion has captured the interest of humans throughout history. An understanding of the mechanics of horse motion has been sought in early work by Aristotle (300 BC), in the pioneering photographic studies of Muybridge (1880), and in modern-day scientific publications.

The horse (Equus ferus caballus) is a remarkable animal athlete with outstanding running capabilities. The efficiency of its locomotion is explained by specialised anatomical features, which limit the degrees of freedom of movement and reduce energy consumption. Theoretical mechanical models are quite well suited to describing the essence of equine gaits and provide simple measures for analysing gait asymmetry. Such measures are much needed, since agreement between veterinarians is moderate to poor when it comes to visual assessment of lameness.
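
For illustration, one simple asymmetry measure of the kind used in equine gait analysis compares the two vertical-displacement minima of an upper-body landmark (e.g. the pelvis) within one stride. The sketch below uses a synthetic signal and illustrative numbers; it is not taken from the speaker's work.

```python
# Minimal sketch of a stride-symmetry index: the difference between the two
# vertical minima of an upper-body landmark within one stride.
import numpy as np

def min_diff(vertical_pos, stride_midpoint):
    """Difference between the two displacement minima in one stride:
    near zero for a symmetric trot, non-zero when one side is unloaded."""
    first_half  = vertical_pos[:stride_midpoint]
    second_half = vertical_pos[stride_midpoint:]
    return first_half.min() - second_half.min()

t = np.linspace(0, 1, 200)                         # one stride, normalized time
symmetric = np.cos(4 * np.pi * t)                  # two equal oscillations per stride
lame = symmetric + 0.3 * np.sin(2 * np.pi * t)     # one half-stride loaded less
print(min_diff(symmetric, 100), min_diff(lame, 100))  # ~0.0 vs ~0.6
```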

The human visual system indeed has clear limitations in perceiving and interpreting horse motion. This limits our ability to understand the horse: not only to detect lameness and predict performance, but also to interpret its non-verbal communication and detect signs of illness or discomfort.

This talk will provide a brief overview of existing motion analysis techniques and models in equine biomechanics. We will discuss future possibilities to achieve more accessible, sensitive and complex ways of analysing the motion of the horse.

A fine-grained perspective onto object interactions

Talk
  • 30 October 2018 • 10:30 - 11:30
  • Dima Damen
  • N0.002

This talk argues for a fine-grained perspective on human-object interactions from video sequences. I will present approaches for understanding ‘what’ objects one interacts with during daily activities, ‘when’ we should label the temporal boundaries of interactions, ‘which’ semantic labels one can use to describe such interactions, and ‘who’ is better when contrasting people performing the same interaction. I will detail my group’s latest works on sub-topics related to: (1) assessing action ‘completion’ – when an interaction is attempted but not completed [BMVC 2018]; (2) determining skill or expertise from video sequences [CVPR 2018]; and (3) finding unequivocal semantic representations for object interactions [ongoing work]. I will also introduce EPIC-KITCHENS 2018, the recently released largest dataset of object interactions in people’s homes, recorded using wearable cameras. The dataset includes 11.5M frames fully annotated with objects and actions, based on unique annotations from the participants narrating their own videos, thus reflecting true intention. Three open challenges are now available on object detection, action recognition and action anticipation [http://epic-kitchens.github.io].

Organizers: Mohamed Hassan


Artificial Haptic Intelligence for Human-Machine Systems

IS Colloquium
  • 25 October 2018 • 11:00 - 12:00
  • Veronica J. Santos
  • N2.025 at MPI-IS in Tübingen

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.
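
To illustrate the contextual-bandit structure described above, here is a toy sketch: each arm is a candidate robot action, the context is a tactile/proprioceptive feature vector, and a per-arm linear reward model is updated from the observed consequences. The epsilon-greedy selection, feature dimensions, and reward definition are assumptions, not the actual system.

```python
# Toy contextual bandit: per-arm linear reward models, epsilon-greedy choice.
import numpy as np

class LinearContextualBandit:
    def __init__(self, n_arms, dim, epsilon=0.1, lr=0.05):
        self.weights = np.zeros((n_arms, dim))  # one reward model per action
        self.epsilon, self.lr = epsilon, lr

    def select(self, context):
        if np.random.rand() < self.epsilon:     # explore occasionally
            return np.random.randint(len(self.weights))
        return int(np.argmax(self.weights @ context))  # exploit best estimate

    def update(self, arm, context, reward):
        # gradient step on squared error between predicted and observed reward
        error = reward - self.weights[arm] @ context
        self.weights[arm] += self.lr * error * context

bandit = LinearContextualBandit(n_arms=4, dim=8)
context = np.random.randn(8)             # e.g. tactile + proprioceptive features
arm = bandit.select(context)
bandit.update(arm, context, reward=1.0)  # reward: progress along the contour
```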

Organizers: Katherine J. Kuchenbecker Adam Spiers


Artificial Haptic Intelligence for Human-Machine Systems

IS Colloquium
  • 24 October 2018 • 11:00 - 12:00
  • Veronica J. Santos
  • 5H7 at MPI-IS in Stuttgart

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.

Organizers: Katherine J. Kuchenbecker


Control Systems for a Surgical Robot on the Space Station

IS Colloquium
  • 23 October 2018 • 16:30 - 17:30
  • Chris Macnab
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

As part of a proposed design for a surgical robot on the space station, my research group has been asked to look at controls that can provide literally surgical precision. Due to excessive time delay, we envision a system in which a local model is controlled by a surgeon while the remote system on the space station follows along in a safe manner. Two major design considerations come into play for the low-level feedback loops on the remote side: 1) the harmonic drives in a robot will cause excessive vibrations in a micro-gravity environment unless active damping strategies are employed, and 2) when interacting with a human tissue environment, the robot must apply smooth control signals that result in precise positions and forces. Thus, we envision intelligent strategies that utilize nonlinear, adaptive, neural-network, and/or fuzzy control theory as the most suitable. However, space agencies, or their engineering sub-contractors, typically provide gain and phase margin characteristics as requirements to the engineers involved in a control system design, which are normally associated with PID or other traditional linear control schemes. We are currently endeavouring to create intelligent controls that have guaranteed gain and phase margins using the Cerebellar Model Articulation Controller (CMAC).
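
For readers unfamiliar with the CMAC, here is a minimal one-input sketch of its table-lookup structure: several shifted tilings index into weight tables, the output sums one weight per tiling, and learning is a local LMS update. The layer count, resolution, and toy training target are illustrative assumptions, not the group's design.

```python
# Minimal one-input CMAC: shifted tilings, summed table lookups, LMS learning.
import numpy as np

class CMAC:
    def __init__(self, n_layers=8, n_cells=64, x_min=-1.0, x_max=1.0):
        self.n_layers, self.n_cells = n_layers, n_cells
        self.x_min, self.width = x_min, (x_max - x_min)
        self.w = np.zeros((n_layers, n_cells))  # one weight table per tiling

    def _cells(self, x):
        # each layer is the same quantization, shifted by a fraction of a cell
        s = (x - self.x_min) / self.width * (self.n_cells - 1)
        return [int(s + layer / self.n_layers) % self.n_cells
                for layer in range(self.n_layers)]

    def predict(self, x):
        return sum(self.w[l, c] for l, c in enumerate(self._cells(x)))

    def train(self, x, target, beta=0.5):
        # LMS update spread over the activated cells (local learning)
        err = (target - self.predict(x)) * beta / self.n_layers
        for l, c in enumerate(self._cells(x)):
            self.w[l, c] += err

net = CMAC()
for _ in range(200):                       # learn a toy input-output mapping
    x = np.random.uniform(-1, 1)
    net.train(x, np.sin(np.pi * x))
```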

Organizers: Katherine J. Kuchenbecker


Learning to Act with Confidence

Talk
  • 23 October 2018 • 12:00 - 13:00
  • Andreas Krause
  • MPI-IS Tübingen, N0.002

Actively acquiring decision-relevant information is a key capability of intelligent systems, and plays a central role in the scientific process. In this talk I will present research from my group on this topic at the intersection of statistical learning, optimization and decision making. In particular, I will discuss how statistical confidence bounds can guide data acquisition in a principled way to make effective and reliable decisions in a variety of complex domains. I will also discuss several applications, ranging from autonomously guiding wetlab experiments in protein function optimization to safe exploration in robotics.
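
As a concrete illustration of confidence-bound-guided data acquisition (in the spirit of GP-UCB, not necessarily the speaker's exact algorithms), the sketch below always queries the candidate with the highest upper confidence bound. The objective function, the beta value, and the candidate grid are toy assumptions.

```python
# Minimal sketch of upper-confidence-bound data acquisition with a GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                         # expensive function (toy stand-in)
    return -(x - 0.3) ** 2

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
X, y = [[0.0]], [objective(0.0)]
gp, beta = GaussianProcessRegressor(), 2.0

for _ in range(10):
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mu + beta * sigma)]  # optimism under uncertainty
    X.append(list(x_next)); y.append(objective(x_next[0]))
```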


  • Ravi Haksar
  • MPI-IS Stuttgart, seminar room 2P4

What do forest fires, disease outbreaks, robot swarms, and social networks have in common? How can we develop a common set of tools for these applications? In this talk, I will first introduce a modeling framework that describes large-scale phenomena and is based on the idea of "local interactions." I will then describe my work on creating estimation and control methods for a single agent and for a cooperative team of autonomous agents. In particular, these algorithms are scalable: the solution does not change if the number of agents or the size of the environment changes. Forest fires and the 2013 Ebola outbreak in West Africa are presented as examples.
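
As an illustration of a "local interactions" model (a sketch in the spirit of the talk, not the speaker's code), the toy lattice below updates each cell based only on its four neighbors, mimicking probabilistic fire spread. The spread probability and grid size are assumptions.

```python
# Toy lattice model: each cell's next state depends only on its neighbors.
import numpy as np

HEALTHY, BURNING, BURNT = 0, 1, 2
rng = np.random.default_rng(0)

def step(grid, p_spread=0.3):
    new = grid.copy()
    for i, j in np.argwhere(grid == BURNING):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighborhood
            ni, nj = i + di, j + dj
            if (0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]
                    and grid[ni, nj] == HEALTHY and rng.random() < p_spread):
                new[ni, nj] = BURNING       # fire spreads only to neighbors
        new[i, j] = BURNT                   # burning cells burn out
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = BURNING                      # ignite the center
for _ in range(30):
    grid = step(grid)
```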

Organizers: Sebastian Trimpe


  • Charlotte Le Mouel
  • 2P4, Heisenbergstr. 3, 70569 Stuttgart

Theories of motor control in neuroscience usually focus on the role of the nervous system in the coordination of movement. However, the literature in sports science, as well as in embodied robotics, suggests that improvements in motor performance can be achieved by improving the mechanical properties of the body itself, rather than only the control. I therefore developed the thesis that efficient motor coordination in animals and humans relies on the postural system adjusting the body's mechanical properties to the task at hand.

Organizers: Charlotte Le Mouel Alexander Badri-Sprowitz


  • Mario Herger
  • Kupferbau Universität Tübingen, Hörsaal 22

More than 1,000 self-driving test vehicles from a total of 57 companies are already driving around Silicon Valley, and now the Google sister company Waymo is about to put 82,000 robotaxis on the road. And not at some distant point in the future, but this year. Meanwhile, Tesla is gearing up for a frontal assault on the German manufacturers with its fully electric Model 3. In the USA, sales of German mid-size cars have dropped by 29 percent compared to the previous year.


Still, In Motion

Talk
  • 12 October 2018 • 11:00 - 12:00
  • Michael Cohen

In this talk, I will take an autobiographical approach, both to explain where computer graphics has come from since the early days of rendering and to point towards where we are going in this new world of smartphones and social media. We are at a point in history where the ability to express oneself with media is unparalleled. The ubiquity and power of mobile devices, coupled with new algorithmic paradigms, are opening new expressive possibilities weekly. At the same time, these new creative media (composite imagery, augmented imagery, short-form video, 3D photos) also offer unprecedented abilities to move freely between what is real and unreal. I will focus on the spaces in between images and video, and in between objective and subjective reality. Finally, I will close with some lessons learned along the way.


  • Mariacarla Memeo
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

The increasing availability of online resources and the widespread practice of storing data over the internet raise the problem of their accessibility for visually impaired people. A translation from the visual domain to the available modalities is therefore necessary to study whether this access is at all possible. However, the translation of information from vision to touch is necessarily impaired due to the superiority of vision during the acquisition process. Yet compromises exist, as visual information can be simplified and sketched: a picture can become a map, an object a geometrical shape. Under some circumstances, and with a reasonable loss of generality, touch can substitute for vision.

In particular, when touch substitutes for vision, data can be differentiated by adding a further dimension to the tactile feedback, i.e. extending tactile feedback to three dimensions instead of two. This mode was chosen because it mimics our natural way of following object profiles with our fingers. Specifically, regardless of whether a hand lying on an object is moving or not, our tactile and proprioceptive systems are both stimulated and tell us something about which object we are manipulating and what its shape and size might be.

The goal of this talk is to describe how to exploit tactile stimulation to render digital information non-visually, so that cognitive maps associated with this information can be efficiently elicited from visually impaired persons. In particular, the focus is on delivering geometrical information in a learning scenario. Moreover, completely blind interaction with a virtual environment in a learning scenario has been little investigated, because visually impaired subjects are often passive agents of exercises with fixed environment constraints. For this reason, during the talk I will provide my personal answer to the question: can visually impaired people manipulate dynamic virtual content through touch? This process is much more challenging than merely exploring and learning virtual content, but at the same time it leads to a more conscious and dynamic creation of the spatial understanding of an environment during tactile exploration.
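
As a schematic illustration of adding a third (force) dimension to tactile feedback, the sketch below uses the simplest penalty-based rendering rule: penetration into a virtual surface produces a proportional restoring force. The surface, stiffness, and probe interface are illustrative assumptions, not the speaker's system.

```python
# Penalty-based haptic rendering sketch: force grows with penetration depth.
import numpy as np

STIFFNESS = 500.0                     # N/m, illustrative

def surface_height(x, y):             # virtual relief to be explored by touch
    return 0.02 * np.sin(10 * x) * np.sin(10 * y)

def contact_force(probe_pos):
    """Upward restoring force when the probe penetrates the surface."""
    x, y, z = probe_pos
    depth = surface_height(x, y) - z
    if depth <= 0:
        return np.zeros(3)            # no contact, no force
    return np.array([0.0, 0.0, STIFFNESS * depth])

force = contact_force(np.array([0.1, 0.2, -0.005]))  # probe slightly below relief
```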

Organizers: Katherine J. Kuchenbecker