Institute Talks

Safe Learning Control for Gaussian Process Models

Talk
  • 25 February 2020 • 14:00 - 15:00
  • Jonas Umlauft
  • MPI-IS Stuttgart, Heisenbergstr. 3, seminar room 2P4

Machine learning allows automated systems to identify structures and physical laws from measured data, which is particularly useful in areas where an analytic derivation of a model is too tedious or not possible. Research in reinforcement learning has led to impressive results and superhuman performance in well-structured tasks and games. However, to this day, data-driven models are rarely employed in the control of safety-critical systems, because the success of a controller based on these models cannot be guaranteed. Therefore, the research presented in this talk analyzes the closed-loop behavior of learning control laws by means of rigorous proofs. More specifically, we propose a control law based on Gaussian process (GP) models, which actively avoids uncertain regions of the state space and favors trajectories along the training data, where the system is well known. We show that this behavior is optimal in the sense that it maximizes the probability of asymptotic stability. Additionally, we consider an event-triggered online learning control law, which safely explores an initially unknown system. It takes new training data only when the uncertainty about the system becomes too large. Since the control law only requires a locally precise model, this novel learning strategy is highly data efficient and provides safety guarantees.
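
The key mechanism described above can be illustrated with a short sketch (an illustrative assumption, not the speaker's implementation): a GP model of the dynamics is queried for its predictive uncertainty, and new training data is taken only when that uncertainty exceeds a threshold. The controller, the dynamics, and the threshold sigma_max are placeholders.

```python
# Minimal sketch of event-triggered GP learning control (assumptions only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

gp = GaussianProcessRegressor()          # GP model of the unknown dynamics
X_train, y_train = [], []                # collected state-action / transition data
sigma_max = 0.1                          # assumed uncertainty threshold for triggering

def control_step(x, nominal_controller, true_dynamics):
    """One closed-loop step: act, and learn only if the model is too uncertain."""
    u = nominal_controller(x)
    x_next = true_dynamics(x, u)
    if len(X_train) == 0:
        sigma = np.inf                   # no data yet: maximally uncertain
    else:
        # Query the GP's predictive standard deviation at the current point.
        _, sigma = gp.predict(np.hstack([x, u]).reshape(1, -1), return_std=True)
    if np.max(sigma) > sigma_max:
        # Event trigger fires: take a new training sample and refit the model.
        X_train.append(np.hstack([x, u]))
        y_train.append(x_next)
        gp.fit(np.array(X_train), np.array(y_train))
    return x_next
```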

Organizers: Sebastian Trimpe

Learning to Model 3D Human Face Geometry

Talk
  • 20 March 2020 • 11:00 - 12:00
  • Victoria Fernández Abrevaya
  • N3.022 (Aquarium)

In this talk I will present an overview of our recent works that learn deep geometric models of the 3D face from large datasets of scans. Priors for the 3D face are crucial for many applications: to constrain ill-posed problems such as 3D reconstruction from monocular input, for efficient generation and animation of 3D virtual avatars, or even in medical domains such as the recognition of craniofacial disorders. Generative models of the face have been widely used for this task, and deep learning approaches have recently emerged as a robust alternative. Barring a few exceptions, most of these data-driven approaches were built either from a relatively limited number of samples (in the case of linear shape models) or with synthetic data augmentation (for deep-learning-based approaches), mainly due to the difficulty of obtaining large-scale and accurate 3D scans of the face. Yet there is a substantial amount of 3D information that can be gathered from the publicly available datasets captured over the last decade. I will discuss here our works that tackle the challenges of building rich geometric models out of these large and varied datasets, with the goal of modeling the facial shape, expression (i.e. motion), or geometric details. Concretely, I will talk about (1) an efficient and fully automatic approach for the registration of large datasets of 3D faces in motion; (2) deep learning methods for modeling the facial geometry that can disentangle the shape and expression aspects of the face; and (3) a multi-modal learning approach for capturing geometric details from images in the wild, by simultaneously encoding both facial surface normal and natural image information.
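
As a rough illustration of the disentanglement idea in (2), and not of the presented models themselves, the following sketch shows a mesh autoencoder whose latent code is split into separate shape and expression parts, which makes expression transfer a matter of swapping sub-vectors. All dimensions are made up.

```python
# Minimal sketch of a shape/expression-disentangled face autoencoder (assumed sizes).
import torch
import torch.nn as nn

N_VERTS, D_SHAPE, D_EXPR = 5023, 32, 16   # illustrative mesh and latent sizes

class DisentangledFaceAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_VERTS * 3, 256), nn.ReLU(),
                                     nn.Linear(256, D_SHAPE + D_EXPR))
        self.decoder = nn.Sequential(nn.Linear(D_SHAPE + D_EXPR, 256), nn.ReLU(),
                                     nn.Linear(256, N_VERTS * 3))

    def forward(self, verts):
        z = self.encoder(verts.flatten(1))
        z_shape, z_expr = z[:, :D_SHAPE], z[:, D_SHAPE:]   # disentangled parts
        return self.decoder(torch.cat([z_shape, z_expr], dim=1)), z_shape, z_expr

# Expression transfer: decode face A's identity with face B's expression.
model = DisentangledFaceAE()
a, b = torch.randn(1, N_VERTS, 3), torch.randn(1, N_VERTS, 3)
_, za_shape, _ = model(a)
_, _, zb_expr = model(b)
transferred = model.decoder(torch.cat([za_shape, zb_expr], dim=1))
```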

Organizers: Jinlong Yang

Electro-active Ionic Elastomers

Talk
  • 23 March 2020 • 11:00 - 12:00
  • Prof. Antal Jákli
  • 2P04

Motivated by the low-voltage-driven actuation of ionic electroactive polymers (iEAPs) [1][2], we recently began investigating ionic elastomers. In this talk I will discuss the preparation, physical characterization, and electric bending actuation properties of two novel ionic elastomers: ionic polymer electrolyte membranes (iPEMs) [3] and ionic liquid crystal elastomers (iLCEs) [4]. Both materials can be actuated by low-frequency AC or DC voltages of less than 1 V. The bending actuation properties of the iPEMs outperform most of the well-developed iEAPs, and even the first, not yet optimized iLCEs are already comparable to them. Ionic liquid crystal elastomers also exhibit superior features, such as alignment-dependent actuation, which offers the possibility of pre-programming the actuation pattern during the cross-linking process. Additionally, multiple (thermal, optical, and electric) actuations are possible. I will also discuss issues with compliant electrodes and possible soft robotic applications.

[1] Y. Bar-Cohen, Electroactive Polymer Actuators as Artificial Muscles: Reality, Potential and Challenges, SPIE Press, Bellingham, 2004.
[2] O. Kim, S. J. Kim, M. J. Park, Chem. Commun. 2018, 54, 4895.
[3] C. P. H. Rajapaksha, C. Feng, C. Piedrahita, J. Cao, V. Kaphle, B. Lüssem, T. Kyu, A. Jákli, Macromol. Rapid Commun. 2020, in print.
[4] C. Feng, C. P. H. Rajapaksha, J. M. Cedillo, C. Piedrahita, J. Cao, V. Kaphle, B. Lussem, T. Kyu, A. I. Jákli, Macromol. Rapid Commun. 2019, 1900299.

Biomechanical models and functional anatomy of the horse body

Talk
  • 23 March 2020 • 12:00 - 12:45
  • Elin Herlund
  • N3.022 (Aquarium)

“There’s something about the outside of a horse that is good for the inside of a man”, Churchill allegedly said. The horse’s motion has captured the interest of humans throughout history. An understanding of the mechanics of horse motion has been sought in early work by Aristotle (300 BC), in the pioneering photographic studies of Muybridge (1880), and in modern-day scientific publications.

The horse (Equus ferus caballus) is a remarkable animal athlete with outstanding running capabilities. The efficiency of its locomotion is explained by specialised anatomical features, which limit the degrees of freedom of movement and reduce energy consumption. Theoretical mechanical models are quite well suited to describe the essence of equine gaits and provide us with simple measures for analysing gait asymmetry. Such measures are much needed, since agreement between veterinarians is moderate to poor when it comes to the visual assessment of lameness.

The human visual system indeed has clear limitations in the perception and interpretation of horse motion. This limits our ability to understand the horse: not only to detect lameness and predict performance, but also to interpret its non-verbal communication and to detect signs of illness or discomfort.

This talk will provide a brief overview of existing motion analysis techniques and models in equine biomechanics. We will discuss future possibilities to achieve more accessible, sensitive and complex ways of analysing the motion of the horse.
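
As an aside on the simple asymmetry measures mentioned above, the sketch below (an illustrative assumption, not a validated clinical tool) computes a basic stride-asymmetry index of the kind used in quantitative lameness analysis: the difference between the two vertical-displacement minima of the head or pelvis within one stride.

```python
# Minimal sketch of a vertical-displacement asymmetry index (illustrative only).
import numpy as np

def min_diff(vertical_displacement, stride_midpoint):
    """Difference between the lowest points reached during the two halves of a
    stride; values near zero indicate symmetric movement."""
    first_half = vertical_displacement[:stride_midpoint]
    second_half = vertical_displacement[stride_midpoint:]
    return float(np.min(first_half) - np.min(second_half))

# Example with a synthetic, slightly asymmetric displacement signal (in mm).
t = np.linspace(0, 2 * np.pi, 200)
signal = 30 * np.sin(2 * t) + 5 * np.sin(t)   # two dips per stride, of unequal depth
print(min_diff(signal, stride_midpoint=100))
```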

A New Framework for Understanding Biological Vision

IS Colloquium
  • 03 September 2019 • 11:00 - 12:00
  • Zhaoping Li
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Visual attention selects a tiny amount of information that can be deeply processed by the brain, and gaze shifts bring the selected visual object to the fovea, the center of the visual field, for better visual decoding or recognition of the selected objects. Therefore, central and peripheral vision should differ qualitatively in visual decoding, rather than just quantitatively in visual acuity.

Organizers: Katherine J. Kuchenbecker


  • Björn Browatzki
  • PS Aquarium

Current solutions to discriminative and generative tasks in computer vision exist separately and often lack interpretability and explainability. Using faces as our application domain, here we present an architecture that is based around two core ideas that address these issues: first, our framework learns an unsupervised, low-dimensional embedding of faces using an adversarial autoencoder that is able to synthesize high-quality face images. Second, a supervised disentanglement splits the low-dimensional embedding vector into four sub-vectors, each of which contains separated information about one of four major face attributes (pose, identity, expression, and style) that can be used both for discriminative tasks and for manipulating all four attributes in an explicit manner. The resulting architecture achieves state-of-the-art image quality, good discrimination and face retrieval results on each of the four attributes, and supports various face editing tasks using a face representation of only 99 dimensions. Finally, we apply the architecture's robust image synthesis capabilities to visually debug label-quality issues in an existing face dataset.
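
To make the sub-vector idea concrete, here is a minimal sketch (an assumption, not the presented system) of attribute-specific face retrieval on such a 99-dimensional embedding; the split sizes and the retrieve function are invented for illustration.

```python
# Minimal sketch: split a 99-d face embedding into attribute sub-vectors and
# retrieve by one attribute only (assumed split sizes).
import numpy as np

SPLITS = {"pose": slice(0, 9), "identity": slice(9, 54),
          "expression": slice(54, 84), "style": slice(84, 99)}

def retrieve(query_emb, gallery_embs, attribute="identity", k=5):
    """Return indices of the k gallery faces closest to the query in the chosen
    attribute sub-space (cosine similarity)."""
    q = query_emb[SPLITS[attribute]]
    g = gallery_embs[:, SPLITS[attribute]]
    sims = g @ q / (np.linalg.norm(g, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)[:k]

gallery = np.random.randn(1000, 99)   # stand-in for encoded gallery faces
query = np.random.randn(99)           # stand-in for an encoded query face
print(retrieve(query, gallery, attribute="expression"))
```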

Organizers: Timo Bolkart


  • Gunhyuk Park
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Hapticians have designed and implemented haptic effects for many kinds of user interaction. Over several decades, they have shown that haptic feedback can improve multiple facets of the user experience, such as task performance, by analyzing and exploiting user perception and by substituting for other sensory modalities. This talk introduces two representative rendering methods for providing vibrotactile effects to users: 2D phantom sensation, which creates an illusory tactile percept between multiple real vibrotactile actuators, and vibrotactile dimensional reduction, which reduces 3D acceleration data from real interactions to a 1D vibration while maximizing realism and perceptual similarity.
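
The second method can be sketched roughly as follows (an assumption about the general approach, not the speaker's algorithm): combine the per-frequency energy of the three acceleration axes into a single spectrum, borrow a plausible phase, and invert back to a 1D drive signal.

```python
# Rough sketch of a 3-axis-to-1D vibrotactile reduction (assumptions only).
import numpy as np

def reduce_3d_to_1d(acc_xyz):
    """acc_xyz: array of shape (N, 3) with accelerometer samples."""
    spectra = np.fft.rfft(acc_xyz, axis=0)                     # per-axis spectra
    magnitude = np.sqrt(np.sum(np.abs(spectra) ** 2, axis=1))  # combined spectral energy
    phase = np.angle(np.sum(spectra, axis=1))                  # phase of the summed spectrum
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=acc_xyz.shape[0])

acc = np.random.randn(1024, 3)       # stand-in for a recorded contact vibration
drive_signal = reduce_3d_to_1d(acc)  # 1D signal for a single vibrotactile actuator
```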

Organizers: Katherine J. Kuchenbecker


  • Yoshihiro Kanamori
  • PS-Aquarium

Relighting of human images has various applications in image synthesis. For relighting, we must infer albedo, shape, and illumination from a human portrait. Previous techniques rely on human faces for this inference, based on spherical harmonics (SH) lighting. However, because they often ignore light occlusion, the inferred shapes are biased and relit images are unnaturally bright, particularly at hollowed regions such as armpits, crotches, or garment wrinkles. This paper introduces the first attempt to infer light occlusion directly in the SH formulation. Based on supervised learning with convolutional neural networks (CNNs), we infer not only an albedo map and illumination but also a light transport map that encodes occlusion as nine SH coefficients per pixel. The main difficulty in this inference is the lack of training datasets compared to the unlimited variation of human portraits. Surprisingly, geometric information including occlusion can be inferred plausibly even with a small dataset of synthesized human figures, by carefully preparing the dataset so that the CNNs can exploit its coherency. Our method accomplishes more realistic relighting than the occlusion-ignored formulation.
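
The light transport formulation lends itself to a compact sketch (illustrative only, not the paper's code): with nine SH transport coefficients per pixel, relighting reduces to a per-pixel dot product with the nine SH coefficients of the environment light, scaled by albedo.

```python
# Minimal sketch of SH-based relighting with a per-pixel transport map (stand-in data).
import numpy as np

H, W = 256, 256
albedo = np.random.rand(H, W, 3)      # inferred albedo map (stand-in)
transport = np.random.rand(H, W, 9)   # inferred per-pixel SH transport, occlusion included (stand-in)
light = np.random.rand(9, 3)          # SH illumination, 9 coefficients per RGB channel

# Shading per channel: s(p) = sum_k transport_k(p) * light_k
shading = np.einsum('hwk,kc->hwc', transport, light)
relit = albedo * shading              # relit image (before any tone mapping)
```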

Organizers: Senya Polikovsky, Jinlong Yang


Self-supervised 3D hand pose estimation

Talk
  • 23 July 2019 • 11:00 - 12:00
  • Chengde Wan
  • PS-Aquarium

Deep learning has significantly advanced the state of the art in 3D hand pose estimation, and accuracy can be improved further with increased amounts of labelled data. However, acquiring 3D hand pose labels can be extremely difficult. In this talk, I will present our two recent works on leveraging self-supervised learning techniques for hand pose estimation from depth maps. In both works, we incorporate a differentiable renderer into the network and formulate the training loss as a model-fitting error used to update the network parameters. In the first part of the talk, I will present our earlier work, which approximates the hand surface with a set of spheres. We then model the pose prior as a variational lower bound with a variational auto-encoder (VAE). In the second part, I will present our latest work on regressing the vertex coordinates of a hand mesh model with a 2D fully convolutional network (FCN) in a single forward pass. In the first stage, the network estimates a dense correspondence field from every pixel on the image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned in the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices and fit an articulated template mesh to them in closed form. Without any human annotation, both works perform competitively with strongly supervised methods. The latter work will also be extended to be compatible with the MANO model.
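
The shared self-supervised idea can be sketched as follows (a minimal sketch under assumptions, not the presented implementation); render_depth stands in for a differentiable hand renderer and is hypothetical.

```python
# Minimal sketch of a self-supervised model-fitting loss for hand pose estimation.
import torch

def self_supervised_loss(pose_net, render_depth, depth_map):
    """depth_map: (B, 1, H, W) observed depth; no pose labels are used."""
    pred_params = pose_net(depth_map)      # predicted hand pose/shape parameters
    rendered = render_depth(pred_params)   # differentiable rendering of the prediction
    valid = depth_map > 0                  # ignore background pixels
    fit_error = torch.abs(rendered - depth_map)[valid].mean()
    return fit_error                       # backpropagation updates pose_net
```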

Organizers: Dimitrios Tzionas


An introduction to bladder cancer & challenges for translational research

Talk
  • 22 July 2019 • 10:30 - 11:30
  • Richard T Bryan
  • 2P4


  • Christoph Keplinger
  • MPI-IS Stuttgart, Room 2R04 / MPI-IS Tübingen, Room N0.002 (Broadcast)

Robots today rely on rigid components and electric motors based on metal and magnets, making them heavy, unsafe near humans, expensive, and ill-suited for unpredictable environments. Nature, in contrast, makes extensive use of soft materials and has produced organisms that drastically outperform robots in terms of agility, dexterity, and adaptability. The Keplinger Lab aims to fundamentally challenge the current limitations of robotic hardware, using an interdisciplinary approach that synergizes concepts from soft matter physics and chemistry with advanced engineering technologies to introduce robotic materials – material systems that integrate actuation, sensing, and even computation – for a new generation of intelligent systems. This talk gives an overview of the fundamental research questions that inspire current and future research directions. One major theme of research is the development of new classes of actuators – a key component of all robotic systems – that replicate the sweeping success of biological muscle, a masterpiece of evolution featuring astonishing all-around actuation performance, the ability to self-heal after damage, and seamless integration with sensing. A second theme of research is functional polymers with unusual combinations of properties, such as electrical conductivity paired with stretchability, transparency, biocompatibility, and the ability to self-heal from mechanical and electrical damage. A third theme of research is the discovery of new energy capture principles that can provide power to intelligent autonomous systems and, on larger scales, enable sustainable solutions for the use of waste heat from industrial processes or the use of untapped sources of renewable energy, such as ocean waves.


  • Joseph B. Tracy
  • MPI-IS Stuttgart, Room 2P04

Magnetic fields and light can be used to assemble, manipulate, and heat nanoparticles (NPs) and to remotely actuate polymer composites. Simple soft robots will be presented, where incorporation of magnetic and plasmonic NPs makes them responsive to magnetic fields and light. Application of magnetic fields to dispersions of magnetic NPs drives their assembly into chains. Dipolar coupling within the chains is a source of magnetic anisotropy, and chains of magnetic NPs embedded in a polymer matrix can be used to program the response of soft robots, while still using simple architectures. Wavelength-selective photothermal triggering of shape recovery in shape memory polymers with embedded Au nanospheres and nanorods can be used to remotely drive sequential processes. Combining magnetic actuation and photothermal heating enables remote configuration, locking, unlocking, and reconfiguration of soft robots, thus increasing their capabilities. Composite and multifunctional NPs are of interest for expanding the properties and applications of NPs. Silica shells are desirable for facilitating functionalization with silanes and enhancing the stability of NPs. Methods for depositing thin silica shells with controlled morphologies onto Au nanorods and CdSe/CdS core/shell quantum dot nanorods will be presented. Silica deposition can also be accompanied by etching and breakage of the core NPs. Assembly of Fe3O4 NPs onto silica-overcoated Au nanorods allows for magnetic manipulation, while retaining the surface plasmon resonance.

Organizers: Metin Sitti


  • Shunsuke Saito
  • PS Aquarium

Realistic digital avatars are increasingly important in digital media with potential to revolutionize 3D face-to-face communication and social interactions through compelling digital embodiment of ourselves. My goal is to efficiently create high-fidelity 3D avatars from a single image input, captured in an unconstrained environment. These avatars must be close in quality to those created by professional capture systems, yet require minimal computation and no special expertise from the user. These requirements pose several significant technical challenges. A single photograph provides only partial information due to occlusions, and intricate variations in shape and appearance may prevent us from applying traditional template-based approaches. In this talk, I will present our recent work on clothed human reconstruction from a single image. We demonstrate that a careful choice of data representation that can be easily handled by machine learning algorithms is the key to robust and high-fidelity synthesis and inference for human digitization.

Organizers: Timo Bolkart


Cognitive Production Systems – AI in Production

Talk
  • 09 July 2019 • 14:00 - 15:00
  • Prof. Dr.-Ing. Marco Huber
  • MPI-IS Stuttgart, Heisenbergstr. 3, seminar room 2P4

Fraunhofer IPA in Stuttgart is one of the largest institutes within the Fraunhofer Society, with a strong focus on production technologies and automation. Research and technology transfer efforts on machine learning and artificial intelligence are concentrated at IPA's Center for Cyber Cognitive Intelligence (CCI). This talk gives an introduction to CCI's mission and the typical industrial applications being addressed. Furthermore, an overview of the research areas and a deep dive into selected topics are provided. Examples include 6D pose estimation for robotic bin picking, explainable machine learning, Bayesian filtering for object tracking, and event correlation mining.
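
As a pointer to what "Bayesian filtering for object tracking" means in its simplest form, here is a minimal constant-velocity Kalman filter sketch (illustrative only, not IPA's implementation); all noise parameters are assumptions.

```python
# Minimal sketch: linear-Gaussian Bayesian filtering (Kalman filter) for 2D tracking.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])  # constant-velocity motion model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])                                 # only position is observed
Q, R = 0.01 * np.eye(4), 0.5 * np.eye(2)                                   # assumed process / measurement noise

x, P = np.zeros(4), np.eye(4)        # state [px, py, vx, vy] and its covariance

def kalman_step(x, P, z):
    # Predict with the motion model, then correct with the measurement z.
    x, P = F @ x, F @ P @ F.T + Q
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

for z in np.random.randn(20, 2):     # stand-in detections from a perception module
    x, P = kalman_step(x, P, z)
```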

Organizers: Sebastian Trimpe