Institute Talks

SDF-2-SDF: 3D Reconstruction of Rigid and Deformable Objects from RGB-D Videos

Talk
  • 19 October 2017 • 10:00 - 11:00
  • Slobodan Ilic and Mira Slavcheva
  • PS Seminar Room (N3.022)

In this talk we will address the problem of 3D reconstruction of rigid and deformable objects from a single depth video stream. Traditional 3D registration techniques, such as ICP and its variants, are widespread and effective, but sensitive to initialization and noise due to the underlying correspondence estimation procedure. Therefore, we have developed SDF-2-SDF, a dense, correspondence-free method which aligns a pair of implicit representations of scene geometry, e.g. signed distance fields, by minimizing their direct voxel-wise difference. In its rigid variant, we apply it to static object reconstruction via real-time frame-to-frame camera tracking and subsequent multi-view pose optimization, achieving higher accuracy and a wider convergence basin than ICP variants. Its extension to scene reconstruction, SDF-TAR, carries out the implicit-to-implicit registration over several limited-extent volumes anchored in the scene and runs simultaneous GPU tracking and CPU refinement, with a lower memory footprint than other SLAM systems. Finally, to handle non-rigidly moving objects, we incorporate the SDF-2-SDF energy in a variational framework, regularized by a damped approximately Killing vector field. The resulting system, KillingFusion, is able to reconstruct objects undergoing topological changes and fast inter-frame motion in near real time.
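
A minimal sketch of the voxel-wise energy described above, assuming two truncated SDFs sampled on the same voxel grid; the function name and the truncation-band handling are illustrative assumptions, not taken from the SDF-2-SDF implementation:

```python
import numpy as np

def sdf2sdf_energy(phi_ref, phi_cur, band=1.0):
    """Direct voxel-wise SDF-to-SDF energy: 0.5 * sum of squared differences.

    phi_ref, phi_cur : 3D arrays of truncated signed distances for the
    reference and current frame, sampled on the same voxel grid.
    Only voxels inside the truncation band of either field contribute.
    """
    mask = (np.abs(phi_ref) < band) | (np.abs(phi_cur) < band)
    diff = phi_ref[mask] - phi_cur[mask]
    return 0.5 * np.sum(diff ** 2)
```

In the rigid variant this energy would be minimized over the camera pose used to build phi_cur; the sketch only shows how the energy itself is evaluated.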

Organizers: Fatma Güney

3D lidar mapping: an accurate and performant approach

Talk
  • 20 October 2017 • 11:30 - 12:30
  • Michiel Vlaminck
  • PS Seminar Room (N3.022)

In my talk I will present my work on 3D mapping using lidar scanners. I will give an overview of the SLAM problem and its main challenges: robustness, accuracy and processing speed. Regarding robustness and accuracy, we investigate a better point cloud representation based on resampling and surface reconstruction. Moreover, we demonstrate how it can be incorporated into an ICP-based scan matching technique. Finally, we elaborate on globally consistent mapping using loop closures. Regarding processing speed, we propose the integration of our scan matching into a multi-resolution scheme and a GPU-accelerated implementation using our programming language Quasar.
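
Since the talk builds on ICP-based scan matching, here is a minimal, generic sketch of one point-to-point ICP iteration (nearest-neighbour matching followed by a closed-form Kabsch/SVD alignment); it is an illustration only, not the speaker's Quasar pipeline:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration for (N, 3) and (M, 3) point clouds:
    match each source point to its nearest target point, then solve for
    the rigid transform (R, t) in closed form."""
    tree = cKDTree(target)
    _, idx = tree.query(source)              # nearest-neighbour correspondences
    matched = target[idx]

    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Iterating this step (applying R and t to the source cloud each time) until the alignment error stops improving gives the basic scan matcher that a multi-resolution scheme can then accelerate.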

Organizers: Simon Donne

TBA

IS Colloquium
  • 23 October 2017 • 11:15 - 12:15
  • Simon Lacoste-Julien

Ray Tracing for Computer Vision

Talk
  • 08 April 2016 • 10:30 - 11:30
  • Helge Rhodin
  • MRC Seminar Room

Proper handling of occlusions is a big challenge for model-based reconstruction; for multi-view motion capture, for example, a major difficulty is the handling of occluding body parts. We propose a smooth volumetric scene representation, which implicitly converts occlusion into a smooth and differentiable phenomenon (ICCV 2015). Our ray tracing image formation model helps to express the objective in a single closed-form expression. This is in contrast to existing surface (mesh) representations, where occlusion is a local effect that causes non-differentiability and is difficult to optimize. We demonstrate improvements for multi-view scene reconstruction, rigid object tracking, and motion capture. Moreover, I will show an application of motion tracking to the interactive control of virtual characters (SIGGRAPH Asia 2015).
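
A minimal sketch of the smooth-occlusion idea: with a sum-of-Gaussians volumetric density, visibility along a ray becomes a differentiable accumulation of absorption instead of a hard surface test. The sampling scheme and parameter names below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def ray_transmittance(origin, direction, centers, sigmas, magnitudes,
                      t_max=5.0, n_samples=128):
    """Fraction of light surviving along one ray through a density that is
    a sum of isotropic Gaussians (centers: (K, 3), sigmas/magnitudes: (K,))."""
    ts = np.linspace(0.0, t_max, n_samples)
    dt = ts[1] - ts[0]
    points = origin[None, :] + ts[:, None] * direction[None, :]        # (S, 3)
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)     # (S, K)
    density = (magnitudes[None, :]
               * np.exp(-0.5 * d2 / sigmas[None, :] ** 2)).sum(-1)     # (S,)
    return np.exp(-np.cumsum(density) * dt)                            # (S,)
```

Because every operation here is smooth in the Gaussian parameters, gradients with respect to the model (e.g. body-part positions) are available everywhere, which hard mesh-based occlusion does not provide.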


  • Aamir Ahmad
  • MRC Seminar Room

The core focus of my research is on robot perception. Within this broad categorization, I am mainly interested in understanding how teams of robots and sensors can cooperate and/or collaborate to improve the perception of themselves (self-localization) as well as of their surroundings (target tracking, mapping, etc.). In this talk I will describe the inter-dependencies of such perception modules and present state-of-the-art methods to perform unified cooperative state estimation. The trade-off between estimation accuracy and computational speed will be highlighted through a new optimization-based method for unified state estimation. Furthermore, I will also describe how perception-based multi-robot formation control can be achieved. Towards the end, I will present some recent results on cooperative vision-based target tracking and a few comments on our ongoing work regarding cooperative aerial mapping with a human in the loop.


  • Valsamis Ntouskos
  • MRC Seminar Room

Modeling and reconstruction of shape and motion are problems of fundamental importance in computer vision. Inverse problem theory constitutes a powerful mathematical framework for dealing with ill-posed problems such as those typically arising in shape and motion modeling. In this talk, I will present methods inspired by inverse problem theory for dealing with four different shape and motion modeling problems. In particular, in the context of shape modeling, I will present a method for component-wise modeling of articulated objects and its application in computing 3D models of animals. Additionally, I will discuss the problem of modeling specular surfaces via their material properties, and I will also present a model for confidence-driven depth image fusion based on total variation regularization. Regarding motion, I will discuss a method for the recognition of human actions from motion capture data based on nonparametric Bayesian models.


Computer Vision on UAVs – practical considerations

Talk
  • 10 March 2016 • 11:00 - 12:00
  • Eric Price
  • MRZ Seminar Room

Computer vision on flying robots, or UAVs, brings its own challenges, especially if conducted in real time. On-board processing is limited by tight weight and size constraints for the electronics, while off-board processing is challenged by signal delays and connection quality, especially considering the data rates required for high-fps, high-resolution video. Unlike ground-based vehicles, precision odometry is unavailable. Positional information is provided by GPS, which can suffer signal losses and limited precision, especially near terrain. Exact orientation can be even more problematic due to magnetic interference and vibration affecting the sensors. In this talk I'd like to present and discuss some examples of practical problems encountered when trying to get robotics airborne, as well as possible solutions.

Organizers: Alina Allakhverdieva


  • Catrin Misselhorn
  • Max Planck Haus Lecture Hall

The development of increasingly intelligent and autonomous technologies will inevitably lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. It will, therefore, be necessary in the long run to develop machines which have the capacity for a certain amount of autonomous moral decision-making. The goal of this talk is to provide the theoretical foundations for artificial morality, i.e., for implementing moral capacities in artificial systems in general and a roadmap for developing an assistive system in geriatric care which is capable of moral learning.

Organizers: Ludovic Righetti, Philipp Hennig


From image restoration to image understanding

Talk
  • 03 March 2016 • 11:30 - 12:00
  • Lars Mescheder
  • MRZ Seminar Room

Inverse problems are ubiquitous in image processing and in applied science in general. Such problems describe the challenge of computing the parameters that characterize a system from its outcomes. While this might seem easy at first for simple systems, many inverse problems share a property that makes them much more intricate: they are ill-posed. This means that either the problem does not have a unique solution or that this solution does not depend continuously on the outcomes of the system. Bayesian statistics provides a framework that allows such problems to be treated in a systematic way. The missing piece of information is encoded as a prior distribution on the space of possible solutions. In this talk, we will study probabilistic image models as priors for statistical inversion. In particular, we will give a probabilistic interpretation of the classical TV prior and discuss how this interpretation can be used as a starting point for more complex models. We will see that many important auxiliary quantities, such as edges and regions, can be incorporated into the model in the form of latent variables. This leads to the conjecture that many image processing tasks, such as denoising and segmentation, should not be considered separately, but instead be treated together.
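
As a small illustration of the probabilistic reading of the TV prior, the sketch below computes a MAP estimate under a Gaussian likelihood and a smoothed TV prior p(x) ∝ exp(-λ·TV(x)) by plain gradient descent; the smoothing, step size and iteration count are illustrative choices, not the model discussed in the talk:

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.05, n_iter=200, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x), i.e. the MAP
    estimate for a Gaussian likelihood with a smoothed TV prior.
    y: noisy 2D image; eps smooths the non-differentiable TV term."""
    x = y.copy()
    for _ in range(n_iter):
        gx = np.diff(x, axis=0, append=x[-1:, :])        # forward differences
        gy = np.diff(x, axis=1, append=x[:, -1:])
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        # TV gradient = -div(grad x / |grad x|), built from backward differences.
        div = (np.diff(gx / norm, axis=0, prepend=np.zeros((1, x.shape[1])))
               + np.diff(gy / norm, axis=1, prepend=np.zeros((x.shape[0], 1))))
        x -= step * ((x - y) - lam * div)
    return x
```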


Images of planets orbiting other stars

Talk
  • 01 March 2016 • 11:00 - 12:00
  • Sascha Quanz
  • AGBS Seminar Room

The detection and characterization of planets orbiting stars other than the Sun, i.e., so-called extrasolar planets, is one of the fastest growing and most vibrant research fields in modern astrophysics. In the last 25 years, more than 5400 extrasolar planets and planet candidates have been revealed, but the vast majority of these objects were detected with indirect techniques, where the existence of the planet is inferred from periodic changes in the light coming from the central star. No photons from the planets themselves are detected. In this talk, however, I will focus on the direct detection of extrasolar planets. On the one hand, I will describe the main challenges that have to be overcome in order to image planets around other stars. In addition to using the world’s largest telescopes and optimized cameras, it was realized in the last few years that significant sensitivity gains can be achieved by applying advanced image processing techniques. On the other hand, I will demonstrate what can be learned if one is successful in “taking a picture” of an extrasolar planet. After all, there must be good scientific reasons and a strong motivation why the direct detection of extrasolar planets is one of the key science drivers for current and future projects on major ground- and space-based telescopes.

Organizers: Diana Rebmann


Interaction of Science and Art

Talk
  • 24 February 2016 • 11:30 - 12:30
  • Helga Griffiths
  • MRZ Seminar Room

Helga Griffiths is a multi-sense artist working at the intersection of science and art. For over 20 years she has been integrating various sensory stimuli into her “multi-sense” installations. Her work typically aims to produce a sensory experience that transcends conventional boundaries of perception.

Organizers: Emma-Jayne Holderness


  • Felix Berkenkamp
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Bayesian optimization is a powerful tool that has been successfully used to automatically optimize the parameters of a fixed control policy. It has many desirable properties, such as data efficiency and the ability to handle noisy measurements. However, standard Bayesian optimization does not consider any constraints imposed by the real system, which limits its application to highly controlled environments. In this talk, I will introduce an extension of this framework which additionally considers multiple safety constraints during the optimization process. This method enables safe parameter optimization by only evaluating parameters that fulfill all safety constraints with high probability. I will show several experiments on a quadrotor vehicle that demonstrate the method. Lastly, I will briefly talk about how the ideas behind safe Bayesian optimization can be used to safely explore unknown environments (MDPs).
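
A minimal sketch of the safe-set idea behind such methods: keep only those parameters whose Gaussian-process lower confidence bound on the measured safety value stays above a threshold. It uses scikit-learn's GP regressor for brevity; the variable names and the simple grid of candidates are assumptions, not the speaker's implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def safe_candidates(gp_constraint, candidates, h_min=0.0, beta=2.0):
    """Return the candidates whose lower confidence bound mu - beta*std on
    the safety value exceeds h_min, i.e. are safe with high probability."""
    mu, std = gp_constraint.predict(candidates, return_std=True)
    return candidates[mu - beta * std > h_min]

# Usage sketch: fit the constraint GP on already-evaluated parameters,
# then restrict the next optimization query to the safe set.
X = np.array([[0.2], [0.5], [0.8]])    # evaluated controller parameters
g = np.array([0.4, 0.6, 0.1])          # measured safety values (must stay > 0)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, g)
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
safe_set = safe_candidates(gp, grid)
```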

Organizers: Sebastian Trimpe


  • Aldo Faisal
  • MPH Lecture Hall

Our research questions are centred on a basic characteristic of human brains: variability in their behaviour and its underlying meaning for cognitive mechanisms. Such variability is emerging as a key ingredient in understanding biological principles (Faisal, Selen & Wolpert, 2008, Nature Rev Neurosci) and yet lacks adequate quantitative and computational methods for description and analysis. Crucially, we find that biological and behavioural variability contains important information that our brain and our technology can make use of (instead of just averaging it away). Using advanced body sensor networks, we measured eye movements and full-body and hand kinematics of humans living in a studio flat, and we are going to present some insightful results on motor control and visual attention which suggest that the control of behaviour "in-the-wild" differs in predictable ways from what we measure "in-the-lab". The results have implications for robotics, prosthetics and neuroscience.

Organizers: Matthias Hohmann