

2017


Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets

Hausman, K., Chebotar, Y., Schaal, S., Sukhatme, G., Lim, J.

In Advances in Neural Information Processing Systems 30 (NIPS), (Editors: Guyon I. and Luxburg U.v. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (inproceedings)

am

pdf video [BibTex]



ConvWave: Searching for Gravitational Waves with Fully Convolutional Neural Nets

Gebhard, T., Kilbertus, N., Parascandolo, G., Harry, I., Schölkopf, B.

Advances in Neural Information Processing Systems 30 (NIPS), (Editors: Guyon I. and Luxburg U.v. and Bengio S. and Wallach H. and Fergus R. and Vishwanathan S. and Garnett R.), Curran Associates, Inc., December 2017 (conference)

ei

link (url) [BibTex]



Acoustic Fabrication via the Assembly and Fusion of Particles

Melde, K., Choi, E., Wu, Z., Palagi, S., Qiu, T., Fischer, P.

Advanced Materials, 1704507, December 2017 (article)

Abstract
Acoustic assembly promises a route toward rapid parallel fabrication of whole objects directly from solution. This study reports the contact-free and maskless assembly, and fixing of silicone particles into arbitrary 2D shapes using ultrasound fields. Ultrasound passes through an acoustic hologram to form a target image. The particles assemble from a suspension along lines of high pressure in the image due to acoustic radiation forces and are then fixed (crosslinked) in a UV-triggered reaction. For this, the particles are loaded with a photoinitiator by solvent-induced swelling. This localizes the reaction and allows the bulk suspension to be reused. The final fabricated parts are mechanically stable and self-supporting.

pf

link (url) DOI [BibTex]



Swimming Back and Forth Using Planar Flagellar Propulsion at Low Reynolds Numbers

Khalil, I. S. M., Tabak, A. F., Hamed, Y., Mitwally, M. E., Tawakol, M., Klingner, A., Sitti, M.

Advanced Science, 5(2):1700461, December 2017 (article)

Abstract
Peritrichously flagellated Escherichia coli swim back and forth by wrapping their flagella together in a helical bundle. However, other monotrichous bacteria cannot swim back and forth with a single flagellum and planar wave propagation. Motivated by this observation, a magnetically driven soft two-tailed microrobot capable of reversing its swimming direction without making a U-turn trajectory or actively modifying the direction of wave propagation is designed and developed. The microrobot contains magnetic microparticles within the polymer matrix of its head and consists of two collinear, unequal, and opposite ultrathin tails. It is driven and steered using a uniform magnetic field along the direction of motion with a sinusoidally varying orthogonal component. Distinct reversal frequencies that enable selective and independent excitation of the first or the second tail of the microrobot based on their tail length ratio are found. While the first tail provides a propulsive force below one of the reversal frequencies, the second is almost passive, and the net propulsive force achieves flagellated motion along one direction. On the other hand, the second tail achieves flagellated propulsion along the opposite direction above the reversal frequency.

pi

link (url) DOI [BibTex]



A deep learning based fusion of RGB camera information and magnetic localization information for endoscopic capsule robots

Turan, M., Shabbir, J., Araujo, H., Konukoglu, E., Sitti, M.

International Journal of Intelligent Robotics and Applications, 1(4):442-450, December 2017 (article)

Abstract
A reliable, real-time localization functionality is crucial for actively controlled capsule endoscopy robots, which are an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we extend the success of deep learning approaches from various research fields to the problem of sensor fusion for endoscopic capsule robots. We propose a multi-sensor fusion based localization approach which combines endoscopic camera information and magnetic sensor based localization information. Results on a real pig stomach dataset show that our method achieves sub-millimeter precision for both translational and rotational movements.

pi

link (url) DOI [BibTex]



On the Design of LQR Kernels for Efficient Controller Learning

Marco, A., Hennig, P., Schaal, S., Trimpe, S.

Proceedings of the 56th IEEE Conference on Decision and Control, December 2017 (conference) Accepted

Abstract
Finding optimal feedback controllers for nonlinear dynamic systems from data is hard. Recently, Bayesian optimization (BO) has been proposed as a powerful framework for direct controller tuning from experimental trials. For selecting the next query point and finding the global optimum, BO relies on a probabilistic description of the latent objective function, typically a Gaussian process (GP). As is shown herein, GPs with a common kernel choice can, however, lead to poor learning outcomes on standard quadratic control problems. For a first-order system, we construct two kernels that specifically leverage the structure of the well-known Linear Quadratic Regulator (LQR), yet retain the flexibility of Bayesian nonparametric learning. Simulations of uncertain linear and nonlinear systems demonstrate that the LQR kernels yield superior learning performance.
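As a rough illustration of the tuning setting the abstract describes, the sketch below runs GP-based Bayesian optimization with expected improvement to tune a scalar feedback gain on a simulated first-order system. It uses a standard squared-exponential kernel as a placeholder; the paper's LQR kernels would slot in via `rbf_kernel`. The system, cost function, and all parameters are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, ell=0.3, sf=1.0):
    """Standard squared-exponential kernel; the paper's LQR kernels would replace this."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and std at test points Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf_kernel(X, Xs), rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return Ks.T @ alpha, np.sqrt(np.maximum(var, 1e-12))

def cost(k, trials=20, rng=np.random.default_rng(0)):
    """Hypothetical noisy experiment: quadratic cost of gain k on x' = a*x + u, u = -k*x."""
    a, J = 0.9, 0.0
    for _ in range(trials):
        x = rng.normal()
        for _ in range(30):
            u = -k * x
            J += x**2 + 0.1 * u**2
            x = a * x + u + 0.05 * rng.normal()
    return J / trials

grid = np.linspace(0.0, 1.8, 200)           # candidate feedback gains
X = np.array([0.2, 1.5])                    # initial queries
y = np.array([cost(k) for k in X])
for _ in range(10):
    yn = (y - y.mean()) / (y.std() + 1e-9)  # normalize targets for the GP
    mu, sd = gp_posterior(X, yn, grid)
    z = (yn.min() - mu) / sd
    ei = (yn.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    k_next = grid[np.argmax(ei)]
    X, y = np.append(X, k_next), np.append(y, cost(k_next))
print("best gain found:", X[np.argmin(y)])
```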

am ics pn

arXiv PDF Project Page [BibTex]



Evaluation of high-fidelity simulation as a training tool in transoral robotic surgery

Bur, A. M., Gomez, E. D., Newman, J. G., Weinstein, G. S., O’Malley Jr., B. W., Rassekh, C. H., Kuchenbecker, K. J.

Laryngoscope, 127(12):2790-2795, December 2017 (article)

hi

DOI [BibTex]



Interactive Perception: Leveraging Action in Perception and Perception in Action

Bohg, J., Hausman, K., Sankaran, B., Brock, O., Kragic, D., Schaal, S., Sukhatme, G.

IEEE Transactions on Robotics, 33, pages: 1273-1291, December 2017 (article)

Abstract
Recent approaches in robotics follow the insight that perception is facilitated by interactivity with the environment. These approaches are subsumed under the term of Interactive Perception (IP). We argue that IP provides the following benefits: (i) any type of forceful interaction with the environment creates a new type of informative sensory signal that would otherwise not be present and (ii) any prior knowledge about the nature of the interaction supports the interpretation of the signal. This is facilitated by knowledge of the regularity in the combined space of sensory information and action parameters. The goal of this survey is to postulate this as a principle and collect evidence in support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of Interactive Perception. We close this survey by discussing the remaining open questions. Thereby, we hope to define a field and inspire future work.

am

arXiv DOI Project Page [BibTex]



Biohybrid actuators for robotics: A review of devices actuated by living cells

Ricotti, L., Trimmer, B., Feinberg, A. W., Raman, R., Parker, K. K., Bashir, R., Sitti, M., Martel, S., Dario, P., Menciassi, A.

Science Robotics, 2(12), November 2017 (article)

Abstract
Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.

pi

link (url) DOI [BibTex]



Active colloidal propulsion over a crystalline surface

Choudhury, U., Straube, A., Fischer, P., Gibbs, J., Höfling, F.

New Journal of Physics, November 2017 (article)

Abstract
We study both experimentally and theoretically the dynamics of chemically self-propelled Janus colloids moving atop a two-dimensional crystalline surface. The surface is a hexagonally close-packed monolayer of colloidal particles of the same size as the mobile one. The dynamics of the self-propelled colloid reflects the competition between hindered diffusion due to the periodic surface and enhanced diffusion due to active motion. Which contribution dominates depends on the propulsion strength, which can be systematically tuned by changing the concentration of a chemical fuel. The mean-square displacements obtained from the experiment exhibit enhanced diffusion at long lag times. Our experimental data are consistent with a Langevin model for the effectively two-dimensional translational motion of an active Brownian particle in a periodic potential, combining the confining effects of gravity and the crystalline surface with the free rotational diffusion of the colloid. Approximate analytical predictions are made for the mean-square displacement describing the crossover from free Brownian motion at short times to active diffusion at long times. The results are in semi-quantitative agreement with numerical results of a refined Langevin model that treats translational and rotational degrees of freedom on the same footing.
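The Langevin model sketched in the abstract lends itself to a compact simulation. The following is a minimal Euler-Maruyama integration of an active Brownian particle with free rotational diffusion in a periodic potential; the potential, parameter values, and time scales are illustrative stand-ins, not the fitted model from the paper.

```python
import numpy as np

# Illustrative parameters, not fitted to the experiments in the paper.
D_t, D_r, v0 = 0.05, 0.1, 1.0   # translational/rotational diffusion, self-propulsion
U0, a = 1.0, 1.0                # amplitude and period of the periodic landscape
dt, steps, walkers = 1e-3, 20000, 500

def grad_U(x, y):
    """Gradient of a simple separable periodic potential standing in for the
    gravity-plus-crystalline-surface landscape described in the abstract."""
    k = 2 * np.pi / a
    dUdx = -U0 * k * np.sin(k * x) * np.cos(k * y)
    dUdy = -U0 * k * np.cos(k * x) * np.sin(k * y)
    return dUdx, dUdy

rng = np.random.default_rng(1)
x = np.zeros(walkers); y = np.zeros(walkers)
phi = rng.uniform(0, 2 * np.pi, walkers)   # orientation of the propulsion axis

for _ in range(steps):
    dUdx, dUdy = grad_U(x, y)
    # Overdamped Langevin update: propulsion + potential force + thermal noise.
    x += dt * (v0 * np.cos(phi) - dUdx) + np.sqrt(2 * D_t * dt) * rng.normal(size=walkers)
    y += dt * (v0 * np.sin(phi) - dUdy) + np.sqrt(2 * D_t * dt) * rng.normal(size=walkers)
    phi += np.sqrt(2 * D_r * dt) * rng.normal(size=walkers)  # free rotational diffusion

msd = np.mean(x**2 + y**2)   # particles start at the origin
print(f"mean-square displacement after t = {steps * dt:.0f}: {msd:.2f}")
```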

pf

link (url) DOI [BibTex]



Wireless Acoustic-Surface Actuators for Miniaturized Endoscopes

Qiu, T., Adams, F., Palagi, S., Melde, K., Mark, A. G., Wetterauer, U., Miernik, A., Fischer, P.

ACS Applied Materials & Interfaces, November 2017, PMID: 29148713 (article)

Abstract
Endoscopy enables minimally invasive procedures in many medical fields, such as urology. However, current endoscopes are normally cable-driven, which limits their dexterity and makes them hard to miniaturize. Indeed, current urological endoscopes have an outer diameter of about 3 mm and still only possess one bending degree of freedom. In this paper, we report a novel wireless actuation mechanism that increases the dexterity and that permits the miniaturization of a urological endoscope. The novel actuator consists of thin active surfaces that can be readily attached to any device and are wirelessly powered by ultrasound. The surfaces consist of two-dimensional arrays of micro-bubbles, which oscillate under ultrasound excitation and thereby generate an acoustic streaming force. Bubbles of different sizes are addressed by their unique resonance frequency, thus multiple degrees of freedom can readily be incorporated. Two active miniaturized devices (with a side length of around 1 mm) are demonstrated: a miniaturized mechanical arm that realizes two degrees of freedom, and a flexible endoscope prototype equipped with a camera at the tip. With the flexible endoscope, an active endoscopic examination is successfully performed in a rabbit bladder. These results show the potential medical applicability of surface actuators wirelessly powered by ultrasound penetrating through biological tissues.

pf

link (url) DOI [BibTex]



Optimizing Long-term Predictions for Model-based Policy Search

Doerr, A., Daniel, C., Nguyen-Tuong, D., Marco, A., Schaal, S., Toussaint, M., Trimpe, S.

Proceedings of Machine Learning Research, 78, pages: 227-238, (Editors: Sergey Levine and Vincent Vanhoucke and Ken Goldberg), 1st Annual Conference on Robot Learning, November 2017 (conference) Accepted

Abstract
We propose a novel long-term optimization criterion to improve the robustness of model-based reinforcement learning in real-world scenarios. Learning a dynamics model to derive a solution promises much greater data-efficiency and reusability compared to model-free alternatives. In practice, however, model-based RL suffers from various imperfections such as noisy input and output data, delays and unmeasured (latent) states. To achieve higher resilience against such effects, we propose to optimize a generative long-term prediction model directly with respect to the likelihood of observed trajectories as opposed to the common approach of optimizing a dynamics model for one-step-ahead predictions. We evaluate the proposed method on several artificial and real-world benchmark problems and compare it to PILCO, a model-based RL framework, in experiments on a manipulation robot. The results show that the proposed method is competitive compared to state-of-the-art model learning methods. In contrast to these more involved models, our model can directly be employed for policy search and outperforms a baseline method in the robot experiment.
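To make the contrast with one-step training concrete, here is a minimal sketch (not the paper's implementation) of a long-term rollout loss next to the usual one-step loss, using a toy PyTorch dynamics model and a squared-error score in place of the trajectory likelihood. All shapes and names are hypothetical.

```python
import torch
import torch.nn as nn

class DynModel(nn.Module):
    """Tiny dynamics model x_{t+1} = f(x_t, u_t); stands in for the paper's
    more elaborate generative prediction model."""
    def __init__(self, xdim=2, udim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(xdim + udim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, xdim))
    def forward(self, x, u):
        return x + self.net(torch.cat([x, u], dim=-1))  # residual step

def one_step_loss(model, X, U):
    """Common baseline: fit single-step-ahead predictions."""
    pred = model(X[:, :-1], U[:, :-1])
    return ((pred - X[:, 1:]) ** 2).mean()

def long_term_loss(model, X, U):
    """Roll the model forward over the whole trajectory and score the
    simulated states against the observed ones, as the abstract advocates."""
    x = X[:, 0]
    err = 0.0
    for t in range(U.shape[1] - 1):
        x = model(x, U[:, t])              # feed predictions back in
        err = err + ((x - X[:, t + 1]) ** 2).mean()
    return err / (U.shape[1] - 1)

# Usage with hypothetical trajectory batches X: (B, T, xdim), U: (B, T, udim).
model = DynModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X, U = torch.randn(8, 50, 2), torch.randn(8, 50, 1)
loss = long_term_loss(model, X, U)
loss.backward(); opt.step()
```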

am ics

PDF Project Page [BibTex]



Probabilistic Line Searches for Stochastic Optimization

Mahsereci, M., Hennig, P.

Journal of Machine Learning Research, 18(119):1-59, November 2017 (article)

pn

link (url) [BibTex]



Learning a model of facial shape and expression from 4D scans

Li, T., Bolkart, T., Black, M. J., Li, H., Romero, J.

ACM Transactions on Graphics, 36(6):194:1-194:17, November 2017, The first two authors contributed equally (article)

Abstract
The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The pose and expression dependent articulations are learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33,000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).
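The linear core of such a model is simple to write down. The sketch below composes a template mesh with identity and expression blendshapes in the style of FLAME; the articulation and pose correctives described in the abstract are omitted, and all arrays here are random placeholders rather than the released model.

```python
import numpy as np

# Toy dimensions; the real FLAME model uses thousands of vertices and many
# more shape/expression components (see http://flame.is.tue.mpg.de).
n_verts, n_shape, n_expr = 5023, 10, 10

rng = np.random.default_rng(0)
template = rng.normal(size=(n_verts, 3))             # mean head mesh
shape_dirs = rng.normal(size=(n_verts, 3, n_shape))  # identity blendshapes
expr_dirs = rng.normal(size=(n_verts, 3, n_expr))    # expression blendshapes

def flame_like_vertices(betas, psi):
    """Linear part of a FLAME-style model: template plus identity and
    expression offsets. Jaw/neck/eyeball articulation and the pose
    correctives from the paper are omitted in this sketch."""
    return (template
            + shape_dirs @ betas   # (V,3,S) @ (S,) -> (V,3)
            + expr_dirs @ psi)

verts = flame_like_vertices(rng.normal(size=n_shape) * 0.1,
                            rng.normal(size=n_expr) * 0.1)
print(verts.shape)  # (5023, 3)
```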

ps

data/model video paper supplemental [BibTex]



Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning

Li, W., Bohg, J., Fritz, M.

arXiv, November 2017 (article) Submitted

Abstract
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies which are parametrized by a goal. We validated the model on a toy example navigating in a grid world with different target positions and in a block stacking task with different target structures of the final tower. In contrast to prior work, our policies show better generalization across different goals.

am

arXiv [BibTex]


Investigating Body Image Disturbance in Anorexia Nervosa Using Novel Biometric Figure Rating Scales: A Pilot Study

Mölbert, S. C., Thaler, A., Streuber, S., Black, M. J., Karnath, H., Zipfel, S., Mohler, B., Giel, K. E.

European Eating Disorders Review, 25(6):607-612, November 2017 (article)

Abstract
This study uses novel biometric figure rating scales (FRS) spanning body mass index (BMI) 13.8 to 32.2 kg/m2 and BMI 18 to 42 kg/m2. The aims of the study were (i) to compare FRS body weight dissatisfaction and perceptual distortion of women with anorexia nervosa (AN) to a community sample; (ii) how FRS parameters are associated with questionnaire body dissatisfaction, eating disorder symptoms and appearance comparison habits; and (iii) whether the weight spectrum of the FRS matters. Women with AN (n = 24) and a community sample of women (n = 104) selected their current and ideal body on the FRS and completed additional questionnaires. Women with AN accurately picked the body that aligned best with their actual weight in both FRS. Controls underestimated their BMI in the FRS 14–32 and were accurate in the FRS 18–42. In both FRS, women with AN desired a body close to their actual BMI and controls desired a thinner body. Our observations suggest that body image disturbance in AN is unlikely to be characterized by a visual perceptual disturbance, but rather by an idealization of underweight in conjunction with high body dissatisfaction. The weight spectrum of FRS can influence the accuracy of BMI estimation.

ps

publisher DOI [BibTex]


Embodied Hands: Modeling and Capturing Hands and Bodies Together

Romero, J., Tzionas, D., Black, M. J.

ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 36(6):245:1-245:17, ACM, November 2017 (article)

Abstract
Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.

ps

website youtube paper suppl video link (url) DOI Project Page [BibTex]



Learning optimal gait parameters and impedance profiles for legged locomotion

Heijmink, E., Radulescu, A., Ponton, B., Barasuol, V., Caldwell, D., Semini, C.

Proceedings International Conference on Humanoid Robots, IEEE, 2017 IEEE-RAS 17th International Conference on Humanoid Robots, November 2017 (conference)

Abstract
The successful execution of complex modern robotic tasks often relies on the correct tuning of a large number of parameters. In this paper we present a methodology for improving the performance of a trotting gait by learning the gait parameters, impedance profile and the gains of the control architecture. We show results on a set of terrains, for various speeds, using a realistic simulation of a hydraulically actuated system. Our method achieves a reduction of up to 26% in the gait's mechanical energy consumption during locomotion. The simulation results are validated in experimental trials on the hardware system.

am

paper [BibTex]



A Generative Model of People in Clothing

Lassner, C., Pons-Moll, G., Gehler, P. V.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
We present the first image-based generative model of people in clothing in a full-body setting. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.

ps

link (url) [BibTex]



Semantic Video CNNs through Representation Warping

Gadde, R., Jampani, V., Gehler, P. V.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings) Accepted

Abstract
In this work, we propose a technique to convert CNN models for semantic segmentation of static images into CNNs for video data. We describe a warping method that can be used to augment existing architectures with very little extra computational cost. This module is called NetWarp and we demonstrate its use for a range of network architectures. The main design principle is to use optical flow of adjacent frames for warping internal network representations across time. A key insight of this work is that fast optical flow methods can be combined with many different CNN architectures for improved performance and end-to-end training. Experiments validate that the proposed approach incurs only little extra computational cost, while improving performance, when video streams are available. We achieve new state-of-the-art results on the standard CamVid and Cityscapes benchmark datasets and show reliable improvements over different baseline networks. Our code and models are available at http://segmentation.is.tue.mpg.de
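The central operation, warping a previous frame's feature map to the current frame with optical flow, can be sketched in a few lines of PyTorch using bilinear sampling. This is an illustration in the spirit of NetWarp, not the released code; the fusion weighting at the end is a placeholder.

```python
import torch
import torch.nn.functional as F

def warp_features(feat_prev, flow):
    """Warp the previous frame's feature map to the current frame using
    optical flow. feat_prev: (B,C,H,W); flow: (B,2,H,W) with per-pixel
    (dx, dy) displacements."""
    B, C, H, W = feat_prev.shape
    # Build a base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().to(feat_prev.device)  # (2,H,W)
    coords = base.unsqueeze(0) + flow                                 # (B,2,H,W)
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)                              # (B,H,W,2)
    return F.grid_sample(feat_prev, grid, align_corners=True)

# Hypothetical usage: combine warped previous features with current ones.
feat_prev, feat_cur = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
flow = torch.zeros(1, 2, 32, 32)     # zero flow -> identity warp
fused = 0.5 * feat_cur + 0.5 * warp_features(feat_prev, flow)
```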

ps

pdf Supplementary [BibTex]



Online Video Deblurring via Dynamic Temporal Blending Network

Kim, T. H., Lee, K. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4038-4047, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]



Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

Behl, A., Jafari, O. H., Mustikovela, S. K., Alhaija, H. A., Rother, C., Geiger, A.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far the strongest in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.

avg

pdf suppmat [BibTex]



EnhanceNet: Single Image Super-Resolution through Automated Texture Synthesis

Sajjadi, M. S. M., Schölkopf, B., Hirsch, M.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 4491-4500, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

Arxiv Project link (url) [BibTex]



Learning Blind Motion Deblurring

Wieschollek, P., Hirsch, M., Schölkopf, B., Lensch, H.

Proceedings IEEE International Conference on Computer Vision (ICCV), pages: 231-240, IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (conference)

ei

link (url) [BibTex]



A simple yet effective baseline for 3d human pose estimation

Martinez, J., Hossain, R., Romero, J., Little, J. J.

In Proceedings IEEE International Conference on Computer Vision (ICCV), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computer Vision (ICCV), October 2017 (inproceedings)

Abstract
Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3-dimensional positions. With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, "lifting" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feed-forward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results -- this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggest directions to further advance the state of the art in 3d human pose estimation.
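A minimal version of such a lifting network is easy to reproduce. The sketch below follows the architecture family the abstract describes (a feed-forward net with a residual block, batch normalization, and dropout, mapping 16 2D joints to 16 3D joints); exact layer sizes and training details are assumptions.

```python
import torch
import torch.nn as nn

class LiftingNet(nn.Module):
    """Sketch of a simple 2D-to-3D pose lifting network in the spirit of the
    paper; layer sizes here are illustrative."""
    def __init__(self, n_joints=16, hidden=1024):
        super().__init__()
        self.inp = nn.Linear(n_joints * 2, hidden)
        self.block = nn.Sequential(
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(0.5))
        self.out = nn.Linear(hidden, n_joints * 3)

    def forward(self, joints_2d):          # (B, 32) flattened 2D joints
        h = torch.relu(self.inp(joints_2d))
        h = h + self.block(h)              # residual connection
        return self.out(h)                 # (B, 48) flattened 3D joints

model = LiftingNet()
pred_3d = model(torch.randn(4, 32))                   # batch of 4 2D poses
loss = ((pred_3d - torch.randn(4, 48)) ** 2).mean()   # supervised loss vs. GT 3D
```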

ps

video code arxiv pdf preprint [BibTex]



Sparsity Invariant CNNs

Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings.
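The proposed layer admits a compact sketch: convolve only the observed pixels, renormalize by how many valid inputs fall under each kernel window, and propagate the validity mask by max-pooling. The PyTorch code below is an illustrative reading of the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseConv(nn.Module):
    """Normalized sparse convolution in the spirit of the abstract."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.k = k

    def forward(self, x, mask):
        # x: (B,C,H,W) sparse input, mask: (B,1,H,W) with 1 = observed pixel.
        feat = self.conv(x * mask)
        ones = torch.ones(1, 1, self.k, self.k, device=x.device)
        count = F.conv2d(mask, ones, padding=self.k // 2)  # valid pixels per window
        feat = feat / count.clamp(min=1.0) + self.bias.view(1, -1, 1, 1)
        new_mask = F.max_pool2d(mask, self.k, stride=1, padding=self.k // 2)
        return feat, new_mask

layer = SparseConv(1, 16)
depth = torch.randn(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.95).float()  # ~5% valid laser returns
features, mask_out = layer(depth, mask)
```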

avg

pdf suppmat [BibTex]



OctNetFusion: Learning Depth Fusion from Data

Riegler, G., Ulusoy, A. O., Bischof, H., Geiger, A.

International Conference on 3D Vision (3DV) 2017, International Conference on 3D Vision (3DV), October 2017 (conference)

Abstract
In this paper, we present a learning based approach to depth fusion, i.e., dense 3D reconstruction from multiple depth images. The most common approach to depth fusion is based on averaging truncated signed distance functions, which was originally proposed by Curless and Levoy in 1996. While this method is simple and provides great results, it is not able to reconstruct (partially) occluded surfaces and requires a large number of frames to filter out sensor noise and outliers. Motivated by the availability of large 3D model repositories and recent advances in deep learning, we present a novel 3D CNN architecture that learns to predict an implicit surface representation from the input depth maps. Our learning based method significantly outperforms the traditional volumetric fusion approach in terms of noise reduction and outlier suppression. By learning the structure of real world 3D objects and scenes, our approach is further able to reconstruct occluded regions and to fill in gaps in the reconstruction. We demonstrate that our learning based approach outperforms both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
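For reference, the averaging baseline the abstract mentions can be written compactly. The sketch below performs one vanilla TSDF integration step over a voxel grid; camera intrinsics, volume layout, and the synthetic frame in the usage example are hypothetical.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, T_wc, origin, voxel, trunc=0.05):
    """One vanilla TSDF-averaging update (the Curless-and-Levoy-style baseline
    that the paper's learned fusion improves on). tsdf, weights: (X,Y,Z)
    volumes; depth: (H,W) map; K: 3x3 intrinsics; T_wc: 4x4 world-to-camera."""
    X, Y, Z = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts_w = origin + voxel * np.stack([ii, jj, kk], -1).reshape(-1, 3)
    pts_c = (T_wc[:3, :3] @ pts_w.T + T_wc[:3, 3:4]).T   # voxel centers in camera frame
    z = pts_c[:, 2]
    uv = (K @ pts_c.T).T
    zs = np.where(np.abs(uv[:, 2]) < 1e-9, 1e-9, uv[:, 2])
    u, v = np.round(uv[:, 0] / zs).astype(int), np.round(uv[:, 1] / zs).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    sdf = d - z                                          # signed distance along the ray
    keep = valid & (d > 0) & (sdf > -trunc)
    t, w = tsdf.reshape(-1), weights.reshape(-1)
    t[keep] = (w[keep] * t[keep] + np.clip(sdf[keep] / trunc, -1, 1)) / (w[keep] + 1)
    w[keep] += 1
    return tsdf, weights

# Hypothetical usage: fuse one synthetic frame into an empty 32^3 volume.
vol = np.zeros((32, 32, 32)); wts = np.zeros_like(vol)
K = np.array([[60.0, 0, 32], [0, 60.0, 32], [0, 0, 1]])
vol, wts = integrate_depth(vol, wts, np.full((64, 64), 0.8), K, np.eye(4),
                           origin=np.array([-0.5, -0.5, 0.3]), voxel=1 / 32)
```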

avg

pdf Video 1 Video 2 Project Page [BibTex]



An Online Scalable Approach to Unified Multirobot Cooperative Localization and Object Tracking

Ahmad, A., Lawless, G., Lima, P.

IEEE Transactions on Robotics (T-RO), 33, pages: 1184 - 1199, October 2017 (article)

Abstract
In this article we present a unified approach for multi-robot cooperative simultaneous localization and object tracking based on particle filters. Our approach is scalable with respect to the number of robots in the team. We introduce a method that reduces, from an exponential to a linear growth, the space and computation time requirements with respect to the number of robots in order to maintain a given level of accuracy in the full state estimation. Our method requires no increase in the number of particles with respect to the number of robots. However, in our method each particle represents a full state hypothesis, leading to the linear dependency on the number of robots of both space and time complexity. The derivation of the algorithm implementing our approach from a standard particle filter algorithm and its complexity analysis are presented. Through an extensive set of simulation experiments on a large number of randomized datasets, we demonstrate the correctness and efficacy of our approach. Through real robot experiments on a standardized open dataset of a team of four soccer playing robots tracking a ball, we evaluate our method's estimation accuracy with respect to the ground truth values. Through comparisons with other methods based on i) nonlinear least squares minimization and ii) joint extended Kalman filter, we further highlight our method's advantages. Finally, we also present a robustness test for our approach by evaluating it under scenarios of communication and vision failure in teammate robots.

ps

accepted pre-print version link (url) DOI [BibTex]


Electrically tunable binary phase Fresnel lens based on a dielectric elastomer actuator

Park, S., Park, B., Nam, S., Yun, S., Park, S. K., Mun, S., Lim, J. M., Ryu, Y., Song, S. H., Kyung, K.

Opt. Express, 25(20):23801-23808, OSA, October 2017 (article)

Abstract
We propose and demonstrate an all-solid-state tunable binary phase Fresnel lens with electrically controllable focal length. The lens is composed of a binary phase Fresnel zone plate, a circular acrylic frame, and a dielectric elastomer (DE) actuator which is made of a thin DE layer and two compliant electrodes using silver nanowires. Under electric potential, the actuator produces in-plane deformation in a radial direction that can compress the Fresnel zones. The electrically induced deformation contracts the Fresnel zones by as much as 9.1% and shortens the focal length from 20.0 cm to 14.5 cm. The measured change in the focal length of the fabricated lens is consistent with the result estimated from numerical simulation.

hi

link (url) DOI [BibTex]



Editorial for the Special Issue on Microdevices and Microsystems for Cell Manipulation

Hu, W., Ohta, A. T.

8, Multidisciplinary Digital Publishing Institute, September 2017 (misc)

pi

DOI [BibTex]



Multifunctional Bacteria-Driven Microswimmers for Targeted Active Drug Delivery

Park, B., Zhuang, J., Yasa, O., Sitti, M.

ACS Nano, 11(9):8910-8923, September 2017, PMID: 28873304 (article)

Abstract
High-performance, multifunctional bacteria-driven microswimmers are introduced using an optimized design and fabrication method for targeted drug delivery applications. These microswimmers are made of mostly a single Escherichia coli bacterium attached to the surface of drug-loaded polyelectrolyte multilayer (PEM) microparticles with embedded magnetic nanoparticles. The PEM drug carriers are 1 μm in diameter and are intentionally fabricated with a more viscoelastic material than the particles previously studied in the literature. The resulting stochastic microswimmers are able to swim at mean speeds of up to 22.5 μm/s. They can be guided and targeted to specific cells, because they exhibit biased and directional motion under a chemoattractant gradient and a magnetic field, respectively. Moreover, we demonstrate the microswimmers delivering doxorubicin anticancer drug molecules, encapsulated in the polyelectrolyte multilayers, to 4T1 breast cancer cells under magnetic guidance in vitro. The results reveal the feasibility of using these active multifunctional bacteria-driven microswimmers to perform targeted drug delivery with significantly enhanced drug transfer, when compared with the passive PEM microparticles.

pi

link (url) DOI Project Page Project Page [BibTex]


Active Acoustic Surfaces Enable the Propulsion of a Wireless Robot

Qiu, T., Palagi, S., Mark, A. G., Melde, K., Adams, F., Fischer, P.

Advanced Materials Interfaces, 1700933, September 2017 (article)

Abstract
A major challenge that prevents the miniaturization of mechanically actuated systems is the lack of suitable methods that permit the efficient transfer of power to small scales. Acoustic energy holds great potential, as it is wireless, penetrates deep into biological tissues, and the mechanical vibrations can be directly converted into directional forces. Recently, active acoustic surfaces have been developed that consist of 2D arrays of microcavities holding microbubbles that can be excited with an external acoustic field. At resonance, the surfaces give rise to acoustic streaming and thus provide a highly directional propulsive force. Here, this study advances these wireless surface actuators by studying their force output as the size of the bubble-array is increased. In particular, a general method is reported to dramatically improve the propulsive force, demonstrating that the surface actuators are actually able to propel centimeter-scale devices. To prove the flexibility of the functional surfaces as wireless ready-to-attach actuators, a mobile mini-robot capable of propulsion in water along multiple directions is presented. This work paves the way toward effectively exploiting acoustic surfaces as a novel wireless actuation scheme at small scales.

pf

link (url) DOI [BibTex]



EndoSensorFusion: Particle Filtering-Based Multi-sensory Data Fusion with Switching State-Space Model for Endoscopic Capsule Robots

Turan, M., Almalioglu, Y., Gilbert, H., Araujo, H., Cemgil, T., Sitti, M.

ArXiv e-prints, September 2017 (article)

Abstract
A reliable, real time multi-sensor fusion functionality is crucial for localization of actively controlled capsule endoscopy robots, which are an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we propose a novel multi-sensor fusion approach based on a particle filter that incorporates an online estimation of sensor reliability and a non-linear kinematic model learned by a recurrent neural network. Our method sequentially estimates the true robot pose from noisy pose observations delivered by multiple sensors. We experimentally test the method using 5 degree-of-freedom (5-DoF) absolute pose measurement by a magnetic localization system and a 6-DoF relative pose measurement by visual odometry. In addition, the proposed method is capable of detecting and handling sensor failures by ignoring corrupted data, providing the robustness expected of a medical device. Detailed analyses and evaluations based on ex-vivo experiments on a porcine stomach model show that our system achieves high translational and rotational accuracy for different types of endoscopic capsule robot trajectories.

pi

link (url) Project Page [BibTex]


Direct Visual Odometry for a Fisheye-Stereo Camera

Liu, P., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
We present a direct visual odometry algorithm for a fisheye-stereo camera. Our algorithm performs simultaneous camera motion estimation and semi-dense reconstruction. The pipeline consists of two threads: a tracking thread and a mapping thread. In the tracking thread, we estimate the camera pose via semi-dense direct image alignment. To have a wider field of view (FoV) which is important for robotic perception, we use fisheye images directly without converting them to conventional pinhole images which come with a limited FoV. To address the epipolar curve problem, plane-sweeping stereo is used for stereo matching and depth initialization. Multiple depth hypotheses are tracked for selected pixels to better capture the uncertainty characteristics of stereo matching. Temporal motion stereo is then used to refine the depth and remove false positive depth hypotheses. Our implementation runs at an average of 20 Hz on a low-end PC. We run experiments in outdoor environments to validate our algorithm, and discuss the experimental results. We experimentally show that we are able to estimate 6D poses with low drift, and at the same time, do semi-dense 3D reconstruction with high accuracy.

avg

pdf [BibTex]



A New Data Source for Inverse Dynamics Learning

Kappler, D., Meier, F., Ratliff, N., Schaal, S.

In Proceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

am

[BibTex]



Endo-VMFuseNet: Deep Visual-Magnetic Sensor Fusion Approach for Uncalibrated, Unsynchronized and Asymmetric Endoscopic Capsule Robot Localization Data

Turan, M., Almalioglu, Y., Gilbert, H., Eren Sari, A., Soylu, U., Sitti, M.

ArXiv e-prints, September 2017 (article)

Abstract
In the last decade, researchers and medical device companies have made major advances towards transforming passive capsule endoscopes into active medical robots. One of the major challenges is to endow capsule robots with accurate perception of the environment inside the human body, which will provide necessary information and enable improved medical procedures. We extend the success of deep learning approaches from various research fields to the problem of uncalibrated, asynchronous, and asymmetric sensor fusion for endoscopic capsule robots. The results performed on real pig stomach datasets show that our method achieves sub-millimeter precision for both translational and rotational movements and contains various advantages over traditional sensor fusion techniques.

pi

link (url) Project Page [BibTex]


Magnetotactic Bacteria Powered Biohybrids Target E. coli Biofilms

Stanton, M. M., Park, B., Vilela, D., Bente, K., Faivre, D., Sitti, M., Sánchez, S.

ACS Nano, September 2017, PMID: 28933815 (article)

Abstract
Biofilm colonies are typically resistant to general antibiotic treatment and require targeted methods for their removal. One of these methods includes the use of nanoparticles as carriers for antibiotic delivery, where they randomly circulate in fluid until they make contact with the infected areas. However, the required proximity of the particles to the biofilm results in only moderate efficacy. We demonstrate here that the nonpathogenic magnetotactic bacteria Magnetospirillum gryphiswaldense (MSR-1) can be integrated with drug-loaded mesoporous silica microtubes to build controllable microswimmers (biohybrids) capable of antibiotic delivery to target an infectious biofilm. Applying external magnetic guidance capability and swimming power of the MSR-1 cells, the biohybrids are directed to and forcefully pushed into matured Escherichia coli (E. coli) biofilms. Release of the antibiotic, ciprofloxacin, is triggered by the acidic microenvironment of the biofilm, ensuring an efficient drug delivery system. The results reveal the capabilities of a nonpathogenic bacteria species to target and dismantle harmful biofilms, indicating biohybrid systems have great potential for antibiofilm applications.

pi

link (url) DOI Project Page Project Page [BibTex]



Closing One’s Eyes Affects Amplitude Modulation but Not Frequency Modulation in a Cognitive BCI

Görner, M., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 165-170, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]



A Guided Task for Cognitive Brain-Computer Interfaces

Moser, J., Hohmann, M. R., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 326-331, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]



Bayesian Regression for Artifact Correction in Electroencephalography

Fiebig, K., Jayaram, V., Hesse, T., Blank, A., Peters, J., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 131-136, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

am ei

DOI [BibTex]



Investigating Music Imagery as a Cognitive Paradigm for Low-Cost Brain-Computer Interfaces

Grossberger, L., Hohmann, M. R., Peters, J., Grosse-Wentrup, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 160-164, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

am ei

DOI [BibTex]



Correlations of Motor Adaptation Learning and Modulation of Resting-State Sensorimotor EEG Activity

Ozdenizci, O., Yalcin, M., Erdogan, A., Patoglu, V., Grosse-Wentrup, M., Cetin, M.

Proceedings of the 7th Graz Brain-Computer Interface Conference 2017 - From Vision to Reality, pages: 384-388, (Editors: Müller-Putz G.R., Steyrl D., Wriessnegger S. C., Scherer R.), Graz University of Technology, Austria, Graz Brain-Computer Interface Conference, September 2017 (conference)

ei

DOI [BibTex]



Corrosion-Protected Hybrid Nanoparticles

Jeong, H., Alarcón-Correa, M., Mark, A. G., Son, K., Lee, T., Fischer, P.

Advanced Science, 1700234, September 2017 (article)

Abstract
Nanoparticles composed of functional materials hold great promise for applications due to their unique electronic, optical, magnetic, and catalytic properties. However, a number of functional materials are not only difficult to fabricate at the nanoscale, but are also chemically unstable in solution. Hence, protecting nanoparticles from corrosion is a major challenge for those applications that require stability in aqueous solutions and biological fluids. Here, this study presents a generic scheme to grow hybrid 3D nanoparticles that are completely encapsulated by a nm thick protective shell. The method consists of vacuum-based growth and protection, and combines oblique physical vapor deposition with atomic layer deposition. It provides wide flexibility in the shape and composition of the nanoparticles, and the environments against which particles are protected. The work demonstrates the approach with multifunctional nanoparticles possessing ferromagnetic, plasmonic, and chiral properties. The present scheme allows nanocolloids, which immediately corrode without protection, to remain functional, at least for a week, in acidic solutions.

pf

link (url) DOI [BibTex]



Using contact forces and robot arm accelerations to automatically rate surgeon skill at peg transfer

Brown, J. D., O’Brien, C. E., Leung, S. C., Dumon, K. R., Lee, D. I., Kuchenbecker, K. J.

IEEE Transactions on Biomedical Engineering, 64(9):2263-2275, September 2017 (article)

hi

[BibTex]



On the relevance of grasp metrics for predicting grasp success

Rubert, C., Kappler, D., Morales, A., Schaal, S., Bohg, J.

In Proceedings of the IEEE/RSJ International Conference of Intelligent Robots and Systems, September 2017 (inproceedings) Accepted

Abstract
We aim to reliably predict whether a grasp on a known object is successful before it is executed in the real world. An entire suite of grasp metrics has already been developed that relies on precisely known contact points between object and hand. However, it remains unclear whether and how they may be combined into a general purpose grasp stability predictor. In this paper, we analyze these questions by leveraging a large scale database of simulated grasps on a wide variety of objects. For each grasp, we compute the value of seven metrics. Each grasp is annotated by human subjects with ground truth stability labels. Given this data set, we train several classification methods to find out whether there is some underlying, non-trivial structure in the data that is difficult to model manually but can be learned. Quantitative and qualitative results show the complexity of the prediction problem. We found that a good prediction performance critically depends on using a combination of metrics as input features. Furthermore, non-parametric and non-linear classifiers best capture the structure in the data.
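The prediction setup maps naturally onto standard classification tooling. Below is a minimal sketch with synthetic stand-in data: seven metric values per grasp as features, a binary stability label, and a non-linear, non-parametric classifier, echoing the abstract's finding that combined metrics predict best. The data generation is entirely hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data standing in for the paper's simulated-grasp database:
# seven analytic grasp metrics per grasp plus a human stability label.
rng = np.random.default_rng(0)
metrics = rng.normal(size=(5000, 7))
labels = (metrics @ rng.normal(size=7) + 0.5 * rng.normal(size=5000) > 0).astype(int)

# A non-linear, non-parametric classifier over the combined metrics.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, metrics, labels, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# Compare against a single metric in isolation.
single = cross_val_score(clf, metrics[:, :1], labels, cv=5)
print("single-metric accuracy:   %.3f" % single.mean())
```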

am

Project Page [BibTex]



Augmented Reality Meets Deep Learning for Car Instance Segmentation in Urban Scenes

Alhaija, H. A., Mustikovela, S. K., Mescheder, L., Geiger, A., Rother, C.

In Proceedings of the British Machine Vision Conference 2017, Proceedings of the British Machine Vision Conference, September 2017 (inproceedings)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. This allows us to create realistic composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D shapes of the target object category. We demonstrate the utility of the proposed approach for training a state-of-the-art high-capacity deep model for semantic instance segmentation. In particular, we consider the task of segmenting car instances on the KITTI dataset which we have annotated with pixel-accurate ground truth. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or models trained on limited amounts of annotated real data.

avg

pdf [BibTex]



Swimming in low reynolds numbers using planar and helical flagellar waves

Khalil, I. S. M., Tabak, A. F., Seif, M. A., Klingner, A., Adel, B., Sitti, M.

In International Conference on Intelligent Robots and Systems (IROS) 2017, pages: 1907-1912, International Conference on Intelligent Robots and Systems, September 2017 (inproceedings)

Abstract
In travelling towards the oviducts, sperm cells undergo transitions between planar and helical flagellar propulsion by a beating tail, depending on the viscosity of the environment. In this work, we aim to model and mimic this behaviour in low Reynolds number fluids using externally actuated soft robotic sperms. We numerically investigate the effects of the transition between planar and helical flagellar propulsion on the swimming characteristics of the robotic sperm using a model based on resistive-force theory to study the role of viscous forces on its flexible tail. Experimental results are obtained using robots that contain magnetic particles within the polymer matrix of their heads and an ultra-thin flexible tail. Planar and helical flagellar propulsion are achieved using in-plane and out-of-plane uniform fields with sinusoidally varying components, respectively. We experimentally show that the swimming speed of the robotic sperm increases by a factor of 1.4 (fluid viscosity 5 Pa.s) when it undergoes a controlled transition between planar and helical flagellar propulsion, at relatively low actuation frequencies.

pi

DOI [BibTex]



Effects of animation retargeting on perceived action outcomes

Kenny, S., Mahmood, N., Honda, C., Black, M. J., Troje, N. F.

Proceedings of the ACM Symposium on Applied Perception (SAP’17), pages: 2:1-2:7, September 2017 (conference)

Abstract
The individual shape of the human body, including the geometry of its articulated structure and the distribution of weight over that structure, influences the kinematics of a person's movements. How sensitive is the visual system to inconsistencies between shape and motion introduced by retargeting motion from one person onto the shape of another? We used optical motion capture to record five pairs of male performers with large differences in body weight, while they pushed, lifted, and threw objects. Based on a set of 67 markers, we estimated both the kinematics of the actions as well as the performer's individual body shape. To obtain consistent and inconsistent stimuli, we created animated avatars by combining the shape and motion estimates from either a single performer or from different performers. In a virtual reality environment, observers rated the perceived weight or thrown distance of the objects. They were also asked to explicitly discriminate between consistent and hybrid stimuli. Observers were unable to accomplish the latter, but hybridization of shape and motion influenced their judgements of action outcome in systematic ways. Inconsistencies between shape and motion were assimilated into an altered perception of the action outcome.

ps

pdf DOI [BibTex]



Local Bayesian Optimization of Motor Skills

Akrour, R., Sorokin, D., Peters, J., Neumann, G.

Proceedings of the 34th International Conference on Machine Learning, 70, pages: 41-50, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (conference)

am ei

link (url) [BibTex]



Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Mescheder, L., Nowozin, S., Geiger, A.

In Proceedings of the 34th International Conference on Machine Learning, 70, Proceedings of Machine Learning Research, (Editors: Doina Precup, Yee Whye Teh), PMLR, International Conference on Machine Learning (ICML), August 2017 (inproceedings)

Abstract
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.
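A bare-bones AVB training step can be sketched as follows: the encoder takes noise as an extra input (an implicit posterior), and the auxiliary discriminator's logit serves as the KL surrogate in the VAE objective. Architectures, sizes, and the squared-error reconstruction term are placeholder choices, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

# Discriminator T(x, z) separates posterior pairs from prior pairs; at optimum
# its logit estimates log q(z|x) - log p(z), replacing the analytic KL term.
xdim, zdim, B = 784, 8, 32
enc = nn.Sequential(nn.Linear(xdim + zdim, 256), nn.ReLU(), nn.Linear(256, zdim))
dec = nn.Sequential(nn.Linear(zdim, 256), nn.ReLU(), nn.Linear(256, xdim))
T = nn.Sequential(nn.Linear(xdim + zdim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_vae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_T = torch.optim.Adam(T.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    eps = torch.randn(len(x), zdim)
    z_q = enc(torch.cat([x, eps], 1))   # sample from the implicit posterior
    z_p = torch.randn(len(x), zdim)     # sample from the prior

    # Discriminator step: posterior pairs vs. prior pairs.
    opt_T.zero_grad()
    t_loss = (bce(T(torch.cat([x, z_q.detach()], 1)), torch.ones(len(x), 1))
              + bce(T(torch.cat([x, z_p], 1)), torch.zeros(len(x), 1)))
    t_loss.backward(); opt_T.step()

    # VAE step: reconstruction plus the discriminator logit as KL surrogate.
    opt_vae.zero_grad()
    recon = ((dec(z_q) - x) ** 2).mean()
    kl_est = T(torch.cat([x, z_q], 1)).mean()
    (recon + kl_est).backward(); opt_vae.step()

train_step(torch.randn(B, xdim))
```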

avg

pdf suppmat Project Page arxiv-version [BibTex]
