2020


Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label-efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
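
The two-stage design described above separates perception from control: a segmentation network, trained on a (possibly small) labeled set, produces class masks, and a behavior-cloned policy sees only those masks. Below is a minimal sketch of that interface, assuming a PyTorch-style segmentation network; the class name, layer sizes, and action set are illustrative, not the paper's implementation.

import torch
import torch.nn as nn

class AbstractionPolicy(nn.Module):
    """Drive from class masks instead of raw pixels (illustrative sizes)."""
    def __init__(self, seg_net, num_classes=6, num_actions=3):
        super().__init__()
        self.seg_net = seg_net  # trained beforehand on the small labeled set
        self.policy = nn.Sequential(
            nn.Conv2d(num_classes, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_actions),  # e.g. steering, throttle, brake
        )

    def forward(self, image):
        with torch.no_grad():  # the abstraction stays fixed while cloning behavior
            masks = self.seg_net(image).softmax(dim=1)  # (B, C, H, W) class scores
        return self.policy(masks)

Because the policy never sees raw pixels, the annotation budget spent on the segmentation set can be varied independently of the driving data, which is exactly the trade-off the paper studies.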

avg

pdf slides video Project Page [BibTex]


Chiroptical spectroscopy of a freely diffusing single nanoparticle

Sachs, J., Günther, J., Mark, A. G., Fischer, P.

Nature Communications, 11(4513), September 2020 (article)

Abstract
Chiral plasmonic nanoparticles can exhibit strong chiroptical signals compared to the corresponding molecular response. Observations are, however, generally restricted to measurements on stationary single particles with a fixed orientation, which complicates the spectral analysis. Here, we report the spectroscopic observation of a freely diffusing single chiral nanoparticle in solution. By acquiring time-resolved circular differential scattering signals we show that the spectral interpretation is significantly simplified. We experimentally demonstrate the equivalence between time-averaged chiral spectra observed for an individual nanostructure and the corresponding ensemble spectra, and thereby demonstrate the ergodic principle for chiroptical spectroscopy. We also show how it is possible for an achiral particle to yield an instantaneous chiroptical response, whereas the time-averaged signals are an unequivocal measure of chirality. Time-resolved chiroptical spectroscopy on a freely moving chiral nanoparticle advances the field of single-particle spectroscopy, and is a means to obtain the true signature of the nanoparticle’s chirality.

pf

link (url) DOI [BibTex]


Microchannels with Self-Pumping Walls

Yu, T., Athanassiadis, A., Popescu, M., Chikkadi, V., Güth, A., Singh, D., Qiu, T., Fischer, P.

ACS Nano, September 2020 (article)

Abstract
When asymmetric Janus micromotors are immobilized on a surface, they act as chemically powered micropumps, turning chemical energy from the fluid into a bulk flow. However, such pumps have previously produced only localized recirculating flows, which cannot be used to pump fluid in one direction. Here, we demonstrate that an array of three-dimensional, photochemically active Au/TiO2 Janus pillars can pump water. Upon UV illumination, a water-splitting reaction rapidly creates a directional bulk flow above the active surface. By lining a 2D microchannel with such active surfaces, various flow profiles are created within the channels. Analytical and numerical models of a channel with active surfaces predict flow profiles that agree very well with the experimental results. The light-driven active surfaces provide a way to wirelessly pump fluids at small scales and could be used for real-time, localized flow control in complex microfluidic networks.

pf

link (url) DOI [BibTex]


Scalable Fabrication of Molybdenum Disulfide Nanostructures and their Assembly

Huang, Y., Yu, K., Li, H., Liang, Z., Walker, D., Ferreira, P., Fischer, P., Fan, D.

Adv. Mat., (2003439), September 2020 (article)

Abstract
Molybdenum disulfide (MoS2) is a multifunctional material that can be used for various applications. In the single‐crystalline form, MoS2 shows superior electronic properties. It is also an exceptionally useful nanomaterial in its polycrystalline form, with applications in catalysis, energy storage, water treatment, and gas sensing. Here, the scalable fabrication of longitudinal MoS2 nanostructures, i.e., nanoribbons, and their oxide hybrids with tunable dimensions in a rational and well‐reproducible fashion, is reported. The nanoribbons, obtained at different reaction stages, that is, MoO3, MoS2/MoO2 hybrid, and MoS2, are fully characterized. The growth method presented herein has a high yield and is particularly robust. The MoS2 nanoribbons can readily be removed from their substrate and dispersed in solution. It is shown that functionalized MoS2 nanoribbons can be manipulated in solution and assembled in controlled patterns directly on microelectrodes with UV‐click‐chemistry. Owing to their high chemical purity and polycrystalline nature, the MoS2 nanostructures demonstrate a rapid optoelectronic response to wavelengths from 450 to 750 nm, and successfully remove mercury contaminants from water. The scalable fabrication and manipulation followed by light‐directed assembly of MoS2 nanoribbons, and their unique properties, should inspire device fabrication and applications of transition metal dichalcogenides.

pf

link (url) [BibTex]


Spatial ultrasound modulation by digitally controlling microbubble arrays

Ma, Z., Melde, K., Athanassiadis, A. G., Schau, M., Richter, H., Qiu, T., Fischer, P.

Nature Communications, 11(4537), September 2020 (article)

Abstract
Acoustic waves, capable of transmitting through optically opaque objects, have been widely used in biomedical imaging, industrial sensing and particle manipulation. High-fidelity wavefront shaping is essential to further improve performance in these applications. An acoustic analog to the successful spatial light modulator (SLM) in optics would be highly desirable. To date, no technique has been shown that provides effective and dynamic modulation of a sound wave while also supporting scale-up to a high number of individually addressable pixels. In the present study, we introduce a dynamic spatial ultrasound modulator (SUM), which dynamically reshapes incident plane waves into complex acoustic images. Its transmission function is set with a digitally generated pattern of microbubbles controlled by a complementary metal–oxide–semiconductor (CMOS) chip, which results in a binary amplitude acoustic hologram. We employ this device to project sequentially changing acoustic images and demonstrate the first dynamic parallel assembly of microparticles using a SUM.

pf

link (url) DOI [BibTex]


Characterization of active matter in dense suspensions with heterodyne laser Doppler velocimetry

Sachs, J., Kottapalli, S. N., Fischer, P., Botin, D., Palberg, T.

Colloid and Polymer Science, August 2020 (article)

Abstract
We present a novel approach for characterizing the properties and performance of active matter in dilute suspension as well as in crowded environments. We use Super-Heterodyne Laser-Doppler-Velocimetry (SH-LDV) to study large ensembles of catalytically active Janus particles moving under UV illumination. SH-LDV facilitates a model-free determination of the swimming speed and direction, with excellent ensemble averaging. In addition, we obtain information on the distribution of the catalytic activity. Moreover, SH-LDV operates away from walls and permits a facile correction for multiple scattering contributions. It thus allows for studies of concentrated suspensions of swimmers or of systems where swimmers propel actively in an environment crowded by passive particles. We demonstrate the versatility and the scope of the method with a few selected examples. We anticipate that SH-LDV complements established methods and paves the way for systematic measurements at previously inaccessible boundary conditions.

pf

link (url) DOI [BibTex]


Convolutional Occupancy Networks

Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparatively simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
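
The architectural point is easy to see in code: a convolutional encoder preserves locality by producing a feature grid, and the implicit decoder classifies arbitrary query points from features interpolated at their locations. The sketch below is schematic and assumes PyTorch; the published model additionally uses 2D feature planes, U-Net processing, and a deeper ResNet decoder, and grid_sample's coordinate ordering is glossed over here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccupancy(nn.Module):
    """Convolutional encoder + implicit occupancy decoder (illustrative)."""
    def __init__(self, feat_dim=32):
        super().__init__()
        # encoder: voxelized observation -> feature volume (keeps local detail)
        self.encoder = nn.Conv3d(1, feat_dim, kernel_size=3, padding=1)
        # decoder: local feature + query coordinate -> occupancy logit
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, voxels, points):
        # voxels: (B, 1, D, H, W); points: (B, N, 3) with coordinates in [-1, 1]
        feats = self.encoder(voxels)
        grid = points.view(points.shape[0], -1, 1, 1, 3)        # query locations
        local = F.grid_sample(feats, grid, align_corners=True)  # trilinear lookup
        local = local.flatten(2).transpose(1, 2)                # (B, N, feat_dim)
        return self.decoder(torch.cat([local, points], dim=-1)).squeeze(-1)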

avg

pdf suppmat video Project Page [BibTex]


Category Level Object Pose Estimation via Neural Analysis-by-Synthesis

Chen, X., Dong, Z., Song, J., Geiger, A., Hilliges, O.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Many object pose estimation algorithms rely on the analysis-by-synthesis framework which requires explicit representations of individual object instances. In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module that is capable of implicitly representing the appearance, shape and pose of entire object categories, thus eliminating the need for explicit CAD models per object instance. The image synthesis network is designed to efficiently span the pose configuration space so that model capacity can be used to capture the shape and local appearance (i.e., texture) variations jointly. At inference time the synthesized images are compared to the target via an appearance-based loss and the error signal is backpropagated through the network to the input parameters. Keeping the network parameters fixed, this allows for iterative optimization of the object pose, shape and appearance in a joint manner, and we experimentally show that the method can recover the orientation of objects with high accuracy from 2D images alone. When provided with depth measurements to overcome scale ambiguities, the method can successfully recover the full 6DOF pose.
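
Because the synthesis module is differentiable, inference reduces to gradient descent on the input parameters while the network weights stay frozen. A minimal sketch, assuming a PyTorch-style differentiable synthesis network synth_net (a hypothetical stand-in for the paper's module) that maps a pose vector to an image:

import torch

def fit_pose(synth_net, target_img, init_pose, steps=200, lr=1e-2):
    """Analysis-by-synthesis: weights frozen, the pose receives the gradients."""
    pose = init_pose.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = synth_net(pose)                   # differentiable synthesis
        loss = (rendered - target_img).abs().mean()  # appearance-based loss
        loss.backward()                              # error flows to the inputs
        optimizer.step()
    return pose.detach()

In the paper the optimized parameters also include shape and appearance codes; restricting the sketch to the pose keeps the loop readable.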

avg

Project Page pdf suppmat [BibTex]


Biocompatible magnetic micro‐ and nanodevices: Fabrication of FePt nanopropellers and cell transfection

Kadiri, V. M., Bussi, C., Holle, A. W., Son, K., Kwon, H., Schütz, G., Gutierrez, M. G., Fischer, P.

Adv. Mat., 32(2001114), May 2020 (article)

Abstract
The application of nanoparticles for drug or gene delivery promises benefits in the form of single‐cell‐specific therapeutic and diagnostic capabilities. Many methods of cell transfection rely on unspecific means to increase the transport of genetic material into cells. Targeted transport is in principle possible with magnetically propelled micromotors, which allow responsive nanoscale actuation and delivery. However, many commonly used magnetic materials (e.g., Ni and Co) are not biocompatible, possess weak magnetic remanence (Fe3O4), or cannot be implemented in nanofabrication schemes (NdFeB). Here, it is demonstrated that co‐depositing iron (Fe) and platinum (Pt) followed by one single annealing step, without the need for solution processing, yields ferromagnetic FePt nanomotors that are noncytotoxic, biocompatible, and possess a remanence and magnetization that rival those of permanent NdFeB micromagnets. Active cell targeting and magnetic transfection of lung carcinoma cells are demonstrated using gradient‐free rotating millitesla fields to drive the FePt nanopropellers. The carcinoma cells express enhanced green fluorescent protein after internalization and cell viability is unaffected by the presence of the FePt nanopropellers. The results establish FePt, prepared in the L10 phase, as a promising magnetic material for biomedical applications with superior magnetic performance, especially for micro‐ and nanodevices.

pf mms

link (url) DOI [BibTex]


Interface-mediated spontaneous symmetry breaking and mutual communication between drops containing chemically active particles

Singh, D., Domínguez, A., Choudhury, U., Kottapalli, S., Popescu, M., Dietrich, S., Fischer, P.

Nature Communications, 11(2210), May 2020 (article)

Abstract
Symmetry breaking and the emergence of self-organized patterns is the hallmark of complexity. Here, we demonstrate that a sessile drop, containing titania powder particles with negligible self-propulsion, exhibits a transition to collective motion leading to self-organized flow patterns. This phenomenology emerges through a novel mechanism involving the interplay between the chemical activity of the photocatalytic particles, which induces Marangoni stresses at the liquid–liquid interface, and the geometrical confinement provided by the drop. The response of the interface to the chemical activity of the particles is the source of a significantly amplified hydrodynamic flow within the drop, which moves the particles. Furthermore, in ensembles of such active drops long-ranged ordering of the flow patterns within the drops is observed. We show that the ordering is dictated by a chemical communication between drops, i.e., an alignment of the flow patterns is induced by the gradients of the chemicals emanating from the active particles, rather than by hydrodynamic interactions.

pf icm

link (url) DOI [BibTex]


Spectrally selective and highly-sensitive UV photodetection with UV-A, C band specific polarity switching in silver plasmonic nanoparticle enhanced gallium oxide thin-film

Arora, K., Singh, D., Fischer, P., Kumar, M.

Adv. Opt. Mat., March 2020 (article)

Abstract
Traditional photodetectors generally show a unipolar photocurrent response when illuminated with light of wavelength equal to or shorter than the optical bandgap. Here, we report that a thin film of gallium oxide (GO) decorated with plasmonic nanoparticles, surprisingly, exhibits a change in the polarity of the photocurrent for different UV bands. Silver (Ag) nanoparticles are vacuum-deposited onto β-Ga2O3, and the AgNP@GO thin films show a record responsivity of 250 A/W, which significantly outperforms bare GO planar photodetectors. The photoresponsivity reverses sign from +157 µA/W in the UV-C band under unbiased operation to −353 µA/W in the UV-A band. The current reversal is rationalized by considering the charge dynamics stemming from hot electrons generated when the incident light excites a local surface plasmon resonance (LSPR) in the Ag nanoparticles. The Ag nanoparticles improve the external quantum efficiency and detectivity by nearly one order of magnitude, with high values of 1.2×10⁵ and 3.4×10¹⁴ Jones, respectively. This plasmon-enhanced solar-blind GO detector allows UV regions to be spectrally distinguished, which is useful for the development of sensitive dynamic imaging photodetectors.

pf

link (url) DOI [BibTex]


Acoustofluidic Tweezers for the 3D Manipulation of Microparticles

Guo, X., Ma, Z., Goyal, R., Jeong, M., Pang, W., Fischer, P., Dian, X., Qiu, T.

In 2020 IEEE International Conference on Robotics and Automation (ICRA), February 2020 (conference)

Abstract
Non-contact manipulation is of great importance in the actuation of micro-robotics. It is challenging to manipulate micro-scale objects contactlessly over large spatial distances in fluid. Here, we describe a novel approach for the dynamic position control of microparticles in three-dimensional (3D) space, based on high-speed acoustic streaming generated by a micro-fabricated gigahertz transducer. Due to the vertical lifting force and the horizontal centripetal force generated by the streaming, microparticles can be stably trapped at a position far away from the transducer surface and manipulated over centimeter distances in all three directions. Only the hydrodynamic force is utilized in the system for particle manipulation, making it a versatile tool regardless of the material properties of the trapped particle. The system shows high reliability and manipulation velocity, revealing its potential for applications in robotics and automation at small scales.

pf

[BibTex]


Investigating photoresponsivity of graphene-silver hybrid nanomaterials in the ultraviolet

Deshpande, P., Suri, P., Jeong, H., Fischer, P., Ghosh, A., Ghosh, G.

J. Chem. Phys., 152, pages: 044709, January 2020 (article)

Abstract
There have been several reports of plasmonically enhanced graphene photodetectors in the visible and the near infrared regime but rarely in the ultraviolet. In a previous work, we have reported that a graphene-silver hybrid structure shows a high photoresponsivity of 13 A/W at 270 nm. Here, we consider the likely mechanisms that underlie this strong photoresponse. We investigate the role of the plasmonic layer and examine the response using silver and gold nanoparticles of similar dimensions and spatial arrangement. The effect on local doping, strain, and absorption properties of the hybrid is also probed by photocurrent measurements and Raman and UV-visible spectroscopy. We find that the local doping from the silver nanoparticles is stronger than that from gold and correlates with a measured photosensitivity that is larger in devices with a higher contact area between the plasmonic nanomaterials and the graphene layer.

pf

link (url) DOI [BibTex]


A High-Fidelity Phantom for the Simulation and Quantitative Evaluation of Transurethral Resection of the Prostate

Choi, E., Adams, F., Gengenbacher, A., Schlager, D., Palagi, S., Müller, P., Wetterauer, U., Miernik, A., Fischer, P., Qiu, T.

Annals of Biomed. Eng., 48, pages: 437-446, January 2020 (article)

Abstract
Transurethral resection of the prostate (TURP) is a minimally invasive endoscopic procedure that requires experience and skill of the surgeon. To permit surgical training under realistic conditions we report a novel phantom of the human prostate that can be resected with TURP. The phantom mirrors the anatomy and haptic properties of the gland and permits quantitative evaluation of important surgical performance indicators. Mixtures of soft materials are engineered to mimic the physical properties of the human tissue, including the mechanical strength, the electrical and thermal conductivity, and the appearance under an endoscope. Electrocautery resection of the phantom closely resembles the procedure on human tissue. Ultrasound contrast agent was applied to the central zone, which was not detectable by the surgeon during the surgery but showed high contrast when imaged after the surgery, to serve as a label for the quantitative evaluation of the surgery. Quantitative criteria for performance assessment are established and evaluated by automated image analysis. We present the workflow of a surgical simulation on a prostate phantom followed by quantitative evaluation of the surgical performance. Surgery on the phantom is useful for medical training, and enables the development and testing of endoscopic and minimally invasive surgical instruments.

pf

link (url) DOI [BibTex]


Interactive Materials – Drivers of Future Robotic Systems

Fischer, P.

Adv. Mat., January 2020 (article)

Abstract
A robot senses its environment, processes the sensory information, acts in response to these inputs, and possibly communicates with the outside world. Robots generally achieve these tasks with electronics-based hardware or by receiving inputs from some external hardware. In contrast, simple microorganisms can autonomously perceive, act, and communicate via purely physicochemical processes in soft material systems. A key property of biological systems is that they are built from energy-consuming ‘active’ units. Exciting developments in material science show that even very simple artificial active building blocks can show surprisingly rich emergent behaviors. Active non-equilibrium systems are therefore predicted to play an essential role to realize interactive materials. A major challenge is to find robust ways to couple and integrate the energy-consuming building blocks to the mechanical structure of the material. However, success in this endeavor will lead to a new generation of sophisticated micro- and soft-robotic systems that can operate autonomously.

pf

link (url) DOI [BibTex]


Self-supervised motion deblurring

Liu, P., Janai, J., Pollefeys, M., Sattler, T., Geiger, A.

IEEE Robotics and Automation Letters, 2020 (article)

Abstract
Motion-blurred images challenge many computer vision algorithms, e.g., feature detection, motion estimation, or object recognition. Deep convolutional neural networks are state-of-the-art for image deblurring. However, obtaining training data with corresponding sharp and blurry image pairs can be difficult. In this paper, we present a differentiable reblur model for self-supervised motion deblurring, which enables the network to learn from real-world blurry image sequences without relying on sharp images for supervision. Our key insight is that motion cues obtained from consecutive images yield sufficient information to inform the deblurring task. We therefore formulate deblurring as an inverse rendering problem, taking into account the physical image formation process: we first predict two deblurred images from which we estimate the corresponding optical flow. Using these predictions, we re-render the blurred images and minimize the difference with respect to the original blurry inputs. We use both synthetic and real-world datasets for experimental evaluation. Our experiments demonstrate that self-supervised single image deblurring is feasible and leads to visually compelling results.
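
The self-supervision loop fits in a few lines. In this sketch, deblur_net, flow_net, and warp are hypothetical stand-ins for the paper's deblurring network, optical flow estimator, and differentiable warping operator:

def reblur_loss(deblur_net, flow_net, warp, blurry1, blurry2):
    """Self-supervision: re-render the blur from two deblurred predictions."""
    sharp1 = deblur_net(blurry1)
    sharp2 = deblur_net(blurry2)
    flow = flow_net(sharp1, sharp2)     # motion cue between the two predictions
    # approximate the blur integral by averaging a few warped positions
    taus = (0.25, 0.5, 0.75)
    reblurred = sum(warp(sharp1, tau * flow) for tau in taus) / len(taus)
    return (reblurred - blurry1).abs().mean()

Minimizing this difference against the original blurry inputs trains the deblurring network without ever observing a sharp ground-truth image.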

avg

pdf Project Page Blog [BibTex]


Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., van Gool, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, most of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video 2 Project Page Slides Poster Video 1 [BibTex]


GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

Schwarz, K., Liao, Y., Niemeyer, M., Geiger, A.

In Advances in Neural Information Processing Systems (NeurIPS), 2020 (inproceedings)

Abstract
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
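
A key ingredient is the multi-scale patch-based discriminator: rather than rendering full images, a K×K grid of rays is drawn at a random image location and scale, so the generator trains at high resolution for the cost of small patches. A rough PyTorch sketch of such a patch sampler (illustrative; the paper samples continuous ray locations):

import torch

def sample_patch_coords(height, width, k=16):
    """Pick a random-scale K x K grid of pixel locations for patch rays."""
    scale = torch.empty(1).uniform_(k / min(height, width), 0.9).item()
    span_h, span_w = scale * height, scale * width
    top = torch.empty(1).uniform_(0.0, height - span_h).item()
    left = torch.empty(1).uniform_(0.0, width - span_w).item()
    ys = torch.linspace(top, top + span_h, k)   # k coordinates per axis
    xs = torch.linspace(left, left + span_w, k)
    return torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (k, k, 2)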

avg

pdf suppmat video Project Page [BibTex]


Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent with respect to changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video 2 Project Page Video 1 Slides Poster [BibTex]


Learning Neural Light Transport

Sanzenbacher, P., Mescheder, L., Geiger, A.

Arxiv, 2020 (article)

Abstract
In recent years, deep generative models have gained significance due to their ability to synthesize natural-looking images with applications ranging from virtual reality to data augmentation for training computer vision models. While existing models are able to faithfully learn the image distribution of the training set, they often lack controllability as they operate in 2D pixel space and do not model the physical image formation process. In this work, we investigate the importance of 3D reasoning for photorealistic rendering. We present an approach for learning light transport in static and dynamic 3D scenes using a neural network with the goal of predicting photorealistic images. In contrast to existing approaches that operate in the 2D image domain, our approach reasons in both 3D and 2D space, thus enabling global illumination effects and manipulation of 3D scene geometry. Experimentally, we find that our model is able to produce photorealistic renderings of static and dynamic scenes. Moreover, it compares favorably to baselines which combine path tracing and image denoising at the same computational budget.

avg

arxiv [BibTex]


Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate how the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over state-of-the-art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.
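
Abstracting away the CARLA specifics, the aggregation loop combining the two key ideas might look as follows; every callable here is a hypothetical placeholder for the corresponding component described in the abstract:

def aggregate_and_train(policy, expert, rollout, uncertainty, fit, buffer,
                        iterations=10, threshold=0.5):
    """DAgger-style aggregation with critical-state sampling (schematic)."""
    for _ in range(iterations):
        for state in rollout(policy):                   # on-policy exploration
            if uncertainty(policy, state) > threshold:  # keep critical states only
                buffer.append((state, expert(state)))   # relabel with the expert
        fit(policy, buffer)                             # behavior cloning on buffer
    return policy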

avg

pdf suppmat Video 2 Project Page Slides Video 1 [BibTex]


HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking

Luiten, J., Osep, A., Dendorfer, P., Torr, P., Geiger, A., Leal-Taixe, L., Leibe, B.

International Journal of Computer Vision (IJCV), 2020 (article)

Abstract
Multi-Object Tracking (MOT) has been notoriously difficult to evaluate. Previous metrics overemphasize the importance of either detection or association. To address this, we present a novel MOT evaluation metric, HOTA (Higher Order Tracking Accuracy), which explicitly balances the effect of performing accurate detection, association and localization into a single unified metric for comparing trackers. HOTA decomposes into a family of sub-metrics which are able to evaluate each of five basic error types separately, which enables clear analysis of tracking performance. We evaluate the effectiveness of HOTA on the MOTChallenge benchmark, and show that it is able to capture important aspects of MOT performance not previously taken into account by established metrics. Furthermore, we show HOTA scores better align with human visual evaluation of tracking performance.
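
For reference, the headline score at a localization threshold \alpha combines detection accuracy and association accuracy under a geometric mean; the decomposition below follows the paper's definitions as we recall them, so consult the paper for the precise notation:

\mathrm{HOTA}_\alpha = \sqrt{\mathrm{DetA}_\alpha \cdot \mathrm{AssA}_\alpha}, \qquad
\mathrm{DetA}_\alpha = \frac{|\mathrm{TP}|}{|\mathrm{TP}| + |\mathrm{FN}| + |\mathrm{FP}|},

\mathrm{AssA}_\alpha = \frac{1}{|\mathrm{TP}|} \sum_{c \in \mathrm{TP}} \frac{|\mathrm{TPA}(c)|}{|\mathrm{TPA}(c)| + |\mathrm{FNA}(c)| + |\mathrm{FPA}(c)|},

with the final score averaging \mathrm{HOTA}_\alpha over localization thresholds \alpha \in \{0.05, 0.10, \ldots, 0.95\}.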

avg

pdf [BibTex]


Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.
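
The mixture-of-policies idea can be written compactly (schematic notation, not taken from the paper):

\pi(a \mid x) = \sum_{k=1}^{K} w_k(x)\, \pi_k(a \mid x), \qquad w_k(x) \ge 0, \quad \sum_{k=1}^{K} w_k(x) = 1,

where each expert \pi_k captures one driving mode and the observation-dependent gating weights w_k blend among them; both are first trained by behavior cloning and then refined against the navigation task reward.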

avg

pdf suppmat Video 2 Project Page Video 1 Slides [BibTex]


On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.

avg

pdf Project Page Slides Video Poster [BibTex]


Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition

Alhaija, H., Mustikovela, S., Jampani, V., Thies, J., Niessner, M., Geiger, A., Rother, C.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Neural rendering techniques promise efficient photo-realistic image synthesis while providing rich control over scene parameters by learning the physical image formation process. While several supervised methods have been proposed for this task, acquiring a dataset of images with accurately aligned 3D models is very difficult. The main contribution of this work is to lift this restriction by training a neural rendering algorithm from unpaired data. We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties. In contrast to a traditional graphics pipeline, our approach does not require specifying all scene properties, such as material parameters and lighting, by hand. Instead, we learn photo-realistic deferred rendering from a small set of 3D models and a larger set of unaligned real images, both of which are easy to acquire in practice. Simultaneously, we obtain accurate intrinsic decompositions of real images while not requiring paired ground truth. Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.

avg

pdf suppmat [BibTex]


Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
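
The key insight admits a one-line derivation (schematic notation, ours rather than the paper's). Let \hat{d} be the depth at which the ray r(d) = r_0 + d\,w crosses the surface defined by the level set f_\theta(p) = \tau. Differentiating f_\theta(r_0 + \hat{d}\,w) = \tau with respect to the network parameters \theta gives

\frac{\partial f_\theta}{\partial \theta} + \left( \nabla_p f_\theta \cdot w \right) \frac{\partial \hat{d}}{\partial \theta} = 0
\quad \Longrightarrow \quad
\frac{\partial \hat{d}}{\partial \theta} = -\left( \nabla_p f_\theta \cdot w \right)^{-1} \frac{\partial f_\theta}{\partial \theta},

so depth gradients are available analytically and no intermediate volumetric results need to be stored during training.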

avg

pdf suppmat Video 2 Project Page Video 1 Video 3 Slides Poster [BibTex]


Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art

Janai, J., Güney, F., Behl, A., Geiger, A.

Arxiv, Foundations and Trends in Computer Graphics and Vision, 2020 (book)

Abstract
Recent years have witnessed enormous progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles. As with any rapidly growing field, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several survey papers on particular sub-problems have appeared, no comprehensive survey on problems, datasets, and methods in computer vision for autonomous vehicles has been published. This monograph attempts to narrow this gap by providing a survey on the state-of-the-art datasets and techniques. Our survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding, and end-to-end learning for autonomous driving. Towards this goal, we analyze the performance of the state of the art on several challenging benchmarking datasets, including KITTI, MOT, and Cityscapes. In addition, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we also provide a website that allows navigating topics as well as methods and provides additional information.

avg

pdf Project Page link [BibTex]


Learning Implicit Surface Light Fields

Oechsle, M., Niemeyer, M., Reiser, C., Mescheder, L., Strauss, T., Geiger, A.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion and independent of the geometry. Moreover, we condition the surface light field with respect to the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capabilities of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.

avg

pdf suppmat Project Page [BibTex]

2012


Fourier-transform photocurrent spectroscopy using a supercontinuum light source

Petermann, C., Beigang, R., Fischer, P.

Applied Physics Letters, 100(6), 2012 (article)

Abstract
We demonstrate an implementation of frequency-encoded photocurrent spectroscopy using a supercontinuum light source. The spectrally broad light is spatially dispersed and modulated with a special mechanical chopper design that permits a continuous wavelength-dependent modulation. After recombination, the light beam contains a frequency-encoded spectrum which enables us to map the spectral response of a given sample in 60 ms and with a lateral resolution of 10 µm.
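
Schematically, the frequency encoding works as follows (our notation, not the paper's): if the chopper modulates each wavelength \lambda at its own frequency f(\lambda), the recorded photocurrent is

I(t) \propto \int R(\lambda)\, \Phi(\lambda)\, \big[ 1 + \cos(2\pi f(\lambda)\, t) \big]\, d\lambda,

where \Phi(\lambda) is the source spectrum and R(\lambda) the sample's spectral responsivity. A Fourier transform of I(t) returns the response on the frequency axis, and the known mapping f(\lambda) converts it back to wavelength.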

pf

DOI [BibTex]


Eine neue Form von Cavity Enhanced Absorption Spectroscopy

Petermann, C., Fischer, P.

DE Gruyter, 79(1), 2012, Best paper award OPTO 2011 (article)

Abstract
A new coupling scheme for cavity enhanced absorption spectroscopy makes use of an intracavity acousto-optical modulator to actively switch light into (and out of) a resonator. This allows cavity ring-down spectroscopy (CRDS) to be implemented with broadband, temporally incoherent light sources of low spectral power density. The method is demonstrated for the first time using a broadband supercontinuum source.
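
For context, cavity ring-down spectroscopy infers absorption from decay times rather than transmitted intensities; the standard relations (textbook material, not specific to this paper) are

I(t) = I_0\, e^{-t/\tau}, \qquad \alpha = \frac{1}{c} \left( \frac{1}{\tau} - \frac{1}{\tau_0} \right),

where \tau and \tau_0 are the ring-down times with and without the sample and c is the speed of light. Because only decay times enter, the technique tolerates the low spectral power density of broadband incoherent sources, which is what the active acousto-optical coupling makes accessible here.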

pf

link (url) [BibTex]