

2020


Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label-efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
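
For concreteness, a minimal sketch of the two-stage setup the abstract describes: a segmentation-based visual abstraction feeding a behavior-cloned policy. All module names, layer sizes, and the action space are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class AbstractionPolicy(nn.Module):
    def __init__(self, num_classes=6, num_actions=2):
        super().__init__()
        # Stage 1: segmentation backbone producing the visual abstraction;
        # in the paper's setting it is trained separately, possibly from few
        # annotated images and few classes.
        self.segmenter = nn.Conv2d(3, num_classes, kernel_size=1)
        # Stage 2: driving policy operating on the abstraction, not on pixels.
        self.policy = nn.Sequential(
            nn.Conv2d(num_classes, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_actions),
        )

    def forward(self, image):
        abstraction = self.segmenter(image).softmax(dim=1)  # class probabilities
        return self.policy(abstraction)                     # e.g. steer, throttle

model = AbstractionPolicy()
frames = torch.rand(8, 3, 64, 64)                     # dummy camera frames
expert = torch.rand(8, 2)                             # dummy expert actions
loss = nn.functional.mse_loss(model(frames), expert)  # behavior cloning loss
loss.backward()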

avg

pdf slides video Project Page [BibTex]


Convolutional Occupancy Networks

Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
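
The core mechanism can be illustrated in a few lines: a convolutional encoder yields a feature plane, and occupancy at a continuous query point is decoded from bilinearly interpolated local features. This is a hedged sketch under assumed shapes and layer sizes (with a 2D single-channel plane standing in for the encoded point cloud or voxel input), not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccupancySketch(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        # Convolutional encoder: rasterized input -> translation-equivariant
        # feature plane, providing the inductive bias the abstract mentions.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        # Implicit decoder: (local feature, query coordinate) -> occupancy.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 2, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, plane, queries):           # queries in [-1, 1]^2
        feats = self.encoder(plane)              # (B, C, H, W)
        grid = queries.unsqueeze(1)              # (B, 1, M, 2)
        local = F.grid_sample(feats, grid, align_corners=True)  # (B, C, 1, M)
        local = local.squeeze(2).transpose(1, 2)                # (B, M, C)
        return self.decoder(torch.cat([local, queries], dim=-1))  # logits

net = ConvOccupancySketch()
occ = net(torch.rand(4, 1, 32, 32), torch.rand(4, 100, 2) * 2 - 1)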

avg

pdf suppmat video Project Page [BibTex]


Transient coarsening and the motility of optically heated Janus colloids in a binary liquid mixture

Gomez-Solano, J., Roy, S., Araki, T., Dietrich, S., Maciolek, A.

Soft Matter, 16, pages: 8359-8371, Royal Society of Chemistry, August 2020 (article)

Abstract
A gold-capped Janus particle suspended in a near-critical binary liquid mixture can self-propel under illumination. We have immobilized such a particle in a narrow channel and carried out a combined experimental and theoretical study of the non-equilibrium dynamics of the binary solvent around it, from the moment illumination is switched on until the steady state is reached. In the theoretical study we use both a purely diffusive and a hydrodynamic model, which we solve numerically. Our results demonstrate a remarkable complexity of the time evolution of the concentration field around the colloid. This evolution is governed by the combined effects of the temperature gradient and the wettability, and crucially depends on whether the colloid is free to move or is trapped. For the trapped colloid, all approaches indicate that the early time dynamics is purely diffusive and characterized by composition layers travelling with constant speed from the surface of the colloid into the bulk of the solvent. Subsequently, hydrodynamic effects set in. Anomalously large nonequilibrium fluctuations, which result from the temperature gradient and the vicinity of the critical point of the binary liquid mixture, give rise to strong concentration fluctuations in the solvent and to permanently changing coarsening patterns not observed for a mobile particle. The early time dynamics around an initially stationary Janus colloid produces a force which is able to set the colloid into motion. The propulsion due to this transient dynamics is in the direction opposite to that observed after the steady state is attained.

icm

link (url) DOI [BibTex]


Category Level Object Pose Estimation via Neural Analysis-by-Synthesis

Chen, X., Dong, Z., Song, J., Geiger, A., Hilliges, O.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Many object pose estimation algorithms rely on the analysis-by-synthesis framework which requires explicit representations of individual object instances. In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module that is capable of implicitly representing the appearance, shape and pose of entire object categories, thus removing the need for explicit CAD models per object instance. The image synthesis network is designed to efficiently span the pose configuration space so that model capacity can be used to capture the shape and local appearance (i.e., texture) variations jointly. At inference time the synthesized images are compared to the target via an appearance-based loss and the error signal is backpropagated through the network to the input parameters. Keeping the network parameters fixed, this allows for iterative optimization of the object pose, shape and appearance in a joint manner, and we experimentally show that the method can recover the orientation of objects with high accuracy from 2D images alone. When provided with depth measurements to overcome scale ambiguities, the method can also accurately recover the full 6DOF pose.
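
The fitting loop the abstract outlines can be sketched as follows. Here renderer is a stand-in for the parametric neural synthesis module; all names, shapes, and the learning rate are assumptions.

import torch

def fit_pose(renderer, target, pose_init, latent_init, steps=100, lr=1e-2):
    # Network weights stay fixed; only the pose and the latent shape or
    # appearance code receive gradients from the appearance-based loss.
    pose = pose_init.clone().requires_grad_(True)
    latent = latent_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose, latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        synthesized = renderer(pose, latent)        # differentiable synthesis
        loss = (synthesized - target).abs().mean()  # compare to target image
        loss.backward()                             # gradients w.r.t. inputs
        opt.step()
    return pose.detach(), latent.detach()

# Toy stand-in renderer so the sketch runs end-to-end:
toy = lambda pose, latent: (pose.sum() + latent.sum()) * torch.ones(3, 8, 8)
pose, latent = fit_pose(toy, torch.zeros(3, 8, 8), torch.rand(6), torch.rand(16))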

avg

Project Page pdf suppmat [BibTex]


Interface-mediated spontaneous symmetry breaking and mutual communication between drops containing chemically active particles

Singh, D., Domínguez, A., Choudhury, U., Kottapalli, S., Popescu, M., Dietrich, S., Fischer, P.

Nature Communications, 11(2210), May 2020 (article)

Abstract
Symmetry breaking and the emergence of self-organized patterns is the hallmark of complexity. Here, we demonstrate that a sessile drop, containing titania powder particles with negligible self-propulsion, exhibits a transition to collective motion leading to self-organized flow patterns. This phenomenology emerges through a novel mechanism involving the interplay between the chemical activity of the photocatalytic particles, which induces Marangoni stresses at the liquid–liquid interface, and the geometrical confinement provided by the drop. The response of the interface to the chemical activity of the particles is the source of a significantly amplified hydrodynamic flow within the drop, which moves the particles. Furthermore, in ensembles of such active drops long-ranged ordering of the flow patterns within the drops is observed. We show that the ordering is dictated by a chemical communication between drops, i.e., an alignment of the flow patterns is induced by the gradients of the chemicals emanating from the active particles, rather than by hydrodynamic interactions.

pf icm

link (url) DOI [BibTex]


Self-supervised motion deblurring

Liu, P., Janai, J., Pollefeys, M., Sattler, T., Geiger, A.

IEEE Robotics and Automation Letters, 2020 (article)

Abstract
Motion-blurred images challenge many computer vision algorithms, e.g., feature detection, motion estimation, or object recognition. Deep convolutional neural networks are state-of-the-art for image deblurring. However, obtaining training data with corresponding sharp and blurry image pairs can be difficult. In this paper, we present a differentiable reblur model for self-supervised motion deblurring, which enables the network to learn from real-world blurry image sequences without relying on sharp images for supervision. Our key insight is that motion cues obtained from consecutive images yield sufficient information to inform the deblurring task. We therefore formulate deblurring as an inverse rendering problem, taking into account the physical image formation process: we first predict two deblurred images from which we estimate the corresponding optical flow. Using these predictions, we re-render the blurred images and minimize the difference with respect to the original blurry inputs. We use both synthetic and real datasets for experimental evaluation. Our experiments demonstrate that self-supervised single image deblurring is feasible and leads to visually compelling results.
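
A schematic of the reblur objective, with one loud simplification: the flow-based warping described in the abstract is replaced here by a plain linear blend between the two predicted sharp frames, just to make the structure of the self-supervised loss concrete.

import torch

def reblur_loss(sharp0, sharp1, blurry, num_samples=8):
    # Approximate the exposure integral by averaging frames sampled along the
    # motion trajectory between the two predicted sharp images. The real
    # method warps via the estimated optical flow; the blend below is only a
    # stand-in for that warp.
    ts = torch.linspace(0.0, 1.0, num_samples)
    reblurred = torch.stack([(1 - t) * sharp0 + t * sharp1 for t in ts]).mean(0)
    return (reblurred - blurry).abs().mean()  # no sharp ground truth needed

loss = reblur_loss(torch.rand(3, 32, 32), torch.rand(3, 32, 32),
                   torch.rand(3, 32, 32))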

avg

pdf Project Page Blog [BibTex]


Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., van Gool, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) have demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
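
A toy rendition of the binary primitive hierarchy (node names and the size of the per-primitive parameter vector are assumptions): simple parts remain shallow leaves while complex parts are refined deeper in the tree.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PrimitiveNode:
    params: list                       # e.g. primitive pose/size parameters
    left: Optional["PrimitiveNode"] = None
    right: Optional["PrimitiveNode"] = None

def leaves(node):
    """Collect the primitives that together represent the final shape."""
    if node.left is None and node.right is None:
        return [node.params]
    return leaves(node.left) + leaves(node.right)

# One simple part (single leaf) and one complex part (refined one level deeper):
root = PrimitiveNode([0.0] * 11,
                     PrimitiveNode([0.1] * 11),
                     PrimitiveNode([0.2] * 11,
                                   PrimitiveNode([0.3] * 11),
                                   PrimitiveNode([0.4] * 11)))
print(len(leaves(root)))  # 3 primitives in total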

avg

pdf suppmat Video 2 Project Page Slides Poster Video 1 [BibTex]


Axisymmetric spheroidal squirmers and self-diffusiophoretic particles

Pöhnl, R., Popescu, M. N., Uspal, W. E.

Journal of Physics: Condensed Matter, 32(16), IOP Publishing, Bristol, 2020 (article)

icm

DOI [BibTex]


Tracer diffusion on a crowded random Manhattan lattice

Mejía-Monasterio, C., Nechaev, S., Oshanin, G., Vasilyev, O.

New Journal of Physics, 22(3), IOP Publishing, Bristol, 2020 (article)

icm

DOI [BibTex]


GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

Schwarz, K., Liao, Y., Niemeyer, M., Geiger, A.

In Advances in Neural Information Processing Systems (NeurIPS), 2020 (inproceedings)

Abstract
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
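
The multi-scale patch trick mentioned above can be sketched as follows (illustrative, not the reference code): a K x K grid of ray locations is drawn at a random scale and position, so the discriminator sees patches rather than costly full volumetric renderings.

import torch

def sample_patch_coords(img_size=128, patch=16):
    # Random scale between "patch covers patch pixels" and "patch covers the
    # whole image", then a random position that keeps the patch inside.
    scale = torch.empty(1).uniform_(patch / img_size, 1.0).item()
    extent = scale * (img_size - 1)
    u0 = torch.empty(1).uniform_(0, img_size - 1 - extent).item()
    v0 = torch.empty(1).uniform_(0, img_size - 1 - extent).item()
    lin = torch.linspace(0, extent, patch)
    v, u = torch.meshgrid(lin + v0, lin + u0, indexing="ij")
    return torch.stack([u, v], dim=-1)  # (patch, patch, 2) pixel coordinates

coords = sample_patch_coords()  # rays are then cast through these pixels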

avg

pdf suppmat video Project Page [BibTex]


Wetting transitions on soft substrates

Napiorkowski, M., Schimmele, L., Dietrich, S.

EPL, 129(1), EDP Sciences, Les-Ulis, 2020 (article)

icm

DOI [BibTex]


Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent with respect to changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video 2 Project Page Video 1 Slides Poster [BibTex]


Blessing and Curse: How a Supercapacitor Large Capacitance Causes its Slow Charging

Lian, C., Janssen, M., Liu, H., van Roij, R.

Physical Review Letters, 124(7), American Physical Society, Woodbury, N.Y., 2020 (article)

icm

DOI [BibTex]


Interplay of quenching temperature and drift in Brownian dynamics

Khalilian, H., Nejad, M. R., Moghaddam, A. G., Rohwer, C. M.

EPL, 128(6), EDP Sciences, Les-Ulis, 2020 (article)

icm

DOI [BibTex]


Learning Neural Light Transport

Sanzenbacher, P., Mescheder, L., Geiger, A.

arXiv, 2020 (article)

Abstract
In recent years, deep generative models have gained significance due to their ability to synthesize natural-looking images with applications ranging from virtual reality to data augmentation for training computer vision models. While existing models are able to faithfully learn the image distribution of the training set, they often lack controllability as they operate in 2D pixel space and do not model the physical image formation process. In this work, we investigate the importance of 3D reasoning for photorealistic rendering. We present an approach for learning light transport in static and dynamic 3D scenes using a neural network with the goal of predicting photorealistic images. In contrast to existing approaches that operate in the 2D image domain, our approach reasons in both 3D and 2D space, thus enabling global illumination effects and manipulation of 3D scene geometry. Experimentally, we find that our model is able to produce photorealistic renderings of static and dynamic scenes. Moreover, it compares favorably to baselines which combine path tracing and image denoising at the same computational budget.

avg

arxiv [BibTex]


Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate that the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves the driving success rate by 16% over the state of the art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude, without the use of any additional modality, auxiliary tasks, architectural modifications, or reward from the environment.
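
Idea (2) can be made concrete with a small sketch under assumed interfaces: a replay buffer that retains the on-policy samples on which the current policy is most uncertain, so later iterations focus on those critical states.

import heapq

class UncertaintyReplayBuffer:
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.heap = []      # min-heap keyed by uncertainty
        self.counter = 0    # tie-breaker for equal uncertainties

    def add(self, uncertainty, sample):
        self.counter += 1
        item = (uncertainty, self.counter, sample)
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
        elif uncertainty > self.heap[0][0]:
            heapq.heapreplace(self.heap, item)  # evict least uncertain sample

    def samples(self):
        return [s for _, _, s in self.heap]

buf = UncertaintyReplayBuffer(capacity=3)
for u, s in [(0.1, "a"), (0.9, "b"), (0.5, "c"), (0.7, "d")]:
    buf.add(u, s)
print(sorted(buf.samples()))  # keeps b, c, d: the high-uncertainty states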

avg

pdf suppmat Video 2 Project Page Slides Video 1 [BibTex]


Fractal-seaweeds type functionalization of graphene

Amsharov, K., Sharapa, D. I., Vasilyev, O. A., Martin, O., Hauke, F., Görling, A., Soni, H., Hirsch, A.

Carbon, 158, pages: 435-448, Elsevier, Amsterdam, 2020 (article)

icm

DOI [BibTex]


Effective pair interaction of patchy particles in critical fluids

Farahmand Bafi, N., Nowakowski, P., Dietrich, S.

The Journal of Chemical Physics, 152(11), American Institute of Physics, Woodbury, N.Y., 2020 (article)

icm

DOI [BibTex]


HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking

Luiten, J., Osep, A., Dendorfer, P., Torr, P., Geiger, A., Leal-Taixe, L., Leibe, B.

International Journal of Computer Vision (IJCV), 2020 (article)

Abstract
Multi-Object Tracking (MOT) has been notoriously difficult to evaluate. Previous metrics overemphasize the importance of either detection or association. To address this, we present a novel MOT evaluation metric, HOTA (Higher Order Tracking Accuracy), which explicitly balances the effect of performing accurate detection, association and localization into a single unified metric for comparing trackers. HOTA decomposes into a family of sub-metrics which are able to evaluate each of five basic error types separately, which enables clear analysis of tracking performance. We evaluate the effectiveness of HOTA on the MOTChallenge benchmark, and show that it is able to capture important aspects of MOT performance not previously taken into account by established metrics. Furthermore, we show HOTA scores better align with human visual evaluation of tracking performance.
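
For orientation, the core definition from the paper: at a localization threshold \alpha, detection accuracy (DetA) and association accuracy (AssA) are balanced by a geometric mean, and the final score averages over thresholds.

\[
  \mathrm{HOTA}_\alpha
    = \sqrt{\frac{\sum_{c \in \mathrm{TP}} \mathcal{A}(c)}{|\mathrm{TP}| + |\mathrm{FN}| + |\mathrm{FP}|}}
    = \sqrt{\mathrm{DetA}_\alpha \cdot \mathrm{AssA}_\alpha},
  \qquad
  \mathrm{HOTA} = \int_0^1 \mathrm{HOTA}_\alpha \, d\alpha,
\]

where \mathcal{A}(c) measures, for each true positive c, how well the predicted and ground-truth tracks containing it align over time.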

avg

pdf [BibTex]


Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.
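
A minimal mixture-of-policies sketch matching the key idea (all architecture details are assumptions, not the authors' model): a gating network mixes the outputs of several mode-specific policies.

import torch
import torch.nn as nn

class MixtureDrivingPolicy(nn.Module):
    def __init__(self, feat_dim=64, num_modes=3, num_actions=2):
        super().__init__()
        # One expert head per driving mode, plus a gate that infers the mode.
        self.experts = nn.ModuleList(
            nn.Linear(feat_dim, num_actions) for _ in range(num_modes))
        self.gate = nn.Sequential(nn.Linear(feat_dim, num_modes),
                                  nn.Softmax(dim=-1))

    def forward(self, features):
        weights = self.gate(features)                     # (B, M) mode weights
        actions = torch.stack([e(features) for e in self.experts], dim=1)
        return (weights.unsqueeze(-1) * actions).sum(dim=1)  # mixed action

policy = MixtureDrivingPolicy()
action = policy(torch.rand(8, 64))  # per the abstract: first trained with
                                    # behavior cloning, then refined with the
                                    # navigation task reward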

avg

pdf suppmat Video 2 Project Page Video 1 Slides [BibTex]


On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.
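
The single-objective formulation can be sketched as follows, with a toy differentiable renderer standing in for the paper's image formation model; all shapes, parameter counts, and the solver settings are assumptions.

import torch

poses = torch.zeros(10, 6, requires_grad=True)      # one 6-DoF pose per view
geometry = torch.rand(500, 3, requires_grad=True)   # vertex positions
brdf = torch.rand(500, 7, requires_grad=True)       # per-vertex svBRDF params

def objective(render_fn, images):
    # Photometric error summed over all views: a single objective over all
    # unknowns, minimizable by an off-the-shelf gradient-based solver.
    return sum((render_fn(p, geometry, brdf) - img).pow(2).mean()
               for p, img in zip(poses, images))

toy_render = lambda p, g, b: (p.sum() + g.sum() + b.sum()) * torch.ones(3, 16, 16)
images = [torch.rand(3, 16, 16) for _ in range(10)]
opt = torch.optim.Adam([poses, geometry, brdf], lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    loss = objective(toy_render, images)
    loss.backward()
    opt.step()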

avg

pdf Project Page Slides Video Poster [BibTex]


Cassie-Wenzel transition of a binary liquid mixture on a nanosculptured surface

Singh, S. L., Schimmele, L., Dietrich, S.

Physical Review E, 101(5), American Physical Society, Melville, NY, 2020 (article)

icm

DOI [BibTex]


Adopting the Boundary Homogenization Approximation from Chemical Kinetics to Motile Chemically Active Particles

Popescu, M. N., Uspal, W. E.

In Chemical Kinetics, pages: 517-540, World Scientific, New Jersey, NJ, 2020 (incollection)

icm

DOI [BibTex]


Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition

Alhaija, H., Mustikovela, S., Jampani, V., Thies, J., Niessner, M., Geiger, A., Rother, C.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Neural rendering techniques promise efficient photo-realistic image synthesis while providing rich control over scene parameters by learning the physical image formation process. While several supervised methods have been proposed for this task, acquiring a dataset of images with accurately aligned 3D models is very difficult. The main contribution of this work is to lift this restriction by training a neural rendering algorithm from unpaired data. We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties. In contrast to a traditional graphics pipeline, our approach does not require specifying all scene properties, such as material parameters and lighting, by hand. Instead, we learn photo-realistic deferred rendering from a small set of 3D models and a larger set of unaligned real images, both of which are easy to acquire in practice. Simultaneously, we obtain accurate intrinsic decompositions of real images while not requiring paired ground truth. Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.

avg

pdf suppmat [BibTex]


Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
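
The key insight admits a compact derivation. Writing the surface point along a ray with origin r_0 and direction w as \hat{p} = r_0 + \hat{d}\,w, the occupancy network f_\theta satisfies f_\theta(\hat{p}) = \tau at the surface, and implicit differentiation of this identity yields the depth gradient analytically:

\[
  \frac{\partial f_\theta(\hat{p})}{\partial \theta}
  + \nabla_{p} f_\theta(\hat{p}) \cdot w \,
    \frac{\partial \hat{d}}{\partial \theta} = 0
  \quad\Longrightarrow\quad
  \frac{\partial \hat{d}}{\partial \theta}
  = -\bigl(\nabla_{p} f_\theta(\hat{p}) \cdot w\bigr)^{-1}
     \frac{\partial f_\theta(\hat{p})}{\partial \theta},
\]

so gradients with respect to the network parameters are obtained at the surface point alone, without storing intermediate results along the ray.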

avg

pdf suppmat Video 2 Project Page Video 1 Video 3 Slides Poster [BibTex]


Energy storage in steady states under cyclic local energy input

Zhang, Y., Holyst, R., Maciolek, A.

Physical Review E, 101(1), American Physical Society, Melville, NY, 2020 (article)

icm

DOI [BibTex]


Numerical simulations of self-diffusiophoretic colloids at fluid interfaces

Peter, T., Malgaretti, P., Rivas, N., Scagliarini, A., Harting, J., Dietrich, S.

Soft Matter, 16(14):3536-3547, Royal Society of Chemistry, Cambridge, UK, 2020 (article)

icm

DOI [BibTex]


Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art

Janai, J., Güney, F., Behl, A., Geiger, A.

arXiv, Foundations and Trends in Computer Graphics and Vision, 2020 (book)

Abstract
Recent years have witnessed enormous progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles. As with any rapidly growing field, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several survey papers on particular sub-problems have appeared, no comprehensive survey on problems, datasets, and methods in computer vision for autonomous vehicles has been published. This monograph attempts to narrow this gap by providing a survey of the state-of-the-art datasets and techniques. Our survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding, and end-to-end learning for autonomous driving. Towards this goal, we analyze the performance of the state of the art on several challenging benchmarking datasets, including KITTI, MOT, and Cityscapes. In addition, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we also provide a website that allows navigating topics as well as methods and provides additional information.

avg

pdf Project Page link [BibTex]


Learning Implicit Surface Light Fields

Oechsle, M., Niemeyer, M., Reiser, C., Mescheder, L., Strauss, T., Geiger, A.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion and independent of the geometry. Moreover, we condition the surface light field with respect to the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capabilities of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
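
One reading of the conditioned surface light field as an interface (layer sizes and shapes assumed, not the paper's network): appearance is a continuous function of surface point, viewing direction, and the position and color of a small light source, so relighting amounts to changing the conditioning.

import torch
import torch.nn as nn

field = nn.Sequential(             # input: point (3) + view dir (3)
    nn.Linear(12, 128), nn.ReLU(), #        + light position (3) + color (3)
    nn.Linear(128, 3), nn.Sigmoid(),
)

def shade(points, view_dirs, light_pos, light_color):
    # Broadcast the light parameters to every query point and decode RGB.
    n = points.shape[0]
    cond = torch.cat([points, view_dirs,
                      light_pos.expand(n, 3), light_color.expand(n, 3)],
                     dim=-1)
    return field(cond)             # per-point RGB under this light

rgb = shade(torch.rand(100, 3), torch.rand(100, 3),
            torch.tensor([1.0, 2.0, 3.0]), torch.tensor([1.0, 1.0, 1.0]))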

avg

pdf suppmat Project Page [BibTex]

2012


Structure and aggregation of colloids immersed in critical solvents

Mohry, T. F., Maciolek, A., Dietrich, S.

Journal of Chemical Physics, 136(22), 2012 (article)

icm

DOI [BibTex]


Ewald sum for hydrodynamic interactions with periodicity in two dimensions

Bleibel, J.

Journal of Physics A: Mathematical and Theoretical, 45(22), 2012 (article)

icm

DOI [BibTex]


Nanodroplets at topographic steps

Bartsch, H.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]


Patchy worm-like micelles: solution structure studied by small-angle neutron scattering

Rosenfeldt, S., Luedel, F., Schulreich, C., Hellweg, T., Radulescu, A., Schmelz, J., Schmalz, H., Harnau, L.

Physical Chemistry Chemical Physics, 14, pages: 12750-12756, 2012 (article)

icm

DOI [BibTex]


Time correlations and persistence probability of a Brownian particle in a shear flow

Chakraborty, D.

European Physical Journal B, 85(8), Springer-Verlag Heidelberg, Heidelberg, 2012 (article)

icm

DOI [BibTex]


Local theory for ions in binary liquid mixtures

Bier, M., Gambassi, A., Dietrich, S.

The Journal of Chemical Physics, 137(3), American Institute of Physics, Woodbury, N.Y., 2012 (article)

icm

DOI [BibTex]


Effect of ions on confined near-critical binary aqueous mixture

Pousaneh, F., Ciach, A., Maciolek, A.

Soft Matter, 8(29):7567-7581, 2012 (article)

icm

DOI [BibTex]


Janus particles in critical liquids

Labbe-Laurent, M.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]


Surface integration approach: a new technique for evaluating geometry dependent forces between objects of various geometry and a plate

Dantchev, D., Valchev, G.

Journal of Colloid and Interface Science, 372(1):148-163, 2012 (article)

icm

DOI [BibTex]


Phase equilibria of binary liquid crystals

Klöss, H.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]


Pinning of drops at superhydrophobic surfaces

Daschke, L.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]


Stability of thin wetting films on chemically nanostructured surfaces

Checco, A., Ocko, B. M., Tasinkevych, M., Dietrich, S.

Physical Review Letters, 109(16), American Physical Society, Woodbury, N.Y., 2012 (article)

icm

DOI [BibTex]


The structure of fluids with impurities

Bier, M., Harnau, L.

Zeitschrift für physikalische Chemie, 226(7-8):807-814, Akademische Verlagsgesellschaft, Frankfurt am Main, 2012 (article)

icm

DOI [BibTex]


Impedance spectroscopy of ions at interfaces

Reindl, A.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]


Stepwise swelling of a thin film of lamellae-forming poly(styrene-b-butadiene) in cyclohexane vapor

Di, Z., Posselt, D., Smilgies, D., Li, R., Rauscher, M., Potemkin, I. I., Papadakis, C. M.

Macromolecules, 45(12):5185-5195, 2012 (article)

icm

link (url) DOI [BibTex]


A close look at proteins: submolecular resolution of two- and three-dimensionally folded cytochrome c at surfaces

Deng, Z., Thontasen, N., Malinowski, N., Rinke, G., Harnau, L., Rauschenbach, S., Kern, K.

Nano Letters, 12, pages: 2452-2458, 2012 (article)

icm

DOI [BibTex]


Field-induced breakup of emulsion droplets stabilized by colloidal particles

Kim, E. G., Stratford, K., Clegg, P. S., Cates, M. E.

Physical Review E, 85(2), 2012 (article)

icm

DOI [BibTex]


Surface of an evaporating liquid

Arnold, D.

Universität Stuttgart, Stuttgart, 2012 (mastersthesis)

icm

[BibTex]


Precursor films in wetting phenomena

Popescu, M. N., Oshanin, G., Dietrich, S., Cazabat, A. M.

Journal of Physics: Condensed Matter, 24, 2012 (article)

icm

DOI [BibTex]