

2020


Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
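
A minimal sketch of the two-stage idea described above (behavior cloning on top of a segmentation-based abstraction rather than raw RGB), assuming PyTorch; the six-class abstraction, layer sizes and three-dimensional action interface are illustrative placeholders, not the architecture used in the paper.

import torch
import torch.nn as nn

class AbstractionPolicy(nn.Module):
    def __init__(self, num_classes=6, num_actions=3):
        super().__init__()
        # small CNN over the one-hot semantic mask instead of the raw image
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_actions)  # e.g. steer, throttle, brake

    def forward(self, seg_onehot):
        return self.head(self.encoder(seg_onehot))

# one behavior-cloning step: regress expert controls from the abstraction
policy = AbstractionPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
seg = torch.zeros(8, 6, 96, 192).scatter_(1, torch.randint(0, 6, (8, 1, 96, 192)), 1.0)
expert_actions = torch.rand(8, 3)
loss = nn.functional.l1_loss(policy(seg), expert_actions)
opt.zero_grad(); loss.backward(); opt.step()

The point of the sketch is only that annotation cost enters through the segmentation model producing seg, not through the comparatively cheap driving policy trained on top of it.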

avg

pdf slides video Project Page [BibTex]


Convolutional Occupancy Networks

Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
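
A rough sketch of the core mechanism (convolutional features queried at continuous 3D locations and decoded to occupancy), assuming PyTorch; the single ground-plane projection, feature size and decoder depth are simplifying assumptions rather than the published configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccupancySketch(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        # encoder: turn a coarse top-down projection of the input into a 2D feature plane
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        # decoder: small MLP on (query point, locally interpolated feature)
        self.decoder = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, plane_input, query_pts):
        # plane_input: (B,1,H,W); query_pts: (B,N,3) with coordinates in [-1,1]
        feat_plane = self.encoder(plane_input)                       # (B,C,H,W)
        grid = query_pts[..., :2].unsqueeze(1)                       # (B,1,N,2): x,y on the plane
        feat = F.grid_sample(feat_plane, grid, align_corners=True)   # (B,C,1,N) bilinear lookup
        feat = feat.squeeze(2).permute(0, 2, 1)                      # (B,N,C)
        return self.decoder(torch.cat([query_pts, feat], dim=-1))    # occupancy logits (B,N,1)

model = ConvOccupancySketch()
logits = model(torch.rand(2, 1, 64, 64), torch.rand(2, 100, 3) * 2 - 1)

The translation equivariance comes from the convolutional encoder, while the bilinear lookup keeps the representation continuous in the query coordinates.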

avg

pdf suppmat video Project Page [BibTex]



Category Level Object Pose Estimation via Neural Analysis-by-Synthesis

Chen, X., Dong, Z., Song, J., Geiger, A., Hilliges, O.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Many object pose estimation algorithms rely on the analysis-by-synthesis framework which requires explicit representations of individual object instances. In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module that is capable of implicitly representing the appearance, shape and pose of entire object categories, thus rendering the need for explicit CAD models per object instance unnecessary. The image synthesis network is designed to efficiently span the pose configuration space so that model capacity can be used to capture the shape and local appearance (i.e., texture) variations jointly. At inference time the synthesized images are compared to the target via an appearance based loss and the error signal is backpropagated through the network to the input parameters. Keeping the network parameters fixed, this allows for iterative optimization of the object pose, shape and appearance in a joint manner and we experimentally show that the method can recover orientation of objects with high accuracy from 2D images alone. When provided with depth measurements, to overcome scale ambiguities, the method can accurately recover the full 6DOF pose successfully.
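
The fitting procedure can be sketched in a few lines, assuming PyTorch; the generator below is a stand-in for the parametric synthesis network, and the 7-dimensional pose and 64-dimensional latent code are illustrative choices. The essential point is that the network weights stay frozen while the loss is backpropagated into its inputs.

import torch
import torch.nn as nn

# stand-in for the image synthesis network (pose + latent code -> image)
generator = nn.Sequential(nn.Linear(7 + 64, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))
for p in generator.parameters():
    p.requires_grad_(False)  # keep the synthesis network fixed

target = torch.rand(3 * 32 * 32)                 # observed image (flattened)
pose = torch.zeros(7, requires_grad=True)        # e.g. translation + rotation parameters
latent = torch.zeros(64, requires_grad=True)     # shape/appearance code
opt = torch.optim.Adam([pose, latent], lr=1e-2)

for _ in range(100):
    rendered = generator(torch.cat([pose, latent]))
    loss = nn.functional.mse_loss(rendered, target)   # appearance-based loss
    opt.zero_grad(); loss.backward(); opt.step()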

avg

Project Page pdf suppmat [BibTex]



Biocompatible magnetic micro‐ and nanodevices: Fabrication of FePt nanopropellers and cell transfection

Kadiri, V. M., Bussi, C., Holle, A. W., Son, K., Kwon, H., Schütz, G., Gutierrez, M. G., Fischer, P.

Advanced Materials, 32(2001114), May 2020 (article)

Abstract
The application of nanoparticles for drug or gene delivery promises benefits in the form of single‐cell‐specific therapeutic and diagnostic capabilities. Many methods of cell transfection rely on unspecific means to increase the transport of genetic material into cells. Targeted transport is in principle possible with magnetically propelled micromotors, which allow responsive nanoscale actuation and delivery. However, many commonly used magnetic materials (e.g., Ni and Co) are not biocompatible, possess weak magnetic remanence (Fe3O4), or cannot be implemented in nanofabrication schemes (NdFeB). Here, it is demonstrated that co‐depositing iron (Fe) and platinum (Pt) followed by one single annealing step, without the need for solution processing, yields ferromagnetic FePt nanomotors that are noncytotoxic, biocompatible, and possess a remanence and magnetization that rival those of permanent NdFeB micromagnets. Active cell targeting and magnetic transfection of lung carcinoma cells are demonstrated using gradient‐free rotating millitesla fields to drive the FePt nanopropellers. The carcinoma cells express enhanced green fluorescent protein after internalization and cell viability is unaffected by the presence of the FePt nanopropellers. The results establish FePt, prepared in the L10 phase, as a promising magnetic material for biomedical applications with superior magnetic performance, especially for micro‐ and nanodevices.

pf mms

link (url) DOI [BibTex]


Self-supervised motion deblurring

Liu, P., Janai, J., Pollefeys, M., Sattler, T., Geiger, A.

IEEE Robotics and Automation Letters, 2020 (article)

Abstract
Motion blurry images challenge many computer vision algorithms, e.g., feature detection, motion estimation, or object recognition. Deep convolutional neural networks are state-of-the-art for image deblurring. However, obtaining training data with corresponding sharp and blurry image pairs can be difficult. In this paper, we present a differentiable reblur model for self-supervised motion deblurring, which enables the network to learn from real-world blurry image sequences without relying on sharp images for supervision. Our key insight is that motion cues obtained from consecutive images yield sufficient information to inform the deblurring task. We therefore formulate deblurring as an inverse rendering problem, taking into account the physical image formation process: we first predict two deblurred images from which we estimate the corresponding optical flow. Using these predictions, we re-render the blurred images and minimize the difference with respect to the original blurry inputs. We use both synthetic and real datasets for experimental evaluations. Our experiments demonstrate that self-supervised single image deblurring is indeed feasible and leads to visually compelling results.
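
The reblur-and-compare objective can be sketched as follows, assuming PyTorch; the deblurring and flow networks are omitted, and averaging warped copies of a single predicted sharp frame over the exposure is a simplification of the paper's image formation model.

import torch
import torch.nn.functional as F

def warp(img, flow):
    # backward-warp img (B,C,H,W) with a dense flow field (B,2,H,W) given in pixels
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().unsqueeze(0)   # (1,2,H,W)
    grid = base + flow
    gx = 2 * grid[:, 0] / (w - 1) - 1                          # normalize to [-1,1]
    gy = 2 * grid[:, 1] / (h - 1) - 1
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1), align_corners=True)

def reblur_loss(blurry, sharp, flow, steps=8):
    # re-render the blur by averaging frames warped along the motion over the exposure
    frames = [warp(sharp, flow * t) for t in torch.linspace(0, 1, steps)]
    reblurred = torch.stack(frames).mean(dim=0)
    return F.l1_loss(reblurred, blurry)   # self-supervision: no sharp ground truth needed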

avg

pdf Project Page Blog [BibTex]



Effect of the soft layer thickness on the magnetization reversal process of exchange-spring nanomagnet patterns

Son, K., Schütz, G., Goering, E.

Current Applied Physics, 20(4):477-483, Elsevier B.V., Amsterdam, 2020 (article)

mms

DOI [BibTex]


Creating zero-field skyrmions in exchange-biased multilayers through X-ray illumination

Guang, Y., Bykova, I., Liu, Y., Yu, G., Goering, E., Weigand, M., Gräfe, J., Kim, S. K., Zhang, J., Zhang, H., Yan, Z., Wan, C., Feng, J., Wang, X., Guo, C., Wei, H., Peng, Y., Tserkovnyak, Y., Han, X., Schütz, G.

Nature Communications, 11, Nature Publishing Group, London, 2020 (article)

Abstract
Skyrmions, magnetic textures with topological stability, hold promises for high-density and energy-efficient information storage devices owing to their small size and low driving-current density. Precise creation of a single nanoscale skyrmion is a prerequisite to further understand the skyrmion physics and tailor skyrmion-based applications. Here, we demonstrate the creation of individual skyrmions at zero-field in an exchange-biased magnetic multilayer with exposure to soft X-rays. In particular, a single skyrmion with 100-nm size can be created at the desired position using a focused X-ray spot of sub-50-nm size. This single skyrmion creation is driven by the X-ray-induced modification of the antiferromagnetic order and the corresponding exchange bias. Furthermore, artificial skyrmion lattices with various arrangements can be patterned using X-ray. These results demonstrate the potential of accurate optical control of single skyrmion at sub-100 nm scale. We envision that X-ray could serve as a versatile tool for local manipulation of magnetic orders.

mms

DOI [BibTex]



Tuning the magnetic properties of permalloy-based magnetoplasmonic crystals for sensor applications

Murzin, D. V., Belyaev, V. K., Groß, F., Gräfe, J., Rivas, M., Rodionova, V. V.

Japanese Journal of Applied Physics, 59(SE), IOP Publishing Ltd, Bristol, England, 2020 (article)

Abstract
Miniature magnetic sensors based on magnetoplasmonic crystals (MPlCs) exhibit high sensitivity and high spatial resolution, which can be obtained by the excitation of surface plasmon polaritons. A field dependence of surface plasmon polaritons' enhanced magneto-optical response strongly correlates with magnetic properties of MPlCs that can be tuned by changing spatial parameters, such as the period and height of diffraction gratings and thicknesses of functional layers. This work compares the magnetic properties of MPlCs based on Ni80Fe20 (permalloy) obtained from local (longitudinal magneto-optical Kerr effect) and bulk (vibrating-sample magnetometry) measurements and demonstrates an ability to control sensors' performance through changing the magnetic properties of the MPlCs. The influence of the substrate's geometry (planar or sinusoidal and trapezoidal diffraction grating profiles) and the thickness of the surface layer is examined.

mms

DOI [BibTex]



Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., Gool, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focuses on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video 2 Project Page Slides Poster Video 1 [BibTex]



Specific isotope-responsive breathing transition in flexible metal-organic frameworks

Kim, J. Y., Park, J., Ha, J., Jung, M., Wallacher, D., Franz, A., Balderas-Xicohténcatl, R., Hirscher, M., Kang, S. G., Park, J. T., Oh, I. H., Moon, H. R., Oh, H.

Journal of the American Chemical Society, 142(31):13278-13282, American Chemical Society, Washington, DC, 2020 (article)

mms

DOI [BibTex]



Magnetic X-ray microscopy for the investigation of local current transport in superconductors

Simmendinger, J.

Universität Stuttgart, Stuttgart (and Verlag Dr. Hut, München), 2020 (phdthesis)

mms

[BibTex]



GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

Schwarz, K., Liao, Y., Niemeyer, M., Geiger, A.

In Advances in Neural Information Processing Systems (NeurIPS), 2020 (inproceedings)

Abstract
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
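
The multi-scale patch discriminator can be illustrated by its coordinate sampling step alone, assuming PyTorch; the radiance-field generator and the discriminator themselves are omitted, and the scale and offset distributions are illustrative assumptions.

import torch

def sample_patch_coords(batch, img_size, patch_size=16):
    # random scale (patch spans between patch_size pixels and the full image) and offset
    scale = torch.rand(batch) * (1 - patch_size / img_size) + patch_size / img_size
    offset = torch.rand(batch, 2) * (1 - scale).unsqueeze(1)
    lin = torch.linspace(0, 1, patch_size)
    vv, uu = torch.meshgrid(lin, lin, indexing="ij")
    base = torch.stack([uu, vv], dim=-1)                        # (K,K,2) in [0,1]
    coords = offset[:, None, None, :] + scale[:, None, None, None] * base
    return coords * (img_size - 1)                              # continuous pixel coordinates (B,K,K,2)

coords = sample_patch_coords(batch=4, img_size=128)
# only the rays through these K x K locations are rendered and judged by the discriminator,
# which keeps memory roughly constant regardless of the target image resolution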

avg

pdf suppmat video Project Page [BibTex]



Magnetic state control via field-angle-selective switching in asymmetric rings

Schönke, D., Reeve, R. M., Stoll, H., Kläui, M.

Physical Review Applied, 14(3), American Physical Society, College Park, Md. [u.a.], 2020 (article)

mms

DOI [BibTex]



Element-resolved study of the evolution of magnetic response in FexN compounds

Chen, Y., Gölden, D., Dirba, I., Huang, M., Gutfleisch, O., Nagel, P., Merz, M., Schuppler, S., Schütz, G., Alff, L., Goering, E.

Journal of Magnetism and Magnetic Materials, 498, NH, Elsevier, Amsterdam, 2020 (article)

mms

DOI [BibTex]



The role of temperature and drive current in skyrmion dynamics

Litzius, K., Leliaert, J., Bassirian, P., Rodrigues, D., Kromin, S., Lemesh, I., Zazvorka, J., Lee, K., Mulkers, J., Kerber, N., Heinze, D., Keil, N., Reeve, R. M., Weigand, M., Van Waeyenberge, B., Schütz, G., Everschor-Sitte, K., Beach, G. S. D., Kläui, M.

Nature Electronics, 3(1):30-36, Springer Nature, London, 2020 (article)

mms

DOI [BibTex]



Magnetic flux penetration into micron-sized superconductor/ferromagnet bilayers

Simmendinger, J., Weigand, M., Schütz, G., Albrecht, J.

Superconductor Science and Technology, 33(2), IOP Pub., Bristol, 2020 (article)

mms

DOI [BibTex]



Interaction of hydrogen isotopes with flexible metal-organic frameworks

Bondorf, L.

Universität Stuttgart, Stuttgart, 2020 (mastersthesis)

mms

[BibTex]



Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent wrt. changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video 2 Project Page Video 1 Slides Poster [BibTex]



Learning Neural Light Transport

Sanzenbacher, P., Mescheder, L., Geiger, A.

arXiv, 2020 (article)

Abstract
In recent years, deep generative models have gained significance due to their ability to synthesize natural-looking images with applications ranging from virtual reality to data augmentation for training computer vision models. While existing models are able to faithfully learn the image distribution of the training set, they often lack controllability as they operate in 2D pixel space and do not model the physical image formation process. In this work, we investigate the importance of 3D reasoning for photorealistic rendering. We present an approach for learning light transport in static and dynamic 3D scenes using a neural network with the goal of predicting photorealistic images. In contrast to existing approaches that operate in the 2D image domain, our approach reasons in both 3D and 2D space, thus enabling global illumination effects and manipulation of 3D scene geometry. Experimentally, we find that our model is able to produce photorealistic renderings of static and dynamic scenes. Moreover, it compares favorably to baselines which combine path tracing and image denoising at the same computational budget.

avg

arxiv [BibTex]


Demonstration of k-vector selective microscopy for nanoscale mapping of higher order spin wave modes

Träger, N., Gruszecki, P., Lisiecki, F., Groß, F., Förster, J., Weigand, M., Glowinski, H., Kuswik, P., Dubowik, J., Krawczyk, M., Gräfe, J.

Nanoscale, 12(33):17238-17244, Royal Society of Chemistry, Cambridge, UK, 2020 (article)

mms

DOI [BibTex]



Direct observation of spin-wave focusing by a Fresnel lens

Gräfe, J., Gruszecki, P., Zelent, M., Decker, M., Keskinbora, K., Noske, M., Gawronski, P., Stoll, H., Weigand, M., Krawczyk, M., Back, C. H., Goering, E. J., Schütz, G.

Physical Review B, 102(2), American Physical Society, Woodbury, NY, 2020 (article)

mms

DOI [BibTex]



Bandgap-adjustment and enhanced surface photovoltage in Y-substituted LaTaIVO2N

Bubeck, C., Widenmeyer, M., De Denko, A. T., Richter, G., Coduri, M., Salas-Colera, E., Goering, E., Zhang, H., Yoon, S., Osterloh, F. E., Weidenkaff, A.

Journal of Materials Chemistry A, 8(23):11837-11848, Royal Society of Chemistry, Cambridge, UK, 2020 (article)

mms

DOI [BibTex]



Single shot acquisition of spatially resolved spin wave dispersion relations using X-ray microscopy

Träger, N., Groß, F., Förster, J., Baumgaertl, K., Stoll, H., Weigand, M., Schütz, G., Grundler, D., Gräfe, J.

Scientific Reports, 10, Nature Publishing Group, London, UK, 2020 (article)

mms

DOI [BibTex]



Fabrication and temperature-dependent magnetic properties of large-area L10-FePt/Co exchange-spring magnet nanopatterns

Son, K., Schütz, G.

Physica E: Low-Dimensional Systems and Nanostructures, 115, North-Holland, Amsterdam, 2020 (article)

mms

DOI [BibTex]



Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate how the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over state-of-the-art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.
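
Both ideas can be sketched in a few lines of plain Python; the deviation-based criticality score, the threshold and the eviction rule of the buffer are illustrative assumptions, not the exact criteria used in the paper.

import heapq
import random

class CriticalReplayBuffer:
    def __init__(self, capacity=50000):
        self.capacity = capacity
        self.heap = []        # min-heap keyed by uncertainty score
        self.counter = 0

    def add(self, sample, score):
        self.counter += 1
        item = (score, self.counter, sample)
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
        elif score > self.heap[0][0]:
            heapq.heapreplace(self.heap, item)   # evict the least uncertain sample

    def sample_batch(self, batch_size):
        return [s for _, _, s in random.sample(self.heap, min(batch_size, len(self.heap)))]

def is_critical(policy_action, expert_action, threshold=0.1):
    # keep on-policy states where the learner's action deviates noticeably from the expert's
    return abs(policy_action - expert_action) > threshold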

avg

pdf suppmat Video 2 Project Page Slides Video 1 [BibTex]



Research trend of metal-organic frameworks for magnetic refrigeration materials application

Kim, S., Son, K., Oh, H.

Korean Journal of Materials Research, 30(3):136-141, Materials Society of Korea, Seoul, 2020 (article)

mms

DOI [BibTex]



Magnetic Anisotropy in Thin Layers of (Mn,Zn)Fe2O4 on SrTiO3 (001)

Denecke, R., Welke, M., Huth, P., Gräfe, J., Brachwitz, K., Lorenz, M., Grundmann, M., Ziese, M., Esquinazi, P. D., Goering, E., Schütz, G., Schindler, K., Chassé, A.

Physica Status Solidi (b), 257(7):1900627, 2020 (article)

Abstract
Herein, a ferrimagnetic manganese zinc ferrite (Mn0.5Zn0.5Fe2O4) film with a thickness of 200 nm is prepared without a buffer layer on strontium titanate (001) (SrTiO3) using pulsed laser deposition. Its magnetic properties are investigated using superconducting quantum interference device (SQUID), X-ray absorption spectroscopy with subsequent X-ray magnetic circular dichroism (XMCD) and magneto-optic Kerr effect (MOKE). Hysteresis loops derived from SQUID exhibit bulk-like properties. This can further be confirmed by bulk-like XMCD spectra. In remanent magnetization, an in-plane magnetization with essentially no out-of-plane component is found. The magnetic moments derived by the sum rule formalism from the XMCD data are in good agreement with the magnetization observed by SQUID and MOKE. XMCD as well as MOKE reveal an in-plane angular fourfold magnetic anisotropy with the easy direction along [110] for (Mn0.5Zn0.5)Fe2O4 on SrTiO3. The element-specific magnetic moments from XMCD show a stronger contribution of Fe to the anisotropy than of Mn and distinct contributions of the orbital moments.

mms

link (url) DOI [BibTex]



In situ x-ray diffraction and spectro-microscopic study of ALD protected copper films

Dogan, G., Sanli, U. T., Hahn, K., Müller, L., Gruhn, H., Silber, C., Schütz, G., Grévent, C., Keskinbora, K.

ACS Applied Materials and Interfaces, 12(29):33377-33385, American Chemical Society, Washington, DC, 2020 (article)

mms

DOI [BibTex]



HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking

Luiten, J., Osep, A., Dendorfer, P., Torr, P., Geiger, A., Leal-Taixe, L., Leibe, B.

International Journal of Computer Vision (IJCV), 2020 (article)

Abstract
Multi-Object Tracking (MOT) has been notoriously difficult to evaluate. Previous metrics overemphasize the importance of either detection or association. To address this, we present a novel MOT evaluation metric, HOTA (Higher Order Tracking Accuracy), which explicitly balances the effect of performing accurate detection, association and localization into a single unified metric for comparing trackers. HOTA decomposes into a family of sub-metrics which are able to evaluate each of five basic error types separately, which enables clear analysis of tracking performance. We evaluate the effectiveness of HOTA on the MOTChallenge benchmark, and show that it is able to capture important aspects of MOT performance not previously taken into account by established metrics. Furthermore, we show HOTA scores better align with human visual evaluation of tracking performance.
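
As a numerical illustration of how the sub-metrics combine (the definitions of DetA and AssA themselves are in the paper): at each localization threshold, detection and association accuracy are merged by a geometric mean, and the final score averages over thresholds.

import math

def hota(det_a_per_alpha, ass_a_per_alpha):
    # combine per-threshold detection and association accuracy and average over thresholds
    scores = [math.sqrt(d * a) for d, a in zip(det_a_per_alpha, ass_a_per_alpha)]
    return sum(scores) / len(scores)

# e.g. DetA/AssA evaluated at a range of localization thresholds alpha
print(hota([0.70, 0.65, 0.60], [0.80, 0.75, 0.70]))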

avg

pdf [BibTex]



How to functionalise metal-organic frameworks to enable guest nanocluster embedment

King, J., Zhang, L., Doszczeczko, S., Sambalova, O., Luo, H., Rohman, F., Phillips, O., Borgschulte, A., Hirscher, M., Addicoat, M., Szilágyi, P. A.

Journal of Materials Chemistry A, 8(9):4889-4897, Royal Society of Chemistry, Cambridge, UK, 2020 (article)

mms

DOI [BibTex]



Magnetic and microstructural properties of anisotropic MnBi magnets compacted by spark plasma sintering

Chen, Y., Gregori, G., Rheingans, B., Huang, W., Kronmüller, H., Schütz, G., Goering, E.

Journal of Alloys and Compounds, 830, Elsevier B.V., Lausanne, Switzerland, 2020 (article)

mms

DOI [BibTex]



Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.
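
A minimal sketch of the mixture-of-policies idea, assuming PyTorch; the number of modes, the shared feature dimension and the soft gating are illustrative assumptions rather than the exact model refined in the paper.

import torch
import torch.nn as nn

class MixturePolicy(nn.Module):
    def __init__(self, feat_dim=256, num_modes=3, num_actions=3):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(feat_dim, num_actions) for _ in range(num_modes))
        self.gate = nn.Linear(feat_dim, num_modes)

    def forward(self, features):
        weights = torch.softmax(self.gate(features), dim=-1)           # (B, M) mode weights
        actions = torch.stack([e(features) for e in self.experts], 1)  # (B, M, A) per-mode actions
        return (weights.unsqueeze(-1) * actions).sum(dim=1)            # (B, A) blended action

policy = MixturePolicy()
action = policy(torch.rand(4, 256))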

avg

pdf suppmat Video 2 Project Page Video 1 Slides [BibTex]



On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.

avg

pdf Project Page Slides Video Poster [BibTex]



Biocompatible magnetic micro- and nanodevices: Fabrication of FePt nanopropellers and cell transfection

Kadiri, V. M., Bussi, C., Holle, A. W., Son, K., Kwon, H., Schütz, G., Gutierrez, M. G., Fischer, P.

Advanced Materials, 32(25), Wiley-VCH, Weinheim, 2020 (article)

mms

DOI [BibTex]



Metal organic frameworks as tunable linear magnets

Son, K., Kim, R. K., Kim, S., Schütz, G., Choi, K. M., Oh, H.

Physica Status Solidi A, 217(12), Wiley-VCH, Weinheim, 2020 (article)

mms

DOI [BibTex]



Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition

Alhaija, H., Mustikovela, S., Jampani, V., Thies, J., Niessner, M., Geiger, A., Rother, C.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Neural rendering techniques promise efficient photo-realistic image synthesis while providing rich control over scene parameters by learning the physical image formation process. While several supervised methods have been proposed for this task, acquiring a dataset of images with accurately aligned 3D models is very difficult. The main contribution of this work is to lift this restriction by training a neural rendering algorithm from unpaired data. We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties. In contrast to a traditional graphics pipeline, our approach does not require to specify all scene properties, such as material parameters and lighting, by hand. Instead, we learn photo-realistic deferred rendering from a small set of 3D models and a larger set of unaligned real images, both of which are easy to acquire in practice. Simultaneously, we obtain accurate intrinsic decompositions of real images while not requiring paired ground truth. Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.

avg

pdf suppmat [BibTex]



Observation of compact ferrimagnetic skyrmions in DyCo3 film

Chen, K., Lott, D., Philippi-Kobs, A., Weigand, M., Luo, C., Radu, F.

Nanoscale, 12(35):18137-18143, Royal Society of Chemistry, Cambridge, UK, 2020 (article)

mms

DOI [BibTex]



Generation and characterization of focused helical x-ray beams

Loetgering, L., Baluktsian, M., Keskinbora, K., Horstmeyer, R., Wilhein, T., Schütz, G., Eikema, K. S. E., Witte, S.

Science Advances, 6(7), American Association for the Advancement of Science, 2020 (article)

mms

link (url) DOI [BibTex]


Materials for hydrogen-based energy storage - past, recent progress and future outlook

Hirscher, M., Yartys, V. A., Baricco, M., Bellosta von Colbe, J., Blanchard, D., Bowman Jr., R. C., Broom, D. P., Buckley, C. E., Chang, F., Chen, P., Cho, Y. W., Crivello, J., Cuevas, F., David, W. I. F., de Jongh, P. E., Denys, R. V., Dornheim, M., Felderhoff, M., Filinchuk, Y., Froudakis, G. E., Grant, D. M., Gray, E. M., Hauback, B. C., He, T., Humphries, T. D., Jensen, T. R., Kim, S., Kojima, Y., Latroche, M., Li, H., Lotostskyy, M. V., Makepeace, J. W., Møller, K. T., Naheed, L., Ngene, P., Noréus, D., Nygård, M. M., Orimo, S., Paskevicius, M., Pasquini, L., Ravnsbaek, D. B., Sofianos, M. V., Udovic, T. J., Vegge, T., Walker, G. S., Webb, C. J., Weidenthaler, C., Zlotea, C.

Journal of Alloys and Compounds, 827, Elsevier B.V., Lausanne, Switzerland, 2020 (article)

mms

DOI [BibTex]



Thermal nucleation and high-resolution imaging of submicrometer magnetic bubbles in thin thulium iron garnet films with perpendicular anisotropy

Büttner, F., Mawass, M. A., Bauer, J., Rosenberg, E., Caretta, L., Avci, C. O., Gräfe, J., Finizio, S., Vaz, C. A. F., Novakovic, N., Weigand, M., Litzius, K., Förster, J., Träger, N., Groß, F., Suzuki, D., Huang, M., Bartell, J., Kronast, F., Raabe, J., Schütz, G., Ross, C. A., Beach, G. S. D.

Physical Review Materials, 4(1), American Physical Society, College Park, MD, 2020 (article)

Abstract
Ferrimagnetic iron garnets are promising materials for spintronics applications, characterized by ultralow damping and zero current shunting. It has recently been found that few nm-thick garnet films interfaced with a heavy metal can also exhibit sizable interfacial spin-orbit interactions, leading to the emergence, and efficient electrical control, of one-dimensional chiral domain walls. Two-dimensional bubbles, by contrast, have so far only been confirmed in micrometer-thick films. Here, we show by high resolution scanning transmission x-ray microscopy and photoemission electron microscopy that submicrometer bubbles can be nucleated and stabilized in ∼25-nm-thick thulium iron garnet films via short heat pulses generated by electric current in an adjacent Pt strip, or by ultrafast laser illumination. We also find that quasistatic processes do not lead to the formation of a bubble state, suggesting that the thermodynamic path to reaching that state requires transient dynamics. X-ray imaging reveals that the bubbles have Bloch-type walls with random chirality and topology, indicating negligible chiral interactions at the garnet film thickness studied here. The robustness of thermal nucleation and the feasibility demonstrated here to image garnet-based devices by x-rays both in transmission geometry and with sensitivity to the domain wall chirality are critical steps to enabling the study of small spin textures and dynamics in perpendicularly magnetized thin-film garnets.

mms

DOI [BibTex]



Real-space imaging of confined magnetic skyrmion tubes

Birch, M. T., Cortés-Ortuño, D., Turnbull, L. A., Wilson, M. N., Groß, F., Träger, N., Laurenson, A., Bukin, N., Moody, S. H., Weigand, M., Schütz, G., Popescu, H., Fan, R., Steadman, P., Verezhak, J. A. T., Balakrishnan, G., Loudon, J. C., Twitchett-Harrison, A. C., Hovorka, O., Fangohr, H., Ogrin, F., Gräfe, J., Hatton, P. D.

Nature Communications, 11, pages: 1726, 2020 (article)

Abstract
Magnetic skyrmions are topologically nontrivial particles with a potential application as information elements in future spintronic device architectures. While they are commonly portrayed as two dimensional objects, in reality magnetic skyrmions are thought to exist as elongated, tube-like objects extending through the thickness of the host material. The study of this skyrmion tube state (SkT) is vital for furthering the understanding of skyrmion formation and dynamics for future applications. However, direct experimental imaging of skyrmion tubes has yet to be reported. Here, we demonstrate the real-space observation of skyrmion tubes in a lamella of FeGe using resonant magnetic x-ray imaging and comparative micromagnetic simulations, confirming their extended structure. The formation of these structures at the edge of the sample highlights the importance of confinement and edge effects in the stabilisation of the SkT state, opening the door to further investigation into this unexplored dimension of the skyrmion spin texture.

mms

link (url) DOI [BibTex]



Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
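
The analytic depth gradient mentioned above follows from the implicit function theorem; written out roughly (notation assumed here: f_theta is the implicit field, tau the surface level, r_0 the camera center, w the ray direction, and d-hat the predicted surface depth along the ray), it reads

\hat{p} = r_0 + \hat{d}\,w, \qquad f_\theta(\hat{p}) = \tau
\;\;\Longrightarrow\;\;
\frac{\partial \hat{d}}{\partial \theta} = -\left( \nabla_p f_\theta(\hat{p}) \cdot w \right)^{-1} \frac{\partial f_\theta(\hat{p})}{\partial \theta},

so an image-space loss can be backpropagated through the predicted depth into the shape and texture networks without differentiating through the ray-marching steps used to locate the surface.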

avg

pdf suppmat Video 2 Project Page Video 1 Video 3 Slides Poster [BibTex]



Current-induced dynamical tilting of chiral domain walls in curved microwires

Finizio, S., Wintz, S., Mayr, S., Huxtable, A. J., Langer, M., Bailey, J., Burnell, G., Marrows, C. H., Raabe, J.

Applied Physics Letters, 116(18), American Institute of Physics, Melville, NY, 2020 (article)

mms

DOI [BibTex]



Highly effective hydrogen isotope separation through dihydrogen bond on Cu(I)-exchanged zeolites well above liquid nitrogen temperature

Xiong, R., Zhang, L., Li, P., Luo, W., Tang, T., Ao, B., Sang, G., Chen, C., Yan, X., Chen, J., Hirscher, M.

Chemical Engineering Journal, 391, Elsevier, Lausanne, 2020 (article)

mms

DOI [BibTex]



Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art

Janai, J., Güney, F., Behl, A., Geiger, A.

arXiv, Foundations and Trends in Computer Graphics and Vision, 2020 (book)

Abstract
Recent years have witnessed enormous progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles. As with any rapidly growing field, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several survey papers on particular sub-problems have appeared, no comprehensive survey on problems, datasets, and methods in computer vision for autonomous vehicles has been published. This monograph attempts to narrow this gap by providing a survey on the state-of-the-art datasets and techniques. Our survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding, and end-to-end learning for autonomous driving. Towards this goal, we analyze the performance of the state of the art on several challenging benchmarking datasets, including KITTI, MOT, and Cityscapes. Besides, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we also provide a website that allows navigating topics as well as methods and provides additional information.

avg

pdf Project Page link Project Page [BibTex]


Ferrimagnetic skyrmions in topological insulator/ferrimagnet heterostructures

Wu, H., Groß, F., Dai, B. Q., Lujan, D., Razavi, S. A., Zhang, P., Liu, Y. X., Sobotkiewich, K., Förster, J., Weigand, M., Schütz, G., Li, X. Q., Gräfe, J., Wang, K. L.

Advanced Materials, 32(34), Wiley-VCH, Weinheim, 2020 (article)

mms

DOI [BibTex]



Spin-orbit effects in vortex dynamics and shortest spin waves in Y3Fe5O12

Förster, J.

Universität Stuttgart, Stuttgart (and Cuvillier Verlag, Göttingen), 2020 (phdthesis)

mms

[BibTex]



Learning Implicit Surface Light Fields

Oechsle, M., Niemeyer, M., Reiser, C., Mescheder, L., Strauss, T., Geiger, A.

In International Conference on 3D Vision (3DV), 2020 (inproceedings)

Abstract
Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion and independent of the geometry. Moreover, we condition the surface light field with respect to the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capabilities of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
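
A compact sketch of the conditioning described above, assuming PyTorch: an MLP maps a surface point, a viewing direction and a small light source (position and color) to an outgoing RGB value. Layer sizes and the omission of a global shape/appearance encoding are simplifications of the full model.

import torch
import torch.nn as nn

class SurfaceLightFieldSketch(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + 3 + 3, hidden), nn.ReLU(),   # point, view direction, light position, light color
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),            # RGB in [0,1]
        )

    def forward(self, point, view_dir, light_pos, light_color):
        return self.mlp(torch.cat([point, view_dir, light_pos, light_color], dim=-1))

model = SurfaceLightFieldSketch()
rgb = model(torch.rand(8, 3), torch.rand(8, 3), torch.rand(8, 3), torch.rand(8, 3))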

avg

pdf suppmat Project Page [BibTex]



Highly nonlinear magnetoelectric effect in buckled-honeycomb antiferromagnetic Co4Ta2O9

Lee, N., Oh, D. G., Choi, S., Moon, J. Y., Kim, J. H., Shin, H. J., Son, K., Nuss, J., Kiryukhin, V., Choi, Y. J.

Scientific Reports, 10, Nature Publishing Group, London, UK, 2020 (article)

mms

DOI [BibTex]



Room temperature ferromagnetism driven by Ca-doped BiFeO3 multiferroic functional material

Marzouk, M., Hashem, H. M., Soltan, S., Ramadan, A. A.

Journal of Materials Science: Materials in Electronics, 31(7):5599-5607, Springer, Norwell, MA, 2020 (article)

mms

DOI [BibTex]
