2020

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

Karimi*, A., von Kügelgen*, J., Schölkopf, B., Valera, I.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020, *equal contribution (conference) Accepted

arXiv [BibTex]

Self-Paced Deep Reinforcement Learning

Klink, P., D’Eramo, C., Peters, J., Pajarinen, J.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Probabilistic Linear Solvers for Machine Learning

Wenger, J., Hennig, P.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Barking up the right tree: an approach to search over molecule synthesis DAGs

Bradshaw, J., Paige, B., Kusner, M., Segler, M., Hernández-Lobato, J. M.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Learning Kernel Tests Without Data Splitting

Kübler, J., Jitkrittum, W., Schölkopf, B., Muandet, K.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Dual Instrumental Variable Regression

Muandet, K., Mehrjou, A., Lee, S. K., Raj, A.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings

Park, J., Muandet, K.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

MATE: Plugging in Model Awareness to Task Embedding for Meta Learning

Chen, X., Wang, Z., Tang, S., Muandet, K.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Object-Centric Learning with Slot Attention

Locatello, F., Weissenborn, D., Unterthiner, T., Mahendran, A., Heigold, G., Uszkoreit, J., Dosovitskiy, A., Kipf, T.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Relative gradient optimization of the Jacobian term in unsupervised deep learning

Gresele, L., Fissore, G., Javaloy, A., Schölkopf, B., Hyvarinen, A.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Causal analysis of Covid-19 Spread in Germany

Mastakouri, A., Schölkopf, B.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Modeling Shared responses in Neuroimaging Studies through MultiView ICA

Richard, H., Gresele, L., Hyvarinen, A., Thirion, B., Gramfort, A., Ablin, P.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Stochastic Stein Discrepancies

Gorham, J., Raj, A., Mackey, L.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining

Tripp, A., Daxberger, E., Hernández-Lobato, J. M.

Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted

[BibTex]

Grasping Field: Learning Implicit Representations for Human Grasps

(Best Paper Award)

Karunratanakul, K., Yang, J., Zhang, Y., Black, M., Muandet, K., Tang, S.

In International Conference on 3D Vision (3DV), November 2020 (inproceedings)

Abstract
Robotic grasping of household objects has made remarkable progress in recent years. Yet, human grasps are still difficult to synthesize realistically. There are several key reasons: (1) the human hand has many degrees of freedom (more than robotic manipulators); (2) the synthesized hand should conform to the surface of the object; and (3) it should interact with the object in a semantically and physically plausible manner. To make progress in this direction, we draw inspiration from the recent progress on learning-based implicit representations for 3D object reconstruction. Specifically, we propose an expressive representation for human grasp modelling that is efficient and easy to integrate with deep neural networks. Our insight is that every point in a three-dimensional space can be characterized by the signed distances to the surface of the hand and the object, respectively. Consequently, the hand, the object, and the contact area can be represented by implicit surfaces in a common space, in which the proximity between the hand and the object can be modelled explicitly. We name this 3D-to-2D mapping the Grasping Field, parameterize it with a deep neural network, and learn it from data. We demonstrate that the proposed grasping field is an effective and expressive representation for human grasp generation. Specifically, our generative model is able to synthesize high-quality human grasps, given only a 3D object point cloud. Extensive experiments demonstrate that our generative model compares favorably with a strong baseline and approaches the level of natural human grasps. Furthermore, based on the grasping field representation, we propose a deep network for the challenging task of 3D hand-object interaction reconstruction from a single RGB image. Our method improves the physical plausibility of the hand-object contact reconstruction and achieves comparable performance for 3D hand reconstruction compared to state-of-the-art methods. Our model and code are available for research purposes at https://github.com/korrawe/grasping_field.
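
To make the representation concrete: the abstract describes a field that maps every 3D point to two signed distances, one to the hand surface and one to the object surface, so that contact lies where both are near zero. A minimal, hypothetical sketch of such a field (illustrative architecture and names, not the released model) could look as follows:

# Minimal sketch (hypothetical architecture, not the released model): a network
# that maps a 3D query point to two signed distances -- one to the hand surface,
# one to the object surface. Contact can be read off where both are close to zero.
import torch
import torch.nn as nn

class GraspingFieldMLP(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # (signed dist. to hand, signed dist. to object)
        )

    def forward(self, points):             # points: (N, 3)
        return self.net(points)            # (N, 2)

field = GraspingFieldMLP()
queries = torch.rand(1000, 3) * 2 - 1      # query points in [-1, 1]^3
sdf_hand, sdf_obj = field(queries).unbind(dim=-1)
near_contact = (sdf_hand.abs() < 0.01) & (sdf_obj.abs() < 0.01)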


pdf arXiv code [BibTex]


PLACE: Proximity Learning of Articulation and Contact in 3D Environments

Zhang, S., Zhang, Y., Ma, Q., Black, M. J., Tang, S.

In International Conference on 3D Vision (3DV), November 2020 (inproceedings)

Abstract
High-fidelity digital 3D environments have been proposed in recent years; however, it remains extremely challenging to automatically equip such environments with realistic human bodies. Existing work utilizes images, depth, or semantic maps to represent the scene, and parametric human models to represent 3D bodies. While straightforward, the generated human-scene interactions often lack naturalness and physical plausibility. Our key observation is that humans interact with the world through body-scene contact. To synthesize realistic human-scene interactions, it is essential to effectively represent the physical contact and proximity between the body and the world. To that end, we propose a novel interaction generation method, named PLACE (Proximity Learning of Articulation and Contact in 3D Environments), which explicitly models the proximity between the human body and the 3D scene around it. Specifically, given a set of basis points on a scene mesh, we leverage a conditional variational autoencoder to synthesize the minimum distances from the basis points to the human body surface. The generated proximal relationship shows which region of the scene is in contact with the person. Furthermore, based on such synthesized proximity, we are able to effectively obtain expressive 3D human bodies that interact with the 3D scene naturally. Our perceptual study shows that PLACE significantly improves on the state-of-the-art method, approaching the realism of real human-scene interaction. We believe our method makes an important step towards the fully automatic synthesis of realistic 3D human bodies in 3D scenes. The code and model are available for research at https://sanweiliti.github.io/PLACE/PLACE.html
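
For illustration, the proximity feature that the conditional VAE learns to generate — minimum distances from scene basis points to the body surface — can be computed directly when a body is given. The sketch below shows that computation with placeholder point clouds (not the authors' code):

# Minimal sketch of the proximity feature described above (not the authors' code):
# for basis points sampled on the scene, compute the minimum distance from each
# basis point to a point cloud sampled on the body surface. PLACE trains a
# conditional VAE to *generate* such a feature; here we only compute the target.
import numpy as np

rng = np.random.default_rng(0)
scene_basis_points = rng.uniform(-1.0, 1.0, size=(256, 3))    # sampled on the scene mesh
body_surface_points = rng.uniform(-0.3, 0.3, size=(2048, 3))  # sampled on the body mesh

# Pairwise distances (256, 2048), then the per-basis-point minimum.
diff = scene_basis_points[:, None, :] - body_surface_points[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
proximity_feature = dist.min(axis=1)        # (256,) -- small values indicate contact regions
contact_basis_ids = np.where(proximity_feature < 0.02)[0]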

pdf arXiv project code [BibTex]


GIF: Generative Interpretable Faces

Ghosh, P., Gupta, P. S., Uziel, R., Ranjan, A., Black, M. J., Bolkart, T.

In International Conference on 3D Vision (3DV), November 2020 (inproceedings)

Abstract
Photo-realistic visualization and animation of expressive human faces have been a long-standing challenge. 3D face modeling methods provide parametric control but generate unrealistic images; on the other hand, generative 2D models like GANs (Generative Adversarial Networks) output photo-realistic face images but lack explicit control. Recent methods gain partial control, either by attempting to disentangle different factors in an unsupervised manner, or by adding control post hoc to a pre-trained model. Unconditional GANs, however, may entangle factors that are hard to undo later. We condition our generative model on pre-defined control parameters to encourage disentanglement in the generation process. Specifically, we condition StyleGAN2 on FLAME, a generative 3D face model. While conditioning on FLAME parameters yields unsatisfactory results, we find that conditioning on rendered FLAME geometry and photometric details works well. This gives us a generative 2D face model named GIF (Generative Interpretable Faces) that offers FLAME's parametric control. Here, interpretable refers to the semantic meaning of different parameters. Given FLAME parameters for shape, pose, and expression, parameters for appearance and lighting, and an additional style vector, GIF outputs photo-realistic face images. We perform an AMT-based perceptual study to quantitatively and qualitatively evaluate how well GIF follows its conditioning. The code, data, and trained model are publicly available for research purposes at http://gif.is.tue.mpg.de

pdf project code video [BibTex]


MYND: Unsupervised Evaluation of Novel BCI Control Strategies on Consumer Hardware

Hohmann, M. R., Konieczny, L., Hackl, M., Wirth, B., Zaman, T., Enficiaud, R., Grosse-Wentrup, M., Schölkopf, B.

Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST), October 2020 (conference) Accepted

arXiv DOI [BibTex]


Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label-efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.

pdf slides video Project Page [BibTex]


Learning a statistical full spine model from partial observations

Meng, D., Keller, M., Boyer, E., Black, M., Pujades, S.

In Shape in Medical Imaging, pages: 122-133, (Editors: Reuter, Martin and Wachinger, Christian and Lombaert, Hervé and Paniagua, Beatriz and Goksel, Orcun and Rekik, Islem), Springer International Publishing, October 2020 (inproceedings)

Abstract
The study of the morphology of the human spine has attracted research attention for its many potential applications, such as image segmentation, biomechanics, or pathology detection. However, as of today there is no publicly available statistical model of the 3D surface of the full spine. This is mainly due to the lack of openly available 3D data where the full spine is imaged and segmented. In this paper we propose to learn a statistical surface model of the full spine (7 cervical, 12 thoracic, and 5 lumbar vertebrae) from partial and incomplete views of the spine. In order to deal with the partial observations, we use probabilistic principal component analysis (PPCA) to learn a surface shape model of the full spine. Quantitative evaluation demonstrates that the obtained model faithfully captures the shape of the population in a low-dimensional space and generalizes to left-out data. Furthermore, we show that the model faithfully captures the global correlations among the vertebrae shapes. Given a partial observation of the spine, i.e., a few vertebrae, the model can predict the shape of unseen vertebrae with a mean error under 3 mm. The full-spine statistical model is trained on the VerSe 2019 public dataset and is made publicly available to the community for non-commercial purposes. (https://gitlab.inria.fr/spine/spine_model)
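
The prediction step described at the end of the abstract — inferring unseen vertebrae from a few observed ones with a linear latent model — can be sketched as follows; plain PCA on synthetic data stands in for the paper's PPCA spine model, and all dimensions are illustrative:

# Minimal sketch of completing a partial observation with a linear latent model
# (plain PCA used here for brevity in place of the paper's PPCA; synthetic data,
# not the spine model itself).
import numpy as np

rng = np.random.default_rng(0)
n_train, n_dims, n_comp = 200, 300, 10            # e.g. stacked vertex coordinates
X = rng.normal(size=(n_train, n_comp)) @ rng.normal(size=(n_comp, n_dims))

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:n_comp].T                                 # (n_dims, n_comp) principal directions

# Partial observation: only the first 120 dimensions (a few vertebrae) are seen.
obs = np.arange(120)
x_true = mean + W @ rng.normal(size=n_comp)       # a held-out "full spine"
x_obs = x_true[obs]

# Least-squares latent code from the observed dimensions, then reconstruct all.
z, *_ = np.linalg.lstsq(W[obs], x_obs - mean[obs], rcond=None)
x_pred = mean + W @ z
missing_error = np.abs(x_pred - x_true)[120:].mean()   # error on the unseen part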

Gitlab Code PDF DOI [BibTex]


A Gamified App that Helps People Overcome Self-Limiting Beliefs by Promoting Metacognition

Amo, V., Lieder, F.

SIG 8 Meets SIG 16, September 2020 (conference) Accepted

Abstract
Previous research has shown that approaching learning with a growth mindset is key for maintaining motivation and overcoming setbacks. Mindsets are systems of beliefs that people hold to be true. They influence a person's attitudes, thoughts, and emotions when they learn something new or encounter challenges. In clinical psychology, metareasoning (reflecting on one's mental processes) and meta-awareness (recognizing thoughts as mental events instead of equating them to reality) have proven effective for overcoming maladaptive thinking styles. Hence, they are potentially an effective method for overcoming self-limiting beliefs in other domains as well. However, the potential of integrating assisted metacognition into mindset interventions has not been explored yet. Here, we propose that guiding and training people on how to leverage metareasoning and meta-awareness for overcoming self-limiting beliefs can significantly enhance the effectiveness of mindset interventions. To test this hypothesis, we develop a gamified mobile application that guides and trains people to use metacognitive strategies based on Cognitive Restructuring (CR) and Acceptance and Commitment Therapy (ACT) techniques. The application helps users to identify and overcome self-limiting beliefs by working with aversive emotions when they are triggered by fixed mindsets in real-life situations. Our app aims to help people sustain their motivation to learn when they face inner obstacles (e.g., anxiety, frustration, and demotivation). We expect the application to be an effective tool for helping people better understand and develop the metacognitive skills of emotion regulation and self-regulation that are needed to overcome self-limiting beliefs and develop growth mindsets.


A gamified app that helps people overcome self-limiting beliefs by promoting metacognition [BibTex]


Convolutional Occupancy Networks

Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
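
The key mechanism — reading local features for a 3D query point out of a convolutional feature volume and decoding them into occupancy — can be sketched roughly as follows (hypothetical sizes and architecture, not the released implementation):

# Minimal sketch of the query step (not the authors' implementation): features for
# a 3D query point are read out of a convolutional feature volume by trilinear
# interpolation and decoded into an occupancy probability. The encoder producing
# the feature volume and all sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

C, R = 32, 16                                    # feature channels, volume resolution
feature_volume = torch.randn(1, C, R, R, R)      # would come from a 3D conv encoder

decoder = nn.Sequential(nn.Linear(C + 3, 64), nn.ReLU(), nn.Linear(64, 1))

queries = torch.rand(1, 5000, 3) * 2 - 1         # query points in [-1, 1]^3
grid = queries.view(1, -1, 1, 1, 3)              # (N, D_out, 1, 1, 3) for grid_sample
feats = F.grid_sample(feature_volume, grid, align_corners=True)   # (1, C, 5000, 1, 1)
feats = feats.view(1, C, -1).permute(0, 2, 1)    # (1, 5000, C)

occupancy_logits = decoder(torch.cat([feats, queries], dim=-1)).squeeze(-1)
occupancy_prob = torch.sigmoid(occupancy_logits)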

pdf suppmat video Project Page [BibTex]


Model-Agnostic Counterfactual Explanations for Consequential Decisions

Karimi, A., Barthe, G., Balle, B., Valera, I.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 895-905, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

arXiv link (url) [BibTex]


More Powerful Selective Kernel Tests for Feature Selection

Lim, J. N., Yamada, M., Jitkrittum, W., Terada, Y., Matsui, S., Shimodaira, H.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 820-830, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

arXiv link (url) [BibTex]


Bayesian Online Prediction of Change Points

Agudelo-España, D., Gomez-Gonzalez, S., Bauer, S., Schölkopf, B., Peters, J.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 320-329, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

link (url) [BibTex]


Semi-supervised learning, causality, and the conditional cluster assumption

von Kügelgen, J., Mey, A., Loog, M., Schölkopf, B.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 1-10, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

link (url) [BibTex]


Kernel Conditional Moment Test via Maximum Moment Restriction

Muandet, K., Jitkrittum, W., Kübler, J. M.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 41-50, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

link (url) [BibTex]


Learning Sensory-Motor Associations from Demonstration

Berenz, V., Bjelic, A., Herath, L., Mainprice, J.

29th IEEE International Conference on Robot and Human Interactive Communication (Ro-Man 2020), August 2020 (conference) Accepted

Abstract
We propose a method which generates reactive robot behavior learned from human demonstration. In order to do so, we use the Playful programming language which is based on the reactive programming paradigm. This allows us to represent the learned behavior as a set of associations between sensor and motor primitives in a human readable script. Distinguishing between sensor and motor primitives introduces a supplementary level of granularity and more importantly enforces feedback, increasing adaptability and robustness. As the experimental section shows, useful behaviors may be learned from a single demonstration covering a very limited portion of the task space.

[BibTex]


On the design of consequential ranking algorithms

Tabibian, B., Gómez, V., De, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 171-180, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

link (url) [BibTex]


Importance Sampling via Local Sensitivity

Raj, A., Musco, C., Mackey, L.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 3099-3109, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

link (url) [BibTex]


A Continuous-time Perspective for Modeling Acceleration in Riemannian Optimization

Alimisis, F., Orvieto, A., Becigneul, G., Lucchi, A.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 1297-1307, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

link (url) [BibTex]


Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers

Rolinek, M., Swoboda, P., Zietlow, D., Paulus, A., Musil, V., Martius, G.

In Computer Vision – ECCV 2020, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Building on recent progress at the intersection of combinatorial optimization and deep learning, we propose an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers. Using the presence of heavily optimized combinatorial solvers together with some improvements in architecture design, we advance state-of-the-art on deep graph matching benchmarks for keypoint correspondence. In addition, we highlight the conceptual advantages of incorporating solvers into deep learning architectures, such as the possibility of post-processing with a strong multi-graph matching solver or the indifference to changes in the training setting. Finally, we propose two new challenging experimental setups.
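
The blackbox-differentiation scheme the title refers to (introduced by Vlastelica et al., ICLR 2020) keeps the combinatorial solver unmodified in the forward pass and builds a surrogate gradient from a second solver call on perturbed costs. A toy sketch of that scheme (illustrative solver and lambda, not the paper's graph-matching pipeline):

# Sketch of blackbox differentiation of a combinatorial solver (following
# Vlastelica et al., ICLR 2020): the solver is called as-is in the forward pass,
# and its "gradient" is a finite difference of solutions under perturbed costs.
import torch

def solver(costs):
    # Toy combinatorial solver: pick the single cheapest item (one-hot solution).
    y = torch.zeros_like(costs)
    y[costs.argmin()] = 1.0
    return y

class BlackboxSolver(torch.autograd.Function):
    lam = 10.0                                    # interpolation strength (hyperparameter)

    @staticmethod
    def forward(ctx, costs):
        y = solver(costs.detach())
        ctx.save_for_backward(costs, y)
        return y

    @staticmethod
    def backward(ctx, grad_output):
        costs, y = ctx.saved_tensors
        perturbed = costs + BlackboxSolver.lam * grad_output
        y_perturbed = solver(perturbed)
        return -(y - y_perturbed) / BlackboxSolver.lam

costs = torch.tensor([0.3, 0.1, 0.5], requires_grad=True)
y = BlackboxSolver.apply(costs)
loss = ((y - torch.tensor([1.0, 0.0, 0.0])) ** 2).sum()
loss.backward()                                   # costs.grad is the informative surrogate gradient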

Code Arxiv [BibTex]


Fair Decisions Despite Imperfect Predictions

Kilbertus, N., Gomez Rodriguez, M., Schölkopf, B., Muandet, K., Valera, I.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 277-287, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

link (url) [BibTex]


Integrals over Gaussians under Linear Domain Constraints

Gessner, A., Kanjilal, O., Hennig, P.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 2764-2774, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

link (url) [BibTex]


STAR: Sparse Trained Articulated Human Body Regressor

Osman, A. A. A., Bolkart, T., Black, M. J.

In European Conference on Computer Vision (ECCV), LNCS 12355, pages: 598-613, August 2020 (inproceedings)

Abstract
The SMPL body model is widely used for the estimation, synthesis, and analysis of 3D human pose and shape. While popular, we show that SMPL has several limitations and introduce STAR, which is quantitatively and qualitatively superior to SMPL. First, SMPL has a huge number of parameters resulting from its use of global blend shapes. These dense pose-corrective offsets relate every vertex on the mesh to all the joints in the kinematic tree, capturing spurious long-range correlations. To address this, we define per-joint pose correctives and learn the subset of mesh vertices that are influenced by each joint movement. This sparse formulation results in more realistic deformations and significantly reduces the number of model parameters to 20% of SMPL. When trained on the same data as SMPL, STAR generalizes better despite having many fewer parameters. Second, SMPL factors pose-dependent deformations from body shape while, in reality, people with different shapes deform differently. Consequently, we learn shape-dependent pose-corrective blend shapes that depend on both body pose and BMI. Third, we show that the shape space of SMPL is not rich enough to capture the variation in the human population. We address this by training STAR with an additional 10,000 scans of male and female subjects, and show that this results in better model generalization. STAR is compact, generalizes better to new bodies and is a drop-in replacement for SMPL. STAR is publicly available for research purposes at http://star.is.tue.mpg.de.

Project Page Code Video paper supplemental DOI [BibTex]


Modular Block-diagonal Curvature Approximations for Feedforward Architectures

Dangel, F., Harmeling, S., Hennig, P.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 799-808, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

link (url) [BibTex]


Monocular Expressive Body Regression through Body-Driven Attention

Choutas, V., Pavlakos, G., Bolkart, T., Tzionas, D., Black, M. J.

In Computer Vision – ECCV 2020, LNCS 12355, pages: 20-40, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
To understand how people look, interact, or perform tasks, we need to quickly and accurately capture their 3D body, face, and hands together from an RGB image. Most existing methods focus only on parts of the body. A few recent approaches reconstruct full expressive 3D humans from images using 3D body models that include the face and hands. These methods are optimization-based and thus slow, prone to local optima, and require 2D keypoints as input. We address these limitations by introducing ExPose (EXpressive POse and Shape rEgression), which directly regresses the body, face, and hands, in SMPL-X format, from an RGB image. This is a hard problem due to the high dimensionality of the body and the lack of expressive training data. Additionally, hands and faces are much smaller than the body, occupying very few image pixels. This makes hand and face estimation hard when body images are downscaled for neural networks. We make three main contributions. First, we account for the lack of training data by curating a dataset of SMPL-X fits on in-the-wild images. Second, we observe that body estimation localizes the face and hands reasonably well. We introduce body-driven attention for face and hand regions in the original image to extract higher-resolution crops that are fed to dedicated refinement modules. Third, these modules exploit part-specific knowledge from existing face and hand-only datasets. ExPose estimates expressive 3D humans more accurately than existing optimization methods at a small fraction of the computational cost. Our data, model and code are available for research at https://expose.is.tue.mpg.de.

code Short video Long video arxiv pdf suppl link (url) DOI Project Page [BibTex]


Category Level Object Pose Estimation via Neural Analysis-by-Synthesis

Chen, X., Dong, Z., Song, J., Geiger, A., Hilliges, O.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Many object pose estimation algorithms rely on the analysis-by-synthesis framework, which requires explicit representations of individual object instances. In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module that is capable of implicitly representing the appearance, shape and pose of entire object categories, thus rendering the need for explicit CAD models per object instance unnecessary. The image synthesis network is designed to efficiently span the pose configuration space so that model capacity can be used to capture the shape and local appearance (i.e., texture) variations jointly. At inference time the synthesized images are compared to the target via an appearance-based loss and the error signal is backpropagated through the network to the input parameters. Keeping the network parameters fixed, this allows for iterative optimization of the object pose, shape and appearance in a joint manner, and we experimentally show that the method can recover the orientation of objects with high accuracy from 2D images alone. When provided with depth measurements to overcome scale ambiguities, the method can accurately recover the full 6DOF pose.
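
The inference procedure — freezing the synthesis network and backpropagating an image loss to the input parameters only — follows the generic analysis-by-synthesis pattern sketched below (placeholder network and dimensions, not the paper's model):

# Minimal sketch of gradient-based analysis-by-synthesis (illustrative only): a
# frozen synthesis model maps pose/shape/appearance parameters to an image, and
# only those input parameters are optimized against the target image.
import torch
import torch.nn as nn
import torch.nn.functional as F

synthesis_net = nn.Sequential(                    # stand-in for the learned synthesis module
    nn.Linear(6 + 16, 256), nn.ReLU(), nn.Linear(256, 64 * 64)
)
for p in synthesis_net.parameters():
    p.requires_grad_(False)                       # network weights stay fixed at inference time

target_image = torch.rand(64 * 64)
pose = torch.zeros(6, requires_grad=True)         # e.g. axis-angle rotation + translation
shape_appearance = torch.zeros(16, requires_grad=True)

optimizer = torch.optim.Adam([pose, shape_appearance], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    rendered = synthesis_net(torch.cat([pose, shape_appearance]))
    loss = F.mse_loss(rendered, target_image)
    loss.backward()                               # error flows to the *inputs* only
    optimizer.step()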

Project Page pdf suppmat [BibTex]


Testing Goodness of Fit of Conditional Density Models with Kernels

Jitkrittum, W., Kanagawa, H., Schölkopf, B.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 221-230, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

link (url) [BibTex]


GRAB: A Dataset of Whole-Body Human Grasping of Objects

Taheri, O., Ghorbani, N., Black, M. J., Tzionas, D.

In Computer Vision – ECCV 2020, LNCS 12355, pages: 581-600, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This gives detailed 3D meshes over time, from which we compute contact between the body and object. This is a unique dataset that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application; we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at https://grab.is.tue.mpg.de.

pdf suppl video (long) video (short) link (url) DOI Project Page [BibTex]


How to navigate everyday distractions: Leveraging optimal feedback to train attention control

Wirzberger, M., Lado, A., Eckerstorfer, L., Oreshnikov, I., Passy, J., Stock, A., Shenhav, A., Lieder, F.

Annual Meeting of the Cognitive Science Society, July 2020 (conference)

Abstract
To stay focused on their chosen tasks, people have to inhibit distractions. The underlying attention control skills can improve through reinforcement learning, which can be accelerated by giving feedback. We applied the theory of metacognitive reinforcement learning to develop a training app that gives people optimal feedback on their attention control while they are working or studying. In an eight-day field experiment with 99 participants, we investigated the effect of this training on people's productivity, sustained attention, and self-control. Compared to a control condition without feedback, we found that participants receiving optimal feedback learned to focus increasingly better (f = .08, p < .01) and achieved higher productivity scores (f = .19, p < .01) during the training. In addition, they evaluated their productivity more accurately (r = .12, p < .01). However, due to asymmetric attrition problems, these findings need to be taken with a grain of salt.


How to navigate everyday distractions: Leveraging optimal feedback to train attention control DOI Project Page [BibTex]


Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization

Negiar, G., Dresdner, G., Tsai, A. Y., El Ghaoui, L., Locatello, F., Freund, R. M., Pedregosa, F.

37th International Conference on Machine Learning (ICML), pages: 296-305, July 2020 (conference)

link (url) [BibTex]


How to Train Your Differentiable Filter

Kloss, A., Martius, G., Bohg, J.

In July 2020 (inproceedings)

Abstract
In many robotic applications, it is crucial to maintain a belief about the state of a system. These state estimates serve as input for planning and decision making and provide feedback during task execution. Recursive Bayesian Filtering algorithms address the state estimation problem, but they require models of process dynamics and sensory observations as well as noise characteristics of these models. Recently, multiple works have demonstrated that these models can be learned by end-to-end training through differentiable versions of Recursive Filtering algorithms. The aim of this work is to improve understanding and applicability of such differentiable filters (DF). We implement DFs with four different underlying filtering algorithms and compare them in extensive experiments. We find that long enough training sequences are crucial for DF performance and that modelling heteroscedastic observation noise significantly improves results. While the different DFs perform similarly on our example task, we recommend the differentiable Extended Kalman Filter for getting started due to its simplicity.
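
As a rough illustration of what makes a filter "differentiable", the sketch below implements one Kalman filter step in an autodiff framework so that the noise parameters receive gradients from a loss on the state estimate (illustrative linear models, not one of the four DFs compared in the paper):

# Minimal sketch of a differentiable Kalman filter step: every operation is
# differentiable, so the (log-)noise parameters -- and, in general, learned
# dynamics/observation models -- can be trained end-to-end from a loss on the
# filtered state estimate. Models and dimensions are illustrative.
import torch

state_dim, obs_dim = 4, 2
A = torch.eye(state_dim)                          # process model (could be a learned network)
H = torch.zeros(obs_dim, state_dim); H[0, 0] = H[1, 1] = 1.0   # observation model

log_q = torch.zeros(state_dim, requires_grad=True)   # learnable process noise (log-variances)
log_r = torch.zeros(obs_dim, requires_grad=True)     # learnable observation noise

def kf_step(mu, Sigma, z):
    Q, R = torch.diag(log_q.exp()), torch.diag(log_r.exp())
    mu_pred = A @ mu                              # predict
    Sigma_pred = A @ Sigma @ A.T + Q
    S = H @ Sigma_pred @ H.T + R                  # update
    K = Sigma_pred @ H.T @ torch.linalg.inv(S)
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    Sigma_new = (torch.eye(state_dim) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new

mu, Sigma = torch.zeros(state_dim), torch.eye(state_dim)
mu, Sigma = kf_step(mu, Sigma, torch.tensor([0.5, -0.2]))
loss = ((mu[:2] - torch.tensor([0.4, -0.1])) ** 2).sum()
loss.backward()                                   # gradients flow into log_q and log_r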


pdf [BibTex]


Variational Autoencoders with Riemannian Brownian Motion Priors

Kalatzis, D., Eklund, D., Arvanitidis, G., Hauberg, S.

37th International Conference on Machine Learning (ICML), pages: 6789-6799, July 2020 (conference)

link (url) [BibTex]


Variational Bayes in Private Settings (VIPS) (Extended Abstract)

Foulds, J. R., Park, M., Chaudhuri, K., Welling, M.

Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI-PRICAI), pages: 5050-5054, (Editors: Christian Bessiere), International Joint Conferences on Artificial Intelligence Organization, July 2020, Journal track (conference)

link (url) DOI [BibTex]


Weakly-Supervised Disentanglement Without Compromises

Locatello, F., Poole, B., Rätsch, G., Schölkopf, B., Bachem, O., Tschannen, M.

37th International Conference on Machine Learning (ICML), pages: 7753-7764, July 2020 (conference)

link (url) [BibTex]


Measuring the Costs of Planning

Felso, V., Jain, Y. R., Lieder, F.

CogSci 2020, July 2020 (poster) Accepted

Abstract
Which information is worth considering depends on how much effort it would take to acquire and process it. From this perspective people’s tendency to neglect considering the long-term consequences of their actions (present bias) might reflect that looking further into the future becomes increasingly more effortful. In this work, we introduce and validate the use of Bayesian Inverse Reinforcement Learning (BIRL) for measuring individual differences in the subjective costs of planning. We extend the resource-rational model of human planning introduced by Callaway, Lieder, et al. (2018) by parameterizing the cost of planning. Using BIRL, we show that increased subjective cost for considering future outcomes may be associated with both the present bias and acting without planning. Our results highlight testing the causal effects of the cost of planning on both present bias and mental effort avoidance as a promising direction for future work.

[BibTex]