2020


AirCapRL: Autonomous Aerial Human Motion Capture Using Deep Reinforcement Learning

Tallamraju, R., Saini, N., Bonetto, E., Pabst, M., Liu, Y. T., Black, M., Ahmad, A.

IEEE Robotics and Automation Letters, 5(4):6678-6685, IEEE, October 2020, Also accepted and presented at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (article)

Abstract
In this letter, we introduce a deep reinforcement learning (DRL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and do not generalize across different systems. Moreover, the non-linearities and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision-making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real-world conditions.

ps

link (url) DOI [BibTex]



Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label-efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.

avg

pdf slides video Project Page [BibTex]



Chiroptical spectroscopy of a freely diffusing single nanoparticle

Sachs, J., Günther, J., Mark, A. G., Fischer, P.

Nature Communications, 11(4513), September 2020 (article)

Abstract
Chiral plasmonic nanoparticles can exhibit strong chiroptical signals compared to the corresponding molecular response. Observations are, however, generally restricted to measurements on stationary single particles with a fixed orientation, which complicates the spectral analysis. Here, we report the spectroscopic observation of a freely diffusing single chiral nanoparticle in solution. By acquiring time-resolved circular differential scattering signals we show that the spectral interpretation is significantly simplified. We experimentally demonstrate the equivalence between time-averaged chiral spectra observed for an individual nanostructure and the corresponding ensemble spectra, and thereby demonstrate the ergodic principle for chiroptical spectroscopy. We also show how it is possible for an achiral particle to yield an instantaneous chiroptical response, whereas the time-averaged signals are an unequivocal measure of chirality. Time-resolved chiroptical spectroscopy on a freely moving chiral nanoparticle advances the field of single-particle spectroscopy, and is a means to obtain the true signature of the nanoparticle’s chirality.

pf

link (url) DOI [BibTex]


Spatial ultrasound modulation by digitally controlling microbubble arrays

Ma, Z., Melde, K., Athanassiadis, A. G., Schau, M., Richter, H., Qiu, T., Fischer, P.

Nature Communications, 11(4537), September 2020 (article)

Abstract
Acoustic waves, capable of transmitting through optically opaque objects, have been widely used in biomedical imaging, industrial sensing and particle manipulation. High-fidelity wavefront shaping is essential to further improve performance in these applications. An acoustic analog to the successful spatial light modulator (SLM) in optics would be highly desirable. To date there have been no techniques shown that provide effective and dynamic modulation of a sound wave and which also support scale-up to a high number of individually addressable pixels. In the present study, we introduce a dynamic spatial ultrasound modulator (SUM), which dynamically reshapes incident plane waves into complex acoustic images. Its transmission function is set with a digitally generated pattern of microbubbles controlled by a complementary metal–oxide–semiconductor (CMOS) chip, which results in a binary amplitude acoustic hologram. We employ this device to project sequentially changing acoustic images and demonstrate the first dynamic parallel assembly of microparticles using a SUM.

pf

link (url) DOI [BibTex]


Characterization of a Magnetic Levitation Haptic Interface for Realistic Tool-Based Interactions

Lee, H., Tombak, G. I., Park, G., Kuchenbecker, K. J.

Work-in-progress poster presented at EuroHaptics, Leiden, The Netherlands, September 2020 (misc)

Abstract
We introduce our recent study on the characterization of a commercial magnetic levitation haptic interface (MagLev 200, Butterfly Haptics LLC) for realistic high-bandwidth interactions. This device’s haptic rendering scheme can provide strong 6-DoF (force and torque) feedback without friction at all poses in its small workspace. The objective of our study is to enable the device to accurately render realistic multidimensional vibrotactile stimuli measured from a stylus-like tool. Our approach is to characterize the dynamics between the commanded wrench and the resulting translational acceleration across the frequency range of interest. To this end, we first custom-designed and attached a pen-shaped manipulandum (11.5 cm, aluminum) to the top of the MagLev 200’s end-effector for better usability in grasping. An accelerometer (ADXL354, Analog Devices) was rigidly mounted inside the manipulandum. Then, we collected a data set where the input is a 30-second-long force and/or torque signal commanded as a sweep function from 10 to 500 Hz; the output is the corresponding acceleration measurement, which we collected both with and without a user holding the handle. We succeeded at fitting both non-parametric and parametric versions of the transfer functions for both scenarios, with a fitting accuracy of about 95% for the parametric transfer functions. In the future, we plan to find the best method of applying the inverse parametric transfer function to our system. We will then employ that compensation method in a user study to evaluate the realism of different algorithms for reducing the dimensionality of tool-based vibrotactile cues.

hi

link (url) [BibTex]



Combining learned and analytical models for predicting action effects from sensory data

Kloss, A., Schaal, S., Bohg, J.

International Journal of Robotics Research, September 2020 (article)

Abstract
One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. This enables optimal action selection to reach a certain goal state. Traditionally, dynamics are approximated by physics-based analytical models. These models rely on specific state representations that may be hard to obtain from raw sensory data, especially if no knowledge of the object shape is assumed. More recently, we have seen learning approaches that can predict the effect of complex physical interactions directly from sensory input. It is however an open question how far these models generalize beyond their training data. In this work, we investigate the advantages and limitations of neural network based learning approaches for predicting the effects of actions based on sensory input and show how analytical and learned models can be combined to leverage the best of both worlds. As the physical interaction task, we use planar pushing, for which there exists a well-known analytical model and a large real-world dataset. We propose to use a convolutional neural network to convert raw depth images or organized point clouds into a suitable representation for the analytical model and compare this approach to using neural networks for both perception and prediction. A systematic evaluation of the proposed approach on a very large real-world dataset shows two main advantages of the hybrid architecture. Compared to a pure neural network, it significantly (i) reduces required training data and (ii) improves generalization to novel physical interactions.

am

arXiv pdf link (url) DOI [BibTex]


Tactile Textiles: An Assortment of Fabric-Based Tactile Sensors for Contact Force and Contact Location

Burns, R. B., Thomas, N., Lee, H., Faulkner, R., Kuchenbecker, K. J.

Hands-on demonstration presented at EuroHaptics, Leiden, The Netherlands, September 2020, Rachael Bevill Burns, Neha Thomas, and Hyosang Lee contributed equally to this publication (misc)

Abstract
Fabric-based tactile sensors are promising for the construction of robotic skin due to their soft and flexible nature. Conductive fabric layers can be used to form piezoresistive structures that are sensitive to contact force and/or contact location. This demonstration showcases three diverse fabric-based tactile sensors we have created. The first detects dynamic tactile events anywhere within a region on a robot’s body. The second design measures the precise location at which a single low-force contact is applied. The third sensor uses electrical resistance tomography to output both the force and location of multiple simultaneous contacts applied across a surface.

hi

Project Page Project Page [BibTex]



Estimating Human Handshape by Feeling the Wrist

Forte, M., Young, E. M., Kuchenbecker, K. J.

Work-in-progress poster presented at EuroHaptics, Leiden, The Netherlands, September 2020 (misc)

hi

[BibTex]



Optimal Sensor Placement for Recording the Contact Vibrations of a Medical Tool

Gourishetti, R., Serhat, G., Kuchenbecker, K. J.

Work-in-progress poster presented at EuroHaptics, Leiden, The Netherlands, September 2020 (misc)

[BibTex]



Sweat softens the outermost layer of the human finger pad: evidence from simulations and experiments

Nam, S., Kuchenbecker, K. J.

Work-in-progress poster presented at Eurohaptics Conference, Leiden, The Netherlands, September 2020, Award for best poster in 2020 (poster)

hi

Project Page [BibTex]



Intermediate Ridges Amplify Mechanoreceptor Strains in Static and Dynamic Touch

Serhat, G., Kuchenbecker, K. J.

Work-in-progress poster presented at EuroHaptics (EH), Leiden, The Netherlands, September 2020 (misc)

hi

[BibTex]



Seeing through Touch: Contact-Location Sensing and Tactile Feedback for Prosthetic Hands

Thomas, N., Kuchenbecker, K. J.

Works-in-progress abstract and poster presented at Eurohaptics 2020, Leiden, Netherlands, September 2020 (misc)

Abstract
Locating and picking up an object without vision is a simple task for able-bodied people, due in part to their rich tactile perception capabilities. The same cannot be said for users of standard myoelectric prostheses, who must rely largely on visual cues to successfully interact with the environment. To enable prosthesis users to locate and grasp objects without looking at them, we propose two changes: adding specialized contact-location sensing to the dorsal and palmar aspects of the prosthetic hand’s fingers, and providing the user with tactile feedback of where an object touches the fingers. To evaluate the potential utility of these changes, we developed a simple, sensitive, fabric-based tactile sensor which provides continuous contact location information via a change in voltage of a voltage divider circuit. This sensor was wrapped around the fingers of a commercial prosthetic hand (Ottobock SensorHand Speed). Using an ATI Nano17 force sensor, we characterized the tactile sensor’s response to normal force at distributed contact locations and obtained an average detection threshold of 0.63 +/- 0.26 N. We also confirmed that the voltage-to-location mapping is linear (R squared = 0.99). Sensor signals were adapted to the stationary vibrotactile funneling illusion to provide haptic feedback of contact location. These preliminary results indicate a promising system that imitates a key aspect of the sensory capabilities of the intact hand. Future work includes testing the system in a modified reach-grasp-and-lift study, in which participants must accomplish the task blindfolded.

hi

[BibTex]



A Gamified App that Helps People Overcome Self-Limiting Beliefs by Promoting Metacognition
A Gamified App that Helps People Overcome Self-Limiting Beliefs by Promoting Metacognition

Amo, V., Lieder, F.

SIG 8 Meets SIG 16, September 2020 (conference) Accepted

Abstract
Previous research has shown that approaching learning with a growth mindset is key for maintaining motivation and overcoming setbacks. Mindsets are systems of beliefs that people hold to be true. They influence a person's attitudes, thoughts, and emotions when they learn something new or encounter challenges. In clinical psychology, metareasoning (reflecting on one's mental processes) and meta-awareness (recognizing thoughts as mental events instead of equating them to reality) have proven effective for overcoming maladaptive thinking styles. Hence, they are potentially an effective method for overcoming self-limiting beliefs in other domains as well. However, the potential of integrating assisted metacognition into mindset interventions has not been explored yet. Here, we propose that guiding and training people on how to leverage metareasoning and meta-awareness for overcoming self-limiting beliefs can significantly enhance the effectiveness of mindset interventions. To test this hypothesis, we develop a gamified mobile application that guides and trains people to use metacognitive strategies based on Cognitive Restructuring (CR) and Acceptance Commitment Therapy (ACT) techniques. The application helps users to identify and overcome self-limiting beliefs by working with aversive emotions when they are triggered by fixed mindsets in real-life situations. Our app aims to help people sustain their motivation to learn when they face inner obstacles (e.g. anxiety, frustration, and demotivation). We expect the application to be an effective tool for helping people better understand and develop the metacognitive skills of emotion regulation and self-regulation that are needed to overcome self-limiting beliefs and develop growth mindsets.

re

A gamified app that helps people overcome self-limiting beliefs by promoting metacognition [BibTex]


Haptify: a Comprehensive Benchmarking System for Grounded Force-Feedback Haptic Devices

Fazlollahi, F., Kuchenbecker, K. J.

Work-in-progress poster presented at the IEEE Eurohaptics Conference, Leiden, Netherlands, September 2020 (poster)

hi

[BibTex]



Characterization of active matter in dense suspensions with heterodyne laser Doppler velocimetry

Sachs, J., Kottapalli, S. N., Fischer, P., Botin, D., Palberg, T.

Colloid and Polymer Science, August 2020 (article)

Abstract
We present a novel approach for characterizing the properties and performance of active matter in dilute suspension as well as in crowded environments. We use Super-Heterodyne Laser-Doppler-Velocimetry (SH-LDV) to study large ensembles of catalytically active Janus particles moving under UV illumination. SH-LDV facilitates a model-free determination of the swimming speed and direction, with excellent ensemble averaging. In addition, we obtain information on the distribution of the catalytic activity. Moreover, SH-LDV operates away from walls and permits a facile correction for multiple scattering contributions. It thus allows for studies of concentrated suspensions of swimmers or of systems where swimmers propel actively in an environment crowded by passive particles. We demonstrate the versatility and the scope of the method with a few selected examples. We anticipate that SH-LDV complements established methods and paves the way for systematic measurements at previously inaccessible boundary conditions.

pf

link (url) DOI [BibTex]



Convolutional Occupancy Networks

Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.

avg

pdf suppmat video Project Page [BibTex]



Model-Agnostic Counterfactual Explanations for Consequential Decisions

Karimi, A., Barthe, G., Balle, B., Valera, I.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 895-905, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

ei plg

arXiv link (url) [BibTex]



More Powerful Selective Kernel Tests for Feature Selection

Lim, J. N., Yamada, M., Jitkrittum, W., Terada, Y., Matsui, S., Shimodaira, H.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 820-830, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

ei

arXiv link (url) [BibTex]



Bayesian Online Prediction of Change Points

Agudelo-España, D., Gomez-Gonzalez, S., Bauer, S., Schölkopf, B., Peters, J.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 320-329, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

ei

link (url) [BibTex]



Semi-supervised learning, causality, and the conditional cluster assumption

von Kügelgen, J., Mey, A., Loog, M., Schölkopf, B.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 1-10, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

ei

link (url) [BibTex]



Kernel Conditional Moment Test via Maximum Moment Restriction

Muandet, K., Jitkrittum, W., Kübler, J. M.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 41-50, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

ei

link (url) [BibTex]



Learning Sensory-Motor Associations from Demonstration

Berenz, V., Bjelic, A., Herath, L., Mainprice, J.

29th IEEE International Conference on Robot and Human Interactive Communication (Ro-Man 2020), August 2020 (conference) Accepted

Abstract
We propose a method which generates reactive robot behavior learned from human demonstration. In order to do so, we use the Playful programming language which is based on the reactive programming paradigm. This allows us to represent the learned behavior as a set of associations between sensor and motor primitives in a human readable script. Distinguishing between sensor and motor primitives introduces a supplementary level of granularity and more importantly enforces feedback, increasing adaptability and robustness. As the experimental section shows, useful behaviors may be learned from a single demonstration covering a very limited portion of the task space.

am

[BibTex]



Vision-based Force Estimation for a da Vinci Instrument Using Deep Neural Networks

Lee, Y., Husin, H. M., Forte, M., Lee, S., Kuchenbecker, K. J.

Extended abstract presented as an Emerging Technology ePoster at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Cleveland, Ohio, USA, August 2020 (misc) Accepted

hi

[BibTex]



On the design of consequential ranking algorithms

Tabibian, B., Gómez, V., De, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 171-180, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

ei

link (url) [BibTex]



Importance Sampling via Local Sensitivity

Raj, A., Musco, C., Mackey, L.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 3099-3109, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

ei

link (url) [BibTex]



A Continuous-time Perspective for Modeling Acceleration in Riemannian Optimization

Alimisis, F., Orvieto, A., Becigneul, G., Lucchi, A.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 1297-1307, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

ei

link (url) [BibTex]



Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers

Rolinek, M., Swoboda, P., Zietlow, D., Paulus, A., Musil, V., Martius, G.

In Computer Vision – ECCV 2020, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Building on recent progress at the intersection of combinatorial optimization and deep learning, we propose an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers. Using the presence of heavily optimized combinatorial solvers together with some improvements in architecture design, we advance state-of-the-art on deep graph matching benchmarks for keypoint correspondence. In addition, we highlight the conceptual advantages of incorporating solvers into deep learning architectures, such as the possibility of post-processing with a strong multi-graph matching solver or the indifference to changes in the training setting. Finally, we propose two new challenging experimental setups.

al

Code Arxiv [BibTex]



Fair Decisions Despite Imperfect Predictions

Kilbertus, N., Gomez Rodriguez, M., Schölkopf, B., Muandet, K., Valera, I.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 277-287, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

ei plg

link (url) [BibTex]



Integrals over Gaussians under Linear Domain Constraints

Gessner, A., Kanjilal, O., Hennig, P.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 2764-2774, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

ei

link (url) [BibTex]



STAR: Sparse Trained Articulated Human Body Regressor

Osman, A. A. A., Bolkart, T., Black, M. J.

In European Conference on Computer Vision (ECCV), August 2020 (inproceedings)

Abstract
The SMPL body model is widely used for the estimation, synthesis, and analysis of 3D human pose and shape. While popular, we show that SMPL has several limitations and introduce STAR, which is quantitatively and qualitatively superior to SMPL. First, SMPL has a huge number of parameters resulting from its use of global blend shapes. These dense pose-corrective offsets relate every vertex on the mesh to all the joints in the kinematic tree, capturing spurious long-range correlations. To address this, we define per-joint pose correctives and learn the subset of mesh vertices that are influenced by each joint movement. This sparse formulation results in more realistic deformations and significantly reduces the number of model parameters to 20% of SMPL. When trained on the same data as SMPL, STAR generalizes better despite having many fewer parameters. Second, SMPL factors pose-dependent deformations from body shape while, in reality, people with different shapes deform differently. Consequently, we learn shape-dependent pose-corrective blend shapes that depend on both body pose and BMI. Third, we show that the shape space of SMPL is not rich enough to capture the variation in the human population. We address this by training STAR with an additional 10,000 scans of male and female subjects, and show that this results in better model generalization. STAR is compact, generalizes better to new bodies and is a drop-in replacement for SMPL. STAR is publicly available for research purposes at http://star.is.tue.mpg.de.

ps

Project Page Code Video paper supplemental [BibTex]


3D Morphable Face Models - Past, Present and Future

Egger, B., Smith, W. A. P., Tewari, A., Wuhrer, S., Zollhoefer, M., Beeler, T., Bernard, F., Bolkart, T., Kortylewski, A., Romdhani, S., Theobalt, C., Blanz, V., Vetter, T.

ACM Transactions on Graphics, 39(5), August 2020 (article)

Abstract
In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.

ps

project page pdf preprint DOI [BibTex]



Monocular Expressive Body Regression through Body-Driven Attention

Choutas, V., Pavlakos, G., Bolkart, T., Tzionas, D., Black, M. J.

In Computer Vision – ECCV 2020, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
To understand how people look, interact, or perform tasks, we need to quickly and accurately capture their 3D body, face, and hands together from an RGB image. Most existing methods focus only on parts of the body. A few recent approaches reconstruct full expressive 3D humans from images using 3D body models that include the face and hands. These methods are optimization-based and thus slow, prone to local optima, and require 2D keypoints as input. We address these limitations by introducing ExPose (EXpressive POse and Shape rEgression), which directly regresses the body, face, and hands, in SMPL-X format, from an RGB image. This is a hard problem due to the high dimensionality of the body and the lack of expressive training data. Additionally, hands and faces are much smaller than the body, occupying very few image pixels. This makes hand and face estimation hard when body images are downscaled for neural networks. We make three main contributions. First, we account for the lack of training data by curating a dataset of SMPL-X fits on in-the-wild images. Second, we observe that body estimation localizes the face and hands reasonably well. We introduce body-driven attention for face and hand regions in the original image to extract higher-resolution crops that are fed to dedicated refinement modules. Third, these modules exploit part-specific knowledge from existing face and hand-only datasets. ExPose estimates expressive 3D humans more accurately than existing optimization methods at a small fraction of the computational cost. Our data, model and code are available for research at https://expose.is.tue.mpg.de.

ps

code Short video Long video arxiv pdf suppl link (url) Project Page [BibTex]


Modular Block-diagonal Curvature Approximations for Feedforward Architectures

Dangel, F., Harmeling, S., Hennig, P.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 108, pages: 799-808, Proceedings of Machine Learning Research, (Editors: Silvia Chiappa and Roberto Calandra), PMLR, August 2020 (conference)

ei

link (url) [BibTex]


Category Level Object Pose Estimation via Neural Analysis-by-Synthesis

Chen, X., Dong, Z., Song, J., Geiger, A., Hilliges, O.

In European Conference on Computer Vision (ECCV), Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Many object pose estimation algorithms rely on the analysis-by-synthesis framework, which requires explicit representations of individual object instances. In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module that is capable of implicitly representing the appearance, shape and pose of entire object categories, thus rendering explicit CAD models per object instance unnecessary. The image synthesis network is designed to efficiently span the pose configuration space so that model capacity can be used to capture the shape and local appearance (i.e., texture) variations jointly. At inference time the synthesized images are compared to the target via an appearance-based loss and the error signal is backpropagated through the network to the input parameters. Keeping the network parameters fixed, this allows for iterative optimization of the object pose, shape and appearance in a joint manner, and we experimentally show that the method can recover the orientation of objects with high accuracy from 2D images alone. When provided with depth measurements to overcome scale ambiguities, the method can accurately recover the full 6DOF pose.
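The iterative, gradient-based fitting described above can be illustrated with a toy analysis-by-synthesis loop. The hand-written renderer, the single rotation angle, and the numerical gradient below are all stand-ins for the paper's trained synthesis network and backpropagated appearance loss: a minimal sketch of the idea, not the authors' implementation.

```python
import numpy as np

# Toy analysis-by-synthesis loop: a fixed "synthesis" function renders an
# observation from a pose parameter, and the pose is recovered by gradient
# descent on an appearance loss. render() and the rotation-only pose are
# illustrative stand-ins for the paper's trained neural synthesis module.

def render(theta):
    # "Image" = 2D positions of two template points rotated by theta.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    template = np.array([[1.0, 0.0], [0.0, 0.5]])
    return (R @ template.T).T.ravel()

target = render(0.7)          # observation with unknown pose of 0.7 rad

def loss(t):
    return np.sum((render(t) - target) ** 2)

theta, lr, eps = 0.0, 0.5, 1e-6
for _ in range(200):
    # Central-difference gradient of the appearance loss w.r.t. the pose
    # input (a DL framework would provide this via backpropagation).
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad
# theta converges to the true orientation.
```

With the synthesis function fixed, only the input parameters are optimized, which is exactly the structure the paper exploits for pose, shape, and appearance jointly.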

avg

Project Page pdf suppmat [BibTex]


GRAB: A Dataset of Whole-Body Human Grasping of Objects

Taheri, O., Ghorbani, N., Black, M. J., Tzionas, D.

In Computer Vision – ECCV 2020, Springer International Publishing, Cham, August 2020 (inproceedings)

Abstract
Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This gives detailed 3D meshes over time, from which we compute contact between the body and object. This is a unique dataset that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application; we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at https://grab.is.tue.mpg.de.

ps

pdf suppl video (long) video (short) link (url) DOI [BibTex]


Optimal To-Do List Gamification

Stojcheski, J., Felso, V., Lieder, F.

arXiv, August 2020 (techreport)

Abstract
What should I work on first? What can wait until later? Which projects should I prioritize and which tasks are not worth my time? These are challenging questions that many people face every day. People’s intuitive strategy is to prioritize their immediate experience over the long-term consequences. This leads to procrastination and the neglect of important long-term projects in favor of seemingly urgent tasks that are less important. Optimal gamification strives to help people overcome these problems by incentivizing each task by a number of points that communicates how valuable it is in the long run. Unfortunately, computing the optimal number of points with standard dynamic programming methods quickly becomes intractable as the number of a person’s projects and the number of tasks required by each project increase. Here, we introduce and evaluate a scalable method for identifying which tasks are most important in the long run and incentivizing each task according to its long-term value. Our method makes it possible to create to-do list gamification apps that can handle the size and complexity of people’s to-do lists in the real world.
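The "optimal number of points" the abstract refers to is the long-term value of each task under an underlying Markov decision process. A toy version of that computation, using plain value iteration on a made-up two-task example (the paper's contribution is precisely a scalable alternative to this brute-force approach, which the sketch below does not attempt):

```python
# Toy illustration of optimal gamification: incentivize each task with
# points reflecting its long-term value, computed by value iteration.
# The task graph and rewards are invented for illustration.

def value_iteration(transitions, rewards, gamma=0.99, tol=1e-8):
    """transitions[s][a] -> next state; rewards[s][a] -> immediate reward."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:  # terminal state, value stays 0
                continue
            best = max(rewards[s][a] + gamma * values[s2]
                       for a, s2 in actions.items())
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < tol:
            return values

# Two tasks: "email" (small immediate reward) vs. "thesis" (no immediate
# reward, but unlocks a large delayed payoff).
transitions = {
    "start": {"email": "done_email", "thesis": "done_thesis"},
    "done_email": {},
    "done_thesis": {"submit": "graduated"},
    "graduated": {},
}
rewards = {
    "start": {"email": 1.0, "thesis": 0.0},
    "done_thesis": {"submit": 100.0},
}
v = value_iteration(transitions, rewards)

# Points for each first action = its long-run (Q-)value.
points = {a: rewards["start"][a] + 0.99 * v[s2]
          for a, s2 in transitions["start"].items()}
# Working on the thesis earns far more points than the quick email,
# counteracting the bias toward immediate experience.
```

The dynamic-programming sweep above is exactly what becomes intractable as projects and tasks multiply, motivating the paper's scalable approximation.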

re

link (url) Project Page [BibTex]


Testing Goodness of Fit of Conditional Density Models with Kernels

Jitkrittum, W., Kanagawa, H., Schölkopf, B.

Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 221-230, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)

ei

link (url) [BibTex]


How to navigate everyday distractions: Leveraging optimal feedback to train attention control

Wirzberger, M., Lado, A., Eckerstorfer, L., Oreshnikov, I., Passy, J., Stock, A., Shenhav, A., Lieder, F.

Annual Meeting of the Cognitive Science Society, July 2020 (conference) Accepted

Abstract
To stay focused on their chosen tasks, people have to inhibit distractions. The underlying attention control skills can improve through reinforcement learning, which can be accelerated by giving feedback. We applied the theory of metacognitive reinforcement learning to develop a training app that gives people optimal feedback on their attention control while they are working or studying. In an eight-day field experiment with 99 participants, we investigated the effect of this training on people's productivity, sustained attention, and self-control. Compared to a control condition without feedback, we found that participants receiving optimal feedback learned to focus increasingly better (f = .08, p < .01) and achieved higher productivity scores (f = .19, p < .01) during the training. In addition, they evaluated their productivity more accurately (r = .12, p < .01). However, due to asymmetric attrition problems, these findings need to be taken with a grain of salt.

re sf

How to navigate everyday distractions: Leveraging optimal feedback to train attention control DOI Project Page [BibTex]


Algorithmic Recourse: from Counterfactual Explanations to Interventions

Karimi, A., Schölkopf, B., Valera, I.

37th International Conference on Machine Learning (ICML), July 2020 (conference) Submitted

ei plg

[BibTex]


Learning Variable Impedance Control for Contact Sensitive Tasks

Bogdanovic, M., Khadiv, M., Righetti, L.

IEEE Robotics and Automation Letters (Early Access), IEEE, July 2020 (article)

Abstract
Reinforcement learning algorithms have shown great success in solving different problems ranging from playing video games to robotics. However, they struggle to solve delicate robotic problems, especially those involving contact interactions. Though in principle a policy outputting joint torques should be able to learn these tasks, in practice we see that it has difficulty robustly solving the problem without any structure in the action space. In this paper, we investigate how the choice of action space can give robust performance in the presence of contact uncertainties. We propose to learn a policy that outputs impedance and desired position in joint space as a function of system states, without imposing any other structure on the problem. We compare the performance of this approach to torque and position control policies under different contact uncertainties. Extensive simulation results on two different systems, a hopper (floating-base) with intermittent contacts and a manipulator (fixed-base) wiping a table, show that our proposed approach outperforms policies outputting torque or position in terms of both learning rate and robustness to environment uncertainty.

mg

DOI [BibTex]


How to Train Your Differentiable Filter

Kloss, A., Martius, G., Bohg, J.

In July 2020 (inproceedings)

Abstract
In many robotic applications, it is crucial to maintain a belief about the state of a system. These state estimates serve as input for planning and decision making and provide feedback during task execution. Recursive Bayesian filtering algorithms address the state estimation problem, but they require models of process dynamics and sensory observations as well as noise characteristics of these models. Recently, multiple works have demonstrated that these models can be learned by end-to-end training through differentiable versions of recursive filtering algorithms. The aim of this work is to improve understanding and applicability of such differentiable filters (DFs). We implement DFs with four different underlying filtering algorithms and compare them in extensive experiments. We find that long enough training sequences are crucial for DF performance and that modelling heteroscedastic observation noise significantly improves results. And while the different DFs perform similarly on our example task, we recommend the differentiable extended Kalman filter for getting started due to its simplicity.
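For readers unfamiliar with the underlying filters, a minimal Kalman-filter step of the kind a DF backpropagates through looks as follows. In a DF the models A, H and noise covariances Q, R would be produced by neural networks and trained end-to-end; here they are fixed, hand-picked placeholders on a toy constant-velocity system.

```python
import numpy as np

# One predict/update cycle of a linear Kalman filter; differentiable
# filters backpropagate through exactly this computation, with learned
# models and noise covariances instead of the fixed ones used here.

def kf_step(mu, Sigma, z, A, H, Q, R):
    # Predict: propagate the belief through the process model.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update: correct the prediction with the observation z.
    S = H @ Sigma_pred @ H.T + R             # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new

# 1-D constant-velocity toy system: state = [position, velocity].
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # process model
H = np.array([[1.0, 0.0]])               # we only observe position
Q = 0.01 * np.eye(2)                     # process noise (learned in a DF)
R = np.array([[0.1]])                    # observation noise (learned in a DF)

mu, Sigma = np.zeros(2), np.eye(2)
for z in [1.0, 2.0, 3.0]:                # position measurements
    mu, Sigma = kf_step(mu, Sigma, np.array([z]), A, H, Q, R)
# After a few steps the velocity estimate approaches the true value of 1,
# even though velocity is never observed directly.
```

The paper's finding that heteroscedastic observation noise helps corresponds to making R a function of the current observation rather than a constant.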

am

pdf [BibTex]


Variational Bayes in Private Settings (VIPS) (Extended Abstract)

Foulds, J. R., Park, M., Chaudhuri, K., Welling, M.

Proceedings of the 29th International Joint Conference on Artificial Intelligence, (IJCAI-PRICAI), pages: 5050-5054, (Editors: Christian Bessiere), International Joint Conferences on Artificial Intelligence Organization, July 2020, Journal track (conference)

ei

link (url) DOI [BibTex]


Event-triggered Learning

Solowjow, F., Trimpe, S.

Automatica, 117, Elsevier, July 2020 (article)

ics

arXiv PDF DOI Project Page [BibTex]


Measuring the Costs of Planning

Felso, V., Jain, Y. R., Lieder, F.

CogSci 2020, July 2020 (poster) Accepted

Abstract
Which information is worth considering depends on how much effort it would take to acquire and process it. From this perspective people’s tendency to neglect considering the long-term consequences of their actions (present bias) might reflect that looking further into the future becomes increasingly more effortful. In this work, we introduce and validate the use of Bayesian Inverse Reinforcement Learning (BIRL) for measuring individual differences in the subjective costs of planning. We extend the resource-rational model of human planning introduced by Callaway, Lieder, et al. (2018) by parameterizing the cost of planning. Using BIRL, we show that increased subjective cost for considering future outcomes may be associated with both the present bias and acting without planning. Our results highlight testing the causal effects of the cost of planning on both present bias and mental effort avoidance as a promising direction for future work.

re

[BibTex]


Learning of sub-optimal gait controllers for magnetic walking soft millirobots

Culha, U., Demir, S. O., Trimpe, S., Sitti, M.

In Proceedings of Robotics: Science and Systems, July 2020, Culha and Demir are equally contributing authors (inproceedings)

Abstract
Untethered small-scale soft robots have promising applications in minimally invasive surgery, targeted drug delivery, and bioengineering, as they can access confined spaces in the human body. However, due to highly nonlinear soft continuum deformation kinematics, inherent stochastic variability during fabrication at the small scale, and lack of accurate models, the conventional control methods cannot be easily applied. Adaptivity of robot control is additionally crucial for medical operations, as operation environments show large variability, and robot materials may degrade or change over time, which would have deteriorating effects on the robot motion and task performance. Therefore, we propose using a probabilistic learning approach for millimeter-scale magnetic walking soft robots using Bayesian optimization (BO) and Gaussian processes (GPs). Our approach provides a data-efficient learning scheme to find controller parameters while optimizing the stride length performance of the walking soft millirobot within a small number of physical experiments. We demonstrate adaptation to fabrication variabilities in three different robots and to walking surfaces with different roughness. We also show an improvement in the learning performance by transferring the learning results of one robot to the others as prior information.
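The BO loop below illustrates the general recipe (a GP surrogate plus an acquisition function over controller parameters). The one-dimensional "stride length" objective and the upper-confidence-bound acquisition are illustrative assumptions, not the paper's exact setup, and the synthetic objective stands in for a physical experiment.

```python
import numpy as np

# Sketch of Bayesian optimization for gait tuning: fit a GP to observed
# stride lengths and pick the next controller parameter by UCB.

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_fit_predict(X, y, Xc, noise=1e-4):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xc)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xc, Xc) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, var

def stride_length(p):                 # stand-in for a robot experiment
    return np.exp(-(p - 0.6) ** 2 / 0.05)

Xc = np.linspace(0.0, 1.0, 201)       # candidate gait parameters
X = [0.1, 0.9]                        # two initial experiments
y = [stride_length(x) for x in X]
for _ in range(10):                   # small physical-experiment budget
    mu, var = gp_fit_predict(np.array(X), np.array(y), Xc)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
    p_next = Xc[int(np.argmax(ucb))]
    X.append(float(p_next))
    y.append(stride_length(p_next))
best = X[int(np.argmax(y))]
# The loop concentrates experiments near the best gait parameter while
# keeping the total number of trials small.
```

Transfer between robots, as in the paper, would correspond to initializing the GP with another robot's evaluations as prior data.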

pi

link (url) DOI [BibTex]


Leveraging Machine Learning to Automatically Derive Robust Planning Strategies from Biased Models of the Environment

Kemtur, A., Jain, Y. R., Mehta, A., Callaway, F., Consul, S., Stojcheski, J., Lieder, F.

CogSci 2020, July 2020, Anirudha Kemtur and Yash Raj Jain contributed equally to this publication. (conference)

Abstract
Teaching clever heuristics is a promising approach to improve decision-making. We can leverage machine learning to discover clever strategies automatically. Current methods require an accurate model of the decision problems people face in real life. But most models are misspecified because of limited information and cognitive biases. To address this problem we develop strategy discovery methods that are robust to model misspecification. Robustness is achieved by modeling model misspecification and handling uncertainty about the real world according to Bayesian inference. We translate our methods into an intelligent tutor that automatically discovers and teaches robust planning strategies. Our robust cognitive tutor significantly improved human decision-making when the model was so biased that conventional cognitive tutors were no longer effective. These findings highlight that our robust strategy discovery methods are a significant step towards leveraging artificial intelligence to improve human decision-making in the real world.

re

Project Page [BibTex]


Actively Learning Gaussian Process Dynamics

Buisson-Fenet, M., Solowjow, F., Trimpe, S.

2nd Annual Conference on Learning for Dynamics and Control, June 2020 (conference) Accepted

Abstract
Despite the availability of ever more data enabled through modern sensor and computer technology, it still remains an open problem to learn dynamical systems in a sample-efficient way. We propose active learning strategies that leverage information-theoretical properties arising naturally during Gaussian process regression, while respecting constraints on the sampling process imposed by the system dynamics. Sample points are selected in regions with high uncertainty, leading to exploratory behavior and data-efficient training of the model. All results are verified in an extensive numerical benchmark.
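The core selection rule, sampling where the GP is most uncertain, can be sketched in a few lines. The RBF kernel, lengthscale, and candidate grid are illustrative choices; the paper's method additionally respects constraints imposed by the system dynamics, which this toy version omits.

```python
import numpy as np

# Active learning with a Gaussian process: among candidate inputs, pick
# the one with the highest posterior predictive variance.

def rbf(a, b, lengthscale=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

def gp_posterior_var(X_train, X_cand, noise=1e-4):
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_train, X_cand)
    Kss = rbf(X_cand, X_cand)
    return np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))

X_train = np.array([0.0, 0.1, 0.2])   # all samples clustered at the left
X_cand = np.linspace(0.0, 1.0, 101)   # candidate inputs
var = gp_posterior_var(X_train, X_cand)
x_next = X_cand[int(np.argmax(var))]
# The most informative next sample lies far from the existing data,
# driving the exploratory behavior described in the abstract.
```

In the dynamics-learning setting, the candidate set would be restricted to states actually reachable under the system dynamics, which is the constraint the paper builds in.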

ics

ArXiv [BibTex]


Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), pages: 6468-6477, IEEE, June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.

ps

Project page Code Short video Long video arXiv DOI [BibTex]
