

2019


Foundations of Comparison-Based Hierarchical Clustering

Ghoshdastidar, D., Perrot, M., von Luxburg, U.

Advances in Neural Information Processing Systems 32 (NIPS 2019), NeurIPS, Neural Information Processing Systems 2019, December 2019 (conference)

slt

link (url) Project Page [BibTex]


Assessing Aesthetics of Generated Abstract Images Using Correlation Structure

Khajehabdollahi, S., Martius, G., Levina, A.

In Proceedings 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pages: 306-313, IEEE, 2019 IEEE Symposium Series on Computational Intelligence (SSCI), December 2019 (inproceedings)

al

DOI [BibTex]


Fisher Efficient Inference of Intractable Models

Liu, S., Kanamori, T., Jitkrittum, W., Chen, Y.

Advances in Neural Information Processing Systems 32, pages: 8790-8800, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., Neural Information Processing Systems 2019, December 2019 (conference)

ei

link (url) [BibTex]


Soft-magnetic coatings as possible sensors for magnetic imaging of superconductors

Ionescu, A., Simmendinger, J., Bihler, M., Miksch, C., Fischer, P., Soltan, S., Schütz, G., Albrecht, J.

Supercond. Sci. and Tech., 33, pages: 015002, IOP, December 2019 (article)

Abstract
Magnetic imaging of superconductors typically requires a soft-magnetic material placed on top of the superconductor to probe local magnetic fields. For reasonable results, the influence of the magnet on the superconductor has to be small. Thin YBCO films with soft-magnetic coatings are investigated using SQUID magnetometry. Detailed measurements of the magnetic moment as a function of temperature, magnetic field and time have been performed for different heterostructures. It is found that the modification of the superconducting transport in these heterostructures strongly depends on the magnetic and structural properties of the soft-magnetic material. This effect is especially pronounced for an inhomogeneous coating consisting of ferromagnetic nanoparticles.

pf mms

link (url) DOI [BibTex]


Towards Geometric Understanding of Motion

Ranjan, A.

University of Tübingen, December 2019 (phdthesis)

Abstract

The motion of the world is inherently dependent on the spatial structure of the world and its geometry. Therefore, classical optical flow methods try to model this geometry to solve for the motion. However, recent deep learning methods take a completely different approach. They try to predict optical flow by learning from labelled data. Although deep networks have shown state-of-the-art performance on classification problems in computer vision, they have not been as effective in solving optical flow. The key reason is that deep learning methods do not explicitly model the structure of the world in a neural network, and instead expect the network to learn about the structure from data. We hypothesize that it is difficult for a network to learn about motion without any constraint on the structure of the world. Therefore, we explore several approaches to explicitly model the geometry of the world and its spatial structure in deep neural networks.

The spatial structure in images can be captured by representing it at multiple scales. To represent multiple scales of images in deep neural nets, we introduce a Spatial Pyramid Network (SPyNet). Such a network can leverage global information for estimating large motions and local information for estimating small motions. We show that SPyNet significantly improves over previous optical flow networks while also being the smallest and fastest neural network for motion estimation. SPyNet achieves a 97% reduction in model parameters over previous methods and is more accurate.

The spatial structure of the world extends to people and their motion. Humans have a very well-defined structure, and this information is useful in estimating optical flow for humans. To leverage this information, we create a synthetic dataset for human optical flow using a statistical human body model and motion capture sequences. We use this dataset to train deep networks and see significant improvement in the ability of the networks to estimate human optical flow.

The structure and geometry of the world affects the motion. Therefore, learning about the structure of the scene together with the motion can benefit both problems. To facilitate this, we introduce Competitive Collaboration, where several neural networks are constrained by geometry and can jointly learn about structure and motion in the scene without any labels. To this end, we show that jointly learning single view depth prediction, camera motion, optical flow and motion segmentation using Competitive Collaboration achieves state-of-the-art results among unsupervised approaches.

Our findings provide support for our hypothesis that explicit constraints on structure and geometry of the world lead to better methods for motion estimation.
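The coarse-to-fine idea behind spatial pyramids can be illustrated with a toy example (a sketch of the general pyramid principle, not the SPyNet implementation): a large 1D shift is first estimated cheaply at a downsampled scale, then refined around that estimate at full resolution. All signals and function names below are illustrative.

```python
# Toy coarse-to-fine shift estimation (illustrative, not SPyNet itself):
# match at half resolution first, then refine near the upsampled estimate.

def downsample(signal):
    """Average adjacent pairs to halve the resolution."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def best_shift(ref, shifted, candidates):
    """Candidate s maximizing correlation between shifted[i] and ref[i - s]."""
    def score(s):
        return sum(shifted[i] * ref[i - s] for i in range(len(shifted))
                   if 0 <= i - s < len(ref))
    return max(candidates, key=score)

def pyramid_shift(a, b):
    # Coarse level: search a wide shift range cheaply at half resolution.
    coarse = best_shift(downsample(a), downsample(b), range(-8, 9))
    # Fine level: refine around the upsampled coarse estimate (2x scale).
    guess = 2 * coarse
    return best_shift(a, b, range(guess - 2, guess + 3))

# A bump shifted right by 11 samples: a large motion for a local search,
# but easy once the coarse level narrows the range.
a = [0.0] * 32
for i in range(10, 14):
    a[i] = 1.0
b = [0.0] * 32
for i in range(21, 25):
    b[i] = 1.0
print(pyramid_shift(a, b))  # → 11
```

The fine-level search window stays small regardless of the true displacement, which is the efficiency argument for pyramids.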

ps

PhD Thesis [BibTex]


HPLC of monolayer-protected Gold clusters with baseline separation

Knoppe, S., Vogt, P.

Analytical Chemistry, 91, pages: 1603, December 2019 (article)

Abstract
The properties of ultrasmall metal nanoparticles (ca. 10–200 metal atoms), or monolayer-protected metal clusters (MPCs), drastically depend on their atomic structure. For systematic characterization and application, assessment of their purity is of high importance. Currently, the gold standard for purity control of MPCs is mass spectrometry (MS). Mass spectrometry, however, cannot always detect small impurities; MS of certain clusters, for example, ESI-TOF of Au40(SR)24, is not successful at all. Here we present a simple reversed-phase HPLC method for purity control of a series of small alkanethiolate-protected gold clusters. The method allows the detection of small impurities with high sensitivity. A linear correlation between the alkyl chain length of Au25(SC_nH_(2n+1))18 clusters (n = 6, 8, 10, 12) and their retention time was observed.

pf

link (url) DOI [BibTex]


Semi-supervised learning, causality, and the conditional cluster assumption

von Kügelgen, J., Mey, A., Loog, M., Schölkopf, B.

Advances in Neural Information Processing Systems 32, Curran Associates, Inc., Neural Information Processing Systems 2019 - Workshop Do the right thing: machine learning and causal inference for improved decision making, December 2019 (conference)

ei

Poster PDF link (url) [BibTex]


Optimal experimental design via Bayesian optimization: active causal structure learning for Gaussian process networks

von Kügelgen, J., Rubenstein, P. K., Schölkopf, B., Weller, A.

NeurIPS 2019 Workshop Do the right thing: machine learning and causal inference for improved decision making, NeurIPS, NeurIPS 2019 Workshop Do the right thing: machine learning and causal inference for improved decision making, December 2019 (conference)

ei

arXiv Poster link (url) [BibTex]


Selecting causal brain features with a single conditional independence test per feature

Mastakouri, A., Schölkopf, B., Janzing, D.

Advances in Neural Information Processing Systems 32, pages: 12532-12543, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


Practical and Consistent Estimation of f-Divergences

Rubenstein, P. K., Bousquet, O., Djolonga, J., Riquelme, C., Tolstikhin, I.

Advances in Neural Information Processing Systems 32, pages: 4072-4082, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


Controlling Heterogeneous Stochastic Growth Processes on Lattices with Limited Resources

Haksar, R., Solowjow, F., Trimpe, S., Schwager, M.

In Proceedings of the 58th IEEE International Conference on Decision and Control (CDC), pages: 1315-1322, 58th IEEE International Conference on Decision and Control (CDC), December 2019 (conference)

ics

PDF [BibTex]


Invert to Learn to Invert

Putzky, P., Welling, M.

Advances in Neural Information Processing Systems 32, pages: 444-454, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


On the Fairness of Disentangled Representations

Locatello, F., Abbati, G., Rainforth, T., Bauer, S., Schölkopf, B., Bachem, O.

Advances in Neural Information Processing Systems 32, pages: 14584-14597, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


Limitations of the empirical Fisher approximation for natural gradient descent

Kunstner, F., Hennig, P., Balles, L.

Advances in Neural Information Processing Systems 32, pages: 4158-4169, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei pn

link (url) [BibTex]


A Model to Search for Synthesizable Molecules

Bradshaw, J., Paige, B., Kusner, M. J., Segler, M., Hernández-Lobato, J. M.

Advances in Neural Information Processing Systems 32, pages: 7935-7947, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


Hierarchical Task-Parameterized Learning from Demonstration for Collaborative Object Movement

Hu, S., Kuchenbecker, K. J.

Applied Bionics and Biomechanics, (9765383), December 2019 (article)

Abstract
Learning from demonstration (LfD) enables a robot to emulate natural human movement instead of merely executing preprogrammed behaviors. This article presents a hierarchical LfD structure of task-parameterized models for object movement tasks, which are ubiquitous in everyday life and could benefit from robotic support. Our approach uses the task-parameterized Gaussian mixture model (TP-GMM) algorithm to encode sets of demonstrations in separate models that each correspond to a different task situation. The robot then maximizes its expected performance in a new situation by either selecting a good existing model or requesting new demonstrations. Compared to a standard implementation that encodes all demonstrations together for all test situations, the proposed approach offers four advantages. First, a simply defined distance function can be used to estimate test performance by calculating the similarity between a test situation and the existing models. Second, the proposed approach can improve generalization, e.g., better satisfying the demonstrated task constraints and speeding up task execution. Third, because the hierarchical structure encodes each demonstrated situation individually, a wider range of task situations can be modeled in the same framework without deteriorating performance. Last, adding or removing demonstrations incurs low computational load, and thus, the robot’s skill library can be built incrementally. We first instantiate the proposed approach in a simulated task to validate these advantages. We then show that the advantages transfer to real hardware for a task where naive participants collaborated with a Willow Garage PR2 robot to move a handheld object. For most tested scenarios, our hierarchical method achieved significantly better task performance and subjective ratings than both a passive model with only gravity compensation and a single TP-GMM encoding all demonstrations.
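The hierarchical selection step described above (reuse the closest existing situation model or request new demonstrations) can be sketched as follows. This is a minimal illustration of that logic, not the article's implementation; the library contents, distance function, and threshold are hypothetical.

```python
import math

# Hypothetical model library keyed by the task situation each model was
# demonstrated in. Situations are simple parameter vectors here.

def situation_distance(p, q):
    """Euclidean distance between two task-parameter vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def select_model(library, test_situation, threshold=0.5):
    """Return the key of the most similar stored model,
    or None to signal that new demonstrations are needed."""
    if not library:
        return None
    key, dist = min(((k, situation_distance(s, test_situation))
                     for k, s in library.items()), key=lambda kv: kv[1])
    return key if dist <= threshold else None

# Illustrative situations, e.g. (object x, object y, obstacle height).
library = {"model_A": (0.0, 0.0, 0.1), "model_B": (1.0, 0.5, 0.3)}
print(select_model(library, (0.9, 0.6, 0.3)))   # → model_B (close to a stored situation)
print(select_model(library, (5.0, 5.0, 2.0)))   # → None (request new demonstrations)
```

Because each situation is encoded separately, adding a new model is just a dictionary insert, which mirrors the low incremental cost claimed in the abstract.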

hi

DOI [BibTex]


Kernel Stein Tests for Multiple Model Comparison

Lim, J. N., Yamada, M., Schölkopf, B., Jitkrittum, W.

Advances in Neural Information Processing Systems 32, pages: 2240-2250, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset

Gondal, M. W., Wuthrich, M., Miladinovic, D., Locatello, F., Breidt, M., Volchkov, V., Akpo, J., Bachem, O., Schölkopf, B., Bauer, S.

Advances in Neural Information Processing Systems 32, pages: 15714-15725, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

am ei sf

link (url) [BibTex]


Convergence Guarantees for Adaptive Bayesian Quadrature Methods

Kanagawa, M., Hennig, P.

Advances in Neural Information Processing Systems 32, pages: 6234-6245, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei pn

link (url) [BibTex]


Robot Learning for Muscular Systems

Büchler, D.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

ei

[BibTex]


Are Disentangled Representations Helpful for Abstract Visual Reasoning?

van Steenkiste, S., Locatello, F., Schmidhuber, J., Bachem, O.

Advances in Neural Information Processing Systems 32, pages: 14222-14235, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


Real Time Probabilistic Models for Robot Trajectories

Gomez-Gonzalez, S.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

ei

[BibTex]


Life Improvement Science: A Manifesto

Lieder, F.

December 2019 (article) In revision

Abstract
Rapid technological advances present unprecedented opportunities for helping people thrive. This manifesto presents a road map for establishing a solid scientific foundation upon which those opportunities can be realized. It highlights fundamental open questions about the cognitive underpinnings of effective living and how they can be improved, supported, and augmented. These questions are at the core of my proposal for a new transdisciplinary research area called life improvement science. Recent advances have made these questions amenable to scientific rigor, and emerging approaches are paving the way towards practical strategies, clever interventions, and (intelligent) apps for empowering people to reach unprecedented levels of personal effectiveness and wellbeing.

re

Life improvement science: a manifesto DOI [BibTex]


Perceiving the arrow of time in autoregressive motion

Meding, K., Janzing, D., Schölkopf, B., Wichmann, F. A.

Advances in Neural Information Processing Systems 32, pages: 2303-2314, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


Stochastic Frank-Wolfe for Composite Convex Minimization

Locatello, F., Yurtsever, A., Fercoq, O., Cevher, V.

Advances in Neural Information Processing Systems 32, pages: 14246-14256, (Editors: H. Wallach and H. Larochelle and A. Beygelzimer and F. d’Alché-Buc and E. Fox and R. Garnett), Curran Associates, Inc., 33rd Annual Conference on Neural Information Processing Systems, December 2019 (conference)

ei

link (url) [BibTex]


Flex-Convolution

Groh*, F., Wieschollek*, P., Lensch, H. P. A.

Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, 11361, pages: 105-122, Lecture Notes in Computer Science, (Editors: Jawahar, C. V. and Li, Hongdong and Mori, Greg and Schindler, Konrad), Springer International Publishing, December 2019, *equal contribution (conference)

ei

DOI [BibTex]


Experience Reuse with Probabilistic Movement Primitives

Stark, S., Peters, J., Rueckert, E.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1210-1217, IEEE, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 2019 (conference)

ei

DOI [BibTex]


Learning to Explore in Motion and Interaction Tasks

Bogdanovic, M., Righetti, L.

Proceedings 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 2686-2692, IEEE, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 2019, ISSN: 2153-0866 (conference)

Abstract
Model-free reinforcement learning suffers from the high sampling complexity inherent to robotic manipulation and locomotion tasks. Most successful approaches use random sampling strategies, which lead to slow policy convergence. In this paper we present a novel approach for efficient exploration that leverages previously learned tasks. We exploit the fact that the same system is used across many tasks and build a generative model for exploration based on data from previously solved tasks, improving the learning of new tasks. The approach also enables continuous learning of improved exploration strategies as novel tasks are learned. Extensive simulations on a robot manipulator performing a variety of motion and contact interaction tasks demonstrate the capabilities of the approach. In particular, our experiments suggest that the exploration strategy can more than double learning speed, especially when rewards are sparse. Moreover, the algorithm is robust to task variations and parameter tuning, making it beneficial for complex robotic problems.
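The core idea of exploration reuse can be sketched in a few lines (an illustration of the general principle, not the paper's algorithm): fit a simple distribution to parameters that solved previous tasks and sample new candidates from it instead of uniformly. The task, parameter values, and success band below are hypothetical.

```python
import random

random.seed(3)

# Hypothetical parameters that solved earlier, related tasks.
past_solutions = [0.68, 0.72, 0.65, 0.74, 0.70, 0.69]
mu = sum(past_solutions) / len(past_solutions)
var = sum((p - mu) ** 2 for p in past_solutions) / len(past_solutions)
sigma = max(var ** 0.5, 1e-3)

def solves_new_task(p):
    # Hypothetical new task: solved when the parameter lands in a narrow band.
    return 0.6 <= p <= 0.8

n = 1000
# Informed exploration: sample near previous solutions (inflated spread).
informed = sum(solves_new_task(random.gauss(mu, 3 * sigma)) for _ in range(n))
# Baseline: uniform random sampling over the whole parameter range.
uniform = sum(solves_new_task(random.uniform(0.0, 1.0)) for _ in range(n))
print(informed, uniform)  # informed sampling hits the solution band far more often
```

The gap between the two hit counts is the toy analogue of the learning-speed improvement reported in the abstract.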

mg

DOI [BibTex]


Improving Local Trajectory Optimisation using Probabilistic Movement Primitives

Shyam, R. A., Lightbody, P., Das, G., Liu, P., Gomez-Gonzalez, S., Neumann, G.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 2666-2671, IEEE, International Conference on Intelligent Robots and Systems 2019 (IROS), November 2019 (conference)

ei

DOI [BibTex]


Attacking Optical Flow

Ranjan, A., Janai, J., Geiger, A., Black, M. J.

In Proceedings International Conference on Computer Vision (ICCV), pages: 2404-2413, IEEE, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), November 2019, ISSN: 2380-7504 (inproceedings)

Abstract
Deep neural nets achieve state-of-the-art performance on the problem of optical flow estimation. Since optical flow is used in several safety-critical applications like self-driving cars, it is important to gain insights into the robustness of those techniques. Recently, it has been shown that adversarial attacks easily fool deep neural networks to misclassify objects. The robustness of optical flow networks to adversarial attacks, however, has not been studied so far. In this paper, we extend adversarial patch attacks to optical flow networks and show that such attacks can compromise their performance. We show that corrupting a small patch of less than 1% of the image size can significantly affect optical flow estimates. Our attacks lead to noisy flow estimates that extend significantly beyond the region of the attack, in many cases even completely erasing the motion of objects in the scene. While networks using an encoder-decoder architecture are very sensitive to these attacks, we found that networks using a spatial pyramid architecture are less affected. We analyse the success and failure of attacking both architectures by visualizing their feature maps and comparing them to classical optical flow techniques which are robust to these attacks. We also demonstrate that such attacks are practical by placing a printed pattern into real scenes.

avg ps

Video Project Page Paper Supplementary Material link (url) DOI [BibTex]


Acoustic hologram enhanced phased arrays for ultrasonic particle manipulation

Cox, L., Melde, K., Croxford, A., Fischer, P., Drinkwater, B.

Phys. Rev. Applied, 12, pages: 064055, November 2019 (article)

Abstract
The ability to shape ultrasound fields is important for particle manipulation, medical therapeutics and imaging applications. If the amplitude and/or phase is spatially varied across the wavefront then it is possible to project ‘acoustic images’. When attempting to form an arbitrary desired static sound field, acoustic holograms are superior to phased arrays due to their significantly higher phase fidelity. However, they lack the dynamic flexibility of phased arrays. Here, we demonstrate how to combine the high-fidelity advantages of acoustic holograms with the dynamic control of phased arrays in the ultrasonic frequency range. Holograms are used with a 64-element phased array, driven with continuous excitation. Moving the position of the projected hologram via phase delays which steer the output beam is demonstrated experimentally. This allows the creation of a much more tightly focused point than with the phased array alone, whilst still being reconfigurable. It also allows the complex movement at a water-air interface of a “phase surfer” along a phase track or the manipulation of a more arbitrarily shaped particle via amplitude traps. Furthermore, a particle manipulation device with two emitters and a single split hologram is demonstrated that allows the positioning of a “phase surfer” along a 1D axis. This paper opens the door for new applications with complex manipulation of ultrasound whilst minimising the complexity and cost of the apparatus.
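The phase-delay steering mentioned above follows from standard phased-array geometry. The sketch below is generic back-of-envelope acoustics, not the authors' 64-element setup; the drive frequency, element pitch, and focus point are assumed values.

```python
import math

# Generic phased-array focusing sketch (assumed parameters, not the paper's).
SPEED_OF_SOUND = 1480.0   # m/s in water (approximate)
FREQ = 2.0e6              # Hz, assumed ultrasonic drive frequency
WAVELENGTH = SPEED_OF_SOUND / FREQ
K = 2 * math.pi / WAVELENGTH  # wavenumber

def focus_phases(element_xs, focus):
    """Per-element phase delays so all contributions arrive in phase at `focus`.

    Elements lie on the x-axis; `focus` is an (x, z) point in the medium.
    """
    fx, fz = focus
    dists = [math.hypot(x - fx, fz) for x in element_xs]
    ref = max(dists)  # delay each element relative to the farthest one
    return [(K * (ref - d)) % (2 * math.pi) for d in dists]

# 8-element line array at half-wavelength pitch, focusing on an off-axis point
# 20 mm away. Shifting `focus` re-steers the beam, which is the mechanism
# used to move the projected hologram.
xs = [i * WAVELENGTH / 2 for i in range(8)]
phases = focus_phases(xs, (0.004, 0.020))
print([round(p, 2) for p in phases])
```

Changing only the phase vector (no hardware change) relocates the focus, which is why combining a static hologram with array phase delays yields a reconfigurable high-fidelity field.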

pf

link (url) DOI [BibTex]


Decoding subcategories of human bodies from both body- and face-responsive cortical regions

Foster, C., Zhao, M., Romero, J., Black, M. J., Mohler, B. J., Bartels, A., Bülthoff, I.

NeuroImage, 202(15):116085, November 2019 (article)

Abstract
Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.
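The decoding analyses above follow the general multivoxel pattern analysis (MVPA) recipe: train a classifier on voxel response patterns and test on held-out trials. The sketch below shows that recipe with a nearest-centroid classifier on synthetic data; it is not the study's pipeline, and all data here are simulated.

```python
import random

# Schematic MVPA decoding on synthetic "voxel patterns" (not the study's data).
random.seed(0)
N_VOXELS = 50

def make_trials(mean_shift, n=20):
    """Noisy voxel patterns around a class-specific mean response."""
    return [[mean_shift + random.gauss(0, 1.0) for _ in range(N_VOXELS)]
            for _ in range(n)]

trials = [(p, "male") for p in make_trials(0.5)] + \
         [(p, "female") for p in make_trials(-0.5)]

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

def classify(pattern, centroids):
    def d2(label):
        return sum((x - y) ** 2 for x, y in zip(pattern, centroids[label]))
    return min(centroids, key=d2)

# Leave-one-out cross-validation: fit centroids without the held-out trial.
correct = 0
for i, (pattern, label) in enumerate(trials):
    train = trials[:i] + trials[i + 1:]
    cents = {lab: centroid([p for p, l in train if l == lab])
             for lab in ("male", "female")}
    correct += classify(pattern, cents) == label
accuracy = correct / len(trials)
print(accuracy)  # well above the 0.5 chance level for this separable data
```

Above-chance cross-validated accuracy is the evidential standard behind statements like "the sex of bodies can be decoded from body-responsive areas".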

ps

paper pdf DOI [BibTex]


A Learnable Safety Measure

Heim, S., Rohr, A. V., Trimpe, S., Badri-Spröwitz, A.

Conference on Robot Learning, November 2019 (conference) Accepted

dlg ics

Arxiv [BibTex]


Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction

Trick, S., Koert, D., Peters, J., Rothkopf, C. A.

International Conference on Intelligent Robots and Systems (IROS), pages: 7009-7016, IEEE, November 2019 (conference)

ei

DOI [BibTex]


Deep Neural Network Approach in Electrical Impedance Tomography-Based Real-Time Soft Tactile Sensor

Park, H., Lee, H., Park, K., Mo, S., Kim, J.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 7447-7452, Macau, China, November 2019 (inproceedings)

Abstract
Recently, whole-body tactile sensing has emerged in robotics for safe human-robot interaction. A key issue in whole-body tactile sensing is ensuring large-area manufacturability and high durability. To fulfill these requirements, a reconstruction method called electrical impedance tomography (EIT) was adopted in large-area tactile sensing. This method maps voltage measurements to a conductivity distribution using only a small number of measurement electrodes. A common approach for the mapping is a linearized model derived from Maxwell's equations. This linearized model offers fast computation and moderate robustness against measurement noise, but its reconstruction accuracy is limited. In this paper, we propose a novel nonlinear EIT algorithm based on a Deep Neural Network (DNN) to improve the reconstruction accuracy of EIT-based tactile sensors. The network architecture with rectified linear unit (ReLU) activations ensures extremely low computation time (0.002 seconds), and its nonlinear structure provides superior measurement accuracy. The DNN model was trained with a dataset synthesized in a simulation environment. To achieve robustness against measurement noise, training proceeded with additive Gaussian noise estimated from actual measurement noise. For real sensor application, the trained DNN model was transferred to a conductive fabric-based soft tactile sensor. For validation, the reconstruction error and noise robustness of the conventional linearized model and the proposed approach were compared in a simulation environment. As a demonstration, the tactile sensor equipped with the trained DNN model is used for contact force estimation.
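The shape of the voltage-to-conductivity mapping can be sketched as a small ReLU MLP. This is a structure-only illustration with random, untrained weights; the layer sizes and electrode/pixel counts are assumptions, not the paper's architecture.

```python
import random

# Shape-only sketch of a DNN-based EIT mapping (untrained, assumed sizes).
random.seed(1)
N_MEASUREMENTS = 32   # boundary voltage readings from electrode pairs
N_PIXELS = 64         # 8x8 coarse conductivity grid

def dense(n_in, n_out):
    """Randomly initialized fully connected layer: (weights, biases)."""
    w = [[random.gauss(0, n_in ** -0.5) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def forward(x, layers):
    """Forward pass with ReLU on hidden layers, linear output layer."""
    for i, (w, b) in enumerate(layers):
        x = [sum(wij * xj for wij, xj in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]
    return x

layers = [dense(N_MEASUREMENTS, 128), dense(128, 128), dense(128, N_PIXELS)]
voltages = [random.gauss(0, 1) for _ in range(N_MEASUREMENTS)]
conductivity = forward(voltages, layers)
print(len(conductivity))  # → 64
```

Unlike the linearized model, such a network can represent a nonlinear inverse map, which is the source of the accuracy gain the abstract claims.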

hi

DOI [BibTex]


Deep Lagrangian Networks for end-to-end learning of energy-based control for under-actuated systems

Lutter, M., Listmann, K., Peters, J.

International Conference on Intelligent Robots and Systems (IROS), pages: 7718-7725, IEEE, November 2019 (conference)

ei

DOI [BibTex]


AirCap – Aerial Outdoor Motion Capture

Ahmad, A., Price, E., Tallamraju, R., Saini, N., Lawless, G., Ludwig, R., Martinovic, I., Bülthoff, H. H., Black, M. J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Workshop on Aerial Swarms, November 2019 (misc)

Abstract
This paper presents an overview of the Grassroots project Aerial Outdoor Motion Capture (AirCap) running at the Max Planck Institute for Intelligent Systems. AirCap's goal is to achieve markerless, unconstrained, human motion capture (mocap) in unknown and unstructured outdoor environments. To that end, we have developed an autonomous flying motion capture system using a team of micro aerial vehicles (MAVs) with only on-board, monocular RGB cameras. We have conducted several real-robot experiments involving up to 3 aerial vehicles autonomously tracking and following a person in several challenging scenarios using our approach of active cooperative perception developed in AirCap. Using the images captured by these robots during the experiments, we have demonstrated successful offline body pose and shape estimation with sufficiently high accuracy. Overall, we have demonstrated the first fully autonomous flying motion capture system involving multiple robots for outdoor scenarios.

ps

Talk slides Project Page Project Page [BibTex]


Fast Feedback Control over Multi-hop Wireless Networks with Mode Changes and Stability Guarantees

Baumann, D., Mager, F., Jacob, R., Thiele, L., Zimmerling, M., Trimpe, S.

ACM Transactions on Cyber-Physical Systems, 4(2):18, November 2019 (article)

ics

arXiv PDF DOI [BibTex]


Reinforcement Learning of Trajectory Distributions: Applications in Assisted Teleoperation and Motion Planning

Ewerton, M., Guilherme, M., Koert, D., Kolev, Z., Takahashi, M., Peters, J.

International Conference on Intelligent Robots and Systems (IROS), pages: 4294-4300, IEEE, November 2019 (conference)

ei

DOI [BibTex]


Sampling on Networks: Estimating Eigenvector Centrality on Incomplete Networks

Ruggeri, N., De Bacco, C.

International Conference on Complex Networks and Their Applications, November 2019 (article)

Abstract
We develop a new sampling method to estimate eigenvector centrality on incomplete networks. Our goal is to estimate this global centrality measure with only a limited amount of data at our disposal. This is the case in many real-world scenarios where data collection is expensive, the network is too big for data storage capacity, or only partial information is available. The sampling algorithm is theoretically grounded by results derived from spectral approximation theory. We studied the problem on both synthetic and real data and compared the performance with traditional methods such as random walk and uniform sampling. We show that approximations obtained from such methods are not always reliable and that our algorithm, while preserving computational scalability, improves performance under different error measures.
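The quantity being estimated, eigenvector centrality, is the leading eigenvector of the adjacency matrix and is commonly computed by power iteration. The sketch below shows that computation on a toy graph (the paper's sampling scheme is not reproduced here).

```python
# Eigenvector centrality via power iteration on an adjacency list.

def eigenvector_centrality(adj, iters=100):
    n = len(adj)
    x = [1.0 / n] * n
    for _ in range(iters):
        # Iterate (A + I) x: the +x[i] shift prevents oscillation on
        # bipartite graphs without changing the leading eigenvector.
        y = [x[i] + sum(x[j] for j in adj[i]) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x

# Star plus a tail: node 0 is connected to everyone; nodes 4-5 form a chain.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}
c = eigenvector_centrality(adj)
print(max(range(len(c)), key=lambda i: c[i]))  # → 0, the hub dominates
```

The estimation problem the paper addresses is that, on an incomplete network, the sums inside the iteration can only be taken over observed neighbors, so the choice of which nodes to sample matters.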

pio

Code Preprint pdf DOI [BibTex]


Interactive Augmented Reality for Robot-Assisted Surgery

Forte, M., Kuchenbecker, K. J.

Workshop extended abstract presented as a podium presentation at the IROS Workshop on Legacy Disruptors in Applied Telerobotics, Macau, November 2019 (misc)

hi

Project Page [BibTex]



Chance-Constrained Trajectory Optimization for Non-linear Systems with Unknown Stochastic Dynamics

Celik, O., Abdulsamad, H., Peters, J.

International Conference on Intelligent Robots and Systems (IROS), pages: 6828-6833, IEEE, November 2019 (conference)

ei

DOI [BibTex]



Generalized Multiple Correlation Coefficient as a Similarity Measurement between Trajectories

Urain, J., Peters, J.

International Conference on Intelligent Robots and Systems (IROS), pages: 1363-1369, IEEE, November 2019 (conference)

ei

DOI [BibTex]



Ultracold atoms in disordered potentials: elastic scattering time in the strong scattering regime

Signoles, A., Lecoutre, B., Richard, J., Lim, L., Denechaud, V., Volchkov, V., Angelopoulou, V., Jendrzejewski, F., Aspect, A., Sanchez-Palencia, L., Josse, V.

New Journal of Physics, 21, pages: 105002, IOP Publishing and Deutsche Physikalische Gesellschaft, October 2019 (article)

sf

DOI [BibTex]



Markerless Outdoor Human Motion Capture Using Multiple Autonomous Micro Aerial Vehicles

Saini, N., Price, E., Tallamraju, R., Enficiaud, R., Ludwig, R., Martinović, I., Ahmad, A., Black, M.

Proceedings 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages: 823-832, IEEE, International Conference on Computer Vision (ICCV), October 2019 (conference)

Abstract
Capturing human motion in natural scenarios means moving motion capture out of the lab and into the wild. Typical approaches rely on fixed, calibrated, cameras and reflective markers on the body, significantly limiting the motions that can be captured. To make motion capture truly unconstrained, we describe the first fully autonomous outdoor capture system based on flying vehicles. We use multiple micro-aerial-vehicles (MAVs), each equipped with a monocular RGB camera, an IMU, and a GPS receiver module. These detect the person, optimize their position, and localize themselves approximately. We then develop a markerless motion capture method that is suitable for this challenging scenario with a distant subject, viewed from above, with approximately calibrated and moving cameras. We combine multiple state-of-the-art 2D joint detectors with a 3D human body model and a powerful prior on human pose. We jointly optimize for 3D body pose and camera pose to robustly fit the 2D measurements. To our knowledge, this is the first successful demonstration of outdoor, full-body, markerless motion capture from autonomous flying vehicles.

ps

Code Data Video Paper Manuscript DOI Project Page [BibTex]



Resolving 3D Human Pose Ambiguities with 3D Scene Constraints

Hassan, M., Choutas, V., Tzionas, D., Black, M. J.

In Proceedings International Conference on Computer Vision, pages: 2282-2292, IEEE, International Conference on Computer Vision, October 2019 (inproceedings)

Abstract
To understand and analyze human behavior, we need to capture humans moving in, and interacting with, the world. Most existing methods perform 3D human pose estimation without explicitly considering the scene. We observe however that the world constrains the body and vice-versa. To motivate this, we show that current 3D human pose estimation methods produce results that are not consistent with the 3D scene. Our key contribution is to exploit static 3D scene structure to better estimate human pose from monocular images. The method enforces Proximal Relationships with Object eXclusion and is called PROX. To test this, we collect a new dataset composed of 12 different 3D scenes and RGB sequences of 20 subjects moving in and interacting with the scenes. We represent human pose using the 3D human body model SMPL-X and extend SMPLify-X to estimate body pose using scene constraints. We make use of the 3D scene information by formulating two main constraints. The interpenetration constraint penalizes intersection between the body model and the surrounding 3D scene. The contact constraint encourages specific parts of the body to be in contact with scene surfaces if they are close enough in distance and orientation. For quantitative evaluation we capture a separate dataset with 180 RGB frames in which the ground-truth body pose is estimated using a motion-capture system. We show quantitatively that introducing scene constraints significantly reduces 3D joint error and vertex error. Our code and data are available for research at https://prox.is.tue.mpg.de.

ps

pdf poster link (url) DOI [BibTex]



Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop

Kolotouros, N., Pavlakos, G., Black, M. J., Daniilidis, K.

Proceedings International Conference on Computer Vision (ICCV), pages: 2252-2261, IEEE, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), October 2019, ISSN: 2380-7504 (conference)

Abstract
Model-based human pose estimation is currently approached through two different paradigms. Optimization-based methods fit a parametric body model to 2D observations in an iterative manner, leading to accurate image-model alignments, but are often slow and sensitive to the initialization. In contrast, regression-based methods, that use a deep network to directly estimate the model parameters from pixels, tend to provide reasonable, but not pixel accurate, results while requiring huge amounts of supervision. In this work, instead of investigating which approach is better, our key insight is that the two paradigms can form a strong collaboration. A reasonable, directly regressed estimate from the network can initialize the iterative optimization making the fitting faster and more accurate. Similarly, a pixel accurate fit from iterative optimization can act as strong supervision for the network. This is the core of our proposed approach SPIN (SMPL oPtimization IN the loop). The deep network initializes an iterative optimization routine that fits the body model to 2D joints within the training loop, and the fitted estimate is subsequently used to supervise the network. Our approach is self-improving by nature, since better network estimates can lead the optimization to better solutions, while more accurate optimization fits provide better supervision for the network. We demonstrate the effectiveness of our approach in different settings, where 3D ground truth is scarce, or not available, and we consistently outperform the state-of-the-art model-based pose estimation approaches by significant margins.

ps

pdf code project DOI [BibTex]



Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images "In the Wild"

Zuffi, S., Kanazawa, A., Berger-Wolf, T., Black, M. J.

In International Conference on Computer Vision, pages: 5358-5367, IEEE, International Conference on Computer Vision, October 2019 (inproceedings)

Abstract
We present the first method to perform automatic 3D pose, shape and texture capture of animals from images acquired in-the-wild. In particular, we focus on the problem of capturing 3D information about Grevy's zebras from a collection of images. The Grevy's zebra is one of the most endangered species in Africa, with only a few thousand individuals left. Capturing the shape and pose of these animals can provide biologists and conservationists with information about animal health and behavior. In contrast to research on human pose, shape and texture estimation, training data for endangered species is limited, the animals are in complex natural scenes with occlusion, they are naturally camouflaged, travel in herds, and look similar to each other. To overcome these challenges, we integrate the recent SMAL animal model into a network-based regression pipeline, which we train end-to-end on synthetically generated images with pose, shape, and background variation. Going beyond state-of-the-art methods for human shape and pose estimation, our method learns a shape space for zebras during training. Learning such a shape space from images using only a photometric loss is novel, and the approach can be used to learn shape in other settings with limited 3D supervision. Moreover, we couple 3D pose and shape prediction with the task of texture synthesis, obtaining a full texture map of the animal from a single image. We show that the predicted texture map allows a novel per-instance unsupervised optimization over the network features. This method, SMALST (SMAL with learned Shape and Texture) goes beyond previous work, which assumed manual keypoints and/or segmentation, to regress directly from pixels to 3D animal shape, pose and texture. Code and data are available at https://github.com/silviazuffi/smalst

ps

code pdf supmat iccv19 presentation DOI Project Page [BibTex]



EM-Fusion: Dynamic Object-Level SLAM With Probabilistic Data Association

Strecke, M., Stückler, J.

Proceedings International Conference on Computer Vision 2019 (ICCV), pages: 5864-5873, IEEE, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), October 2019 (conference)

ev

preprint Project page Code Poster DOI [BibTex]



Trunk Pitch Oscillations for Joint Load Redistribution in Humans and Humanoid Robots

Drama, Ö., Badri-Spröwitz, A.

Proceedings of 2019 IEEE-RAS 19th International Conference on Humanoid Robots, pages: 531-536, IEEE, Humanoids, October 2019 (conference)

Abstract
Creating natural-looking running gaits for humanoid robots is a complex task due to the underactuated degree of freedom in the trunk, which makes the motion planning and control difficult. The research on trunk movements in human locomotion is insufficient, and no formalism is known to transfer human motion patterns onto robots. Related work mostly focuses on the lower extremities, and simplifies the problem by stabilizing the trunk at a fixed angle. In contrast, humans display significant trunk motions that follow the natural dynamics of the gait. In this work, we use a spring-loaded inverted pendulum model with a trunk (TSLIP) together with a virtual point (VP) target to create trunk oscillations and investigate the impact of these movements. We analyze how the VP location and forward speed determine the direction and magnitude of the trunk oscillations. We show that positioning the VP below the center of mass (CoM) can explain the forward trunk pitching observed in human running. The VP below the CoM leads to a synergistic work between the hip and leg, reducing the leg loading. However, it comes at the cost of increased peak hip torque. Our results provide insights for leveraging the trunk motion to redistribute joint loads and potentially improve the energy efficiency in humanoid robots.

dlg

link (url) DOI [BibTex]
