I am an intern in the Perceiving Systems (PS) Department. I am working on 2D and 3D multi-person pose (and shape) estimation, synthetic data generation, and multi-person optical flow estimation. Before joining PS, I received an M.Sc. in Neural Information Processing from the University of Tübingen. Prior to that, I received a B.Sc. in Cognitive Science from the University of Osnabrück.
Our paper "Learning Multi-Human Optical Flow" was accepted to IJCV!
Anurag Ranjan*, David T. Hoffmann*, Dimitrios Tzionas, Siyu Tang, Javier Romero and Michael J. Black
*Contributed equally Project Website Paper
Our paper "Learning to Train with Synthetic Humans" was accepted to GCPR!
David T. Hoffmann, Dimitrios Tzionas, Michael J. Black and Siyu Tang Project Website Paper
International Journal of Computer Vision (IJCV), January 2020 (article)
The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem. However, the training data they use does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single- and multi-person images. We then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they generalize well to real image sequences. The code, trained models, and dataset are available for research.
In German Conference on Pattern Recognition (GCPR), September 2019 (inproceedings)
Neural networks need large annotated datasets for training. However, manual annotation can be too expensive or even infeasible for certain tasks, like multi-person 2D pose estimation with severe occlusions. A remedy for this is synthetic data with perfect ground truth. Here we explore two variations of synthetic data for this challenging problem: a dataset with purely synthetic humans, and a real dataset augmented with synthetic humans. We then study which approach generalizes better to real data, as well as the influence of virtual humans on the training loss. We observe that not all synthetic samples are equally informative for training, and that the informative samples differ at each training stage. To exploit this observation, we employ an adversarial student-teacher framework: the teacher improves the student by providing the hardest samples for its current state as a challenge. Experiments show that this student-teacher framework outperforms all our baselines.
Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.