Estimating Human Pose with Flowing Puppets


Conference Paper


We address the problem of upper-body human pose estimation in uncontrolled monocular video sequences, without manual initialization. Most current methods focus on isolated video frames and often fail to correctly localize arms and hands. Inferring pose over a video sequence is advantageous because poses of people in adjacent frames vary smoothly due to the nature of human and camera motion. To exploit this, previous methods have used prior knowledge about distinctive actions or generic temporal priors combined with static image likelihoods to track people in motion. Here we take a different approach based on a simple observation: Information about how a person moves from frame to frame is present in the optical flow field. We develop an approach for tracking articulated motions that "links" articulated shape models of people in adjacent frames through the dense optical flow. Key to this approach is a 2D shape model of the body that we use to compute how the body moves over time. The resulting "flowing puppets" provide a way of integrating image evidence across frames to improve pose inference. We apply our method to a challenging dataset of TV video sequences and show state-of-the-art performance.
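The core idea of "linking" a body model across frames via dense optical flow can be illustrated with a minimal sketch: given a flow field for frame t, shift each 2D joint of the puppet by the flow vector sampled at its pixel to predict its location in frame t+1. This is an illustrative simplification, not the authors' implementation; the function name, the plain-list flow representation, and the toy constant flow field are all invented for the example.

```python
def propagate_joints(joints, flow):
    """Shift each (x, y) joint by the dense-flow vector at its pixel.

    joints: list of (x, y) float coordinates in frame t.
    flow:   H x W grid (list of lists) of (dx, dy) vectors for frame t.
    Returns the predicted joint locations in frame t+1.
    """
    h = len(flow)
    w = len(flow[0])
    moved = []
    for x, y in joints:
        # Clamp to the image bounds before sampling the flow field.
        xi = min(max(int(round(x)), 0), w - 1)
        yi = min(max(int(round(y)), 0), h - 1)
        dx, dy = flow[yi][xi]
        moved.append((x + dx, y + dy))
    return moved

# Toy 4x4 flow field: every pixel moves 1 px right and 2 px down.
flow = [[(1.0, 2.0) for _ in range(4)] for _ in range(4)]
joints = [(0.0, 0.0), (3.0, 3.0)]
print(propagate_joints(joints, flow))  # [(1.0, 2.0), (4.0, 5.0)]
```

In the paper, such flow-predicted puppets let image evidence from adjacent frames be pooled when scoring a pose hypothesis; a real implementation would sample a per-frame flow field estimated by a dense optical flow algorithm rather than a constant toy grid.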

Author(s): Silvia Zuffi and Javier Romero and Cordelia Schmid and Michael J Black
Book Title: IEEE International Conference on Computer Vision (ICCV)
Pages: 3312-3319
Year: 2013

Department(s): Perceiving Systems
Research Project(s): 2D Pose from Optical Flow
Deformable Structures
Flowing Puppets
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference

DOI: 10.1109/ICCV.2013.411


@inproceedings{Zuffi:ICCV:2013,
  title = {Estimating Human Pose with Flowing Puppets},
  author = {Zuffi, Silvia and Romero, Javier and Schmid, Cordelia and Black, Michael J},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  pages = {3312--3319},
  year = {2013}
}