Institute Talks
  • Anuj Srivastava

Shape analysis and modeling of 2D and 3D objects has important applications in many branches of science and engineering. The general goals in shape analysis include: derivation of efficient shape metrics, computation of shape templates, representation of dominant shape variability in a shape class, and development of probability models that characterize shape variation within and across classes. While past work on shape analysis has been dominated by point representations -- finite sets of ordered or triangulated points on objects' boundaries -- the emphasis has lately shifted to continuous formulations.

The shape analysis of parametrized curves and surfaces introduces an additional shape invariance, the re-parametrization group, in addition to the standard invariances of rigid motion and global scaling. Treating re-parametrization as a tool for registering points across objects, we incorporate this group into shape analysis in the same way that orientation is handled in Procrustes analysis. For shape analysis of parametrized curves, I will describe an elastic Riemannian metric and a mathematical representation, called the square-root velocity function (SRVF), that together allow optimal registration and analysis using simple tools.
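
As a concrete illustration of the SRVF idea, the sketch below computes the square-root velocity function of a discretely sampled curve and the L2 distance between two SRVFs, which corresponds to the elastic distance before optimizing over rotations and re-parametrizations. The function names and sampling conventions are mine, not from the talk; this is a minimal sketch of the representation, not the full framework.

```python
import numpy as np

def srvf(beta, eps=1e-8):
    """Square-root velocity function of a sampled curve.

    beta : (n, d) array of points along the curve, assumed to be sampled
           at uniform parameter values on [0, 1].
    Returns q with q(t) = beta'(t) / sqrt(||beta'(t)||).
    """
    n = beta.shape[0]
    dbeta = np.gradient(beta, 1.0 / (n - 1), axis=0)   # numerical derivative
    speed = np.linalg.norm(dbeta, axis=1)
    return dbeta / np.sqrt(speed + eps)[:, None]

def l2_distance(q1, q2):
    """L2 distance between two SRVFs; under the SRVF representation the
    elastic metric on curves reduces to this simple distance (here without
    the optimization over rotation and re-parametrization)."""
    n = q1.shape[0]
    return np.sqrt(np.trapz(np.sum((q1 - q2) ** 2, axis=1), dx=1.0 / (n - 1)))

if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    circle = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]
    ellipse = np.c_[2 * np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]
    print(l2_distance(srvf(circle), srvf(ellipse)))
```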

This framework provides proper metrics, geodesics, and sample statistics of shapes. These sample statistics are further useful for statistical modeling of shapes within different shape classes. I will then describe some preliminary extensions of these ideas to the shape analysis of parametrized surfaces, and I will demonstrate the framework with applications from medical image analysis, protein structure analysis, 3D face recognition, and human activity recognition in videos.


  • Edward H. Adelson

We can modify the optical properties of surfaces by “coating” them with a micron-thin membrane supported by an elastomeric gel. Using an opaque, matte membrane, we can make reflected-light micrographs with a distinctive SEM-like appearance. These have modest magnification (e.g., 50X), but they reveal fine surface details not normally seen with an optical microscope.

The system, which we call “GelSight,” removes optical complexities such as specular reflection, albedo, and subsurface scattering, and isolates the shading information that signals 3D shape. One can then see the topography of optically challenging subjects like sandpaper, machined metal, and living human skin. In addition, one can capture 3D surface geometry through photometric stereo. This leads to a non-destructive contact-based optical profilometer that is simple, fast, and compact.
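
The photometric-stereo step mentioned above can be illustrated with the textbook Lambertian formulation: with the matte membrane providing an approximately uniform, diffuse surface, per-pixel normals follow from a least-squares solve against the known light directions. The sketch below is a generic version under that assumption, not the GelSight implementation itself, and the function name and array conventions are mine.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel surface normals and albedo from k images taken
    under k known distant light directions, assuming a Lambertian surface.

    images : (k, h, w) array of grayscale intensities
    lights : (k, 3) array of unit light-direction vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                          # (k, h*w)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)     # solves lights @ G ~ I
    albedo = np.linalg.norm(G, axis=0)                 # per-pixel albedo
    normals = G / (albedo + 1e-8)                      # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```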


  • Edward H. Adelson

Humans can easily see 3D shape from single 2D images, exploiting multiple kinds of information. This has given rise to multiple subfields (in both human vision and computer vision) devoted to the study of shape-from-shading, shape-from-texture, shape-from-contours, and so on.

The proposed algorithms for each type of shape-from-x remain specialized and fragile (in contrast with the flexibility and robustness of human vision). Recent work in graphics and psychophysics has demonstrated the importance of local orientation structure in conveying 3D shape. This information is fairly stable and reliable, even when a given shape is rendered in multiple styles (including non-photorealistic styles such as line drawings).

We have developed an exemplar-based system (which we call Shape Collage) that learns to associate image patches with corresponding 3D shape patches. We train it with synthetic images of “blobby” objects rendered in various ways, including solid texture, Phong shading, and line drawings. Given a new image, it finds the best candidate scene patches and assembles them into a coherent interpretation of the object shape.
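
The core retrieval step can be pictured as exemplar lookup: given a descriptor for an image patch, find the nearest training patches and return their associated 3D shape patches. The sketch below, with made-up descriptors and shape patches, shows only that lookup; the actual Shape Collage system also learns the patch representation and assembles candidates into a globally coherent shape, which is not shown here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical training data: descriptors of rendered image patches paired
# with the corresponding 3D shape patches (e.g. normal maps).  Random arrays
# stand in for the real training set.
train_descriptors = np.random.rand(10000, 64)
train_shape_patches = np.random.rand(10000, 16, 16, 3)

nn = NearestNeighbors(n_neighbors=5).fit(train_descriptors)

def candidate_shapes(image_patch_descriptor):
    """Return the k best-matching exemplar shape patches for one image patch."""
    _, idx = nn.kneighbors(image_patch_descriptor[None, :])
    return train_shape_patches[idx[0]]

# Example query with a random descriptor of the assumed dimensionality.
candidates = candidate_shapes(np.random.rand(64))
```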

Our system is the first that can retrieve the shape of naturalistic objects from line drawings. The same system, without modification, works for shape-from-texture and can also recover shape from shading, even with non-Lambertian surfaces. Thus disparate types of image information can be processed by a single mechanism to extract 3D shape. This is collaborative work with Forrester Cole, Phillip Isola, Fredo Durand, and William Freeman.


  • E.J. Chichilnisky

A central aspect of visual processing in the retina is the existence of nonlinear subunits within the receptive fields of retinal ganglion cells. These subunits have been implicated in visual computations such as segregation of object motion from background motion. However, relatively little is known about the spatial structure of subunits and its emergence from nonlinear interactions in the interneuron circuitry of the retina.

We used physiological measurements of functional circuitry in the isolated primate retina at single-cell resolution, combined with novel computational approaches, to explore the neural computations that produce subunits. Preliminary results suggest that these computations can be understood in terms of convergence of photoreceptor signals via specific types of interneurons to ganglion cells.
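
A common way to formalize such subunit computations is an LN-LN cascade: each subunit linearly filters its input (e.g. pooled photoreceptor signals), applies a rectifying nonlinearity, and the ganglion cell sums the subunit outputs through an output nonlinearity. The sketch below implements that generic cascade; it is not the specific model fitted in this work, and all names and nonlinearities are placeholders.

```python
import numpy as np

def subunit_response(stimulus, subunit_filters, weights=None,
                     subunit_nl=lambda x: np.maximum(x, 0.0),
                     output_nl=np.exp):
    """LN-LN cascade model of a ganglion cell with nonlinear subunits.

    stimulus        : (d,) stimulus vector (e.g. pixel contrasts in the RF)
    subunit_filters : (m, d) one linear filter per subunit
    Returns a scalar firing rate.
    """
    drive = subunit_filters @ stimulus      # linear stage of each subunit
    pooled = subunit_nl(drive)              # subunit rectification
    if weights is None:
        weights = np.ones(len(subunit_filters))
    return output_nl(weights @ pooled)      # pooling + output nonlinearity

# Example: four subunits tiling a 16-sample receptive field.
filters = np.random.randn(4, 16) * 0.1
rate = subunit_response(np.random.randn(16), filters)
```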


  • Ruth Rosenholtz

Considerable research has demonstrated that the visual representation is not equally faithful throughout the visual field; it appears to be coarser in peripheral vision, perhaps as a strategy for dealing with an information bottleneck in visual processing. In the last few years, converging evidence has suggested that in peripheral and unattended regions, the available information consists of summary statistics.

For a complex set of statistics, such a representation can provide a rich and detailed percept of many aspects of a visual scene. However, such a representation is also lossy; we would expect the inherent ambiguities and confusions to have profound implications for vision.

For example, a complex pattern, viewed peripherally, might be poorly represented by its summary statistics, leading to the degraded recognition experienced under conditions of visual crowding. Difficult visual search might occur when summary statistics cannot adequately discriminate between a target-present and a distractor-only patch of the stimuli. Certain illusory percepts might arise from valid interpretations of the available, but lossy, information. It is precisely the visual tasks on which a statistical representation has significant impact that provide the evidence for such a representation in early vision. I will summarize recent evidence, drawn from such tasks, that early vision computes summary statistics.
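
To make the search argument concrete, the toy sketch below computes a small summary-statistic vector for grayscale patches (marginal statistics plus a coarse orientation-energy histogram) and a d'-like separation between target-present and distractor-only patches; low separation would predict difficult search. This is a deliberately simple stand-in for the much richer statistic sets used in the actual models, and every function here is my own illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def summary_stats(patch):
    """Toy summary-statistic vector: marginal statistics of the patch plus
    gradient energy binned into four orientation bands."""
    gy, gx = np.gradient(patch.astype(float))
    angle = np.arctan2(gy, gx)
    mag = np.hypot(gx, gy)
    bands = np.histogram(angle, bins=4, range=(-np.pi, np.pi), weights=mag)[0]
    return np.r_[patch.mean(), patch.std(), skew(patch.ravel()),
                 kurtosis(patch.ravel()), bands / (bands.sum() + 1e-8)]

def discriminability(target_patches, distractor_patches):
    """d'-like separation between the statistic vectors of target-present
    and distractor-only patches; small values predict hard search."""
    t = np.array([summary_stats(p) for p in target_patches])
    d = np.array([summary_stats(p) for p in distractor_patches])
    pooled_sd = np.sqrt(0.5 * (t.var(axis=0) + d.var(axis=0))) + 1e-8
    return np.linalg.norm((t.mean(axis=0) - d.mean(axis=0)) / pooled_sd)
```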


  • Martin Giese

Human body movements are highly complex spatio-temporal patterns, and their control and recognition represent challenging problems for technical as well as neural systems. The talk will present an overview of recent work by our group, exploiting biologically inspired, learning-based representations for the recognition and synthesis of body motion.

The first part of the talk will present a neural theory for the visual processing of goal-directed actions, which reproduces, and in part correctly predicts, electrophysiological results from action-selective neurons in monkey cortex. In particular, we show that the same neural circuits might account for the recognition of both natural and abstract action stimuli.

The second part of the talk discusses different techniques for learning structured, online-capable synthesis models for complex body movements. One approach is based on learning kinematic primitives, exploiting anechoic demixing, and on generating such primitives with networks of canonical dynamical systems.
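
The anechoic mixing idea behind these kinematic primitives can be written as x_i(t) = sum_j w_ij * s_j(t - tau_ij): each degree of freedom is a weighted sum of shared source functions, each with its own time delay. The sketch below only runs this model in the generative direction with hand-picked primitives; learning the sources, weights, and delays (the demixing) and driving the sources with networks of canonical dynamical systems, as in the talk, is not shown.

```python
import numpy as np

def anechoic_synthesis(primitives, weights, delays, duration=2.0, dt=0.01):
    """Anechoic mixing model for kinematic primitives:
        x_i(t) = sum_j  w_ij * s_j(t - tau_ij)

    primitives : list of callables s_j(t) (the source functions)
    weights    : (n_dof, n_primitives) mixing weights w_ij
    delays     : (n_dof, n_primitives) time delays tau_ij in seconds
    Returns an (n_steps, n_dof) array of joint-angle trajectories.
    """
    t = np.arange(0.0, duration, dt)
    n_dof, n_prim = weights.shape
    traj = np.zeros((len(t), n_dof))
    for i in range(n_dof):
        for j in range(n_prim):
            traj[:, i] += weights[i, j] * primitives[j](t - delays[i, j])
    return traj

# Example: two periodic primitives driving a 3-DoF movement (illustrative only).
prims = [lambda t: np.sin(2 * np.pi * t), lambda t: np.sin(4 * np.pi * t)]
W = np.array([[1.0, 0.2], [0.5, 0.8], [0.3, 0.4]])
tau = np.array([[0.0, 0.1], [0.05, 0.0], [0.2, 0.15]])
trajectory = anechoic_synthesis(prims, W, tau)
```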

An approach for designing stable overall dynamics for such nonlinear networks is discussed. The second approach learns hierarchical models for interactive movements, combining Gaussian Process Latent Variable Models and Gaussian Process Dynamical Models, and results in animations that pass the Turing test of computer graphics. The presented work was funded by the DFG and the EC FP7 projects SEARISE, TANGO, and AMARSI.


  • Ronen Basri

Variations in lighting can have a significant effect on the appearance of an object. Modeling these variations is important for object recognition and shape reconstruction, particularly of smooth, textureless objects. The past decade has seen significant progress in handling Lambertian objects. In that context, I will present our work on using harmonic representations to model the reflectance of Lambertian objects under complex lighting configurations, and their application to photometric stereo and prior-assisted shape from shading. In addition, I will present preliminary results on handling specular objects and methods for dealing with moving objects.
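
The harmonic-representation idea rests on the result that images of a Lambertian object under arbitrary distant lighting lie close to a nine-dimensional linear subspace spanned by harmonic images built from the surface normals and albedo. The sketch below constructs those nine basis images using the standard real spherical-harmonic normalization constants and fits lighting coefficients by least squares; it is a minimal illustration of the representation, not the speaker's code, and the ordering of the first-order terms is my own convention.

```python
import numpy as np

def harmonic_images(normals, albedo):
    """First nine spherical-harmonic basis images for a Lambertian surface.

    normals : (h, w, 3) unit surface normals, albedo : (h, w).
    Any image of the surface under distant lighting is approximately a
    linear combination of these nine images."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    c = [0.282095, 0.488603, 1.092548, 0.315392, 0.546274]
    basis = np.stack([
        c[0] * np.ones_like(nx),                       # constant term
        c[1] * nx, c[1] * ny, c[1] * nz,               # first-order terms
        c[2] * nx * ny, c[2] * nx * nz, c[2] * ny * nz,
        c[3] * (3.0 * nz ** 2 - 1.0),
        c[4] * (nx ** 2 - ny ** 2),
    ], axis=-1)
    return albedo[..., None] * basis                   # (h, w, 9)

def fit_lighting(image, H):
    """Least-squares fit of nine lighting coefficients to an observed image."""
    A = H.reshape(-1, 9)
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return coeffs
```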


  • Carlos Vargas-Irwin

Dimensionality reduction applied to neural ensemble data has led to the concept of a 'neural trajectory', a low-dimensional representation of how the state of the network evolves over time. Here we present a novel neural trajectory extraction algorithm which combines spike train distance metrics (Victor and Purpura, 1996) with dimensionality reduction based on local neighborhood statistics (van der Maaten and Hinton, 2008). We apply this technique to describe and quantify the activity of neuronal ensembles in primate ventral premotor cortex in the context of a cued reaching and grasping task with an instructed delay.
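
A minimal sketch of this kind of pipeline is given below: a dynamic-programming implementation of the Victor-Purpura spike train distance, pairwise distances between ensemble states summed over neurons, and a low-dimensional embedding of the precomputed distance matrix (here with scikit-learn's t-SNE, following van der Maaten and Hinton, 2008). How distances are combined across neurons and which embedding is used are my assumptions; the algorithm presented in the talk may differ in both respects.

```python
import numpy as np
from sklearn.manifold import TSNE

def victor_purpura(spikes_a, spikes_b, q=1.0):
    """Victor-Purpura distance: minimal cost of transforming one spike train
    into the other, where adding/deleting a spike costs 1 and shifting a
    spike by dt costs q*|dt|."""
    na, nb = len(spikes_a), len(spikes_b)
    D = np.zeros((na + 1, nb + 1))
    D[:, 0] = np.arange(na + 1)
    D[0, :] = np.arange(nb + 1)
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            D[i, j] = min(D[i - 1, j] + 1.0,
                          D[i, j - 1] + 1.0,
                          D[i - 1, j - 1] + q * abs(spikes_a[i - 1] - spikes_b[j - 1]))
    return D[na, nb]

def neural_trajectory(ensemble_windows, q=1.0, dim=2):
    """ensemble_windows : list of time windows, each a list of per-neuron
    spike-time arrays.  Distances between windows are Victor-Purpura
    distances summed over neurons; the precomputed distance matrix is then
    embedded into 'dim' dimensions with t-SNE."""
    n = len(ensemble_windows)
    dist = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            d = sum(victor_purpura(sa, sb, q)
                    for sa, sb in zip(ensemble_windows[a], ensemble_windows[b]))
            dist[a, b] = dist[b, a] = d
    return TSNE(n_components=dim, metric="precomputed", init="random",
                perplexity=min(30, n - 1)).fit_transform(dist)
```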


  • Martin Butz

Humans interact with their environment in a highly flexible manner. One important component for the successful control of such flexible interactions is an internal body model. To maintain a consistent internal body model, the brain appears to continuously and probabilistically integrate multiple sources of information, including the various sensory modalities as well as anticipatory, re-afferent information about current body motion. A modular, multimodal arm model (MMM) is presented.

The model represents a seven-degree-of-freedom arm in various interactive modality frames. These modality frames distinguish between proprioceptive, limb-relative orientation, head-relative orientation, and head-relative location frames. Each arm limb is represented separately, but highly interactively, in each of these modality frames. Incoming sensory and motor feedback information is continuously exchanged in a rigorous, probabilistic fashion, while a consistent overall arm model is maintained through these local interactions.
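
The abstract does not give the model's update equations, but the kind of probabilistic information exchange described here can be illustrated with precision-weighted Gaussian fusion: redundant estimates of the same limb state from different modality frames are combined in proportion to their reliabilities, so noisy or failing sensors are automatically down-weighted. The sketch below shows only that generic principle, with illustrative numbers.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Precision-weighted fusion of redundant estimates of the same quantity
    (e.g. a limb orientation signalled in several modality frames).

    means, variances : (k, d) arrays, one row per modality frame.
    Returns the fused mean and variance; high-variance (unreliable) sources
    contribute less, which is also how a sensory failure can be discounted."""
    precisions = 1.0 / np.asarray(variances)
    fused_var = 1.0 / precisions.sum(axis=0)
    fused_mean = fused_var * (precisions * np.asarray(means)).sum(axis=0)
    return fused_mean, fused_var

# Example: three frames estimate a limb orientation (illustrative values);
# the third source is off and much less reliable, so it barely moves the fusion.
mu = np.array([[0.30], [0.35], [0.90]])
var = np.array([[0.01], [0.02], [0.50]])
print(fuse_estimates(mu, var))
```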

The model is able to automatically identify sensory failures and sensory noise. Moreover, it is able to mimic the rubber hand illusion. Currently, we are endowing the model with neural representations for each modality frame so that it can play out its full potential for planning and goal-directed control.