Imagine a futuristic version of Google Street View that could dial up any possible place in the world, at any possible time. Effectively, such a service would be a recording of the plenoptic function—the hypothetical function described by Adelson and Bergen that captures all light rays passing through space at all times. While the plenoptic function is completely impractical to capture in its totality, every photo ever taken represents a sample of this function. I will present recent methods we've developed to reconstruct the plenoptic function from sparse space-time samples of photos—including Street View itself, as well as tourist photos of famous landmarks. The results of this work include the ability to take a single photo and synthesize a full dawn-to-dusk timelapse video, as well as compelling 4D view synthesis capabilities where a scene can simultaneously be explored in space and time.
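For reference, the plenoptic function as formulated by Adelson and Bergen is a seven-dimensional function returning the intensity of light observed from every viewing position, in every direction, at every wavelength, at every time:

```latex
P(\theta, \phi, \lambda, t, V_x, V_y, V_z)
```

where $(V_x, V_y, V_z)$ is the viewing position, $(\theta, \phi)$ the viewing direction, $\lambda$ the wavelength, and $t$ the time. A single photograph samples this function over a small range of positions and directions at one instant.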
One of the most striking characteristics of human behavior, in contrast to that of all other animals, is its extraordinary variability across populations. Human cultural diversity is a biological oddity. More specifically, we propose that what makes humans unique is the nature of the individual ontogenetic process that results in this unparalleled cultural diversity. Hence, our central question is: How is human ontogeny adapted to cultural diversity, and how does it contribute to it? This question is critical because cultural diversity is not only our predominant mode of adaptation to local ecologies; it is also key to the construction of our cognitive architecture. The colors we see, the tones we hear, the memories we form, and the norms we adhere to are all the consequence of an interaction between our emerging cognitive system and our lived experiences. While psychologists have built careers on measuring cognitive systems, we are terrible at measuring experience, as are anthropologists, sociologists, and others: the standard methods all face insurmountable limitations. In our department, we hope to apply machine learning, deep learning, and computer vision to automatically extract developmentally important indicators of humans’ daily experience. Just as modern sequencing technologies allow us to study the human genotype at scale, applying AI methods to reliably quantify humans’ lived experience would allow us to study the human behavioral phenotype at scale and fundamentally alter the science of human behavior and its applications in education, mental health, and medicine: the phenotyping revolution.
Organizers: Timo Bolkart
In the past few years, significant progress has been made on shape modeling of the human body, face, and hands. Yet clothing shape is currently not well represented. Modeling clothing with physics-based simulation can involve tedious manual work and heavy computation. Therefore, data-driven learning approaches have emerged in the community. In this talk, I will present a stream of work that aims to learn the shape of clothed humans from captured data. It involves 3D body estimation, clothing surface registration, and clothing deformation modeling. I will conclude by outlining the current challenges and some promising research directions in this field.
Organizers: Timo Bolkart
Programming cellular devices to deliver proteins or small molecules using synthetic genetic regulation can be employed in many areas, such as biomedicine, living therapeutics, and living materials. A biological device composed of a cellular sensor coupled with a programmed protein delivery system can lead to a synthetic system that senses environmental inputs, carries out computations, and creates an output. Using this approach, we have built cellular devices that can sense environmental signals and create an output in the form of protein secretion. In this talk, I will present a self-actuated cellular protein delivery system that uses logic-gate-based and state-machine-based operations for sequential protein delivery. I will also present our recent work on synthetic genetic circuits that rely on a sense-and-respond approach, including a cellular device for whole-cell biocatalysis and another for nanomaterial templating.
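The sequential-delivery idea can be illustrated with a toy finite-state machine: each sensed signal advances the cell's internal state and triggers secretion of the next protein. All state, signal, and protein names below are purely illustrative, not the actual circuit from the talk.

```python
# Toy finite-state machine for sequential protein delivery.
# Key: (current_state, sensed_signal) -> (next_state, secreted_protein).
TRANSITIONS = {
    ("idle", "signal_1"): ("primed", "protein_A"),
    ("primed", "signal_2"): ("active", "protein_B"),
}

def step(state, signal):
    """Return (next_state, secreted_protein); unrecognized inputs
    leave the state unchanged and secrete nothing."""
    return TRANSITIONS.get((state, signal), (state, None))

state = "idle"
state, out1 = step(state, "signal_1")   # secretes protein_A
state, out2 = step(state, "signal_2")   # secretes protein_B
```

The state-machine structure is what enforces the *sequence*: signal_2 has no effect until signal_1 has been seen.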
Since the release of the Kinect, RGB-D cameras have been used in several consumer devices, including smartphones. In this talk, I will present two challenging uses of this technology. With multiple RGB-D cameras, it is possible to reconstruct a 3D scene and visualize it from any point of view. In the first part of the talk, I will show how such a scene can be streamed and rendered as a point cloud in a compelling way, and how its appearance can be improved by the use of external cinema cameras. In the second part of the talk, I will present my work on how an RGB-D camera can be used to enable real walking in virtual reality by making the user aware of surrounding obstacles. I will present a pipeline to create an occupancy map from a point cloud on the fly on a mobile phone used as a virtual-reality headset. This occupancy map can then be used to prevent the user from hitting physical obstacles while walking in the virtual scene.
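The occupancy-map idea can be sketched as follows: project the depth camera's point cloud onto the ground plane, keeping only points at obstacle-relevant heights. This is a minimal illustration, not the speaker's actual mobile pipeline; all grid parameters are illustrative.

```python
import numpy as np

def occupancy_map(points, cell=0.05, h_min=0.2, h_max=1.9, extent=5.0):
    """Project a 3D point cloud (N x 3, y-up, metres) onto a 2D
    occupancy grid on the ground plane, centred on the user."""
    # Keep points at obstacle-relevant heights (ignore floor and ceiling).
    pts = points[(points[:, 1] > h_min) & (points[:, 1] < h_max)]
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    # Map x/z coordinates to grid indices; drop points outside the extent.
    ij = np.floor((pts[:, [0, 2]] + extent) / cell).astype(int)
    ij = ij[(ij >= 0).all(axis=1) & (ij < n).all(axis=1)]
    grid[ij[:, 0], ij[:, 1]] = True
    return grid
```

Marking a cell occupied whenever any point falls into it is deliberately conservative: for collision warning, a false obstacle is cheaper than a missed one.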
Organizers: Sergi Pujades
I’ll start with a concept from 1990 that has since become popular: unsupervised learning without a teacher through two adversarial neural networks (NNs) that duel in a minimax game, where one NN minimizes the objective function maximized by the other. The first NN generates data through its output actions; the second NN predicts the data. The second NN minimizes its error, thus becoming a better predictor. But it is a zero-sum game: the first NN tries to find actions that maximize the error of the second NN. The system exhibits what I called “artificial curiosity,” because the first NN is motivated to invent actions that yield data that the second NN still finds surprising, until the data becomes familiar and eventually boring. A similar adversarial zero-sum game was used for another unsupervised method called “predictability minimization” (since 1991), where two NNs fight each other to discover a disentangled code of the incoming data, remarkably similar to codes found in biological brains. I’ll also discuss passive unsupervised learning through predictive coding of an agent’s observation stream (since 1991) to overcome the fundamental deep learning problem through data compression. I’ll offer thoughts as to why most current commercial applications don’t use unsupervised learning, and whether that will change in the future.
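The adversarial-curiosity loop can be stripped down to a toy with tabular "networks" over a discrete action set: the generator greedily picks the action whose outcome the predictor still gets most wrong, the predictor learns from each observation, and every action eventually becomes "boring." The environment, learning rate, and tabular models below are illustrative simplifications, not the original 1990 architecture.

```python
# Unknown environment: each action yields a deterministic outcome.
outcomes = {0: 0.2, 1: -0.7, 2: 0.9}

# Predictor: a per-action estimate (simplest possible "model").
pred = {a: 0.0 for a in outcomes}
lr = 0.5

# Curiosity signal: the generator's reward equals the predictor's loss
# (zero-sum), so it seeks out whatever is still surprising.
errors = {a: 1.0 for a in outcomes}

for step in range(50):
    a = max(errors, key=errors.get)      # generator: most surprising action
    y = outcomes[a]                      # observe the environment
    pred[a] += lr * (y - pred[a])        # predictor minimises its error
    errors[a] = (pred[a] - y) ** 2       # boredom: error shrinks once learned

print(all(e < 1e-6 for e in errors.values()))  # → True
```

Note how exploration emerges without any external reward: the generator's schedule of visits is driven entirely by where the predictor still fails.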
Organizers: Bernhard Schölkopf
Insect chemical ecology is a mature, long-standing field with its own journal. By contrast, insect physical ecology is much less studied, and the existing work is scattered. Drawing on work done in my group, I will highlight locomotion, both in granular materials such as sand and at the water surface, as well as sensing, in particular olfaction and flow sensing. Bio-inspired implementations in MEMS technologies will form the closing chapter.
Organizers: Metin Sitti
In an effort to improve the performance of deep neural networks in data-scarce, non-i.i.d., or unsupervised settings, much recent research has been devoted to encoding invariance under symmetry transformations into neural network architectures. We treat the neural network input and output as random variables, and consider group invariance from the perspective of probabilistic symmetry. Drawing on tools from probability and statistics, we establish a link between functional and probabilistic symmetry, and obtain functional representations of probability distributions that are invariant or equivariant under the action of a compact group. Those representations characterize the structure of neural networks that can be used to represent such distributions and yield a general program for constructing invariant stochastic or deterministic neural networks. We develop the details of the general program for exchangeable sequences and arrays, recovering a number of recent examples as special cases. This is work in collaboration with Yee Whye Teh. https://arxiv.org/abs/1901.06082
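For exchangeable (permutation-invariant) sequences, the functional representations in question take a sum-decomposition form, f(x) = ρ(Σᵢ φ(xᵢ)), recovering architectures such as Deep Sets as special cases. A minimal numerical sketch with toy weights (the layer sizes and the single-linear-layer φ and ρ are illustrative simplifications):

```python
import numpy as np

rng = np.random.default_rng(1)
W_phi = rng.normal(size=(1, 8))   # per-element embedding weights (toy values)
W_rho = rng.normal(size=(8, 1))   # post-pooling readout weights (toy values)

def invariant_net(x):
    """f(x) = rho(sum_i phi(x_i)): a sum-decomposed network that is
    invariant to permutations of the input sequence x (shape (n,))."""
    h = np.tanh(x[:, None] @ W_phi)   # phi applied to each element
    pooled = h.sum(axis=0)            # symmetric (order-independent) pooling
    return (np.tanh(pooled) @ W_rho).item()

x = np.array([0.3, -1.2, 2.0])
print(np.isclose(invariant_net(x), invariant_net(x[::-1])))
```

Because the pooling operation is symmetric, invariance holds by construction for any weights, which is exactly what makes the characterization a recipe for building such networks rather than a constraint to be learned.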
Organizers: Isabel Valera
Learning new control strategies for (possibly unknown) dynamical systems is a challenging task. Reinforcement learning algorithms typically require 'fresh' data regularly, but obtaining data safely and in sufficient quantities is a challenge on real systems. Thus, it is no surprise that most recent successes have been in domains where massive amounts of data can easily be generated in simulation (e.g., games such as Atari and Go).
The molecular connectivity between genes and proteins inside a cell bears a good degree of resemblance to complex electrical circuits. This inspires the possibility of engineering a cell like an engineered device by plugging in genetic logic circuits. This approach, loosely defined as synthetic biology, is an emerging field of bioengineering in which scientists use electrical and computer engineering principles to re-program cellular functions, with the potential to solve next-generation challenges in medicine, materials, energy, and space travel. In this talk, we discuss our efforts to create artificial and complex chemical signal-processing systems using genetic logic circuits and their application in building a technology platform for microbial robotics. We further discuss our systems-biology efforts to understand the effect of microgravity on human and bacterial cells during space travel.
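A common building block in such genetic logic circuits is an inducible promoter, whose steady-state expression is conventionally modeled with Hill kinetics. A toy sketch of a transcriptional AND gate built from two such promoters follows (all parameter values are illustrative, not fitted to any real circuit):

```python
def hill_activation(inducer, k=1.0, n=2.0, basal=0.02, vmax=1.0):
    """Steady-state expression of an inducible promoter under
    standard Hill kinetics (illustrative parameter values)."""
    return basal + (vmax - basal) * inducer**n / (k**n + inducer**n)

def and_gate(a, b):
    """Transcriptional AND gate: output is high only when both
    inducers drive their promoters (product of activations)."""
    return hill_activation(a) * hill_activation(b)
```

With these parameters, `and_gate(10.0, 10.0)` is near 1, while either input alone leaves the output near the basal leak level, mirroring the digital AND truth table in analog chemistry.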
Organizers: Metin Sitti
Neurological disorders and injuries lead to a loss of sensorimotor function in the central nervous system, which controls the musculoskeletal system. Novel systems and control methods can be employed to create neuroprostheses that restore these functions to an unprecedented degree, building on two major advances: (1) long-standing limitations of inertial motion tracking are overcome by novel parameter estimation and sensor fusion methods; (2) a recent extension of classic learning control methods facilitates real-time pattern adaptation in artificial muscle recruitment. We review the role of these methods in the development of biomimetic neuroprostheses and discuss their potential impact in a range of further applications, including autonomous vehicles, robotics, and multi-agent networks.
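As a baseline illustration of what inertial sensor fusion does, consider the classic complementary filter for a single joint angle: trust the integrated gyroscope at high frequency and the accelerometer-derived angle at low frequency. This is the textbook starting point, not the novel estimation methods from the talk; the parameter values are illustrative.

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope angular rates (rad/s) with accelerometer-derived
    angles (rad) into one drift-free, low-noise angle estimate."""
    angle = accel_angles[0]
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        # High-pass the integrated gyro, low-pass the accelerometer angle.
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return out
```

The gyro integration alone would drift without bound, and the accelerometer alone is noisy; blending the two is the simplest instance of the fusion problem the talk's methods address.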
Organizers: Sebastian Trimpe