Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we attribute to the lack of a large-scale training corpus. One recent approach proposes self-supervision based on non-rigid reconstruction; unfortunately, it fails for important cases such as highly non-rigid deformations. We first address this lack of data by introducing a novel semi-supervised strategy that obtains dense interframe correspondences from a sparse set of annotations. This way, we obtain a large dataset of 400 scenes, over 390,000 RGB-D frames, and 2,537 densely aligned frame pairs; in addition, we provide a test set along with several metrics for evaluation. Based on this corpus, we introduce a data-driven non-rigid feature matching approach, which we integrate into an optimization-based reconstruction pipeline. Here, we propose a new neural network that operates on RGB-D frames, maintains robustness under large non-rigid deformations, and produces accurate predictions. Our approach significantly outperforms both existing non-rigid reconstruction methods that do not use learned data terms and learning-based approaches that rely only on self-supervision.
Organizers: Vassilis Choutas
How can we tell that a video is playing backwards? People's motions look wrong when a video is played in reverse; can we develop an algorithm to distinguish forward from backward video? Similarly, can we tell if a video is sped up? We have developed algorithms to distinguish forward from backward video, and fast from slow. Training algorithms for these tasks provides a self-supervised objective that facilitates human activity recognition. We'll show these results, along with applications of these unsupervised video learning tasks. We also present a method to retime people in videos, manipulating and editing the time over which the motions of individuals occur. Our model not only disentangles the motions of each person in the video, but also correlates each person with the scene changes they generate, and thus re-times the corresponding shadows, reflections, and motion of loose clothing appropriately.
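The self-supervision here comes for free: temporally reversing a clip yields a "backward" example with a known label, so no human annotation is needed. Below is a minimal sketch of that label-generation step; the toy arrays and the function name are invented for illustration and are not the authors' actual pipeline:

```python
import numpy as np

def make_arrow_of_time_batch(clips, rng):
    """Turn unlabeled clips into a labeled training batch:
    each clip is either kept in its original order (label 1,
    'forward') or reversed along the time axis (label 0,
    'backward'). No human annotation is required."""
    xs, ys = [], []
    for clip in clips:
        if rng.random() < 0.5:
            xs.append(clip)        # original temporal order
            ys.append(1)
        else:
            xs.append(clip[::-1])  # time axis reversed
            ys.append(0)
    return xs, np.array(ys)

# Toy "videos": 16 clips of 8 frames, 4x4 pixels each.
rng = np.random.default_rng(0)
clips = [rng.random((8, 4, 4)) for _ in range(16)]
xs, ys = make_arrow_of_time_batch(clips, rng)
```

A classifier trained on such pairs can only succeed by picking up temporal cues such as motion asymmetries, which is what makes the task a useful pretext for activity recognition.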
Organizers: Yinghao Huang
In recent years, commodity 3D sensors have become widely available, spawning significant interest in both offline and real-time 3D reconstruction. While state-of-the-art reconstruction results from commodity RGB-D sensors are visually appealing, they are far from usable in practical computer graphics applications since they do not match the high quality of artist-modeled 3D graphics content. One of the biggest challenges in this context is that the obtained 3D scans suffer from occlusions, resulting in incomplete 3D models. In this talk, I will present a data-driven approach towards generating high-quality 3D models from commodity scan data, and the use of these geometrically complete 3D models towards semantic and texture understanding of real-world environments.
Organizers: Yinghao Huang
In this talk I will discuss the development of functional materials and their application in modulating the biological microenvironment during cellular sensing and signal transduction. First, I’ll briefly summarize the mechanical, biochemical and physicochemical material properties that influence cellular sensing and subsequent integration with the tissues at the macroscale. Controlling signal transduction at the submicron scale, however, requires careful materials engineering to address the need for minimally invasive targeting of single proteins and for providing sufficient physical stimuli for cellular signaling. I will discuss an approach to fabricate anisotropic magnetite nanodiscs (MNDs) which can be used as torque transducers for mechanosensory cells under weak, slowly varying magnetic fields (MFs). When MNDs are coupled to MFs, their magnetization transitions between a vortex and an in-plane state, leading to torques on the pN scale, sufficient to activate mechanosensitive ion channels in neuronal cell membranes. This approach opens new avenues for studies of biological mechanoreception and provides new tools for minimally invasive neuromodulation technology.
Optoacoustic imaging is increasingly attracting the attention of the biomedical research community due to its excellent spatial and temporal resolution, centimeter-scale penetration into living tissues, and versatile endogenous and exogenous optical absorption contrast. State-of-the-art implementations of multi-spectral optoacoustic tomography (MSOT) are based on multi-wavelength excitation of tissues to visualize specific molecules within opaque tissues. As a result, the technology can noninvasively deliver structural, functional, metabolic, and molecular information from living tissues. The talk covers the most recent advances pertaining to ultrafast imaging instrumentation, multi-modal combinations with optical and ultrasound methods, intelligent reconstruction algorithms, as well as smart optoacoustic contrast and sensing approaches. Our current efforts are also geared toward exploring the potential of the technique in studying multi-scale dynamics of the brain and heart, monitoring of therapies, fast tracking of cells, and targeted molecular imaging applications. MSOT further allows for handheld operation, thus offering a new level of precision for clinical diagnostics of patients in a number of indications, such as breast and skin lesions, inflammatory diseases, and cardiovascular diagnostics.
Organizers: Metin Sitti
Machine learning allows automated systems to identify structures and physical laws from measured data, which is particularly useful in areas where an analytic derivation of a model is too tedious or not possible. Research in reinforcement learning has led to impressive results and superhuman performance in well-structured tasks and games. However, to this day, data-driven models are rarely employed in the control of safety-critical systems, because the success of a controller based on these models cannot be guaranteed. Therefore, the research presented in this talk analyzes the closed-loop behavior of learning control laws by means of rigorous proofs. More specifically, we propose a control law based on Gaussian process (GP) models, which actively avoids uncertain regions of the state space and favors trajectories along the training data, where the system is well known. We show that this behavior is optimal, as it maximizes the probability of asymptotic stability. Additionally, we consider an event-triggered online learning control law, which safely explores an initially unknown system. It takes new training data only when the uncertainty in the system becomes too large. As the control law requires only a locally precise model, this novel learning strategy has high data efficiency and provides safety guarantees.
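The event-triggered idea can be sketched in a few lines: a GP model is queried for its predictive uncertainty, and a new training sample is stored only when that uncertainty exceeds a threshold. The following toy 1-D regression example is illustrative only; the kernel, hyperparameters, and threshold are invented and are not those of the work presented in the talk:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between 1-D input vectors."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

class EventTriggeredGP:
    """Minimal 1-D GP that only stores a new sample when its
    predictive uncertainty at the query exceeds a threshold
    (the 'event')."""
    def __init__(self, sigma_n=1e-3, threshold=0.3):
        self.X = np.empty(0)
        self.y = np.empty(0)
        self.sigma_n = sigma_n
        self.threshold = threshold

    def predict(self, x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if self.X.size == 0:           # prior: zero mean, unit variance
            return np.zeros(len(x)), np.ones(len(x))
        K = rbf(self.X, self.X) + self.sigma_n * np.eye(len(self.X))
        Ks = rbf(x, self.X)
        mean = Ks @ np.linalg.solve(K, self.y)
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        return mean, np.sqrt(np.maximum(var, 0.0))

    def maybe_learn(self, x, y):
        """Take the sample (x, y) only if the model is too uncertain."""
        _, std = self.predict(x)
        if std[0] > self.threshold:    # event: uncertainty too large
            self.X = np.append(self.X, x)
            self.y = np.append(self.y, y)
            return True
        return False
```

Sweeping such a model along a trajectory, the first visit to an unexplored region triggers learning, while revisits near existing training data are ignored, which is the source of the data efficiency mentioned above.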
Organizers: Sebastian Trimpe
The precise delivery of bio-functionalized matter is of great interest from both fundamental and applied viewpoints. In particular, most existing single-cell platforms are unable to achieve large-scale operation with flexibility on cells and digital manipulation towards multiplexed cell tweezers. Thus, there is an urgent need for innovative techniques to automate the handling of single cells. Recently, magnetic shuttling technology using nano-/micro-scale magnets for the manipulation of particles has made significant advances and has been used for a wide variety of single-cell manipulation tasks. Herein, we use the term “spintrophoresis” for manipulation with micro-/nano-sized spintronic devices, as opposed to “magnetophoresis” with bulk magnets. Although digital manipulation of single cells has been implemented with integrated circuits of current-carrying spintrophoretic patterns, active and passive sorting gates are required for practical application to cell analysis. Firstly, a universal micromagnet junction for passive self-navigating gates of microrobotic carriers, which delivers cells to specific sites using a remote magnetic field, is described for passive cell sorting. In the proposed concept, the nonmagnetic gap between the defined donor and acceptor micromagnets creates a crucial energy barrier that restricts particle gating. It is shown that by carefully designing the geometry of the junctions, it becomes possible to deliver multiple protein-functionalized carriers at high resolution, as well as MCF-7 and THP-1 cells from a mixture, with high fidelity, and trap them in individual apartments. Secondly, a convenient approach using multifarious transit gates is proposed for active sorting of specific cells that can pass through the local energy barriers under a time-dependent pulsed magnetic field instead of multiple current wires.
The multifarious transit gates, including return, delay, and resistance linear gates, as well as dividing, reversed, and rectifying T-junction gates, are investigated theoretically and experimentally for the programmable manipulation of microrobotic particles. The results demonstrate that a suitable angle of the 3D gating field, applied within a suitable time zone, is crucial to implement digital operations at integrated multifarious transit gates along bifurcation paths and to trap microrobotic carriers in specific apartments, paving the way for flexible on-chip arrays of multiplexed cells. Finally, I will introduce pseudo-diamagnetic spintrophoresis using negative magnetic patterns for multiplexed magnetic tweezers without biomarker labelling. Label-free single-cell manipulation, separation, and localization enables a novel platform to address biologically relevant problems in bio-MEMS/NEMS technologies.
Biological motion is fascinating in almost every aspect. Locomotion in particular plays a crucial part in the evolution of life. Structures like the bones connected by joints, soft and connective tissues, and contracting proteins in a muscle-tendon unit enable and prescribe each species' specific locomotion pattern. Most importantly, biological motion is learned autonomously, it is untethered as there is no external energy supply, and, typically for vertebrates, it is muscle-driven. This talk is focused on human motion. Digital models and biologically inspired robots are presented, built for a better understanding of biology’s complexity. Modeling musculoskeletal systems reveals that the mapping from muscle stimulations to movement dynamics is highly nonlinear and complex, which makes it difficult to control those systems with classical techniques. However, experiments on a simulated musculoskeletal model of a human arm and leg and on real biomimetic muscle-driven robots show that it is possible to learn an accurate controller despite high redundancy and nonlinearity, while retaining sample efficiency. More examples of active muscle-driven motion will be given.
Organizers: Ahmed Osman
Feedback based automatic control has been a key enabling technology for many technological advances over the past 80 years. New application domains, like autonomous cars driving on automated highways, energy distribution via smart grids, life in smart cities or the new production paradigm Industry 4.0 do, however, require a new type of cybernetic systems and control theory that goes beyond some of the classical ideas. Starting from the concept of feedback and its significance in nature and technology, we will present in this talk some new developments and challenges in connection to the control of today's and tomorrow’s intelligent systems.
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image properties which justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Machine Learning, where we show (1) how to generalize image classification models to cases when no labelled visual training data is available, (2) how to generate images and image features using detailed visual descriptions, and (3) how our models focus on discriminating properties of the visible object, jointly predict a class label, and explain why (or why not) the predicted label is chosen for the image.
During manipulation, humans adjust the amount of force applied to an object depending on friction: they exert a stronger grip for slippery surfaces and a looser grip for sticky surfaces. However, the neural mechanisms signaling friction remain unclear. To fill this gap, we recorded the responses of human tactile afferents during the onset of slip against flat surfaces of different frictions. We observed that some afferents responded to partial slip events occurring during the transition from a stuck to a slipping contact, potentially signaling impending slip.
Wearable sensing and feedback devices are becoming increasingly ubiquitous for measuring human movement in research laboratories, medical clinics, and consumer goods. Advances in computation and miniaturization have enabled sensing for gait assessment; these technologies are then used in interventions to provide feedback that facilitates changes in gait or enhances sensory capabilities. This talk will focus on vibration as the primary method of providing feedback. I will discuss the use of vibrotactile arrays to communicate plantar foot pressure to users of lower-limb prosthetics, as a synthetic form of sensory feedback. Wearable vibrating units can also be used as a cue to retrain gait, and I will describe my preliminary work in gait retraining as a conservative treatment for knee osteoarthritis. This talk will cover the development and evaluation of these haptic devices and establish their impact within the greater context of clinical biomechanics.
Search-based Planning refers to planning by constructing a graph from a systematic discretization of a robot's state and action space, and then employing a heuristic search to find an optimal path from the start to the goal vertex in this graph. This paradigm works well for low-dimensional robotic systems such as mobile robots and provides rigorous guarantees on solution quality. However, when it comes to planning for higher-dimensional robotic systems such as mobile manipulators, humanoids, and ground and aerial vehicles navigating at high speed, Search-based Planning has typically been thought of as infeasible. In this talk, I will describe some of the research that my group has done to change this thinking. In particular, I will focus on two different principles. First, constructing multiple lower-dimensional abstractions of robotic systems, whose solutions can effectively guide the overall planning process using Multi-Heuristic A*, an algorithm recently developed by my group. Second, using offline pre-processing to provide *provably* constant-time online planning for repetitive planning tasks. I will present algorithmic frameworks that utilize these principles, describe their theoretical properties, and demonstrate their applications to a wide range of physical high-dimensional robotic systems.
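To make the multi-heuristic idea concrete, here is a toy sketch, not the actual Multi-Heuristic A* algorithm (which additionally maintains an anchor search to certify bounded suboptimality): several priority queues, each ordered by a different heuristic, share one g-table and closed set, and expansion rotates round-robin among them. The 4-connected grid and the heuristics below are invented purely for illustration:

```python
import heapq

def multi_heuristic_search(grid, start, goal, heuristics):
    """Round-robin best-first search on a grid (0 = free, 1 = obstacle).
    One queue per heuristic; g-values, parents, and the closed set are
    shared, so progress made under one heuristic helps all the others."""
    g = {start: 0}
    parent = {start: None}
    queues = [[(h(start), start)] for h in heuristics]
    closed = set()
    while any(queues):
        for i in range(len(queues)):          # round-robin over queues
            while queues[i] and queues[i][0][1] in closed:
                heapq.heappop(queues[i])      # drop stale entries lazily
            if not queues[i]:
                continue
            _, s = heapq.heappop(queues[i])
            if s == goal:                     # goal expanded: build path
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            closed.add(s)
            x, y = s
            for t in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= t[0] < len(grid) and 0 <= t[1] < len(grid[0])
                        and grid[t[0]][t[1]] == 0 and t not in closed
                        and g[s] + 1 < g.get(t, float('inf'))):
                    g[t] = g[s] + 1
                    parent[t] = s
                    for j, h in enumerate(heuristics):  # share successors
                        heapq.heappush(queues[j], (g[t] + h(t), t))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
goal = (3, 3)
h0 = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])  # admissible
h1 = lambda s: 5 * h0(s)                                  # inflated, greedy
path = multi_heuristic_search(grid, (0, 0), goal, [h0, h1])
```

In the real algorithm, only the anchor queue's admissible heuristic is used to bound the returned solution cost, while the inadmissible queues are free to be aggressively greedy; this sketch keeps just the shared-queue mechanics.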