Optoacoustic imaging is increasingly attracting the attention of the biomedical research community due to its excellent spatial and temporal resolution, centimeter-scale penetration into living tissues, and versatile endogenous and exogenous optical absorption contrast. State-of-the-art implementations of multi-spectral optoacoustic tomography (MSOT) are based on multi-wavelength excitation of tissues to visualize specific molecules within opaque tissues. As a result, the technology can noninvasively deliver structural, functional, metabolic, and molecular information from living tissues. The talk covers the most recent advances pertaining to ultrafast imaging instrumentation, multi-modal combinations with optical and ultrasound methods, and intelligent reconstruction algorithms, as well as smart optoacoustic contrast and sensing approaches. Our current efforts are also geared toward exploring the potential of the technique in studying multi-scale dynamics of the brain and heart, monitoring of therapies, fast tracking of cells, and targeted molecular imaging applications. MSOT further allows for handheld operation, thus offering a new level of precision for clinical diagnostics in a number of indications, such as breast and skin lesions, inflammatory diseases, and cardiovascular diagnostics.
Organizers: Metin Sitti
In this talk I will present an overview of our recent work on learning deep geometric models of the 3D face from large datasets of scans. Priors for the 3D face are crucial for many applications: to constrain ill-posed problems such as 3D reconstruction from monocular input, for efficient generation and animation of 3D virtual avatars, or even in medical domains such as recognition of craniofacial disorders. Generative models of the face have been widely used for this task, as have deep learning approaches that have recently emerged as a robust alternative. Barring a few exceptions, most of these data-driven approaches were built from either a relatively limited number of samples (in the case of linear models of the shape) or by synthetic data augmentation (for deep-learning-based approaches), mainly due to the difficulty of obtaining large-scale and accurate 3D scans of the face. Yet, there is a substantial amount of 3D information that can be gathered from publicly available datasets captured over the last decade. I will discuss here our work tackling the challenges of building rich geometric models out of these large and varied datasets, with the goal of modeling the facial shape, expression (i.e. motion), or geometric details. Concretely, I will talk about (1) an efficient and fully automatic approach for registration of large datasets of 3D faces in motion; (2) deep learning methods for modeling the facial geometry that can disentangle the shape and expression aspects of the face; and (3) a multi-modal learning approach for capturing geometric details from images in the wild, by simultaneously encoding both facial surface normal and natural image information.
Organizers: Jinlong Yang
Motivated by the low-voltage-driven actuation of ionic electroactive polymers (iEAPs), we recently began investigating ionic elastomers. In this talk I will discuss the preparation, physical characterization, and electric bending actuation properties of two novel ionic elastomers: ionic polymer electrolyte membranes (iPEM) and ionic liquid crystal elastomers (iLCE). Both materials can be actuated by low-frequency AC or DC voltages of less than 1 V. The bending actuation properties of the iPEMs outperform most of the well-developed iEAPs, and the first, not yet optimized iLCEs are already comparable to them. Ionic liquid crystal elastomers also exhibit superior features, such as alignment-dependent actuation, which offers the possibility of pre-programming the actuation pattern during the cross-linking process. Additionally, multiple (thermal, optical, and electric) actuation modes are possible. I will also discuss issues with compliant electrodes and possible soft robotic applications. References: Y. Bar-Cohen, Electroactive Polymer Actuators as Artificial Muscles: Reality, Potential and Challenges, SPIE Press, Bellingham, 2004; O. Kim, S. J. Kim, M. J. Park, Chem. Commun. 2018, 54, 4895; C. P. H. Rajapaksha, C. Feng, C. Piedrahita, J. Cao, V. Kaphle, B. Lüssem, T. Kyu, A. Jákli, Macromol. Rapid Commun. 2020, in press; C. Feng, C. P. H. Rajapaksha, J. M. Cedillo, C. Piedrahita, J. Cao, V. Kaphle, B. Lüssem, T. Kyu, A. I. Jákli, Macromol. Rapid Commun. 2019, 1900299.
“There’s something about the outside of a horse that is good for the inside of a man”, Churchill allegedly said. The horse’s motion has captured the interest of humans throughout history. An understanding of the mechanics of horse motion has been sought in early work by Aristotle (300 BC), in pioneering photographic studies by Muybridge (1880), as well as in modern-day scientific publications.
The horse (Equus ferus caballus) is a remarkable animal athlete with outstanding running capabilities. The efficiency of its locomotion is explained by specialised anatomical features, which limit the degrees of freedom of movement and reduce energy consumption. Theoretical mechanical models are quite well suited to describing the essence of equine gaits and provide us with simple measures for analysing gait asymmetry. Such measures are much needed, since agreement between veterinarians is moderate to poor when it comes to visual assessment of lameness.
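One common family of such asymmetry measures compares a stride parameter between left and right steps. The sketch below computes a simple percentage symmetry index; the function name, threshold-free formulation, and toy data are illustrative, not taken from the talk:

```python
import numpy as np

def symmetry_index(left, right):
    """Percentage symmetry index between mean left and right stride
    parameters (e.g. peak vertical displacement per stride).
    0 means perfect symmetry; larger magnitudes mean more asymmetry."""
    l, r = np.mean(left), np.mean(right)
    return 100.0 * (l - r) / (0.5 * (l + r))

# Toy stride data: peak vertical displacement (mm) over three strides.
left_strides = np.array([30.1, 29.8, 30.4])
right_strides = np.array([27.9, 28.3, 28.1])
print(round(symmetry_index(left_strides, right_strides), 2))  # → 6.87
```

An index like this reduces a full gait recording to a single number, which is exactly the kind of objective measure that can complement subjective visual lameness scoring.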
The human visual system indeed has clear limitations in the perception and interpretation of horse motion. This limits our ability to understand the horse: not only to detect lameness and predict performance, but also to interpret its non-verbal communication and to detect signs of illness or discomfort.
This talk will provide a brief overview of existing motion analysis techniques and models in equine biomechanics. We will discuss future possibilities to achieve more accessible, sensitive and complex ways of analysing the motion of the horse.
Traditional voice conversion methods rely on parallel recordings of multiple speakers pronouncing the same sentences. For real-world applications, however, parallel data is rarely available. We propose MelGAN-VC, a voice conversion method that relies on non-parallel speech data and is able to convert audio signals of arbitrary length from a source voice to a target voice. We first compute spectrograms from waveform data and then perform a domain translation using a Generative Adversarial Network (GAN) architecture. An additional siamese network helps preserve speech information in the translation process, without sacrificing the ability to flexibly model the style of the target speaker. We test our framework with a dataset of clean speech recordings, as well as with a collection of noisy real-world speech examples. Finally, we apply the same method to perform music style transfer, translating arbitrarily long music samples from one genre to another, and showing that our framework is flexible and can be used for audio manipulation applications beyond voice conversion.
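The front end of such a pipeline, turning arbitrary-length audio into fixed-size spectrogram slices that a GAN translator could consume, might look like the following sketch. The FFT size, hop length, and chunk width here are illustrative choices, not the paper's values:

```python
import numpy as np

def magnitude_spectrogram(wav, n_fft=512, hop=128):
    """Short-time Fourier magnitude of a 1-D waveform (Hann window)."""
    window = np.hanning(n_fft)
    frames = [wav[i:i + n_fft] * window
              for i in range(0, len(wav) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)

def split_into_chunks(spec, width=128):
    """Split a spectrogram of arbitrary length into fixed-width chunks
    (zero-padding the last one) so each can be fed to the translator."""
    pad = (-spec.shape[1]) % width
    spec = np.pad(spec, ((0, 0), (0, pad)))
    return np.split(spec, spec.shape[1] // width, axis=1)

wav = np.random.randn(16000)          # one second of audio at 16 kHz
spec = magnitude_spectrogram(wav)     # shape (257, 122)
chunks = split_into_chunks(spec)      # list of (257, 128) slices
print(spec.shape, len(chunks), chunks[0].shape)
```

Translating each fixed-size slice and concatenating the results is what lets a convolutional GAN operate on inputs of arbitrary duration.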
Machine learning allows automated systems to identify structures and physical laws based on measured data, which is particularly useful in areas where an analytic derivation of a model is too tedious or not possible. Research in reinforcement learning has led to impressive results and superhuman performance in well-structured tasks and games. However, to this day, data-driven models are rarely employed in the control of safety-critical systems, because the success of a controller based on these models cannot be guaranteed. Therefore, the research presented in this talk analyzes the closed-loop behavior of learning control laws by means of rigorous proofs. More specifically, we propose a control law based on Gaussian process (GP) models, which actively avoids uncertainties in the state space and favors trajectories along the training data, where the system is well known. We show that this behavior is optimal as it maximizes the probability of asymptotic stability. Additionally, we consider an event-triggered online learning control law, which safely explores an initially unknown system. It takes new training data only when the uncertainty in the system becomes too large. As the control law only requires a locally precise model, this novel learning strategy has a high data efficiency and provides safety guarantees.
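A minimal numerical sketch of such an event trigger uses the GP posterior variance as the uncertainty measure: a new sample is recorded only where the model is too uncertain. The kernel, length scale, and threshold below are illustrative, not the talk's actual design:

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between 1-D input arrays."""
    d = np.asarray(a)[:, None] - np.asarray(b)[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior_var(X, x_query, noise=1e-4, ls=0.5):
    """Posterior variance of a zero-mean GP with RBF kernel at x_query."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    k = rbf(X, [x_query], ls)
    return float(1.0 - k.T @ np.linalg.solve(K, k))

# Event trigger: take a new training sample only when the model's
# uncertainty at the current state exceeds a threshold.
threshold = 0.1
X_train = [0.0]                       # states observed so far
for x in np.linspace(0.0, 2.0, 21):   # states visited along a trajectory
    if gp_posterior_var(X_train, x) > threshold:
        X_train.append(x)             # triggered: record a measurement here
print(len(X_train))                   # only a few of the 21 states trigger
```

The loop records far fewer points than it visits, which is the source of the data efficiency: where the model is already locally precise, no new data is taken.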
Organizers: Sebastian Trimpe
The precise delivery of bio-functionalized matter is of great interest from both fundamental and applied viewpoints. In particular, most existing single-cell platforms are unable to achieve large-scale operation with flexibility on cells and digital manipulation towards multiplexed cell tweezers. Thus, there is an urgent need for innovative techniques to automate the handling of single cells. Recently, magnetic shuttling technology using nano-/micro-scale magnets for the manipulation of particles has advanced significantly and has been used for a wide variety of single-cell manipulation tasks. We term this “spintrophoresis”, using micro-/nano-sized spintronic devices, as opposed to “magnetophoresis” using bulk magnets. Although digital manipulation of single cells has been implemented with integrated circuits of current-carrying spintrophoretic patterns, active and passive sorting gates are required for practical cell analysis. First, a universal micromagnet junction for passive, self-navigating gates of microrobotic carriers, which deliver cells to specific sites using a remote magnetic field, is described for passive cell sorting. In the proposed concept, the nonmagnetic gap between the defined donor and acceptor micromagnets creates a crucial energy barrier that restricts particle gating. It is shown that by carefully designing the geometry of the junctions, it becomes possible to deliver multiple protein-functionalized carriers at high resolution, as well as MCF-7 and THP-1 cells from a mixture, with high fidelity, and to trap them in individual apartments. Second, a convenient approach using multifarious transit gates is proposed for active sorting of specific cells that can pass through the local energy barriers under a time-dependent pulsed magnetic field instead of multiple current wires.
The multifarious transit gates, including return, delay, and resistance linear gates, as well as dividing, reversed, and rectifying T-junction gates, are investigated theoretically and experimentally for the programmable manipulation of microrobotic particles. The results demonstrate that a suitable angle of the 3D gating field, applied in a suitable time window, is crucial to implement digital operations at integrated multifarious transit gates along bifurcation paths and trap microrobotic carriers in specific apartments, paving the way for flexible on-chip arrays of multiplexed cells. Finally, I will present pseudo-diamagnetic spintrophoresis using negative magnetic patterns for multiplexed magnetic tweezers without biomarker labelling. Label-free single-cell manipulation, separation, and localization enables a novel platform to address biologically relevant problems in bio-MEMS/NEMS technologies.
Biological motion is fascinating from almost every aspect you look at. Locomotion in particular plays a crucial part in the evolution of life. Structures like the bones connected by joints, the soft and connective tissues, and the contracting proteins in a muscle-tendon unit enable and prescribe each species' specific locomotion pattern. Most importantly, biological motion is autonomously learned, it is untethered as there is no external energy supply, and, as is typical for vertebrates, it is muscle-driven. This talk focuses on human motion. Digital models and biologically inspired robots are presented, built for a better understanding of biology’s complexity. Modeling musculoskeletal systems reveals that the mapping from muscle stimulations to movement dynamics is highly nonlinear and complex, which makes it difficult to control these systems with classical techniques. However, experiments on a simulated musculoskeletal model of a human arm and leg and on real biomimetic muscle-driven robots show that it is possible to learn an accurate controller despite high redundancy and nonlinearity, while retaining sample efficiency. More examples of active muscle-driven motion will be given.
Organizers: Ahmed Osman
Feedback based automatic control has been a key enabling technology for many technological advances over the past 80 years. New application domains, like autonomous cars driving on automated highways, energy distribution via smart grids, life in smart cities or the new production paradigm Industry 4.0 do, however, require a new type of cybernetic systems and control theory that goes beyond some of the classical ideas. Starting from the concept of feedback and its significance in nature and technology, we will present in this talk some new developments and challenges in connection to the control of today's and tomorrow’s intelligent systems.
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image properties which justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Machine Learning, where we show (1) how to generalize image classification models to cases when no labelled visual training data is available, (2) how to generate images and image features using detailed visual descriptions, and (3) how our models focus on discriminating properties of the visible object, jointly predict a class label, and explain why (or why not) the predicted label is chosen for the image.
During manipulation, humans adjust the amount of force applied to an object depending on friction: they exert a stronger grip on slippery surfaces and a looser grip on sticky surfaces. However, the neural mechanisms signaling friction remain unclear. To fill this gap, we recorded the responses of human tactile afferents during the onset of slip against flat surfaces of different friction. We observed that some afferents responded to partial slip events occurring during the transition from a stuck to a slipping contact, potentially signaling the impending slip.
Wearable sensing and feedback devices are becoming increasingly common for measuring human movement in research laboratories, medical clinics, and consumer goods. Advances in computation and miniaturization have enabled sensing for gait assessment; these technologies are then used in interventions to provide feedback that facilitates changes in gait or enhances sensory capabilities. This talk will focus on vibration as the primary method of providing feedback. I will discuss the use of vibrotactile arrays to communicate plantar foot pressure to users of lower-limb prosthetics, as a synthetic form of sensory feedback. Wearable vibrating units can also be used as a cue to retrain gait, and I will describe my preliminary work in gait retraining as a conservative treatment for knee osteoarthritis. This talk will cover the development and evaluation of these haptic devices and establish their impact within the greater context of clinical biomechanics.
Search-based Planning refers to planning by constructing a graph from systematic discretization of the state- and action-space of a robot and then employing a heuristic search to find an optimal path from the start to the goal vertex in this graph. This paradigm works well for low-dimensional robotic systems such as mobile robots and provides rigorous guarantees on solution quality. However, when it comes to planning for higher-dimensional robotic systems such as mobile manipulators, humanoids and ground and aerial vehicles navigating at high-speed, Search-based Planning has been typically thought of as infeasible. In this talk, I will describe some of the research that my group has done into changing this thinking. In particular, I will focus on two different principles. First, constructing multiple lower-dimensional abstractions of robotic systems, solutions to which can effectively guide the overall planning process using Multi-Heuristic A*, an algorithm recently developed by my group. Second, using offline pre-processing to provide a *provably* constant-time online planning for repetitive planning tasks. I will present algorithmic frameworks that utilize these principles, describe their theoretical properties, and demonstrate their applications to a wide range of physical high-dimensional robotic systems.
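The round-robin idea behind multi-heuristic search can be sketched on a toy grid. This simplified version keeps one open list per heuristic with shared g-values and a shared closed set; it omits the anchor-search machinery that gives the published Multi-Heuristic A* its suboptimality guarantees, so it is a sketch of the idea, not the algorithm itself:

```python
import heapq

def multi_heuristic_search(start, goal, neighbors, heuristics):
    """Round-robin best-first search with one open list per heuristic,
    shared g-values, and a shared closed set."""
    g, parent = {start: 0}, {start: None}
    opens = [[(h(start), start)] for h in heuristics]
    closed = set()
    while any(opens):
        for q in opens:
            while q and q[0][1] in closed:   # drop already-expanded entries
                heapq.heappop(q)
            if not q:
                continue
            _, s = heapq.heappop(q)
            if s == goal:                    # reconstruct path via parents
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            closed.add(s)
            for t, cost in neighbors(s):
                if t not in g or g[s] + cost < g[t]:
                    g[t] = g[s] + cost
                    parent[t] = s
                    for q2, h in zip(opens, heuristics):
                        heapq.heappush(q2, (g[t] + h(t), t))
    return None

# 4x4 grid, 4-connected, with a wall leaving one opening at (1, 3).
walls = {(1, 0), (1, 1), (1, 2)}
def neighbors(s):
    x, y = s
    for t in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if t not in walls and 0 <= t[0] < 4 and 0 <= t[1] < 4:
            yield t, 1

h_manhattan = lambda s: abs(s[0] - 3) + abs(s[1] - 0)  # informed heuristic
h_zero = lambda s: 0                                   # uninformed fallback
path = multi_heuristic_search((0, 0), (3, 0), neighbors,
                              [h_manhattan, h_zero])
print(len(path) - 1)  # cost of the path around the wall
```

Because every generated state is pushed into every queue, each heuristic can lead the search out of regions where the others are misleading, which is the intuition behind combining multiple inadmissible heuristics with one anchor in the full algorithm.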
Security and privacy are of growing concern in many control applications. Cyber attacks are frequently reported for a variety of industrial and infrastructure systems. For more than a decade the control community has developed techniques for designing control systems resilient to cyber-physical attacks. In this talk, we will review some of these results. In particular, as the cyber and physical components of networked control systems are tightly interconnected, it is argued that traditional IT security, focusing only on the cyber part, does not provide appropriate solutions. Modeling the objectives and resources of the adversary together with the plant and control dynamics is shown to be essential. The consequences of common attack scenarios, such as denial-of-service, replay, and bias injection attacks, can be analyzed using the presented framework. It is also shown how to strengthen the control loops by deriving security- and privacy-aware estimation and control schemes. Applications in building automation, power networks, and automotive systems will be used to motivate and illustrate the results. The presentation is based on joint work with several students and colleagues at KTH and elsewhere.
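As a toy illustration of why modeling the physical dynamics helps, consider a bias-injection attack on the sensor of a scalar plant with a residual-based detector. All numbers below (plant, gains, noise levels, threshold) are invented for illustration; real analyses typically use chi-squared tests on Kalman filter residuals:

```python
import numpy as np

# Scalar plant x+ = a*x + w with measurement y = x + v, a Luenberger
# observer, and a detector that monitors the mean of the residual.
a, L = 0.9, 0.5          # plant and observer gains (|a - L| < 1: stable)

def mean_residual(sensor_bias=0.0, steps=3000, seed=0):
    rng = np.random.default_rng(seed)
    x = xhat = 0.0
    residuals = []
    for _ in range(steps):
        x = a * x + rng.normal(0.0, 0.1)            # process noise
        y = x + rng.normal(0.0, 0.1) + sensor_bias  # bias-injection attack
        res = y - xhat                              # innovation / residual
        xhat = a * xhat + L * res                   # observer update
        residuals.append(res)
    return float(np.mean(residuals))

detect = lambda m: abs(m) > 0.025                   # detector threshold
print(detect(mean_residual(0.0)), detect(mean_residual(0.3)))
```

The observer partly absorbs the injected bias, so only a fraction of it shows up in the residual mean; quantifying exactly how much of an attack remains visible, and how much damage a stealthy attacker can do below the threshold, is the kind of question the presented framework addresses.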
In the search for materials with new properties, there have been great advances in recent years aimed at the construction of mechanical systems whose behaviour is governed by structure, rather than composition. Through careful design of the material’s architecture, new material properties have been demonstrated, including negative Poisson’s ratio, high stiffness-to-weight ratio, and mechanical cloaking. While originally the field focused on achieving unusual (zero or negative) values for familiar mechanical parameters, more recently it has been shown that non-linearities can be exploited to further extend the design space. In this talk Prof. Katia Bertoldi will focus on kirigami-inspired metamaterials, which are produced by introducing arrays of cuts into thin sheets. First, she will demonstrate that instabilities triggered under uniaxial tension can be exploited to create complex 3D patterns and even to guide the formation of permanent folds. Second, she will show that such non-linear systems can be used to design smart and flexible skins with anisotropic frictional properties that enable a single soft actuator to propel itself. Finally, Prof. Bertoldi will focus on bistable kirigami metamaterials and show that they provide an ideal environment for the propagation of non-linear waves.
Organizers: Metin Sitti