Search-based Planning refers to planning by constructing a graph from a systematic discretization of the state and action space of a robot and then employing a heuristic search to find an optimal path from the start to the goal vertex in this graph. This paradigm works well for low-dimensional robotic systems such as mobile robots and provides rigorous guarantees on solution quality. However, when it comes to planning for higher-dimensional robotic systems such as mobile manipulators, humanoids, and ground and aerial vehicles navigating at high speed, Search-based Planning has typically been thought of as infeasible. In this talk, I will describe some of the research that my group has done to change this thinking. In particular, I will focus on two different principles. First, constructing multiple lower-dimensional abstractions of robotic systems, solutions to which can effectively guide the overall planning process using Multi-Heuristic A*, an algorithm recently developed by my group. Second, using offline pre-processing to provide *provably* constant-time online planning for repetitive planning tasks. I will present algorithmic frameworks that utilize these principles, describe their theoretical properties, and demonstrate their applications to a wide range of physical high-dimensional robotic systems.
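The mechanism behind Multi-Heuristic A* can be sketched in a few lines: one admissible "anchor" heuristic preserves quality guarantees, while additional, possibly inadmissible heuristics share a single set of path costs and take turns guiding the search out of local minima. The following toy Python sketch (a simplified round-robin variant; the function and parameter names are illustrative, not from the talk) shows the idea:

```python
import heapq

def mha_star(start, goal, neighbors, heuristics, w1=2.0):
    """Simplified round-robin Multi-Heuristic A* sketch.

    heuristics[0] is the admissible anchor; the remaining heuristics
    may be inadmissible but share the same g-values and closed set.
    """
    g = {start: 0.0}
    parent = {start: None}
    queues = [[(w1 * h(start), start)] for h in heuristics]
    closed = set()
    while any(queues):
        for q in queues:                      # round-robin over heuristics
            while q and q[0][1] in closed:    # drop stale entries
                heapq.heappop(q)
            if not q:
                continue
            _, s = heapq.heappop(q)
            if s == goal:                     # reconstruct path
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            closed.add(s)
            for t, cost in neighbors(s):
                if t in closed:
                    continue
                new_g = g[s] + cost
                if new_g < g.get(t, float("inf")):
                    g[t] = new_g
                    parent[t] = s
                    for h, qq in zip(heuristics, queues):
                        heapq.heappush(qq, (new_g + w1 * h(t), t))
    return None

# usage on an empty 5x5 grid with two heuristics
def grid_neighbors(s):
    x, y = s
    return [((x + dx, y + dy), 1.0)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

h_anchor = lambda s: abs(s[0] - 4) + abs(s[1] - 4)  # admissible Manhattan
h_greedy = lambda s: 2.0 * h_anchor(s)              # inflated guide
path = mha_star((0, 0), (4, 4), grid_neighbors, [h_anchor, h_greedy])
```

Because every queue reuses the shared g-values, an inadmissible heuristic can only reorder expansions, not corrupt path costs; the anchor queue is what the real algorithm uses to bound suboptimality.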
Wearable sensing and feedback devices are becoming increasingly ubiquitous for measuring human movement in research laboratories, medical clinics, and consumer goods. Advances in computation and miniaturization have enabled sensing for gait assessment; these technologies are then used in interventions to provide feedback that facilitates changes in gait or enhances sensory capabilities. This talk will focus on vibration as the primary method of providing feedback. I will discuss the use of vibrotactile arrays to communicate plantar foot pressure to users of lower-limb prosthetics, as a synthetic form of sensory feedback. Wearable vibrating units can also be used as a cue to retrain gait, and I will describe my preliminary work in gait retraining as a conservative treatment for knee osteoarthritis. This talk will cover the development and evaluation of these haptic devices and establish their impact within the greater context of clinical biomechanics.
During manipulation, humans adjust the amount of force applied to an object depending on friction: they exert a stronger grip for slippery surfaces and a looser grip for sticky surfaces. However, the neural mechanisms signaling friction remain unclear. To fill this gap, we recorded the responses of human tactile afferents during the onset of slip against flat surfaces of different frictions. We observed that some afferents responded to partial slip events occurring during the transition from a stuck to a slipping contact, potentially signaling the impending slip.
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image properties which justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Machine Learning, where we show (1) how to generalize image classification models to cases where no labelled visual training data is available, (2) how to generate images and image features from detailed visual descriptions, and (3) how our models focus on discriminative properties of the visible object, jointly predict a class label, and explain why the predicted label is, or is not, appropriate for the image.
Feedback-based automatic control has been a key enabling technology for many technological advances over the past 80 years. New application domains, such as autonomous cars driving on automated highways, energy distribution via smart grids, life in smart cities, and the new production paradigm Industry 4.0, do, however, require a new type of cybernetic systems and control theory that goes beyond some of the classical ideas. Starting from the concept of feedback and its significance in nature and technology, we will present in this talk some new developments and challenges in connection with the control of today's and tomorrow's intelligent systems.
Security and privacy are of growing concern in many control applications. Cyber attacks are frequently reported for a variety of industrial and infrastructure systems. For more than a decade the control community has developed techniques for designing control systems that are resilient to cyber-physical attacks. In this talk, we will review some of these results. In particular, as the cyber and physical components of networked control systems are tightly interconnected, it will be argued that traditional IT security focusing only on the cyber part does not provide appropriate solutions. Modeling the objectives and resources of the adversary together with the plant and control dynamics is shown to be essential. The consequences of common attack scenarios, such as denial-of-service, replay, and bias injection attacks, can be analyzed using the presented framework. It is also shown how to strengthen the control loops by deriving security- and privacy-aware estimation and control schemes. Applications in building automation, power networks, and automotive systems will be used to motivate and illustrate the results. The presentation is based on joint work with several students and colleagues at KTH and elsewhere.
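To make the bias-injection scenario concrete, here is a toy illustration (a simplified example of my own, not the framework from the talk) of a scalar control loop whose sensor reading is corrupted by a constant adversarial bias: the loop remains perfectly stable, so the attack is hard to detect, yet the state silently settles away from the setpoint.

```python
def simulate(bias, a=0.9, k=0.5, steps=200):
    """Scalar plant x' = a*x + u with state feedback u = -k*y, where the
    adversary corrupts the measurement seen by the controller: y = x + bias.
    The closed loop (pole a - k = 0.4) stays stable, but a constant bias
    shifts the equilibrium from 0 to -k*bias / (1 - a + k)."""
    x = 1.0
    for _ in range(steps):
        y = x + bias          # corrupted sensor value
        x = a * x - k * y
    return x

clean = simulate(bias=0.0)     # settles at the setpoint 0
attacked = simulate(bias=0.6)  # silently settles at -k*0.6/0.6 = -0.5
```

This is exactly why modeling the adversary together with the plant dynamics matters: purely cyber-side monitoring sees a healthy, converging loop, while a physics-aware analysis immediately predicts the offset.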
In the search for materials with new properties, there have been great advances in recent years aimed at the construction of mechanical systems whose behaviour is governed by structure, rather than composition. Through careful design of the material's architecture, new material properties have been demonstrated, including negative Poisson's ratio, high stiffness-to-weight ratio, and mechanical cloaking. While originally the field focused on achieving unusual (zero or negative) values for familiar mechanical parameters, more recently it has been shown that non-linearities can be exploited to further extend the design space. In this talk Prof. Katia Bertoldi will focus on kirigami-inspired metamaterials, which are produced by introducing arrays of cuts into thin sheets. First, she will demonstrate that instabilities triggered under uniaxial tension can be exploited to create complex 3D patterns and even to guide the formation of permanent folds. Second, she will show that such non-linear systems can be used to design smart and flexible skins with anisotropic frictional properties that enable a single soft actuator to propel itself. Finally, Prof. Bertoldi will focus on bistable kirigami metamaterials and show that they provide an ideal environment for the propagation of non-linear waves.
Organizers: Metin Sitti
Machine learning increasingly supports consequential decisions in domains including health, employment, and criminal justice. Consequential decision making is inherently dynamic: Individuals, their outcomes, and entire populations can change and adapt in response to classification. Traditional machine learning, however, fails to account for such dynamic effects. In this talk, I will highlight three different vignettes of dynamic decision making. The first is about how classification changes populations and how this perspective is essential to questions of fairness in machine learning. The second is about how classification incentivizes individuals to adapt strategically. The third is about how predictions are often performative, that is, they influence the very outcome they aim to predict. I will end on the contours of a theory that unifies these three settings and its connections to questions in causality, control theory, economics, and sociology.
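The performative-prediction setting described above can be illustrated with a toy model of repeated retraining (the numbers and function names here are my own illustrative choices, not material from the talk): a model is deployed, the outcome distribution shifts in response to the prediction, the model is refit on the shifted data, and the process repeats until it reaches a performatively stable fixed point.

```python
def retrain(theta, y0=1.0, eps=0.5):
    """One round of repeated risk minimization in a toy performative
    setting: deploying the prediction theta shifts the outcome
    distribution to have mean y0 + eps*theta, and refitting a
    squared-loss predictor on the shifted data returns that mean."""
    return y0 + eps * theta

theta = 0.0
history = [theta]
for _ in range(50):
    theta = retrain(theta)
    history.append(theta)
# the iterates contract toward the performatively stable point
# y0 / (1 - eps) = 2.0, where the prediction matches the outcome
# distribution it itself induces
```

The fixed point is "stable" rather than "optimal": it is where retraining stops moving, which in general differs from the prediction that would minimize risk once its own influence on the population is taken into account.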
Organizers: Metin Sitti
Recent economic, technological, and societal changes (e.g., the shift from large organizations to decentralized networks of individuals/small businesses, the #metoo movement) require organizations to adapt to the transforming nature of work by altering the way work is performed and the roles that workers play. Due to globalization and advanced communication technologies, modern organizations are also characterized by a diverse workforce that needs to be carefully managed. Therefore organizational leaders must take on the challenge of unleashing the true potential of diversity and inclusion by challenging assumptions and changing corporate cultures. Using qualitative and quantitative methods, my research explores the implications of the ongoing transformation of work for workers' identity and their perspective on time and place, and looks into the main competencies workers need to successfully adapt to the new way of working. In addition, I examine how modern organizations can promote a diverse and inclusive workplace and how they deal with one of the major barriers to the career development of professional women: sexual harassment in the workplace. Finally, I explore the role of leaders in creating a diverse and inclusive community. On a larger scale, my research aims to help leaders and organizations clarify how they can contribute to a more tolerant, diverse, and inclusive society.
Organizers: Katherine J. Kuchenbecker
The demand for safe, robust, and intelligent robotic systems is growing rapidly, given their potential to make our societies more productive and increase our welfare. To achieve this, robots are increasingly expected to operate in human-populated environments, maneuver in remote and cluttered environments, maintain and repair facilities, take care of our health, and streamline manufacturing and assembly lines. However, computational issues limit the ability of robots to plan complex motions in constrained and contact-rich environments, interact with humans safely, and exploit dynamics to gracefully maneuver, manipulate, fly, or explore the oceans. This talk will be centered around planning and decision-making algorithms for robust and agile robots operating in complex environments. In particular, Dr. Zhao will present novel computational approaches necessary to enable real-time and robust motion planning of highly dynamic bipedal locomotion over rough terrain. This planning approach revolves around robust disturbance metrics, an optimal recovery controller, and foot placement re-planning strategies. Extending this motion planning approach to generalized whole-body locomotion behaviors, he will introduce our recent progress on high-level reactive task planner synthesis for multi-contact, template-based locomotion interacting with constrained environments, and how to integrate formal methods for mission-capable locomotion. This talk will also present a robust trajectory optimization algorithm capable of handling contact uncertainties without enumerating contact modes. Dr. Zhao will end this talk with current research directions on distributed trajectory optimization and task and motion planning.
Organizers: Metin Sitti
Fernanda Bribiesca-Contreras' doctoral research investigates the form-function relationship of the wing muscles and its implications for aerial and underwater flight.
Organizers: Alexander Badri-Sprowitz
Our scientific understanding of haptic interaction is still evolving, both because what you feel greatly depends on how you move, and because engineered sensors, actuators, and algorithms typically struggle to match human capabilities. Consequently, few computer and machine interfaces provide the human operator with high-fidelity touch feedback or carefully analyze the physical signals generated during haptic interactions, limiting their usability. The crucial role of the sense of touch is also deeply appreciated by researchers working to create autonomous robots that can competently manipulate everyday objects and safely interact with humans in unstructured environments.
Providing rich and immersive physical experiences to users has become an essential component of many computer-interactive applications, and haptics plays a central role in this. However, as with other sensory modalities, modeling and rendering good haptic experiences with plausible physicality is very demanding in terms of the cost of modeling, authoring, and development. No general, widely used solutions exist yet; most designers and developers rely on in-house programs or, worse, manual coding. This talk will introduce the research conducted by the speaker to facilitate the authoring of haptic content. In particular, it will focus on algorithms for the automatic synthesis of vibrotactile effects and motion effects from audiovisual content, as well as some relevant issues in haptic perception.
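As a toy illustration of what automatic vibrotactile synthesis from audio can look like (a simplified sketch under my own assumptions, not the speaker's algorithm), one common baseline extracts the short-time amplitude envelope of the audio signal and uses it to modulate a roughly 250 Hz carrier, near the frequency of peak vibrotactile sensitivity of the skin:

```python
import math

def audio_to_vibration(samples, rate=44100, frame_ms=10, carrier_hz=250.0):
    """Map an audio signal to a vibrotactile drive signal by
    amplitude-modulating a carrier with the short-time RMS envelope."""
    frame = max(1, rate * frame_ms // 1000)
    envelope = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        envelope.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    drive = []
    for n in range(len(samples)):
        a = envelope[min(n // frame, len(envelope) - 1)]
        drive.append(a * math.sin(2 * math.pi * carrier_hz * n / rate))
    return envelope, drive

# a steady 440 Hz tone yields a roughly flat envelope near 1/sqrt(2)
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
env, drive = audio_to_vibration(tone)
```

Such envelope-following baselines preserve the rhythm and intensity of the source audio but discard its pitch, which is why perceptual tuning of the translation, one of the issues the talk addresses, matters in practice.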
Organizers: Katherine J. Kuchenbecker
In this talk, I will present the most recent advances in data-driven character animation and control using neural networks. Creating key-framed animations by hand is typically very time-consuming and requires a lot of artistic expertise and training. Recent work applying deep learning to character animation was first able to compete with or even outperform the quality achieved by professional animators for biped locomotion, and thus caused a lot of excitement in both academia and industry. Shortly after, subsequent research demonstrated its applicability to quadruped locomotion control, which has been considered one of the unsolved key challenges in character animation due to the highly complex footfall patterns of quadruped characters. Addressing the next challenges beyond character locomotion, this year at SIGGRAPH Asia we presented the Neural State Machine, an improved version of these previous systems that makes human characters naturally interact with objects and the environment, learned from motion capture data. Generally, the difficulty in such tasks lies in the complex planning of periodic and aperiodic movements that react to the scene geometry in order to precisely position and orient the character, and in adapting to variations in the type, size, and shape of such objects. We demonstrate the versatility of this framework with various scene interaction tasks, such as sitting on a chair, avoiding obstacles, opening and entering through a door, and picking up and carrying objects, all generated in real time from a single model.
The body is one of the most relevant aspects of our self, and we shape it through our eating behavior and physical activity. As a psychologist and neuroscientist, I seek to disentangle the mutual interactions between how we represent our own body, what we eat, and how much we exercise. In the talk, I will give a scoping overview of this approach and present the studies I am conducting as a guest scientist at PS.
Organizers: Ahmed Osman