Our scientific understanding of haptic interaction is still evolving, both because what you feel greatly depends on how you move, and because engineered sensors, actuators, and algorithms typically struggle to match human capabilities. Consequently, few computer and machine interfaces provide the human operator with high-fidelity touch feedback or carefully analyze the physical signals generated during haptic interactions, limiting their usability. The crucial role of the sense of touch is also deeply appreciated by researchers working to create autonomous robots that can competently manipulate everyday objects and safely interact with humans in unstructured environments.
Recent economic, technological, and societal changes (e.g., the shift from large organizations to decentralized networks of individuals and small businesses, the #MeToo movement) require organizations to adapt to the transforming nature of work by altering the way work is performed and the roles that workers play. Due to globalization and advanced communication technologies, modern organizations are also characterized by a diverse workforce that needs to be carefully managed. Therefore, organizational leaders must take on the challenge of unleashing the true potential of diversity and inclusion by challenging assumptions and changing corporate cultures. Using qualitative and quantitative methods, my research explores the implications of the ongoing transformation of work for workers' identity and their perspectives on time and place, and looks into the main competencies workers need to successfully adapt to the new way of working. In addition, I examine how modern organizations can promote a diverse and inclusive workplace and how they deal with one of the major barriers to the career development of professional women: sexual harassment in the workplace. Finally, I explore the role of leaders in creating a diverse and inclusive community. On a larger scale, my research aims to help leaders and organizations clarify how they can contribute to a more tolerant, diverse, and inclusive society.
Organizers: Katherine J. Kuchenbecker
The demand for safe, robust, and intelligent robotic systems is growing rapidly, given their potential to make our societies more productive and increase our welfare. To achieve this, robots are increasingly expected to operate in human-populated environments, maneuver in remote and cluttered environments, maintain and repair facilities, take care of our health, and streamline manufacturing and assembly lines. However, computational issues limit the ability of robots to plan complex motions in constrained and contact-rich environments, interact with humans safely, and exploit dynamics to gracefully maneuver, manipulate, fly, or explore the oceans. This talk will center on planning and decision-making algorithms for robust and agile robots operating in complex environments. In particular, Dr. Zhao will present novel computational approaches necessary to enable real-time and robust motion planning of highly dynamic bipedal locomotion over rough terrain. This planning approach revolves around robust disturbance metrics, an optimal recovery controller, and foot placement re-planning strategies. Extending this motion planning approach to generalized whole-body locomotion behaviors, he will introduce recent progress on high-level reactive task planner synthesis for multi-contact, template-based locomotion in constrained environments and on integrating formal methods for mission-capable locomotion. The talk will also present a robust trajectory optimization algorithm capable of handling contact uncertainties without enumerating contact modes. Dr. Zhao will end the talk with current research directions on distributed trajectory optimization and task and motion planning.
Organizers: Metin Sitti
Providing rich and immersive physical experiences to users has become an essential component of many computer-interactive applications, and haptics plays a central role in this. However, as with other sensory modalities, modeling and rendering good haptic experiences with plausible physicality is a very demanding task in terms of the cost of modeling and authoring, not to mention the cost of development. No general and widely used solutions exist yet; most designers and developers rely on their own in-house programs or, even worse, manual coding. This talk will introduce the speaker's research on facilitating the authoring of haptic content. In particular, it will focus on algorithms for the automatic synthesis of vibrotactile effects and motion effects from audiovisual content, as well as some relevant issues in haptic perception.
Organizers: Katherine J. Kuchenbecker
In this talk, I will present the most recent advances in data-driven character animation and control using neural networks. Creating key-framed animations by hand is typically very time-consuming and requires a lot of artistic expertise and training. Recent work applying deep learning to character animation was first able to compete with or even outperform the quality achieved by professional animators for biped locomotion, and thus caused a lot of excitement in both academia and industry. Shortly after, follow-up research also demonstrated its applicability to quadruped locomotion control, which has been considered one of the key unsolved challenges in character animation due to the highly complex footfall patterns of quadruped characters. Addressing the next challenges beyond character locomotion, this year at SIGGRAPH Asia we presented the Neural State Machine, an improved version of such previous systems that makes human characters naturally interact with objects and the environment, learned from motion capture data. Generally, the difficulty in such tasks lies in the complex planning of periodic and aperiodic movements reacting to the scene geometry in order to precisely position and orient the character, and in adapting to variations in the type, size, and shape of such objects. We demonstrate the versatility of this framework with various scene interaction tasks, such as sitting on a chair, avoiding obstacles, opening and entering through a door, and picking up and carrying objects, all generated in real time from a single model.
The body is one of the most relevant aspects of our self, and we shape it through our eating behavior and physical activity. As a psychologist and neuroscientist, I seek to disentangle the mutual interactions between how we represent our own body, what we eat, and how much we exercise. In the talk, I will give a scoping overview of this approach and present the studies I am conducting as a guest scientist at PS.
Organizers: Ahmed Osman
In this talk, Majid Taghavi will briefly discuss the demand for high-performance electromechanical transducers, the current challenges, and approaches he has been pursuing to tackle them. He will discuss multiple electromechanical concepts and devices that he has delivered for low-power energy harvesting, self-powered sensors, and artificial muscle technologies. Majid Taghavi will look into piezoelectric, triboelectric, electrostatic, dielectrophoretic, and androphilic phenomena, and will show his observations and innovations in coupling physical phenomena and developing smart materials and intelligent devices.
Organizers: Metin Sitti
Prof. Eric Dufresne will describe experiments on simple composites of elastomers and droplets. First, he will consider their composite mechanical properties, showing how simple liquid droplets can counterintuitively stiffen the material, and how magnetorheological fluid droplets can provide elastomers with magnetically switchable shape memory. Second, he will consider the nucleation, growth, and ripening of droplets within an elastomer. Here, a variety of interesting phenomena emerge: size-tunable monodisperse droplets, shape-tunable droplets, and ripening of droplets along stiffness gradients. His group is exploiting these phenomena to make materials with mechanically switchable structural color.
Organizers: Metin Sitti
Computation has fundamentally changed the way we study nature. New data collection technologies, such as GPS, high-definition cameras, UAVs, genotyping, and crowdsourcing, are generating data about wild populations that are orders of magnitude richer than any previously collected. Unfortunately, in this domain as in many others, our ability to analyze data lags substantially behind our ability to collect it. In this talk I will show how computational approaches can be part of every stage of the scientific process of understanding animal sociality, from intelligent data collection (crowdsourcing photographs and identifying individual animals from photographs by their stripes and spots via Wildbook.org) to hypothesis formulation (through a novel computational framework for the analysis of dynamic social networks), and provide scientific insight into the collective behavior of zebras, baboons, and other social animals.
Organizers: Aamir Ahmad
Prof. Pietro Valdastri's talk will focus on medical capsule robots. Capsule robots are cm-size devices that leverage extreme miniaturization to access and operate in environments that are out of reach for larger robots. In medicine, capsule robots can be designed to be swallowed like a pill and to diagnose and treat mortal diseases, such as cancer. The talk will move from capsule robots for the inspection of the digestive tract toward a new generation of surgical robots and devices, with substantial reductions in size, invasiveness, and cost as the main drivers for innovation. During the talk, we will discuss the enabling technologies currently being developed at the University of Leeds to transform medical robotics. These technologies include magnetic manipulation of capsule robots, hydraulic and pneumatic actuation, real-time tracking of capsule position and orientation, ultra-low-cost design, frugal innovation, and autonomy in robotic endoscopy. Prof. Russell Harris has been researching new manufacturing processes for over 20 years. He has several research projects focussing on robotics, and is particularly interested in how new manufacturing processes can enable advanced robotic devices and components. In this talk he will discuss some of this research and where he believes there may be new opportunities for collaborative research across manufacturing and robotics.
Endowing robots with human-like physical reasoning abilities remains challenging. We argue that existing methods often disregard spatio-temporal relations and that, by using Graph Neural Networks (GNNs) that incorporate a relational inductive bias, we can shift the learning process towards exploiting relations. In this work, we learn action-conditional forward dynamics models of a simulated manipulation task from visual observations involving cluttered and irregularly shaped objects. We investigate two GNN approaches and empirically assess their capability to generalize to scenarios with novel objects and an increasing number of objects. The first, a Graph Networks (GN) based approach, relies on explicitly defined edge attributes. Not only does it consistently underperform an auto-encoder baseline that we modified to predict future states; our results also indicate that the choice of edge attributes can significantly influence the predictions. Consequently, we develop the Auto-Predictor, which does not rely on explicitly defined edge attributes. It outperforms the baseline and the GN-based models. Overall, our results show the sensitivity of GNN-based approaches to the task representation and the efficacy of relational inductive biases, and advocate choosing lightweight approaches that implicitly reason about relations over ones that leave these decisions to human designers.
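The message-passing pattern behind such GN-based forward models can be sketched in a few lines. This is a minimal numpy illustration of the generic edge-update / aggregate / node-update scheme, not the abstract's actual architecture: the linear "networks", feature sizes, and the fully connected object graph are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gn_block(nodes, senders, receivers, W_edge, W_node):
    """One Graph Network message-passing step.

    nodes:             (N, D) per-object states (e.g. pose + action features)
    senders/receivers: (E,) endpoints of the directed object-graph edges
    W_edge:            maps concatenated [sender, receiver] features to messages
    W_node:            maps [node, aggregated messages] to updated node states
    """
    # Edge update: compute a message for each directed edge from its endpoints.
    edge_in = np.concatenate([nodes[senders], nodes[receivers]], axis=1)
    messages = np.tanh(edge_in @ W_edge)
    # Aggregation: sum incoming messages at each receiver node.
    agg = np.zeros((nodes.shape[0], messages.shape[1]))
    np.add.at(agg, receivers, messages)
    # Node update: predict the next per-object state from node + relations.
    node_in = np.concatenate([nodes, agg], axis=1)
    return np.tanh(node_in @ W_node)

# Fully connected graph over 3 objects (an assumption for illustration).
N, D, M = 3, 4, 8
senders, receivers = zip(*[(i, j) for i in range(N) for j in range(N) if i != j])
senders, receivers = np.array(senders), np.array(receivers)
W_edge = rng.normal(size=(2 * D, M)) * 0.1
W_node = rng.normal(size=(D + M, D)) * 0.1
next_nodes = gn_block(rng.normal(size=(N, D)), senders, receivers, W_edge, W_node)
print(next_nodes.shape)  # (3, 4): one predicted next state per object
```

The relational inductive bias lies in the structure itself: each object's predicted state can only depend on the other objects through the aggregated edge messages, so the same weights generalize to graphs with more objects.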
Organizers: Siyu Tang
Future cities and infrastructure systems will evolve into complex conglomerates where autonomous aerial, aquatic and ground-based robots will coexist with people and cooperate in symbiosis. To create this human-robot ecosystem, robots will need to respond more flexibly, robustly and efficiently than they do today. They will need to be designed with the ability to move across terrain boundaries and physically interact with infrastructure elements to perform sensing and intervention tasks. Taking inspiration from nature, aerial robotic systems can integrate multi-functional morphology, new materials, energy-efficient locomotion principles and advanced perception abilities that will allow them to successfully operate and cooperate in complex and dynamic environments. This talk will describe the scientific fundamentals, design principles and technologies for the development of biologically inspired flying robots with adaptive morphology that can perform monitoring and manufacturing tasks for future infrastructure and building systems. Examples will include flying robots with perching capabilities and origami-based landing systems, drones for aerial construction and repair, and combustion-based jet thrusters for aerial-aquatic vehicles.
Organizers: Metin Sitti
In the first part of the talk, I am going to present our work on human pose estimation in the wild, i.e., in unconstrained images and videos containing an a priori unknown number of people, often occluded and exhibiting a wide range of articulations and appearances. Unlike conventional top-down approaches that first detect humans with an off-the-shelf object detector and then estimate poses independently per bounding box, our formulation performs joint detection and pose estimation. In the first stage, we indiscriminately localise the body parts of every person in the image with a state-of-the-art ConvNet-based keypoint detector. In the second stage, we assign keypoints to people using a graph partitioning approach that minimizes an integer linear program under a set of constraints, with the vertex and edge costs computed by our ConvNet. Our method naturally generalises to articulated tracking of multiple humans in video sequences. Next, I will discuss our work on learning accurate 3D object shape and camera pose from a collection of unlabeled category-specific images. We train a convolutional network to predict both the shape and the pose from a single image by minimizing the reprojection error: given several views of an object, the projections of the predicted shapes under the predicted camera poses should match the provided views. To deal with pose ambiguity, we introduce an ensemble of pose predictors that we then distill into a single "student" model. To allow for efficient learning of high-fidelity shapes, we represent the shapes by point clouds and devise a formulation that allows for their differentiable projection. Finally, I will talk about how to reconstruct the appearance of three-dimensional objects, namely a method for generating a 3D human avatar from an image. Our model predicts a full texture map, a clothing segmentation, and a displacement map.
The learning is done in the UV-space of the SMPL model, which turns the hard 3D inference problem into an image-to-image translation task, where we can use deep neural networks to encode appearance, geometry, and clothing layout. Our model is trained on a dataset of over 4000 3D scans of humans in diverse clothing.
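The reprojection objective from the shape-and-pose work above can be sketched as follows. This is an illustrative numpy version only: the simple pinhole model, unit focal length, and array shapes are assumptions, and the actual method projects point clouds differentiably inside the network rather than in a numpy loop.

```python
import numpy as np

def reprojection_loss(points, rotations, translations, observed_2d, focal=1.0):
    """Mean squared reprojection error of one predicted shape over several views.

    points:       (P, 3) predicted object point cloud (shared across views)
    rotations:    (V, 3, 3) predicted camera rotations, one per view
    translations: (V, 3)   predicted camera translations, one per view
    observed_2d:  (V, P, 2) corresponding 2D points in the provided views
    """
    loss = 0.0
    for R, t, obs in zip(rotations, translations, observed_2d):
        cam = points @ R.T + t                    # world -> camera coordinates
        proj = focal * cam[:, :2] / cam[:, 2:3]   # pinhole projection to 2D
        loss += np.mean((proj - obs) ** 2)
    return loss / len(rotations)

# Sanity check: projecting with the true shape and pose gives zero loss.
rng = np.random.default_rng(1)
pts = rng.normal(size=(10, 3))
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])            # place points in front of the camera
cam = pts @ R.T + t
obs = (cam[:, :2] / cam[:, 2:3])[None]    # one view, focal = 1
zero = reprojection_loss(pts, R[None], t[None], obs)
print(zero)  # 0.0
```

Minimizing this quantity over many objects couples the shape and pose predictions: a shape is only rewarded if, under the predicted cameras, it explains every provided view at once.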