Safe Autonomy with Deep Learning in the Feedback Loop
Bio:
George J. Pappas is the UPS Foundation Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received numerous awards, including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation PECASE Award, and the George H. Heilmeier Faculty Excellence Award.
Abstract:
Deep learning has been extremely successful in computer vision and perception. Inspired by this success in perceiving environments, deep learning is now one of the main sensing modalities in autonomous robots, including driverless cars. The recent success of deep reinforcement learning in games such as Go and chess suggests that robot planning and control will soon be performed by deep learning in a model-free manner, disrupting traditional model-based engineering design. However, recent crashes of driverless cars, as well as adversarial attacks on deep networks, have exposed the brittleness of deep learning perception, which can lead to catastrophic decisions. There is a tremendous opportunity for the cyber-physical systems community to embrace these challenges and develop principles, architectures, and tools to ensure the safety of autonomous systems. In this talk, I will present our approach to ensuring the robustness and safety of autonomous robots that use deep learning as a perceptual sensor in the feedback loop. Using ideas from robust control, we develop tools to analyze the robustness of deep networks and thereby make perception of the environment more reliable. Critical to our approach is creating semantic representations of unknown environments while also quantifying the uncertainty of semantic maps. Autonomous planning and control must both embrace such semantic representations and formally reason about the environmental uncertainty produced by deep learning in the feedback loop, leading to autonomous robots that operate with prescribed safety in unknown but learned environments.
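To give a flavor of what "analyzing the robustness of deep networks" can mean in practice (the talk's actual tools are not specified here), the sketch below computes a simple certified Lipschitz upper bound for a tiny ReLU network as the product of its layers' spectral norms, and compares it against empirical output-to-input perturbation ratios. The network, its weights, and the sampling procedure are all illustrative assumptions, not the speaker's method.

```python
import numpy as np

# Illustrative two-layer ReLU network: f(x) = W2 @ relu(W1 @ x).
# Weights are arbitrary; any fixed matrices would do for this sketch.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Certified (crude) Lipschitz bound: ReLU is 1-Lipschitz, so
# ||f(x) - f(y)|| <= sigma_max(W2) * sigma_max(W1) * ||x - y||.
lip_bound = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

# Empirical check: the observed perturbation gain never exceeds the bound.
max_ratio = 0.0
for _ in range(1000):
    x = rng.standard_normal(3)
    y = x + 1e-3 * rng.standard_normal(3)
    ratio = np.linalg.norm(f(x) - f(y)) / np.linalg.norm(x - y)
    max_ratio = max(max_ratio, ratio)

print(f"certified bound: {lip_bound:.3f}, worst observed gain: {max_ratio:.3f}")
```

A small Lipschitz constant limits how much an input perturbation (e.g., an adversarial patch on a camera image) can change the network's output; tighter bounds than this layer-wise product can be obtained with semidefinite-programming-based methods, at higher computational cost.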