I am on the faculty in the Civil and Environmental Engineering Department at the University of Michigan. My research is on control of energy systems.
Dr. Anuradha Annaswamy received the Ph.D. degree in Electrical Engineering from Yale University in 1985. She has been a member of the faculty at Yale, Boston University, and MIT, where she is currently the director of the Active-Adaptive Control Laboratory and a Senior Research Scientist in the Department of Mechanical Engineering. Her research interests pertain to adaptive control theory and applications to aerospace and automotive control, active control of noise in thermo-fluid systems, control of autonomous systems, decision and control in smart grids, and co-design of control and distributed embedded systems. She is the co-editor of the IEEE CSS report on Impact of Control Technology: Overview, Success Stories, and Research Challenges, 2011, and will serve as the Editor-in-Chief of the IEEE Vision document on Smart Grid and the role of Control Systems to be published in 2013. Dr. Annaswamy has received several awards, including the George Axelby Outstanding Paper award from the IEEE Control Systems Society, the Presidential Young Investigator award from the National Science Foundation, the Hans Fischer Senior Fellowship from the Institute for Advanced Study at the Technische Universität München in 2008, and the Donald Julius Groen Prize for 2008 from the Institution of Mechanical Engineers. Dr. Annaswamy is a Fellow of the IEEE and a member of AIAA.
This project aims to transform the software development process in modern cars, which are witnessing significant innovation as many new autonomous functions are introduced, culminating in a fully autonomous vehicle. Most of these new features are implemented in software, at the heart of which lie several control algorithms. Such control algorithms operate in a feedback loop: sensing the state of the plant (the system to be controlled), computing a control input, and actuating the plant to enforce a desired behavior on it. Examples range from brake and engine control to cruise control, automated parking, and fully autonomous driving. Current development flows start with mathematically designing a controller, followed by implementing it in software on the embedded systems in a car. This flow has worked well in the past, when automotive embedded systems were simple, with few processors, communication buses, and simple sensors. The control algorithms were simple as well, and important functions were largely implemented by mechanical subsystems. But modern cars have over 100 processors connected by several miles of cables, and multiple sensors such as cameras, radars, and lidars, whose data needs complex processing before it can be used by a controller. Further, the control algorithms themselves are more complex, since they must implement new autonomous features that did not exist before. As a result, computation, communication, and memory accesses in such a complex hardware/software system can now be organized in many different ways, each associated with different tradeoffs in accuracy, timing, and resource requirements. These in turn have considerable impact on control performance and on how the control strategy needs to be designed. Consequently, the clear separation between designing the controller and then implementing it in software in the car no longer works well.
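The sense-compute-actuate feedback loop described above can be sketched as follows. This is a minimal, hypothetical illustration only, using a discrete-time PI cruise controller on toy first-order vehicle dynamics; all gains and plant parameters are invented for the example and are not from the project.

```python
# Minimal sketch of a feedback control loop (sense -> compute -> actuate),
# using a hypothetical PI cruise controller on toy first-order vehicle
# dynamics. All parameters are illustrative, not from the project.

def simulate_cruise_control(v0=0.0, v_ref=30.0, steps=5000, dt=0.01):
    """Drive vehicle speed v toward the setpoint v_ref; return final speed."""
    kp, ki = 2.0, 0.5            # illustrative PI gains
    drag, mass = 0.1, 1.0        # toy plant parameters
    v, integral = v0, 0.0
    for _ in range(steps):
        error = v_ref - v                  # sense: read the plant state
        integral += error * dt
        u = kp * error + ki * integral     # compute: the control input
        v += (u - drag * v) / mass * dt    # actuate: apply force to the plant
    return v
```

In a production controller this loop would be a periodic real-time task, with the sensing and actuation steps going through the car's buses and device drivers; that timing dimension is exactly what couples controller design to the embedded platform.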
This project aims to develop both the theoretical foundations and the tool support to adapt this design flow to emerging automotive control strategies and embedded systems. This will not only result in more cost-effective design of future cars, but will also help with certifying the implemented controllers, thereby leading to safer autonomous cars.
In particular, the goal is to automate the synthesis and implementation of control algorithms on distributed embedded architectures consisting of different types of multicore processors, GPUs, FPGA-based accelerators, different communication buses, gateways, and sensors requiring compute-intensive processing. Starting with specifications of plants, control objectives, controller templates, and a partially specified implementation architecture, this project seeks to synthesize both controller and implementation architecture parameters that meet all control objectives and resource constraints. To this end, a variety of techniques from switched control, interface compatibility checking, and scheduling of multi-mode systems, bringing together control theory, real-time systems, program analysis, and mathematical optimization, will be used. In collaboration with General Motors, this project will build a tool chain that integrates controller design tools like MATLAB/Simulink with standard embedded systems design and configuration tools. This project will demonstrate the benefits of this new design flow and tool support by addressing a set of challenge problems from General Motors.
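One flavor of the controller/platform co-design problem described above can be sketched in a few lines: pick the fastest sampling period for a new control task (faster sampling generally means better control performance) that still passes a classical schedulability test together with the tasks already on the processor. The Liu-Layland rate-monotonic utilization bound is used here purely as a stand-in; the project's actual synthesis techniques are far richer.

```python
# Hypothetical co-design sketch: choose the shortest sampling period for a
# new control task that keeps the task set schedulable under the classical
# rate-monotonic utilization bound. Task parameters are illustrative.

def rm_bound(n):
    """Liu-Layland utilization bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def fastest_feasible_period(existing, wcet, candidate_periods):
    """existing: list of (period, wcet) tasks already on the processor.
    Returns the shortest candidate period that passes the bound, else None."""
    for period in sorted(candidate_periods):
        tasks = existing + [(period, wcet)]
        utilization = sum(c / p for p, c in tasks)
        if utilization <= rm_bound(len(tasks)):
            return period   # shorter period -> better control performance
    return None
```

This captures the basic tension the abstract points at: the control-theoretic objective (sample fast) competes with the resource constraint (stay schedulable), so the two parameter sets must be synthesized together.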
In avionics, an evolution is underway to endow aircraft with "thinking" capabilities through the use of artificial-intelligence (AI) techniques. This evolution is being fueled by the availability of high-performance embedded hardware platforms, typically in the form of multicore machines augmented with accelerators that can speed up certain computations. Unfortunately, avionics software certification processes have not kept pace with this evolution. These processes are rooted in the twin concepts of time and space partitioning: different system components are prevented from interfering with each other as they execute (time) and as they access memory (space). On a single-processor machine, these concepts can be simply applied to decompose a system into smaller components that can be specified, implemented, and understood separately. On a multicore+accelerator platform, however, component isolation is much more difficult to achieve efficiently. This fact points to a looming dilemma: unless reasonable notions of component isolation can be provided in this context, certifying AI-based avionics systems will likely be impractical. This project will address this dilemma through multi-faceted research in the CPS Core Research Areas of Real-Time Systems, Safety, Autonomy, and CPS System Architecture. It will contribute to Real-Time Systems and Safety by producing new infrastructure and analysis tools for component-based avionics applications that must pass real-time certification. It will contribute to Safety and Autonomy by targeting the design of autonomous aircraft that must exhibit certifiably safe and dependable behavior. It will contribute to CPS System Architecture by designing new methods for decomposing complex AI-oriented avionics workloads into components that are isolated in space and time.
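The time-partitioning concept above can be made concrete with a small sketch: in a statically scheduled system (in the spirit of ARINC 653 partition scheduling, though this code is an invented simplification, not that standard), each component runs only inside fixed time windows of a repeating major frame, and isolation requires that windows never overlap and that every partition receives its declared budget.

```python
# Illustrative check of the "time partitioning" isolation property: verify
# that a static major-frame schedule has non-overlapping partition windows
# and gives each partition its full time budget. A simplified invented
# model, not an implementation of any certification standard.

def check_time_partitioning(windows, budgets, major_frame):
    """windows: list of (partition_id, start, duration);
    budgets: dict partition_id -> required total time per major frame."""
    # no window may extend past the end of the major frame
    if any(start + dur > major_frame for _, start, dur in windows):
        return False
    # windows must not overlap in time (components cannot interfere)
    ordered = sorted(windows, key=lambda w: w[1])
    for (_, s1, d1), (_, s2, _) in zip(ordered, ordered[1:]):
        if s1 + d1 > s2:
            return False
    # each partition must receive its declared budget
    allocated = {}
    for pid, _, dur in windows:
        allocated[pid] = allocated.get(pid, 0) + dur
    return all(allocated.get(pid, 0) >= b for pid, b in budgets.items())
```

On a single processor this kind of static check is straightforward, which is the point of the abstract's contrast: on a multicore+accelerator platform, contention for shared caches, memory bandwidth, and accelerators means non-overlapping time windows alone no longer guarantee non-interference.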
The intellectual merit of this project lies in producing a framework for supporting components on multicore+accelerator platforms in AI-based avionics use cases. This framework will balance the need to isolate components in time and space with the need for efficient execution. Component provisioning hinges on execution time bounds for individual programs. New timing-analysis methods will be produced for obtaining these bounds at different safety levels. Research will also be conducted on performance/timeliness/accuracy tradeoffs that arise when refactoring time-limited AI computations for perception, planning, and control into components. Experimental evaluations of the proposed framework will be conducted using an autonomous aircraft simulator, commercial drones, and facilities at Northrop Grumman Corp. More broadly, this project will contribute to the continuous push toward more semi-autonomous and autonomous functions in avionics. This push began 40 years ago with auto-pilot functions and is being fueled today by advances in AI software. This project will focus on a key aspect of certifying this software: validating real-time correctness. The results that are produced will be made available to the world at large through open-source software. This software will include operating-system extensions for supporting components in an isolated way and mechanisms for forming components and assessing their timing correctness. Additionally, a special emphasis will be placed on outreach efforts that target underrepresented groups, and on increasing female participation in computing at the undergraduate level.
Deep Reinforcement Learning (RL) has emerged as a prominent tool for controlling cyber-physical systems (CPS) with highly nonlinear, stochastic, and unknown dynamics. Nevertheless, our current lack of understanding of when, how, and why RL works calls for new synthesis and analysis tools for safety-critical CPS driven by RL controllers; this is the main scope of this project. The primary focus of this research is on mobile robot systems. Such CPS are often driven by RL controllers due to their inherently complex, and possibly uncertain or unknown, dynamics, unknown exogenous disturbances, or the need for real-time decision making. Typically, RL-based control design methods are data-inefficient, cannot be safely transferred to new mission and safety requirements or to new environments, and often lack performance guarantees. This research aims to address these limitations, resulting in a novel paradigm in safe autonomy for CPS with RL controllers. Wide availability of the developed autonomy methods can enable safety-critical applications for CPS with significant societal impact on, e.g., environmental monitoring, infrastructure inspection, autonomous driving, and healthcare. The broader impacts of this research include its educational agenda involving K-12, undergraduate, and graduate education.
To achieve the research goal of safe, efficient, and transferable RL, three tightly coupled research thrusts are pursued: (i) accelerated and safe reinforcement learning for temporal logic control objectives; (ii) safe transfer learning for temporal logic control objectives; and (iii) compositional verification of temporal logic properties for CPS with neural network (NN) controllers. The technical approach in these thrusts relies on tools drawn from formal methods, machine learning, and control theory, and requires overcoming intellectual challenges related to the integration of computation, control, and sensing. The developed autonomy methods will be validated and demonstrated on mobile aerial and ground robots in autonomous surveillance, delivery, and mobile manipulation tasks.
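To give a flavor of the temporal logic control objectives named in these thrusts, the sketch below checks a recorded robot trajectory against a toy mission specification of the form "always avoid the obstacle region, and eventually reach the goal region". The discrete states and region sets are invented for illustration; the project's specifications and verification machinery are of course far more expressive than this finite-trace check.

```python
# Hypothetical check of a simple temporal-logic-style mission spec over a
# finite recorded trajectory: "always not-obstacle AND eventually goal".
# States and regions are abstract placeholders for illustration.

def satisfies_mission(trajectory, goal, obstacles):
    """trajectory: finite sequence of states; goal/obstacles: sets of states."""
    always_safe = all(s not in obstacles for s in trajectory)   # "always" clause
    eventually_goal = any(s in goal for s in trajectory)        # "eventually" clause
    return always_safe and eventually_goal
```

Checks of this kind can serve as runtime monitors during training and deployment, which is one way temporal logic objectives get coupled to RL controllers in practice.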