CAREER: Formal Methods for Human-Cyber-Physical Systems
Lead PI:
Lu Feng
Abstract
There is a growing trend toward human-cyber-physical systems (h-CPS), in which systems collaborate or interact with human operators to harness the complementary strengths of humans and autonomy. Examples include self-driving cars that require occasional driver intervention, and industrial robots that work beside or cooperatively with people. The societal impact of h-CPS, however, is contingent on ensuring safety and reliability. Several high-profile incidents have shown that unsafe h-CPS can lead to catastrophic outcomes. Formal methods enable the model-based design of safety-critical systems with mathematically rigorous guarantees. However, the research area of formal methods for h-CPS is still in its infancy. The goal of this research is to expand formal methods toward the joint modeling and formal analysis of CPS and human-autonomy interactions, accounting for the uncertainty and variability of human behaviors, intentions, and preferences.
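As a toy illustration of the kind of analysis formal methods enable here (not taken from the project), the sketch below applies value iteration to a small Markov chain of an assumed driver-takeover scenario and computes the probability of eventually reaching an unsafe state. All states and transition probabilities are invented for illustration; probabilistic model checkers such as PRISM or Storm automate this kind of query at scale.

```python
# Illustrative sketch only: probability of eventually reaching an unsafe state
# in an assumed Markov chain model of a driver takeover request.
import numpy as np

# States: 0 = autonomous, 1 = takeover requested, 2 = driver in control (safe),
#         3 = unsafe. States 2 and 3 are absorbing. All numbers are assumptions.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],   # autonomous: occasionally requests takeover
    [0.00, 0.20, 0.70, 0.10],   # request pending: driver responds or fails
    [0.00, 0.00, 1.00, 0.00],   # safe, absorbing
    [0.00, 0.00, 0.00, 1.00],   # unsafe, absorbing
])

# Probability of eventually reaching the unsafe state from each state,
# obtained by iterating x <- P x while pinning the absorbing states.
x = np.zeros(4)
x[3] = 1.0
for _ in range(1000):
    x_new = P @ x
    x_new[2], x_new[3] = 0.0, 1.0
    if np.max(np.abs(x_new - x)) < 1e-12:
        x = x_new
        break
    x = x_new

print("P(eventually unsafe | start autonomous) =", round(x[0], 4))
```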
Performance Period: 06/15/2020 - 05/31/2025
Institution: University of Virginia Main Campus
Sponsor: National Science Foundation
Award Number: 1942836
CPS: Medium: Autonomous Control of Self-Powered Critical Infrastructures
Lead PI:
Jeff Scruggs
Co-PI:
Abstract
This Cyber-Physical Systems (CPS) project will develop novel sensing, actuation, and embedded computing technologies that allow civil infrastructures to be responsive, resilient, and adaptive in the face of dynamic loads. Such technologies require delivery of electrical power, typically either via an external power grid or through the use of battery storage. However, grid power may be unreliable during extreme loading events, and batteries must be periodically recharged or replaced. The novelty of the technologies developed in this project is that they power themselves, by storing and reusing energy injected into the infrastructure by external loads. The project focuses on three applications: (1) urban stormwater networks that actively control water levels to prevent flooding, using power generated from the hydrologic flows; (2) buildings that actively control their deformations during earthquakes and high winds, using power generated from vibrations; and (3) ocean desalination systems that actively control pumping rates, using power generated from waves. The project contains an experimental campaign for each application. It also contains an analytical component, focused on the development of control algorithms to maximize the performance of the technologies. Educational outreach activities include class modules and research experiences for undergraduate and graduate students, as well as a workshop for high school students. Control algorithms for self-powered infrastructures must explicitly optimize the balance between power generation and performance objectives. This project will develop new Model Predictive Control algorithms for self-powered infrastructure technologies, such that they achieve the best performance possible while never running out of energy. These algorithms will be validated experimentally for all three applications. There is currently no theory for optimal control of self-powered systems that scales to large and complex systems such as the ones under consideration. The research conducted here will augment recent advances in Model Predictive Control theory, resulting in a new body of knowledge in this area. Challenges include: (1) innovation of optimization algorithms that can contend with the inherent nonconvexity of optimal self-powered control problems; (2) development of effective techniques for handling the stochastic nature of the dynamics for the target applications; (3) synthesis of controllers that are computationally tractable but also optimally compensate for the complex transmission losses and constraints in the power trains; and (4) derivation of systematic techniques for ensuring the robustness of the controllers to uncertainties in the system model and disturbances.
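The following sketch illustrates, under strongly simplifying assumptions, the flavor of an energy-aware MPC step for a self-powered stormwater valve. The plant and energy models, numbers, and the use of cvxpy are all assumptions made for illustration and kept linear/convex so an off-the-shelf solver applies; the project's actual problems are nonconvex and far richer than this.

```python
# Minimal sketch (assumed models, not the project's algorithm): one finite-horizon
# MPC solve for a simplified "self-powered" stormwater valve that must never
# deplete its stored energy.
import cvxpy as cp
import numpy as np

N = 24                                               # horizon (hours)
d = 0.3 + 0.2 * np.sin(np.linspace(0, np.pi, N))     # forecast inflow (assumed)
h = 0.05 * np.ones(N)                                # forecast harvested energy
x0, e0 = 1.0, 0.2                                    # initial level and energy

x = cp.Variable(N + 1)        # water level above target
e = cp.Variable(N + 1)        # stored energy
u = cp.Variable(N)            # controlled release (consumes energy)

cons = [x[0] == x0, e[0] == e0, u >= 0, u <= 1.0, x >= 0]
for k in range(N):
    cons += [x[k + 1] == x[k] + d[k] - u[k],         # water balance
             e[k + 1] == e[k] + h[k] - 0.1 * u[k],   # energy balance
             e[k + 1] >= 0]                          # never run out of energy

obj = cp.Minimize(cp.sum_squares(x) + 1e-3 * cp.sum_squares(u))
cp.Problem(obj, cons).solve()
print("first control move:", float(u.value[0]))      # applied, then re-solved
```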
Jeff Scruggs

I am on the faculty in the Civil and Environmental Engineering Department at the University of Michigan.  My research is on control of energy systems.

Performance Period: 10/01/2022 - 09/30/2025
Institution: University of Michigan
Sponsor: National Science Foundation
Award Number: 2206018
CPS: DFG Joint: Medium: Collaborative Research: Perceptive Stochastic Coordination in Mass Platoons of Automated Vehicles
Lead PI:
Javad Mohammadpour Velni
Abstract
Connected Automated Vehicle (CAV) applications are expected to transform the transportation landscape and address some of the pressing safety and efficiency issues. While advances in communication and computing technologies enable the concept of CAVs, the coupling of the application, control, and communication components of such systems, together with interference from human actors, poses significant challenges to designing systems that are safe and reliable beyond prototype environments. Realizing CAV applications, in particular in situations where humans may partly remain in the loop, requires addressing uncertainties that arise from human input. Large-scale deployment of CAVs will also require addressing challenges in coordination of actions among CAVs and with human-operated systems. To address these challenges, this project develops a novel model-based stochastic hybrid systems (SHS)-theoretic approach that relies on describing and communicating behaviors of actors in the system in the form of evolving SHS using Bayesian learning. The models are then utilized in a stochastic model predictive control (SMPC) framework for optimal coordination of actions. The proposed research will provide wide-ranging societal benefits through three major impact areas: first, by advancing research in stochastic communication-aware control design for hybrid systems; second, through the development of new models and advanced controllers to address the emerging challenges of coordinating mixed systems of automated and manned vehicles, hence opening new vistas in other areas involving general multi-agent systems; and third, through educational and outreach activities that are natural extensions of this multidisciplinary research. This project is also among the first fruits of a recent National Science Foundation/Deutsche Forschungsgemeinschaft (NSF/DFG) collaboration on cyber-physical systems (CPS). Through this collaboration, NSF funds the US component (University of Central Florida and University of Georgia) while the German partners (University of Technology and University of Koblenz-Landau) are funded by DFG. The overarching goal of this collaborative research is to introduce SHS-based modeling and control concepts to allow the design of highly efficient CAV systems capable of large-scale coordination (mass platooning). Such designs are currently challenging due to the uncertainties that stem from human input and communication among actors. The key objectives of the project are to: (1) develop methods for capturing the human-, sensor-, and communication-induced uncertainties of mixed automated and manned systems in a stochastic hybrid system form (perception maps) and communicating them in a control-aware fashion; (2) employ the models in an SMPC framework to produce multi-modal decisions and lower-level longitudinal motion control in a single unified framework; and (3) validate the analytical outcomes through both extensive data-driven co-simulation using industry-utilized models, and a fleet of realistic small CAVs and a full-scale prototype CAV.
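To make the SMPC idea concrete, here is a hedged, illustrative sketch of one scenario-based decision step for longitudinal gap control. The sampled leader-behavior scenarios simply stand in for the Bayesian-learned stochastic hybrid models described above, and all vehicle parameters and numbers are assumptions, not project results.

```python
# Illustrative sketch only (assumed models and numbers): one step of a
# scenario-based stochastic MPC for longitudinal platooning. The follower picks
# the acceleration whose worst sampled gap stays above a safety minimum.
import numpy as np

rng = np.random.default_rng(0)
dt, N, n_scen = 0.2, 20, 50
gap0, v_f, v_l = 12.0, 20.0, 20.0        # initial gap and speeds (m, m/s)
gap_min, gap_ref = 5.0, 10.0

# Sampled leader acceleration profiles (scenarios), here a simple random walk.
a_lead = rng.normal(0.0, 0.6, size=(n_scen, N))

best_a, best_cost = None, np.inf
for a_f in np.linspace(-3.0, 2.0, 51):   # candidate constant follower accel
    v_l_traj = v_l + np.cumsum(a_lead, axis=1) * dt            # (n_scen, N)
    v_f_traj = v_f + a_f * dt * np.arange(1, N + 1)             # (N,)
    gap = gap0 + np.cumsum(v_l_traj - v_f_traj, axis=1) * dt    # (n_scen, N)
    if gap.min() < gap_min:              # enforce safety in every scenario
        continue
    cost = np.mean((gap - gap_ref) ** 2) + 0.1 * a_f ** 2
    if cost < best_cost:
        best_a, best_cost = a_f, cost

print("chosen follower acceleration:", best_a)
```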
Performance Period: 10/01/2022 - 12/31/2024
Institution: Clemson University
Sponsor: National Science Foundation
Award Number: 2302215
Collaborative Research: CPS: Medium: Wildland Fire Observation, Management, and Evacuation using Intelligent Collaborative Flying and Ground Systems
Lead PI:
Janice Coen
Abstract
Increasing wildfire costs---a reflection of climate variability and development within wildlands---drive calls for new national capabilities to manage wildfires. The great potential of unmanned aerial systems (UAS) has not yet been fully utilized in this domain due to the lack of holistic, resilient, flexible, and cost-effective monitoring protocols. This project will develop UAS-based fire management strategies to use autonomous unmanned aerial vehicles (UAVs) in an optimal, efficient, and safe way to assist first responders during the fire detection, management, and evacuation stages. The project is a collaborative effort between Northern Arizona University (NAU), Georgia Institute of Technology (GaTech), Desert Research Institute (DRI), and the National Center for Atmospheric Research (NCAR). The team has established ongoing collaborations with the U.S. Forest Service (USFS) Pacific Northwest Research Station, Kaibab National Forest (NF), and the Arizona Department of Forestry and Fire Management to perform multiple field tests during prescribed and managed fires. The project's objective is to develop an integrated framework satisfying unmet wildland fire management needs, with key advances in scientific and engineering methods, by using a network of low-cost, small autonomous UAVs along with ground vehicles during different stages of fire management operations, including: (i) early detection in remote and forested areas using autonomous UAVs; (ii) fast active geo-mapping of the fire heat map on flying drones; (iii) real-time video streaming of the fire spread; and (iv) finding optimal evacuation paths using autonomous UAVs to guide ground vehicles and firefighters for fast and safe evacuation. This project will advance the frontier of disaster management by developing: (i) an innovative drone-based forest fire detection and monitoring technology for rapid intervention in hard-to-access areas with minimal human intervention to protect firefighter lives; (ii) multi-level fire modeling to offer strategic, event-scale, and new on-board, low-computation tactics using fast fire mapping from UAVs; and (iii) a bounded-reasoning-based planning mechanism in which the UAVs identify the fastest and safest evacuation roads for firefighters and fire trucks in highly dynamic, uncertain, and dangerous zones. The developed technologies will be translational to a broad range of applications, such as disaster (flooding, fire, mudslide, terrorism) management, where quick search, surveillance, and response are required with limited human intervention. This project will also contribute to future engineering curricula and pursue a substantial integration of research and education, while engaging female and underrepresented minority students and developing hands-on research experiments for K-12 students. This project is in response to the NSF Cyber-Physical Systems 20-563 solicitation.
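As a rough illustration of objective (iv) only, the sketch below runs plain Dijkstra on an invented road graph whose edge costs blend travel time with a fire-risk penalty of the kind a UAV heat map would supply. Road names, risks, and the weighting are all assumptions; the project's bounded-reasoning planner is far more sophisticated than this.

```python
# Hypothetical sketch: evacuation routing where each edge cost combines travel
# time with a fire-risk penalty. All graph data and weights are assumed.
import heapq

# edge: (neighbor, travel_time_min, fire_risk in [0, 1])
roads = {
    "camp":     [("junction", 4, 0.1), ("ridge", 3, 0.7)],
    "ridge":    [("highway", 5, 0.8)],
    "junction": [("highway", 6, 0.2), ("ridge", 2, 0.6)],
    "highway":  [],
}
RISK_WEIGHT = 20.0     # minutes of delay we accept to avoid one unit of risk

def safest_fastest(start, goal):
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:                      # reconstruct the chosen route
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue
        for nbr, t, risk in roads[node]:
            nd = d + t + RISK_WEIGHT * risk
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    return float("inf"), []

print(safest_fastest("camp", "highway"))      # favors the lower-risk route
```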
Janice Coen
Dr. Janice Coen holds positions of Project Scientist in the Mesoscale and Microscale Meteorology Laboratory at the National Center for Atmospheric Research in Boulder, Colorado, and Senior Research Scientist at the University of San Francisco in San Francisco, California. She received a B.S. in Engineering Physics from Grove City College and an M.S. and Ph.D. from the Department of Geophysical Sciences at the University of Chicago. She studies fire behavior and its interaction with weather using coupled weather-fire CFD models and flow analysis of high-speed IR fire imagery. Her recent work investigated the mechanisms leading to extreme wildfire events, fine-scale wind extrema that lead to ignitions by the electric grid, and integration of coupled models with satellite active fire detection data to forecast the growth of landscape-scale wildfires.
Performance Period: 05/01/2021 - 04/30/2024
Institution: National Center for Atmospheric Research (NCAR)
Sponsor: National Science Foundation
Award Number: 2038759
Collaborative Research: CPS: Medium: SMAC-FIRE: Closed-Loop Sensing, Modeling and Communications for WildFIRE
Lead PI:
Janice Coen
Abstract
Increases in temperature and in drought duration and intensity due to climate change, together with the expansion of wildland-urban interfaces, have dramatically increased the frequency and intensity of forest fires and have had devastating effects on lives, property, and the environment. To address this challenge, this project's goal is to design a network of airborne drones and wireless sensors that can aid in initial wildfire localization and mapping, near-term prediction of fire progression, and providing communications support for firefighting personnel on the ground. Two key aspects differentiate the system from prior work: (1) it leverages and subsequently updates detailed three-dimensional models of the environment, including the effects of fuel type and moisture state, terrain, and atmospheric/wind conditions, in order to provide the most timely and accurate predictions of fire behavior possible, and (2) it adapts to hazardous and rapidly changing conditions, optimally balancing the need for wide-area coverage with the need to maintain communication links with personnel in remote locations. The science and engineering developed under this project can be adapted to many applications beyond wildfires, including structural fires in urban and suburban settings, natural or man-made emergencies involving radiation or airborne chemical leaks, "dirty bombs" that release chemical or biological agents, or tracking highly localized atmospheric conditions surrounding imminent or ongoing extreme weather events. The system developed under this project will enable more rapid localization and situational awareness of wildfires at their earliest stages, better predictions of both local, near-term and event-scale behavior, better situational awareness and coordination of personnel and resources, and increased safety for firefighters on the ground. Models ranging from simple algebraic relationships based on wind velocity to more complex time-dependent coupled fluid dynamics-fire physics models will be used to anticipate fire behavior. These models are hampered by stochastic processes, such as the lofting of burning embers that ignite new fires, which cause errors to grow rapidly with time. This project is focused on closing the loop using sensor data provided by airborne drones and ground-based sensors (GBS). The models inform the sensing by anticipating rapid growth of problematic phenomena, and the subsequent sensing updates the models, providing local wind and spot-fire locations. Closing this loop as quickly as possible is critical to mitigating the fire's impact. The system we propose integrates advanced fire modeling tools with mobile drones, wireless GBS, and high-level human interaction for both the initial attack of a wildfire event and subsequent ongoing support.
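A minimal, assumption-laden sketch of this model-sensing loop is shown below: a one-dimensional constant-rate-of-spread prediction is corrected by noisy drone observations of the fire front in a Kalman-style update. The scalar model, noise levels, and revisit interval are invented stand-ins for the coupled weather-fire models and real sensing described above.

```python
# Simplified sketch (all models and gains assumed): closing the loop between a
# 1-D fire-front prediction and drone observations of the front position.
import numpy as np

rng = np.random.default_rng(1)
dt = 5.0                           # minutes between drone revisits
ros_true, front_true = 30.0, 0.0   # true rate of spread (m/min), unknown to model

front_est, ros_est = 0.0, 20.0     # initial (wrong) model state
P = np.diag([50.0**2, 10.0**2])    # estimate covariance: [front, ros]
R = 25.0**2                        # drone geolocation noise variance (m^2)
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])

for step in range(12):
    # Truth advances; the drone measures the front position with noise.
    front_true += ros_true * dt
    z = front_true + rng.normal(0.0, 25.0)

    # Predict with the current model, then correct with the observation.
    x = F @ np.array([front_est, ros_est])
    P = F @ P @ F.T + np.diag([10.0**2, 1.0**2])
    K = P @ H.T / float(H @ P @ H.T + R)
    x = x + (K * (z - float(H @ x))).ravel()
    P = (np.eye(2) - K @ H) @ P
    front_est, ros_est = x

print("estimated rate of spread (m/min):", round(ros_est, 1))
```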
Janice Coen
Dr. Janice Coen holds positions of Project Scientist in the Mesoscale and Microscale Meteorology Laboratory at the National Center for Atmospheric Research in Boulder, Colorado, and Senior Research Scientist at the University of San Francisco in San Francisco, California. She received a B.S. in Engineering Physics from Grove City College and an M.S. and Ph.D. from the Department of Geophysical Sciences at the University of Chicago. She studies fire behavior and its interaction with weather using coupled weather-fire CFD models and flow analysis of high-speed IR fire imagery. Her recent work investigated the mechanisms leading to extreme wildfire events, fine-scale wind extrema that lead to ignitions by the electric grid, and integration of coupled models with satellite active fire detection data to forecast the growth of landscape-scale wildfires.
Performance Period: 07/01/2022 - 06/30/2025
Institution: University Corporation for Atmospheric Research
Sponsor: National Science Foundation
Award Number: 2209994
CPS: Medium: Interactive Human-Drone Partnerships in Emergency Response Scenarios
Lead PI:
Jane Huang
Co-PI:
Abstract
Small unmanned aerial, land, or submersible vehicles (drones) are increasingly used to support emergency response scenarios such as search-and-rescue, structural building fires, and medical deliveries. However, in current practice drones are typically controlled by a single operator thereby significantly limiting their potential. The proposed work will deliver a novel DroneResponse platform, representing the next generation of emergency response solutions in which semi-autonomous and self-coordinating cohorts of drones will serve as fully-fledged members of an emergency response team. Drones will play diverse roles in each emergency response scenario - for example, using thermal imagery to map the structural integrity of a burning building, methodically searching an area for a child lost in a cornfield, or delivering a life-saving device to a person caught in a fast-flowing river. The benefits of this project will be realized by urban and rural communities who will benefit from enhanced emergency response capabilities. Practical lessons learned from this work will broadly contribute to the conversation around best practices for drone deployment in the community including issues related to privacy, safety, and equity. Achieving the DroneResponse vision involves delivering novel scene recognition algorithms capable of recreating high-fidelity models of the environment under less than ideal environmental conditions. The work addresses non-trivial cyber-physical systems (CPS) research challenges associated with (1) scene recognition, including image merging, dealing with uncertainty, and geolocating objects; (2) exploring, designing, and evaluating human-CPS interfaces that provide situational awareness and empower users to define missions and communicate current mission objectives and achievements, (3) developing algorithms to support drone autonomy and runtime adaptation with respect to mission goals established by humans, (4) developing a framework for coordinating image recognition algorithms with real-time drone command and control, and finally (5) evaluating DroneResponse in real-world scenarios. Researchers will leverage user-centered design principles to develop human-CPS interfaces that support situational awareness designed to enable emergency responders to make informed decisions. The end goal is to empower human operators and drones to work collaboratively to save lives, minimize property damage, gather critical information, and contribute to the success of a mission across diverse emergency scenarios.
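One small ingredient of the scene-recognition challenge, geolocating a detected object, can be sketched under strong assumptions (flat ground, zero roll, a gimballed camera with known downward pitch, exact GPS and altitude). The code below is illustrative only, with assumed camera intrinsics, and is not the DroneResponse pipeline, which must handle uncertainty, image merging, and terrain.

```python
# Minimal sketch under strong assumptions: geolocate an object seen in a drone
# image by intersecting the pixel's viewing ray with flat ground.
import math

def geolocate(u, v, alt_m, heading_rad, pitch_down_rad,
              fx=800.0, fy=800.0, cx=640.0, cy=360.0):   # assumed intrinsics
    az_off = math.atan2(u - cx, fx)        # angular offset right of center
    depr_off = math.atan2(v - cy, fy)      # angular offset below center
    depression = pitch_down_rad + depr_off # ray angle below horizontal
    if depression <= 0:
        return None                        # ray never reaches the ground
    ground_range = alt_m / math.tan(depression)
    azimuth = heading_rad + az_off
    north = ground_range * math.cos(azimuth)
    east = ground_range * math.sin(azimuth)
    return north, east                     # offsets from the drone's position

# Drone at 60 m altitude, heading north, camera pitched 45 degrees down,
# object detected 100 pixels below the image center.
print(geolocate(640, 460, 60.0, 0.0, math.pi / 4))
```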
Performance Period: 10/01/2019 - 09/30/2024
Institution: University of Notre Dame
Sponsor: National Science Foundation
Award Number: 1931962
CPS: DFG Joint: Medium: Collaborative Research: Data-Driven Secure Holonic control and Optimization for the Networked CPS (aDaptioN)
Lead PI:
Anuradha Annaswamy
Abstract
The proposed decentralized/distributed control and optimization for critical cyber-physical networked infrastructures (CPNI) will improve the robustness, security, and resiliency of the electric distribution grid, which directly impacts the lives of citizens and the national economy. The proposed control and optimization architectures are flexible, adapt to changing operating scenarios, respond quickly and accurately, provide better scalability and robustness, and safely operate the system even when it is pushed toward the edges, by leveraging massive sensor data, distributed computation, and edge computing. The algorithms and platform will be released open source and royalty-free, and the project team will work with industry members and researchers toward wider usage of the developed algorithms for other CPNI. Artifacts developed as part of the proposed work will be integrated into existing undergraduate and graduate courses. Undergraduate students will be engaged in research through supplements, and underrepresented and pre-engineering students will be engaged through existing outreach activities at the home institutions, including the Imagine U program, 4-H Teens summer camp programs, and the Pacific Northwest Louis Stokes Alliance for Minority Participation. Additionally, the project team plans to organize a workshop in the third year to demonstrate the fundamental concepts and applications of the proposed control and optimization architecture to advance CPNI. The developed solutions can be extended to a range of applications in multiple CPNIs beyond the use cases discussed in the proposed work. While the proposed control architecture with edge computing offers great potential, coordinating decentralized control and optimization is extremely challenging due to variable network and computational delays, the many interleavings of message arrivals, disparate failure modes of components, and cybersecurity threats, leading to several fundamental theoretical problems. The proposed work offers a number of novel solutions, including (a) adaptive and delay-aware control algorithms, (b) predictive control and distributed optimization with realistic cyber-physical constraints, (c) threat sharing and data-driven detection and mitigation for cybersecurity, (d) coordination and management of computing nodes, and (e) knowledge learning and sharing. The proposed solutions will be a step toward advancing the fundamentals of CPNI and engineering the next generation of CPNI. The proposed work also aims to use a high-fidelity testbed to evaluate the developed algorithms and tools for a specific CPNI: the electric distribution grid.
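As a hedged illustration of one decentralized-optimization pattern relevant here, the sketch below runs a dual-decomposition dispatch in which distribution-grid nodes coordinate by exchanging only a scalar price. The cost curves, demand, and step size are assumptions, and this is not the project's algorithm.

```python
# Illustrative sketch with assumed numbers: nodes agree on power setpoints by
# iterating on a shared price (dual variable), a simple decentralized pattern.
import numpy as np

a = np.array([0.8, 1.2, 1.0, 1.5])        # local quadratic cost curvatures
p_pref = np.array([2.0, 1.0, 1.5, 0.5])   # preferred local setpoints (kW)
demand = 7.0                               # total power the feeder must supply

lam, step = 0.0, 0.4
for _ in range(200):
    # Each node solves its own problem given the price:
    #   min_p  a_i * (p - pref_i)^2 - lam * p   =>   p = pref_i + lam / (2 a_i)
    p = p_pref + lam / (2.0 * a)
    # A coordinator (or consensus scheme) updates the price from the mismatch.
    lam += step * (demand - p.sum())

print("setpoints:", np.round(p, 3), " total:", round(p.sum(), 3))
```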
Anuradha Annaswamy

Dr. Anuradha Annaswamy received the Ph.D. degree in Electrical Engineering from Yale University in 1985. She has been a member of the faculty at Yale, Boston University, and MIT, where she is currently the director of the Active-Adaptive Control Laboratory and a Senior Research Scientist in the Department of Mechanical Engineering. Her research interests pertain to adaptive control theory and its applications to aerospace and automotive control, active control of noise in thermo-fluid systems, control of autonomous systems, decision and control in smart grids, and co-design of control and distributed embedded systems. She is the co-editor of the IEEE CSS report Impact of Control Technology: Overview, Success Stories, and Research Challenges (2011) and served as the Editor-in-Chief of the IEEE Vision document on Smart Grid and the Role of Control Systems, published in 2013. Dr. Annaswamy has received several awards, including the George Axelby Outstanding Paper Award from the IEEE Control Systems Society, the Presidential Young Investigator Award from the National Science Foundation, the Hans Fischer Senior Fellowship from the Institute for Advanced Study at the Technische Universität München in 2008, and the Donald Julius Groen Prize for 2008 from the Institution of Mechanical Engineers. Dr. Annaswamy is a Fellow of the IEEE and a member of AIAA.

Performance Period: 01/01/2020 - 12/31/2023
Institution: Massachusetts Institute of Technology
Sponsor: National Science Foundation
Award Number: 1932406
CPS: Medium: GOALI: Design Automation for Automotive Cyber-Physical Systems
Lead PI:
Samarjit Chakraborty
Co-PI:
Abstract

This project aims to transform the software development process in modern cars, which are witnessing significant innovation as many new autonomous functions are introduced, culminating in fully autonomous vehicles. Most of these new features are implemented in software, at the heart of which lie several control algorithms. Such control algorithms operate in a feedback loop that involves sensing the state of the plant (the system to be controlled), computing a control input, and actuating the plant in order to enforce a desired behavior on it. Examples range from brake and engine control to cruise control, automated parking, and fully autonomous driving. Current development flows start with mathematically designing a controller, followed by implementing it in software on the embedded systems existing in a car. This flow has worked well in the past, when automotive embedded systems were simple, with few processors, few communication buses, and simple sensors. The control algorithms were simple as well, and important functions were largely implemented by mechanical subsystems. But modern cars have over 100 processors connected by several miles of cables, and multiple sensors such as cameras, radars, and lidars, whose data needs complex processing before it can be used by a controller. Further, the control algorithms themselves are also more complex, since they need to implement new autonomous features that did not exist before. As a result, computation, communication, and memory accesses in such a complex hardware/software system can now be organized in many different ways, each associated with different tradeoffs in accuracy, timing, and resource requirements. These in turn have considerable impact on control performance and on how the control strategy needs to be designed. As a result, the clear separation between designing the controller and then implementing it in software in the car no longer works well. This project aims to develop both the theoretical foundations and the tool support to adapt this design flow to emerging automotive control strategies and embedded systems. This will not only result in more cost-effective design of future cars, but will also help with certifying the implemented controllers, thereby leading to safer autonomous cars.

In particular, the goal is to automate the synthesis and implementation of control algorithms on distributed embedded architectures consisting of different types of multicore processors, GPUs, FPGA-based accelerators, different communication buses, gateways, and sensors associated with compute-intensive processing. Starting with specifications of plants, control objectives, controller templates, and a partially specified implementation architecture, this project seeks to synthesize both controller and implementation-architecture parameters that meet all control objectives and resource constraints. Toward this, a variety of techniques from switched control, interface compatibility checking, and scheduling of multi-mode systems will be used, bringing together control theory, real-time systems, program analysis, and mathematical optimization. In collaboration with General Motors, this project will build a tool chain that integrates controller design tools like Matlab/Simulink with standard embedded systems design and configuration tools. This project will demonstrate the benefits of this new design flow and tool support by addressing a set of challenge problems from General Motors.
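A toy sketch of the co-design tradeoff follows: sweep candidate sampling periods for an assumed double-integrator plant, design a discrete-time LQR at each period, and keep the best period that also respects a processor-utilization budget shared with other tasks. The plant, weights, execution time, and budget are all invented for illustration; the project's synthesis handles far richer controllers and architectures than this search.

```python
# Hedged sketch (assumed numbers): joint choice of sampling period and LQR
# controller under a simple implementation (utilization) constraint.
import numpy as np
from scipy.linalg import solve_discrete_are

wcet = 0.004                  # assumed worst-case execution time of the task (s)
util_budget = 0.30            # fraction of the processor this task may use
Qc, Rc = np.diag([1.0, 0.1]), np.array([[0.01]])
x0 = np.array([[1.0], [0.0]])

best = None
for h in [0.005, 0.01, 0.02, 0.05, 0.1]:
    if wcet / h > util_budget:                  # implementation constraint
        continue
    A = np.array([[1.0, h], [0.0, 1.0]])        # discretized double integrator
    B = np.array([[h * h / 2.0], [h]])
    # Scale the weights by h so costs at different periods approximate the same
    # continuous-time cost and can be compared fairly.
    P = solve_discrete_are(A, B, Qc * h, Rc * h)
    cost = float(x0.T @ P @ x0)                 # infinite-horizon cost from x0
    if best is None or cost < best[1]:
        best = (h, cost)

print("chosen period (s) and approximate LQR cost:", best)
```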

Samarjit Chakraborty
Samarjit Chakraborty is a William R. Kenan, Jr. Distinguished Professor and Chair of the Department of Computer Science at UNC Chapel Hill. Prior to coming here he was a professor of Electrical Engineering at the Technical University of Munich in Germany from 2008 – 2019, where he held the Chair of Real-Time Computer Systems. From 2011 – 2016 he additionally led a research program on embedded systems for electric vehicles at the TUM CREATE Center for Electromobility in Singapore, where he also served as a Scientific Advisor. He was an assistant professor of Computer Science at the National University of Singapore from 2003 – 2008, before joining TUM. He obtained his PhD from ETH Zurich in 2003. His research is broadly in embedded and cyber-physical systems design. He received the 2023 Humboldt Professorship Award from Germany and is a Fellow of the IEEE.
Performance Period: 01/01/2021 - 12/31/2023
Institution: University of North Carolina at Chapel Hill
Sponsor: National Science Foundation
Award Number: 2038960
CPS: Medium: GOALI: Enabling Scalable Real-Time Certification for AI-Oriented Safety-Critical Systems
Lead PI:
James Anderson
Co-PI:
Abstract

In avionics, an evolution is underway to endow aircraft with "thinking" capabilities through the use of artificial-intelligence (AI) techniques. This evolution is being fueled by the availability of high-performance embedded hardware platforms, typically in the form of multicore machines augmented with accelerators that can speed up certain computations. Unfortunately, avionics software certification processes have not kept pace with this evolution. These processes are rooted in the twin concepts of time and space partitioning: different system components are prevented from interfering with each other as they execute (time) and as they access memory (space). On a single-processor machine, these concepts can be simply applied to decompose a system into smaller components that can be specified, implemented, and understood separately. On a multicore+accelerator platform, however, component isolation is much more difficult to achieve efficiently. This fact points to a looming dilemma: unless reasonable notions of component isolation can be provided in this context, certifying AI-based avionics systems will likely be impractical. This project will address this dilemma through multi-faceted research in the CPS Core Research Areas of Real-Time Systems, Safety, Autonomy, and CPS System Architecture. It will contribute to Real-Time Systems and Safety by producing new infrastructure and analysis tools for component-based avionics applications that must pass real-time certification. It will contribute to Safety and Autonomy by targeting the design of autonomous aircraft that must exhibit certifiably safe and dependable behavior. It will contribute to CPS System Architecture by designing new methods for decomposing complex AI-oriented avionics workloads into components that are isolated in space and time.

The intellectual merit of this project lies in producing a framework for supporting components on multicore+accelerator platforms in AI-based avionics use cases. This framework will balance the need to isolate components in time and space with the need for efficient execution. Component provisioning hinges on execution time bounds for individual programs. New timing-analysis methods will be produced for obtaining these bounds at different safety levels. Research will also be conducted on performance/timeliness/accuracy tradeoffs that arise when refactoring time-limited AI computations for perception, planning, and control into components. Experimental evaluations of the proposed framework will be conducted using an autonomous aircraft simulator, commercial drones, and facilities at Northrop Grumman Corp. More broadly, this project will contribute to the continuous push toward more semi-autonomous and autonomous functions in avionics. This push began 40 years ago with auto-pilot functions and is being fueled today by advances in AI software. This project will focus on a key aspect of certifying this software: validating real-time correctness. The results that are produced will be made available to the world at large through open-source software. This software will include operating-system extensions for supporting components in an isolated way and mechanisms for forming components and assessing their timing correctness. Additionally, a special emphasis will be placed on outreach efforts that target underrepresented groups, and on increasing female participation in computing at the undergraduate level.
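For context, the sketch below shows a standard single-core response-time analysis for fixed-priority preemptive tasks, the classical baseline that the project's multicore+accelerator analyses must extend; the task parameters are invented, and this is not the project's framework.

```python
# Textbook response-time analysis: R_i = C_i + sum over higher-priority tasks j
# of ceil(R_i / T_j) * C_j, iterated to a fixed point. Single core, no
# accelerator or memory interference. Task parameters are assumed.
import math

# (worst-case execution time C, period/deadline T), highest priority first.
tasks = [(2, 10), (3, 15), (5, 40)]

def response_times(tasks):
    results = []
    for i, (C_i, T_i) in enumerate(tasks):
        R = C_i
        while True:
            interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
            R_next = C_i + interference
            if R_next == R:                # converged: worst-case response time
                break
            if R_next > T_i:               # deadline miss: not schedulable
                R_next = None
                break
            R = R_next
        results.append(R_next)
    return results

print(response_times(tasks))   # expect [2, 5, 10] for this task set
```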

Performance Period: 09/01/2020 - 07/10/2023
Institution: University of North Carolina at Chapel Hill
Sponsor: National Science Foundation
Award Number: 2038855
CPS: SMALL: Formal Methods for Safe, Efficient, and Transferable Learning-enabled Autonomy
Lead PI:
Yiannis Kantaros
Abstract

Deep Reinforcement Learning (RL) has emerged as a prominent tool to control cyber-physical systems (CPS) with highly nonlinear, stochastic, and unknown dynamics. Nevertheless, our current lack of understanding of when, how, and why RL works calls for new synthesis and analysis tools for safety-critical CPS driven by RL controllers; this is the main scope of this project. The primary focus of this research is on mobile robot systems. Such CPS are often driven by RL controllers due to their inherently complex - and possibly uncertain or unknown - dynamics, unknown exogenous disturbances, or the need for real-time decision making. Typically, RL-based control design methods are data-inefficient, cannot be safely transferred to new mission and safety requirements or new environments, and often lack performance guarantees. This research aims to address these limitations, resulting in a novel paradigm in safe autonomy for CPS with RL controllers. Wide availability of the developed autonomy methods can enable safety-critical applications for CPS with significant societal impact on, e.g., environmental monitoring, infrastructure inspection, autonomous driving, and healthcare. The broader impacts of this research include its educational agenda involving K-12, undergraduate, and graduate-level education.

To achieve the research goal of safe, efficient, and transferable RL, three tightly coupled research thrusts are pursued: (i) accelerated and safe reinforcement learning for temporal logic control objectives; (ii) safe transfer learning for temporal logic control objectives; and (iii) compositional verification of temporal logic properties for CPS with neural network (NN) controllers. The technical approach in these thrusts relies on tools drawn from formal methods, machine learning, and control theory, and requires overcoming intellectual challenges related to the integration of computation, control, and sensing. The developed autonomy methods will be validated and demonstrated on mobile aerial and ground robots in autonomous surveillance, delivery, and mobile manipulation tasks.
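To illustrate thrust (i) in miniature (all environment details assumed, not from the project), the sketch below runs tabular Q-learning on the product of a small grid world and a hand-built three-state automaton for a reach-avoid specification ("eventually reach the goal while never touching the hazard"), rewarding transitions that move the automaton to its accepting state.

```python
# Toy illustration under many assumptions: RL for a reach-avoid temporal logic
# task via the product of the environment and a specification automaton.
import numpy as np

rng = np.random.default_rng(0)
SIZE, GOAL, HAZARD = 4, (3, 3), (1, 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def dfa_step(q, cell):          # automaton: 0 = searching, 1 = accepted, 2 = violated
    if q != 0:
        return q
    if cell == HAZARD:
        return 2
    if cell == GOAL:
        return 1
    return 0

Q = np.zeros((SIZE, SIZE, 3, len(ACTIONS)))
for episode in range(3000):
    (x, y), q = (0, 0), 0
    for _ in range(50):
        a = rng.integers(4) if rng.random() < 0.2 else int(np.argmax(Q[x, y, q]))
        dx, dy = ACTIONS[a]
        if rng.random() < 0.1:                  # actuation slip
            dx, dy = ACTIONS[rng.integers(4)]
        nx, ny = min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1)
        nq = dfa_step(q, (nx, ny))
        r = 1.0 if nq == 1 and q == 0 else (-1.0 if nq == 2 and q == 0 else 0.0)
        target = r + 0.95 * np.max(Q[nx, ny, nq]) * (nq == 0)
        Q[x, y, q, a] += 0.1 * (target - Q[x, y, q, a])
        (x, y), q = (nx, ny), nq
        if q != 0:                              # automaton is absorbing
            break

# Greedy rollout from the start: does it reach the goal while avoiding the hazard?
(x, y), q = (0, 0), 0
path = [(x, y)]
for _ in range(20):
    dx, dy = ACTIONS[int(np.argmax(Q[x, y, q]))]
    x, y = min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1)
    q = dfa_step(q, (x, y))
    path.append((x, y))
    if q != 0:
        break
print("accepted:", q == 1, " path:", path)
```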
 

Performance Period: 04/01/2023 - 03/31/2026
Institution: Washington University
Sponsor: National Science Foundation
Award Number: 2231257