Collaborative Research: CPS: Medium: SMAC-FIRE: Closed-Loop Sensing, Modeling and Communications for WildFIRE
Lead PI:
Janice Coen
Abstract
Increases in temperatures and in drought duration and intensity due to climate change, together with the expansion of wildland-urban interfaces, have dramatically increased the frequency and intensity of forest fires, with devastating effects on lives, property, and the environment. To address this challenge, this project's goal is to design a network of airborne drones and wireless sensors that can aid in initial wildfire localization and mapping, near-term prediction of fire progression, and communications support for firefighting personnel on the ground. Two key aspects differentiate the system from prior work: (1) it leverages and subsequently updates detailed three-dimensional models of the environment, including the effects of fuel type and moisture state, terrain, and atmospheric/wind conditions, in order to provide the most timely and accurate predictions of fire behavior possible, and (2) it adapts to hazardous and rapidly changing conditions, optimally balancing the need for wide-area coverage against the need to maintain communication links with personnel in remote locations. The science and engineering developed under this project can be adapted to many applications beyond wildfires, including structural fires in urban and suburban settings, natural or man-made emergencies involving radiation or airborne chemical leaks, "dirty bombs" that release chemical or biological agents, and tracking of highly localized atmospheric conditions surrounding imminent or ongoing extreme weather events. The system developed under this project will enable more rapid localization and situational awareness of wildfires at their earliest stages, better predictions of both local, near-term and event-scale behavior, better situational awareness and coordination of personnel and resources, and increased safety for firefighters on the ground. Models ranging from simple algebraic relationships based on wind velocity to more complex time-dependent coupled fluid dynamics-fire physics models will be used to anticipate fire behavior. These models are hampered by stochastic processes, such as the lofting of burning embers that ignite new spot fires, which cause errors to grow rapidly with time. This project is focused on closing the loop using sensor data provided by airborne drones and ground-based sensors (GBS). The models inform the sensing by anticipating rapid growth of problematic phenomena, and the subsequent sensing updates the models, providing local winds and spot fire locations. Closing this loop as quickly as possible is critical to mitigating the fire's impact. The proposed system integrates advanced fire modeling tools with mobile drones, wireless GBS, and high-level human interaction for both the initial attack of a wildfire event and subsequent ongoing support.
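As a rough illustration of the closed loop described above, the sketch below (Python) runs a toy predict-observe-update cycle: a simple wind-driven rate-of-spread surrogate stands in for the coupled weather-fire model, a noisy value stands in for a drone-derived fire-front retrieval, and a weighted blend stands in for data assimilation. The function names, the spread relation, and all numbers are illustrative assumptions, not the project's models.

```python
import random

def predict_front(position_m, wind_ms, dt_s, spread_coeff=0.08):
    """Advance the fire-front position with a toy wind-driven rate of spread."""
    return position_m + spread_coeff * wind_ms * dt_s

def drone_observation(true_position_m, noise_std_m=15.0):
    """Simulate a noisy front location retrieved from airborne IR imagery."""
    return true_position_m + random.gauss(0.0, noise_std_m)

def assimilate(predicted_m, observed_m, gain=0.6):
    """Blend forecast and observation; the gain reflects relative confidence."""
    return predicted_m + gain * (observed_m - predicted_m)

state_m, truth_m = 0.0, 0.0
for cycle in range(6):                          # six 10-minute sensing/modeling cycles
    wind_est = 6.0 + random.uniform(-1.0, 1.0)  # local wind estimate from sensors (m/s)
    truth_m = predict_front(truth_m, 7.0, 600.0)       # the "real" fire evolves
    state_m = predict_front(state_m, wind_est, 600.0)  # model forecast
    state_m = assimilate(state_m, drone_observation(truth_m))  # close the loop
    print(f"cycle {cycle}: forecast front at {state_m:7.1f} m (truth {truth_m:7.1f} m)")
```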
Janice Coen
Dr. Janice Coen holds the positions of Project Scientist in the Mesoscale and Microscale Meteorology Laboratory at the National Center for Atmospheric Research in Boulder, Colorado, and Senior Research Scientist at the University of San Francisco in San Francisco, California. She received a B.S. in Engineering Physics from Grove City College and an M.S. and Ph.D. from the Department of Geophysical Sciences at the University of Chicago. She studies fire behavior and its interaction with weather using coupled weather-fire CFD models and flow analysis of high-speed infrared (IR) fire imagery. Her recent work has investigated the mechanisms leading to extreme wildfire events, fine-scale wind extrema that lead to ignitions by the electric grid, and the integration of coupled models with satellite active-fire detection data to forecast the growth of landscape-scale wildfires.
Performance Period: 07/01/2022 - 06/30/2025
Institution: University Corporation for Atmospheric Research
Sponsor: National Science Foundation
Award Number: 2209994
CPS:Medium:Interactive Human-Drone Partnerships in Emergency Response Scenarios
Lead PI:
Jane Huang
Co-PI:
Abstract
Small unmanned aerial, land, or submersible vehicles (drones) are increasingly used to support emergency response scenarios such as search-and-rescue, structural building fires, and medical deliveries. However, in current practice drones are typically controlled by a single operator, thereby significantly limiting their potential. The proposed work will deliver a novel DroneResponse platform, representing the next generation of emergency response solutions in which semi-autonomous and self-coordinating cohorts of drones will serve as fully-fledged members of an emergency response team. Drones will play diverse roles in each emergency response scenario - for example, using thermal imagery to map the structural integrity of a burning building, methodically searching an area for a child lost in a cornfield, or delivering a life-saving device to a person caught in a fast-flowing river. The benefits of this project will be realized by urban and rural communities through enhanced emergency response capabilities. Practical lessons learned from this work will broadly contribute to the conversation around best practices for drone deployment in the community, including issues related to privacy, safety, and equity. Achieving the DroneResponse vision involves delivering novel scene recognition algorithms capable of recreating high-fidelity models of the environment under less-than-ideal environmental conditions. The work addresses non-trivial cyber-physical systems (CPS) research challenges associated with (1) scene recognition, including image merging, dealing with uncertainty, and geolocating objects; (2) exploring, designing, and evaluating human-CPS interfaces that provide situational awareness and empower users to define missions and communicate current mission objectives and achievements; (3) developing algorithms to support drone autonomy and runtime adaptation with respect to mission goals established by humans; (4) developing a framework for coordinating image recognition algorithms with real-time drone command and control; and finally (5) evaluating DroneResponse in real-world scenarios. Researchers will leverage user-centered design principles to develop human-CPS interfaces that support situational awareness designed to enable emergency responders to make informed decisions. The end goal is to empower human operators and drones to work collaboratively to save lives, minimize property damage, gather critical information, and contribute to the success of a mission across diverse emergency scenarios.
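One of the scene-recognition subtasks listed above is geolocating detected objects. The sketch below (Python) illustrates the underlying geometry under a deliberately crude flat-ground assumption: project the camera boresight from the drone's position, altitude, heading, and pitch onto the ground plane. The function and its parameters are hypothetical and are not part of the DroneResponse pipeline.

```python
import math

def geolocate(drone_x_m, drone_y_m, altitude_m, heading_deg, camera_pitch_deg):
    """Return the (x, y) ground point the camera boresight hits, assuming flat terrain."""
    pitch = math.radians(camera_pitch_deg)       # angle below the horizontal
    if pitch <= 0:
        raise ValueError("camera must point below the horizon")
    ground_range = altitude_m / math.tan(pitch)  # horizontal distance to the target
    heading = math.radians(heading_deg)          # 0 deg = north (+y), 90 deg = east (+x)
    return (drone_x_m + ground_range * math.sin(heading),
            drone_y_m + ground_range * math.cos(heading))

# Drone at (100 m, 200 m), 60 m altitude, facing north-east, camera pitched 35 deg down.
print(geolocate(100.0, 200.0, 60.0, 45.0, 35.0))
```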
Performance Period: 10/01/2019 - 09/30/2024
Institution: University of Notre Dame
Sponsor: National Science Foundation
Award Number: 1931962
CPS: DFG Joint: Medium: Collaborative Research: Data-Driven Secure Holonic control and Optimization for the Networked CPS (aDaptioN)
Lead PI:
Anuradha Annaswamy
Abstract
The proposed decentralized/distributed control and optimization for critical cyber-physical networked infrastructures (CPNI) will improve the robustness, security, and resiliency of the electric distribution grid, which directly impacts the lives of citizens and the national economy. The proposed control and optimization architectures are flexible, adapt to changing operating scenarios, respond quickly and accurately, provide better scalability and robustness, and safely operate the system even when pushed towards the edges, by leveraging massive sensor data, distributed computation, and edge computing. The algorithms and platform will be released open source and royalty-free, and the project team will work with industry members and researchers to promote wider usage of the developed algorithms for other CPNI. Artifacts developed as part of the proposed work will be integrated into existing related undergraduate and graduate courses. Undergraduate students will be engaged in research through supplements, and underrepresented and pre-engineering students will be engaged through existing outreach activities at the home institutions, including the Imagine U program, 4-H Teens summer camp programs, and the Pacific Northwest Louis Stokes Alliance for Minority Participation. Additionally, the project team plans to organize a workshop in the third year to demonstrate the fundamental concepts and applications of the proposed control and optimization architecture to advance CPNI. The developed solutions can be extended to a range of applications in multiple CPNIs beyond the use cases discussed in the proposed work. While the proposed control architecture with edge computing offers great potential, coordinating decentralized control and optimization is extremely challenging due to variable network and computational delays, many possible interleavings of message arrivals, disparate failure modes of components, and cybersecurity threats, leading to several fundamental theoretical problems. The proposed work offers a number of novel solutions, including (a) adaptive and delay-aware control algorithms, (b) predictive control and distributed optimization with realistic cyber-physical constraints, (c) threat sharing and data-driven detection and mitigation for cybersecurity, (d) coordination and management of computing nodes, and (e) knowledge learning and sharing. The proposed solutions will be a step towards advancing fundamentals in CPNI and engineering the next generation of CPNI. The proposed work also aims to use a high-fidelity testbed to evaluate the developed algorithms and tools for a specific CPNI: the electric distribution grid.
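As a loose illustration of the coordination challenge described above, the sketch below (Python) runs consensus-style averaging of local set-points among three networked nodes, with randomly delayed messages standing in for variable communication delay. The topology, gain, and delay model are illustrative assumptions, not the aDaptioN algorithms.

```python
import random

neighbors = {0: [1], 1: [0, 2], 2: [1]}            # a small three-node feeder section
setpoint_kw = {0: 120.0, 1: 80.0, 2: 40.0}         # local power set-points
last_seen = {n: dict(setpoint_kw) for n in neighbors}  # possibly stale neighbor copies

for step in range(30):
    for node, nbrs in neighbors.items():
        for nbr in nbrs:
            if random.random() < 0.7:              # message arrives; otherwise it is delayed
                last_seen[node][nbr] = setpoint_kw[nbr]
        avg_nbr = sum(last_seen[node][n] for n in nbrs) / len(nbrs)
        setpoint_kw[node] += 0.3 * (avg_nbr - setpoint_kw[node])   # consensus update

print({n: round(v, 1) for n, v in setpoint_kw.items()})  # values approach a common level
```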
Anuradha Annaswamy

Dr. Anuradha Annaswamy received the Ph.D. degree in Electrical Engineering from Yale University in 1985. She has been a member of the faculty at Yale, Boston University, and MIT, where she is currently the director of the Active-Adaptive Control Laboratory and a Senior Research Scientist in the Department of Mechanical Engineering. Her research interests pertain to adaptive control theory and applications to aerospace and automotive control, active control of noise in thermo-fluid systems, control of autonomous systems, decision and control in smart grids, and co-design of control and distributed embedded systems. She is the co-editor of the IEEE CSS report Impact of Control Technology: Overview, Success Stories, and Research Challenges (2011), and served as Editor-in-Chief of the IEEE Vision document on Smart Grid and the Role of Control Systems, published in 2013. Dr. Annaswamy has received several awards, including the George Axelby Outstanding Paper Award from the IEEE Control Systems Society, the Presidential Young Investigator Award from the National Science Foundation, the Hans Fischer Senior Fellowship from the Institute for Advanced Study at the Technische Universität München in 2008, and the Donald Julius Groen Prize for 2008 from the Institution of Mechanical Engineers. Dr. Annaswamy is a Fellow of the IEEE and a member of AIAA.

Performance Period: 01/01/2020 - 12/31/2023
Institution: Massachusetts Institute of Technology
Sponsor: National Science Foundation
Award Number: 1932406
CPS: Medium: GOALI: Design Automation for Automotive Cyber-Physical Systems
Lead PI:
Samarjit Chakraborty
Co-PI:
Abstract

This project aims to transform the software development process in modern cars, which are witnessing significant innovation with many new autonomous functions being introduced, culminating in a fully autonomous vehicle. Most of these new features are implemented in software, at the heart of which lie several control algorithms. Such control algorithms operate in a feedback loop, involving sensing the state of the plant or the system to be controlled, computing a control input, and actuating the plant in order to enforce a desired behavior on it. Examples range from brake and engine control to cruise control, automated parking, and fully autonomous driving. Current development flows start with mathematically designing a controller, followed by implementing it in software on the embedded systems existing in a car. This flow has worked well in the past, when automotive embedded systems were simple, with few processors, communication buses, and simple sensors. The control algorithms were simple as well, and important functions were largely implemented by mechanical subsystems. But modern cars have over 100 processors connected by several miles of cables, and multiple sensors like cameras, radars, and lidars, whose data needs complex processing before it can be used by a controller. Further, the control algorithms themselves are also more complex, since they need to implement new autonomous features that did not exist before. As a result, computation, communication, and memory accesses in such a complex hardware/software system can now be organized in many different ways, each associated with different tradeoffs in accuracy, timing, and resource requirements. These, in turn, have a considerable impact on control performance and on how the control strategy needs to be designed. Consequently, the clear separation between designing the controller and then implementing it in software in the car no longer works well. This project aims to develop both the theoretical foundations and the tool support to adapt this design flow to emerging automotive control strategies and embedded systems. This will not only result in more cost-effective design of future cars, but will also help with certifying the implemented controllers, thereby leading to safer autonomous cars.
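As a concrete reminder of the sense/compute/actuate loop mentioned above, the sketch below (Python) closes a toy feedback loop: a PI cruise controller with actuator saturation and simple anti-windup regulates a crude longitudinal vehicle model. The plant model, gains, and 20 ms period are illustrative assumptions, not an automotive design.

```python
def plant_step(speed_ms, throttle, dt_s, drag=0.1, gain=3.0):
    """Toy longitudinal dynamics: throttle accelerates, drag decelerates."""
    return speed_ms + (gain * throttle - drag * speed_ms) * dt_s

target_ms, speed_ms, integral = 25.0, 0.0, 0.0
kp, ki, dt = 0.5, 0.2, 0.02
for _ in range(3000):                               # 60 s at a 20 ms control period
    error = target_ms - speed_ms                    # sense the plant state
    u = kp * error + ki * integral                  # compute the control input
    throttle = max(0.0, min(1.0, u))                # actuator saturation
    if throttle == u:                               # simple anti-windup
        integral += error * dt
    speed_ms = plant_step(speed_ms, throttle, dt)   # actuate the plant
print(f"speed after 60 s: {speed_ms:.2f} m/s (target {target_ms:.1f} m/s)")
```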

In particular, the goal is to automate the synthesis and implementation of control algorithms on distributed embedded architectures consisting of different types of multicore processors, GPUs, FPGA-based accelerators, different communication buses, gateways, and sensors associated with compute-intensive processing. Starting with specifications of plants, control objectives, controller templates, and a partially-specified implementation architecture, this project seeks to synthesize both controller and implementation-architecture parameters that meet all control objectives and resource constraints. Towards this, a variety of techniques from switched control, interface compatibility checking, and scheduling of multi-mode systems, bringing together control theory, real-time systems, program analysis, and mathematical optimization, will be used. In collaboration with General Motors, this project will build a tool chain that integrates controller design tools like Matlab/Simulink with standard embedded systems design and configuration tools. This project will demonstrate the benefits of this new design flow and tool support by addressing a set of challenge problems from General Motors.
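The sketch below (Python) gives a flavor of the kind of joint exploration of controller and platform parameters described above: it enumerates candidate sampling periods for three control tasks, discards choices that violate a single-core rate-monotonic utilization bound, and keeps the schedulable choice with the lowest toy control cost. The task set, cost model, and exhaustive search are illustrative assumptions, not the project's tool chain.

```python
from itertools import product
from math import inf

tasks = {                        # WCET (ms) and candidate sampling periods (ms) per task
    "engine":   {"wcet": 2.0, "periods": [5.0, 10.0, 20.0]},
    "brake":    {"wcet": 1.5, "periods": [5.0, 10.0]},
    "steering": {"wcet": 3.0, "periods": [10.0, 20.0, 40.0]},
}

def control_cost(period_ms):
    """Toy quality-of-control penalty: slower sampling degrades performance."""
    return period_ms ** 1.5

n = len(tasks)
rm_bound = n * (2 ** (1.0 / n) - 1)          # Liu & Layland bound, ~0.78 for three tasks
best_choice, best_cost = None, inf
for choice in product(*(t["periods"] for t in tasks.values())):
    util = sum(t["wcet"] / p for t, p in zip(tasks.values(), choice))
    if util > rm_bound:                      # violates the processor resource constraint
        continue
    cost = sum(control_cost(p) for p in choice)
    if cost < best_cost:
        best_choice, best_cost = dict(zip(tasks, choice)), cost

print("selected periods (ms):", best_choice, "| total control cost:", round(best_cost, 1))
```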

Samarjit Chakraborty
Samarjit Chakraborty is a William R. Kenan, Jr. Distinguished Professor and Chair of the Department of Computer Science at UNC Chapel Hill. Prior to joining UNC, he was a professor of Electrical Engineering at the Technical University of Munich (TUM) in Germany from 2008 to 2019, where he held the Chair of Real-Time Computer Systems. From 2011 to 2016 he additionally led a research program on embedded systems for electric vehicles at the TUM CREATE Center for Electromobility in Singapore, where he also served as a Scientific Advisor. Before joining TUM, he was an assistant professor of Computer Science at the National University of Singapore from 2003 to 2008. He obtained his PhD from ETH Zurich in 2003. His research is broadly in embedded and cyber-physical systems design. He received the 2023 Humboldt Professorship Award from Germany and is a Fellow of the IEEE.
Performance Period: 01/01/2021 - 12/31/2023
Institution: University of North Carolina at Chapel Hill
Sponsor: National Science Foundation
Award Number: 2038960
CPS: Medium: GOALI: Enabling Scalable Real-Time Certification for AI-Oriented Safety-Critical Systems
Lead PI:
James Anderson
Co-PI:
Abstract

In avionics, an evolution is underway to endow aircraft with "thinking" capabilities through the use of artificial-intelligence (AI) techniques. This evolution is being fueled by the availability of high-performance embedded hardware platforms, typically in the form of multicore machines augmented with accelerators that can speed up certain computations. Unfortunately, avionics software certification processes have not kept pace with this evolution. These processes are rooted in the twin concepts of time and space partitioning: different system components are prevented from interfering with each other as they execute (time) and as they access memory (space). On a single-processor machine, these concepts can be simply applied to decompose a system into smaller components that can be specified, implemented, and understood separately. On a multicore+accelerator platform, however, component isolation is much more difficult to achieve efficiently. This fact points to a looming dilemma: unless reasonable notions of component isolation can be provided in this context, certifying AI-based avionics systems will likely be impractical. This project will address this dilemma through multi-faceted research in the CPS Core Research Areas of Real-Time Systems, Safety, Autonomy, and CPS System Architecture. It will contribute to Real-Time Systems and Safety by producing new infrastructure and analysis tools for component-based avionics applications that must pass real-time certification. It will contribute to Safety and Autonomy by targeting the design of autonomous aircraft that must exhibit certifiably safe and dependable behavior. It will contribute to CPS System Architecture by designing new methods for decomposing complex AI-oriented avionics workloads into components that are isolated in space and time.
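As a simplified illustration of the time-partitioning idea discussed above, the sketch below (Python) runs a fixed major frame divided into per-component windows, so that an overrunning component cannot consume another component's time. The window sizes and stand-in workloads are assumptions made for illustration, in the spirit of (but not an implementation of) avionics partitioning standards.

```python
import time

schedule = [("perception", 0.030), ("planning", 0.020), ("control", 0.010)]  # seconds

def run_component(name, budget_s):
    """Busy-loop standing in for real work; it stops when its window budget is spent."""
    start, iterations = time.perf_counter(), 0
    while time.perf_counter() - start < budget_s:
        iterations += 1                      # the component only ever gets its own window
    return iterations

for frame in range(3):                       # three 60 ms major frames
    for name, budget in schedule:
        work = run_component(name, budget)
        print(f"frame {frame}: {name:10s} used its {budget * 1000:.0f} ms window "
              f"({work} busy-loop iterations)")
```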

The intellectual merit of this project lies in producing a framework for supporting components on multicore+accelerator platforms in AI-based avionics use cases. This framework will balance the need to isolate components in time and space with the need for efficient execution. Component provisioning hinges on execution time bounds for individual programs. New timing-analysis methods will be produced for obtaining these bounds at different safety levels. Research will also be conducted on performance/timeliness/accuracy tradeoffs that arise when refactoring time-limited AI computations for perception, planning, and control into components. Experimental evaluations of the proposed framework will be conducted using an autonomous aircraft simulator, commercial drones, and facilities at Northrop Grumman Corp. More broadly, this project will contribute to the continuous push toward more semi-autonomous and autonomous functions in avionics. This push began 40 years ago with auto-pilot functions and is being fueled today by advances in AI software. This project will focus on a key aspect of certifying this software: validating real-time correctness. The results that are produced will be made available to the world at large through open-source software. This software will include operating-system extensions for supporting components in an isolated way and mechanisms for forming components and assessing their timing correctness. Additionally, a special emphasis will be placed on outreach efforts that target underrepresented groups, and on increasing female participation in computing at the undergraduate level.
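Since component provisioning hinges on execution-time bounds, the sketch below (Python) shows one deliberately simplistic, measurement-based way to derive a budget: time a stand-in workload repeatedly and add a margin that grows with the required assurance level. The margins, levels, and workload are assumptions for illustration; certified timing analysis is far more involved than this.

```python
import random
import statistics
import time

def sample_execution_times(fn, runs=200):
    """Measure wall-clock execution times of fn over repeated runs."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times

def provision_budget(times_s, level):
    """Max observed time inflated by an assumed per-assurance-level margin."""
    margins = {"low": 1.2, "medium": 1.5, "high": 2.0}
    return max(times_s) * margins[level]

def dummy_perception_task():
    """Stand-in workload with input-dependent execution time."""
    sum(i * i for i in range(random.randint(5_000, 20_000)))

obs = sample_execution_times(dummy_perception_task)
print(f"observed max {max(obs) * 1e3:.2f} ms, mean {statistics.mean(obs) * 1e3:.2f} ms")
for level in ("low", "medium", "high"):
    print(f"{level:6s} assurance budget: {provision_budget(obs, level) * 1e3:.2f} ms")
```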

Performance Period: 09/01/2020 - 07/10/2023
Institution: University of North Carolina at Chapel Hill
Sponsor: National Science Foundation
Award Number: 2038855
CPS: SMALL: Formal Methods for Safe, Efficient, and Transferable Learning-enabled Autonomy
Lead PI:
Yiannis Kantaros
Abstract

Deep Reinforcement Learning (RL) has emerged as a prominent tool for controlling cyber-physical systems (CPS) with highly non-linear, stochastic, and unknown dynamics. Nevertheless, our current lack of understanding of when, how, and why RL works necessitates new synthesis and analysis tools for safety-critical CPS driven by RL controllers; this is the main scope of this project. The primary focus of this research is on mobile robot systems. Such CPS are often driven by RL controllers due to their inherently complex, and possibly uncertain or unknown, dynamics, unknown exogenous disturbances, or the need for real-time decision making. Typically, RL-based control design methods are data-inefficient, cannot be safely transferred to new mission and safety requirements or new environments, and often lack performance guarantees. This research aims to address these limitations, resulting in a novel paradigm in safe autonomy for CPS with RL controllers. Wide availability of the developed autonomy methods can enable safety-critical applications of CPS with significant societal impact in, e.g., environmental monitoring, infrastructure inspection, autonomous driving, and healthcare. The broader impacts of this research include its educational agenda involving K-12, undergraduate, and graduate education.

To achieve the research goal of safe, efficient, and transferable RL, three tightly coupled research thrusts are pursued: (i) accelerated and safe reinforcement learning for temporal logic control objectives; (ii) safe transfer learning for temporal logic control objectives; and (iii) compositional verification of temporal logic properties for CPS with neural network (NN) controllers. The technical approach in these thrusts relies on tools drawn from formal methods, machine learning, and control theory, and it requires overcoming intellectual challenges related to the integration of computation, control, and sensing. The developed autonomy methods will be validated and demonstrated on mobile aerial and ground robots in autonomous surveillance, delivery, and mobile manipulation tasks.
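To make the idea of a temporal logic control objective concrete, the sketch below (Python) encodes the task "visit A, then visit B" as a three-state automaton and trains a tabular Q-learner on the product of a small corridor world and the automaton state, rewarding automaton progress. The environment, automaton, and learning parameters are illustrative assumptions, not the methods developed in this project.

```python
import random

CELLS, A, B = 7, 0, 6                    # 1-D corridor; A at the left end, B at the right

def automaton_step(q, cell):             # q=0: must visit A, q=1: must visit B, q=2: done
    if q == 0 and cell == A:
        return 1
    if q == 1 and cell == B:
        return 2
    return q

qtable = {}                              # product-state action values, optimistic default
def qval(cell, q, a):
    return qtable.get((cell, q, a), 1.0)
def best(cell, q):
    return max((-1, +1), key=lambda a: qval(cell, q, a))

for _ in range(2000):                    # training episodes
    cell, q = 3, 0                       # start mid-corridor with nothing visited yet
    for _ in range(50):
        a = random.choice((-1, +1)) if random.random() < 0.2 else best(cell, q)
        nxt = min(CELLS - 1, max(0, cell + a))
        nq = automaton_step(q, nxt)
        reward = 1.0 if nq > q else -0.01            # reward only automaton progress
        target = reward + 0.95 * qval(nxt, nq, best(nxt, nq))
        qtable[(cell, q, a)] = qval(cell, q, a) + 0.1 * (target - qval(cell, q, a))
        cell, q = nxt, nq
        if q == 2:
            break

cell, q, path = 3, 0, []                 # greedy rollout: left to A, then right to B
for _ in range(20):
    cell = min(CELLS - 1, max(0, cell + best(cell, q)))
    q = automaton_step(q, cell)
    path.append(cell)
    if q == 2:
        break
print("greedy path:", path, "| task completed:", q == 2)
```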
 

Performance Period: 04/01/2023 - 03/31/2026
Institution: Washington University
Sponsor: National Science Foundation
Award Number: 2231257
Collaborative Research: CPS: Medium: Sensor Attack Detection and Recovery in Cyber-Physical Systems
Lead PI:
Insup Lee
Co-PI:
Abstract

New vulnerabilities arise in Cyber-Physical Systems (CPS) as new technologies are integrated to interact with and control physical systems. In addition to software and network attacks, sensor attacks are a crucial security risk in CPS, in which an attacker alters sensing information to negatively interfere with the physical system. Acting on malicious sensor information can cause serious consequences. While many research efforts have been devoted to protecting CPS from sensor attacks, several critical problems remain unresolved. First, existing attack-detection work tends to minimize detection delay and false alarms at the same time; this goal, however, is not always achievable due to the inherent trade-off between the two metrics. Second, there has been much work on attack detection, yet a key question remains concerning what to do after detecting an attack. Importantly, a CPS should detect an attack and recover from it before irreparable consequences occur. Third, the interrelation between detection and recovery has received insufficient attention: integrating detection and recovery techniques would result in more effective defenses against sensor attacks.

This project aims to address these key problems by developing novel detection and recovery techniques that achieve timely and safe defense against sensor attacks through real-time, adaptive attack detection and recovery in CPS. First, the project explores new attack detection techniques that can dynamically balance the trade-off between detection delay and the false-alarm rate in a data-driven fashion. In this way, the detector will deliver attack detection with predictable delay while maintaining the usability of the detection approach. Second, the project pursues new recovery techniques that bring the system back to a safe state before a recovery deadline while minimizing degradation to the mission being executed by the system. Third, the project investigates efficient techniques that address attack detection and recovery in a coordinated fashion to significantly improve the response to attacks. Specific research tasks include the development of real-time adaptive sensor attack detection techniques, real-time attack recovery techniques, and techniques for coordinating attack detection and recovery. The developed techniques will be implemented and evaluated on multiple CPS simulators and an autonomous vehicle testbed.
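As a simple illustration of the detection-delay versus false-alarm trade-off discussed above, the sketch below (Python) runs a CUSUM-style detector on the residual between a sensor reading and a model prediction and varies the alarm threshold: lower thresholds react faster but are more prone to false alarms, while higher thresholds are quieter but slower. The toy speed model, noise level, bias attack, and thresholds are illustrative assumptions, not the project's detectors.

```python
import random

def run(threshold, attack_start=150, attack_bias=2.0, steps=300, seed=1):
    """Return the first step at which the CUSUM statistic crosses the threshold."""
    random.seed(seed)                          # same noise sequence for every threshold
    speed, cusum = 20.0, 0.0
    for k in range(steps):
        predicted = speed                      # trivial constant-speed model prediction
        measured = speed + random.gauss(0.0, 0.5)
        if k >= attack_start:
            measured += attack_bias            # attacker injects a constant sensor bias
        residual = abs(measured - predicted)
        cusum = max(0.0, cusum + residual - 0.5)   # drift term suppresses normal noise
        if cusum > threshold:
            return k
    return None

for threshold in (1.0, 3.0, 8.0):
    alarm = run(threshold)
    if alarm is None:
        kind = "no alarm"
    elif alarm < 150:
        kind = "false alarm"
    else:
        kind = f"detection, delay {alarm - 150} steps"
    print(f"threshold {threshold:4.1f}: alarm at step {alarm} ({kind})")
```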

Performance Period: 07/15/2022 - 06/30/2025
Institution: University of Pennsylvania
Award Number: 2143274
CPS: Frontier: Collaborative Research: Cognitive Autonomy for Human CPS: Turning Novices into Experts
Lead PI:
Inseok Hwang
Co-PI:
Abstract

Human interaction with autonomous cyber-physical systems is becoming ubiquitous in consumer products, transportation systems, manufacturing, and many other domains. This project seeks constructive methods to answer the question: How can we design cyber-physical systems to be responsive and personalized, yet also provide high-confidence assurances of reliability? Cyber-physical systems that adapt to the human, and account for the human's ongoing adaptation to the system, could have enormous impact in everyday life as well as in specialized domains (biomedical devices and systems, transportation systems, manufacturing, military applications), by significantly reducing training time, increasing the breadth of the human's experiences with the system prior to operation in a safety-critical environment, improving safety, and improving both human and system performance. Architectures that support dynamic interactions, enabled by advances in computation, communication, and control, can leverage strengths of the human and the automation to achieve new levels of performance and safety.

This research investigates a human-centric architecture for "cognitive autonomy" that couples human psychophysiological and behavioral measures with objective measures of performance. The architecture has four elements: 1) a computable cognitive model that is amenable to control, yet highly customizable, responsive to the human, and context-dependent; 2) a predictive monitor, which provides a priori probabilistic verification as well as real-time short-term predictions to anticipate problematic behaviors and trigger the appropriate action; 3) cognitive control, which collaboratively assures both desired safety properties and human performance metrics; and 4) transparent communication, which helps maintain trust and situational awareness through explanatory reasoning. The education and outreach plan focuses on broadening participation of underrepresented minorities through a culturally responsive undergraduate summer research program, which will also provide insights about learning environments that support participation and retention. All research and educational materials generated by the project are being made available to the public through the project webpage.
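As a rough illustration of the predictive-monitor element described above, the sketch below (Python) extrapolates a lane-keeping state a short horizon ahead and triggers assistance when the prediction, rather than the current state, leaves the safe envelope. The lane geometry, horizon, and intervention rule are illustrative assumptions only.

```python
def predictive_monitor(offset_m, rate_ms, horizon_s=2.0, lane_half_width_m=1.5):
    """Constant-rate extrapolation of lateral lane offset over a short horizon."""
    predicted = offset_m + rate_ms * horizon_s
    return abs(predicted) > lane_half_width_m, predicted

# A slowly drifting trajectory: the monitor should warn before the lane edge is crossed.
offset, rate = 0.2, 0.0
for t in range(10):                        # one-second decision steps
    rate += 0.08                           # the operator drifts toward the lane edge
    offset += rate
    warn, predicted = predictive_monitor(offset, rate)
    action = "trigger assistance" if warn else "monitor only"
    print(f"t={t}s offset={offset:5.2f} m predicted={predicted:5.2f} m -> {action}")
    if warn:
        break
```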

Performance Period: 10/01/2019 - 09/30/2024
Institution: Purdue University
Sponsor: National Science Foundation
Award Number: 1836952
CPS: Medium: Collaborative Research: Mitigation strategies for enhancing performance while maintaining viability in cyber-physical systems
Lead PI:
Ilya V. Kolmanovsky
Abstract

Complex cyber-physical systems (CPS) that operate in dynamic and uncertain environments will inevitably encounter unanticipated situations during their operation. Examples range from naturally occurring faults in both the cyber and physical components to attacks launched by malicious entities with the purpose of disrupting normal operations. As infrastructures (e.g., energy, transportation, industrial systems, and built environments) become smarter, the chance of a fault or attack increases. When this happens, it is essential that system behavior remain viable, i.e., that it not violate pre-specified operating constraints on run-time behavior. Preserving safety, for instance, is of paramount importance to avoid damage and possible loss of life. This project will develop strategies for mitigating the effects of such unanticipated situations that seek to optimize performance (measured by multiple metrics such as cost, efficiency, and accuracy) without compromising viability. The emphasis will be on the automotive application domain, given the upcoming revolution brought by innovations such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication and autonomous driving, and because of the safety-criticality of the transportation infrastructure. To ground our research on relevant problems, we will engage industrial partners. The outcomes of the project will be validated on test scenarios drawn from the automotive industry.

Fundamental issues arising when safety-critical CPS operate in uncertain environments will be addressed, with the objective of obtaining a better understanding of, and developing optimal or near-optimal strategies for dealing with, emergent problems that arise from the interaction of resource-allocation and control strategies in such systems. One of the novelties of the technical approach adopted in this project is to closely integrate three different CPS perspectives (control theory, automotive and aerospace application domain knowledge, and real-time resource management and scheduling) in order to develop run-time mitigation strategies for complex CPSs operating in dynamic and uncertain environments and exposed to a variety of faults. Such an integrated approach will allow for the identification of emergent problems that arise from the interaction of resource-allocation and control algorithms, and that might otherwise remain undiscovered if the control and resource-allocation aspects were considered separately.
The development of general design-time and run-time tools for creating resilient CPSs will be guided by the implementation and evaluation of the research in simulation and on laboratory test-beds for three applications from the automotive domain: fault resilience for variable-valve internal combustion engines; fail-safe energy management for hybrid-electric vehicles; and robust sensor management for autonomous vehicles.
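As a minimal illustration of maintaining viability while optimizing performance, the sketch below (Python) checks a requested command against a one-step prediction of a toy thermal model and scales the command back until the predicted state respects an operating constraint. The plant model, limits, and governor-style back-off rule are assumptions made for illustration, not the project's mitigation strategies.

```python
def predict_temp(temp_c, power_kw, dt_s=1.0, heat_gain=2.0, cooling=0.05):
    """One-step prediction of a toy component temperature under a power command."""
    return temp_c + (heat_gain * power_kw - cooling * (temp_c - 25.0)) * dt_s

def viable_command(temp_c, requested_kw, temp_limit_c=95.0):
    """Largest scaling of the request whose predicted temperature stays within limits."""
    scale = 1.0
    while scale > 0.0 and predict_temp(temp_c, scale * requested_kw) > temp_limit_c:
        scale -= 0.05                       # back off until the constraint is respected
    return max(0.0, scale) * requested_kw

temp_c = 70.0
for step in range(20):
    requested = 8.0                         # performance-driven request (e.g., full power)
    applied = viable_command(temp_c, requested)
    temp_c = predict_temp(temp_c, applied)
    print(f"step {step:2d}: applied {applied:4.1f} kW of {requested} kW, temp {temp_c:5.1f} C")
```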
 

Performance Period: 09/15/2019 - 08/31/2024
Institution: Regents of the University of Michigan - Ann Arbor
Award Number: 1931738
Collaborative Research: CPS: Medium: Physics-Model-Based Neural Networks Redesign for CPS Learning and Control
Lead PI:
Huajie Shao
Abstract

Deep Neural Network (DNN)-enabled Cyber-Physical Systems (CPS) hold great promise for revolutionizing many industries, such as drones and self-driving cars. However, the current generation of DNNs cannot provide the analyzable behaviors and verifiable properties that are necessary for safety assurance. This critical flaw in purely data-driven DNNs sometimes leads to catastrophic consequences, such as vehicle crashes linked to self-driving and driver-assistance technologies. On the other hand, physics-model-based engineering methods provide analyzable behaviors and verifiable properties, but do not match the performance of DNN systems. These considerations motivate the work in this project, which aims at physics-model-based neural network (NN) redesign, yielding HyPhy-DNN: a hybrid, self-correcting, physics-enhanced DNN framework. HyPhy-DNN will provide the performance of DNNs and the analyzability and verifiability of physical models, thus providing a foundation for verifiably safe self-driving cars, drones, and other CPS. Moreover, HyPhy-DNN will fundamentally advance the integration of deep learning and robust control to enable safety- and time-critical CPS to operate safely and with high performance in unforeseen and dynamic environments.

The HyPhy-DNN will make three innovations in redesigning the NN architecture: (i) physics augmentation of NN inputs for directly capturing hard-to-learn physical quantities and embedding Taylor series; (ii) physics-guided neural network editing, such as removing links between independent physics variables or fixing the weights on links between certain physics variables to preserve known physical identities such as conservation laws; and (iii) time-frequency-representation filtering-based activations for filtering out noise with a dynamic frequency distribution. These novel architectural redesigns will empower HyPhy-DNN with four targeted capabilities: 1) controllable and provable model accuracy; 2) maximum avoidance of spurious correlations; 3) strict compliance with physics knowledge; and 4) automatic correction of unsafe control commands. Finally, the safety certification of any DNN will be a long-term challenge. Hence, HyPhy-DNN shall have a simple but verified backup controller for guaranteeing safe and stable operation in dynamic and unforeseen environments. To achieve this, the research team will integrate HyPhy-DNN with an adaptive-model-adaptive-control (AMAC) framework, the core novelty of which lies in fast and accurate nonlinear model learning via sparse regression for model-based robust control. HyPhy-DNN and AMAC are complementary and will interact at different scales of system performance and functionality during the safety-status cycle, supported by the Simplex software architecture, a well-known real-time software technology that tolerates faults and allows online control system upgrades.
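The Simplex-style protection mentioned above can be pictured with the sketch below (Python): a high-performance controller (standing in for a learned one) runs while the state stays inside a conservative envelope, and a simple, well-damped controller assumed to be verified takes over otherwise. The double-integrator plant, envelope, and both controllers are illustrative assumptions, not HyPhy-DNN or AMAC.

```python
def aggressive_controller(pos, vel):
    """Stand-in for a learned controller: fast, but may leave the safe set."""
    return -4.0 * pos - 0.5 * vel

def backup_controller(pos, vel):
    """Simple, well-damped controller assumed to be verified safe."""
    return -1.0 * pos - 2.0 * vel

def inside_envelope(pos, vel, pos_max=1.0, vel_max=1.5):
    return abs(pos) <= pos_max and abs(vel) <= vel_max

pos, vel, dt = 0.9, 0.0, 0.05
for k in range(200):
    if inside_envelope(pos, vel):
        u, mode = aggressive_controller(pos, vel), "learned"
    else:
        u, mode = backup_controller(pos, vel), "backup"   # Simplex-style switch
    vel += u * dt                          # double-integrator plant, Euler-discretized
    pos += vel * dt
    if k % 40 == 0:
        print(f"k={k:3d} mode={mode:7s} pos={pos:6.3f} vel={vel:6.3f}")
```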

Performance Period: 06/15/2023 - 05/31/2026
Institution: College of William and Mary
Award Number: 2311086