CPS: Medium: Collaborative Research: Provably Safe and Robust Multi-Agent Reinforcement Learning with Applications in Urban Air Mobility
Lead PI:
Peng Wei
Abstract

This Cyber-Physical Systems (CPS) project aims to design theories and algorithms for scalable multi-agent planning and control to support safety-critical autonomous eVTOL aircraft in high-throughput, uncertain, and dynamic environments. Urban Air Mobility (UAM) is an emerging air transportation mode in which electric vertical take-off and landing (eVTOL) aircraft will safely and efficiently transport passengers and cargo within urban areas. Guidance from the White House, the National Academy of Engineering, and the US Congress has encouraged fundamental research in UAM to maintain US global leadership in this field. The success of UAM will depend on safe and robust multi-agent autonomy that can scale operations up to high-throughput urban air traffic. Learning-based techniques such as deep reinforcement learning and multi-agent reinforcement learning are being developed to support planning and control for these eVTOL vehicles. However, providing theoretical safety and robustness guarantees for these learning-based, neural-network-in-the-loop models in multi-agent autonomous UAM applications remains a major challenge. In this project, the researchers will collaborate with committed government and industry partners on use-case-inspired fundamental research, with a focus on promoting the safety and reliability of AI, machine learning, and autonomy among students with diverse backgrounds.

The technical objectives of this project include (1) Safety and Robustness of Single-Agent Reinforcement Learning: to address the "safety-critical" UAM challenge, the PIs plan to use min-max optimization for single-agent reinforcement learning to formally build in a sufficient safety margin, constrained reinforcement learning to formulate safety as physical constraints on the state and action spaces, and a novel cautious reinforcement learning approach that uses a variational policy gradient to plan the safest aircraft trajectory with minimum distributional risk; (2) Safety and Robustness of Multi-Agent Reinforcement Learning: to address the "heterogeneous agents and scalability" challenge, the PIs will develop a novel federated reinforcement learning framework in which a central agent coordinates with decentralized safe agents to improve traffic throughput while guaranteeing safety, together with a scaling mechanism to accommodate a varying number of decentralized aircraft; (3) Safety and Robustness from Simulations to the Real World: to address the "high-dimensionality and environment uncertainty" challenge, the researchers will focus on the agents' policy robustness under distribution shift and fast adaptation from simulation to the real world. Specifically, they plan value-targeted model learning to incorporate domain knowledge such as aircraft and environment physics, and a safe adaptation mechanism for use after the RL model is deployed online for flight testing or execution.
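As a rough illustration of one of these ideas, the sketch below shows a generic Lagrangian (primal-dual) formulation of constrained reinforcement learning, in which safety is encoded as an expected-cost constraint alongside the usual return objective. This is not the PIs' algorithm; the toy task, the reward and cost means, the cost limit, and the learning rates are hypothetical placeholders chosen only to make the example self-contained and runnable.

```python
# Illustrative sketch only (not the project's method): primal-dual /
# Lagrangian constrained RL, where safety is an expected-cost constraint
# E[cost] <= COST_LIMIT next to the usual return objective.
# Toy 3-action task, softmax policy, REINFORCE-style gradient estimates.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action statistics: each action has a reward mean and a
# "safety cost" mean (e.g., proximity to a loss-of-separation event).
REWARD_MEANS = np.array([1.0, 2.0, 3.0])
COST_MEANS = np.array([0.1, 0.5, 1.5])
COST_LIMIT = 0.6            # constraint: keep expected cost <= 0.6

theta = np.zeros(3)          # softmax policy parameters
lam = 0.0                    # Lagrange multiplier for the safety constraint
LR_THETA, LR_LAM = 0.05, 0.02

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(5000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)
    r = REWARD_MEANS[a] + 0.1 * rng.standard_normal()
    c = COST_MEANS[a] + 0.1 * rng.standard_normal()

    # REINFORCE gradient of the Lagrangian L = E[r] - lam * (E[c] - limit):
    # grad of log softmax(a) w.r.t. theta is (one_hot(a) - probs).
    grad_logp = -probs
    grad_logp[a] += 1.0
    theta += LR_THETA * (r - lam * c) * grad_logp

    # Dual ascent on lambda: tighten the penalty when the sampled cost
    # violates the limit, relax otherwise (projected to stay nonnegative).
    lam = max(0.0, lam + LR_LAM * (c - COST_LIMIT))

probs = softmax(theta)
print("policy:", np.round(probs, 3),
      "expected cost:", round(float(probs @ COST_MEANS), 3))
```

In this sketch the dual variable lam grows whenever the sampled safety cost exceeds the limit, steering the policy toward actions that satisfy the constraint, which loosely mirrors the project's framing of safety as physical constraints on the state and action spaces.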

Peng Wei
Performance Period: 06/01/2023 - 05/31/2026
Institution: Kansas State University
Sponsor: National Science Foundation
Award Number: 2312092