Large-scale networked systems (such as the power grid, the internet, multi-robot systems, and smart cities) consist of a large number of interconnected components. For the entire system to function efficiently, these components must communicate with each other and use the exchanged information to estimate the state of the entire system and take optimal actions. However, such large-scale networked systems are increasingly under threat from sophisticated cyber-attacks that can compromise some of the components, causing them to behave erratically or inject malicious information into the network. Existing algorithms for distributed coordination in large-scale networks are highly vulnerable to such attacks. This project addresses this critical problem by creating new algorithms that enable components in large-scale networks to cooperatively take optimal actions and estimate the state of the system despite attacks on a large number of the components. The algorithms will come with provable security and performance guarantees, and the accompanying analysis will identify characteristics of networks and algorithms that make them vulnerable to attacks. The project will also identify new ways to design networks that provide a desired level of resilience to attacks. The resulting algorithms will enable the design of more secure networks and critical infrastructure that remain functional under attack, with substantial benefits to society. In addition to these technical and scientific contributions, the project will train students in the design of secure networked systems, and will engage the local community in central Indiana in learning about networks via interactive exhibits and workshops at the local museum.

This proposal presents an integrated research and education program focused on establishing the foundations of distributed optimization, learning, and estimation algorithms that are resilient to attacks. The research agenda is organized along three thrusts: (i) designing resilient algorithms for distributed optimization of static objective functions, (ii) designing resilient learning algorithms for settings where optimization objectives change over time, and (iii) designing resilient distributed state estimators for large-scale dynamical systems. Each of the three research thrusts leads to new theoretical contributions. First, the proposed research will establish new metrics for measuring resilience in distributed optimization algorithms, and will build upon commonly studied optimization approaches (which are highly vulnerable to adversaries in their existing forms) to derive resilient distributed optimization algorithms. Second, it will establish new fundamental lower bounds on the regret that can be achieved by distributed online learning algorithms under adversarial behavior, and will characterize achievable regret bounds via the design of new learning algorithms. Third, the proposed research will investigate the interplay between the dynamics of the underlying physical systems and the communication topology connecting distributed observers in order to design resilient distributed state estimation schemes. The proposed research will lead to a deeper understanding of the fundamental factors that affect the resilience of distributed optimization, learning, and estimation dynamics, and will establish systematic procedures for designing large-scale networked systems that can operate in a near-optimal manner under attack.
Given the lack of existing work on this topic, the research will lay the groundwork for substantial further exploration of resilient algorithms for distributed decision-making and coordination in large-scale networks.
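To make the first thrust concrete, the sketch below illustrates one commonly studied resilient-aggregation idea for distributed optimization: each agent takes a coordinate-wise trimmed mean of the states it receives (discarding the f largest and f smallest values per coordinate) before a local gradient step. The function names, step size, fully connected network, and quadratic objectives are illustrative assumptions for this toy example, not the specific algorithms proposed here.

import numpy as np

def trimmed_mean(values, f):
    # Coordinate-wise trimmed mean: sort each coordinate independently,
    # drop the f largest and f smallest entries, then average the rest.
    v = np.sort(np.asarray(values), axis=0)
    return v[f:len(values) - f].mean(axis=0)

def resilient_step(x_i, neighbor_states, grad_i, f, step_size=0.1):
    # One update of a resilient distributed gradient method: robustly
    # aggregate the received states, then take a local gradient step.
    agg = trimmed_mean(list(neighbor_states) + [x_i], f)
    return agg - step_size * grad_i(x_i)

# Toy usage: agent i minimizes f_i(x) = ||x - c_i||^2 / 2, so grad f_i(x) = x - c_i.
rng = np.random.default_rng(0)
targets = rng.normal(size=(7, 2))        # 7 agents, 2-dimensional decision variable
states = [t.copy() for t in targets]
for _ in range(100):
    states = [resilient_step(states[i],
                             [states[j] for j in range(7) if j != i],
                             lambda x, c=targets[i]: x - c,
                             f=1)          # tolerate up to one adversarial neighbor
              for i in range(7)]
print(np.round(states[0], 3))            # all agents end up near a common minimizer

The basic property such methods build on is that, per coordinate, the trimmed mean always lies within the range of the well-behaved agents' values whenever at most f of the received values are adversarial.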
The proposed effort will help develop systems in which humans and autonomy are jointly responsible for collective information acquisition, perception, cognition and decision-making. Such collective operation is a necessity as much as it is an augmenting technology. In assistive robotics, for example, the autonomy exists to support functionality that the human users cannot perform on their own. Conversely, in cases in which a human can adequately operate a platform (e.g., semi-autonomous unmanned vehicles), she effectively augments the robot's abilities.

Establishing provable trust is one of the most pressing bottlenecks in deploying autonomous systems at scale, and embedding a human as a user, information source or decision aid into the operation of autonomous systems amplifies the difficulty. While humans offer cognitive capabilities that complement machine-implementable functionalities, the impact of this synergy is contingent on the system's ability to infer the intent, preferences and limitations of the human, and to account for the imperfections imposed by the interfaces between the human and the autonomous system. We expect the proposed theory, methods and tools to cut across the spectrum of cyber-physical systems that must work with and in the vicinity of humans. Such systems include, to name a few, human-robot interaction, a range of assistive medical devices, semi-autonomous driving and safety-augmentation systems in modern automobiles, and control rooms of large-scale plants.

The proposed effort targets a major gap in the theory and tools for the design of human-embedded autonomous systems. Its objective is to develop languages, algorithms and demonstrations for the formal specification and automated synthesis of shared control protocols. Our technical approach bridges formal methods, controls, learning and human behavioral modeling, and is organized around three main research thrusts. (i) Specifications and modeling for shared control: What does it mean to be provably correct in human-embedded autonomous systems, and how can we represent correctness in formal specifications? (ii) Automated synthesis of shared control protocols: How can we mathematically abstract shared control, and automatically synthesize shared control protocols from formal specifications? (iii) Shared control through human-autonomy interfaces: How can we account for the limitations in expressivity, precision and bandwidth of human-autonomy interfaces, and co-design controllers and interfaces?

The mathematically based specifications and automated synthesis algorithms will diffuse the process of building trust throughout the design process, have the potential to reduce the need for purely empirical testing, and can diagnose failure modes in advance of costly and restricted user studies. This systematic and early integration will help develop autonomous systems in which the operator and the autonomy protocols are equally essential components of the same system, and will reduce so-called "automation surprises." While we expect the theoretical and algorithmic outcomes of the proposed effort to be application- and hardware-agnostic, we concretize our research plan on a specific hardware platform. It is composed of an existing quadrotor testbed with 3D motion capture; human monitoring and decoding functionality through neural, visual, auditory and biopotential signals; and human-autonomy interfaces with virtual-reality embeddings.
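As a minimal illustration of what a shared control protocol must arbitrate, the sketch below blends a human command with an autonomy command and applies the blend only if a one-step prediction satisfies a safety specification. The linear-blending rule, the toy dynamics, and the threshold are simplifying assumptions for illustration, not the synthesis machinery proposed here.

import numpy as np

def blend(u_human, u_auto, alpha):
    # Linear arbitration: alpha = 1 defers fully to the human,
    # alpha = 0 defers fully to the autonomy protocol.
    return alpha * np.asarray(u_human) + (1.0 - alpha) * np.asarray(u_auto)

def safe_shared_control(x, u_human, u_auto, alpha, dynamics, is_safe):
    # Apply the blended input only if the one-step prediction satisfies
    # the safety specification; otherwise fall back to the autonomy input.
    u = blend(u_human, u_auto, alpha)
    return u if is_safe(dynamics(x, u)) else np.asarray(u_auto)

# Toy usage: quadrotor altitude as a single integrator, spec "stay above 0.5 m".
dynamics = lambda x, u: x + 0.1 * u      # Euler step with dt = 0.1 s
is_safe = lambda x: x >= 0.5
u = safe_shared_control(x=1.0, u_human=-8.0,   # human commands a fast descent
                        u_auto=0.0, alpha=0.7,
                        dynamics=dynamics, is_safe=is_safe)
print(u)   # the blend would violate the spec, so the autonomy input 0.0 is applied

In the proposed framework, such arbitration logic would be synthesized automatically from formal specifications rather than hand-coded, with the degree of deference shaped by inferred human intent, preferences, and interface limitations.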