In societal-scale cyber-physical systems (SCPS), machine learning algorithms are increasingly becoming the interface between stakeholders---from matching drivers and riders on ride-sharing platforms to the real-time scheduling of energy resources in electric vehicle (EV) charging stations. The fact that the different stakeholders in these systems have different objectives gives rise to strategic interactions, which can result in inefficiencies and negative externalities across the SCPS. This NSF CAREER project seeks to develop a foundational understanding of the strategic interactions that arise in SCPS, the impacts they have on social welfare, and how they affect algorithmic decision-making. The goal is to shift how engineers design algorithms for SCPS. Currently, learning algorithms are trained and developed in isolation, and uncertainty and strategic interactions are treated---if at all---as adversarial or worst-case. In contrast, the proposed research aims to develop algorithms that can consider economic interactions, human behavior, and uncertainty when making decisions. The theory and algorithms developed through this project will be validated on two physical testbeds: (1) an EV charging testbed where drivers routinely misreport preferences for faster charging, and (2) the Caltech Social Science Experimental Laboratory, where controlled experiments will be conducted to understand how people respond to algorithms. The proposal also includes an integrated education and outreach plan, which includes outreach to K-12 students and new undergraduate and graduate courses on the complexities of learning in SCPS.
Key goals of this project include developing a unified design methodology for learning in the presence of strategic behaviors in SCPS and the systematic study of the control actions and control authority that individual users and policymakers can wield to achieve societal goals. The fact that strategic manipulations in SCPS are played out through the (mis)-reporting of data or through algorithmic decision-making distinguishes these problems from those classically studied in game theory and economics. Furthermore, in contrast with existing work in computer science and economics that study strategic interactions, this project aims to take a dynamic view of SCPS, which leverages tools and ideas from dynamical systems theory and stochastic processes to complement ideas in machine learning, game theory, and behavioral economics. This perspective will allow for new insights into how repeated interactions affect strategic decision-making in SCPS and which design decisions impact learning in game theoretic settings. This opens the door to new insights and the analysis of previously overlooked control knobs for achieving societal goals in SCPS.
This NSF CPS project aims to redesign the information structure utilized by system operators in today's electricity markets to accommodate technological advances in energy generation and consumption. The project will bring transformative change to power systems by incentivizing and facilitating the integration of non-conventional energy resources via a principled design of bidding, aggregation, and market mechanisms. Such integration will provide operators with the necessary flexibility to operate a network with high levels of renewable penetration. This will be achieved by a comprehensive bottom-up approach that will first identify the intrinsic cost of utilizing novel renewable resources and accommodate the operational ecosystem accordingly. The intellectual merits of the project include novel theories and algorithms for operating a vast number of distributed resources and testbed implementations of markets and controls. The project's broader impacts include K-12 and undergraduate programs, including in-class and extra-curricular STEM activities through, e.g., Hopkins outreach programs and the Caltech WAVE summer research program.
Introducing distributed energy resources (DERs) at a large scale requires rethinking power grid operations to account for increased uncertainty and new operational constraints. The proposed research undertakes this task by overhauling the information structure that markets and grid controls utilize. We seek to characterize and shape how information is exchanged and used to manage the grid to improve efficiency, stability, and incentive alignment. The research is organized into three thrusts. Thrust 1 emphasizes the role of information in coordination. It seeks to characterize DER costs and constraints, designing bidding strategies tailored to convey information about the atypical characteristics of DER costs. Thrust 2 aims to develop aggregation strategies that efficiently manage resources by accounting for their cost and constraints, integrating DERs via an aggregate bid that protects sensitive user information and is robust to market manipulation. Finally, Thrust 3 characterizes the overall impact of DERs on operations. We will examine how user incentives that span across markets implicitly couple market outcomes and develop design mechanisms to mitigate inter-market price manipulation. We will also design pricing schemes that provide efficient DER allocation while preserving real-time operational constraints such as frequency regulation.
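To make the aggregation idea in Thrust 2 concrete, here is a deliberately simplified sketch (our own construction, not the project's mechanism): each DER reports only the quantity it would supply at a given price, and the aggregator submits their sum, conveying fleet-level flexibility to the market without exposing any individual user's cost curve. All names, curves, and numbers are hypothetical.

```python
def aggregate_bid(individual_bids, price):
    """Aggregator's offered quantity at a given price: the sum of what each
    DER is willing to supply at that price."""
    return sum(q(price) for q in individual_bids)

# hypothetical DER supply curves mapping price ($/kWh) to quantity (kW)
ev_charger = lambda p: min(10.0, 2.0 * p)        # flexible, saturates at 10 kW
battery    = lambda p: 5.0 if p >= 3.0 else 0.0  # 5 kW block offered above $3
print(aggregate_bid([ev_charger, battery], price=4.0))  # → 13.0 kW
print(aggregate_bid([ev_charger, battery], price=2.0))  # → 4.0 kW
```

The market sees only the aggregate curve, which is one way an aggregate bid can protect sensitive user information, as Thrust 2 intends.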
Distributed cyber-physical systems (CPS), where multiple computer programs distributed across a network interact with each other and with physical processes, are challenging to design and verify. Such systems are found in industrial automation, transportation systems, energy distribution systems, and many other applications. This project is developing a "systems theory" for such applications that provides a good analytical toolkit for understanding how a system will behave when networks misbehave. It is building tools that make it possible to reason about the design of safe and reliable distributed CPS applications in an accessible, user-friendly environment. In distributed applications, Brewer's CAP theorem tells us that when networks become partitioned, there is a tradeoff between consistency and availability in distributed software systems. Consistency is agreement on the values of shared variables; availability is the ability to respond to reads and writes of those shared variables. This project builds on an extension that has shown that consistency, availability, and network latency can be quantified, and that the CAP theorem can be generalized to give an algebraic relation between these quantities. This generalization is called the CAL theorem because it replaces network "Partitioning" with "Latency," where partitioning is just a limiting case of latency. The CAL theorem can be used to help design distributed systems that fail gracefully when network performance degrades. With increasing latency, either consistency or availability (or some measure of both) must be sacrificed, and the CAL theorem quantifies these sacrifices.
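As a toy illustration of how such an algebraic relation can guide design, suppose (our assumption, purely for illustration; the abstract does not state the exact form) that tolerated inconsistency plus unavailability must be at least the apparent latency, all measured in seconds. The minimum response delay for a chosen staleness budget would then be:

```python
def min_unavailability(apparent_latency_s, inconsistency_s):
    """Smallest response delay (unavailability, seconds) compatible with the
    assumed relation: inconsistency + unavailability >= apparent latency."""
    return max(0.0, apparent_latency_s - inconsistency_s)

# with a fixed staleness budget of 50 ms, growing latency must be absorbed
# by slower responses once it exceeds that budget
for latency in (0.01, 0.10, 1.00):
    print(latency, min_unavailability(latency, inconsistency_s=0.05))
```

Under this assumed form, a designer can pick which quantity to sacrifice as latency grows, which is the graceful-degradation behavior the paragraph above describes.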
This project is applying the CAL theorem to distributed CPS. The project is deriving the fundamental limits implied by the CAL theorem and developing a methodology for systematically trading off availability and consistency in application-specific ways. The application of the CAL theorem to CPS generalizes consistency to include agreement on the state of the physical world and availability to include the latency of software responses to stimuli from the physical world. Instead of focusing solely on network latency, the project adopts a measure called "apparent latency" that includes network latency plus all other sources of latency (e.g., computation time). This measure is practically measurable. The project builds upon the recently developed Lingua Franca coordination language to provide system designers with concrete analysis and design tools to make the required tradeoffs in deployable software. The tools automatically produce graphical renditions of systems and user-friendly feedback on the concurrent, distributed, and real-time aspects.
The future of cyber-physical systems lies in smart technologies that can work collaboratively, cooperatively, and safely with humans. Smart technologies and humans will share autonomy, i.e., the right, obligation, and ability to share control in order to meet their mutual objectives in the environment of operations. For example, surgical robots must interact with surgeons to increase their capabilities in performing high-precision surgeries, drones need to deliver packages to people and places, and autonomous cars need to share roads with human-driven cars. In all such interactions, these systems must act safely despite the risks and uncertainties that are intrinsic to humans, technologies, and the environments in which they interact. The key insight of this project is that control strategies can be developed that increase safety in situations where a human needs to closely interact with a cyber-physical system (CPS) that is capable of autonomous or semi-autonomous action.
The goal of this project is to develop risk-aware interactive control and planning for achieving safe cyber-physical-human (CPS-h) systems. This project will advance the state of the art of CPS-h planning and control in three main ways: (i) developing computationally tractable risk-aware trajectory planning algorithms that are suited to general autonomous CPS-h, (ii) developing a computationally efficient and empirically supported framework to account for risk-awareness in human decision-making, and (iii) deriving interaction-aware planning algorithms for achieving safe and efficient interactions between multiple risk-aware agents. The proposed algorithms will be extensively evaluated with human subjects in interaction with autonomous CPS-h such as autonomous cars and quadcopters. This work will have direct impact on many CPS-h domains including but not limited to multi-agent interactions, autonomous driving, and collaboration and coordination between humans and autonomous agents in safety-critical scenarios.
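One common way to make a planner risk-aware rather than expectation-driven is to optimize a tail risk measure such as conditional value-at-risk (CVaR); the abstract does not commit to a specific risk measure, so the following is only an illustrative sketch with invented trajectory costs:

```python
def cvar(costs, alpha=0.9):
    """Conditional value-at-risk: the mean of the worst (1 - alpha) fraction
    of outcomes. Optimizing CVaR instead of the mean is one standard way to
    make planning risk-aware (our illustrative choice, not the project's)."""
    worst = sorted(costs)
    k = max(1, int(round((1 - alpha) * len(worst))))
    return sum(worst[-k:]) / k

# hypothetical trajectory costs; the rare catastrophic outcome dominates CVaR
costs = [1, 1, 2, 2, 2, 3, 3, 4, 9, 20]
print(sum(costs) / len(costs))   # → 4.7  (risk-neutral expected cost)
print(cvar(costs, alpha=0.9))    # → 20.0 (mean cost of the worst 10% tail)
```

A risk-neutral planner would consider these costs moderate on average, while a CVaR planner sees the catastrophic tail, which is the distinction that matters in safety-critical interaction.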
Multi-agent coordination and collaboration is a core challenge for future cyber-physical systems as they begin to have more complex interactions with each other and with humans in homes or cities. One of the key challenges is that agents must be able to reason about and learn the behavior of other agents in order to make decisions. This is particularly challenging because state-of-the-art approaches, such as recursive belief modeling over partner policies, often do not scale. However, humans are very effective at coordinating and collaborating with each other without the need for any expensive recursive belief modeling. One hypothesis is that humans can effectively capture the sufficient representations required for coordinating on tasks. Similar to humans, the agents in a multi-agent setting can look for the sufficient statistics needed for coordination and collaboration. This project is about learning and approximating such sufficient statistics to enable effective collaboration and coordination. In addition, the investigators will study teaching and learning in settings where the agents have only partial observability of the world and need to teach and learn from each other in order to achieve a collaborative task.
Important successful demonstrations of reinforcement learning for single agents have spurred the drive to determine whether such methods can extend to multiple agents. There have also been notable developments in the area of multi-agent systems, both in understanding the structure of the resulting interacting dynamics and in the development of practical reinforcement learning algorithms. The core objectives of this project are: 1) the development of learning methods that approximate the well-known concept of sufficient statistics in multi-agent interactions; 2) the development of a reinforcement learning algorithm that leverages the representations of sufficient statistics for more effective planning, coordination, and collaboration in multi-agent settings; and 3) the development of algorithms that use the representations of sufficient statistics to enable teaching and learning in multi-agent settings under partial observability of the environment. The overall outcome of this project will be a new formalism along with algorithms, tools, and techniques that enhance multi-agent learning and control. The investigators will ground this work in two main applications: 1) collaborative search and exploration and 2) collaborative transport of objects.
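A minimal caricature of the sufficient-statistics idea (our construction, not the project's algorithm): rather than recursively modeling the partner's beliefs, an agent conditions its action on a low-dimensional running summary of observed partner behavior, here an exponential moving average of binary actions.

```python
class SummaryStatPolicy:
    """Toy sketch: the agent keeps a single scalar summary of its partner's
    behavior instead of a nested belief hierarchy."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha   # how quickly the summary tracks the partner
        self.stat = 0.5      # uninformative prior over binary actions

    def observe(self, partner_action):
        # partner_action is 0 or 1; fold it into the running statistic
        self.stat = (1 - self.alpha) * self.stat + self.alpha * partner_action

    def act(self):
        # coordinate by matching the partner's apparent preference
        return 1 if self.stat >= 0.5 else 0

policy = SummaryStatPolicy()
for a in [1, 1, 1, 0, 1]:   # observed partner actions
    policy.observe(a)
print(policy.act())  # → 1: the scalar summary suffices to coordinate
```

The point of the sketch is the scaling argument from the paragraph above: a fixed-size statistic replaces belief modeling whose cost grows with recursion depth.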
Participatory science has opened opportunities for many to participate in data collection for science experiments about the environment, local transportation, disaster response, and public safety where people live. The nature of large-scale collection by non-scientists carries inherent risks regarding the coverage, accuracy, and reliability of measurements. This project is motivated by the challenges in data and predictive analytics, and in control, for participatory-science data collection and curation in cyber-physical systems (CPS) experiments.
This project focuses on data-driven frameworks to address these challenges in CPS-enabled participatory science, building on statistics, optimization, control, natural language processing, CPS fundamentals, and the coordination of participants, known as crowd steering. The framework, known as DCCDI (Data-driven Crowdsensing CPS Design and Implementation), tightly combines the underlying methods and techniques, especially focusing on physical sensors, mobility, and model-based approaches, to improve efficiency, effectiveness, and accountability. Validation of the DCCDI framework is conducted through simulations, case studies, and real-world CPS-enabled experiments. This project closely integrates education and training with foundational research and public outreach that enhances interdisciplinary thinking about CPS systems, engages the public through participatory science, and broadens participation in science, technology, engineering, mathematics, and computer science.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Animal agriculture has intensified over the past several decades, and animals are increasingly managed in large groups. As animals are often located remotely on large expanses of pasture, continuous monitoring of animal health and well-being is labor-intensive and challenging. This project aims to develop a solar sensor-based smart farm Internet-of-Things network, which is versatile, reliable, and robust to cyberattacks, for smart animal monitoring, and to demonstrate its operation and practicality on real farms. The solar sensor network will leverage low-power, wide-area networking to enable animal care personnel to monitor the behavior and health of cattle remotely through the Internet. The proposed research will provide fundamental advances toward building an energy-efficient, scalable, communication-efficient animal farm system, while ensuring high monitoring quality under uncertain, dynamic, and hostile smart farm environments. The success of this project will contribute to a farm management system that accurately observes, measures, and responds to variabilities in animal agriculture systems.
The proposed work will design an energy-centric solution that actively schedules communication and computation to minimize energy waste in energy-harvesting contexts. The proposed sensor node will monitor biometrics, acceleration, and location of animals and is powered by solar energy. The proposal further builds a physical and medium-access communication layer that is actively aware of the energy mismatch between the low-energy sensors and the more capable LoRa (Long Range) gateways. The adoption of wireless technologies introduces cybersecurity vulnerabilities; hence, cybersecurity is another major design objective of the proposed system, addressed by leveraging belief models and deep learning techniques while maintaining high-quality monitoring services. The proposed sensor network will be tested at Virginia Tech's farm testbeds, which have been designed to test and showcase such technologies for pastured livestock. The research will also be beneficial to the fields of semiconductor devices, embedded systems, Internet-of-Things (IoT) devices, wireless communications including 5G and beyond, robust machine/deep learning, cybersecurity, statistical signal detection, and agricultural production. The project will pioneer transformative research to increase the productivity of animal agriculture and allow for real-world testing of advancements.
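At its simplest, the energy-centric scheduling problem is an energy-neutrality budget: the harvested power left over after idle draw bounds how often a node can afford to report. The function and all numbers below are hypothetical illustrations, not the project's design.

```python
def max_report_rate_hz(harvest_mw, idle_mw, report_mj):
    """Sustainable (energy-neutral) reporting rate: the average harvested
    power left after idle draw must cover the energy cost of each report.
    Units: mW / mJ = reports per second."""
    surplus_mw = harvest_mw - idle_mw
    return max(0.0, surplus_mw) / report_mj

# hypothetical budget: 5 mW solar harvest, 1 mW idle draw, 50 mJ per report
print(max_report_rate_hz(harvest_mw=5.0, idle_mw=1.0, report_mj=50.0))  # → 0.08
```

A rate of 0.08 Hz, i.e., one report roughly every 12.5 seconds, is the kind of quantity an energy-aware scheduler would adapt as solar conditions change.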
Blood clots kill an estimated 100,000 to 300,000 Americans each year. Current treatments rely on medications that break down clots, which can be combined with a surgical procedure that mechanically alters the clot. However, clot-busting medications and surgery are both linked to unintended adverse events. This project designs and studies miniature magnetic swimmers as a minimally invasive alternative to these treatments. These devices are millimeter-scale objects that have a helical shape and contain a small permanent magnet. A rotating magnetic field is used to remotely control the swimmers. The field makes the swimmer rotate, and the helical shape converts the rotational movement into a propulsive force (much like a boat's propeller). The swimmers can be steered by changing the orientation of the applied magnetic field. They can precisely navigate within liquids along pre-defined 3D paths and could potentially navigate within the bloodstream toward a blood clot. The rotational movement can be used to abrade the clot and restore appropriate blood flow.
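The propeller analogy can be made quantitative with an idealized corkscrew model commonly used for helical swimmers: below a step-out frequency the swimmer rotates synchronously with the field and advances roughly one helix pitch per revolution; above it, synchrony is lost. The hard cutoff and all numbers here are illustrative assumptions, not measured values.

```python
def swimmer_speed(field_freq_hz, pitch_mm, step_out_hz):
    """Idealized forward speed (mm/s) of a helical swimmer driven by a
    rotating field: one pitch of advance per field revolution while the
    magnet stays synchronized, i.e., below the step-out frequency."""
    if field_freq_hz <= step_out_hz:
        return pitch_mm * field_freq_hz
    return 0.0  # idealization: real swimmers degrade rather than stop dead

# hypothetical geometry: 1.5 mm pitch, 40 Hz step-out frequency
print(swimmer_speed(10.0, pitch_mm=1.5, step_out_hz=40.0))  # → 15.0 mm/s
print(swimmer_speed(60.0, pitch_mm=1.5, step_out_hz=40.0))  # → 0.0
```

This is why controlling the field's rotation rate and orientation, as the paragraph describes, simultaneously sets the swimmer's speed and heading.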
This project introduces new biomedical paradigms and investigates the robust high-speed tetherless manipulation of magnetic swimmers in complex, time-varying environments. It requires multiple swimmer designs, innovative controllers, and integrated, dynamic sensing modalities. The project is divided into three synergistic objectives. First, a fluid-dynamics model for the swimmer must be created and validated experimentally. This model will be used to optimize the swimmer's geometry. Second, a robust swimmer controller will be built. The controller must be able to compensate for the variable blood flow present in the arteries. An ultrasound scanner will be used to track the position of the swimmer and obtain useful information about its environment. For example, the velocity of the swimmer can be used to infer the blood flow velocity. Finally, the feasibility of several surgical procedures will be studied via ex-vivo experiments. 3D navigation together with the removal of human blood clots will be performed inside a human heart phantom. Several swimmer end effectors will also be tested.
Innovations driven by recent progress in artificial intelligence (AI) have demonstrated human-competitive performance. However, as research expands to safety-critical applications, such as autonomous vehicles and healthcare treatment, the question of safety becomes a bottleneck for the transition from theory to practice. Safety-critical autonomous systems must go through a rigorous evaluation before massive deployment. They are unique in the sense that failures may cause serious consequences, thus requiring an extremely low failure rate. This means that test results under naturalistic conditions are extremely imbalanced, with failure cases being rare. The rarity, together with the complex AI structures involved, poses a huge challenge in designing effective evaluation methods, one that cannot be adequately addressed by conventional approaches.
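The imbalance problem can be seen in a toy estimator. Naive Monte Carlo wastes almost every test on non-failures when the failure rate is, say, 1e-4; importance sampling, a standard rare-event technique (shown here as background, not as the project's method), draws from a proposal with an inflated failure rate and reweights each observed failure by the likelihood ratio. The scenario below uses a known ground-truth failure probability purely for illustration.

```python
import random

def crude_mc(p_fail, n, rng):
    # naive Monte Carlo estimate of a failure probability
    return sum(rng.random() < p_fail for _ in range(n)) / n

def importance_sampling(p_fail, q, n, rng):
    # sample failures from a proposal with inflated failure rate q,
    # reweighting each failure by the likelihood ratio p_fail / q
    total = 0.0
    for _ in range(n):
        if rng.random() < q:
            total += p_fail / q
    return total / n

rng = random.Random(0)
print(crude_mc(1e-4, 1000, rng))  # with 1,000 naive tests, failures are
                                  # rarely observed at all
print(importance_sampling(1e-4, 0.5, 1000, rng))  # typically close to 1e-4
```

The reweighted estimator concentrates sampling effort on the failure region, which is the kind of efficiency a rigorous evaluation procedure for rare failures must provide.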
This proposal aims to understand the fundamental challenges in assessing the risk of safety-critical AI autonomy and puts forward new theories and practical tools to develop certifiable, implementable, and efficient evaluation procedures. The specific aims of this research are to develop evaluation methods for three types of AI autonomy that cover a broad array of real-world applications (deep learning systems, reinforcement learning systems, and sophisticated systems comprising sub-modules), and to validate them on the sensing and decision-making systems of real-world autonomous systems. This research lays the foundation for the PI's long-term career goal to safely deploy AI in the physical world, opens up a new cross-cutting area to develop rigorous and efficient evaluation methods, addresses the urgent societal concern with the upcoming massive deployment of AI autonomy, and trains a diverse, globally competitive workforce through education at all levels.
This research will create and validate new approaches for optimally managing mobile observational networks consisting of a renewably powered "host" agent and "satellite" agents that are deployed from and recharged by the host. Such networks can enable autonomous, long-term measurements for meteorological, climate change, reconnaissance, and surveillance applications, which are of significant national interest. While the hardware exists for such networks, the vast majority of existing mission planning and control approaches treat energy as a finite resource and focus on finite-duration missions. This research will represent a paradigm shift, wherein the energy resource available to the network is renewable, but the instantaneously available power is limited. This demands strategies that continuously trade off energy harvesting and scientific information gathering. This research will establish a comprehensive framework for managing the aforementioned tradeoffs, with both simulation-based and experimental demonstrations. The specific observational framework considered in this work will involve a fleet of solar-powered autonomous surface vessels, unoccupied aerial vehicles, and undersea gliders for characterizing atmospheric and oceanic interactions between the deep-ocean and near-shore waters adjacent to North Carolina's Outer Banks. The research will be complemented with targeted internship activities, K-12 outreach activities at The Engineering Place at NC State, and outreach activities with the Detroit Area Pre-College Engineering Program.
Fusing autonomy, persistence, and adaptation in observational networks demands a formal characterization of, and tradeoff between, the cyber quantity of information and the physical quantity of energy. Specifically, with a renewably powered host agent, energy no longer serves as a hard constraint; instead, there exists a perpetual tradeoff between the acquisition of information and the use of available on-board energy in a stochastic environment. To address this, the research team will create: (i) a scientifically tailored dynamic coverage model for information characterization, (ii) a statistical energy resource/consumption model, and (iii) a multi-level predictive controller that adapts the mission profile based on the information/energy tradeoff. The host controller will maximize a two-part objective function consisting of a finite-horizon coverage summation and a terminal incentive based on a novel quantity termed the "information value of energy." This host controller will be complemented by a series of satellite energy-aware coverage controllers that maximize coverage subject to a safe rendezvous requirement in a stochastic resource environment. The research will be validated across three platforms of increasing complexity: an unoccupied aerial vehicle (UAV) network (experimental), a combined solar-powered autonomous surface vessel (ASV)/UAV network (experimental), and a combined ASV/UAV/undersea glider network (simulation-driven).
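The two-part objective can be caricatured as follows; the additive form, the scalar coefficient standing in for the "information value of energy," and all numbers are our assumptions for illustration, not the project's formulation.

```python
def plan_value(coverage_gains, energy_traj, info_value_of_energy):
    """Toy two-part objective: a finite-horizon coverage summation plus a
    terminal incentive that prices the energy left on board at the end of
    the horizon by its assumed future information value."""
    return sum(coverage_gains) + info_value_of_energy * energy_traj[-1]

# hypothetical plans: aggressive drains the battery for coverage now;
# conservative banks energy for future information gathering
aggressive   = plan_value([5.0, 4.0, 3.0], [90.0, 60.0, 20.0],
                          info_value_of_energy=0.2)
conservative = plan_value([3.0, 2.0, 2.0], [95.0, 85.0, 70.0],
                          info_value_of_energy=0.2)
print(aggressive, conservative)  # → 16.0 21.0: banking energy wins here
```

Raising or lowering the terminal coefficient shifts the host between harvesting-heavy and coverage-heavy mission profiles, which is exactly the continuous tradeoff the framework is meant to manage.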