Even in the era of data sharing, it remains challenging, insecure, and time-consuming for scientists to share lessons learned from agricultural data collection and processing. This project aims to mitigate these challenges by combining expertise in plant science, secure networked systems, software engineering, and geospatial science. The proposed cyber-physical system will be evaluated in the laboratory and deployed on real crop farms in Missouri, Illinois, and Tennessee. All results will be shared with international organizations whose goal is to increase food security and improve human health and nutrition.
The proposed system will securely orchestrate data gathered using sensors such as hyperspectral and thermal cameras, collecting imagery of soybean, sorghum, and other crops. Preprocessed plant datasets will then be offered to scientists and farmers in different formats via a web-based system, ready to be processed by deep learning algorithms or consumed by thin clients. Data collected from different crop farms will be used to train distributed deep learning systems, using novel architectures that optimize privacy and training time. These machine learning systems will be used to predict plant stress and detect pathogens. Finally, the cyber-physical system will integrate novel data processing software with existing NSF-funded hardware platforms, introducing novel algorithmic contributions in edge computing and closing the loop by giving feedback to farmers. The results of this project will impact research on high-value crops with significant levels of automation, such as those in protected agriculture and fish-crop hydroponic systems in desert farming. Planned outreach activities will impact solutions for smallholder farmers supported by collaborators at the International Center for Agricultural Research in the Dry Areas (ICARDA). Although this work focuses on enabling data science for farming applications, it will also inform the management of other IoT applications, e.g., smart and connected healthcare or other cyber-human systems.
This NSF Cyber-Physical Systems (CPS) grant will advance the state of the art of Connected and Automated Vehicle (CAV) systems by innovating in three key areas: networking, sensing, and computation, as well as the synergy among them. This work leverages several emerging technology trends expected to transform the ground transportation system: much higher-speed wireless connectivity, improved on-vehicle and infrastructure-based sensing capabilities, and advances in machine learning algorithms. So far, most related research and development has focused on individual technologies, leading to limited benefits. This project will develop an integrated platform that jointly handles networking, sensing, and computation by addressing key challenges associated with the operating conditions of CAVs: e.g., safety criticality, high mobility, scarce on-board computing resources, fluctuating network conditions, and limited sensor capabilities. The research team will study how to use the integrated platform to enable real-world CAV applications, such as enhancing the safety of public service personnel, alleviating congestion at bottleneck areas, and protecting vulnerable road users (VRUs). Given its interdisciplinary nature, this project will yield a broad impact across multiple research communities, including transportation engineering, mobile/edge computing, and machine learning. The outcome of this research will benefit multiple stakeholders in the CAV ecosystem (drivers, pedestrians, CAV manufacturers, transportation government agencies, mobile network carriers, etc.), ultimately improving the safety and mobility of the nation's transportation system. This project will also provide a platform for various education and outreach activities.
The intellectual core of this research consists of several foundational contributions to the ground transportation CPS domain. First, it innovates vehicle-to-everything (V2X) communications by strategically aggregating 4G/5G/WiFi/DSRC technologies to enhance network performance. Second, it develops a cooperative sensing and perception framework in which nearby vehicles share raw sensor data with an edge node to create a global view, providing extended perceptual range and detection of occluded objects. The key technical contribution is ensuring good scalability: allowing many moving vehicles to efficiently share their data despite limited, fluctuating network resources. Third, it enables partitioning computation across vehicles and the infrastructure to meet the real-time requirements of CAV applications. Fourth, integrating these building blocks of networking, sensing, and computation, the research team will develop an optimization framework that makes adaptive, resource-aware decisions on what computation needs to be performed where, and at which quality, to maximize the service quality of CAV applications.
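The flavor of the adaptive, resource-aware decision described above can be sketched in a few lines. This is an illustrative toy, not the project's actual formulation: the placement options, quality levels, latency model, and utility values are all assumptions made for the example.

```python
# Hypothetical sketch: pick where to run a perception task (on-vehicle vs.
# edge) and at what quality, so that utility is maximized within a latency
# deadline. All numbers below are assumed for illustration.
from itertools import product

PLACEMENTS = ("vehicle", "edge")
QUALITIES = ("low", "high")

# Assumed compute latency (ms) per placement/quality, and utility per quality.
COMPUTE_MS = {("vehicle", "low"): 40, ("vehicle", "high"): 120,
              ("edge", "low"): 10, ("edge", "high"): 30}
UTILITY = {"low": 0.6, "high": 1.0}

def best_plan(uplink_ms, deadline_ms):
    """Pick the (placement, quality) pair maximizing utility within the deadline.

    Offloading to the edge adds the current uplink latency, which fluctuates
    with network conditions.
    """
    best, best_u = None, -1.0
    for place, qual in product(PLACEMENTS, QUALITIES):
        latency = COMPUTE_MS[(place, qual)] + (uplink_ms if place == "edge" else 0)
        if latency <= deadline_ms and UTILITY[qual] > best_u:
            best, best_u = (place, qual), UTILITY[qual]
    return best

# Good network: offload at high quality. Congested network: fall back on-board.
print(best_plan(uplink_ms=20, deadline_ms=100))   # ('edge', 'high')
print(best_plan(uplink_ms=200, deadline_ms=100))  # ('vehicle', 'low')
```

The key point the sketch captures is that the placement decision must be re-made as network conditions change, which is why the framework is adaptive rather than static.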
The rapid evolution of ubiquitous sensing, communication, and computation technologies has contributed to the revolution of cyber-physical systems (CPS). Learning-based methodologies are being integrated into the control of physical systems and are demonstrating impressive performance in many CPS domains; connected and autonomous vehicle (CAV) systems, enabled by the development of vehicle-to-everything communication technologies, are one such example. However, the existing literature still lacks an understanding of the tridirectional relationship among communication, learning, and control. The main challenges to be solved include (1) how to model dynamic system states and state uncertainties with shared information, (2) how to make robust learning and control decisions under model uncertainties, (3) how to integrate learning and control to guarantee the safety of networked CPS, and (4) how to quantify the benefits of communication.
To address these challenges, this CAREER proposal aims to design integrated communication, learning, and control rules that are robust to hybrid system model uncertainties, for safe operation and system efficiency of CAVs. The key intellectual merit is the design of an integrated distributionally robust multi-agent reinforcement learning (DRMARL) and control framework with rigorous safety guarantees, considering hybrid system state uncertainties predicted from shared information, and the development of a scientific foundation for analyzing and quantifying the benefits of communication. The fundamental theory and algorithmic principles will be validated using simulators, small-scale testbeds, and full-scale CAV field demonstrations, forming a new framework for future connectivity, learning, and control of CAVs and networked CPS. The technical contributions are as follows. (1) With shared information, we will design a cooperative prediction algorithm to improve the hybrid system state and model uncertainty representations needed by learning and control. (2) Given enhanced prediction, we will design an integrated DRMARL and control framework with rigorous safety guarantees, and a computationally tractable algorithm to calculate the hybrid system decision-making policy. This integrates the strengths of both learning and control to improve system safety and efficiency. (3) We will formally define and quantify the value of communication, and propose a novel learn-to-communicate approach that utilizes learning and control to improve communication actions. This project will also integrate an educational plan with the research goals by developing a learning platform of "ssCAVs" as an education tool and new interdisciplinary courses on "learning and control", undertaking outreach to the general public and K-12 students and teachers, and directly involving high-school scholars, undergraduate students, and graduate students in research.
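The core idea behind a distributionally robust decision rule, as used in DRMARL, can be illustrated in miniature: instead of optimizing against a single estimated distribution, the agent optimizes against the worst case over an ambiguity set of plausible distributions. The scenarios, costs, and discrete ambiguity set below are illustrative assumptions, not the proposal's actual model.

```python
# Minimal distributionally robust decision sketch:
#   argmin_a  max_{P in ambiguity_set}  E_{s~P}[ cost(a, s) ]
# Everything concrete here (scenarios, beliefs, costs) is assumed for the example.
def dr_action(actions, scenarios, ambiguity_set, cost):
    """Return the action minimizing worst-case expected cost over the ambiguity set."""
    def worst_case(a):
        return max(sum(p * cost(a, s) for p, s in zip(P, scenarios))
                   for P in ambiguity_set)
    return min(actions, key=worst_case)

# Two candidate beliefs about whether a nearby vehicle brakes hard.
scenarios = ("brakes", "cruises")
ambiguity = [(0.05, 0.95), (0.4, 0.6)]  # nominal belief vs. shifted belief

# Assumed costs: braking early is mildly costly; failing to brake when the
# other vehicle brakes is very costly (a safety violation).
COST = {("brake", "brakes"): 1, ("brake", "cruises"): 1,
        ("keep", "brakes"): 10, ("keep", "cruises"): 0}

a = dr_action(("brake", "keep"), scenarios, ambiguity,
              lambda act, s: COST[(act, s)])
print(a)  # 'brake'
```

Under the nominal belief alone, "keep" would look optimal (expected cost 0.5 vs. 1); hedging against the shifted belief flips the decision to "brake", which is exactly the safety-oriented conservatism distributional robustness buys.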
Increasing wildfire costs---a reflection of climate variability and development within wildlands---drive calls for new national capabilities to manage wildfires. The great potential of unmanned aerial systems (UAS) has not yet been fully utilized in this domain due to the lack of holistic, resilient, flexible, and cost-effective monitoring protocols. This project will develop UAS-based fire management strategies to use autonomous unmanned aerial vehicles (UAVs) in an optimal, efficient, and safe way to assist first responders during the fire detection, management, and evacuation stages. The project is a collaborative effort between Northern Arizona University (NAU), Georgia Institute of Technology (GaTech), Desert Research Institute (DRI), and the National Center for Atmospheric Research (NCAR). The team has established ongoing collaborations with the U.S. Forest Service (USFS) Pacific Northwest Research Station, Kaibab National Forest (NF), and the Arizona Department of Forestry and Fire Management to perform multiple field tests during prescribed and managed fires. This proposal's objective is to develop an integrated framework satisfying unmet wildland fire management needs, with key advances in scientific and engineering methods, by using a network of low-cost, small autonomous UAVs along with ground vehicles during different stages of fire management operations, including: (i) early detection in remote and forest areas using autonomous UAVs; (ii) fast active geo-mapping of the fire heat map on flying drones; (iii) real-time video streaming of the fire spread; and (iv) finding optimal evacuation paths using autonomous UAVs to guide the ground vehicles and firefighters for fast and safe evacuation.
This project will advance the frontier of disaster management by developing: (i) an innovative drone-based forest fire detection and monitoring technology for rapid intervention in hard-to-access areas with minimal human intervention, protecting firefighter lives; (ii) multi-level fire modeling to offer strategic, event-scale, and new on-board, low-computation tactics using fast fire mapping from UAVs; and (iii) a bounded-reasoning-based planning mechanism in which the UAVs identify the fastest and safest evacuation roads for firefighters and fire trucks in highly dynamic, uncertain danger zones. The developed technologies will translate to a broad range of disaster-management applications (flooding, fire, mudslides, terrorism) where quick search, surveillance, and response are required with limited human intervention. This project will also contribute to future engineering curricula and pursue a substantial integration of research and education, engaging female and underrepresented minority students and developing hands-on research experiments for K-12 students.
This project is in response to the NSF Cyber-Physical Systems 20-563 solicitation.
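The evacuation-routing thrust above can be illustrated with a standard shortest-path search in which edge weights combine travel time with a UAV-observed fire-risk penalty, so the returned route is both fast and safe. The toy road graph and the linear penalty weighting are assumptions for illustration, not the project's planning mechanism.

```python
# Illustrative evacuation routing: Dijkstra over
#   cost = travel_time + risk_weight * fire_risk  per road segment.
# Graph format (assumed): {node: [(neighbor, travel_time, fire_risk in [0,1]), ...]}
import heapq

def safest_fastest_path(graph, start, goal, risk_weight=10.0):
    """Return (path, cost) of the minimum combined time+risk route."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nbr, time, risk in graph.get(node, ()):
            if nbr not in seen:
                heapq.heappush(
                    frontier,
                    (cost + time + risk_weight * risk, nbr, path + [nbr]))
    return None, float("inf")

# Toy road network: the direct road A->D is short but runs through the fire.
roads = {"A": [("B", 2, 0.0), ("D", 3, 0.9)],
         "B": [("C", 2, 0.1)],
         "C": [("D", 2, 0.0)]}
path, cost = safest_fastest_path(roads, "A", "D")
print(path)  # ['A', 'B', 'C', 'D'] -- the longer but safer route
```

In a real deployment the fire-risk values would be refreshed continuously from the UAVs' geo-mapped heat maps, which is what makes the "dynamic and uncertain danger zones" aspect hard.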
Unmanned aerial vehicles (UAVs) have been increasingly utilized in several commercial and civil applications, such as package delivery, traffic monitoring, precision agriculture, remote sensing, border patrol, hazard monitoring, disaster relief, and search-and-rescue operations, to collect data and imagery for a nearby ground command station. Current implementations of UAV-based operations rely heavily on a human controller for control, inference, task allocation, and planning. This reliance limits drone operations in missions where the operational field is not fully observable to the human controller before the mission, and where reliable, continuous communication is unavailable between the UAVs and the ground station, or among teammate UAVs, during the mission. UAVs can be particularly useful in such unstructured and unknown environments, providing agile surveying or search-and-rescue operations. Therefore, the future of UAV technology focuses on the development of small, low-cost, smart drones with a higher level of autonomy. Such drones can facilitate a wide range of sophisticated missions performed by a fleet of cooperative UAVs with minimal human intervention and lower cost.
The objective of this research is to develop theoretical and practical frameworks for the operation, situational awareness, coordination, and communication of a network of fully autonomous multi-agent systems (e.g., UAVs) in dynamic and unknown environments with minimal human intervention. This research can facilitate a new set of applications for autonomous multi-agent systems in remote and dynamic environments. The project involves an integrated set of research, implementation, and experimental validation thrusts to develop novel frameworks for autonomous decision making, coalition formation, coordination, spectrum management, and task allocation in UAV systems. The developed techniques can be utilized in other multi-agent cognitive systems, such as robotic systems and autonomous vehicles, where quick search, surveillance, and reaction are required with limited human intervention.
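As a concrete taste of the task-allocation problem mentioned above, a simple baseline is a greedy assignment: repeatedly match the closest remaining (UAV, task) pair. This is a textbook baseline sketched under assumed positions and a distance-based cost, not the frameworks this project will develop.

```python
# Greedy UAV-task allocation baseline: assign the lowest-cost (UAV, task)
# pair first, then the next cheapest among the unassigned, and so on.
# Positions and the Euclidean cost are assumptions for illustration.
import math

def allocate(uav_pos, task_pos):
    """Greedily assign at most one task per UAV by ascending distance."""
    pairs = sorted((math.dist(uav_pos[u], task_pos[t]), u, t)
                   for u in uav_pos for t in task_pos)
    assigned_u, assigned_t, plan = set(), set(), {}
    for _, u, t in pairs:
        if u not in assigned_u and t not in assigned_t:
            plan[u] = t
            assigned_u.add(u)
            assigned_t.add(t)
    return plan

uavs = {"uav1": (0, 0), "uav2": (10, 0)}
tasks = {"survey": (9, 1), "search": (1, 2)}
print(allocate(uavs, tasks))  # {'uav2': 'survey', 'uav1': 'search'}
```

The research challenge the project targets is precisely what this baseline ignores: doing such assignments in a distributed way, under intermittent communication, with coalitions rather than one-to-one matches.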
This project also offers a number of educational and outreach activities: integrating research results into curricula, mentoring students, engaging underrepresented minority and female students, developing hands-on UAV-based sensing experiments for elementary and middle school students, and enhancing public awareness of new UAV applications through collaboration with the Flagstaff Festival of Science.
New vulnerabilities arise in Cyber-Physical Systems (CPS) as new technologies are integrated to interact with and control physical systems. In addition to software and network attacks, sensor attacks are a crucial security risk in CPS: an attacker alters sensing information to interfere negatively with the physical system, and acting on malicious sensor information can cause serious consequences. While many research efforts have been devoted to protecting CPS from sensor attacks, several critical problems remain unresolved. First, existing attack-detection work attempts to minimize detection delay and false alarms at the same time; this goal, however, is not always achievable due to the inherent trade-off between the two metrics. Second, while there has been much work on attack detection, a key question remains: what should the system do after detecting an attack? Importantly, a CPS should both detect an attack and recover from it before irreparable consequences occur. Third, the interrelation between detection and recovery has received insufficient attention: integrating detection and recovery techniques would yield more effective defenses against sensor attacks.
This project aims to address these key problems by developing novel detection and recovery techniques that achieve timely and safe defense against sensor attacks through real-time, adaptive attack detection and recovery in CPS. First, the project explores new attack-detection techniques that dynamically balance the trade-off between detection delay and the false-alarm rate in a data-driven fashion; the detector will deliver attack detection with predictable delay while maintaining the usability of the detection approach. Second, the project pursues new recovery techniques that bring the system back to a safe state before a recovery deadline while minimizing degradation to the mission being executed. Third, the project investigates efficient techniques that address attack detection and recovery in a coordinated fashion to significantly improve the response to attacks. Specific research tasks include the development of real-time adaptive sensor attack detection techniques, real-time attack recovery techniques, and detection-recovery coordination techniques. The developed techniques will be implemented and evaluated on multiple CPS simulators and an autonomous vehicle testbed.
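The delay/false-alarm trade-off at the heart of the detection work can be seen with a classic CUSUM change detector, where the alarm threshold is the tunable knob: raising it suppresses false alarms but delays detection. CUSUM is a standard technique used here only to illustrate the trade-off; it is not the project's adaptive detector, and the residual trace below is fabricated for the example.

```python
# CUSUM sketch of the detection-delay vs. false-alarm trade-off.
# A higher threshold means fewer false alarms but longer detection delay.
def cusum_alarm(residuals, drift=0.5, threshold=5.0):
    """Return the first index where the CUSUM statistic exceeds the threshold."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - drift)  # accumulate evidence above the drift allowance
        if s > threshold:
            return i
    return None  # no alarm raised

# Sensor residuals: small noise, then a bias injected by a sensor attack at t=5.
trace = [0.1, -0.2, 0.3, 0.0, 0.2, 2.0, 2.1, 1.9, 2.2, 2.0]
print(cusum_alarm(trace, threshold=2.0))  # 6: short delay, but more false-alarm prone
print(cusum_alarm(trace, threshold=5.0))  # 8: longer delay, fewer false alarms
```

A data-driven detector of the kind the project proposes would, in effect, adjust this threshold online so that the delivered delay stays predictable as conditions change.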
Robotic manipulation and automation systems have received a lot of attention in the past few years and have demonstrated promising performance in various applications spanning smart manufacturing, remote surgery, and home automation. These advances have been partly due to advanced perception capabilities (using vision and haptics) and new learning models and algorithms for manipulation and control. However, state-of-the-art cyber-physical systems remain limited in their sensing and perception to a direct line of sight and direct contact with the objects they need to perceive. The goal of this project is to design, build, and evaluate a cyber-physical system that can sense, perceive, learn, and manipulate far beyond what is feasible using existing systems. To do so, the research will explore the terahertz band, which offers a new sensing dimension by inferring the inherent material properties of objects via wireless terahertz signals and without direct contact. This project will also explore radio-frequency signals that can traverse occlusions. Building on these emerging sensing modalities, the core of the project focuses on developing full-spectrum perception, control, learning, and manipulation tasks. The success of this project will result in CPS system architectures with unprecedented capabilities, enabling fundamentally new opportunities to make robotic manipulation more efficient and allowing robots to perform new complex tasks that have not been possible before.
The project will enable robotic perception via full-spectrum wireless sensing in order to unlock unprecedented robotic manipulation capabilities. This research involves learning synergies between sensing and control (whereby sensing is used for control and vice versa) to optimize end-to-end cyber-physical tasks. In particular, this research includes three interconnected thrusts: (i) it will enable a new sensing modality that exploits high-resolution terahertz frequencies for robotic imaging and inference; (ii) it aims to build a new learning platform for full-spectrum (mmWave, THz, and vision) perception to enable beyond-vision perception and reasoning in non-line-of-sight and cluttered environments, where optical systems fall short; and (iii) it presents a platform to learn the synergies between sensing and control to further co-optimize end-to-end robotic manipulation tasks. These capabilities can open up entirely new possibilities for industrial robotics as well as assistive, warehousing, and smart-home robotics. The research will be evaluated through extensive experimentation, prototype design, and system implementation. The results will be disseminated through close collaboration with industry and publications in top research venues.
During normal operations, an aircraft is operated by its autopilot. When the autopilot senses a dangerous condition, near or outside of the flight envelope, it disengages, returning control to the pilot. Well-trained pilots can typically deal with modest out-of-envelope challenges; a pilot who can handle a severely compromised flight envelope is rare. A famous example is Captain Chesley Sullenberger ("Sully") and co-pilot Jeffrey Skiles on US Airways Flight 1549 in 2009: after the aircraft struck a flock of geese just northeast of the George Washington Bridge and suddenly lost all engine power over Manhattan, the pilots glided their plane with extraordinary skill to a ditching in the Hudson River off Midtown Manhattan, saving all the passengers and averting a catastrophic crash in New York City. A National Transportation Safety Board official described it as the most successful ditching in aviation history. This capability to operate safely in exceptional situations well outside the norm is the essence of this project, Virtual Sully.
Virtual Sully technology is a step toward full pilotless autonomy: identifying the failure or fault, estimating the remaining control authority, assessing the environment, planning a new feasible mission, performing path planning, and executing it safely within the compromised flight envelope. This architecture replaces the traditional top-down, one-way adaptation between mission planning, trajectory generation, and the tracking and stabilizing controller with a two-way adaptation among mission planning, trajectory generation, and the adaptation of controller parameters, improving the stability and robustness of the control system. The following thrusts are considered: 1) monitoring and capability auditing; 2) high-assurance control with multi-level adaptation; 3) a fault-tolerant architecture for unmanned autonomous systems (UAS) with real-time guarantees; and 4) development of a hardware-in-the-loop simulation environment and flight tests using unmanned air vehicle (UAV) prototypes. Fault-tolerant computing infrastructure that can withstand high-stress situations will be integrated within a flight control architecture that adapts at multiple levels. The feasibility evaluation of missions and regenerated trajectories within the UAV's remaining capabilities is pursued with real-time guarantees. The testbed is based on hardware-in-the-loop simulation of various failures, as well as extensive tests using real UAVs.
This research explores various approaches for a single-chip detector that (1) can record semiconductor-chip-package tampering activity without the need for a battery, (2) can be placed inside semiconductor chip packages through a nozzle-less droplet ejector, and (3) can be wirelessly interrogated without the need to open the semiconductor package. The project's novelties are (1) the integration of a pyroelectric energy converter, a GHz resonator, an acceleration switch, and an on-chip antenna, all on a single chip at low cost, and (2) a submillimeter-sized, battery-less tamper-detector chip that can be placed inside a semiconductor package through a droplet ejector and wirelessly interrogated (for any recorded tampering activity) from outside the package. The project's broader significance and importance lie in the foundational technology for individualized detection and recording of tampering activities, without needing an electrical power source such as a battery, and for wireless interrogation of the recorded events, particularly to ensure the authenticity of semiconductor chips. The proposed study of droplet-ejector-based chip packaging will also likely open up a new packaging technology for semiconductor chips, particularly chips whose lateral dimensions are too small for robotic pick-and-place. Thus, the research will impact the semiconductor industry foremost, but will also likely help many other industries needing to detect activities involving temperature rise and mechanical banging without a battery. The proposed passive resonator will also be broadly applicable to battery-less, passive security and identification, such as radio frequency identification (RFID).
The single-chip semiconductor-tamper detector will be based on a pyroelectric energy converter (PEC) that generates a voltage and charge to break an RFID tag based on a High-overtone Bulk Acoustic Resonator (HBAR), using the heat associated with the tampering activity. A MEMS (microelectromechanical systems) acceleration switch will be designed to make an electrical connection between the PEC and the tag when mechanical shocks are applied to semiconductor chips on a printed circuit board (PCB), as part of a tampering activity to detach chips from the PCB, so that the voltage and charge of the PEC, produced by the heat of the de-soldering process, electrically break the tag. As the tampering activity involves banging PCBs against hard objects after de-soldering, a normally-off MEMS switch will be designed as an acceleration or vibration sensor to detect the banging. Counterfeiters may try to scavenge IC chips with methods other than those covered by the proposed tamper detector, but to no avail or at too high a cost. The project will show the feasibility of a submillimeter-sized, battery-less, wireless tamper-detecting chip that can be mounted inside a semiconductor package through a nozzle-less droplet ejector. The proposed study will lay foundational technology for a paradigm-shifting concept of individualized detection and recording of tampering activities to ensure the authenticity of semiconductor chips. The proposed transducers will likely impact wireless sensor networks, energy harvesting, and related areas; thus the research will impact many industries, including the RFID and wireless-sensor industries, in addition to the semiconductor industry.