This project aims to advance knowledge of machine learning for human-in-the-loop cyber-physical systems. Mobile and wearable devices have emerged as a promising technology for health monitoring and behavioral interventions. Designing such systems involves collecting and labeling sensor data in free-living environments through an active learning process. In active learning, the system iteratively queries a human expert (e.g., patient, clinician) for correct labels. Designing active learning strategies in uncontrolled settings is challenging because (1) active learning places a significant burden on the user and compromises adoption of the technology; and (2) labels provided by humans exhibit significant temporal and spatial disparities that degrade system performance. The research will address technical challenges in designing high-performance systems, and enable accurate monitoring and interventions in many applications beyond behavioral medicine.
This project develops mixed-initiative solutions that will enable learning of human behaviors in uncontrolled environments through the following research objectives: (1) investigating combinatorial approaches to maximize active learning performance, taking into account the informativeness of sensor data, the burden of data labeling, and the reliability of prospective labels; (2) constructing a rich vocabulary of complex behaviors based on knowledge graph embedding and semi-supervised learning techniques; (3) developing network-graph-based learning algorithms that infer complex human behaviors; and (4) validating algorithms for off-line active learning, real-time active learning, behavior vocabulary construction, and behavior inference through both in-lab experiments and user studies.
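Objective (1), balancing informativeness against labeling burden and label reliability, can be illustrated with a minimal sketch. All names, fields, and weights below are hypothetical assumptions for illustration, not the project's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A sensor-data segment that could be sent to the user for labeling."""
    segment_id: str
    informativeness: float  # e.g., model uncertainty, in [0, 1]
    burden: float           # estimated interruption cost to the user, in [0, 1]
    reliability: float      # expected probability the user labels it correctly

def select_queries(candidates, budget, burden_weight=0.5):
    """Rank candidates by reliability-discounted utility and keep the top `budget`.

    utility = reliability * (informativeness - burden_weight * burden)
    """
    scored = sorted(
        candidates,
        key=lambda c: c.reliability * (c.informativeness - burden_weight * c.burden),
        reverse=True,
    )
    return [c.segment_id for c in scored[:budget]]

pool = [
    Candidate("walk-07", informativeness=0.9, burden=0.2, reliability=0.8),
    Candidate("sleep-02", informativeness=0.8, burden=0.9, reliability=0.9),
    Candidate("run-11", informativeness=0.3, burden=0.1, reliability=0.95),
]
print(select_queries(pool, budget=2))  # prefers informative, low-burden segments
```

The multiplicative reliability discount is one plausible design choice: a highly informative query is worth little if the user is unlikely to answer it correctly.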
A critical application of smart technologies is a smart, connected, and secured environmental monitoring network that can help administrators and researchers find better ways to incorporate evidence and data into public decision-making related to the environment. In this project, the investigators will establish a secure, trustworthy, and reliable air quality monitoring network system using densely deployed low-cost sensors in and around the city of Orlando, Florida, to better inform development of pollution mitigation strategies in the region. Access to the urban-scale air quality sensor data and forecasts can have a positive social impact on environmental justice, public health, and sustainability initiatives. The investigators will incorporate the outcomes of the project into courses on computer and network security and privacy, mobile computing, environmental sciences and engineering, and social science. The proposed work will provide hands-on exercises and research and educational opportunities for undergraduate students, graduate students, and K-12 students.
The objectives of this project include remote calibration of low-cost sensors and detection of sensor drift and malfunction. An innovative modeling method will be developed to perform remote calibration of low-cost PM2.5 sensors. A triple-sensor system will be developed, employing an operational statistical method that cross-evaluates sensor measurement data every hour to identify potential sensor drifts and malfunctions. The project team will build a trustworthy air quality monitoring network. A trusted boot strategy will be developed that verifies the sensor firmware is genuine at bootstrapping, performs dynamic analysis of system states, sends measurements to a verifier for remote attestation, and accepts commands from the verifier to act on violations. The team will also create an accurate deep learning-based air quality prediction system based on a novel two-stage semi-supervised learning framework that learns from large volumes of noisy, mixed-labeled sensor data. Social scientists on the team will conduct a social-behavioral study of air quality monitoring and prediction. This project emphasizes sustainable empowerment of residents through education on air quality and training on data utilization and advocacy. The project goes beyond passive citizen science to enable citizens to become advocates for their interests, improving not only outdoor air quality but also the overall quality of life in the community.
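The hourly triple-sensor cross-evaluation could, for example, take the form of a median-deviation rule: a sensor whose reading strays too far from the median of its two co-located peers is flagged for inspection. The tolerances below are illustrative assumptions, not the project's operational method:

```python
from statistics import median

def flag_drift(readings, rel_tol=0.3, abs_tol=2.0):
    """Cross-evaluate three co-located PM2.5 readings (ug/m3) for one hour.

    A sensor is flagged when it deviates from the median of the triple by
    more than `abs_tol` ug/m3 AND by more than `rel_tol` (30%) relative to
    the median. Returns the indices of suspect sensors.
    """
    m = median(readings)
    suspect = []
    for i, r in enumerate(readings):
        dev = abs(r - m)
        if dev > abs_tol and dev > rel_tol * max(m, 1e-9):
            suspect.append(i)
    return suspect

print(flag_drift([12.1, 11.8, 25.4]))  # sensor 2 disagrees with its peers
print(flag_drift([10.0, 10.5, 9.8]))   # all three agree: nothing flagged
```

Requiring both an absolute and a relative deviation avoids false alarms at very low concentrations, where small absolute differences are large in relative terms.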
A growing number of natural and man-made detrimental incidents occur every day, and they mandate precise monitoring, control/management, and prevention. Otherwise, they can rapidly evolve into unpredictable events with significant losses: delays from automotive traffic jams; catastrophic devastation claiming the lives of innocent citizens, as in man-made incidents or explosions; financial and industrial losses, as in the case of malfunctions or defects in manufacturing plants; and loss of natural resources, as in the case of droughts, wildfires, and floods. First responders are at the frontline of counteractions against these incidents, with their safety at major risk, while the effectiveness and efficiency of their actions may also need improvement for complete closure of the event or prevention of its growth. To further assist first responders and limit the damage due to an incident, this research proposes a cooperative network of unmanned aerial vehicles (UAVs) that are equipped with novel sensing and imaging technologies and can obtain critical information for the safe and more successful operation of first responders by sharing information with them in a short span of time. The UAVs can also assist first responders by obtaining information from environments that are hard to access, such as non-urban areas and environments with extreme conditions, e.g., high temperature/elevation.
The proposed research aims to investigate and realize a re-configurable, aerial, power-efficient, interconnected imaging and detection (RAPID) CPS that can adaptively tune its configuration and performance (e.g., three-dimensional position of agents, spatial sensing resolution) with respect to the span of the impacted area, feature size, and necessary resolution to monitor various incidents. To achieve these goals, three interrelated research thrusts with the following intellectual merits are pursued: (1) design and optimization of coordinated mobility strategies and control for the proposed CPS so as to maintain high-resolution sensing and connectivity of the drones; (2) design of an aerial communication network to realize cooperative sensing/communication, and development of a power-optimized, rate-controllable wireless system for each UAV to exchange acquired image data between cyber and physical agents and to track the positions of UAVs in the network; and (3) design of dual-mode sensor fusion embedded within a flying sensor agent, comprising a novel low-power, high-resolution mm-wave imaging module to detect mobile/hidden objects and structural defects, and an infra-red (IR) thermal camera to detect high-temperature radiation.
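Thrust (1)'s requirement that the drones maintain connectivity can be sketched as a check that the UAV positions induce a connected communication graph. The link range, the union-find formulation, and the fleet coordinates below are illustrative assumptions, not the project's mobility controller:

```python
from math import dist

def is_connected(positions, max_range):
    """Check whether UAVs at 3-D `positions` form a connected network when
    any pair within `max_range` meters can communicate (union-find over
    the pairwise links)."""
    n = len(positions)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if dist(positions[i], positions[j]) <= max_range:
                parent[find(i)] = find(j)  # union the two components
    return len({find(i) for i in range(n)}) == 1

fleet = [(0, 0, 30), (80, 0, 30), (160, 0, 30)]  # a chain over the incident area
print(is_connected(fleet, max_range=100))  # 80 m hops: connected
print(is_connected(fleet, max_range=60))   # hops too long: partitioned
```

A mobility planner could run such a check as a feasibility constraint before committing to candidate UAV positions that maximize sensing resolution.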
Artificial Intelligence (AI) has shown superior performance in enhancing driving safety in advanced driver-assistance systems (ADAS). State-of-the-art deep neural networks (DNNs) achieve high accuracy at the expense of increased model complexity, which raises the computation burden of onboard processing units of vehicles for ADAS inference tasks. The primary goal of this project is to develop innovative collaborative AI inference strategies with the emerging edge computing paradigm. The strategies can adaptively adjust cooperative inference techniques to best utilize available computation and communication resources and ultimately enable high-accuracy and real-time inference. The project will inspire greater collaboration between experts in wireless communication, edge computing, computer vision, autonomous driving testbed development, and automotive manufacturing, and facilitate AI applications in a variety of IoT systems. The educational testbed developed from this project can be integrated into courses to provide hands-on experiences. This project will benefit undergraduate, master's, and Ph.D. programs and increase under-represented groups' engagement by leveraging existing diversity-related outreach efforts.
A multi-disciplinary team with complementary expertise from Rowan University, Temple University, Stony Brook University, and Kettering University is assembled to pursue a coordinated study of collaborative AI inference. The PIs explore integrative research to enable deep learning technologies in resource-constrained ADAS for high-accuracy and real-time inference. Theory-wise, the PIs plan to take advantage of the observation that DNNs can be decomposed into a set of fine-grained components, allowing distributed AI inference on both the vehicle and edge server sides for inference acceleration. Application-wise, the PIs plan to design novel DNN models which are optimized for the cooperative AI inference paradigm. Testbed-wise, a vehicle edge computing platform with V2X communication and edge computing capability will be developed at Kettering University's GM Mobility Research Center. The cooperative AI inference system will be implemented, and the research findings will be thoroughly validated in realistic vehicular edge computing environments. The data, software, and educational testbeds developed from this project will be widely disseminated. Domain experts in autonomous driving testbed development, intelligent transportation systems, and automotive manufacturing will be engaged to ensure the challenges addressed in this project are impactful for real-world applications.
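The idea of decomposing a DNN between vehicle and edge server can be sketched as choosing the layer at which to cut the network so that total latency (on-vehicle compute, activation upload, edge compute) is minimized. The per-layer profile and link rate below are hypothetical numbers, not measurements from the project's testbed:

```python
def best_split(vehicle_ms, edge_ms, transfer_mb, uplink_mbps):
    """Pick the layer index k at which to cut a DNN: layers [0, k) run on the
    vehicle, layers [k, n) on the edge server, and the activation crossing
    the cut is uploaded in between. Returns (k, total_latency_ms).

    transfer_mb[k] = size of the tensor crossing the cut at k
    (transfer_mb[0] = raw input, transfer_mb[n] = 0: fully on-vehicle).
    """
    n = len(vehicle_ms)
    best = None
    for k in range(n + 1):
        latency = (
            sum(vehicle_ms[:k])                         # on-vehicle compute
            + transfer_mb[k] * 8.0 / uplink_mbps * 1e3  # upload over V2X link
            + sum(edge_ms[k:])                          # edge-server compute
        )
        if best is None or latency < best[1]:
            best = (k, latency)
    return best

# Hypothetical 4-layer profile: the edge is ~5x faster, early activations
# are large, so cutting after layer 2 balances compute against upload cost.
vehicle_ms = [20.0, 30.0, 40.0, 10.0]
edge_ms = [4.0, 6.0, 8.0, 2.0]
transfer_mb = [3.0, 1.5, 0.25, 0.1, 0.0]
k, latency = best_split(vehicle_ms, edge_ms, transfer_mb, uplink_mbps=100.0)
```

An adaptive scheme, as the abstract suggests, would re-run this choice as link bandwidth and server load change, moving the cut toward the vehicle when the network degrades.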
The number of systems developed for applications such as package delivery via small unmanned aerial vehicles (UAVs) and self-driving cars is growing. To ensure safe and reliable positioning, it is critical to address not only positioning accuracy, but also the confidence in that accuracy, defined as integrity. Most positioning and navigation studies for autonomous vehicles have focused only on accuracy, not integrity. However, navigating autonomous vehicles equipped with relatively low-cost sensors in complex and rapidly changing environments -- e.g., urban areas with Global Positioning System (GPS) signal blockage -- poses great challenges compared to flying aircraft in the open sky, where positioning integrity has been well addressed by the Federal Aviation Administration (FAA)-regulated aviation industry.
This project aims to assess, monitor and improve positioning integrity for autonomous vehicles, such as UAVs and self-driving cars, and integrate the proposed research into education and outreach. The project involves a novel positioning integrity assessment and monitoring solution that is robust in GPS-challenged environments and is suitable for navigation sensor fusion. The investigator will (1) derive a new algorithm to directly assess and monitor GPS integrity in urban environments; (2) design an integrity monitoring framework for GPS sensor fusion using camera vision, LiDAR and inertial measurements; and (3) improve integrity by turning unwanted multi-path signals into a useful navigational source based on physical interaction with the environment.
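Classical GPS integrity monitoring is often built on residual-based fault detection, which the project's urban-environment algorithms would extend. The snapshot test below is a textbook-style sketch, not the investigator's proposed method; the residual values and the chi-square threshold are illustrative assumptions:

```python
def integrity_check(residuals_m, sigma_m, threshold):
    """RAIM-style snapshot test (sketch): form the normalized sum of squared
    pseudorange residuals and declare a fault when it exceeds a chi-square
    threshold chosen for the desired false-alarm rate."""
    t = sum((r / sigma_m) ** 2 for r in residuals_m)
    return t, t > threshold

# Nominal open-sky residuals vs. an urban epoch where one satellite's range
# carries a large multipath bias. The 11.07 threshold is the 95th percentile
# of a chi-square distribution with 5 degrees of freedom (illustrative only).
t_ok, fault_ok = integrity_check([0.8, -1.1, 0.5, -0.3, 0.9],
                                 sigma_m=1.0, threshold=11.07)
t_bad, fault_bad = integrity_check([0.8, -1.1, 12.5, -0.3, 0.9],
                                   sigma_m=1.0, threshold=11.07)
```

In open sky the statistic stays small and no fault is raised; the multipath-biased epoch drives it far past the threshold, which is exactly the inconsistency the project proposes to detect, and even exploit, in urban canyons.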
This CAREER development plan will also integrate an education plan with the research goals by broadening the participation of under-represented groups, such as women, through fostering a female researcher community by organizing social events for women at technical conferences; educating and informing the public about FAA rules and safety issues regarding flying UAVs; and reaching out to K-12 students by demonstrating the results of the proposed research at the Illinois Engineering Open House and leading hands-on activities at various camps for school girls.
This project studies the algorithmic foundations and methodological frameworks to augment human capabilities via a novel form of physical and cognitive collaboration between humans and multi-agent robotic systems, creating Aerial Co-Workers. These machines will actively collaborate with each other and with humans, tackling the fundamental gaps in human-MAV collaboration at both the physical and cognitive levels. The project is organized along two main thrust areas: Physical Collaboration and Cognitive Collaboration. The first thrust aims to significantly augment the physical ability of human workers by taking advantage of physical collaboration between the operator and a network of interconnected quadrotors, equipped with a set of "flying hands," transporting objects. This will produce novel scientific solutions for human-robot collaboration that account for the complex legibility of the motions and the variability of the relative positions of the agents. The second thrust aims to address two perception consensus problems to enable MAV-assisted augmented reality (AR) that augments the cognitive ability of operator(s). The key is to consistently collect, analyze, and display contextual information via multiple MAVs for effective and natural human-robot visual interaction. Aerial Co-Workers will obtain vantage viewpoints of the environment occluded from the humans, which can be customized and augmented directly in the workspace to facilitate human actions via novel metric-semantic collaborative space mapping.
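One elementary form of the perception consensus underlying collaborative metric-semantic mapping is fusing per-cell semantic labels observed by several MAVs into one shared map. The majority-vote rule, the grid cells, and the labels below are hypothetical illustrations, not the project's consensus algorithms:

```python
from collections import Counter

def fuse_semantic_maps(maps):
    """Fuse per-cell semantic labels observed by several MAVs into a shared
    map by majority vote; cells with no majority are marked 'unknown'.

    Each map is {cell: label}; a cell missing from a map was simply not
    observed by that MAV (e.g., occluded from its viewpoint).
    """
    votes = {}
    for m in maps:
        for cell, label in m.items():
            votes.setdefault(cell, Counter())[label] += 1
    fused = {}
    for cell, counter in votes.items():
        (top, n), *rest = counter.most_common(2) + [(None, 0)]
        fused[cell] = top if n > rest[0][1] else "unknown"
    return fused

mav_a = {(0, 0): "wall", (0, 1): "beam"}
mav_b = {(0, 0): "wall", (0, 1): "pipe"}
mav_c = {(0, 0): "wall", (1, 1): "beam"}
print(fuse_semantic_maps([mav_a, mav_b, mav_c]))
```

Marking tied cells "unknown" rather than picking arbitrarily is a conservative choice: disagreement between viewpoints is surfaced to the operator instead of being hidden.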
This project will have a strong societal impact as a disruptive technology for industry as well as the construction market, which is in urgent need of innovative solutions for enhancing efficacy while maximizing safety. The outcome will enable safer, faster, and simpler task execution in scenarios including maintenance, inspection, transportation, and search and rescue. The project will contribute to lowering the barriers for new researchers in robotics, computer vision, and machine learning by making hardware designs, algorithms, datasets, and code available on open-source forums. The playful nature of the AR tools and quadrotors employed in this project will contribute to engaging K-12 and undergraduate audiences.
The goals of Automated Driving Systems (ADS) and Advanced Driver Assistance Systems (ADAS) include reduction in accidental deaths, enhanced mobility for differently abled people, and an overall improvement in the quality of life for the general public. Such systems typically operate in open and highly uncertain environments for which robust perception systems are essential. However, despite the tremendous theoretical and experimental progress in computer vision, machine learning, and sensor fusion, the form and conditions under which guarantees should be provided for perception components are still unclear. The state-of-the-art is to perform scenario-based evaluation of data against ground truth values, but this has only limited impact. The lack of formal metrics to analyze the quality of perception systems has already led to several catastrophic incidents and a plateau in ADS/ADAS development. This project develops formal languages for specifying and evaluating the quality and robustness of perception sub-systems within ADS and ADAS applications. To enable broader dissemination of this technology, the project develops graduate and undergraduate curricula to train engineers in the use of such methods, and new educational modules to explain the challenges in developing safe and robust ADS for outreach and public engagement activities. To broaden participation in computing, the investigators target the inclusion of undergraduate women in research and development phases through summer internships.
The formal language developed in this project is based on a new spatio-temporal logic pioneered by the investigators. This logic allows one to simultaneously perform temporal reasoning about streaming perception data, and spatially reason about objects both within a single frame of the data and across frames. The project also develops quantitative semantics for this logic, which provides the user with quantifiable quality metrics for perception sub-systems. These semantics enable comparisons between different perception systems and architectures. Crucially, the formal language facilitates the process of abstracting away implementation details, which in turn allows system designers and regulators to specify assumptions and guarantees for system performance at a higher level of abstraction. An interesting benefit of this formal language is that it enables querying of databases with perception data for specific driving scenarios without the need for the highly manual process of creating ground truth annotations. Such a formal language currently does not exist, and this is a major impediment to building a thriving marketplace for perception components used in safety-critical systems. This framework sets the foundation for a requirements language between suppliers of perception components and automotive companies. The open-source and publicly available software tools developed in this project will assist with testing of perception systems by engineers and governmental agencies.
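The flavor of quantitative semantics can be conveyed by the temporal fragment alone, in the style of Signal Temporal Logic robustness (the project's logic additionally reasons spatially, which this sketch omits). Satisfaction is replaced by a signed margin: positive means the spec holds, and the magnitude says by how much. The operators, signal, and spec below are illustrative assumptions:

```python
def rob_pred(signal, threshold):
    """Robustness of 'value > threshold' per frame: the signed margin."""
    return [v - threshold for v in signal]

def rob_always(rho):
    """Robustness of 'always phi' over a finite trace: the worst-case margin."""
    return min(rho)

def rob_eventually(rho):
    """Robustness of 'eventually phi': the best-case margin."""
    return max(rho)

def rob_and(rho1, rho2):
    """Robustness of 'phi1 and phi2', frame by frame."""
    return [min(a, b) for a, b in zip(rho1, rho2)]

# Per-frame distance (m) from the ego vehicle to the nearest detected pedestrian.
dist_to_ped = [7.0, 5.5, 3.2, 2.6, 4.1]
# Spec: 'always (distance > 2.0)'. The trace satisfies it with only 0.6 m
# of slack at the worst frame -- a quantitative verdict a Boolean pass/fail
# evaluation would not expose.
rho = rob_always(rob_pred(dist_to_ped, 2.0))
```

Because the semantics returns a number rather than a bit, two perception stacks can be ranked by how robustly each satisfies the same requirement, which is the comparison capability the abstract describes.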
The application of acoustic monitoring in ecological sciences has grown exponentially in the last two decades. It has been used to answer many questions, including detecting the presence or absence of animal species in an environment, evaluating animal behavior, and identifying ecological stressors and illegal activities. However, current uses are limited to the coverage of relatively small geographic areas with a fixed number of sensors. Animal-borne GPS-based location trackers paired with other sensors are another widely used tool in aiding wildlife conservation and ecosystem monitoring. Since capturing and collaring wild animals is a traumatic event for them, as well as expensive and resource-intensive, multiyear deployments are required. There are severely limited opportunities to recharge batteries, making relatively power-hungry sensing, such as acoustic monitoring, out of reach for existing tracking collars. The aim of the A3EM project is to devise an animal-borne adaptive acoustic monitoring system to enable long-term, real-time observation of the environment and behavior of wildlife. Animal-borne acoustic monitoring will be a novel tool that may provide new insights into biodiversity loss, a severe but underappreciated problem of our time. Combining acoustic monitoring with location tracking collars will enable entirely new applications that will facilitate census gathering and monitoring of threatened and endangered species, detecting poachers of elephants in Africa or caribou in Alaska, and evaluating the effects of mining and logging on wildlife, among many others. All data, hardware designs, and software source code will be released to the public domain, enabling tracking collar manufacturers to include the technology within their products.
A3EM constitutes a complex cyber-physical architecture involving humans, animals, distributed sensing devices, intelligent environmental monitoring agents, and limited power and network connectivity. This intermittently connected CPS, with a power budget an order of magnitude lower than typical, calls for novel approaches with a high level of autonomy and adaptation to the physical environment. A3EM will employ a unique combination of supervised and semi-supervised embedded machine learning to identify new and unexplored event classes in a given environment, dynamically control and adjust parameters related to data acquisition and storage, opportunistically share knowledge and data between distributed sensing devices, and optimize the management of storage and communication to minimize resource needs. These methods will be evaluated through the creation of a wearable acoustic monitoring system used to support ecological applications such as enhanced wildlife protection, rare species identification, and human impact studies on animal behavior.
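The dynamic control of acquisition parameters under a tight power budget could be sketched as a duty-cycle policy driven by battery state and the novelty of recently heard audio. The policy shape, parameter names, and thresholds below are hypothetical illustrations, not A3EM's actual control logic:

```python
def next_duty_cycle(battery_frac, novelty, base=0.05, max_duty=0.5):
    """Choose the fraction of each hour to keep the microphone active.

    Spend more energy when recent audio looks novel (novelty in [0, 1],
    e.g., distance to the known event classes) and less as the battery
    drains; below 20% charge, fall back to the minimal base duty cycle.
    """
    if battery_frac < 0.2:
        return base
    duty = base + novelty * (max_duty - base) * battery_frac
    return min(duty, max_duty)

# Full battery and highly novel sounds: record up to half of every hour.
print(next_duty_cycle(battery_frac=1.0, novelty=1.0))
# Nearly depleted collar: survival mode, minimal sampling regardless of novelty.
print(next_duty_cycle(battery_frac=0.1, novelty=1.0))
```

Scaling the novelty bonus by the remaining charge makes the device progressively more conservative over a multiyear deployment, which matches the abstract's emphasis on minimizing resource needs.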
The purpose of this project is to plan and organize the 2022 National Science Foundation (NSF) Cyber-Physical Systems (CPS) Principal Investigator (PI) Meeting. This meeting convenes all PIs of the NSF CPS Program for the 13th time since the program began. The PI Meeting is to take place during the Fall of 2022 in Alexandria, Virginia. The PI meeting is an annual opportunity for NSF-sponsored CPS researchers, industry representatives, and Federal agency representatives to gather and review new CPS developments, identify new and emerging applications, and discuss technology gaps and barriers. The program agenda is community-driven and includes presentations (oral and poster) from PIs, reports of the past year's program activities, and showcases of new CPS innovations and results. This will be a hybrid PI meeting with both in-person and virtual elements, the first CPS PI meeting to include extensive in-person attendance since the pandemic. The virtual component will also enable a larger community of researchers spanning academia, industry, and Government to participate.
The annual PI Meeting serves as the only opportunity where the NSF-funded CPS Principal Investigators meet to share their research, discuss new research opportunities and challenges, and explore new ideas and partnerships for future work. Furthermore, the PI meeting is also an opportunity for the academic research community to interact with industry entities and government agencies with vested interest in CPS research and development. The PI Meeting is a forum for sharing ideas across the CPS community. It has played a major role in growing the community across a broad range of sectors and technologies, and performing outreach to others who have interest in learning about the program and participating as future proposers, transition partners, or sponsors. The 2022 PI meeting will feature lightning talks from researchers, poster sessions, special topic workshops, demonstrations and keynotes from leaders in the research community.
Frankie Denise King is the Assistant Director of the Annapolis Technical Coordination Project Office at Vanderbilt University’s Institute for Software Integrated Systems (VU-ISIS), where she is responsible for managing the coordination of collaborative R&D activities on the Cyber-Physical Systems-Virtual Organization that are sponsored by Federal agencies belonging to the Networking and Information Technology R&D (NITRD) Program. Before joining VU-ISIS, King served as the Technical Coordinator for the High Confidence Software and Systems (HCSS) Program Component Area (PCA) at the National Coordination Office (NCO) for NITRD for nearly seven years. Ms. King has over twenty-eight years of program development and management experience in domestic and international policy affairs, where she has served in high-level capacities in the executive and legislative branches of the U.S. government and the private sector. Ms. King’s work experience spans several domains, including the areas of information technology R&D, economics, agriculture, trade, and foreign assistance. Ms. King received an MA degree from the University of Notre Dame in 1984, and a BA degree from Fisk University in 1983, where she graduated Summa Cum Laude.
The purpose of this project is to plan and organize the 2024 National Science Foundation (NSF) Cyber-Physical Systems (CPS) Principal Investigator (PI) Meeting, scheduled for the Spring of 2024 in Nashville, Tennessee. This meeting convenes PIs with active CPS Program awards for the 14th time since the program began. PI Meetings are annual opportunities for CPS stakeholders, comprising NSF-sponsored CPS researchers, industry representatives, and Federal agency representatives, to gather and review new CPS developments, identify new and emerging applications, and discuss technology gaps and barriers. The 2024 program agenda is community-driven, comprising oral PI project presentations via panels and talks, poster presentations and demos of new CPS innovations and results, and networking sessions for PIs to interact. Federal agency program officers will provide status reports on the CPS program solicitation and address questions about preparing successful grant proposals. The meeting will also feature keynote presentations from invited leaders in the research community.
PI Meetings serve as the only opportunity where all of the NSF-funded CPS Principal Investigators meet to share their research, discuss new research opportunities and challenges, explore new ideas and partnerships for future work, and network over two meeting days. Specific to networking, PIs will have the opportunity to interact with industry representatives, government agency program officers, and non-governmental organizations with a vested interest in CPS research and development at networking sessions scheduled for the arrival night, throughout the two full meeting days, and after the meeting adjourns. PI Meetings are forums for sharing ideas across the CPS community and play a significant role in growing the community across a broad range of sectors and technologies, and in performing outreach to others who have an interest in learning about the program and participating as future proposers, transition partners, or sponsors.