NIST Cyber-Physical Systems Public Working Group
What are Cyber-Physical Systems or CPS?
- Is a CPS any engineered system with a microprocessor?
- Do all CPS need to be connected to the internet?
- Are there a set of basic functions and architectural elements common to all CPS?
You are invited to join us in answering these questions and charting the path to the future.
The National Science Foundation’s (NSF) Directorate for Computer and Information Science and Engineering (CISE) and Intel Labs recently announced a new partnership to support novel, transformative, multidisciplinary approaches that address the problem of securing current and emerging cyber-physical systems, the infrastructures they form, and those integrated with them. A key goal of this activity is to foster a long-term research community committed to advancing research and education at the confluence of cybersecurity and cyber-physical systems.
1st ACM International Conference on Embedded Systems for Energy-Efficient Buildings
co-located with ACM SenSys 2014
The RIPS information webcast, which was postponed due to the recent weather-related Federal Government closure, has been rescheduled for Tuesday, January 28th at 11am to discuss the RIPS program and answer questions about the solicitation. Please click here to view more information concerning this webcast, including the webcast slides.
The potential economic and societal impacts of realizing fully autonomous cyber-physical systems (CPS) are astounding. If the Federal Aviation Administration (FAA) allows integration of unmanned aerial vehicles (UAVs) into the national civilian airspace, the private-sector drone industry is estimated to generate more than 100K high-paying technical jobs over a ten-year span and contribute $82B to the U.S. economy. Self-driving cars are predicted to annually prevent 5M accidents and 2M injuries, conserve 7B liters of fuel, and save 30K lives and $190B in healthcare costs associated with accidents in the U.S. Successful mission pursuit by such fully autonomous CPS hinges on possessing full situational awareness, including precise knowledge of the system's own location. Current CPS are far from possessing this capability, particularly in dynamic, uncertain, poorly modeled environments where GPS coverage may be spotty, obscured, or otherwise impaired. This necessitates developing a coherent analytical foundation for this emerging class of CPS, in which situational awareness and mission planning and execution are intertwined and must be considered simultaneously to address uncertainty and model mismatch and to compensate for potential GPS coverage gaps.
This project has four main objectives: (1) Analyze the observability of unknown dynamic, stochastic environments comprising multiple agents. This analysis will establish the minimum a priori knowledge needed about the environment and/or agents for stochastic observability. (2) Develop adaptation strategies to refine the agents' models of the environment, on-the-fly, as the agents build spatiotemporal maps. Adaptation is crucial, since it is impractical to assume that agents have high-fidelity models describing the environment. (3) Design optimal, computationally efficient information fusion algorithms with performance guarantees. These algorithms will consider physically realistic nonlinear dynamics and observations with colored, non-Gaussian noise, commonly encountered in CPS. (4) Synthesize optimal, real-time decision making strategies to balance the potentially conflicting objectives of information gathering and mission fulfillment. This investigation will enable autonomous CPS to navigate complex tradeoffs, leading to autonomous identification and adoption of the optimal strategy.
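Objective (3) calls for fusion algorithms that handle nonlinear dynamics and non-Gaussian observation noise; a bootstrap particle filter is one standard technique in that setting. The sketch below is illustrative only: the 1-D model, the Laplace-style likelihood, and the particle count are assumptions for demonstration, not the project's actual algorithms.

```python
import math
import random

random.seed(0)

def bootstrap_particle_filter(observations, n_particles=500):
    """Minimal bootstrap particle filter for a hypothetical 1-D model.

    Assumed model (for illustration only):
      state:       x_t = 0.5*x_{t-1} + 8*sin(t) + Gaussian process noise
      observation: z_t = x_t**2 / 20  + heavy-tailed (Laplace-like) noise
    """
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for t, z in enumerate(observations, start=1):
        # Propagate each particle through the nonlinear dynamics.
        particles = [0.5 * x + 8.0 * math.sin(t) + random.gauss(0.0, 1.0)
                     for x in particles]
        # Weight by a Laplace likelihood (a stand-in for non-Gaussian noise).
        weights = [math.exp(-abs(z - x * x / 20.0)) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # State estimate: weighted mean of the particle cloud.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to avoid weight degeneracy.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# Simulate a short observation sequence from the same assumed model.
true_x, obs = 0.0, []
for t in range(1, 21):
    true_x = 0.5 * true_x + 8.0 * math.sin(t) + random.gauss(0.0, 1.0)
    obs.append(true_x ** 2 / 20.0 + random.gauss(0.0, 0.5))

est = bootstrap_particle_filter(obs)
print(len(est))  # one state estimate per observation
```

The squared observation makes the posterior bimodal, which is exactly the kind of structure that defeats Kalman-style filters and motivates sampling-based fusion.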
This research has far-reaching impact: it will evolve autonomous CPS from merely sensing the environment to making sense of the environment, bringing new capabilities to environments where direct human control is not physically or economically possible. The project has a vertically-integrated education plan spanning K-12, undergraduate, and graduate students. The project will engage economically disadvantaged middle and high school students with the same UAV testbed used for research verification. Also, research outcomes will be infused into new and existing undergraduate and graduate courses.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Autonomous driving is on the verge of revolutionizing the transportation system and significantly improving the well-being of people. An autonomous vehicle relies on multiple sensors and AI algorithms to facilitate sensing and perception for navigating the world. As the automotive industry primarily focuses on increasing autonomy levels and enhancing perception performance in mainly benign environments, the security and safety of perception technologies against physical attacks have yet to be thoroughly investigated. Specifically, adversaries creating physical-world perceptual illusions may pose a significant threat to the sensing and learning systems of autonomous vehicles, potentially undermining trust in these systems. This research project aims to deepen our understanding of the security and safety risks under physical attacks. The project endeavors to bolster sensing and learning resilience in autonomous driving against malicious perceptual illusion attacks. The success of the project will significantly advance the security and safety of autonomous driving in the face of emerging physical-world threats, paving the way for the safe deployment of autonomous vehicles in next-generation transportation systems.
The goal of this project is to investigate advanced sensing and learning technologies to enhance the precision and robustness of autonomous driving in intricate and hostile environments. The team's approach includes: (i) a comprehensive framework to evaluate key vulnerabilities in software/hardware components of autonomous driving systems and devise effective attack vectors for generating false and deceptive perceptions; (ii) a real-time super-resolution radar sensing technology and a data fusion approach that integrates features from various sensor types at both the middle and late stages to effectively bolster the robustness of each sensing modality against illusions; and (iii) a systematic framework to enhance the algorithmic generality and achieve robust perception against multi-modal attacks using multi-view representation learning. The presented solutions will undergo rigorous testing using simulations and experiments to validate their effectiveness and robustness. These solutions contribute to the development of more secure and robust autonomous driving systems, capable of withstanding perceptual illusion attacks in real-world scenarios. The project will also offer research training opportunities for underrepresented students across diverse levels and age groups. The resulting novel technology will be shared as open-source for broader dissemination and advancement of the knowledge developed through this project.
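The middle- and late-stage fusion mentioned in (ii) can be sketched in miniature: feature-level (middle) fusion concatenates per-sensor features so a downstream model sees all modalities jointly, while decision-level (late) fusion averages per-sensor confidences so a single spoofed modality is outvoted by the others. All modality names, scores, and weights below are hypothetical placeholders, not the project's actual pipeline.

```python
def middle_fusion(features_by_modality):
    """Middle (feature-level) fusion: concatenate per-sensor feature
    vectors in a fixed modality order for a downstream model."""
    fused = []
    for name in sorted(features_by_modality):
        fused.extend(features_by_modality[name])
    return fused

def late_fusion(scores_by_modality, weights=None):
    """Late (decision-level) fusion: weighted average of per-sensor
    confidence scores; one fooled modality is dampened by the rest."""
    names = sorted(scores_by_modality)
    if weights is None:
        weights = {n: 1.0 / len(names) for n in names}  # equal weights
    return sum(weights[n] * scores_by_modality[n] for n in names)

# Hypothetical scenario: the camera is fooled by an illusion (score 0.9
# for a fake obstacle) but radar and lidar disagree, so the fused
# confidence stays well below the camera's spoofed score.
scores = {"camera": 0.9, "radar": 0.1, "lidar": 0.15}
print(round(late_fusion(scores), 3))  # → 0.383
```

A real system would learn the fusion weights and operate on high-dimensional features rather than scalar scores, but the robustness argument (no single modality decides alone) is the same.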
This project aims to radically transform traffic management, emergency response, and urban planning practices via predictive analytics on rich data streams from increasingly prevalent instrumented and connected vehicles, infrastructure, and people. Road safety and congestion are a formidable challenge for communities. Current incident management practices are largely reactive, responding to road user reports. With the outcome of this project, cities could proactively deploy assets and manage traffic. This would reduce emergency response times, save lives, and minimize disruptions to traffic. Efforts are planned in Kindergarten-12 outreach, undergraduate education, outreach to women and minority students, and incorporation of the research into courses, with the goal of inspiring and training a diverse cohort for the next generation of scientists and preparing them to take on challenges arising from smart and connected communities.
To realize the envisioned system, an integrated research approach is taken to tackle the following closely related research tasks: (1) integration of heterogeneous data streams using a new sparse multi-task multi-view feature fusing method; (2) prediction of traffic incidents by designing a novel high-order low-rank model; (3) teaming of connected vehicles and roadside sensor systems; (4) verification of traffic condition prediction by crowdsourcing the ground truth from user reports in real-time; (5) selection of crowdsourcing participants that recruits and selects voluntary operators of instrumented connected vehicles to provide onboard sensing readings; (6) selection of high quality and diverse images and videos from crowdsourcing vehicles to provide better data for traffic prediction; and (7) design of optimal rerouting strategies to improve commuters' routes in times of potential traffic disruption.
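Task (7), rerouting around predicted disruptions, can be illustrated with a standard shortest-path search in which road segments flagged as likely incidents have their travel times penalized, steering commuters onto alternate routes before congestion materializes. The toy road graph, the multiplicative penalty, and the function names below are illustrative assumptions, not the project's actual strategy.

```python
import heapq

def shortest_route(graph, src, dst, incident_edges=(), penalty=10.0):
    """Dijkstra's shortest path on a weighted road graph. Edges flagged
    as predicted incidents have their travel time multiplied by
    `penalty`, so the search prefers routes that avoid them."""
    incident_edges = set(incident_edges)
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            cost = w * (penalty if (u, v) in incident_edges else 1.0)
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking predecessors back from dst.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy network: A→B→D is fastest (2.0) until an incident is predicted
# on segment (A, B), after which A→C→D (3.5) wins.
graph = {"A": [("B", 1.0), ("C", 2.0)], "B": [("D", 1.0)], "C": [("D", 1.5)]}
print(shortest_route(graph, "A", "D"))                               # → ['A', 'B', 'D']
print(shortest_route(graph, "A", "D", incident_edges={("A", "B")}))  # → ['A', 'C', 'D']
```

The multiplicative penalty is a crude stand-in for the incident-probability-weighted delay a learned model (such as the low-rank predictor in task (2)) would supply.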