Validation and verification are independent procedures that are used together to check that a product, service, or system meets requirements and specifications and fulfills its intended purpose.
The objective of this research is to develop advanced distributed monitoring and control systems for civil infrastructure. The approach uses a cyber-physical co-design of wireless sensor-actuator networks and structural monitoring and control algorithms. The unified cyber-physical system architecture and abstractions employ reusable middleware services to develop hierarchical structural monitoring and control systems. The intellectual merit of this multi-disciplinary research includes (1) a unified middleware architecture and abstractions for hierarchical sensing and control; (2) a reusable middleware service library for hierarchical structural monitoring and control; (3) customizable time synchronization and synchronized sensing routines; (4) a holistic energy management scheme that maps structural monitoring and control onto a distributed wireless sensor-actuator architecture; (5) dynamic sensor and actuator activation strategies to optimize for the requirements of monitoring, computing, and control; and (6) deployment and empirical validation of structural health monitoring and control systems on representative lab structures and in-service multi-span bridges. While the system constitutes a case study, it will enable the development of general principles that would be applicable to a broad range of engineering cyber-physical systems. This research will result in a reduction in the lifecycle costs and risks related to our civil infrastructure. The multi-disciplinary team will disseminate results throughout the international research community through open-source software and sensor board hardware. Education and outreach activities will be held in conjunction with the Asia-Pacific Summer School in Smart Structures Technology jointly hosted by the US, Japan, China, and Korea.
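The abstract above lists customizable time synchronization and synchronized sensing among the middleware services. A common building block for such routines is two-way message-exchange offset estimation (the same idea used by NTP- and TPSN-style protocols). The sketch below is illustrative only and is not the project's actual implementation; all names and timestamps are hypothetical.

```python
# Two-way message-exchange clock-offset estimation, a standard building
# block of wireless-sensor-network time synchronization. Illustrative
# sketch, not the project's actual routine.

def estimate_offset(t1, t2, t3, t4):
    """t1: request sent (node clock), t2: request received (reference clock),
    t3: reply sent (reference clock), t4: reply received (node clock).
    Returns (offset, round_trip_delay), assuming symmetric link delays."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # reference time minus node time
    delay = (t4 - t1) - (t3 - t2)            # total time spent on the link
    return offset, delay

# Hypothetical exchange: the node's clock runs 5 ms ahead of the reference
# and each one-way link delay is 2 ms.
off, d = estimate_offset(100.0, 97.0, 98.0, 105.0)   # off = -5.0, d = 4.0
```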
University of Illinois at Urbana-Champaign
National Science Foundation
Agha, Gul
Submitted by Gul Agha on April 7th, 2011
The objective of the research is to develop tools for comprehensive design and optimization of air traffic flow management capabilities at multiple spatial and temporal resolutions: a national airspace-wide scale and one-day time horizon (strategic time-frame); and a regional scale (of one or a few Centers) and a two-hour time horizon (tactical time-frame). The approach is to develop a suite of tools for designing complex multi-scale dynamical networks, and in turn to use these tools to comprehensively address the strategic-to-tactical traffic flow management problem. The two directions in tool development include 1) the meshed modeling/design of flow- and queueing-networks under network topology variation for cyber and physical resource allocation, and 2) large-scale network simulation and numerical analysis. This research will yield aggregate modeling, management design, and validation tools for multi-scale dynamical infrastructure networks, and comprehensive solutions for nationwide strategic-to-tactical traffic flow management using these tools. The broader impact of the research lies in the significant improvement in cost and equity that may be achieved by National Airspace System customers, and in the introduction of systematic tools for infrastructure-network design that will have impact not only in transportation but also in fields such as electric power network control and health-infrastructure design. The development of an Infrastructure Network Ideas Cluster will enhance inter-disciplinary collaboration on the project topics and discussion of their potential societal impact. Activities of the cluster include cross-university undergraduate research training, seminars on technological and societal-impact aspects of the project, and new course development.
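The flow- and queueing-network modeling mentioned above can be illustrated with a toy discrete-time model in which aircraft queue at each sector and a per-step service rate caps the outflow. This is a hypothetical sketch of the general modeling style, not the project's actual tools; the sector layout and rates are invented.

```python
# Minimal discrete-time queueing-network sketch of traffic flow:
# each sector holds a queue of aircraft, releases at most `capacity`
# per step, and feeds a downstream sector. Illustrative toy model only.

def step(queues, demand, capacity, routes):
    """queues[i]: aircraft waiting in sector i; demand[i]: new external
    arrivals; capacity[i]: max departures per step; routes[i]: index of
    the downstream sector, or None at the network boundary."""
    out = [min(q + d, c) for q, d, c in zip(queues, demand, capacity)]
    new_q = [q + d - o for q, d, o in zip(queues, demand, out)]
    for i, o in enumerate(out):
        j = routes[i]
        if j is not None:
            new_q[j] += o          # served aircraft join the downstream queue
    return new_q

# Three sectors in series; 5 aircraft enter sector 0; capacities 3, 2, 10.
q = step([0, 0, 0], [5, 0, 0], [3, 2, 10], [1, 2, None])   # [2, 3, 0]
```

The bottleneck at sector 1 (capacity 2) is what a flow-management tool would reshape by metering upstream release rates.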
Purdue University
National Science Foundation
Sun, Dengfeng
Submitted by Dengfeng Sun on April 7th, 2011
The objective of the research is to develop tools for comprehensive design and optimization of air traffic flow management capabilities at multiple spatial and temporal resolutions: a national airspace-wide scale and one-day time horizon (strategic time-frame); and a regional scale (of one or a few Centers) and a two-hour time horizon (tactical time-frame). The approach is to develop a suite of tools for designing complex multi-scale dynamical networks, and in turn to use these tools to comprehensively address the strategic-to-tactical traffic flow management problem. The two directions in tool development include 1) the meshed modeling/design of flow- and queueing-networks under network topology variation for cyber and physical resource allocation, and 2) large-scale network simulation and numerical analysis. This research will yield aggregate modeling, management design, and validation tools for multi-scale dynamical infrastructure networks, and comprehensive solutions for nationwide strategic-to-tactical traffic flow management using these tools. The broader impact of the research lies in the significant improvement in cost and equity that may be achieved by National Airspace System customers, and in the introduction of systematic tools for infrastructure-network design that will have impact not only in transportation but also in fields such as electric power network control and health-infrastructure design. The development of an Infrastructure Network Ideas Cluster will enhance inter-disciplinary collaboration on the project topics and discussion of their potential societal impact. Activities of the cluster include cross-university undergraduate research training, seminars on technological and societal-impact aspects of the project, and new course development.
Washington State University
National Science Foundation
Roy, Sandip
Submitted by Sandip Roy on April 7th, 2011
The objective of this research is to establish a foundational framework for smart grids that enables significant penetration of renewable distributed energy resources (DERs) and facilitates flexible deployment of plug-and-play applications, similar to the way users connect to the Internet. The approach is to view overall grid management as an adaptive optimizer that iteratively solves a system-wide optimization problem, where networked sensing, control, and verification carry out distributed computation tasks to achieve reliability at all levels, namely the component, system, and application levels. Intellectual merit. Under the common theme of reliability guarantees, distributed monitoring and inference algorithms will be developed to perform fault diagnosis and operate resiliently against all hazards. To attain high reliability, a trustworthy middleware will be used to shield the grid system design from the complexities of the underlying software world while providing services to grid applications through message passing and transactions. Further, selective load/generation control using Automatic Generation Control, based on multi-scale state estimation of energy supply and demand, will be carried out to guarantee that load and generation in the system remain balanced. Broader impact. The envisioned architecture of the smart grid is an outstanding example of CPS technology. Building on this critical application study, this collaborative effort will pursue a CPS architecture that enables embedding intelligent computation, communication, and control mechanisms into physical systems with active and reconfigurable components. Close collaboration between this team and major EMS and SCADA vendors will pave the path for technology transfer via proof-of-concept demonstrations.
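The Automatic Generation Control (AGC) mentioned above can be sketched as an integral-style controller that drives the Area Control Error (ACE) toward zero by adjusting generation set-points. The following is a textbook illustration with invented numbers (set-point, bias, gain), not the project's design.

```python
# Illustrative AGC loop: ACE combines interchange deviation and a
# frequency-bias term; an integral-style update nudges generation
# set-points to rebalance load and generation. Textbook sketch only.

def ace(delta_f, net_interchange_error, bias):
    """ACE = interchange deviation + frequency bias * frequency deviation.
    Positive ACE indicates over-generation in this sign convention."""
    return net_interchange_error + bias * delta_f

def agc_step(setpoint, ace_value, gain=0.5):
    """Lower generation when ACE > 0, raise it when ACE < 0."""
    return setpoint - gain * ace_value

sp = 100.0                       # MW set-point (hypothetical unit)
a = ace(0.02, 0.0, 50.0)         # over-frequency of 0.02 Hz, bias 50 MW/Hz
sp = agc_step(sp, a)             # set-point reduced from 100.0 to 99.5 MW
```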
University of Illinois at Urbana-Champaign
National Science Foundation
Kumar, Panganamala
Submitted by Panganamala Kumar on April 7th, 2011
The objective of this research is to study properties of cooperative multi-agent systems, such as stability, performance, and robustness. Multi-agent systems such as vehicle platoons and coupled oscillators can display emergent behavior that is difficult to predict from the behavior of individual subsystems. The approach is to develop and extend the theory of fundamental design limitations to cover multi-agent systems that communicate over both physical and virtual communication links. The theory will further describe known phenomena, such as string instability, and extend the analysis to other systems, such as harmonic oscillators. The theory will be tested and validated in the Michigan Embedded Control Systems Laboratory. The intellectual merit of the proposed research will be the development of tools that delineate tradeoffs between performance and feedback properties for control systems involving mixes of human and computer agents and classes of hardware dynamics, controllers, and network topologies. The contribution to system behavior of each agent's realization in hardware (constrained by Newton's laws) and realization in software and communications (subject to the constraints discovered by Shannon and Bode) will be assessed. The broader impacts of the proposed research include a significant impact on teaching, both at the University of Michigan and at ETH Zurich. At each school, popular teaching laboratories allow over 100 students per year, from diverse backgrounds, to learn concepts from the field of embedded networked distributed control systems. New families of haptic devices will enable the research to be transferred into these teaching laboratories.
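The string instability cited above can be seen in the classic predecessor-following platoon: with PD feedback on the gap to the car ahead, the spacing-error propagation transfer function G(s) = (kd*s + kp) / (s^2 + kd*s + kp) has |G(jw)| > 1 at low frequencies, so spacing errors amplify from vehicle to vehicle. This is a standard illustration of the phenomenon, not the project's own analysis; the gains kp, kd are arbitrary.

```python
# String instability check for a predecessor-following platoon with
# PD gap control. Errors propagate through
#   G(s) = (kd*s + kp) / (s^2 + kd*s + kp),
# and string stability would require |G(jw)| <= 1 for all w. A short
# calculation shows |G(jw)| > 1 for 0 < w < sqrt(2*kp), so errors grow
# down the string. Illustrative example only; gains chosen arbitrarily.
import math

def gain(kp, kd, w):
    """|G(jw)| for G(s) = (kd*s + kp) / (s^2 + kd*s + kp)."""
    num = math.hypot(kp, kd * w)            # |kd*jw + kp|
    den = math.hypot(kp - w * w, kd * w)    # |(jw)^2 + kd*jw + kp|
    return num / den

g = gain(1.0, 1.0, 1.0)   # sqrt(2) > 1: amplification, i.e. string instability
```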
University of Michigan Ann Arbor
National Science Foundation
Gillespie, Richard
Freudenberg, James
Submitted by James Freudenberg on April 7th, 2011
Vehicle automation has progressed from systems that monitor the operation of a vehicle, such as antilock brakes and cruise control, to systems that sense adjacent vehicles, such as emergency braking and intelligent cruise control. The next generation of systems will share sensor readings and collaborate to control braking operations by looking several cars ahead or by creating safe gaps for merging vehicles. Before we allow collaborative systems on public highways, we must prove that they will do no harm, even when multiple rare events occur. These events include loss of communications, failures or inaccuracies of sensors, mechanical failures in the automobile, aggressive drivers who are not participating in the system, and unusual obstacles or events on the roadways. The set of rules that governs the interaction between vehicles is a protocol. There is a large body of work on verifying the correctness of communications protocols and testing that different implementations of a protocol will interact properly. However, it is difficult to apply these techniques to protocols for collaborative driving systems because they are much more complex: 1) they interact with the physical world in more ways, through a network of sensors and the physical operation of the automobile as well as the communications channel; 2) they perform time-critical operations that use multiple timers; and 3) they may have more parties participating. In [1] we verified that a three-party protocol that assists a driver who wants to merge between two cars in an adjacent lane will not cause an accident under combinations of rare events. The verification uses a probabilistic sequence testing technique [2] that was developed for communications protocols. We were only able to use the communications technique after designing and specifying the collaborative driving protocol in a particular way.
We have generalized the techniques used in the earlier work so that we can design collaborative driving protocols that can be verified. We use 1) a non-layered architecture, 2) a new class of protocols based upon time-synchronized participants, and 3) a data management rule. 1) Communications protocols use a layered architecture, in which protocol complexity is reduced by using the services provided by a lower layer. The layered architecture is not sufficient for collaborative driving protocols because they operate over multiple physical platforms. Instead, we define a smoke-stack architecture that is interconnected. 2) The operation of protocols with multiple timers is more difficult to analyze because there are different sequences of operations depending on the relative times at which the timers are initiated. Instead of using timers, we design protocols that use absolute time. This is reasonable because of the accurate time acquired from GPS and the accuracy of current clocks when GPS is not available. 3) Finally, in order for programs in different vehicles to make the same decisions, they must use the same data. Our design merges the readings of sensors in different vehicles and uses a communications protocol that guarantees that all vehicles have the same sequence of messages and only use the messages that all vehicles have acquired.
[1] Bohyun Kim, N. F. Maxemchuk, "A Safe Driver Assisted Merge Protocol," IEEE Systems Conference 2012, 19-22 Mar. 2012, Vancouver, BC, Canada, pp. 1-4.
[2] N. F. Maxemchuk, K. K. Sabnani, "Probabilistic Verification of Communication Protocols," Distributed Computing Journal, Springer Verlag, no. 3, Sept. 1989, pp. 118-129.
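The data management rule described above can be sketched as a common-prefix rule: each vehicle acts only on the prefix of the message sequence that every participant has acknowledged, so all vehicles decide from identical data. This is a hypothetical simplification of the protocol design; the function and vehicle names are invented, and a real system would also handle retransmission and deadlines over absolute (GPS) time.

```python
# Common-prefix rule for consistent decisions across vehicles: only the
# messages acknowledged by ALL participants are eligible for use, so every
# vehicle computes from the same data. Hypothetical sketch, not the
# protocol of [1].

def common_prefix_len(acked):
    """acked[v] = number of messages vehicle v has confirmed receiving.
    The slowest participant determines the usable prefix."""
    return min(acked.values())

messages = ["m0", "m1", "m2", "m3"]
acked = {"carA": 4, "carB": 3, "carC": 2}
usable = messages[:common_prefix_len(acked)]   # ["m0", "m1"]: safe to act on
```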
Columbia University
National Science Foundation
Maxemchuk, Nicholas
Submitted by Nicholas Maxemchuk on April 7th, 2011
This award supports the Third IFIP Working Conference on "Verified Software: Theories, Tools, and Experiments" (VSTTE 2010), August 16-19, 2010, hosted by Heriot-Watt University, Edinburgh, Scotland. The construction of reliable software poses one of the most significant scientific and engineering challenges of the 21st century. Professor Tony Hoare of Microsoft Research has proposed the creation of a program verifier as a grand challenge for computer science and outlined an international program of research combining many disciplines, such as the theory and implementation of programming languages, formal methods, program analysis, and automated theorem proving. The VSTTE conference series was established by the research community in response to this challenge. The VSTTE 2010 program includes two workshops focusing on the areas of (1) theories and (2) tools and experiments. This award is enabled through support provided by the NITRD High Confidence Software and Systems (HCSS) Interagency Coordinating Group.
SRI International
National Science Foundation
Shankar, Natarajan
Submitted by Natarajan Shankar on April 7th, 2011
The objective of this research is to develop a framework for the development and deployment of next-generation medical systems consisting of integrated and cooperating medical devices. The approach is to design and implement an open-source medical device coordination framework and a model-based, component-oriented programming methodology for device coordination, supported by a formal framework for reasoning about device behaviors and clinical workflows. The intellectual merit of the project lies in the formal foundations of the framework, which will enable rapid development, verification, and certification of medical systems and their device components, as well as the clinical scenarios they implement. The model-based approach will supply evidence for the regulatory approval process, while run-time monitoring components embedded into the system will enable "black box" recording capabilities for the forensic analysis of system failures. The open-source distribution of tools supporting the framework will enhance its adoption and technology transfer. A rigorous framework for integrating and coordinating multiple medical devices will enhance the implementation of complicated clinical scenarios and reduce medical errors in cases that involve such scenarios. Furthermore, it will speed up and simplify the process of regulatory approval for coordination-enabled medical devices, while the formal reasoning framework will improve confidence in the design process and in the approval decisions. Overall, the framework will help reduce costs and improve the quality of health care.
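The "black box" run-time monitoring mentioned above can be sketched as a monitor that keeps a timestamped event log for forensics and flags violations of a clinical-workflow rule. The rule, device names, and class below are hypothetical illustrations, not part of the project's framework, whose monitors are model-based and formally specified.

```python
# Sketch of a "black box" run-time monitor: it records every device event
# (the forensic log) and checks one invented safety rule — an infusion pump
# must not run while the ventilator reports apnea. Illustrative only.
import time

class BlackBoxMonitor:
    def __init__(self):
        self.log = []                     # forensic record of all events

    def record(self, device, event):
        self.log.append((time.time(), device, event))

    def violation(self):
        """True if the latest per-device states form an unsafe combination."""
        state = {d: e for _, d, e in self.log}   # last event wins per device
        return state.get("pump") == "infusing" and state.get("ventilator") == "apnea"

m = BlackBoxMonitor()
m.record("ventilator", "apnea")
m.record("pump", "infusing")
alarm = m.violation()        # True: the unsafe combination is caught
```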
Kansas State University
National Science Foundation
Hatcliff, John
Submitted by John Hatcliff on April 7th, 2011
The objective of this research is to develop energy-efficient integrity establishment techniques for dynamic networks of cyber physical devices. In such dynamic networks, devices connect opportunistically and perform general-purpose computations on behalf of other devices. However, some devices may be malicious in intent and affect the integrity of computation. The approach is to develop new trust establishment mechanisms for dynamic networks. Existing trusted computing mechanisms are not directly applicable to cyber physical devices because they are resource-intensive and require devices to have special-purpose hardware. This project is addressing these problems along three research prongs. The first is a comprehensive study of the resource bottlenecks in current trust establishment protocols. Second, the insights from this study are being used to develop resource-aware attestation protocols for cyber physical devices that are equipped with trusted hardware. Third, the project is developing new trust establishment protocols for cyber physical devices that may lack trusted hardware. A key outcome of the project is an improved understanding of the tradeoffs needed to balance the concerns of security and resource-awareness in dynamic networks. Dynamic networks allow cyber physical devices to form a highly-distributed, cloud-like infrastructure for computations involving the physical world. The trust-establishment mechanisms developed in this project encourage devices to participate in dynamic networks, thereby unleashing the full potential of dynamic networks. This project includes development of dynamic networking applications, such as distributed gaming and social networking, in undergraduate curricula and course projects, thereby fostering the participation of this key demographic.
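The attestation protocols discussed above follow a challenge-response shape that can be sketched with a nonce and a keyed MAC over the device's firmware image. This is only an illustration of the protocol shape under a shared-key assumption; real trusted-hardware attestation uses TPM quotes over signed platform measurements, and the key, names, and firmware bytes below are invented.

```python
# Minimal challenge-response attestation sketch: the verifier sends a fresh
# nonce; the device returns an HMAC over (nonce + firmware) keyed with a
# shared secret. A matching response indicates unmodified firmware, under
# the assumption that only an intact device knows the key. Illustrative
# only — not the project's protocols, and not hardware-rooted attestation.
import hmac, hashlib, os

SECRET = b"device-shared-key"   # hypothetical key provisioned at manufacture

def attest(firmware: bytes, nonce: bytes) -> bytes:
    """Device side: prove possession of the expected firmware image."""
    return hmac.new(SECRET, nonce + firmware, hashlib.sha256).digest()

def verify(expected_fw: bytes, nonce: bytes, response: bytes) -> bool:
    """Verifier side: recompute the expected response and compare safely."""
    return hmac.compare_digest(attest(expected_fw, nonce), response)

fw = b"firmware-v1"
nonce = os.urandom(16)                               # freshness vs. replay
ok = verify(fw, nonce, attest(fw, nonce))            # True for intact firmware
bad = verify(fw, nonce, attest(b"tampered", nonce))  # False for modified code
```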
Rutgers University New Brunswick
National Science Foundation
Kremer, Ulrich
Ganapathy, Vinod
Submitted by Vinod Ganapathy on April 7th, 2011
The objective of this research is to address fundamental challenges in the verification and analysis of reconfigurable distributed hybrid control systems. These arise frequently whenever control decisions for a continuous plant depend on the actions and state of other participants, and they are not supported by verification technology today. The approach advocated here is to develop strictly compositional, proof-based verification techniques to close this analytic gap in cyber-physical system design and to overcome scalability issues. This project develops techniques using symbolic invariants for differential equations to address the analytic gap between nonlinear applications and present verification techniques for linear dynamics. This project aims at transformative research that changes the scope of systems that can be analyzed. The proposed research develops a compositional, proof-based approach to hybrid systems verification, in contrast to the dominant automata-based verification approaches. It represents a major improvement addressing the challenges of composition, reconfiguration, and nonlinearity in system models. The proposed research has significant applications in the verification of safety-critical properties in next-generation cyber-physical systems. These range from distributed car control, robotic swarms, and unmanned aerial vehicle cooperation schemes to full collision avoidance protocols for multiple aircraft. Analysis tools for distributed hybrid systems have a broad range of applications of varying degrees of safety-criticality, validation cost, and operative risk. Analytic techniques that find bugs or ensure correct functioning can save lives and money, and therefore are likely to have substantial economic and societal impact.
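The symbolic invariants for differential equations mentioned above can be illustrated with a textbook differential-invariant argument: to show V(x, y) = x^2 + y^2 stays constant along the dynamics x' = y, y' = -x, it suffices that the Lie derivative dV/dt = (dV/dx)x' + (dV/dy)y' = 2xy + 2y(-x) = 0 everywhere, with no need to solve the ODE. This toy check is only an illustration of the style of reasoning, not the project's compositional proof calculus.

```python
# Differential-invariant check for V = x^2 + y^2 under x' = y, y' = -x.
# The Lie derivative of V along the vector field is
#   2*x*(x') + 2*y*(y') = 2*x*y - 2*y*x = 0,
# so V is invariant along every trajectory without solving the ODE.
# Toy example of invariant-based reasoning for differential equations.

def lie_derivative(x, y):
    dVdx, dVdy = 2 * x, 2 * y     # gradient of V = x^2 + y^2
    fx, fy = y, -x                # vector field: x' = y, y' = -x
    return dVdx * fx + dVdy * fy

# Vanishes at every sample point (and, by the algebra above, everywhere):
vals = [lie_derivative(x / 10, y / 10) for x in range(-5, 6) for y in range(-5, 6)]
```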
Carnegie-Mellon University
National Science Foundation
Platzer, Andre
Submitted by Andre Platzer on April 7th, 2011
Subscribe to Validation and Verification