Applications of CPS technologies that involve communications systems.
The objective of this research is to develop methods for the operation and design of cyber-physical systems in general, and energy-efficient buildings in particular. The approach is to use an integrated framework: create models of complex systems from data; then design the associated sensing-communication-computation-control system; and finally create distributed estimation and control algorithms, along with execution platforms to implement these algorithms. A special emphasis is placed on adaptation. In particular, buildings and their environments change with time, as does the way in which buildings are used. The system must be designed to detect and respond to such changes.
The proposed research brings together ideas from control theory, dynamical systems, stochastic processes, and embedded systems to address design and operation of complex cyber physical systems that were previously thought to be intractable. These approaches provide qualitative understanding of system behavior, algorithms for control, and their implementation in a networked execution platform. Insights gained by the application of model reduction and adaptation techniques will lead to significant developments in the underlying theory of modeling and control of complex systems.
The research is expected to directly impact US industry through the development of integrated software-hardware solutions for smart buildings. Collaborations with United Technologies Research Center are planned to enhance this impact. The techniques developed are expected to apply to other complex cyber-physical systems with uncertain dynamics, such as the electric power grid. The project will enhance engineering education through the introduction of cross-disciplinary courses.
University of Florida
National Science Foundation
Alberto Speranzon
Barooah, Prabir
Submitted by Prabir Barooah on October 31st, 2011
The objective of this research is to address issues related to the platform revolution leading to a third generation of networked control systems. The approach is to address four fundamental issues: (i) How to provide delay guarantees over communication networks to support networked control? (ii) How to synchronize clocks over networks so as to enable consistent and timely control actions? (iii) What is an appropriate architecture to support mechanisms for reliable yet flexible control system design? (iv) How to provide cross-domain proofs of proper performance in both the cyber and physical domains?
Intellectual Merit: Currently, neither theory nor networking protocols provide solutions for communication with delay constraints. Coordination by time is fundamental to the next generation of event- and time-driven systems that cyber-physical systems constitute. Managing delays and timing within the architecture is fundamental to cyber-physical systems.
Broader Impact: Process, aerospace, and automotive industries rely critically on feedback control loops. Any platform revolution will have major consequences. Enabling control over networks will give rise to new large scale applications, e.g., the grand challenge of developing zero-fatality highway systems, by networking cars traveling on a highway. This research will train graduate students on this new technology of networked control. The Convergence Lab (i) has employed minority undergraduate students, including a Ron McNair Scholar, as well as other undergraduate and high school researchers, (ii) hosts hundreds of high/middle/elementary school students annually in Engineering Open House. The research results will be presented at conferences and published in open literature.
University of Illinois at Urbana-Champaign
National Science Foundation
Kumar, Panganamala
Submitted by Panganamala Kumar on October 31st, 2011
Project
CPS: Medium: Collaborative Research: Dependability Techniques for Instrumented Cyber-Physical Spaces
The goal of this project is to develop a semantic foundation, cross-layer system architecture, and adaptation services to improve dependability in instrumented cyber-physical spaces (ICPS) based on the principles of "computational reflection". ICPSs integrate a variety of sensing devices to create a digital representation of the evolving physical world and its processes for use by applications such as critical infrastructure monitoring, surveillance, and incident-site emergency response. This requires the underlying systems to be dependable despite disruptions caused by failures in sensing, communications, and computation. The digital state representation guides a range of adaptations at different layers of the ICPS (i.e., networking, sensing, applications, cross-layer) to achieve end-to-end dependability at both the infrastructure and information levels. Examples of techniques explored include mechanisms for reliable information delivery over multi-networks, quality-aware data collection, semantic sensing, and reconfiguration using overlapping capabilities of heterogeneous sensors. Such adaptations are driven by a formal-methods-based runtime analysis of system components, resource availability, and application dependability needs. Responsphere, a real-world ICPS infrastructure on the University of California, Irvine campus, will serve as a testbed for development and validation of the overall "reflective" approach and the cross-layer adaptation techniques to achieve dependability. Students at different levels (graduate, undergraduate, K-12) will be given opportunities to gain experience with using and designing real-world applications in the Responsphere ICPS via courses, independent study projects, and demonstration sessions. Students will benefit tremendously from exposure to new software development paradigms for the ICPSs that will be a part of future living environments.
University of California-Irvine
National Science Foundation
Venkatasubramanian, Nalini
Submitted by Nalini Venkatasubramanian on April 7th, 2011
This Rapid Response Research (RAPID) project is developing technology for ubiquitous event reporting and data gathering on the 2010 oil spill in the Gulf of Mexico and its ecological impacts. Traditional applications for monitoring disasters have relied on specialized, tightly-coupled, and expensive hardware and software platforms to capture, aggregate, and disseminate information on affected areas. We lack science and technology for rapid and dependable integration of computing and communication technology into natural and engineered physical systems, i.e., cyber-physical systems (CPS). The tragic Gulf oil spill of 2010 presents both a compelling need to fill this gap in research and a critical opportunity to help in relief efforts by deploying cutting-edge CPS research in the field. In particular, this CPS research is developing a cloud-supported mobile CPS application enabling community members to contribute as citizen scientists through sensor deployments and direct recording of events and ecological impacts of the Gulf oil spill, such as fish and bird kills. The project exploits the availability of smartphones (with sophisticated sensor packages, high-level programming APIs, and multiple network connectivity options) and cloud computing infrastructures that enable collecting and aggregating data from mobile applications. The goal is to develop a scientific basis for managing the quality-of-service (QoS), user coordination, sensor data dissemination, and validation issues that arise in mobile CPS disaster monitoring applications. The research will have many important broader impacts related to the Gulf oil spill disaster relief efforts, including providing help for the affected Gulf communities as they field and evaluate next-generation CPS research and build a sustained capability for capturing large snapshots of the ecological impact of the Gulf oil spill.
The resulting environmental data will have lasting value for evaluating the consequences of the spill in multiple research fields, but especially in Marine Biology. The project is collaborating with Gulf area K-12 schools to integrate disaster and ecology monitoring activities into their curricula. The technologies developed (resource optimization techniques, data reporting protocol trade-off analysis, and empirical evaluation of social network coordination strategies for an open data environment) will provide a resource for the CPS research community. It is expected that project results will enable future efforts to create and validate CPS disaster response systems that can scale to hundreds of thousands of users and operate effectively in life-critical situations with scarce network and computing resources.
Vanderbilt University
National Science Foundation
Schmidt, Douglas
This Rapid Response Research (RAPID) project is developing technology for ubiquitous event reporting and data gathering on the 2010 oil spill in the Gulf of Mexico and its ecological impacts. Traditional applications for monitoring disasters have relied on specialized, tightly-coupled, and expensive hardware and software platforms to capture, aggregate, and disseminate information on affected areas. We lack science and technology for rapid and dependable integration of computing and communication technology into natural and engineered physical systems, i.e., cyber-physical systems (CPS). The tragic Gulf oil spill of 2010 presents both a compelling need to fill this gap in research and a critical opportunity to help in relief efforts by deploying cutting-edge CPS research in the field. In particular, this CPS research is developing a cloud-supported mobile CPS application enabling community members to contribute as citizen scientists through sensor deployments and direct recording of events and ecological impacts of the Gulf oil spill, such as fish and bird kills. The project exploits the availability of smartphones (with sophisticated sensor packages, high-level programming APIs, and multiple network connectivity options) and cloud computing infrastructures that enable collecting and aggregating data from mobile applications. The goal is to develop a scientific basis for managing the quality-of-service (QoS), user coordination, sensor data dissemination, and validation issues that arise in mobile CPS disaster monitoring applications. The research will have many important broader impacts related to the Gulf oil spill disaster relief efforts, including providing help for the affected Gulf communities as they field and evaluate next-generation CPS research and build a sustained capability for capturing large snapshots of the ecological impact of the Gulf oil spill.
The resulting environmental data will have lasting value for evaluating the consequences of the spill in multiple research fields, but especially in Marine Biology. The project is collaborating with Gulf area K-12 schools to integrate disaster and ecology monitoring activities into their curricula. The technologies developed (resource optimization techniques, data reporting protocol trade-off analysis, and empirical evaluation of social network coordination strategies for an open data environment) will provide a resource for the CPS research community. It is expected that project results will enable future efforts to create and validate CPS disaster response systems that can scale to hundreds of thousands of users and operate effectively in life-critical situations with scarce network and computing resources.
University of Alabama Tuscaloosa
National Science Foundation
Gray, Jeffrey
This Rapid Response Research (RAPID) project is developing technology for ubiquitous event reporting and data gathering on the 2010 oil spill in the Gulf of Mexico and its ecological impacts. Traditional applications for monitoring disasters have relied on specialized, tightly-coupled, and expensive hardware and software platforms to capture, aggregate, and disseminate information on affected areas. We lack science and technology for rapid and dependable integration of computing and communication technology into natural and engineered physical systems, i.e., cyber-physical systems (CPS). The tragic Gulf oil spill of 2010 presents both a compelling need to fill this gap in research and a critical opportunity to help in relief efforts by deploying cutting-edge CPS research in the field. In particular, this CPS research is developing a cloud-supported mobile CPS application enabling community members to contribute as citizen scientists through sensor deployments and direct recording of events and ecological impacts of the Gulf oil spill, such as fish and bird kills. The project exploits the availability of smartphones (with sophisticated sensor packages, high-level programming APIs, and multiple network connectivity options) and cloud computing infrastructures that enable collecting and aggregating data from mobile applications. The goal is to develop a scientific basis for managing the quality-of-service (QoS), user coordination, sensor data dissemination, and validation issues that arise in mobile CPS disaster monitoring applications. The research will have many important broader impacts related to the Gulf oil spill disaster relief efforts, including providing help for the affected Gulf communities as they field and evaluate next-generation CPS research and build a sustained capability for capturing large snapshots of the ecological impact of the Gulf oil spill.
The resulting environmental data will have lasting value for evaluating the consequences of the spill in multiple research fields, but especially in Marine Biology. The project is collaborating with Gulf area K-12 schools to integrate disaster and ecology monitoring activities into their curricula. The technologies developed (resource optimization techniques, data reporting protocol trade-off analysis, and empirical evaluation of social network coordination strategies for an open data environment) will provide a resource for the CPS research community. It is expected that project results will enable future efforts to create and validate CPS disaster response systems that can scale to hundreds of thousands of users and operate effectively in life-critical situations with scarce network and computing resources.
Virginia Polytechnic Institute and State University
National Science Foundation
White, Christopher
The objective of this research is the design of innovative routing, planning, and coordination strategies for robot networks, and their application to oceanography. The approach is organized into three synergistic thrusts: (1) the application of queueing theory and combinatorial techniques to networked robots performing sequential tasks, (2) the design of novel distributed optimization and coordination schemes relying only on asynchronous and asymmetric communication, and (3) the design of practical routing and coordination algorithms for the USC Networked Aquatic Platforms. In collaboration with oceanographers and marine biologists, the project aims to design motion, communication, and interaction protocols that maximize the amount of scientific information collected by the platforms. This proposal addresses multi-dimensional problems of relevance in Engineering and Computer Science by unifying fundamental concepts from multiple cyber-physical domains (robotics, autonomy, combinatorics, and network science). Our team has expertise in a broad range of scientific disciplines, including control theory and theoretical computer science and their applications to multi-agent systems, robotics, and sensor networks. The proposed research will have a positive impact on the emerging technology of autonomous and reliable robotic networks, performing a broad range of environmental monitoring and logistic tasks. Our educational and outreach objectives are manifold and focus on (1) integrating the proposed research themes into undergraduate education and research, e.g., via the existing NSF REU site at the USC Computer Science Department, and (2) mounting a vigorous program of outreach activities, e.g., via a well-developed collaboration with the UCSB Center for Science and Engineering Partnerships.
University of California-Santa Barbara
National Science Foundation
Bullo, Francesco
Submitted by Francesco Bullo on April 7th, 2011
The objective of this research is the design of innovative routing, planning, and coordination strategies for robot networks, and their application to oceanography. The approach is organized into three synergistic thrusts: (1) the application of queueing theory and combinatorial techniques to networked robots performing sequential tasks, (2) the design of novel distributed optimization and coordination schemes relying only on asynchronous and asymmetric communication, and (3) the design of practical routing and coordination algorithms for the USC Networked Aquatic Platforms. In collaboration with oceanographers and marine biologists, the project aims to design motion, communication, and interaction protocols that maximize the amount of scientific information collected by the platforms. This proposal addresses multi-dimensional problems of relevance in Engineering and Computer Science by unifying fundamental concepts from multiple cyber-physical domains (robotics, autonomy, combinatorics, and network science). Our team has expertise in a broad range of scientific disciplines, including control theory and theoretical computer science and their applications to multi-agent systems, robotics, and sensor networks. The proposed research will have a positive impact on the emerging technology of autonomous and reliable robotic networks, performing a broad range of environmental monitoring and logistic tasks. Our educational and outreach objectives are manifold and focus on (1) integrating the proposed research themes into undergraduate education and research, e.g., via the existing NSF REU site at the USC Computer Science Department, and (2) mounting a vigorous program of outreach activities, e.g., via a well-developed collaboration with the UCSB Center for Science and Engineering Partnerships.
University of Southern California
National Science Foundation
Sukhatme, Gaurav
Submitted by Gaurav Sukhatme on April 7th, 2011
The objective of this research is to study properties of classes of cooperative multi-agent systems, such as stability, performance, and robustness. Multi-agent systems such as vehicle platoons and coupled oscillators can display emergent behavior that is difficult to predict from the behavior of individual subsystems. The approach is to develop and extend the theory of fundamental design limitations to cover multi-agent systems that communicate over both physical and virtual communication links. The theory will further describe known phenomena, such as string instability, and extend the analysis to other systems, such as harmonic oscillators. The theory will be tested and validated in the Michigan Embedded Control Systems Laboratory. The intellectual merit of the proposed research will be the development of tools that delineate tradeoffs between performance and feedback properties for control systems involving mixes of human and computer agents and classes of hardware dynamics, controllers, and network topology. The contribution to system behavior of each agent's realization in hardware (constrained by Newton's laws) and realization in software and communications (subject to the constraints discovered by Shannon and Bode) will be assessed. The broader impacts of the proposed research will be a significant impact on teaching, both at the University of Michigan and at ETH Zurich. At each school, popular teaching laboratories allow over 100 students per year, from diverse backgrounds, to learn concepts from the field of embedded networked distributed control systems. New families of haptic devices will enable the research to be transferred into these teaching laboratories.
University of Michigan Ann Arbor
National Science Foundation
Richard Gillespie
Freudenberg, James
Submitted by James Freudenberg on April 7th, 2011
Vehicle automation has progressed from systems that monitor the operation of a vehicle, such as antilock brakes and cruise control, to systems that sense adjacent vehicles, such as emergency braking and intelligent cruise control. The next generation of systems will share sensor readings and collaborate to control braking operations by looking several cars ahead or by creating safe gaps for merging vehicles.
Before we allow collaborative systems on public highways, we must prove that they will do no harm, even when multiple rare events occur. These events include loss of communications, failures or inaccuracies of sensors, mechanical failures in the automobile, aggressive drivers who are not participating in the system, and unusual obstacles or events on the roadway.
The set of rules that controls the interaction between vehicles is a protocol. There is a large body of work on verifying the correctness of communications protocols and testing that different implementations of a protocol interact properly. However, it is difficult to apply these techniques to the protocols for collaborative driving systems because they are much more complex: 1) they interact with the physical world in more ways, through a network of sensors and the physical operation of the automobile as well as the communications channel; 2) they perform time-critical operations that use multiple timers; and 3) they may have more parties participating.
In [1] we verified that a three-party protocol that assists a driver who wants to merge between two cars in an adjacent lane will not cause an accident under combinations of rare events. The verification uses a probabilistic sequence-testing technique [2] that was developed for communications protocols. We were able to apply this technique only after designing and specifying the collaborative driving protocol in a particular way.
We have generalized the techniques used in the earlier work so that we can design collaborative driving protocols that can be verified. Our approach has three elements: 1) a non-layered architecture, 2) a new class of protocols based upon time-synchronized participants, and 3) a data-management rule.
1) Communications protocols use a layered architecture, in which protocol complexity is reduced by using the services provided by the layer below. A layered architecture is not sufficient for collaborative driving protocols because they operate across multiple physical platforms. Instead, we define a smokestack architecture in which the stacks are interconnected.
2) The operation of protocols with multiple timers is more difficult to analyze because different sequences of operations occur depending on the relative times at which the timers are initiated. Instead of using timers, we design protocols that use absolute time. This is reasonable because of the accurate time acquired from GPS and the accuracy of current clocks during intervals when GPS is not available.
3) Finally, in order for programs in different vehicles to make the same decisions, they must use the same data. Our design merges the readings of sensors in different vehicles and uses a communications protocol that guarantees that all vehicles have the same sequence of messages and act only on the messages that all vehicles have acquired.
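The absolute-time design in point 2) can be sketched in a few lines. This is a hypothetical illustration, not the protocol from [1]: the start time, step period, and function names are all assumed; the only premise taken from the text is that clocks are GPS-synchronized.

```python
# Hypothetical sketch: protocol steps scheduled at absolute times
# rather than with per-vehicle relative timers. Assumes GPS-synchronized
# clocks; all names and constants are illustrative.

MERGE_START = 100.0   # absolute start time carried in the invitation (assumed)
STEP_PERIOD = 0.5     # seconds between protocol steps (assumed)

def next_action(now, start=MERGE_START, period=STEP_PERIOD):
    """Return (step index, absolute time) of the next protocol step.

    Every vehicle derives the schedule from the same absolute start
    time, so participants agree on step boundaries regardless of when
    each one locally processed the invitation message.
    """
    if now < start:
        return 0, start
    k = int((now - start) // period) + 1
    return k, start + k * period

# Two vehicles whose local processing was delayed by different amounts
# still compute the identical next step boundary:
assert next_action(100.7) == next_action(100.9)
```

With relative timers, the two vehicles would instead act 0.2 s apart, and a verifier would have to enumerate every interleaving of those timer expirations; scheduling on absolute time removes that source of nondeterminism.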
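The data-management rule in point 3) can be sketched as follows, assuming a totally ordered message log and per-vehicle acknowledgement counts; the function and message names are illustrative, not the authors' API.

```python
# Hypothetical sketch: every vehicle holds the same ordered message
# log but acts only on the prefix that all vehicles have acknowledged,
# so all controllers decide from identical inputs.

def usable_prefix(log, acks):
    """Return the messages every vehicle may act on.

    log  -- the shared, ordered sequence of sensor messages
    acks -- vehicle id -> number of log entries that vehicle holds
    """
    agreed = min(acks.values())   # longest prefix held by everyone
    return log[:agreed]

log = ["v1:position", "v2:position", "v3:gap-ok", "v1:speed"]
acks = {"v1": 4, "v2": 3, "v3": 3}   # v2 and v3 lack the newest message

# All three vehicles restrict themselves to the first three messages,
# so each computes the merge decision from the same data:
assert usable_prefix(log, acks) == ["v1:position", "v2:position", "v3:gap-ok"]
```

The point of the restriction is symmetry: a message that only some vehicles hold is ignored by all of them, so a lost or delayed message can delay a decision but can never make two vehicles decide differently.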
1. Bohyun Kim, N. F. Maxemchuk, "A Safe Driver Assisted Merge Protocol," IEEE Systems Conference 2012, 19-22 Mar. 2012, Vancouver, BC, Canada, pp. 1-4.
2. N. F. Maxemchuk, K. K. Sabnani, "Probabilistic Verification of Communication Protocols," Distributed Computing Journal, Springer Verlag, no. 3, Sept. 1989, pp. 118-129.
Columbia University
National Science Foundation
Maxemchuk, Nicholas
Submitted by Nicholas Maxemchuk on April 7th, 2011