Theoretical aspects of cyber-physical systems.
The objective of this research is to establish a foundational framework for smart grids that enables significant penetration of renewable DERs and facilitates flexible deployments of plug-and-play applications, similar to the way users connect to the Internet. The approach is to view overall grid management as an adaptive optimizer that iteratively solves a system-wide optimization problem, in which networked sensing, control, and verification carry out distributed computation tasks to achieve reliability at all levels: component, system, and application. Intellectual merit. Under the common theme of reliability guarantees, distributed monitoring and inference algorithms will be developed to perform fault diagnosis and operate resiliently against all hazards. To attain high reliability, a trustworthy middleware will be used to shield the grid system design from the complexities of the underlying software world while providing services to grid applications through message passing and transactions. Further, selective load/generation control using Automatic Generation Control (AGC), based on multi-scale state estimation of energy supply and demand, will be carried out to guarantee that load and generation in the system remain balanced. Broader impact. The envisioned architecture of the smart grid is an outstanding example of CPS technology. Built on this critical application study, this collaborative effort will pursue a CPS architecture that enables embedding intelligent computation, communication, and control mechanisms into physical systems with active and reconfigurable components. Close collaborations between this team and major EMS and SCADA vendors will pave the path for technology transfer via proof-of-concept demonstrations.
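The load/generation balancing step described above can be sketched as an integral controller on the Area Control Error (ACE), the standard mechanism behind Automatic Generation Control. Everything below (parameter values, per-unit quantities, the `simulate_agc` helper) is a hypothetical illustration, not the project's actual design:

```python
# Minimal AGC sketch (hypothetical parameters): an integral controller on
# the Area Control Error drives the load/generation imbalance, and hence
# the frequency deviation, of one control area toward zero.

def simulate_agc(load, gen0=1.0, beta=20.0, ki=0.3, inertia=10.0, steps=200, dt=0.1):
    """Return the frequency-deviation trace under integral AGC action (per unit)."""
    gen, dfreq, trace = gen0, 0.0, []
    for _ in range(steps):
        imbalance = gen - load             # generation minus demand (p.u.)
        dfreq += dt * imbalance / inertia  # swing-equation-style frequency response
        ace = imbalance + beta * dfreq     # ACE = power imbalance + bias * delta-f
        gen -= dt * ki * ace               # integral control raises/lowers setpoint
        trace.append(dfreq)
    return trace

trace = simulate_agc(load=1.05)  # 5% step increase in demand; frequency dips, then recovers
```

The key property the sketch shows is that the integral action returns both the imbalance and the frequency deviation to zero after a load step, which is the balance guarantee the abstract refers to.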
University of Illinois at Urbana-Champaign
National Science Foundation
Kumar, Panganamala
Submitted by Panganamala Kumar on April 7th, 2011
The objective of this research is to study properties of cooperative multi-agent systems, such as stability, performance, and robustness. Multi-agent systems such as vehicle platoons and coupled oscillators can display emergent behavior that is difficult to predict from the behavior of individual subsystems. The approach is to develop and extend the theory of fundamental design limitations to cover multi-agent systems that communicate over both physical and virtual communication links. The theory will further describe known phenomena, such as string instability, and extend the analysis to other systems, such as harmonic oscillators. The theory will be tested and validated in the Michigan Embedded Control Systems Laboratory. The intellectual merit of the proposed research lies in the development of tools that delineate tradeoffs between performance and feedback properties for control systems involving mixes of human and computer agents and classes of hardware dynamics, controllers, and network topology. The contribution to system behavior of each agent's realization in hardware (constrained by Newton's laws) and realization in software and communications (subject to the constraints discovered by Shannon and Bode) will be assessed. The broader impact of the proposed research will be a significant effect on teaching, both at the University of Michigan and at ETH Zurich. At each school, popular teaching laboratories allow over 100 students per year, from diverse backgrounds, to learn concepts from the field of embedded networked distributed control systems. New families of haptic devices will enable the research to be transferred into these teaching laboratories.
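As background for the "constraints discovered by Shannon and Bode" mentioned above, one canonical fundamental design limitation is the Bode sensitivity integral; it is included here as standard material, not a result of this project:

```latex
% Bode sensitivity integral: for an open-loop transfer function L(s) of
% relative degree at least two, with unstable open-loop poles p_k, the
% sensitivity function S(s) = 1/(1 + L(s)) satisfies
\int_0^{\infty} \ln \lvert S(j\omega) \rvert \, d\omega \;=\; \pi \sum_k \operatorname{Re}(p_k)
```

The "waterbed" reading of this identity, that pushing sensitivity down at some frequencies necessarily pushes it up at others, is exactly the kind of tradeoff the proposed tools aim to extend to networked multi-agent settings.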
University of Michigan Ann Arbor
National Science Foundation
Richard Gillespie
Freudenberg, James
Submitted by James Freudenberg on April 7th, 2011
The objective of this research is to improve the ability to track the orbits of space debris and thereby reduce the frequency of collisions. The approach is based on two scientific advances: 1) optimizing the scheduling of data transmission from a future constellation of orbiting CubeSats to ground stations located worldwide, and 2) using satellite data to improve models of the ionosphere and thermosphere, which in turn are used to improve estimates of atmospheric density. Intellectual Merit: Robust capacity-constrained scheduling depends on fundamental research on optimization algorithms for nonlinear problems involving both discrete and continuous variables. This objective depends on advances in optimization theory and computational techniques. Model refinement depends on adaptive control algorithms and can lead to fundamental advances for automatic control systems. These contributions provide new ideas and techniques that are broadly applicable to diverse areas of science and engineering. Broader Impacts: Improving the ability to predict the trajectories of space debris can render the space environment safer in both the near term (by enhancing astronaut safety and satellite reliability) and the long term (by suppressing cascading collisions that could have a devastating impact on the usage of space). This project will impact real-world practice by developing techniques that are applicable to large-scale modeling and data collection, from weather prediction to homeland security. The research results will impact education through graduate and undergraduate research as well as through interdisciplinary modules developed for courses in space science, satellite engineering, optimization, and data-based modeling taught across multiple disciplines.
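The capacity constraint in the scheduling problem above (each ground station can receive from at most one satellite per time slot) can be illustrated with a toy greedy baseline. The request tuples, names, and values below are hypothetical; the project targets far richer mixed discrete/continuous formulations for which greedy assignment is only a starting point:

```python
# Hypothetical illustration: greedily assign satellite downlink requests
# (value, satellite, ground_station, time_slot) so that each ground station
# serves at most one satellite per slot, taking higher-value requests first.
# A real scheduler would also enforce one downlink per satellite per slot,
# link budgets, and visibility windows.

def greedy_schedule(requests):
    """requests: list of (value, satellite, station, slot) tuples."""
    busy = set()       # (station, slot) pairs already committed
    scheduled = []
    for value, sat, station, slot in sorted(requests, reverse=True):
        if (station, slot) not in busy:
            busy.add((station, slot))
            scheduled.append((sat, station, slot))
    return scheduled

reqs = [(5, "sat1", "gs1", 0), (3, "sat2", "gs1", 0), (4, "sat2", "gs2", 0)]
plan = greedy_schedule(reqs)  # sat2 falls back to gs2 because gs1's slot is taken
```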
University Corporation For Atmospheric Research
National Science Foundation
Anderson, Jeffrey
Submitted by Jeffrey Anderson on April 7th, 2011
The objective of this research is to improve the ability to track the orbits of space debris and thereby reduce the frequency of collisions. The approach is based on two scientific advances: 1) optimizing the scheduling of data transmission from a future constellation of orbiting CubeSats to ground stations located worldwide, and 2) using satellite data to improve models of the ionosphere and thermosphere, which in turn are used to improve estimates of atmospheric density. Intellectual Merit: Robust capacity-constrained scheduling depends on fundamental research on optimization algorithms for nonlinear problems involving both discrete and continuous variables. This objective depends on advances in optimization theory and computational techniques. Model refinement depends on adaptive control algorithms and can lead to fundamental advances for automatic control systems. These contributions provide new ideas and techniques that are broadly applicable to diverse areas of science and engineering. Broader Impacts: Improving the ability to predict the trajectories of space debris can render the space environment safer in both the near term (by enhancing astronaut safety and satellite reliability) and the long term (by suppressing cascading collisions that could have a devastating impact on the usage of space). This project will impact real-world practice by developing techniques that are applicable to large-scale modeling and data collection, from weather prediction to homeland security. The research results will impact education through graduate and undergraduate research as well as through interdisciplinary modules developed for courses in space science, satellite engineering, optimization, and data-based modeling taught across multiple disciplines.
University of Michigan Ann Arbor
National Science Foundation
Bernstein, Dennis
Submitted by Dennis Bernstein on April 7th, 2011
Vehicle automation has progressed from systems that monitor the operation of a vehicle, such as antilock brakes and cruise control, to systems that sense adjacent vehicles, such as emergency braking and intelligent cruise control. The next generation of systems will share sensor readings and collaborate to control braking operations by looking several cars ahead or by creating safe gaps for merging vehicles. Before we allow collaborative systems on public highways we must prove that they will do no harm, even when multiple rare events occur. These events include loss of communications, failures or inaccuracies of sensors, mechanical failures in the automobile, aggressive drivers who are not participating in the system, and unusual obstacles or events on the roadways. The set of rules that controls the interaction between vehicles is a protocol. There is a large body of work on verifying the correctness of communications protocols and testing that different implementations of a protocol will interact properly. However, it is difficult to apply these techniques to protocols for collaborative driving systems because they are much more complex: 1) they interact with the physical world in more ways, through a network of sensors and the physical operation of the automobile as well as the communications channel; 2) they perform time-critical operations that use multiple timers; and 3) they may have more participating parties. In [1] we verified that a three-party protocol that assists a driver who wants to merge between two cars in an adjacent lane will not cause an accident under combinations of rare events. The verification uses a probabilistic sequence testing technique [2] that was developed for communications protocols. We were only able to use the communications technique after designing and specifying the collaborative driving protocol in a particular way.
We have generalized the techniques used in the earlier work so that we can design collaborative driving protocols that can be verified. We contribute 1) a non-layered architecture, 2) a new class of protocols based upon time-synchronized participants, and 3) a data management rule. 1) Communications protocols use a layered architecture, in which protocol complexity is reduced by using the services provided by a lower layer. The layered architecture is not sufficient for collaborative driving protocols because they operate over multiple physical platforms. Instead, we define a smokestack architecture that is interconnected. 2) The operation of protocols with multiple timers is more difficult to analyze because there are different sequences of operations depending on the relative times when the timers are initiated. Instead of using timers, we design protocols that use absolute time. This is reasonable because of the accurate time acquired from GPS and the accuracy of local clocks while GPS is unavailable. 3) Finally, in order for programs in different vehicles to make the same decisions, they must use the same data. Our design merges the readings of sensors in different vehicles and uses a communications protocol that guarantees that all vehicles have the same sequence of messages and only use the messages that all vehicles have acquired. 1. B. Kim, N. F. Maxemchuk, "A Safe Driver Assisted Merge Protocol," IEEE Systems Conference 2012, 19-22 Mar. 2012, Vancouver, BC, Canada, pp. 1-4. 2. N. F. Maxemchuk, K. K. Sabnani, "Probabilistic Verification of Communication Protocols," Distributed Computing, Springer-Verlag, no. 3, Sept. 1989, pp. 118-129.
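The data management rule above (every vehicle acts only on messages that all vehicles are known to have received) can be sketched in a few lines. The `usable_prefix` helper and the vehicle names are hypothetical, not the protocol's actual interface; the point is only the min-over-acknowledgments rule:

```python
# Sketch of the data-management rule: each vehicle reports the highest
# contiguous sequence number it has received, and every controller acts
# only on the common prefix, so all vehicles base decisions on identical data.

def usable_prefix(acks):
    """acks: {vehicle: highest contiguous sequence number received}.
    Returns the last sequence number that ALL vehicles have acquired."""
    return min(acks.values())

acks = {"car_a": 7, "car_b": 5, "car_c": 9}
safe_seq = usable_prefix(acks)  # messages 1..5 may be used by every vehicle
```

The design choice this captures is determinism: because each controller evaluates the same message prefix, programs in different vehicles reach the same decisions without an extra agreement round at decision time.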
Columbia University
National Science Foundation
Maxemchuk, Nicholas
Submitted by Nicholas Maxemchuk on April 7th, 2011
This award supports the Third IFIP Working Conference on "Verified Software: Theories, Tools, and Experiments (VSTTE 2010)", August 16-19, 2010, hosted by Heriot-Watt University, Edinburgh, Scotland. The construction of reliable software poses one of the most significant scientific and engineering challenges of the 21st century. Professor Tony Hoare of Microsoft Research has proposed the creation of a program verifier as a grand challenge for computer science and outlined an international program of research combining many disciplines such as the theory and implementation of programming languages, formal methods, program analysis, and automated theorem proving. The VSTTE conference series was established by the research community in response to this challenge. The VSTTE 2010 program includes two workshops focusing on the areas of: (1) theories, and (2) tools and experiments. This award is enabled through support provided by the NITRD High Confidence Software and Systems (HCSS) Interagency Coordinating Group.
SRI International
National Science Foundation
Shankar, Natarajan
Submitted by Natarajan Shankar on April 7th, 2011
The objective of this proposal is to hold a grantees meeting on July 8-9, 2009, focused on the potential of cyber-physical systems and their impact on our lives. The event, "Cyber-Physical Systems: Leading the Way to a Smarter, Safer Future for Anyone, Anywhere, Anytime?", is a two-day event: the first day will take place at the National Science Foundation and will be dedicated to a dry-run session; the second day will take place on Capitol Hill and will include a luncheon with members of the Senate, followed by demonstrations and poster presentations of CPS-related research. The invited audience includes 25 members of the Senate Commerce Committee and their staffs. Intellectual merit: The demonstrations and posters will showcase state-of-the-art and innovative research projects describing the potential benefits of CPS to society, while highlighting the research challenges that need to be addressed in order to realize the CPS vision. Broader impact: The grantees meeting will provide an opportunity to showcase current accomplishments in CPS to senior senators, members of the Senate Commerce Committee and their staffs, and NSF staff. The workshop will have participation from 12 institutions and their postdocs, graduate students, and undergraduate students. It also includes participation and demonstrations by high school students. This will be a great opportunity for them to interact with other participants and learn about the many exciting opportunities in the CPS area.
University of Alabama Tuscaloosa
National Science Foundation
Anderson, Monica
Submitted by Monica Anderson on April 7th, 2011
This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5). Many of the future applications of systems and control that will pertain to cyber-physical systems are those related to problems of (possibly) distributed estimation and control of multiple agents (both sensors and actuators) over networks. Examples include areas such as distributed sensor networks, control of distributed autonomous agents, collision avoidance, distributed power systems, etc. Central to the study of such systems is the study of the behavior of random Lyapunov and Riccati recursions (the analogy is to traditional LTI systems where deterministic Lyapunov and Riccati recursions and equations play a prominent role). Unfortunately, to date, the tools for analyzing such systems are woefully lacking, ostensibly because the recursions are both nonlinear and random, and hence intractable if one wants to analyze them exactly. The methodology proposed in this work is to exploit tools from the theory of large random matrices to find the asymptotic eigendistribution of the matrices in the random Riccati recursions when the number of states in the system, n, is large. In many cases, the eigendistribution contains sufficient information about the overall behavior of the system. Stability can be inferred from the eigenanalysis. The mean of the eigenvalues is simply related to the mean of the trace (i.e., the mean-square-error of the system), whereas the support set of the eigendistribution says something about best- and worst-case performances of the system. Furthermore, a general philosophy of this approach is to identify and exhibit the universal behavior of the system, provided such a behavior does exist. Here, "universal" means behavior that does not depend on the microscopic details of the system (where losses occur, what the exact topology of the network or underlying distributions are), but rather on some simple macroscopic properties. 
A main idea of the approach is to replace a high-dimensional matrix-valued nonlinear and stochastic recursion by a scalar-valued deterministic functional recursion (involving an appropriate transform of the eigendistribution), which is much more amenable to analysis and computation. The project will include course development and the recruitment of women and minority students to research. It will also make use of undergraduate and underrepresented minority student researchers through Caltech's SURF and MURF programs.
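A concrete instance of the random Riccati recursions discussed above is Kalman filtering with Bernoulli packet drops, where the error-covariance iterate is matrix-valued, nonlinear, and random, and where the mean eigenvalue equals trace(P)/n, the per-state mean-square error. The simulation below is only an illustrative sketch with hypothetical parameters (dimension, drop probability, noise covariances), not the project's analytical method, which replaces such simulations with a deterministic scalar recursion:

```python
import numpy as np

# Illustration (hypothetical parameters): a random Riccati recursion from
# Kalman filtering with Bernoulli packet drops, C = I measurements. The
# empirical eigendistribution of the iterate P summarizes system behavior.

rng = np.random.default_rng(0)
n = 50
G = rng.standard_normal((n, n)) / np.sqrt(n)
A = 0.9 * G / np.max(np.abs(np.linalg.eigvals(G)))  # rescale: spectral radius 0.9
Q = np.eye(n)                                       # process noise covariance
R = np.eye(n)                                       # measurement noise covariance
P = np.eye(n)
for _ in range(200):
    Ppred = A @ P @ A.T + Q                         # time update (Lyapunov step)
    if rng.random() < 0.7:                          # measurement arrives w.p. 0.7
        K = Ppred @ np.linalg.inv(Ppred + R)        # Kalman gain (C = I)
        P = Ppred - K @ Ppred                       # measurement update
    else:
        P = Ppred                                   # packet dropped: no update

eigs = np.linalg.eigvalsh((P + P.T) / 2)            # symmetrize, then eigenvalues
mean_eig = eigs.mean()                              # equals trace(P)/n, the MSE/state
```

The support of `eigs` indicates best- and worst-case per-mode error, the kind of macroscopic summary the proposed large-random-matrix analysis aims to predict without Monte Carlo simulation.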
California Institute of Technology
National Science Foundation
Hassibi, Babak
Submitted by Babak Hassibi on April 7th, 2011
The objective of this research is to develop the theoretical foundations for understanding implicit and explicit communication within cyber-physical systems. The approach is two-fold: (a) developing new information-theoretic tools to reveal the essential nature of implicit communication in a manner analogous to (and compatible with) classical network information theory; (b) viewing the wireless ecosystem itself as a cyber-physical system in which spectrum is the physical substrate that is manipulated by heterogeneous interacting cyber-systems that must be certified to meet safety and performance objectives. The intellectual merit of this project comes from the transformative technical approaches being developed. The key to understanding implicit communication is a conceptual breakthrough in attacking the unsolved 40-year-old Witsenhausen counterexample by using an approximate-optimality paradigm combined with new ideas from sphere-packing and cognitive radio channels. These techniques open up radically new mathematical avenues to attack distributed-control problems that have long been considered fundamentally intractable. They guide the development of nonlinear control strategies that are provably orders of magnitude better than the best linear strategies. The keys to understanding explicit communication in cyber-physical systems are new approaches to active learning, detection, and estimation in distributed environments that combine worst-case and probabilistic elements. Beyond the many diverse applications (the Internet, the smart grid, intelligent transportation, etc.) of heterogeneous cyber-physical systems themselves, this research reaches out to wireless policy, allowing the principled formulation of government regulations for next-generation networks. Graduate students (including women) and postdoctoral scholars will be trained, and research results will be incorporated into both the undergraduate and graduate curricula.
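For reference, the Witsenhausen counterexample mentioned above is the classical two-stage stochastic control problem with nonclassical information, stated here as standard background rather than a result of this project:

```latex
% Witsenhausen counterexample (1968): the first controller sees x_0 exactly;
% the second sees only a noisy version of the intermediate state x_1.
% With x_0 \sim \mathcal{N}(0, \sigma^2) and w \sim \mathcal{N}(0, 1):
x_1 = x_0 + u_1, \qquad u_1 = \gamma_1(x_0), \qquad u_2 = \gamma_2(x_1 + w),
\min_{\gamma_1,\, \gamma_2} \; J = k^2\, \mathbb{E}\!\left[u_1^2\right] + \mathbb{E}\!\left[(x_1 - u_2)^2\right]
```

Witsenhausen showed that linear strategies are not optimal here: the first controller's input implicitly communicates information to the second through the state, and nonlinear (e.g., quantization-based) strategies can exploit this signaling, which is the sense of "implicit communication" the abstract describes.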
University of California-Berkeley
National Science Foundation
Sahai, Anant
Submitted by Anant Sahai on April 7th, 2011
The objective of this research is to integrate user control with automated reflexes in the human-machine interface. The approach, taking inspiration from biology, analyzes control-switching issues in brain-computer interfaces. A nonhuman primate will perform a manual task while movement- and touch-related brain signals are recorded. While a robotic hand replays the movements, electronic signals will be recorded from touch sensors on the robot's fingers, then mapped to touch-based brain signals, and used to give the subject tactile sensation via direct cortical stimulation. Context-dependent transfers of authority between the subject and reflex-like controls will be developed based on relationships between sensor signals and command signals. Issues of mixed authority and context awareness have general applicability in human-machine systems. This research advances methods for providing tactile feedback from a remote manipulator, dividing control appropriate to human and machine capabilities, and transferring authority in a smooth, context-dependent manner. These principles are essential to any cyber-physical system requiring robustness in the face of uncertainty, control delays, or limited information flow. The resulting transformative methods of human-machine communication and control will have applications for robotics (space, underwater, military, rescue, surgery, assistive, prosthetic), haptics, biomechanics, and neuroscience. Underrepresented undergraduates will be recruited from competitive university programs at Arizona State University and Mexico's Tec de Monterrey University. Outreach projects will engage the public and underrepresented school-aged children through interactive lab tours, instructional modules, and public lectures on robotics, human-machine systems, and social and ethical implications of neuroprostheses.
Arizona State University
National Science Foundation
Santos, Veronica J.
Submitted by Veronica Santos on April 7th, 2011