Sensor networks consist of autonomous sensors that monitor and control physical or environmental conditions.
The goal of this project is to demonstrate new cyber-physical architectures that allow the sharing of closed-loop sensor networks among multiple applications through the dynamic allocation of sensing, networking, and computing resources. The sharing of sensor network infrastructures makes the provision of data more cost efficient and leads to virtual private sensor network (VPSN) architectures that can dramatically increase the number of sensor networks available for public use. These cyber infrastructures support a paradigm, called Sensing as a Service, in which users can obtain sensing and computational resources to generate the required data for their sensing applications. The challenge in sharing closed-loop sensor networks is that one application's actuation request might interfere with another's request. To address this challenge, the VPSN architectures comprise three components: 1) a sensor virtualization layer that ensures that users obtain timely access to sensor data when requested and isolates their requests from others' through the creation of appropriate scheduling algorithms; 2) a computation virtualization layer, closely tied to the sensor virtualization layer, that enables the allocation of computational resources for real-time data-intensive applications; and 3) a virtualization toolkit that supports application developers in their efforts to build applications for virtualized, closed-loop sensor networks.
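As an illustration of the isolation role the sensor virtualization layer plays, the sketch below shows a toy earliest-deadline-first scheduler that serializes actuation requests per sensor, so one application's request cannot disrupt another's. The class and field names and the EDF policy are illustrative assumptions, not the project's actual scheduling algorithms:

```python
import heapq

class SensorScheduler:
    """Toy EDF scheduler: serializes actuation requests per sensor so that
    one application's request cannot preempt or corrupt another's."""

    def __init__(self):
        self.queues = {}  # sensor_id -> heap of (deadline, app_id, action)

    def submit(self, sensor_id, app_id, action, deadline):
        """Queue an actuation request; earlier deadlines are served first."""
        heapq.heappush(self.queues.setdefault(sensor_id, []),
                       (deadline, app_id, action))

    def dispatch(self, sensor_id):
        """Grant the sensor to the pending request with the earliest
        deadline, or return None if the sensor is idle."""
        if self.queues.get(sensor_id):
            _deadline, app_id, action = heapq.heappop(self.queues[sensor_id])
            return app_id, action
        return None
```

In a real virtualization layer the policy would also enforce per-application quotas and reject requests whose deadlines cannot be met, but the core idea is the same: the layer, not the applications, decides when each request touches the physical sensor.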
The sharing of closed-loop sensor networks leads to substantial savings on infrastructure and maintenance costs. The proposed VPSN architectures enable users to create their own applications without detailed knowledge of sensing technologies and allow them to focus on the development of applications. VPSNs will contribute to the creation of a nationwide, shared sensing cyber infrastructure, which will provide critical information for public safety and security. VPSNs will also help to revolutionize the way undergraduate and graduate students from many disciplines perform research. Students will be shielded from some of the complexities of sensor networks and allowed to focus on their core research. To prepare students from the Electrical and Computer Engineering (ECE) department at the University of Massachusetts to perform this kind of research, new classes in the area of Integrative Systems Engineering and Sensor Network Virtualization will be offered.
University of Massachusetts Amherst
-
National Science Foundation
This INSPIRE award is partially funded by four programs in the Directorate for Computer and Information Science and Engineering: the Cyber-Physical Systems and Computer Systems Research Programs in the Division of Computer and Network Systems, the Information and Intelligent Systems Program in the Division of Information and Intelligent Systems, and the Software and Hardware Foundations Program in the Division of Computing and Communications Foundations.
Sound plays a vital role in the ocean ecosystem as many organisms rely on acoustics for navigation, communication, detecting predators, and finding food. Therefore, the 3D underwater soundscape, i.e., the combination of sounds present in the immersive underwater environment, is of extreme importance to understand and protect underwater ecosystems. This project is creating a transformative distributed ocean observing system for studying the underwater soundscape at revolutionary spatial (~100 meters) and temporal (~100 seconds) resolutions that is also able to simultaneously resolve small-scale ocean current flow. These breakthroughs are achieved using a distributed collective of small hydrophone-equipped subsurface floats, which utilize group management techniques and sensor fusion to understand the ocean soundscape in a Lagrangian manner. The ability to record soundscapes provides a novel sensing technology to understand the effects of sound on marine ecosystems and the role that sound plays for species development. Experiments off the coast of San Diego, CA, and a research campaign in the Cayman Islands provide concrete scientific studies that are tightly interwoven with the engineering research.
Oceans are drivers of global climate, are home to some of the most important and diverse ecosystems, and represent a substantial contribution to the world's economy as a major source of food and employment. The technological and scientific advances in this project provide crucial tools to understand natural ocean resources, by studying soundscapes at spatio-temporal scales that were heretofore extremely burdensome and expensive to obtain.
University of California at San Diego
-
National Science Foundation
Submitted by Curt Schurgers on December 21st, 2015
The objective of this project is to improve the performance of autonomous systems in dynamic environments, such as disaster recovery, by integrating perception, planning paradigms, learning, and databases. For the next generation of autonomous systems to be truly effective in terms of tangible performance improvements (e.g., long-term operations, complex and rapidly changing environments), a new level of intelligence must be attained.
This project improves the state of robotic systems by enhancing their ability to coordinate activities (such as searching a disaster zone), recognize objects or people, account for uncertainty, and, most importantly, learn, so that the system's performance continuously improves. To do this, the project takes an interdisciplinary approach to developing techniques in the core areas of perception, planning, learning, and databases, and at their interfaces, to achieve robustness.
This project seeks to significantly improve the performance of cyber-physical systems for time-critical applications such as disaster monitoring, search and rescue, autonomous navigation, and security and surveillance. It enables the development of techniques and tools to augment all decision-making processes and applications that are characterized by continuously changing operating conditions, missions, and environments. The project contributes to education and a diverse engineering workforce by training students at the University of California, Riverside, one of the most diverse research institutions in the US and an accredited Hispanic-Serving Institution. Instruction and research opportunities cross traditional disciplinary boundaries, and the project serves as the basis for undergraduate capstone design projects and a new graduate course. The software and testbeds from this project will be shared with the cyber-physical systems research community, industry, and end users. The project plans to present focused workshops and tutorials at major IEEE and ACM conferences. The results will be broadly disseminated through the project website.
For further information see the project website at: http://vislab.ucr.edu/RESEARCH/DSLC/DSLC.php
University of California at Riverside
-
National Science Foundation
Amit Roy
Cyber-physical systems employed in transportation, security and manufacturing applications rely on a wide variety of sensors for prediction and control. In many of these systems, acquisition of information requires the deployment and activation of physical sensors, which can result in increased expense or delay. A fundamental aspect of these systems is that they must seek information intelligently in order to support their mission, and must determine the optimal tradeoff between the cost of physical measurements and the resulting improvement in information.
A recent explosion in sensor and UAV technology has led to new capabilities for controlling the nature and mobility of sensing actions by changing excitation levels, position, orientation, sensitivity, and similar parameters. This has in turn created substantial challenges in developing cyber-physical systems that can effectively exploit the degrees of freedom in selecting where and how to sense the environment. These challenges include high-dimensionality of observations and the associated "curse of dimensionality", non-trivial relationships between the observations and the latent variables, poor understanding of models relating the nature of potential sensing actions and the corresponding value of the collected information, and lack of sufficient training data from which to learn these models.
Intellectual Merit: The proposed research includes: (1) data-driven stochastic control theory for intelligent sensing in cyber-physical systems that incorporates costs, delays, and risks and accounts for scenarios where models for sensing, decision-making, and prediction are unavailable or poorly understood; and (2) validation of the control methods on a UAV sensor network in the real-world domain of archaeological surveying.
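One way to make the measurement-cost tradeoff concrete is a greedy value-of-information rule for a scalar Gaussian estimate: take the measurement whose expected entropy reduction most exceeds its cost. This is a simplified sketch under assumed Gaussian models; the function names and the greedy rule are illustrative assumptions, not the project's proposed control theory:

```python
import math

def info_gain(prior_var, noise_var):
    """Expected entropy reduction (in nats) of a scalar Gaussian estimate
    after one measurement with the given noise variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    return 0.5 * math.log(prior_var / post_var)

def choose_sensor(prior_var, sensors):
    """Pick the sensor maximizing information gain minus measurement cost.
    `sensors` maps name -> (noise_var, cost). Returns None when no sensor
    is worth its cost (gain - cost <= 0 for all candidates)."""
    best, best_score = None, 0.0
    for name, (noise_var, cost) in sensors.items():
        score = info_gain(prior_var, noise_var) - cost
        if score > best_score:
            best, best_score = name, score
    return best
```

Note how the rule's answer flips with the state of knowledge: when the estimate is already fairly certain, a cheap noisy sensor (or no sensing at all) wins, while under high uncertainty the expensive precise sensor pays for itself.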
Broader Impacts: The proposed effort includes: (a) outreach: planned efforts for encouraging participation of women and under-represented groups; (b) societal impact: research that will lead to novel concepts in environmental monitoring, traffic surveillance, and security applications; (c) multidisciplinary activities: impacting existing knowledge in cyber-physical systems, sensor management, and statistical learning, with research findings disseminated through conference presentations, departmental seminars, journal papers, workshops, and special sessions at IEEE CDC and RSS; and (d) curriculum development through new graduate-level courses and course projects.
Trustees of Boston University
-
National Science Foundation
Submitted by Venkatesh Saligrama on December 21st, 2015
CPS: Synergy: Collaborative Research: Mutually Stabilized Correction in Physical Demonstration
Objective: How much should a person be allowed to interact with a controlled machine? If that machine is safety critical, and if the computer that oversees it is essential to its operation and safety, the answer may be that the person should be allowed to interfere very little, if at all. Moreover, whether the person is a novice or an expert matters.
Intellectual Merit: This research algorithmically resolves the tension between the need for safety and the need for performance, something a person may be much more adept at improving than a machine. Using a combination of techniques from numerical methods, systems theory, machine learning, human-machine interfaces, optimal control, and formal verification, this research will develop a computable notion of trust that allows the embedded system to assess the safety of the instruction a person is providing. The interface for interacting with a machine matters as well; designing motions for safety-critical systems using a keyboard may be unintuitive and lead to unsafe commands because of its limitations, while other interfaces may be more intuitive but threaten the stability of a system because the person does not understand the needs of the system. Hence, the person needs to develop trust with the machine over a period of time, and the last part of the research will include evaluating a person's performance by verifying the safety of the instructions the person provides. As the person becomes better at safe operation, she will be given more authority to control the machine while never putting the system in danger.
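A minimal sketch of the trust mechanism described above, assuming a scalar command, a hypothetical magnitude-based safety check, and a simple blend-by-trust rule. None of these are the project's actual algorithms; they only illustrate how authority can grow with demonstrated safe operation and shrink after unsafe commands:

```python
def safe(command, limit=1.0):
    """Hypothetical safety check: command magnitude within actuator limits.
    A real system would verify the command against the machine's dynamics."""
    return abs(command) <= limit

def blend(u_human, u_safe, trust):
    """Apply the human command in proportion to earned trust (0..1);
    the remainder comes from the machine's own safe controller."""
    return trust * u_human + (1.0 - trust) * u_safe

def update_trust(trust, command_was_safe, gain=0.1):
    """Raise trust gradually after safe commands, cut it sharply after
    unsafe ones, and keep it in [0, 1]."""
    if command_was_safe:
        return min(1.0, trust + gain)
    return max(0.0, trust * 0.5)
```

The asymmetry (slow gain, fast loss) reflects the safety-first stance in the text: a novice starts with little authority and can never push the system into danger, while a consistently safe expert is gradually given more direct control.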
Broader Impacts: The activities will include outreach, development of public-domain software, experimental coursework including two massive online courses, and technology transfer to rehabilitation. Outreach will include exhibits at the Museum of Science and Industry and working with an inner-city high school. The algorithms to be developed will have immediate impact on projects with the Rehabilitation Institute of Chicago, including assistive devices, stroke assessment, and neuromuscular hand control. Providing a foundation for a science of trust has the potential to transform rehabilitation research.
Northwestern University
-
National Science Foundation
The problem of controlling biomechatronic systems, such as multiarticulating prosthetic hands, involves unique challenges in the science and engineering of Cyber Physical Systems (CPS), requiring integration between computational systems for recognizing human functional activity and intent and controlling prosthetic devices to interact with the physical world. Research on this problem has been limited by the difficulties in noninvasively acquiring robust biosignals that allow intuitive and reliable control of multiple degrees of freedom (DoF). The objective of this research is to investigate a new sensing paradigm based on ultrasonic imaging of dynamic muscle activity. The synergistic research plan will integrate novel imaging technologies, new computational methods for activity recognition and learning, and high-performance embedded computing to enable robust and intuitive control of dexterous prosthetic hands with multiple DoF. The interdisciplinary research team involves collaboration between biomedical engineers, electrical engineers and computer scientists. The specific aims are to: (1) research and develop spatio-temporal image analysis and pattern recognition algorithms to learn and predict different dexterous tasks based on sonographic patterns of muscle activity; (2) develop a wearable image-based biosignal sensing system by integrating multiple ultrasound imaging sensors with a low-power heterogeneous multicore embedded processor; and (3) perform experiments to evaluate the real-time control of a prosthetic hand.
The proposed research methods are broadly applicable to assistive technologies where physical systems, computational frameworks and low-power embedded computing serve to augment human activities or to replace lost functionality. The research will advance CPS science and engineering through integration of portable sensors for image-based sensing of complex adaptive physical phenomena such as dynamic neuromuscular activity, and real-time sophisticated image understanding algorithms to interpret such phenomena running on low-power high performance embedded systems. The technological advances would enable practical wearable image-based biosensing, with applications in healthcare, and the computational methods would be broadly applicable to problems involving activity recognition from spatiotemporal image data, such as surveillance.
This research will have societal impacts as well as train students in interdisciplinary methods relevant to CPS. About 1.6 million Americans live with amputations that significantly affect activities of daily living. The proposed project has the long-term potential to significantly improve functionality of upper extremity prostheses, improve quality of life of amputees, and increase the acceptance of prosthetic limbs. This research could also facilitate intelligent assistive devices for more targeted neurorehabilitation of stroke victims. This project will provide immersive interdisciplinary CPS-relevant training for graduate and undergraduate students to integrate computational methods with imaging, processor architectures, human functional activity and artificial devices for solving challenging public health problems. A strong emphasis will be placed on involving undergraduate students in research as part of structured programs at our institution. The research team will involve students with disabilities in research activities by leveraging an ongoing NSF-funded project. Bioengineering training activities will be part of a newly developed undergraduate curriculum and a graduate curriculum under development.
The synergistic research plan has been designed to advance CPS science and engineering through the development of new computational methods for dynamic activity recognition and learning from image sequences, development of novel wearable imaging technologies including high-performance embedded computing, and real-time control of a physical system. The specific aims are to:
(1) Research and develop spatio-temporal image analysis and pattern recognition algorithms to learn and predict different dexterous tasks based on sonographic patterns of muscle activity. The first aim has three subtasks designed to collect, analyze and understand image sequences associated with functional tasks. (2) Develop a wearable image-based biosignal sensing system by integrating multiple ultrasound imaging sensors with a low-power heterogeneous multicore embedded processor. The second aim has two subtasks designed to integrate wearable imaging sensors with a real-time computational platform. (3) Perform experiments to evaluate the real-time control of a prosthetic hand. The third aim will integrate the wearable image acquisition system developed in Aim 2, and the image understanding algorithms developed in Aim 1, for real-time evaluation of the control of a prosthetic hand interacting with a virtual reality environment.
Successful completion of these aims will result in a real-time system that acquires image data from complex neuromuscular activity, decodes activity intent from spatiotemporal image data using computational algorithms, and controls a prosthetic limb in a virtual reality environment in real time. Once developed and validated, this system can be the starting point for developing a new class of sophisticated control algorithms for intuitive control of advanced prosthetic limbs, new assistive technologies for neurorehabilitation, and wearable real-time imaging systems for smart health applications.
George Mason University
-
National Science Foundation
Submitted by Siddhartha Sikdar on December 21st, 2015
Reliable operation of cyber-physical systems (CPS) of societal importance such as Smart Electric Grids is critical for the seamless functioning of a vibrant economy. Sustained power outages can lead to major disruptions over large areas costing millions of dollars. Efficient computational techniques and tools that curtail such systematic failures by performing fault diagnosis and prognostics are therefore necessary. The Smart Electric Grid is a CPS: it consists of networks of physical components (including generation, transmission, and distribution facilities) interfaced with cyber components (such as intelligent sensors, communication networks, and control software). This grant provides funding to develop new methods to build models for the smart grid representing the failure dependencies in the physical and cyber components. The models will be used to build an integrated system-wide solution for diagnosing faults and predicting future failure propagations that can account for existing protection mechanisms. The original contribution of this work will be in the integrated modeling of failures on multiple levels in a large distributed cyber-physical system and the development of novel, hierarchical, robust, online algorithms for diagnostics and prognostics.
If successful, the model-based fault diagnostics and prognostics techniques will improve the effectiveness of isolating failures in large systems by identifying impending failure propagations and determining the time to critical failures that will increase system reliability and reduce the losses accrued due to failures. This work will bridge the gap between fault management approaches used in computer science and power engineering that are needed as the grid becomes smarter, more complex, and more data intensive. Outcomes of this project will include modeling and run-time software prototypes, research publications, and experimental results in collaborations with industry partners that will be made available to the scientific community.
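The failure-propagation prediction described above can be illustrated with a toy dependency-graph traversal. The component names and the plain reachability model are assumptions for illustration; the project's hierarchical, online diagnostic algorithms, which account for protection mechanisms and timing, are far richer:

```python
from collections import deque

def predict_propagation(deps, failed):
    """Given failure-dependency edges (deps[a] lists components that a
    failure of `a` can propagate to) and an observed failed component,
    return the set of components at risk, via breadth-first traversal."""
    at_risk, frontier = set(), deque([failed])
    while frontier:
        node = frontier.popleft()
        for succ in deps.get(node, ()):
            if succ not in at_risk:
                at_risk.add(succ)
                frontier.append(succ)
    return at_risk
```

For example, with a generator feeding a bus that feeds two feeders, a generator fault puts the bus, both feeders, and their downstream loads at risk, while a fault in a leaf feeder propagates nowhere; prognostics would additionally estimate the time remaining before each at-risk component fails.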
Vanderbilt University
-
National Science Foundation
Submitted by Gabor Karsai on December 21st, 2015
Reliable operation of cyber-physical systems (CPS) of societal importance such as Smart Electric Grids is critical for the seamless functioning of a vibrant economy. Sustained power outages can lead to major disruptions over large areas costing millions of dollars. Efficient computational techniques and tools that curtail such systematic failures by performing fault diagnosis and prognostics are therefore necessary. The Smart Electric Grid is a CPS: it consists of networks of physical components (including generation, transmission, and distribution facilities) interfaced with cyber components (such as intelligent sensors, communication networks, and control software). This grant provides funding to develop new methods to build models for the smart grid representing the failure dependencies in the physical and cyber components. The models will be used to build an integrated system-wide solution for diagnosing faults and predicting future failure propagations that can account for existing protection mechanisms. The original contribution of this work will be in the integrated modeling of failures on multiple levels in a large distributed cyber-physical system and the development of novel, hierarchical, robust, online algorithms for diagnostics and prognostics.
If successful, the model-based fault diagnostics and prognostics techniques will improve the effectiveness of isolating failures in large systems by identifying impending failure propagations and determining the time to critical failures that will increase system reliability and reduce the losses accrued due to failures. This work will bridge the gap between fault management approaches used in computer science and power engineering that are needed as the grid becomes smarter, more complex, and more data intensive. Outcomes of this project will include modeling and run-time software prototypes, research publications, and experimental results in collaborations with industry partners that will be made available to the scientific community.
North Carolina State University
-
National Science Foundation
This project will result in fundamental physical and algorithmic building blocks of a novel cyber-physical system: a two-way communication platform between handlers and working dogs designed to enable accurate training and control in open environments (e.g., disaster response, emergency medical intervention).
Miniaturized sensor packages will be developed to enable non- or minimally-invasive monitoring of dogs' positions and physiology. Activity recognition algorithms will be developed to blend data from multiple sensors. The algorithms will dynamically determine position and behavior from time series of inertial and physiological measurements. Using contextual information about task performance, the algorithms will provide duty-cycling information to reduce sensor power consumption while increasing sensing specificity. The resulting technologies will serve as a platform for implementing handler-dog communication.
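A crude sketch of the duty-cycling idea: estimate activity from a window of inertial samples and raise the sampling rate only while the dog is active. The variance-based activity score and the particular thresholds and rates are hypothetical stand-ins for the project's activity recognition algorithms:

```python
def activity_level(window):
    """Crude activity score: variance of an accelerometer sample window."""
    n = len(window)
    mean = sum(window) / n
    return sum((x - mean) ** 2 for x in window) / n

def next_sample_rate(window, low_hz=1, high_hz=50, threshold=0.5):
    """Duty-cycling rule: sample fast only while activity is detected,
    drop to a low keep-alive rate when the dog is at rest."""
    return high_hz if activity_level(window) > threshold else low_hz
```

Running the detector at the low rate most of the time and bursting to the high rate during activity is what saves power while preserving specificity: the interesting behavioral episodes are still captured at full resolution.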
Strong interactions among computer science, electrical engineering, and veterinary science support this project. Work at the interface between electrical engineering and computer science will enable increased power efficiency and specificity of sensing in the detectors; work at the interface of electrical engineering and veterinary behavior will enable novel physiological sensing packages that measure behavioral signals in real time. Project outcomes will enable significant advances in how humans interact with both cyber and physical agents, including obtaining clearer pictures of behavior through real-time physiological monitoring.
Students are integral to the project, and its multidisciplinary training will help develop the cyber-physical systems workforce pipeline. Project outreach efforts will include working with middle school children, especially women and under-represented minorities, and presentations in public museums that will promote public engagement with and appreciation of the contribution of cyber-physical systems to daily lives. The goal of each outreach activity is to encourage both interest and excitement for STEM topics, demonstrating how computer science and engineering can lead to effective and engaging cyber-physical systems.
North Carolina State University
-
National Science Foundation
The goal of the project is the development of the theory, hardware and computational infrastructure that will enable automatically transforming user-defined, high-level tasks such as inspection of hazardous environments and object retrieval, into provably-correct control for modular robots. Modular robots are composed of simple individual modules; while a single module has limited capabilities, connecting multiple modules in different configurations allows the system to perform complex actions such as climbing, manipulating objects, traveling in unstructured environments and self-reconfiguring (breaking into multiple independent robots and reassembling into larger structures). The project includes (i) defining and populating a large library of perception and actuation building blocks both manually through educational activities and automatically through novel algorithms, (ii) creating automated tools to assign values to probabilistic metrics associated with the performance of library components, (iii) developing a grammar and automated tools for control synthesis that sequence different components of the library to accomplish higher level tasks, if possible, or provide feedback to the user if the task cannot be accomplished and (iv) designing and building a novel modular robot platform capable of rapid and robust self-reconfiguration.
This research will have several outcomes. First, it will lay the foundations for making modular robots easily controlled by anyone. This will enrich the robotic industry with new types of robots with unique capabilities. Second, the research will create novel algorithms that tightly combine perception, control and hardware capabilities. Finally, this project will create an open-source infrastructure that will allow the public to contribute basic controllers to the library thus promoting general research and social interest in robotics and engineering.
Cornell University
-
National Science Foundation