The terms denote technology areas that are part of the CPS technology suite or that are impacted by CPS requirements.
Tracking Fish Movement with a School of Gliding Robotic Fish
This project is focused on developing the technology for continuously tracking the movement of live fish implanted with acoustic tags, using a network of relatively inexpensive underwater robots called gliding robotic fish. The research addresses two fundamental challenges in the system design: (1) accommodating significant uncertainties due to environmental disturbances, communication delays, and apparent randomness in fish movement, and (2) balancing competing objectives (for example, accurate tracking versus long lifetime for the robotic network) while meeting multiple constraints on onboard computing, communication, and power resources. Fish movement data provide insight into choice of habitats, migratory routes, and spawning behavior. By advancing the state of the art in fish tracking technology, this project enables better-informed decisions for fishery management and conservation, including control of invasive species, restoration of native species, and stock assessment for high-valued species, and ultimately contributes to the sustainability of fisheries and aquatic ecosystems. By advancing the coordination and control of gliding robotic fish networks and enabling their operation in challenging environments such as the Great Lakes, the project also facilitates the practical adoption of these robotic systems for a myriad of other applications in environmental monitoring, port surveillance, and underwater structure inspection. The project enhances several graduate courses at Michigan State University, and provides unique interdisciplinary training opportunities for students, including those from underrepresented groups. Outreach activities, including robotic fish demos, museum exhibits, teacher training, and a "Follow That Fish" smartphone app, are specifically designed to pique the interest of pre-college students in science and engineering.
The goal of this project is to create an integrative framework for the design of coupled robotic and biological systems that accommodates system uncertainties and competing objectives in a rigorous and holistic manner. This goal is realized through the pursuit of five tightly coupled research objectives associated with the application of tracking and modeling fish movement: (1) developing new robotic platforms to enable underwater communication and acoustic tag detection, (2) developing robust algorithms with analytical performance assurance to localize tagged fish based on time-of-arrival differences among multiple robots, (3) designing hidden Markov models and online model adaptation algorithms to capture fish movement effectively and efficiently, (4) exploring a two-tier decision architecture for the robots to accomplish fish tracking, which incorporates model predictions of fish movement, energy consumption, and mobility constraints, and (5) experimentally evaluating the design framework, first in an inland lake for localizing or tracking stationary and moving tags, and then in Thunder Bay, Lake Huron, for tracking and modeling the movement of lake trout during spawning. This project offers fundamental insight into the design of robust robotic-physical-biological systems that addresses the challenges of system uncertainties and competing objectives. First, a feedback paradigm is presented for tight interactions between the robotic and biological components, to facilitate the refinement of biological knowledge and robotic strategies in the presence of uncertainties. Second, tools from estimation and control theory (e.g., Cramér-Rao bounds) are exploited in novel ways to analyze the performance limits of fish tracking algorithms, and to guide the design of optimal or near-optimal tradeoffs to meet multiple competing objectives while accommodating onboard resource constraints.
On the biology side, continuous, dynamic tracking of tagged fish with robotic networks represents a significant step forward in acoustic telemetry, and results in novel datasets and models for advancing fish movement ecology.
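Objective (2), localizing a tagged fish from time-of-arrival differences at multiple robots, is at heart a nonlinear least-squares problem. The following Python sketch is purely illustrative (the robot positions, sound speed, and Gauss-Newton solver are assumptions, not the project's implementation):

```python
import numpy as np

SPEED_OF_SOUND = 1500.0  # approximate speed of sound in water, m/s

def tdoa_localize(robot_pos, tdoa, x0, iters=50):
    """Estimate a 2-D tag position from time-difference-of-arrival measurements.

    robot_pos: (N, 2) known robot positions; robot 0 is the reference receiver.
    tdoa:      (N-1,) arrival-time differences relative to robot 0, in seconds.
    x0:        (2,) initial position guess.
    Uses Gauss-Newton iteration on the range-difference residuals.
    """
    x = np.array(x0, dtype=float)
    dr_meas = SPEED_OF_SOUND * np.asarray(tdoa)    # measured range differences
    for _ in range(iters):
        d = np.linalg.norm(robot_pos - x, axis=1)  # range to each robot
        r = (d[1:] - d[0]) - dr_meas               # residuals
        # Jacobian of (d_i - d_0) with respect to x
        J = (x - robot_pos[1:]) / d[1:, None] - (x - robot_pos[0]) / d[0]
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Simulated noiseless example with four robots and a tag at (30, 40)
robots = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_tag = np.array([30.0, 40.0])
d_true = np.linalg.norm(robots - true_tag, axis=1)
tdoa = (d_true[1:] - d_true[0]) / SPEED_OF_SOUND
est = tdoa_localize(robots, tdoa, x0=[50.0, 50.0])
```

In this noiseless setting the iteration recovers the true tag position; the Cramér-Rao bounds mentioned above characterize how measurement noise and robot geometry limit the accuracy achievable by any such estimator.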
Michigan State University
National Science Foundation
Guoliang Xing
Charles Krueger
Submitted by Xiaobo Tan on December 22nd, 2015
Designing semi-autonomous networks of miniature robots for inspection of bridges and other large civil infrastructure
According to the U.S. Department of Transportation, the United States has 605,102 bridges, of which 64% are 30 years or older and 11% are structurally deficient. Visual inspection is a standard procedure to identify structural flaws and possibly predict the imminent collapse of a bridge and determine effective precautionary measures and repairs. Experts who carry out this difficult task must travel to the location of the bridge and spend many hours assessing the integrity of the structure. The proposal is to establish (i) new design and performance analysis principles and (ii) technologies for creating a self-organizing network of small robots to aid visual inspection of bridges and other large civil infrastructure. The main idea is to use such a network to aid the experts in remotely and routinely inspecting complex structures, such as the typical girder assemblage that supports the decks of a suspension bridge. The robots will use wireless information exchange to autonomously coordinate and cooperate in the inspection of pre-specified portions of a bridge. At the end of the task, or whenever possible, they will report images as well as other key measurements back to the experts for further evaluation. Common systems to aid visual inspection rely either on stationary cameras with a restricted field of view or on tethered ground vehicles. Unmanned aerial vehicles cannot access constricted spaces and must be tethered due to power requirements and the need for uninterrupted communication to support the continual safety-critical supervision by one or more operators. In contrast, the system proposed here would be able to access tight spaces, operate in any weather, and execute tasks autonomously over long periods of time. The fact that the proposed framework allows remote expert supervision will reduce cost and time between inspections.
The added flexibility as well as the increased regularity and longevity of the deployments will improve the detection and diagnosis of problems, which will increase safety and support effective preventive maintenance. This project will be carried out by a multidisciplinary team specialized in diverse areas of cyber-physical systems and robotics, such as locomotion, network science, modeling, control systems, hardware sensor design and optimization. It involves collaboration between faculty from the University of Maryland (UMD) and Resensys, which specializes in remote bridge monitoring. The proposed system will be tested in collaboration with the Maryland State Highway Administration, which will also provide feedback and expertise throughout the project. This project includes concrete plans to involve undergraduate students throughout its duration. The investigators, who have an established record of STEM outreach and education, will also leverage existing programs and resources at the Maryland Robotics Center to support this initiative and carry out outreach activities. In order to make student participation more productive and educational, the structure of the proposed system conforms to a hardware architecture adopted at UMD and many other schools for the teaching of undergraduate courses relevant to cyber-physical systems and robotics. This grant will support research on fundamental principles and design of robotic and cyber-physical systems. It will focus on algorithm design for control and coordination, network science, performance evaluation, microfabrication and system integration to address the following challenges: (i) Devise new locomotion and adhesion principles to support mobility within steel and concrete girder structures. (ii) Investigate the design of location estimators, omniscience and coordination algorithms that are provably optimal, subject to power and computational constraints.
(iii) Develop methods to design and analyze the performance of energy-efficient communication protocols that support robot coordination and localization in the presence of the severe propagation barriers caused by the metal and concrete structures of a bridge.
University of Maryland College Park
National Science Foundation
Nuno Martins
Submitted by Nuno Martins on December 22nd, 2015
This project aims to enable cyber-physical systems that can be worn on the body in order to one day allow their users to touch, feel, and manipulate computationally simulated three-dimensional objects or digital data in physically realistic ways, using the whole hand. It will do this by precisely measuring touch and movement-induced displacements of the skin in the hand, and by reproducing these signals interactively, via new technologies to be developed in the project. The resulting systems will offer the potential to impact a wide range of human activities that depend on touch and interaction with the hands. The project seeks to enable new applications for wearable cyber physical interfaces that may have broad applications in health care, manufacturing, consumer electronics, and entertainment. Although human interactive technologies have advanced greatly, current systems employ only a fraction of the sensorimotor capabilities of their users, greatly limiting applications and usability. The development of whole-hand haptic interfaces that allow their wearers to feel and manipulate digital content has been a longstanding goal of engineering research, but has remained far from reality. The reason can be traced to the difficulty of reproducing or even characterizing the complex, action-dependent stimuli that give rise to touch sensations during everyday activities. This project will pioneer new methods for imaging complex haptic stimuli, consisting of movement dependent skin strain and contact-induced surface waves propagating in skin, and for modeling the dependence of these signals on hand kinematics during grasping. It will use the resulting fundamental advances to catalyze the development of novel wearable CPS, in the form of whole-hand haptic interfaces. The latter will employ surface wave and skin strain feedback to supply haptic feedback to the hand during interaction with real and computational objects, enabling a range of new applications in VR. 
The project will be executed through research in three main areas. In the first, it will utilize novel contact and non-contact techniques based on data acquired through on-body sensor arrays to measure whole-hand mechanical stimuli and grasping kinematics at high spatial and temporal resolution. In the second, it will undertake data-driven systems modeling and analysis of statistical contingencies between the kinematic and cutaneous signals sensed during everyday activities. In the third, it will engineer and perceptually evaluate novel cyber-physical systems consisting of haptic interfaces for whole-hand interaction. To further advance the applications of these systems in medicine, through a collaboration with the Drexel College of Medicine, the project will develop new methods for assessing clinical skills of palpation during medical examination, with the aim of improving the efficacy of what is often the first, most common, and best opportunity for diagnosis: the physician's own sense of touch.
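As a toy illustration of the second research area, a statistical contingency between kinematic and cutaneous signals can, in the simplest data-driven setting, be modeled as a regularized linear map fit from recorded data. Everything below (the channel counts, the linear coupling, the ridge solver) is a hypothetical sketch, not the project's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 time samples of 5 joint-angle channels (kinematics)
# and 8 skin-strain channels (cutaneous signals) that depend linearly on them.
K = rng.normal(size=(500, 5))                        # kinematic measurements
W_true = rng.normal(size=(5, 8))                     # coupling used only to simulate data
S = K @ W_true + 0.05 * rng.normal(size=(500, 8))    # cutaneous measurements + noise

def ridge_fit(X, Y, lam=1e-2):
    """Ridge regression: solve (X'X + lam*I) W = X'Y for the coupling matrix W."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W_hat = ridge_fit(K, S)
pred = K @ W_hat
# Fraction of cutaneous-signal variance explained by the kinematics
r2 = 1.0 - np.sum((S - pred) ** 2) / np.sum((S - S.mean(0)) ** 2)
```

A high explained-variance fraction would indicate that the cutaneous signals are largely predictable from hand kinematics; the project's modeling would of course use measured, not simulated, data and richer model classes.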
Drexel University
National Science Foundation
Submitted by Yon Visell on December 22nd, 2015
More than one million people, including many wounded warfighters from recent military missions, are living with lower-limb amputation in the United States. This project will design wearable body area sensor systems for real-time measurement of an amputee's energy expenditure and will develop computer algorithms for automatic lower-limb prosthesis optimization. The developed technology will offer a practical tool for optimal prosthetic tuning that may maximally reduce an amputee's energy expenditure during walking. Further, this project will develop user-control technology to support the user's volitional control of lower-limb prostheses. The developed volitional control technology will allow the prosthesis to adapt to altered environments and situations such that amputees can walk as if using their own biological limbs. An optimized prosthesis with user-control capability will promote equal force distribution on the intact and prosthetic limbs and decrease the risk of damage to the intact limb from musculoskeletal imbalance or pathologies. Maintenance of health in these areas is essential for the amputee's quality of life and well-being. Student participation is supported. This research will advance Cyber-Physical Systems (CPS) science and engineering through the integration of sensor and computational technologies for the optimization and control of physical systems. This project will design body area sensor network systems which integrate spatiotemporal information from electromyography (EMG), electroencephalography (EEG), and inertial measurement unit (IMU) sensors, providing quantitative, real-time measurements of the user's physical load and mental effort for personalized prosthesis optimization. This project will design machine learning-based, automatic prosthesis parameter optimization technology to support in-home prosthesis optimization by users themselves.
This project will also develop an EEG-based, embedded computing-supported volitional control technology that allows users to control a prosthesis in real time with their thoughts to cope with altered situations and environments. The technical advances from this project will provide wearable and wireless body area sensing solutions for broader applications in healthcare and human-CPS interaction. The explored computational methods will be broadly applicable to real-time, automatic target recognition from spatiotemporal, multivariate data in CPS-related communication and control applications. This synergistic project will be implemented through multidisciplinary collaboration among computer scientists and engineers, clinicians, and prosthetic industry engineers. This project will also provide interdisciplinary, CPS-relevant training for both undergraduate and graduate students by integrating computational methods with sensor networks, embedded processors, human physical and mental activity recognition, and prosthetic control.
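To make the idea of automatic prosthesis parameter optimization concrete, the sketch below searches a two-parameter grid for the setting that minimizes a surrogate energy cost. The cost function, parameter names, and grid are invented for illustration; the project's approach would instead learn from real EMG/EEG/IMU measurements of the user's physical and mental load:

```python
import numpy as np

def energy_cost(stiffness, damping):
    """Hypothetical surrogate for metabolic energy expenditure as a function of
    two prosthesis parameters; a real system would estimate this cost online
    from the EMG/EEG/IMU body-area sensor network rather than a formula."""
    return (stiffness - 3.2) ** 2 + 0.5 * (damping - 1.1) ** 2 + 10.0

def tune(stiffness_grid, damping_grid):
    """Exhaustive search over a coarse parameter grid, the simplest possible
    stand-in for a learning-based in-home optimizer."""
    best = min((energy_cost(k, b), k, b)
               for k in stiffness_grid for b in damping_grid)
    return best[1], best[2]

k_opt, b_opt = tune(np.linspace(0.0, 5.0, 51), np.linspace(0.0, 2.0, 21))
```

Grid search scales poorly with the number of parameters, which is one reason a learning-based optimizer (e.g., one that models the cost surface from a few measured trials) is preferable in practice.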
Virginia Commonwealth University
National Science Foundation
Submitted by Anonymous on December 22nd, 2015
In the next few decades, autonomous vehicles will become an integral part of the traffic flow on highways. However, they will constitute only a small fraction of all vehicles on the road. This research develops technologies to employ autonomous vehicles already in the stream to improve the traffic flow of human-controlled vehicles. The goal is to mitigate undesirable jamming and traffic waves, and ultimately to reduce fuel consumption. Contemporary control of traffic flow, such as ramp metering and variable speed limits, is largely limited to local and highly aggregate approaches. This research represents a step towards global control of traffic using a few autonomous vehicles, and it provides the mathematical, computational, and engineering structure to address and employ these new connections. Even if autonomous vehicles can provide only a small percentage reduction in fuel consumption, this will have a tremendous economic and environmental impact due to the heavy dependence of the transportation system on non-renewable fuels. The project is highly collaborative and interdisciplinary, involving personnel from different disciplines in engineering and mathematics. It includes the training of PhD students and a postdoctoral researcher, and outreach activities to disseminate traffic research to the broader public. This project develops new models, computational methods, software tools, and engineering solutions to employ autonomous vehicles to detect and mitigate traffic events that adversely affect fuel consumption and congestion. The approach is to combine the data measured by autonomous vehicles in the traffic flow, as well as other traffic data, with appropriate macroscopic traffic models to detect and predict congestion trends and events. Based on this information, the loop is closed by having the autonomous vehicles follow carefully prescribed velocity controllers that are demonstrated to reduce congestion.
These controllers require detection and response times that are beyond the limit of a human's ability. The choice of the best control strategy is determined via optimization approaches applied to the multiscale traffic model and suitable fuel consumption estimation. The communication between the autonomous vehicles, combined with the computational and control tasks on each individual vehicle, requires a cyber-physical approach to the problem. This research considers new types of traffic models (micro-macro models, network approaches for higher-order models), new control algorithms for traffic flow regulation, and new sensing and control paradigms that are enabled by a small number of controllable systems available in a flow.
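One standard microscopic description that makes this setting concrete is the optimal-velocity follow-the-leader model, in which each vehicle relaxes toward a headway-dependent desired speed, and a single designated vehicle can be given a different control law. The parameters and the simple averaging controller below are illustrative assumptions, not the project's design:

```python
import numpy as np

def ov(gap, v_max=30.0, d0=25.0, w=5.0):
    """Optimal-velocity function: desired speed (m/s) as a function of headway (m)."""
    return v_max * 0.5 * (1.0 + np.tanh((gap - d0) / w))

def simulate(n=20, L=1000.0, steps=2000, dt=0.05, tau=0.6, av_index=None):
    """Follow-the-leader dynamics on a ring road of length L.
    If av_index is set, that vehicle relaxes toward the ring-average speed
    instead of chasing its own headway (a crude stand-in for a prescribed
    velocity controller)."""
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, L, n, endpoint=False) + rng.normal(0.0, 1.0, n)
    v = np.full(n, 10.0)
    for _ in range(steps):
        gap = (np.roll(x, -1) - x) % L        # headway to the vehicle ahead
        a = (ov(gap) - v) / tau               # relax toward the desired speed
        if av_index is not None:
            a[av_index] = (v.mean() - v[av_index]) / (4.0 * tau)
        v = np.clip(v + dt * a, 0.0, None)
        x = (x + dt * v) % L
    return v

v_human = simulate()               # all vehicles follow the human-driver model
v_with_av = simulate(av_index=0)   # vehicle 0 runs the averaging controller
```

In denser regimes (average headway near d0, large relaxation time tau) this class of models is known to develop stop-and-go waves, which is exactly the phenomenon the prescribed velocity controllers aim to damp.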
Rutgers University Camden
National Science Foundation
Submitted by Benedetto Piccoli on December 22nd, 2015
In the next few decades, autonomous vehicles will become an integral part of the traffic flow on highways. However, they will constitute only a small fraction of all vehicles on the road. This research develops technologies to employ autonomous vehicles already in the stream to improve the traffic flow of human-controlled vehicles. The goal is to mitigate undesirable jamming and traffic waves, and ultimately to reduce fuel consumption. Contemporary control of traffic flow, such as ramp metering and variable speed limits, is largely limited to local and highly aggregate approaches. This research represents a step towards global control of traffic using a few autonomous vehicles, and it provides the mathematical, computational, and engineering structure to address and employ these new connections. Even if autonomous vehicles can provide only a small percentage reduction in fuel consumption, this will have a tremendous economic and environmental impact due to the heavy dependence of the transportation system on non-renewable fuels. The project is highly collaborative and interdisciplinary, involving personnel from different disciplines in engineering and mathematics. It includes the training of PhD students and a postdoctoral researcher, and outreach activities to disseminate traffic research to the broader public. This project develops new models, computational methods, software tools, and engineering solutions to employ autonomous vehicles to detect and mitigate traffic events that adversely affect fuel consumption and congestion. The approach is to combine the data measured by autonomous vehicles in the traffic flow, as well as other traffic data, with appropriate macroscopic traffic models to detect and predict congestion trends and events. Based on this information, the loop is closed by having the autonomous vehicles follow carefully prescribed velocity controllers that are demonstrated to reduce congestion.
These controllers require detection and response times that are beyond the limit of a human's ability. The choice of the best control strategy is determined via optimization approaches applied to the multiscale traffic model and suitable fuel consumption estimation. The communication between the autonomous vehicles, combined with the computational and control tasks on each individual vehicle, requires a cyber-physical approach to the problem. This research considers new types of traffic models (micro-macro models, network approaches for higher-order models), new control algorithms for traffic flow regulation, and new sensing and control paradigms that are enabled by a small number of controllable systems available in a flow.
Temple University
National Science Foundation
Submitted by Benjamin Seibold on December 22nd, 2015
Many safety-critical cyber-physical systems rely on advanced sensing capabilities to react to changing environmental conditions. One such domain is automotive systems. In this domain, a proliferation of advanced sensor technology is being fueled by an expanding range of autonomous capabilities (blind spot warnings, automatic lane-keeping, etc.). The limit of this expansion is full autonomy, which has been demonstrated in various one-off prototypes, but at the expense of significant hardware over-provisioning that is not tenable for a consumer product. To enable features approaching full autonomy in a commercial vehicle, software infrastructure will be required that enables multiple sensor-processing streams to be multiplexed onto a common hardware platform at reasonable cost. This project is directed at the development of such infrastructure. The desired infrastructure will be developed by focusing on a particularly compelling challenge problem: enabling cost-effective driver-assist and autonomous-control automotive features that utilize vision-based sensing through cameras. This problem will be studied by (i) examining numerous multicore-based hardware configurations at various fixed price points based on realistic automotive use cases, and by (ii) characterizing the range of vision-based workloads that can be feasibly supported using the software infrastructure to be developed. The research to be conducted will be a collaboration involving academic researchers at UNC and engineers at General Motors Research. The collaborative nature of this effort increases the likelihood that the results obtained will have real impact in the U.S. automotive industry. Additionally, this project is expected to produce new open-source software and tools, new course content, public outreach through participation in UNC's demo program, and lectures and seminars by the investigators at national and international forums.
University of North Carolina at Chapel Hill
National Science Foundation
Alexander Berg
Submitted by James Anderson on December 22nd, 2015
One of the challenges for future cyber-physical systems is the exploration of large design spaces. Evolutionary algorithms (EAs), which embody a simplified computational model of the mutation and selection mechanisms of natural evolution, are known to be effective for design optimization. However, the traditional formulations are limited to choosing values for a predetermined set of parameters within a given fixed architecture. This project explores techniques, based on the idea of hidden genes, which enable EAs to select a variable number of components, thereby expanding the explored design space to include selection of a system's architecture. Hidden genetic optimization algorithms have a broad range of potential applications in cyber-physical systems, including automated construction systems, transportation systems, micro-grid systems, and space systems. The project integrates education with research by involving students ranging from high school through graduate school in activities commensurate with their skills, and promotes dissemination of the research results through open source distribution of algorithm implementation code and participation in the worldwide Global Trajectory Optimization Competition. Instead of using a single layer of coding to represent the variables of the system in current EAs, this project investigates adding a second layer of coding to enable hiding some of the variables, as needed, during the search for the optimal system architecture. This genetic hiding concept is found in nature and provides a natural way of handling system architectures covering a range of different sizes in the design space. In addition, the standard mutation and selection operations in EAs will be replaced by new operations that are intended to extract the full potential of the hidden gene model. Specific applications include space mission design, microgrid optimization, and traffic network signal coordinated planning.
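A minimal sketch of the hidden-gene idea: each gene carries a design value plus a hidden flag, fitness evaluates only the expressed genes, and mutation can toggle expression, so the effective number of components (the architecture size) evolves along with the parameter values. All numbers, the fitness function, and the operators below are illustrative assumptions, not the project's algorithms:

```python
import random

random.seed(42)

TARGET = 7.5  # hypothetical design requirement

def fitness(genome):
    """Only expressed genes contribute: a gene is a (value, hidden) pair.
    The error term rewards meeting the target; the second term penalizes
    architecture complexity (number of expressed components)."""
    active = [v for v, hidden in genome if not hidden]
    if not active:
        return float("inf")
    return abs(sum(active) - TARGET) + 0.1 * len(active)

def mutate(genome, rate=0.3):
    out = []
    for v, hidden in genome:
        if random.random() < rate:
            hidden = not hidden            # toggle expression: the hidden-gene operation
        if random.random() < rate:
            v += random.gauss(0.0, 0.5)    # perturb the design variable
        out.append((v, hidden))
    return out

def evolve(pop_size=30, genes=6, generations=200):
    pop = [[(random.uniform(0.0, 5.0), random.random() < 0.5) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]     # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(pop, key=fitness)

best = evolve()
```

Because the hidden flags are part of the genome, the search can discover both how many components to use and what their values should be, which is exactly the architecture-selection capability the second coding layer is meant to provide.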
Michigan Technological University
National Science Foundation
Ossama Abdelkhalik
Submitted by Ossama Abdelkhalik on December 22nd, 2015
Despite many advances in vehicle automation, much remains to be done: the best autonomous vehicle today still lags behind human drivers, and connected vehicle (V2V) and infrastructure (V2I) standards are only just emerging. In order for such cyber-physical systems to fully realize their potential, they must be capable of exploiting one of the richest and most complex abilities of humans, which we take for granted: seeing and understanding the visual world. If automated vehicles had this ability, they could drive more intelligently, and share information about road and environment conditions, events, and anomalies to improve situational awareness and safety for other automated vehicles as well as human drivers. That is the goal of this project, to achieve a synergy between computer vision, machine learning and cyber-physical systems that leads to a safer, cheaper and smarter transportation sector, and which has potential applications to other sectors including agriculture, food quality control and environment monitoring. To achieve this goal, this project brings together expertise in computer vision, sensing, embedded computing, machine learning, big data analytics and sensor networks to develop an integrated edge-cloud architecture for (1) "anytime scene understanding" to unify diverse scene understanding methods in computer vision, and (2) "cooperative scene understanding" that leverages vehicle-to-vehicle and vehicle-to-infrastructure protocols to coordinate with multiple systems, while (3) emphasizing how security and privacy should be managed at scale without impacting overall quality-of-service. This architecture can be used for autonomous driving and driver-assist systems, and can be embedded within infrastructure (digital signs, traffic lights) to avoid traffic congestion, reduce risk of pile-ups and improve situational awareness. 
Validation and transition of the research to practice are through integration within City of Pittsburgh public works department vehicles, Carnegie Mellon University NAVLAB autonomous vehicles, and across the smart road infrastructure corridor under development in Pittsburgh. The project also includes activities to foster development of a new cyber-physical systems workforce, through involvement of students in the research, co-taught multi-disciplinary courses, and co-organized workshops.
Carnegie-Mellon University
National Science Foundation
Submitted by Srinivasa Narasimhan on December 22nd, 2015
Recent developments in nanotechnology and synthetic biology have enabled a new direction in biological engineering: synthesis of collective behaviors and spatio-temporal patterns in multi-cellular bacterial and mammalian systems. This will have a dramatic impact in such areas as amorphous computing, nano-fabrication, and, in particular, tissue engineering, where patterns can be used to differentiate stem cells into tissues and organs. While recent technologies such as tissue- and organoid-on-a-chip have the potential to produce a paradigm shift in tissue engineering and drug development, the synthesis of user-specified, emergent behaviors in cell populations is a key step to unlock this potential and remains a challenging, unsolved problem. This project brings together synthetic biology and micron-scale mobile robotics to define the basis of a next-generation cyber-physical system (CPS) called biological CPS (bioCPS). Synthetic gene circuits for decision making and local communication among the cells are automatically synthesized using a Bio-Design Automation (BDA) workflow. A Robot Assistant for Communication, Sensing, and Control in Cellular Networks (RA), which is designed and built as part of this project, is used to generate desired patterns in networks of engineered cells. In RA, the engineered cells interact with a set of micro-robots that implement control, sensing, and long-range communication strategies needed to achieve the desired global behavior. The micro-robots include both living and non-living matter (engineered cells attached to inorganic substrates that can be controlled using externally applied fields). This technology is applied to test the formation of various patterns in living cells.
The project has a rich education and outreach plan, which includes nationwide activities for CPS education of high-school students, lab tours and competitions for high-school and undergraduate students, workshops, seminars, and courses for graduate students, as well as specific initiatives for under-represented groups. Central to the project is the development of theory and computational tools that will significantly advance the state of the art in CPS at large. A novel, formal methods approach is proposed for synthesis of emergent, global behaviors in large collections of locally interacting agents. In particular, a new logic whose formulas can be efficiently learned from quad-tree representations of partitioned images is developed. The quantitative semantics of the logic maps the synthesis of local control and communication protocols to an optimization problem. The project contributes to the nascent area of temporal logic inference by developing a machine learning method to learn temporal logic classifiers from large amounts of data. Novel abstraction and verification techniques for stochastic dynamical systems are defined and used to verify the correctness of the gene circuits in the BDA workflow.
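The quad-tree representation mentioned above can be illustrated in a few lines of Python: a square occupancy image is split recursively into quadrants until each block is uniform, yielding the leaves over which logic formulas would be learned. The example image is invented for illustration:

```python
import numpy as np

def quadtree(img, x=0, y=0, size=None):
    """Recursively partition a square occupancy image into uniform quadrants.
    Returns a list of leaves (x, y, size, value); non-uniform blocks are split
    into four sub-quadrants until every block holds a single value."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size == 1 or block.min() == block.max():
        return [(x, y, size, int(block[0, 0]))]
    h = size // 2
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += quadtree(img, x + dx, y + dy, h)
    return leaves

# A 4x4 pattern: one occupied 2x2 region (e.g., a cell colony) in the corner
img = np.zeros((4, 4), dtype=int)
img[:2, :2] = 1
leaves = quadtree(img)
```

For this image the tree bottoms out after one split: one fully occupied quadrant and three empty ones, a far more compact description than the raw pixel grid, which is what makes the representation attractive for learning logic formulas over large partitioned images.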
University of Pennsylvania
National Science Foundation
Submitted by Vijay Kumar on December 22nd, 2015