Monitoring and control of cyber-physical systems.
This NSF Cyber-Physical Systems (CPS) Frontier project "Verified Human Interfaces, Control, and Learning for Semi-Autonomous Systems (VeHICaL)" is developing the foundations of verified co-design of interfaces and control for human cyber-physical systems (h-CPS) --- cyber-physical systems that operate in concert with human operators. VeHICaL aims to bring a formal approach to designing both interfaces and control for h-CPS, with provable guarantees.
The VeHICaL project is grounded in a novel problem formulation that elucidates the unique requirements on h-CPS, including not only traditional correctness properties on autonomous controllers but also quantitative requirements on the logic governing switching or sharing of control between human operator and autonomous controller, the user interface, privacy properties, etc. The project is making contributions along four thrusts: (1) formalisms for modeling h-CPS; (2) computational techniques for learning, verification, and control of h-CPS; (3) design and validation of sensor and human-machine interfaces; and (4) empirical evaluation in the domain of semi-autonomous vehicles. The VeHICaL approach is bringing a conceptual shift of focus away from separately addressing the design of control systems and human-machine interaction and towards the joint co-design of human interfaces and control using common modeling formalisms and requirements on the entire system. This co-design approach is making novel intellectual contributions to the areas of formal methods, control theory, sensing and perception, cognitive science, and human-machine interfaces.
Cyber-physical systems deployed in societal-scale applications almost always interact with humans. The foundational work being pursued in the VeHICaL project is being validated in two application domains: semi-autonomous ground vehicles that interact with human drivers, and semi-autonomous aerial vehicles (drones) that interact with human operators. A principled approach to h-CPS design --- one that obtains provable guarantees on system behavior with humans in the loop --- can have an enormous positive impact on the emerging national "smart" infrastructure. In addition, this project is pursuing a substantial educational and outreach program including: (i) integrating research into undergraduate and graduate coursework, especially capstone projects; (ii) extensive online course content leveraging existing work by the PIs; (iii) a strong undergraduate research program; and (iv) outreach and summer programs for school children with a focus on reaching under-represented groups.
University of North Carolina at Chapel Hill
National Science Foundation
Software-Defined Control (SDC) is a revolutionary methodology for controlling manufacturing systems that uses a global view of the entire manufacturing system, including all of the physical components (machines, robots, and parts to be processed) as well as the cyber components (logic controllers, RFID readers, and networks). As manufacturing systems become more complex and more connected, they become more susceptible to small faults that could cascade into major failures, or even to cyber-attacks that enter the plant, for example through the internet. In this project, models of both the cyber and physical components will be used to predict the expected behavior of the manufacturing system. Because the components of the manufacturing system are tightly coupled in both time and space, this temporal-physical coupling, together with high-fidelity models of the system, allows any fault or attack that changes the behavior of the system to be detected and classified. Once detected and identified, the system will compute new routes for the physical parts through the plant, thus avoiding the affected locations. These new routes will be directly downloaded to the low-level controllers that communicate with the machines and robots, and will keep production operating (albeit at a reduced level), even in the face of an otherwise catastrophic fault. These algorithms will be inspired by the successful approach of Software-Defined Networking. Anomaly detection methods will be developed that can ascertain the difference between the expected (modeled) behavior of the system and the observed behavior (from sensors). Anomalies will be detected both at short time scales, using high-fidelity models, and at longer time scales, using machine learning and statistical methods. The detection and classification of anomalies, whether they be random faults or cyber-attacks, will represent a significant contribution, and will enable the re-programming of the control systems (through re-routing the parts) to continue production.
The manufacturing industry represents a significant fraction of the US GDP, and each manufacturing plant represents a large capital investment. The ability to keep these plants running in the face of inevitable faults and even malicious attacks can improve productivity -- keeping costs low for both manufacturers and consumers. Importantly, these same algorithms can be used to redefine the production routes (and machine programs) when a new part is introduced, or the desired production volume is changed, to maximize profitability for the manufacturing operation.
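As a rough illustration of the detect-and-reroute idea described above (not the project's actual algorithms), the sketch below flags a station whose observed cycle time deviates from its model prediction and then finds a part route that avoids it. The plant graph, station names, cycle times, and tolerance are hypothetical placeholders.

```python
# Minimal sketch: detect a station whose observed behavior deviates from its
# model prediction, then re-route parts around it. All names and numbers are
# hypothetical; the real system uses far richer cyber-physical models.
from collections import deque

# Hypothetical plant layout: directed routing graph between stations.
ROUTES = {
    "load": ["mill_1", "mill_2"],
    "mill_1": ["drill"],
    "mill_2": ["drill"],
    "drill": ["unload"],
    "unload": [],
}

def detect_anomalies(predicted, observed, tol=0.15):
    """Flag stations whose observed cycle time deviates from the model by more than tol (relative)."""
    return {m for m in predicted
            if abs(observed[m] - predicted[m]) > tol * predicted[m]}

def reroute(src, dst, blocked):
    """Breadth-first search for a part route that avoids blocked stations."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in ROUTES[path[-1]]:
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no safe route: this part type cannot be produced right now

predicted = {"load": 5, "mill_1": 30, "mill_2": 32, "drill": 12, "unload": 4}
observed  = {"load": 5, "mill_1": 55, "mill_2": 31, "drill": 12, "unload": 4}
blocked = detect_anomalies(predicted, observed)   # {'mill_1'}
print(reroute("load", "unload", blocked))         # ['load', 'mill_2', 'drill', 'unload']
```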
University of Illinois at Urbana-Champaign
National Science Foundation
This NSF Cyber-Physical Systems (CPS) Frontier project "Verified Human Interfaces, Control, and Learning for Semi-Autonomous Systems (VeHICaL)" is developing the foundations of verified co-design of interfaces and control for human cyber-physical systems (h-CPS) --- cyber-physical systems that operate in concert with human operators. VeHICaL aims to bring a formal approach to designing both interfaces and control for h-CPS, with provable guarantees.
The VeHICaL project is grounded in a novel problem formulation that elucidates the unique requirements on h-CPS, including not only traditional correctness properties on autonomous controllers but also quantitative requirements on the logic governing switching or sharing of control between human operator and autonomous controller, the user interface, privacy properties, etc. The project is making contributions along four thrusts: (1) formalisms for modeling h-CPS; (2) computational techniques for learning, verification, and control of h-CPS; (3) design and validation of sensor and human-machine interfaces; and (4) empirical evaluation in the domain of semi-autonomous vehicles. The VeHICaL approach is bringing a conceptual shift of focus away from separately addressing the design of control systems and human-machine interaction and towards the joint co-design of human interfaces and control using common modeling formalisms and requirements on the entire system. This co-design approach is making novel intellectual contributions to the areas of formal methods, control theory, sensing and perception, cognitive science, and human-machine interfaces.
Cyber-physical systems deployed in societal-scale applications almost always interact with humans. The foundational work being pursued in the VeHICaL project is being validated in two application domains: semi-autonomous ground vehicles that interact with human drivers, and semi-autonomous aerial vehicles (drones) that interact with human operators. A principled approach to h-CPS design --- one that obtains provable guarantees on system behavior with humans in the loop --- can have an enormous positive impact on the emerging national "smart" infrastructure. In addition, this project is pursuing a substantial educational and outreach program including: (i) integrating research into undergraduate and graduate coursework, especially capstone projects; (ii) extensive online course content leveraging existing work by the PIs; (iii) a strong undergraduate research program; and (iv) outreach and summer programs for school children with a focus on reaching under-represented groups.
California Institute of Technology
National Science Foundation
Submitted by Richard Murray on November 30th, 2017
Recent years have seen an explosion in the use of cellular and Wi-Fi networks to deploy fleets of semi-autonomous physical systems, including unmanned aerial vehicles (UAVs), self-driving vehicles, and weather stations, to perform tasks such as package delivery, crop harvesting, and weather prediction. The use of cellular and Wi-Fi networks has dramatically decreased the cost, energy, and maintenance associated with these forms of embedded technology, but has also added new challenges in the form of delay, packet drops, and loss of signal. Because of these new challenges, and because of our limited understanding of how unreliable communication affects performance, the current protocols for regulating physical systems over wireless networks are slow, inefficient, and potentially unstable. In this project, we develop a new computational framework for designing provably fast, efficient, and safe protocols for the control of fleets of semi-autonomous physical systems.
The systems considered in this project are dynamic, defined by coupled ordinary differential equations, and connected by feedback to a controller, with a feedback interconnection that has multiple static delays, has multiple time-varying delays, or is sampled at discrete times. For these systems, we would like to design optimal and robust feedback controllers assuming a limited number of sensor measurements are available. Specifically, we seek to design a class of algorithms which are computationally efficient, which scale to large numbers of subsystems, and which, given models of the dynamics, communication links, and uncertainty, will return a controller which is provably stable, robust to model uncertainty, and provably optimal in the relevant metric of performance. To accomplish this task, we leverage a new duality result which allows the problem of controller synthesis for infinite-dimensional systems to be convexified. This result allows the problem of optimal and robust dynamic output-feedback controller synthesis to be reformulated as feasibility of a set of convex linear operator inequalities. We then use semidefinite programming to parametrize the set of feasible operators and thereby test feasibility of the inequalities with little to no conservatism. In a similar manner, estimator design and optimal controller synthesis are recast as semidefinite programming problems and used to solve the problems of sampled-data systems and systems with input delay. The algorithms will be scalable to at least 20 states and the controllers will be field-tested on a fleet of wheeled robotic vehicles.
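The toy example below illustrates the convex-feasibility pattern the abstract refers to in its simplest finite-dimensional form: certifying stability of dx/dt = Ax by searching for a Lyapunov matrix P with a semidefinite program (here via cvxpy). It is only a sketch of the LMI-to-SDP workflow; the project itself targets operator inequalities for delayed, infinite-dimensional systems, and the matrix A and tolerances below are arbitrary.

```python
# Toy finite-dimensional analogue of the SDP-based synthesis described above:
# certify stability of dx/dt = A x by finding P > 0 with A'P + PA < 0.
# A and eps are hypothetical; this is not the project's operator-valued setup.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # a small, stable example system
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility problem
prob.solve()

print(prob.status)   # 'optimal' means a Lyapunov certificate P exists
print(P.value)
```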
Arizona State University
National Science Foundation
The goal of this project is to investigate a low-cost and energy-efficient hardware and software system to close the loop between processing of sensor data, semantically high-level detection, and trajectory generation in real time. To safely integrate unmanned aerial vehicles (UAVs) into the national airspace, there is an urgent need to develop onboard sense-and-avoid capability. While deep neural networks (DNNs) have significantly improved the accuracy of object detection and decision making, they are prohibitively complex to implement on small UAVs. Moreover, existing UAV flight control approaches ignore the nonlinearities of UAVs and do not provide trajectory assurance.
The research thrusts of this project are: (i) FPGA implementation of DNNs: both fully connected and convolutional layers of deep (convolutional) neural networks will be trained using (block-)circulant matrices and implemented using custom-designed universal Fast Fourier Transform kernels on FPGA. This research thrust will enable efficient implementation of DNNs, reducing memory and computation complexity from O(N^2) to O(N) and O(N log N), respectively; (ii) autonomous detection and perception for onboard sense-and-avoid: existing regional detection neural networks will be extended to work with images taken from different angles and with multi-modal sensor inputs; (iii) real-time waypoint and trajectory generation: an integrated trajectory generation and feedback control scheme for steering under-actuated vehicles through desired waypoints in 3D space will be developed. For efficient implementation and hardware reuse, both the detection and control problems will be formulated and solved using DNNs with (block-)circulant weight matrices. Deep reinforcement learning models will be investigated for waypoint generation and for assigning artificial potentials around obstacles to guarantee a safe distance. The fundamental research results will enable onboard computing and real-time detection and control, which are cornerstones of autonomous, next-generation UAVs.
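As a minimal numerical illustration of why (block-)circulant weights reduce a layer's cost from O(N^2) to O(N log N), the sketch below checks that multiplication by a circulant matrix equals an element-wise product in the FFT domain. The sizes and weights are arbitrary; this is not the project's FPGA implementation.

```python
# Circulant matrix-vector product via FFT: the key identity behind the
# O(N log N) claim above. All values here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 8
c = rng.standard_normal(N)   # first column of the circulant weight matrix
x = rng.standard_normal(N)   # layer input

# Dense O(N^2) reference: build the full circulant matrix explicitly.
C = np.column_stack([np.roll(c, k) for k in range(N)])
y_dense = C @ x

# FFT-based O(N log N) version used for the actual computation.
y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

print(np.allclose(y_dense, y_fft))   # True: the two products agree
```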
Syracuse University
National Science Foundation
Amit Sanyal
Yanzhi Wang
Jian Tang
Senem Velipasalar
This proposal is for research on the Mobile Automated Rovers Fly-By (MARS-FLY) for Bridge Network Resiliency. Bridges are often in remote locations, and the cost of installing electricity and a data acquisition system in hundreds of thousands of bridges is prohibitive. The MARS-FLY project will develop a cyber-physical system (CPS) designed to monitor the health of highway bridges, control the loads imposed on bridges by heavy trucks, and provide visual inspectors with quantitative information for data-driven bridge health assessment, while requiring no electricity and only minimal data acquisition electronics on site. For fly-by monitoring, GPS-controlled, auto-piloted drones will periodically carry data acquisition electronics to the bridge and download data from the sensors at close range. Larger imaging drones carrying infrared (IR) cameras will be used to detect detailed damage such as concrete delamination.
The research objectives will be accomplished, first, by wireless recharging of remote sensor motes by drone to extend sensor operational lifetime, while wireless recharging of the drone battery will extend the drone's operational efficiency, payload, and range. The novel multi-coil wireless powering approach will investigate the use of an engineered material (a metamaterial) in the resonant link to achieve power levels and link distances that would otherwise be unachievable. Next, the project pursues a major scientific advance in using small quantities of low-quality sensor data and IR images to determine damage information at all levels: detecting a change in behavior, its location, and its magnitude; streamlining reliability analysis to incorporate the new damage information into the bridge's reliability index using combined numerical and probabilistic approaches such as Ensemble Empirical Mode Decomposition with the Hilbert transform; and, finally, detecting nonlinearities in the signals within a Bayesian updating framework. Moreover, an instrumented drive-by vehicle will complement damage detection on the bridge. A Bayesian updating framework will be used to update the probability distribution for bridge condition, given the measurements. Image processing of the infrared images to distinguish environmental effects from true bridge deterioration (e.g., delamination in concrete) will be used to develop a better method of site-specific and environment-specific calibration.
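The sketch below illustrates only the generic Bayesian-updating step mentioned above: a discrete prior over bridge condition states is updated with the likelihood of an anomalous drone-collected measurement. The states, likelihoods, and probabilities are hypothetical placeholders, not calibrated bridge data.

```python
# Minimal Bayesian update over hypothetical bridge condition states.
# The real project updates a reliability index from sensor and IR data;
# this only shows the prior-times-likelihood-then-normalize step.
states = ["intact", "minor_damage", "severe_damage"]
prior  = {"intact": 0.80, "minor_damage": 0.15, "severe_damage": 0.05}

# P(measurement looks anomalous | state), chosen arbitrarily for illustration.
likelihood_anomalous = {"intact": 0.05, "minor_damage": 0.40, "severe_damage": 0.90}

def bayes_update(prior, likelihood):
    """Posterior proportional to prior * likelihood, normalized to sum to 1."""
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnormalized.values())
    return {s: v / z for s, v in unnormalized.items()}

posterior = bayes_update(prior, likelihood_anomalous)
print(posterior)   # probability mass shifts toward the damaged states
```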
University of Alabama at Birmingham
National Science Foundation
Mohammad Haider
This paper proposes an event-triggered interactive gradient descent method for solving multi-objective optimization problems. We consider scenarios where a human decision maker works with a robot in a supervisory manner in order to find the best Pareto solution to an optimization problem. The human has a time-invariant function that represents the value she gives to the different outcomes. However, this function is implicit, meaning that the human does not know it in closed form, but can respond to queries about it.
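The sketch below is a schematic of the interaction pattern the abstract describes, not the paper's actual algorithm: the robot descends a weighted combination of the objectives and re-queries the (simulated) human's implicit preference only when an event-trigger condition fires. The objectives, trigger rule, and oracle weights are hypothetical.

```python
# Schematic event-triggered interactive descent on two competing objectives.
# The human oracle, trigger threshold, and objectives are placeholders.
import numpy as np

def objectives(x):
    # Two competing objectives; their Pareto set lies between the two minima.
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def objective_grads(x):
    return np.stack([2.0 * (x - 1.0), 2.0 * (x + 1.0)])

def human_oracle(f_vals):
    # Stand-in for querying the human: she implicitly weights the first
    # objective 70/30 but only reveals this when asked.
    return np.array([0.7, 0.3])

x = np.zeros(2)
weights = human_oracle(objectives(x))
last_query_f = objectives(x)
step, trigger_threshold = 0.05, 0.5

for _ in range(200):
    f_vals = objectives(x)
    # Event trigger: re-query the human only if the outcome has moved enough
    # since the last query, keeping the number of human interactions low.
    if np.linalg.norm(f_vals - last_query_f) > trigger_threshold:
        weights = human_oracle(f_vals)
        last_query_f = f_vals
    x = x - step * weights @ objective_grads(x)

print(x)   # settles at a Pareto point biased toward the preferred objective
```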
Submitted by Jorge Cortes on October 13th, 2017
Event
ARCS 2018
CALL FOR PAPERS, WORKSHOPS, & TUTORIALS
31st International Conference on Architecture of Computing Systems (ARCS 2018)
April 09-12, 2018 | Braunschweig, Germany, at the Technical University of Braunschweig | http://arcs2018.itec.kit.edu/
Airborne networking, unlike the networking of fixed sensors, mobile devices, and slowly-moving vehicles, is very challenging because of the high mobility, stringent safety requirements, and uncertain airspace environment. Airborne networking is important because of the growing complexity of the National Airspace System with the integration of unmanned aerial vehicles (UAVs). This project develops an innovative theoretical framework for cyber-physical systems (CPS) to enable airborne networking, which utilizes direct flight-to-flight communication for flexible information sharing, safe maneuvering, and coordination of time-critical missions.
This project uses an innovative co-design approach that exploits the mutual benefits of networking and decentralized mobility control in an uncertain heterogeneous environment. The approach departs from the usual perspective that views physical mobility as a constraint on communication, communication as a constraint on decentralized mobility control, and the uncertain environment as a constraint on both. Instead, the approach taken here proactively exploits these constraints, the uncertainty, and new information structures to enable high-performance designs. The features of the co-design, such as scalability, fast response, trackability, and robustness to uncertainty, advance the core CPS science on decision-making for large-scale networks under uncertainty.
The technological advances developed in this research will contribute to multiple fields, including mobile networking, decentralized control, experiment design, and general real-time decision making under uncertainty for CPS. Technology transfer will be pursued through close collaboration with industries and national laboratories. This novel research direction will also serve as a unique backdrop to inspire the CPS workforce. New teaching materials will benefit the future CPS workforce by equipping them with a knowledge base in networking and control. Broad outreach and dissemination activities that involve undergraduate student societies, K-12 school teaching, and public events, all stemming from the PI's current efforts, will be enhanced.
University of Texas at Arlington
National Science Foundation
Submitted by Yan Wan on October 3rd, 2017
Equipment operation represents one of the most dangerous tasks on a construction site, and accidents related to such operations often result in death and property damage on the construction site and in the surrounding area. Such accidents can also cause considerable delays and disruption, and negatively impact the efficiency of operations. This award will conduct research to improve the safety and efficiency of cranes by integrating advances in robotics, computer vision, and construction management. It will create tools for quick and easy planning of crane operations and incorporate them into a safe and efficient system that can monitor a crane's environment and provide control feedback to the crane and the operator. Resulting gains in safety and efficiency will reduce fatal and non-fatal crane accidents. Partnerships with industry will also ensure that these advances have a positive impact on construction practice and can be extended broadly to smart infrastructure, intelligent manufacturing, surveillance, traffic monitoring, and other application areas. The research will involve undergraduates and will include outreach to K-12 students.
The work is driven by the hypothesis that the monitoring and control of cranes can be performed autonomously using robotics and computer vision algorithms, and that detailed and continuous monitoring and control feedback can lead to improved planning and simulation of equipment operations. It will particularly focus on developing methods for (a) planning construction operations while accounting for safety hazards through simulation; (b) estimating and providing analytics on the state of the equipment; (c) monitoring equipment surrounding the crane operating environment, including detection of safety hazards, and proximity analysis to dynamic resources including materials, equipment, and workers; (d) controlling crane stability in real-time; and (e) providing feedback to the user and equipment operators in a "transparent cockpit" using visual and haptic cues. It will address the underlying research challenges by improving the efficiency and reliability of planning through failure effects analysis and creating methods for contact state estimation and equilibrium analysis; improving monitoring through model-driven and real-time 3D reconstruction techniques, context-driven object recognition, and forecasting motion trajectories of objects; enhancing reliability of control through dynamic crane models, measures of instability, and algorithms for finding optimal controls; and, finally, improving efficiency of feedback loops through methods for providing visual and haptic cues.
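As a rough sketch of the proximity analysis in item (c) above (not the project's implementation), the snippet below flags any tracked worker or resource within a safety radius of the suspended load, the kind of check that would drive the visual and haptic cues of item (e). The positions, radius, and object names are hypothetical.

```python
# Hypothetical proximity check around a crane's suspended load.
# Positions would come from the vision/3D-reconstruction pipeline; here they
# are hard-coded placeholders.
import numpy as np

SAFETY_RADIUS_M = 5.0

def proximity_alerts(load_pos, tracked, radius=SAFETY_RADIUS_M):
    """Return ids of tracked objects closer to the load than `radius` (meters)."""
    alerts = []
    for obj_id, pos in tracked.items():
        if np.linalg.norm(np.asarray(pos) - np.asarray(load_pos)) < radius:
            alerts.append(obj_id)
    return alerts

load = (12.0, 7.5, 18.0)                      # crane hook/load position (m)
tracked = {"worker_3": (14.0, 8.0, 2.0),      # far below the load, not flagged
           "worker_7": (13.0, 9.0, 16.5),     # inside the safety envelope
           "forklift_1": (30.0, 2.0, 0.0)}

for obj in proximity_alerts(load, tracked):
    # In the envisioned system this would drive the operator's visual/haptic cues.
    print(f"WARNING: {obj} within {SAFETY_RADIUS_M} m of suspended load")
```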
University of Florida
National Science Foundation