Synergy: Anytime Visual Scene Understanding for Heterogeneous and Distributed CPS


Despite many advances in vehicle automation, much remains to be done: the best autonomous vehicle today still lags behind human drivers, and connected vehicle (V2V) and infrastructure (V2I) standards are only just emerging. In order for such cyber-physical systems to fully realize their potential, they must be capable of exploiting one of the richest and most complex abilities of humans, which we take for granted: seeing and understanding the visual world. If automated vehicles had this ability, they could drive more intelligently and share information about road and environment conditions, events, and anomalies to improve situational awareness and safety for other automated vehicles as well as human drivers. That is the goal of this project: to achieve a synergy between computer vision, machine learning, and cyber-physical systems that leads to a safer, cheaper, and smarter transportation sector, with potential applications to other sectors including agriculture, food quality control, and environmental monitoring.

To achieve this goal, this project brings together expertise in computer vision, sensing, embedded computing, machine learning, big data analytics, and sensor networks to develop an integrated edge-cloud architecture for (1) "anytime scene understanding" to unify diverse scene understanding methods in computer vision, and (2) "cooperative scene understanding" that leverages vehicle-to-vehicle and vehicle-to-infrastructure protocols to coordinate with multiple systems, while (3) emphasizing how security and privacy should be managed at scale without impacting overall quality-of-service. This architecture can be used for autonomous driving and driver-assist systems, and can be embedded within infrastructure (digital signs, traffic lights) to avoid traffic congestion, reduce the risk of pile-ups, and improve situational awareness. Validation and transition of the research to practice occur through integration within City of Pittsburgh public works department vehicles, Carnegie Mellon University NAVLAB autonomous vehicles, and the smart road infrastructure corridor under development in Pittsburgh. The project also includes activities to foster development of a new cyber-physical systems workforce, through involvement of students in the research, co-taught multidisciplinary courses, and co-organized workshops.
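To make the "anytime" notion concrete, the sketch below illustrates one common way such systems are structured: run progressively more expensive analysis stages within a time budget and always return the best result produced so far. This is a minimal, hypothetical illustration of the general technique; the stage names, `Result` structure, and fidelity scores are assumptions for the example, not details of the project's actual architecture.

```python
# Minimal sketch of an anytime inference loop (illustrative only).
import time
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Result:
    labels: dict      # hypothetical per-class confidences, e.g. {"road": 0.9}
    fidelity: float   # quality estimate in [0, 1]; higher = more complete

# Three stand-in stages of increasing cost and quality (placeholders).
def coarse_pass(frame) -> Result:
    return Result({"road": 0.8}, fidelity=0.3)

def detection_pass(frame) -> Result:
    return Result({"road": 0.9, "pedestrian": 0.7}, fidelity=0.6)

def full_segmentation(frame) -> Result:
    return Result({"road": 0.95, "pedestrian": 0.85, "sign": 0.8}, fidelity=0.9)

def anytime_understand(frame, deadline_s: float,
                       stages: List[Callable]) -> Optional[Result]:
    """Return the highest-fidelity result obtainable before the deadline."""
    start = time.monotonic()
    best: Optional[Result] = None
    for stage in stages:
        if time.monotonic() - start >= deadline_s:
            break  # budget exhausted: return whatever we have so far
        result = stage(frame)
        if best is None or result.fidelity > best.fidelity:
            best = result
    return best

# Usage: a tight 50 ms budget on a slow edge node may only permit the
# coarse pass; a cloud node with more headroom refines further.
frame = object()  # placeholder for a camera frame
print(anytime_understand(frame, 0.050,
                         [coarse_pass, detection_pass, full_segmentation]))
```

The design point this captures is that the loop degrades gracefully: a result of some fidelity is always available at the deadline, rather than the all-or-nothing behavior of a single monolithic model.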

Explanation of Demonstration: RoadBotics uses deep-learning-based machine vision, built on technology developed at and licensed from Carnegie Mellon University, to identify surface distresses and other roadway features (fire hydrants, signage, license plates, etc.). RoadBotics collects video data during normal driving, uploads it to our cloud platform for analysis, and then presents the results to our customers in a GIS platform. Our domestic and international customers and prospects have been using our product to develop more robust planning and maintenance programs for their infrastructure.
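As a rough illustration of the collect-upload-present flow described above, the sketch below shows one plausible shape for such a pipeline: a segment of driving video plus its GPS track is posted to a cloud ingestion endpoint, and detections come back as GeoJSON so a GIS platform can render them on a map. Every endpoint, function name, and field here is a hypothetical stand-in; none of this is the RoadBotics API.

```python
# Illustrative-only sketch of a video -> cloud -> GIS pipeline.
import json
import urllib.request

def upload_segment(video_path: str, gps_track: list, api_url: str) -> None:
    """Send one recorded driving segment (video reference + GPS track)
    to a hypothetical cloud ingestion endpoint."""
    payload = json.dumps({"video": video_path, "gps": gps_track}).encode()
    req = urllib.request.Request(api_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # assumed endpoint; not a real service

def to_geojson(detections: list) -> dict:
    """Convert detections (lat, lon, label, score) into a GeoJSON
    FeatureCollection, the interchange format most GIS platforms accept."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                # GeoJSON coordinate order is [longitude, latitude].
                "geometry": {"type": "Point",
                             "coordinates": [d["lon"], d["lat"]]},
                "properties": {"label": d["label"], "score": d["score"]},
            }
            for d in detections
        ],
    }

# Example: a single pavement-crack detection rendered as one map point.
print(json.dumps(to_geojson(
    [{"lat": 40.4406, "lon": -79.9959, "label": "crack", "score": 0.92}]),
    indent=2))
```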

License: CC-2.5
Submitted by Srinivasa Narasimhan