Doing More with Less: Cost-Effective Infrastructure for Automotive Vision Capabilities


Many safety-critical cyber-physical systems rely on advanced sensing capabilities to react to changing environmental conditions. However, cost-effective deployments of such capabilities have remained elusive. Such deployments will require software infrastructure that enables multiple sensor-processing streams to be multiplexed onto a common hardware platform at reasonable cost, as well as tools and methods for validating that required processing rates can be maintained. Currently, advanced driver assistance system (ADAS) capabilities have been implemented only in prototype vehicles, using hardware, software, and engineering infrastructure that is very expensive. Prototype hardware commonly includes multiple high-end CPU and GPU chips and expensive LIDAR sensors.

Focusing directly on judicious resource allocation, this project seeks to enable more economically viable implementations that reduce system cost by using cameras in combination with low-cost embedded multicore CPU+GPU platforms. The project focuses on three principal objectives: (1) new implementation methods for multiplexing disparate image-processing streams on embedded multicore platforms augmented with GPUs; (2) new analysis methods for certifying required stream-processing rates; and (3) new computer-vision (CV) methods for constructing image-processing pipelines.

Addressing the second objective, we have developed a method for analyzing commonly used CV dataflow graphs on heterogeneous architectures, based on the standardized CV API OpenVX. OpenVX allows users to specify CV algorithms as computational graphs, with nodes representing commonly used CV operations and edges representing data dependencies. However, such graphs are difficult to analyze under current real-time task models, as the OpenVX specification lacks a task-based model, defined threading semantics, and pipelining support.
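To make the graph model concrete: OpenVX itself is a C API, but the idea of a CV algorithm expressed as a dataflow graph can be sketched in a few lines of Python. This is an illustrative toy model, not the project's implementation or the OpenVX API: nodes hold operations, edges are named data dependencies, and the graph is evaluated in dependency order on a 1-D "image".

```python
class Node:
    """One operation in the dataflow graph, with named input edges."""
    def __init__(self, name, op, inputs):
        self.name, self.op, self.inputs = name, op, inputs

class Graph:
    """Toy dataflow graph: nodes are CV operations, edges are data dependencies."""
    def __init__(self):
        self.nodes = []

    def add(self, name, op, *inputs):
        node = Node(name, op, inputs)
        self.nodes.append(node)
        return node

    def process(self, **sources):
        # Nodes were added in dependency order, so one pass suffices;
        # a real implementation would topologically sort the graph first.
        values = dict(sources)
        for n in self.nodes:
            values[n.name] = n.op(*(values[i] for i in n.inputs))
        return values

# Example: a two-node pipeline, blur then threshold, on a 1-D "image".
g = Graph()
g.add("blur", lambda img: [(a + b) // 2 for a, b in zip(img, img[1:] + img[-1:])], "input")
g.add("edges", lambda img: [1 if p > 4 else 0 for p in img], "blur")
out = g.process(input=[0, 2, 8, 9, 1])
```

The point of the graph form is that data dependencies are explicit, which is what makes pipelined, multi-frame execution and response-time analysis possible at all.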
We have developed an OpenVX implementation, extending an existing implementation by NVIDIA, that overcomes these problems and establishes a real-time analysis framework for guaranteeing bounded end-to-end response times under pipelined graph execution. GPU access is managed using the GPUSync GPU management framework.

Addressing the third objective, we have begun investigating deep-learning approaches to automotive CV tasks. We are currently exploring object-detection pipelines, which are being developed for use in future ADAS systems. Deep convolutional neural networks (CNNs) have achieved state-of-the-art detection performance with less computational complexity than current sliding-window approaches. CNN-based detection pipelines consist of two main parts: region proposal and classification. We are analyzing various approaches to both stages, weighing tradeoffs between accuracy and computational complexity.
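The benefit of the pipelined graph execution mentioned above can be illustrated with a toy simulation (this is a simplification for intuition, not the project's analysis framework). Assume an S-stage graph where each stage has worst-case cost C and a new frame is released every `period` time units; stage s of frame k can start only when frame k has cleared stage s-1 and stage s has finished frame k-1. Even when the total per-frame work S*C exceeds the frame period, pipelining keeps each frame's end-to-end response time bounded at S*C.

```python
def pipelined_response(num_frames, stages, cost, period):
    """Simulate a simple S-stage pipeline and return per-frame response times.

    Stage s of frame k starts once frame k finishes stage s-1 AND
    stage s finishes frame k-1 (each stage processes one frame at a time).
    """
    done = [[0.0] * stages for _ in range(num_frames)]
    for k in range(num_frames):
        for s in range(stages):
            prev_frame = done[k - 1][s] if k else 0.0       # stage s busy until here
            prev_stage = done[k][s - 1] if s else k * period  # s == 0 waits for release
            done[k][s] = max(prev_frame, prev_stage) + cost
    return [done[k][-1] - k * period for k in range(num_frames)]

# Hypothetical numbers: a 4-stage graph, 10 ms worst case per stage, one
# frame released every 10 ms. Total per-frame work is 40 ms > 10 ms period,
# yet every frame's response time stays bounded at stages * cost = 40 ms.
resp = pipelined_response(num_frames=100, stages=4, cost=10, period=10)
```

Without pipelining, a 40 ms job arriving every 10 ms would back up without bound; the simulation shows why pipelined execution is what makes bounded response-time guarantees achievable at camera frame rates.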
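The two-part structure of CNN-based detection, and its accuracy/cost tradeoff, can also be sketched schematically. The snippet below is purely illustrative (the proposal and classification stands-ins are hypothetical, not CNNs): fewer proposals mean less classification work per frame, but raise the chance of missing an object.

```python
def propose_regions(image, max_proposals):
    # Stand-in proposal stage: pick the max_proposals brightest pixels of a
    # 1-D "image" as candidate regions (a real pipeline would use, e.g., a
    # region proposal network producing bounding boxes).
    coords = sorted(range(len(image)), key=lambda i: image[i], reverse=True)
    return coords[:max_proposals]

def classify(image, region):
    # Stand-in classification stage: label a region by a fixed threshold
    # (a real pipeline would run a CNN classifier on each proposed region).
    return "object" if image[region] > 5 else "background"

def detect(image, max_proposals):
    # Two-stage detection: propose regions, then classify each one.
    # max_proposals is the accuracy/cost knob: classification cost scales
    # linearly with it, while recall can only drop as it shrinks.
    return {r: classify(image, r) for r in propose_regions(image, max_proposals)}

dets = detect([1, 9, 3, 7, 2, 8], max_proposals=2)
```

On an embedded CPU+GPU platform, the number of proposals passed to the classifier is one of the main levers for trading detection accuracy against computational cost.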

Creative Commons 2.5
