Biblio

Filters: Keyword is self-driving vehicles
Cai, Feiyang, Li, Jiani, Koutsoukos, Xenofon.  2020.  Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression. 2020 IEEE Security and Privacy Workshops (SPW). :208–214.

Learning-enabled components (LECs) are widely used in cyber-physical systems (CPS) because they can handle the uncertainty and variability of the environment and increase the level of autonomy. However, it has been shown that LECs such as deep neural networks (DNNs) are not robust, and adversarial examples can cause the model to make a false prediction. This paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS. The proposed approach is based on inductive conformal prediction and uses a regression model built on a variational autoencoder. The architecture allows both the input and the neural network prediction to be taken into account when detecting adversarial and, more generally, out-of-distribution examples. We demonstrate the method using an advanced emergency braking system implemented in an open-source simulator for self-driving cars, where a DNN is used to estimate the distance to an obstacle. The simulation results show that the method can effectively detect adversarial examples with a short detection delay.
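The inductive conformal prediction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the VAE is omitted, and the calibration scores (which would be VAE-based nonconformity scores such as reconstruction errors in the paper) are simulated numbers chosen for the example.

```python
import numpy as np

def conformal_p_value(calib_scores, test_score):
    """Inductive conformal p-value: the fraction of calibration
    nonconformity scores at least as large as the test score."""
    n = len(calib_scores)
    return (np.sum(calib_scores >= test_score) + 1) / (n + 1)

rng = np.random.default_rng(0)
# Stand-in nonconformity scores (in the paper these would come from
# the VAE, e.g. reconstruction errors on held-out in-distribution
# calibration data).
calib = np.abs(rng.normal(1.0, 0.2, size=200))

# A small p-value flags the input as adversarial / out-of-distribution.
p_in = conformal_p_value(calib, 1.0)   # typical score
p_out = conformal_p_value(calib, 5.0)  # anomalous score
print(f"in-distribution p={p_in:.3f}, anomalous p={p_out:.3f}")
```

A detector would compare each p-value against a chosen significance level; accumulating p-values over consecutive inputs is what allows detection with a short delay.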

Razin, Y. S., Feigh, K. M.  2020.  Hitting the Road: Exploring Human-Robot Trust for Self-Driving Vehicles. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–6.

With self-driving cars making their way onto our roads, we ask not what it would take for them to gain acceptance among consumers, but what impact they may have on other drivers. How they are perceived and whether they are trusted will likely have a major effect on traffic flow and vehicular safety. This work first undertakes an exploratory factor analysis to validate a trust scale for human-robot interaction, showing how previously validated metrics and general trust theory support a more complete model of trust with increased applicability in the driving domain. We then test this expanded model experimentally in the context of human-automation interaction during simulated driving, revealing how these dimensions uncover significant biases within human-robot trust that may prove particularly deleterious when it comes to sharing our future roads with automated vehicles.
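As a loose illustration of the exploratory-factor-analysis step, the sketch below fits a two-factor model to simulated survey responses with scikit-learn. The two latent trust dimensions, the six items, and all loadings are invented for the example and do not correspond to the authors' actual scale or data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_respondents = 300

# Two hypothetical latent trust dimensions generating six survey items
# (e.g. three performance-related and three relational items).
latent = rng.normal(size=(n_respondents, 2))
true_loadings = np.array([
    [0.9, 0.1], [0.8, 0.2], [0.7, 0.0],   # items loading on factor 1
    [0.1, 0.9], [0.2, 0.8], [0.0, 0.7],   # items loading on factor 2
])
responses = latent @ true_loadings.T + rng.normal(scale=0.3,
                                                  size=(n_respondents, 6))

# Fit a two-factor model with varimax rotation, as is common in
# exploratory factor analysis of survey scales.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(responses)
print(fa.components_.shape)  # (n_factors, n_items): estimated loadings
```

In a real validation study, one would inspect the rotated loading matrix to check that items cluster on the intended factors before treating the scale as measuring distinct trust dimensions.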

Alheeti, K. M. A., McDonald-Maier, K.  2017.  An intelligent security system for autonomous cars based on infrared sensors. 2017 23rd International Conference on Automation and Computing (ICAC). :1–5.

Safety and non-safety applications in the external communication systems of self-driving vehicles require authentication of control data, cooperative awareness messages and notification messages. Traditional security systems can prevent attackers from hacking or breaking important system functionality in autonomous vehicles. This paper presents a novel security system, based on Integrated Circuit Metrics technology (ICMetrics), designed to protect vehicular ad hoc networks in self-driving and semi-autonomous vehicles. ICMetrics can secure the communication systems of autonomous vehicles using features of the vehicle system itself. The proposed security system is based on unique features extracted from the vehicle's behaviour and its sensors. Specifically, features are extracted from the bias values of infrared sensors and used alongside semantically extracted information from a trace file of a simulated vehicular ad hoc network. The practical experimental implementation and evaluation of this system demonstrate its efficiency in identifying abnormal/malicious behaviour typical of an attack.
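The abstract does not spell out the ICMetrics construction, but the general idea of deriving a stable device identity from device-specific sensor characteristics can be sketched roughly as follows. The quantisation scheme, the simulated infrared-sensor bias distributions, and the function name are assumptions made for illustration; they are not the actual ICMetrics algorithm.

```python
import hashlib
import numpy as np

def icmetrics_style_id(bias_samples, bins=8, value_range=(-1.0, 1.0)):
    """Quantise sensor bias readings into a coarse histogram and hash
    it into a stable identifier: a loose illustration of turning
    device-specific features into an identity, not real ICMetrics."""
    hist, _ = np.histogram(bias_samples, bins=bins, range=value_range)
    # Coarse thresholding so small sampling noise maps to the same
    # feature vector on repeated measurements.
    feature = (hist > len(bias_samples) / (2 * bins)).astype(np.int64)
    return hashlib.sha256(feature.tobytes()).hexdigest()

rng = np.random.default_rng(1)
# Two hypothetical vehicles with different characteristic sensor biases.
vehicle_a = rng.normal(0.15, 0.02, size=1000)
vehicle_b = rng.normal(-0.30, 0.02, size=1000)

id_a1 = icmetrics_style_id(vehicle_a)
# Re-measuring the same vehicle with a little extra noise should
# reproduce the identifier; a different vehicle should not.
id_a2 = icmetrics_style_id(vehicle_a + rng.normal(0, 0.001, size=1000))
id_b = icmetrics_style_id(vehicle_b)
print(id_a1 == id_a2, id_a1 == id_b)
```

The design point is that the identifier is regenerated from measurements on demand rather than stored, so there is no key material to steal from the vehicle.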