Biblio

Filters: Keyword is automotive
2017
Cheah, M., Bryans, J., Fowler, D. S., Shaikh, S. A.  2017.  Threat Intelligence for Bluetooth-Enabled Systems with Automotive Applications: An Empirical Study. 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :36–43.

Modern vehicles are opening up, with wireless interfaces such as Bluetooth integrated in order to enable comfort and safety features. Furthermore, a plethora of aftermarket devices introduce additional connectivity which contributes to the driving experience. This connectivity opens the vehicle to potentially malicious attacks, which could have negative consequences for safety. In this paper, we survey vehicles with Bluetooth connectivity from a threat intelligence perspective to gain insight into conditions during real-world driving. We do this in two ways: firstly, by examining Bluetooth implementations in vehicles and gathering information from inside the cabin, and secondly, by war-nibbling (general monitoring and scanning for nearby devices). We find that as vehicle age decreases, the security (relatively speaking) of the Bluetooth implementation increases, but that there is still some technological lag in Bluetooth implementation in vehicles. We also find that a large proportion of vehicles and aftermarket devices still use legacy pairing (and are therefore less secure), and that these vehicles remain visible for sufficient time to mount an attack (assuming some premeditation and preparation). We demonstrate a real-world threat scenario as an example of the latter. Finally, we provide some recommendations on how the security risks we discover could be mitigated.
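
The war-nibbling survey described in this abstract amounts to repeatedly scanning for discoverable Bluetooth devices and logging their addresses, names, and visibility windows. The following is a minimal Python sketch of such a scan, not the authors' tooling; it assumes the PyBluez library and a host Bluetooth adapter, and the round count, scan duration, and log format are illustrative choices.

import time
import bluetooth  # PyBluez

def war_nibble(scan_duration=8, rounds=3, pause=5):
    """Repeatedly scan for discoverable Bluetooth devices and log sightings."""
    sightings = {}  # address -> {"name": ..., "first_seen": ..., "last_seen": ...}
    for _ in range(rounds):
        now = time.time()
        # Inquiry scan: returns (address, name) pairs for discoverable devices.
        for addr, name in bluetooth.discover_devices(duration=scan_duration,
                                                     lookup_names=True):
            entry = sightings.setdefault(addr, {"name": name, "first_seen": now})
            entry["last_seen"] = now
        time.sleep(pause)
    return sightings

if __name__ == "__main__":
    for addr, info in war_nibble().items():
        visible_for = info["last_seen"] - info["first_seen"]
        print(f"{addr}  {info['name']!r}  visible for ~{visible_for:.0f}s")

Tracking first-seen and last-seen timestamps per address is one simple way to estimate whether a device stays visible long enough for the kind of premeditated attack the paper discusses.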

2019
Chechik, Marsha.  2019.  Uncertain Requirements, Assurance and Machine Learning. 2019 IEEE 27th International Requirements Engineering Conference (RE). :2–3.
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without a specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the surface, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other hand, safety assessment and assurance of such components becomes very challenging. In this talk, I focus on two specific approaches to arguing about safety and security of software under uncertainty. The first is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.
Hasan, Khondokar Fida, Kaur, Tarandeep, Hasan, Md. Mhedi, Feng, Yanming.  2019.  Cognitive Internet of Vehicles: Motivation, Layered Architecture and Security Issues. 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI). :1–6.
Over the past few years, we have experienced great technological advancements in the information and communication field, which have significantly contributed to reshaping the Intelligent Transportation System (ITS) concept. Evolving from a platform of sensors collecting data, the data exchange paradigm among vehicles has shifted from the local network to the cloud. With the introduction of cloud and edge computing, along with ubiquitous 5G mobile networks, the role of Artificial Intelligence (AI) in data processing and smart decision making is expected to become central. To fully understand the future automotive scenario on the verge of Industry 4.0, it is first necessary to gain a clear understanding of the cutting-edge technologies taking their place in the automotive ecosystem, so that their cyber-physical impact on the transportation system can be measured. The Cognitive Internet of Vehicles (CIoV) is one of the recently proposed architectures for this technological evolution in transportation, and it has attracted great attention. It introduces cloud-based artificial intelligence and machine learning into the transportation system. What are the future expectations of CIoV? To fully contemplate this architecture's future potential and the milestones it is set to achieve, it is crucial to understand all the technologies that feed into it, as well as the security issues that must be addressed for its practical implementation. To that end, this paper presents the evolution of CIoV along with layer abstractions to outline the distinctive functional parts of the proposed architecture. It also gives an investigation of the prime security and privacy issues associated with this technological evolution, so that measures can be taken.
2020
Li, Yuekang, Chen, Hongxu, Zhang, Cen, Xiong, Siyang, Liu, Chaoyi, Wang, Yi.  2020.  Ori: A Greybox Fuzzer for SOME/IP Protocols in Automotive Ethernet. 2020 27th Asia-Pacific Software Engineering Conference (APSEC). :495–499.
With the emergence of smart automotive devices, the data communication between these devices gains increasing importance. SOME/IP is a lightweight protocol to facilitate inter-process/device communication, which supports both procedural calls and event notifications. Because of its simplicity and capability, SOME/IP is being adopted by more and more automotive devices. Consequently, the security of SOME/IP applications becomes crucial. However, previous security testing techniques do not fit the scenario of vulnerability detection in SOME/IP applications due to miscellaneous challenges, such as the difficulty of testing server-side programs in parallel. By addressing these challenges, we propose Ori, a greybox fuzzer for SOME/IP applications, which features two key innovations: the attach fuzzing mode and structural mutation. The attach fuzzing mode enables Ori to test server programs efficiently, and the structural mutation allows Ori to generate valid SOME/IP packets that reach deep paths of the target program effectively. Our evaluation shows that Ori can detect vulnerabilities in SOME/IP applications effectively and efficiently.
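
The structural mutation described in the abstract operates on the SOME/IP wire format rather than on raw bytes, so mutated inputs still parse as valid packets. The sketch below is not Ori's code; it is a minimal Python illustration, based on the public SOME/IP header layout, of mutating a payload while recomputing the Length field so the packet stays well-formed. The field values and the bit-flip mutation policy are arbitrary examples.

import random
import struct

def build_someip(service_id, method_id, client_id, session_id,
                 msg_type, return_code, payload,
                 proto_version=0x01, iface_version=0x01):
    """Pack a SOME/IP packet; Length covers Request ID through payload (8 + len(payload))."""
    length = 8 + len(payload)
    header = struct.pack(">HHIHHBBBB",
                         service_id, method_id,        # Message ID
                         length,                       # Length
                         client_id, session_id,        # Request ID
                         proto_version, iface_version,
                         msg_type, return_code)
    return header + payload

def mutate_payload(payload, max_flips=4):
    """Mutate payload bytes only; the caller re-packs so the header stays consistent."""
    data = bytearray(payload)
    for _ in range(random.randint(1, max_flips)):
        if data:
            data[random.randrange(len(data))] ^= 1 << random.randrange(8)
        else:
            data.append(random.randrange(256))
    return bytes(data)

# Example: generate a structurally valid mutant of a REQUEST (message type 0x00) packet.
seed_payload = b"\x00\x01\x02\x03"
mutant = build_someip(0x1234, 0x0001, 0x0001, 0x0001,
                      0x00, 0x00, mutate_payload(seed_payload))

Keeping the header fields and the Length consistent is what lets a fuzzer of this kind get past the protocol parser and exercise deeper application logic, which is the point of structure-aware mutation.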
Sami, Muhammad, Ibarra, Matthew, Esparza, Anamaria C., Al-Jufout, Saleh, Aliasgari, Mehrdad, Mozumdar, Mohammad.  2020.  Rapid, Multi-vehicle and Feed-forward Neural Network based Intrusion Detection System for Controller Area Network Bus. 2020 IEEE Green Energy and Smart Systems Conference (IGESSC). :1–6.
In this paper, an Intrusion Detection System (IDS), NESLIDS, is proposed for the Controller Area Network (CAN) bus of modern vehicles. NESLIDS is an anomaly detection algorithm based on a supervised Deep Neural Network (DNN) architecture that is designed to counter three critical attack categories: denial-of-service (DoS), fuzzy, and impersonation attacks. Our research scope included modifying DNN parameters, e.g. the number of hidden-layer neurons, batch size, and activation functions, according to how well they maximized detection accuracy and minimized the false positive rate (FPR) for these attacks. Our methodology consisted of collecting CAN bus data from online sources and in real time, injecting attack data after data collection, preprocessing in Python, training the DNN, and testing the model with different datasets. Results show that the proposed IDS effectively detects all attack types for both types of datasets. NESLIDS outperforms existing approaches in terms of accuracy, scalability, and low false alarm rates.
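
The abstract describes tuning a supervised feed-forward DNN (hidden-layer width, batch size, activation function) to classify CAN traffic into normal, DoS, fuzzy, and impersonation classes. Below is a minimal Keras sketch of such a classifier, offered as an assumption of what this kind of architecture could look like rather than the paper's model; the two-hidden-layer shape, the 11-value per-frame feature vector (CAN ID, DLC, 8 data bytes, inter-arrival time), and all hyperparameters are illustrative.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_FEATURES = 11   # assumed per-frame features: CAN ID, DLC, 8 data bytes, inter-arrival time
NUM_CLASSES = 4     # normal, DoS, fuzzy, impersonation

def build_ids(hidden_units=64, activation="relu"):
    """Small feed-forward classifier over per-frame CAN features."""
    model = keras.Sequential([
        keras.Input(shape=(NUM_FEATURES,)),
        layers.Dense(hidden_units, activation=activation),
        layers.Dense(hidden_units, activation=activation),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Stand-in data; a real run would use labelled CAN frames with injected attack traffic.
    X = np.random.rand(1000, NUM_FEATURES).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=1000)
    model = build_ids()
    model.fit(X, y, batch_size=64, epochs=3, validation_split=0.2, verbose=0)
    print("accuracy on the stand-in data:", model.evaluate(X, y, verbose=0)[1])

Sweeping hidden_units, batch_size, and activation over a grid while tracking accuracy and false positive rate per attack class mirrors the parameter-tuning loop the abstract outlines.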