Biblio

Filters: Keyword is automotive
2020-10-19
Hasan, Khondokar Fida, Kaur, Tarandeep, Hasan, Md. Mhedi, Feng, Yanming.  2019.  Cognitive Internet of Vehicles: Motivation, Layered Architecture and Security Issues. 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI). :1–6.
Over the past few years, great technological advances in the information and communication field have significantly reshaped the Intelligent Transportation System (ITS) concept. Evolving from a platform of sensors that merely collect data, the data exchange paradigm among vehicles has shifted from the local network to the cloud. With the introduction of cloud and edge computing alongside the ubiquitous 5G mobile network, Artificial Intelligence (AI) is expected to play an imminent role in data processing and smart decision making. To fully understand the future automotive scenario on the verge of Industry 4.0, it is first necessary to gain a clear understanding of the cutting-edge technologies that will take shape in the automotive ecosystem, so that their cyber-physical impact on the transportation system can be measured. The Cognitive Internet of Vehicles (CIoV) is one of the recently proposed architectures for this technological evolution in transportation, and it has attracted great attention. It introduces cloud-based artificial intelligence and machine learning into the transportation system. What are the future expectations of CIoV? To fully contemplate this architecture's potential and the milestones it is set to achieve, it is crucial to understand the technologies that underpin it, as well as the security issues that must be addressed to meet the requirements of its practical implementation. To that end, this paper presents the evolution of CIoV along with layer abstractions that outline the distinctive functional parts of the proposed architecture. It also investigates the principal security and privacy issues associated with this technological evolution so that countermeasures can be taken.
2020-02-10
Chechik, Marsha.  2019.  Uncertain Requirements, Assurance and Machine Learning. 2019 IEEE 27th International Requirements Engineering Conference (RE). :2–3.
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the one hand, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other, safety assessment and assurance of such components becomes very challenging. In this talk, I focus on two specific approaches to arguing about safety and security of software under uncertainty.
The first is a framework for managing uncertainty in assurance cases (for both "conventional" and "machine-learned" systems) by systematically identifying, assessing and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.
2018-08-23
Cheah, M., Bryans, J., Fowler, D. S., Shaikh, S. A..  2017.  Threat Intelligence for Bluetooth-Enabled Systems with Automotive Applications: An Empirical Study. 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :36–43.
Modern vehicles are opening up, with wireless interfaces such as Bluetooth integrated in order to enable comfort and safety features. Furthermore, a plethora of aftermarket devices introduces additional connectivity that contributes to the driving experience. This connectivity opens the vehicle to potentially malicious attacks, which could have negative consequences for safety. In this paper, we survey vehicles with Bluetooth connectivity from a threat intelligence perspective to gain insight into conditions during real-world driving. We do this in two ways: firstly, by examining Bluetooth implementations in vehicles and gathering information from inside the cabin, and secondly, by war-nibbling (general monitoring of, and scanning for, nearby devices). We find that as vehicle age decreases, the security of the Bluetooth implementation increases, relatively speaking, but that there is still some technological lag in Bluetooth implementations in vehicles. We also find that a large proportion of vehicles and aftermarket devices still use legacy pairing (and are therefore less secure), and that these vehicles remain visible for sufficient time to mount an attack (assuming some premeditation and preparation). We demonstrate a real-world threat scenario as an example of the latter. Finally, we provide some recommendations on how the security risks we discover could be mitigated.