Blockchain, as an emerging distributed database, effectively addresses two problems of centralized IoT data storage: storage capacity that cannot match the explosive growth in devices and data scale, and the data privacy and security concerns that arise from centralized data management. To relieve the pressure of single-point storage and ensure data security, a blockchain data storage method based on erasure codes is proposed. The method constructs mathematical functions describing the data to split the original block data into multiple fragments and add redundant slices. These fragments are then encoded and stored in different locations using a circular hash space with virtual nodes, which balances load among nodes, reduces the chance that a single node stores too many encoded data blocks, and effectively improves the storage space utilization of the distributed storage database. The method also stores digest information about the encoded data, such as storage location, creation time, and hashes, so that the origin of encoded data blocks can be traced. In the event of accidental loss or malicious tampering, this enables effective recovery and ensures the integrity and availability of data in the network. Experimental results indicate that, compared with traditional blockchain approaches, this method effectively reduces the storage pressure on nodes and exhibits a degree of disaster recovery capability.
Authored by Fanyao Meng, Jin Li, Jiaqi Gao, Junjie Liu, Junpeng Ru, Yueming Lu
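The placement step described in the abstract above, a circular hash space with virtual nodes so that encoded blocks spread evenly across storage nodes, can be sketched as follows. This is a minimal illustration; the node names, virtual-node count, and shard IDs are invented for the example and are not taken from the paper.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Circular hash space with virtual nodes for placing encoded blocks."""

    def __init__(self, nodes, vnodes=200):
        self.ring = {}  # ring position -> physical node
        for node in nodes:
            for i in range(vnodes):
                self.ring[self._hash(f"{node}#vn{i}")] = node
        self.positions = sorted(self.ring)

    @staticmethod
    def _hash(key):
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def place(self, shard_id):
        """Assign a shard to the first virtual node clockwise from its hash."""
        idx = bisect.bisect(self.positions, self._hash(shard_id)) % len(self.positions)
        return self.ring[self.positions[idx]]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
counts = {}
for i in range(1000):
    owner = ring.place(f"shard-{i}")
    counts[owner] = counts.get(owner, 0) + 1
```

Because each physical node appears at many ring positions, the 1000 shards land on all three nodes in roughly equal numbers, which is the load-balancing effect the abstract attributes to virtual nodes.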
An IC used in a safety-critical application such as automotive often requires a lifetime of more than 10 years. Previously, stress testing has been used to establish an accelerated aging model for an IC product under harsh operating conditions; the accelerated aging model is then time-stretched to predict the IC's normal lifetime. However, such a long-stretching prediction may not be very trustworthy. In this work, we present a more refined method that provides higher credibility in IC lifetime prediction. We streamline a progressive lifetime prediction method with two phases: a training phase and an inference phase. During the training phase, we collect the aging histories of training devices under various stress levels. During the inference phase, extrapolation is performed in the “stressed lifetime” versus “stress level” space, leading to a more trustworthy prediction of the lifetime.
Authored by Chen-Lin Tsai, Shi-Yu Huang
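The extrapolation idea above (fit the stressed-lifetime versus stress-level relationship, then evaluate it at the nominal stress level) can be illustrated with a minimal least-squares sketch. The stress levels, lifetimes, and the log-linear lifetime model below are hypothetical stand-ins, not the paper's data or its actual aging model.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical (stress level, observed stressed lifetime in hours) pairs.
stress = [1.4, 1.3, 1.2, 1.1]            # e.g. normalised supply voltage
lifetime = [200.0, 450.0, 1000.0, 2200.0]

# Fit log-lifetime against stress, then extrapolate to nominal stress 1.0.
a, b = fit_line(stress, [math.log(t) for t in lifetime])
predicted_nominal = math.exp(a + b * 1.0)
```

Lifetime shrinks as stress grows (the fitted slope is negative), so evaluating the fit at the nominal stress level yields a lifetime estimate well beyond any stressed observation, which is exactly what the extrapolation in the inference phase produces.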
Physical fitness is a prime priority for people these days, as everyone wants to be healthy. A number of wearable devices are available that help humans monitor their vital body signs and obtain an average idea of their health. Advancements in the efficiency of healthcare systems have fueled the research and development of high-performance wearable devices. Portable healthcare systems have significant potential to lower healthcare costs and provide continuous health monitoring of critical patients in remote locations. The most pressing need in this field is developing a safe, effective, and trustworthy medical device that can reliably monitor vital signs from various human organs, or from the environment within or outside the body, through flexible sensors, while still allowing the patient to go about their normal day wearing a wearable or implanted medical device. This article highlights the current scenario of wearable devices and sensors for healthcare applications. Specifically, it focuses on some widely used commercially available wearable devices for continuously gauging patients' vital parameters and discusses the major factors driving the surge in demand for medical devices. Furthermore, it addresses the challenges and countermeasures of wearable devices in smart healthcare technology.
Authored by Kavery Verma, Preity Preity, Rakesh Ranjan
A fingerprint architecture based on a micro-electro-mechanical system (MEMS) for use as a hardware security component is presented. The MEMS serves as a physically unclonable function (PUF) and is used for fingerprint ID generation, derived from MEMS-specific parameters. The fingerprint is intended to allow unique identification of electronic components and thus to ensure protection against unauthorized replacement or manipulation. The MEMS chip consists of 16 individual varactors with continuously adjustable capacitance values that are used for bit derivation (an “analog” PUF). The focus is on the design-related forcing of random technological spread to provide a wide range of different parameters per chip or wafer and thus achieve a maximum key length. Key generation and verification are carried out via fingerprint electronics connected to the MEMS, realized by an FPGA.
Authored by Katja Meinel, Christian Schott, Franziska Mayer, Dhruv Gupta, Sebastian Mittag, Susann Hahn, Sebastian Weidlich, Daniel Bülz, Roman Forke, Karla Hiller, Ulrich Heinkel, Harald Kuhn
In the realm of Internet of Things (IoT) devices, trust management systems (TMS) have recently been enhanced through the use of diverse machine learning (ML) classifiers. The efficacy of training ML classifiers with pre-existing datasets to establish trustworthiness in IoT devices is constrained by the inadequacy of selecting suitable features. The current study employs a subset of the UNSW-NB15 dataset to compute additional features such as throughput, goodput, and packet loss. These features may be combined with the most discriminatory features to distinguish between trustworthy and non-trustworthy IoT networks. In addition, the transformed dataset undergoes filter-based and wrapper-based feature selection to remove irrelevant and redundant features. The classifiers are evaluated using diverse metrics, including accuracy, precision, recall, F1-score, true positive rate (TPR), and false positive rate (FPR), both with and without the application of feature selection. Finally, a comparative analysis of the ML models is performed, and the findings demonstrate that our model's efficacy surpasses that of the approaches in the existing literature.
Authored by Muhammad Aaqib, Aftab Ali, Liming Chen, Omar Nibouche
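The derived features named in the abstract above (throughput, goodput, packet loss) can be computed from per-flow counters roughly as follows. The field names are illustrative stand-ins, not the actual UNSW-NB15 column names, and the retransmission-based loss estimate is an assumption for the sketch.

```python
def derived_features(flow):
    """Derive throughput, goodput, and packet loss from one flow record."""
    dur = max(flow["dur"], 1e-9)  # guard against zero-duration flows
    # Throughput counts every byte on the wire in either direction.
    throughput = (flow["sbytes"] + flow["dbytes"]) / dur
    # Goodput counts only bytes that were not retransmitted.
    goodput = (flow["sbytes"] - flow["retrans_bytes"]) / dur
    # Loss estimated as the fraction of sent packets that needed retransmission.
    packet_loss = flow["retrans_pkts"] / flow["spkts"] if flow["spkts"] else 0.0
    return {"throughput": throughput, "goodput": goodput, "packet_loss": packet_loss}

features = derived_features({"dur": 2.0, "sbytes": 1000, "dbytes": 600,
                             "retrans_bytes": 100, "spkts": 20, "retrans_pkts": 1})
```

Each derived value then becomes one more column that the filter-based and wrapper-based selection stages can keep or discard.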
Memristive crossbar-based architecture provides an energy-efficient platform to accelerate neural networks (NNs) thanks to its Processing-in-Memory (PIM) nature. However, device-to-device variation (DDV), typically modeled as a Lognormal distribution, deviates the programmed weights from their target values, resulting in significant performance degradation. This paper proposes a new Bayesian Neural Network (BNN) approach to enhance the robustness of weights against DDV. Instead of the Gaussian variational posterior widely used in conventional BNNs, our approach adopts a DDV-specific variational posterior distribution, i.e., the Lognormal distribution. Accordingly, the prior distribution is modified to remain consistent with the posterior distribution and avoid expensive Monte Carlo simulations. Furthermore, the mean of the prior distribution is dynamically adjusted in accordance with the mean of the Lognormal variational posterior for better convergence and accuracy. Experimental results show that, compared with state-of-the-art approaches, the proposed BNN approach significantly boosts inference accuracy under DDV on several well-known datasets and modern NN architectures. For example, inference accuracy improves from 18\% to 74\% for ResNet-18 on CIFAR-10, even under large variations.
Authored by Yang Xiao, Qi Xu, Bo Yuan
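The Lognormal DDV model in the abstract above can be illustrated numerically. The sketch below (with an illustrative sigma, target weight, and sample count of my choosing) checks the analytic mean E[w] = w_target · exp(σ²/2) against samples; this systematic upward shift of the effective weight mean is the kind of mismatch the paper's dynamic prior-mean adjustment is meant to track.

```python
import math
import random

random.seed(42)

# Device-to-device variation: a programmed weight deviates from its target
# multiplicatively, w = w_target * LogNormal(mu=0, sigma). Values illustrative.
sigma = 0.3
w_target = 0.5
samples = [w_target * random.lognormvariate(0.0, sigma) for _ in range(100_000)]

empirical_mean = sum(samples) / len(samples)
# E[LogNormal(0, s)] = exp(s^2 / 2), so the mean sits above the target value.
analytic_mean = w_target * math.exp(sigma ** 2 / 2)
```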
In the landscape of modern computing, fog computing has emerged as a service provisioning mechanism that addresses the dual demands of low latency and service localisation. A fog architecture consists of a network of interconnected nodes that work collectively to execute tasks and process data in a localised area, thereby reducing the delay induced by communication with the cloud. However, a key issue with fog service provisioning models is their limited localised processing capability and storage relative to the cloud, which presents inherent scalability issues. In this paper, we propose volunteer computing coupled with optimisation methods to address localised fog scalability. The optimisation methods ensure the optimal use of fog infrastructure, while volunteer computing is leveraged to scale the fog network as required. We propose an intelligent approach for node selection in a trustworthy fog environment that satisfies the performance and bandwidth requirements of the fog network. The problem is formulated as a multi-criteria decision-making (MCDM) problem in which nodes are evaluated and ranked based on several factors, including service level agreement (SLA) parameters and reputation value.
Authored by Asma Alkhalaf, Farookh Hussain
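One of the simplest MCDM formulations for the node ranking described above is a weighted sum over normalised criteria. The criteria names, weights, and candidate values below are illustrative; the abstract does not fix a specific weighting scheme.

```python
def rank_nodes(candidates, weights):
    """Weighted-sum MCDM ranking of volunteer nodes.

    candidates: list of (node_id, metrics) where each metric is in [0, 1].
    weights: criterion -> weight; higher total score ranks first."""
    score = lambda metrics: sum(w * metrics[c] for c, w in weights.items())
    return sorted(candidates, key=lambda n: score(n[1]), reverse=True)

candidates = [
    ("vol-1", {"reputation": 0.9, "bandwidth": 0.6, "sla": 0.8}),
    ("vol-2", {"reputation": 0.5, "bandwidth": 0.9, "sla": 0.7}),
    ("vol-3", {"reputation": 0.8, "bandwidth": 0.8, "sla": 0.9}),
]
weights = {"reputation": 0.5, "bandwidth": 0.2, "sla": 0.3}
ranking = rank_nodes(candidates, weights)
```

With these weights the balanced node "vol-3" outranks the high-reputation but low-bandwidth "vol-1", showing how the weight vector encodes the fog operator's priorities.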
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, using Federated Learning to maintain data privacy is also a challenge for IoT scenarios. Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, heterogeneity of devices, and data security concerns. In this sense, Trustworthy Federated Learning has emerged as a potential solution, combining privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup in which one component is in charge of federated model generation and another of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves an average F1-score of 0.9724 for identification in a centralized setup, while the average F1-score in the federated setup is 0.8320. In addition, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
The digitalization and smartization of modern digital systems involve the implementation and integration of emerging technologies such as Artificial Intelligence. By incorporating new technologies, the attack surface of the system also expands, and specialized cybersecurity mechanisms and tools are required to counter potential new threats. This paper introduces a holistic security risk assessment methodology that aims to assist Artificial Intelligence system stakeholders in guaranteeing the correct design and implementation of technical robustness in Artificial Intelligence systems. The methodology is designed to facilitate the automated security risk assessment of Artificial Intelligence components together with the rest of the system components. To support the methodology, a solution for automating Artificial Intelligence risk assessment is also proposed. Both the methodology and the tool will be validated by assessing and treating risks in Artificial Intelligence-based cybersecurity solutions integrated into modern digital industrial systems that leverage emerging technologies such as the cloud continuum, including Software-Defined Networking (SDN).
Authored by Eider Iturbe, Erkuden Rios, Nerea Toledo
Device recognition is the primary step toward a secure IoT system. However, existing device recognition technology often suffers from indistinct data characteristics and insufficient training samples, resulting in low recognition rates. To address this problem, a convolutional neural network-based IoT device recognition method is proposed. We first collect the background icons of various IoT devices from the Internet, then use the ResNet50 neural network to extract icon feature vectors and build an IoT icon library, and finally realize accurate identification of device types through image retrieval. The experimental results show that the accuracy of sampled retrieval within the icon library reaches 98.5\%, while the recognition accuracy outside the library reaches 83.3\%, effectively identifying IoT device types.
Authored by Minghao Lu, Linghui Li, Yali Gao, Xiaoyong Li
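The retrieval step described above (match a query icon's feature vector against a library of per-device-type embeddings) can be sketched with cosine similarity. The tiny 3-D vectors and device types below are stand-ins for the paper's 2048-dimensional ResNet50 embeddings and its actual icon library.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Toy "icon library": device type -> embedding (illustrative values).
library = {
    "camera":  [0.9, 0.1, 0.0],
    "router":  [0.1, 0.8, 0.3],
    "speaker": [0.0, 0.2, 0.9],
}

def identify(query_vec):
    """Return the library device type whose embedding is most similar."""
    return max(library, key=lambda t: cosine(query_vec, library[t]))
```

A query embedding close to one of the stored vectors retrieves that device type, which is the nearest-neighbour decision underlying the reported in-library and out-of-library accuracies.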
Recommender systems (RS) are an efficient tool to reduce information overload when one faces an overwhelming choice of resources. Embedding context-awareness into RS has been found to increase accuracy and user satisfaction by allowing systems to consider a user's current situation (context). Context-aware recommender systems (CARS) have applications in various areas, including education, where they can help learners by suggesting learning resources, peers to collaborate with, and more. When CARS is used in a learning context, it adds to the issue of lack of trust in the information, its source, and its intention as one builds knowledge through it. Further, embedding context-awareness adds to the trust issue due to the additional layer of automated context detection and interpretation without the user's involvement. I investigate how to build trust in CARS in an educational setting. My investigation is threefold: (a) understanding users' perceptions of CARS; (b) investigating design interventions to build trust in CARS; and (c) designing and evaluating a multidimensional approach to building trust in CARS.
Authored by Neha Rani
Connected, Cooperative, and Autonomous Mobility (CCAM) will take intelligent transportation to a new level of complexity. CCAM systems can be thought of as complex Systems-of-Systems (SoSs). They pose new challenges to security as consequences of vulnerabilities or attacks become much harder to assess. In this paper, we propose the use of a specific type of a trust model, called subjective trust network, to model and assess trustworthiness of data and nodes in an automotive SoS. Given the complexity of the topic, we illustrate the application of subjective trust networks on a specific example, namely Cooperative Intersection Management (CIM). To this end, we introduce the CIM use-case and show how it can be modelled as a subjective trust network. We then analyze how such trust models can be useful both for design time and run-time analysis, and how they would allow us a more precise quantitative assessment of trust in automotive SoSs. Finally, we also discuss the open research problems and practical challenges that need to be addressed before such trust models can be applied in practice.
Authored by Frank Kargl, Nataša Trkulja, Artur Hermann, Florian Sommer, Anderson de Lucena, Alexander Kiening, Sergej Japs
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and to make access decisions accordingly. While this approach is applied to blockchain here, it can also be extended to other areas. It can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
Authored by Fatemeh Stodt, Christoph Reich, Axel Sikora, Dominik Welte
The principles of social networking and the Internet of Things were combined to create the Social Internet of Things (SIoT) paradigm. This paradigm cannot become widely adopted, to the point of being a well-established technology, without a security mechanism that assures reliable interactions between SIoT nodes. Trust management (TM) is thus a major challenge in SIoT systems: a trust score must be created for ranking the network nodes. In the methodologies of existing TM models, this score persists for subsequent transactions and is changed only after some time has passed or after another transaction. However, a trust evaluation methodology must be able to consider the different constraints of SIoT environments (dynamism and scalability) when building trust scores. Based on both event-driven and time-driven trust update methods, our model can identify which damaging nodes should be eliminated based on their changing problematic behaviors over time. The effectiveness of the proposed model has been validated by a number of simulation-based experiments conducted on various scenarios.
Authored by Rim Magdich, Hanen Jemal, Mounir Ben Ayed
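The two update styles named in the abstract above can be sketched as a pair of small functions: a time-driven decay applied between transactions and an event-driven update applied after each one. The half-life, neutral point of 0.5, and smoothing factor are illustrative choices, not the paper's parameters.

```python
def decayed_trust(trust, elapsed_s, half_life_s=3600.0):
    """Time-driven update: with no new interactions, a trust score in [0, 1]
    decays toward a neutral 0.5 instead of persisting unchanged."""
    w = 0.5 ** (elapsed_s / half_life_s)
    return 0.5 + (trust - 0.5) * w

def event_update(trust, outcome, alpha=0.2):
    """Event-driven update after a transaction: outcome is 1 for a
    satisfactory transaction, 0 for a problematic one."""
    return (1 - alpha) * trust + alpha * outcome
```

A node whose behavior turns problematic is pulled down by successive event updates, while stale scores fade over time; together the two rules let the ranking track changing behavior rather than freezing a node's score between transactions.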
The prediction of human trust in machines within decision-aid systems is crucial for improving system performance. However, previous studies have measured machine performance based only on its decision history, failing to account for the machine's current decision state. This delay in evaluating machine performance can result in biased trust predictions, making it challenging to enhance the overall performance of the human-machine system. To address this issue, this paper proposes incorporating machine-estimated performance scores into a human-machine trust prediction model to improve trust prediction accuracy and system performance. We also explain how this model can enhance system performance. To estimate the accuracy of the machine's current decision, we employ the KNN (K-Nearest Neighbors) method and obtain a corresponding performance score. Next, we report the estimated score to humans through the human-machine interaction interface and obtain human trust via trust self-reporting. Finally, we fit the trust prediction model parameters to data and evaluate the model's efficacy through simulation on a public dataset. Our ablation experiments show that the model reduces trust prediction bias by 3.6\% and significantly enhances the overall accuracy of human-machine decision-making.
Authored by Shaojun Chen, Yun-Bo Zhao, Yang Wang, Junsen Lu
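A KNN-based performance score of the kind described above can be sketched by scoring the machine's current decision state against its k most similar past cases. The feature space, distance metric, and history below are hypothetical; the abstract does not specify them.

```python
import math

def knn_performance_score(history, current, k=3):
    """Estimate the machine's accuracy in its current decision state as the
    fraction of the k nearest past cases it decided correctly.

    history: list of (feature_vector, decision_was_correct) pairs."""
    nearest = sorted(history, key=lambda h: math.dist(h[0], current))[:k]
    return sum(1 for _, correct in nearest if correct) / k

# Hypothetical decision history with an "easy" region and a "hard" region.
history = [((0.0, 0.0), True), ((0.1, 0.1), True), ((0.2, 0.0), True),
           ((5.0, 5.0), False), ((5.1, 5.0), False), ((5.0, 5.2), False)]
score_easy = knn_performance_score(history, (0.05, 0.05))
score_hard = knn_performance_score(history, (5.05, 5.1))
```

Reporting a low score when the current state falls in a region where the machine has historically erred is what lets the human calibrate trust to the current decision rather than to the overall history.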
Learning through web browsing, often termed Search-as-Learning (SaL), can create information overload due to thousands of search results. SaL can be made more efficient by developing context-aware tools that recommend items to the user and minimize information overload. However, to use context-aware recommender systems (CARS), users need to trust them. The literature has proposed explanations as a feature that helps build trust. We investigate the impact of explanations on user trust and user experience when using CARS for SaL. Our study results show that people trust a CARS without explanations more during first use, whereas for a CARS with explanations, user trust becomes significant only after multiple uses. Through interviews, we also uncovered an interesting paradox: even though users do not perceive that explanations add to their learning outcomes, they still prefer a CARS with explanations over one without.
Authored by Neha Rani, Yadi Qian, Sharon Chu
Educational recommender systems (RS) have become widely popular with the paradigm shift to online learning and the availability of a wide variety of learning resources. Educational RS on various education platforms use a wide variety of filtering techniques, which has led to the development of multiple types of RS. Context-aware recommender systems (CARS) are an emerging type of RS that use the user's context to filter recommendations, making recommendations more relevant to the user's current situation. CARS may face initial distrust compared to other RS due to the additional automation layer of context awareness and the use of more user data. Therefore, we conducted a survey-based study to find differences in user trust and perception between CARS and other RS. In the study, users viewed examples of CARS and RS. The results show that users have significantly lower trust in CARS compared to RS.
Authored by Neha Rani, Sharon Chu
Trust evaluation and trust establishment play crucial roles in the management of trust within a multi-agent system. When it comes to collaboration systems, trust becomes directly linked to the specific roles performed by agents. The Role-Based Collaboration (RBC) methodology serves as a framework for assigning roles that facilitate agent collaboration. Within this context, the behavior of an agent with respect to a role is referred to as a process role. This research paper introduces a role engine that incorporates a trust establishment algorithm aimed at identifying optimal and reliable process roles. In our study, we define trust as a continuous value ranging from 0 to 1. To optimize trustworthy process roles, we have developed a consensus-based Gaussian Process Factor Graph (GPFG) tool. Our simulations and experiments validate the feasibility and efficiency of our proposed approach with autonomous robots in unsignalized intersections and narrow hallways.
Authored by Behzad Akbari, Haibin Zhu, Ya-Jun Pan
The construction of traditional industrial networks poses challenges in cybersecurity, as industries are increasingly becoming more interconnected for management purposes. In this study, we analyze events related to the insertion of the Zero Trust approach into industrial control systems. In a simulated test environment, we investigate how these systems respond to cyberattacks commonly observed in industrial scenarios. The results aim to identify potential benefits that Zero Trust policies can offer to industrial control systems vulnerable to cyberattacks.
Authored by Lucas Cruz, Iguatemi Fonseca
This paper describes a Zero Trust Architecture (ZTA) approach for the survivability development of mission critical embedded systems. Designers could use ZTA as a systems analysis tool to explore the design space. The ZTA concept of “never trust, always verify” is being leveraged in the design process to guide the selection of security and resilience features for the codesign of functionality, performance, and survivability. The design example of a small drone for survivability is described along with the explanation of the ZTA approach.
Authored by Michael Vai, David Whelihan, Eric Simpson, Donato Kava, Alice Lee, Huy Nguyen, Jeffrey Hughes, Gabriel Torres, Jeffery Lim, Ben Nahill, Roger Khazan, Fred Schneider
Cybersecurity is largely based on the use of frameworks (ISO27k, NIST, etc.) whose main objective is compliance with the standard. They do not, however, address the quantification of the risk deriving from a threat scenario. This paper proposes a methodology that, having evaluated the overall capability of the controls of an ISO27001 framework, selects those that mitigate a threat scenario and evaluates the risk according to a Cybersecurity Risk Quantification model.
Authored by Glauco Bertocchi, Alberto Piamonte
Simulation research on fish schooling behavior is of great significance. This paper proposes an improved fish schooling behavior simulation model that introduces collision avoidance, escape, and pursuit rules on top of the Boids model, so that the model can simulate the response of fish when facing threats. A simulation of fish schooling behavior in a complex environment is presented based on Unity3D. Quantitative analysis of the simulation results shows that the proposed model effectively reflects the behavioral characteristics of fish schools. These results are highly consistent with actual fish schooling behavior, demonstrating the feasibility of the model in simulating fish schooling behavior.
Authored by Jiaxin Li, Xiaofeng Sun
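The Boids model that the paper extends combines three steering rules: cohesion, alignment, and separation. A 2-D toy version of one velocity update is sketched below; the gains and positions are illustrative, and the paper's added avoidance, escape, and pursuit rules would contribute further terms on top of these.

```python
def boids_step(pos, vel, neighbors, k_coh=0.01, k_ali=0.05, k_sep=0.02):
    """One Boids velocity update for a focal fish.

    pos, vel: 2-D tuples; neighbors: list of (pos, vel) for nearby fish."""
    n = len(neighbors)
    if n == 0:
        return vel
    # Cohesion: steer toward the local centre of mass.
    cx = sum(p[0] for p, _ in neighbors) / n - pos[0]
    cy = sum(p[1] for p, _ in neighbors) / n - pos[1]
    # Alignment: match the average neighbor velocity.
    ax = sum(v[0] for _, v in neighbors) / n - vel[0]
    ay = sum(v[1] for _, v in neighbors) / n - vel[1]
    # Separation: move away from neighbors to avoid collisions.
    sx = sum(pos[0] - p[0] for p, _ in neighbors)
    sy = sum(pos[1] - p[1] for p, _ in neighbors)
    return (vel[0] + k_coh * cx + k_ali * ax + k_sep * sx,
            vel[1] + k_coh * cy + k_ali * ay + k_sep * sy)

# A stationary fish centred between two neighbors swimming right is
# pulled along by the alignment rule.
new_vel = boids_step((0.0, 0.0), (0.0, 0.0),
                     [((1.0, 0.0), (1.0, 0.0)), ((-1.0, 0.0), (1.0, 0.0))])
```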
Cyber-physical systems such as automatic metering infrastructure (AMI) are overly complex infrastructures. With myriad stakeholders, real-time constraints, heterogeneous platforms, and component dependencies, a plethora of attack possibilities arises. Despite the best available technology countermeasures and compliance standards, security practitioners struggle to protect their infrastructures. At the same time, not all attacks are the same in terms of likelihood of occurrence and impact, so it is important to rank the various attacks and perform scenario analysis to reach objective decisions on security countermeasures. In this paper, we conduct a comprehensive security risk analysis of AMI, both qualitatively and quantitatively. The qualitative analysis ranks the attacks in terms of sensitivity and criticality. The quantitative analysis arranges the attacks as an attack tree and performs Bayesian analysis. Typically, state-of-the-art quantitative security risk analysis suffers from data scarcity; we acknowledge this problem and circumvent it by using a standard vulnerability database. Unlike state-of-the-art surveys on the subject, which capture the big picture, our work is geared to provide prioritized baselines for addressing the most common and damaging attacks.
Authored by Rajesh Kumar, Ishan Rai, Krish Vora, Mithil Shah
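The quantitative step above (arranging attacks as an attack tree and propagating probabilities) can be sketched with a minimal AND/OR evaluator. The tree shape and leaf probabilities below are invented for illustration, and the OR gate assumes independent alternatives rather than the paper's full Bayesian treatment.

```python
def attack_prob(node):
    """Evaluate the success probability of an attack tree node.

    Leaves carry a probability under key "p"; internal nodes carry a
    "gate" ("AND" or "OR") and a list of "children"."""
    if "p" in node:
        return node["p"]
    child_ps = [attack_prob(c) for c in node["children"]]
    if node["gate"] == "AND":          # all sub-attacks must succeed
        out = 1.0
        for p in child_ps:
            out *= p
        return out
    out = 1.0                           # OR: 1 - prod(1 - p_i), independence assumed
    for p in child_ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical AMI goal: either tamper with a meter (two steps) or
# compromise the head-end directly.
tree = {"gate": "OR", "children": [
    {"gate": "AND", "children": [{"p": 0.6}, {"p": 0.5}]},
    {"p": 0.2},
]}
risk = attack_prob(tree)
```

Leaf probabilities could be seeded from a standard vulnerability database, as the abstract suggests, and the per-subtree values then give the ranking needed for prioritized countermeasures.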
Intrusion detection is important in the defense-in-depth network security framework and has been a hot topic in computer network security in recent years. In this paper, an effective method for anomaly intrusion detection with low overhead and high efficiency is presented and applied to monitor the abnormal behavior of processes. The method is based on rough set theory and is capable of extracting a minimal set of detection rules, forming a normal behavior model from the system call sequences generated during the normal execution of a process. Building on a network security knowledge base system, this paper then proposes an intrusion detection model comprising data filtering, attack attempt analysis, and a situation assessment engine. In this model, evolutionary self-organizing maps are used to discover multi-target attacks of the same origin; association rules obtained by time series analysis are used to correlate online alarm events and identify complex attacks scattered in time; and, finally, evaluation indexes and quantitative evaluation methods are given for host-level and LAN-system-level threats, respectively. Compared with existing IDSs, this model has a more complete structure and richer available knowledge, can more easily detect cooperative attacks, and effectively reduces the false positive rate.
Authored by Songjie Gong
This research looks into the measures taken by financial institutions to secure their systems and reduce the likelihood of attacks. The study results indicate that all cultures are currently undergoing a digital transformation. The dawn of the Internet ushered in an era of increased sophistication in many fields, and there has been a gradual but steady shift toward digital and networked computers in the business world over the past few years. Financial organizations are increasingly vulnerable to external cyberattacks due to the ease of use and positive effects of these technologies; they are also susceptible to attacks from within their own organisation. In this paper, we develop a machine learning-based quantitative risk assessment model that effectively assesses and minimises this risk. Quantitative risk calculation is used because it is the best way to calculate network risk. According to the study, a network's vulnerability is proportional to the number of times its threats have been exploited and the amount of damage they have caused. Simulation is used to test the model's efficacy, and the results show that the model detects threats more effectively than the other methods.
Authored by Lavanya M, Mangayarkarasi S
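The proportionality stated in the abstract above (risk grows with how often each threat has been exploited and with the damage caused) can be written as a minimal scoring sketch. The threat entries, counts, and damage units are illustrative, not the paper's data or its ML model.

```python
def network_risk(threats):
    """Aggregate quantitative risk as the sum over threats of
    (times exploited) x (damage per exploit)."""
    return sum(t["exploit_count"] * t["damage"] for t in threats)

# Hypothetical threat register; damage in arbitrary monetary units.
threats = [
    {"name": "phishing", "exploit_count": 12, "damage": 5.0},
    {"name": "insider",  "exploit_count": 2,  "damage": 40.0},
]
total_risk = network_risk(threats)
per_threat = {t["name"]: t["exploit_count"] * t["damage"] for t in threats}
```

Note that the rarely exploited insider threat still outscores the frequent phishing threat here, which is why a quantitative model weighs damage alongside frequency.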