Bibliography
Since ancient times, queues and delays have been an exasperating reality of daily human life. Today they follow us everywhere: in technical, social, socio-technical, and even control systems, dramatically degrading their performance. Among these, it is computer systems that cause growing anxiety in our digital era. For everyday Internet surfing, a long, annoying delay is unpleasant but not dangerous; for industrial control systems, especially those operating critical infrastructures, such behavior is unacceptable. The article presents a deterministic approach to solving digital control system problems associated with delays and backlogs. Based on Network calculus, it provides worst-case results, in contrast to the statistical methods of Queuing theory, and such guarantees are eminently desirable for critical infrastructures. The article covers the basics of Network calculus, a theory of deterministic queuing systems; its evolution regarding the relationship between backlog bounds and delay; and a technique for handling empirical data. It also thoroughly discusses the problems solved by the deterministic approach: standard calculation of network performance measures, estimation of the maximum database updating time, and cybersecurity assessment, including the representation of the CIA triad, the influence of operational technology, and an understanding of availability focused on its correlation with delay.
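For orientation, the two classic Network calculus bounds that underpin such worst-case results can be stated for the simplest curves, a token-bucket arrival curve and a rate-latency service curve; the notation below follows the standard textbook formulation rather than this particular article.

```latex
% Classic Network calculus bounds (standard formulation): a flow with
% token-bucket arrival curve \alpha(t) = b + rt served under a rate-latency
% service curve \beta(t) = R(t - T)^{+}, with r \le R, satisfies
\text{backlog} \;\le\; \sup_{s \ge 0}\bigl(\alpha(s) - \beta(s)\bigr) \;=\; b + rT,
\qquad
\text{delay} \;\le\; T + \frac{b}{R}.
```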
Computer networks and surging advances in information technology form a critical infrastructure for the network transactions of business entities. Information exchange and data access through such infrastructure are scrutinized by adversaries for vulnerabilities that lead to cyber-attacks. This paper presents an agent-based system model that conceptualizes and extracts the explicit and latent structure of complex enterprise systems, as well as human interactions within them, to determine an entity's common vulnerabilities. The model captures emergent behavior arising from the interactions of multiple network agents, parameterized by the number of workstations; regular, administrator, and third-party users; external and internal attacks; defense mechanisms for the network setting; and many other factors. A risk-based approach to modelling the cybersecurity of a business entity is used to derive the rate of attacks. A neural network model generalizes the type of attack from network traffic features, allowing dynamic state changes. Rules of engagement that generate self-organizing behavior are leveraged to appoint a defense mechanism suited to the model's attack state. The model's effectiveness is depicted by a time-state chart showing the number of affected assets for the different types of attacks triggered by the entity's risk, and the time it takes to revert to the normal state. The model also associates a relevant cost with each incident occurrence, motivating the enhancement of security solutions.
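As a flavor of what one simulation tick in such an agent-based model might look like, here is a minimal, hypothetical Python sketch; the class and parameter names (Workstation, attack_rate, defense_strength) are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(1)

class Workstation:
    """One network agent; state cycles normal -> infected -> normal."""
    def __init__(self, wid):
        self.wid = wid
        self.state = "normal"
        self.recover_in = 0

def tick(agents, attack_rate, defense_strength):
    """Advance the simulation one step; return the count of affected assets."""
    for ws in agents:
        if ws.state == "normal" and random.random() < attack_rate:
            ws.state = "infected"                  # risk-driven attack lands
            ws.recover_in = random.randint(1, 5)   # incident duration
        elif ws.state == "infected":
            ws.recover_in -= defense_strength      # defense shortens the outage
            if ws.recover_in <= 0:
                ws.state = "normal"
    return sum(ws.state == "infected" for ws in agents)

agents = [Workstation(i) for i in range(50)]
for t in range(10):
    affected = tick(agents, attack_rate=0.1, defense_strength=1)
    print(f"t={t}: {affected} affected assets")    # data for a time-state chart
```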
The current trend among IoT users is toward consuming services and data externally, because voluminous processing demands resource-rich machines. Instead of relying on a cloud with poor connectivity or limited bandwidth, the IoT user prefers cloudlet-based fog computing. However, the choice of cloudlet depends entirely on its trust and reliability. In practice, even when a cloudlet possesses the required trusted platform module (TPM), we argue that the presence of a TPM is not enough to make the cloudlet trustworthy, since the TPM supports only the primitive security of the bootstrap. Besides uncertainty in security, other uncertain network conditions (e.g. bandwidth, latency, and the expected time to complete a service request for cloud-based services) may also affect the cloudlets. Therefore, to evaluate the trust value of multiple cloudlets under uncertainty, this paper proposes an empirical process for trust evaluation, followed by a measure of the trust-based reputation of cloudlets built on computational intelligence, namely fuzzy logic and ant colony optimization (ACO). In the process, fuzzy-logic-based inference and membership evaluation of trust are presented; in addition, ACO and its pheromone communication across different colonies are modeled over multiple cloudlets. Finally, a measure of affinity, or the popular trust and reputation of the cloudlets, is also proposed. The computationally intelligent approaches are investigated in terms of performance in the context of applications spanning multiple cloudlets. The contribution is thus directed toward building a trusted cloudlet-based fog platform.
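A minimal sketch of how a fuzzy trust membership could feed the standard ACO pheromone update for cloudlet reputation; the evaporation rate, the latency-based membership function, and the cloudlet names are assumptions for illustration, not the paper's actual design.

```python
# Standard ACO pheromone update: tau <- (1 - rho) * tau + delta.
rho = 0.3                                   # evaporation rate (assumed)
pheromone = {"cloudlet_a": 1.0, "cloudlet_b": 1.0, "cloudlet_c": 1.0}

def fuzzy_trust(latency_ms, max_ms=200.0):
    # Simple decreasing membership: low latency -> high trust in [0, 1].
    return max(0.0, 1.0 - latency_ms / max_ms)

def reinforce(cloudlet, latency_ms):
    # Ants (service requests) deposit pheromone in proportion to fuzzy trust.
    delta = fuzzy_trust(latency_ms)
    pheromone[cloudlet] = (1 - rho) * pheromone[cloudlet] + delta

for latency in [40, 55, 38]:                # observed latencies for cloudlet_a
    reinforce("cloudlet_a", latency)
reinforce("cloudlet_b", 180)                # one slow interaction

best = max(pheromone, key=pheromone.get)    # highest reputation wins requests
print(pheromone, "->", best)
```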
This paper proposes a deep-learning-based white-hat worm launcher for a Botnet Defense System (BDS). A BDS uses white-hat botnets to defend an IoT system against malicious botnets; the launcher releases white-hat worms to create those botnets according to the strategy decided by the BDS. The proposed launcher learns, through deep learning, where the right places are to put white-hat worms so that they successfully drive out malicious botnets. Given a system situation in which malicious botnets have invaded, it predicts a worm placement from the learned model and launches the worms accordingly. We confirmed the effect of the proposed launcher through simulation-based evaluation.
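A hypothetical PyTorch sketch of the kind of placement predictor the abstract describes: a small network mapping a malicious-bot occupancy state to per-node launch scores. The architecture, sizes, and the top-k launch rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

N = 16                                     # candidate placement nodes (assumed)

# Tiny scoring network: occupancy state in -> one launch score per node out.
model = nn.Sequential(
    nn.Linear(N, 32), nn.ReLU(),
    nn.Linear(32, N),
)

state = torch.rand(1, N)                   # 1.0 = node held by a malicious bot
scores = model(state)
placement = scores.topk(3).indices         # launch worms at the 3 best nodes
print(placement.tolist())
```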
We consider different models of malicious multiple-access channels, in particular the binary adder channel and the A-channel, and show how they can be used to reformulate digital fingerprinting coding problems. We then propose a new model of multimedia fingerprinting coding in which not only zeroes and plus/minus ones but arbitrary coefficients of linear combinations of noise-like signals can be used to form watermarks (digital fingerprints). This modification allows a dramatic increase in the possible number of users while preserving the property that, if t or fewer malicious users create a forged digital fingerprint, the dealer of the system can identify all of them with zero error probability. We show how the arising problems relate to the compressed sensing problem.
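A minimal formalization consistent with the abstract (the symbols F, x, and C are our notation, not necessarily the paper's):

```latex
% Each user j holds a noise-like signal codeword f_j; a coalition C,
% |C| \le t, forges a fingerprint with arbitrary coefficients x_j:
y \;=\; \sum_{j \in C} x_j f_j \;=\; F x,
% where x is t-sparse (nonzero only on C). Identifying the coalition from y
% is therefore a t-sparse recovery problem, i.e. compressed sensing.
```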
Improved safety, high mobility, and environmental concerns in transportation systems across the world, together with the corresponding developments in information and communication technologies, continue to drive attention toward Intelligent Transportation Systems (ITS). This is evident in advanced driver-assistance systems such as lane departure warning, adaptive cruise control, and collision avoidance. In connected and autonomous vehicles, however, the efficient functioning of these applications depends largely on a vehicle's ability to accurately predict its operating parameters, such as location and speed. The ability to predict a vehicle's immediate future (next) location or speed, or the behavior of its neighbors, helps guarantee integrity, availability, and accountability, thereby boosting the safety and resiliency of Vehicular Networks for Mobile Cyber-Physical Systems (VCPS). In this paper, we propose secure movement prediction for connected vehicles using a Kalman filter. Specifically, the Kalman filter predicts the location and speed of individual vehicles with reference to already observed and known information such as the posted legal speed limit, geographic/road location, and direction. The aim is to achieve resilience through information predicted and exchanged between connected moving vehicles in an adaptive manner. By predicting future locations, a following vehicle can adjust its position more accurately to avoid collision and to ensure optimal information exchange among vehicles.
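The prediction step itself follows the textbook Kalman filter equations; below is a minimal, runnable sketch for a single vehicle with a constant-velocity state [position, speed], where the noise covariances and the measurement values are assumed for illustration.

```python
import numpy as np

dt = 1.0                                  # time step (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (constant velocity)
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 0.1 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[4.0]])                     # measurement noise covariance (assumed)

x = np.array([[0.0], [25.0]])             # initial state: 0 m, 25 m/s
P = np.eye(2)                             # initial state covariance

def predict(x, P):
    x = F @ x                             # predicted next location and speed
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [26.1, 51.8, 77.0]:              # noisy position readings (m, assumed)
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))
print("estimated position %.1f m, speed %.1f m/s" % (x[0, 0], x[1, 0]))
```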
In the original grey correlation analysis algorithm, the detected edges are comparatively rough and the thresholds must be determined in advance. We therefore propose an adaptive edge detection method based on grey correlation analysis, in which the basic principle of the original algorithm is used to derive an automatic, adaptive threshold from the mean of the 3×3 neighborhood around the pixel under examination and from properties of human vision. Because the proposed algorithm initially detects a relatively large number of false edges, it is enhanced by processing the eight neighboring pixels around each edge pixel, and the results are merged to obtain the final edge map. Experimental results show that, compared with traditional edge detection algorithms, the proposed algorithm produces a more complete edge map with better continuity.
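A hypothetical sketch of how a grey-relational edge score over a 3×3 neighborhood could be computed, using the standard grey relational coefficient with resolution coefficient rho = 0.5; the reference choice and the scoring rule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rho = 0.5                                     # resolution coefficient (standard)

def edge_score(img, i, j):
    """Score the pixel at (i, j): low grey correlation with the local
    mean suggests an edge. Valid for interior pixels only."""
    patch = img[i - 1:i + 2, j - 1:j + 2].astype(float)
    ref = patch.mean()                        # reference: local 3x3 mean
    delta = np.abs(patch - ref)               # absolute difference sequence
    dmin, dmax = delta.min(), delta.max()
    if dmax == 0:
        return 0.0                            # flat region: no edge
    xi = (dmin + rho * dmax) / (delta + rho * dmax)  # relational coefficients
    return 1.0 - xi.mean()                    # low correlation -> likely edge

img = np.random.randint(0, 256, (8, 8))
print(edge_score(img, 4, 4))
```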
In this paper, a time-driven, performance-aware mathematical model of trust in the robot is proposed for a Human-Robot Collaboration (HRC) framework. The proposed trust model is based on the performance of both the human operator and the robot. The human operator's performance is modeled from both physical and cognitive performance, while the robot's performance is modeled over its unpredictable, predictable, dependable, and faithful operating regions. The model is validated via different simulation scenarios. The simulation results show that trust in the robot within the HRC framework is governed by both robot performance and human operator performance, and can be improved by enhancing the robot's performance.
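One common form such a time-driven model can take is sketched below; this is a hedged illustration consistent with the abstract, with the coefficients A, B_1, B_2 as assumptions rather than the paper's actual parameters.

```latex
% Assumed time-driven trust update (A, B_1, B_2 are weighting coefficients):
T(k) \;=\; A\,T(k-1) \;+\; B_1\,P_R(k) \;+\; B_2\,P_H(k),
% where P_R(k) is the robot performance (over its unpredictable, predictable,
% dependable, and faithful regions) and P_H(k) combines the operator's
% physical and cognitive performance.
```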
The emergence of Cyber-Physical Systems (CPSs) is a potential paradigm shift in the usage of Information and Communication Technologies (ICT). From being predominantly a facilitator of information and communication services, the role of ICT has expanded to the management of objects and resources in the physical world. It is therefore imperative to devise mechanisms that ensure the trustworthiness of data and secure vulnerable devices against security threats. This work presents an analytical framework based on non-cooperative game theory for evaluating the trustworthiness of the individual sensor nodes that constitute a CPS. The proposed game-theoretic model captures the factors impacting the trustworthiness of CPS sensor nodes. The model is then used to estimate the Nash equilibrium of the game and to derive a trust threshold criterion. The trust threshold represents the minimum trust score that individual sensor nodes must maintain during CPS operation; nodes with trust scores below the threshold are potentially malicious and may be removed or isolated to ensure secure operation of the CPS.
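To make the threshold idea concrete, here is a hypothetical 2×2 instance: a defender deciding whether to trust or isolate a sensor node that may play honest or malicious. The payoffs are invented for illustration; the equilibrium probability of honest play can then be read as a trust threshold.

```python
import numpy as np

# Rows: defender {trust, isolate}; columns: node {honest, malicious}.
A = np.array([[2.0, -3.0],     # defender payoffs (assumed)
              [-1.0, 1.0]])
B = np.array([[1.0, 3.0],      # node payoffs (assumed)
              [0.0, -2.0]])

# Mixed-strategy Nash equilibrium of a 2x2 game: each side randomizes so
# that the opponent is indifferent between its two pure strategies.
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] + A[1, 1] - A[0, 1])
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] + B[1, 1] - B[1, 0])

print(f"equilibrium P(node honest)    = {q:.3f}")  # read as trust threshold:
print(f"equilibrium P(defender trusts) = {p:.3f}")  # nodes scoring below q
# would be flagged as potentially malicious and isolated.
```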