Biblio

Found 314 results

Filters: Keyword is Task Analysis
2021-10-12
Hassan, Wajih Ul, Bates, Adam, Marino, Daniel.  2020.  Tactical Provenance Analysis for Endpoint Detection and Response Systems. 2020 IEEE Symposium on Security and Privacy (SP). :1172–1189.
Endpoint Detection and Response (EDR) tools provide visibility into sophisticated intrusions by matching system events against known adversarial behaviors. However, current solutions suffer from three challenges: 1) EDR tools generate a high volume of false alarms, creating backlogs of investigation tasks for analysts; 2) determining the veracity of these threat alerts requires tedious manual labor due to the overwhelming amount of low-level system logs, creating a "needle-in-a-haystack" problem; and 3) due to the tremendous resource burden of log retention, in practice the system logs describing long-lived attack campaigns are often deleted before an investigation is ever initiated. This paper describes an effort to bring the benefits of data provenance to commercial EDR tools. We introduce the notion of Tactical Provenance Graphs (TPGs) that, rather than encoding low-level system event dependencies, reason about causal dependencies between EDR-generated threat alerts. TPGs provide compact visualization of multi-stage attacks to analysts, accelerating investigation. To address EDR's false alarm problem, we introduce a threat scoring methodology that assesses risk based on the temporal ordering between individual threat alerts present in the TPG. In contrast to the retention of unwieldy system logs, we maintain a minimally-sufficient skeleton graph that can provide linkability between existing and future threat alerts. We evaluate our system, RapSheet, using the Symantec EDR tool in an enterprise environment. Results show that our approach can rank truly malicious TPGs higher than false alarm TPGs. Moreover, our skeleton graph reduces the long-term burden of log retention by up to 87%.
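The temporal-ordering idea in this abstract can be illustrated with a small sketch. This is not RapSheet's actual algorithm; the kill-chain stages, the path representation, and the unit scoring weights below are invented for illustration.

```python
# Hypothetical illustration of temporal-ordering-based threat scoring:
# a causal path of alerts that advances through the kill chain in time
# order scores higher than repeated, unordered alerts. The stage names
# and unit weights are illustrative, not RapSheet's.
KILL_CHAIN = ["initial-access", "execution", "persistence", "exfiltration"]

def score_alert_path(path):
    """Score a causal path of (timestamp, tactic) alerts: each consecutive
    pair that moves forward in both time and kill-chain order adds risk."""
    score = 0
    for (t1, a), (t2, b) in zip(path, path[1:]):
        if t2 >= t1 and KILL_CHAIN.index(b) > KILL_CHAIN.index(a):
            score += 1
    return score

# A multi-stage attack path outranks a path of repeated benign-looking alerts.
attack = [(1, "initial-access"), (2, "execution"),
          (3, "persistence"), (4, "exfiltration")]
benign = [(1, "execution"), (2, "execution")]
```

Ranking graphs by such a score is what lets truly malicious TPGs float above false-alarm TPGs in the abstract's evaluation.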
Zhao, Haojun, Lin, Yun, Gao, Song, Yu, Shui.  2020.  Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–5.
The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, a well-behaved DNN model can be easily fooled and completely change the prediction categories of the input samples. However, research on adversarial attacks in the field of modulation recognition mainly focuses on increasing the prediction error of the classifier, while ignoring the perceptual invisibility of the attack. Aiming at the task of DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure the perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only exerts excellent white-box attacks but also can initiate attacks on a black-box model. Moreover, our method improves the perceptual invisibility of attacks to a certain degree, thereby reducing the risk of an attack being detected.
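The family of iterative, momentum-based attacks this abstract builds on can be sketched on a toy linear model. The model, loss, and hyperparameters below are all invented for illustration; this is not the paper's Nesterov Adam Iterative Method.

```python
import numpy as np

# Toy sign-gradient attack with momentum on a linear "classifier", in the
# spirit of iterative momentum methods. Everything here (weights, loss,
# step sizes) is an assumption for illustration.
w = np.array([1.0, -2.0, 0.5])          # toy linear model: score = w @ x

def loss_grad(x, y):
    # gradient w.r.t. x of the negative margin -y * (w @ x)
    return -y * w

def iterative_attack(x, y, eps=0.3, steps=10, mu=0.9):
    """Push x against its label y while staying in an L-infinity eps-ball."""
    x_adv, velocity = x.copy(), np.zeros_like(x)
    alpha = eps / steps                  # per-step budget
    for _ in range(steps):
        velocity = mu * velocity + loss_grad(x_adv, y)   # momentum
        x_adv = np.clip(x_adv + alpha * np.sign(velocity), x - eps, x + eps)
    return x_adv

x = np.array([0.2, 0.1, -0.3])
x_adv = iterative_attack(x, y=1)         # margin drops, perturbation stays small
```

The eps bound is the knob that trades attack strength against the perceptual invisibility the abstract is concerned with.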
2021-10-04
Yadav, Mohini, Shankar, Deepak, Jose, Tom.  2020.  Functional Safety for Braking System through ISO 26262, Operating System Security and DO 254. 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC). :1–8.
This paper presents an introduction to functional safety through ISO 26262, focusing on possible system, software and hardware failures that bring security threats, with a discussion of DO 254. It discusses an approach to bridge the gap between the different hazard levels and the system's ability to identify a particular fault and resolve it in the minimum possible time span. Results are analyzed by designing models to check for and avoid failures and loopholes prior to development.
Abbas Hamdani, Syed Wasif, Waheed Khan, Abdul, Iltaf, Naima, Iqbal, Waseem.  2020.  DTMSim-IoT: A Distributed Trust Management Simulator for IoT Networks. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :491–498.
In recent years, several trust management frameworks and models have been proposed for the Internet of Things (IoT). Focusing primarily on distributed trust management schemes, testing and validation of these models is still a challenging task. It requires the implementation of the proposed trust model for verification and validation of expected outcomes. Nevertheless, a stand-alone, standard IoT network simulator for testing distributed trust management schemes is not yet available. In this paper, a .NET-based Distributed Trust Management Simulator for IoT Networks (DTMSim-IoT) is presented which enables the researcher to implement any static/dynamic trust management model to compute the trust value of a node. Trust is computed based on direct observation, and the trust value is updated after every transaction. Transaction history and logs of each event are maintained, which can be viewed and exported as a .csv file for future use. In addition, the simulator can also draw a graph based on the .csv file. Moreover, the simulator incorporates identification and mitigation of the On-Off Attack (OOA) in the IoT domain. Furthermore, after identifying any malicious activity by any node in the network, the malevolent node is added to the malicious list, which is disseminated in the network to prevent potential On-Off attacks.
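A direct-observation trust update of the kind such a simulator would apply after every transaction can be sketched as follows. The asymmetric rule and its weights are assumptions for illustration, not DTMSim-IoT's actual formula.

```python
# Hypothetical per-transaction trust update: trust rises slowly on good
# behaviour and drops sharply on bad behaviour, which also punishes On-Off
# attackers that alternate between the two. Weights are illustrative.
def update_trust(trust, outcome_good, alpha=0.05, beta=0.3):
    """Asymmetric update: slow reward, fast punishment; trust stays in [0, 1]."""
    if outcome_good:
        return min(1.0, trust + alpha * (1.0 - trust))
    return max(0.0, trust - beta * trust)

# An On-Off-style behaviour pattern ends up below the starting trust of 0.5.
trust = 0.5
for good in [True, True, False, True, False, True]:
    trust = update_trust(trust, good)
```

Because punishment outweighs reward, a node that alternates good and bad transactions cannot hold its trust level, which is the intuition behind On-Off attack mitigation.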
Ding, Lei, Wang, Shida, Wan, Renzhuo, Zhou, Guopeng.  2020.  Securing core information sharing and exchange by blockchain for cooperative system. 2020 IEEE 9th Data Driven Control and Learning Systems Conference (DDCLS). :579–583.
Privacy protection and information security are two crucial issues for future advanced artificial intelligence devices, especially for cooperative systems with rich core data exchange, which may offer opportunities for attackers to fake interaction messages. To combat such threats, great efforts have been made by introducing trust mechanisms in an initiative or passive way. Furthermore, blockchain and distributed ledger technology provide a decentralized and peer-to-peer network, which has great potential application for multi-agent systems such as IoT devices and robots. It eliminates third-party interference, and data in the blockchain are stored permanently in an encrypted, tamper-resistant way. In this paper, a blockchain methodology is proposed and designed for an advanced cooperative system with artificial intelligence to protect privacy and sensitive data exchange between multi-agents. The validation procedure is performed in the laboratory on a three-level computing network of Raspberry Pi 3B+, NVIDIA Jetson TX2 and a local computing server, for a robot system with four manipulators and four binocular cameras as peer computing nodes, implemented in the Go language.
2021-09-30
Tupakula, Uday, Varadharajan, Vijay, Karmakar, Kallol Krishna.  2020.  Attack Detection on the Software Defined Networking Switches. 2020 6th IEEE Conference on Network Softwarization (NetSoft). :262–266.
Software Defined Networking (SDN) is a disruptive networking technology which adopts a centralised framework to facilitate fine-grained network management. However, security in SDN is still in its infancy and significant work is needed to deal with different attacks in SDN. In this paper we discuss some of the possible attacks on SDN switches and propose techniques for detecting them. We have developed a Switch Security Application (SSA) for the SDN Controller which makes use of trusted computing technology and some additional components for detecting attacks on the switches. In particular, TPM attestation is used to ensure that switches are in a trusted state during boot time before the flow rules are configured on them. The additional components are used for storing and validating messages related to the flow rule configuration of the switches. The stored information is used for generating a trusted report on the expected flow rules in the switches, and this information is used for validating the flow rules that are actually enforced in the switches. If the enforced flow rules vary from the flow rules expected by the SSA, the switch is considered to be under attack and an alert is raised to the SDN Administrator. The administrator can isolate the switch from the network or make use of the trusted report for restoring the flow rules in the switches. We also present a prototype implementation of our technique.
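The validation step described in this abstract can be sketched as a set comparison between the trusted expected rules and the rules actually enforced on a switch. The tuple rule representation below is an assumption, not the SSA's actual format.

```python
# Minimal sketch of flow-rule validation: compare the rules actually
# enforced on a switch against the trusted expected set and raise an
# alert on any divergence.
def validate_switch(expected_rules, enforced_rules):
    missing = expected_rules - enforced_rules      # rules removed by an attacker
    unexpected = enforced_rules - expected_rules   # rules injected by an attacker
    under_attack = bool(missing or unexpected)
    return under_attack, missing, unexpected

# Example: a malicious drop-all rule has been injected into the switch.
expected = {("10.0.0.1", "10.0.0.2", "forward:2")}
enforced = {("10.0.0.1", "10.0.0.2", "forward:2"), ("*", "*", "drop")}
attack, missing, unexpected = validate_switch(expected, enforced)
```

On a mismatch, the returned sets tell the administrator both what to remove and what to restore from the trusted report.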
Wang, Wei, Liu, Tieyuan, Chang, Liang, Gu, Tianlong, Zhao, Xuemei.  2020.  Convolutional Recurrent Neural Networks for Knowledge Tracing. 2020 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :287–290.
Knowledge Tracing (KT) is a task that aims to assess students' mastery level of knowledge and predict their performance over questions, which has attracted widespread attention over the years. Recently, an increasing number of studies have applied deep learning techniques to knowledge tracing and have achieved great success over traditional Bayesian Knowledge Tracing methods. Most existing deep learning-based methods utilize either Recurrent Neural Networks (RNNs) or Convolutional Neural Networks (CNNs). However, it is worth noticing that these two sorts of models are complementary in modeling abilities. Thus, in this paper, we propose a novel knowledge tracing model that takes advantage of both by combining them into a single integrated model, named Convolutional Recurrent Knowledge Tracing (CRKT). Extensive experiments show that our model outperforms the state-of-the-art models on multiple KT datasets.
Mahmoud, Loreen, Praveen, Raja.  2020.  Network Security Evaluation Using Deep Neural Network. 2020 15th International Conference for Internet Technology and Secured Transactions (ICITST). :1–4.
One of the most significant activities in computer network security assurance is the assessment of computer network security. With the goal of finding an effective method for performing the process of security evaluation in a computer network, this paper uses a deep neural network for the task of security evaluation. The DNN is built with Python in the Spyder IDE and is trained and tested on 17 network security indicators; the output represents one of the security levels that have already been defined. The major purpose is to enhance the ability to determine the security level of a computer network accurately based on its selected security indicators. The method we use in this paper to evaluate network security is simple, reduces human-factor interference, and can obtain correct evaluation results rapidly. We analyze the results to decide whether this method enhances the process of evaluating the security of the network in terms of accuracy.
Weber, Iaçanã, Marchezan, Geaninne, Caimi, Luciano, Marcon, César, Moraes, Fernando G..  2020.  Open-Source NoC-Based Many-Core for Evaluating Hardware Trojan Detection Methods. 2020 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.
In many-cores based on Network-on-Chip (NoC), several applications execute simultaneously, sharing computation, communication and memory resources. This resource sharing leads to security and trust problems. Hardware Trojans (HTs) may steal sensitive information, degrade system performance, and in extreme cases, induce physical damage. Methods available in the literature to prevent attacks include firewalls, denial-of-service detection, dedicated routing algorithms, cryptography, task migration, and secure zones. The goal of this paper is to add an HT to an NoC, able to execute three types of attacks: packet duplication, blocking applications, and misrouting. The paper qualitatively evaluates the attacks' effects against methods available in the literature and shows their impact on an NoC-based many-core. The resulting system is an open-source NoC-based many-core for researchers to evaluate new methods against HT attacks.
Serino, Anthony, Cheng, Liang.  2020.  Real-Time Operating Systems for Cyber-Physical Systems: Current Status and Future Research. 2020 International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). :419–425.
This paper studies the current status and future directions of RTOS (Real-Time Operating Systems) for time-sensitive CPS (Cyber-Physical Systems). GPOS (General Purpose Operating Systems) existed before RTOS but did not meet performance requirements for time sensitive CPS. Many GPOS have put forward adaptations to meet the requirements of real-time performance, and this paper compares RTOS and GPOS and shows their pros and cons for CPS applications. Furthermore, comparisons among select RTOS such as VxWorks, RTLinux, and FreeRTOS have been conducted in terms of scheduling, kernel, and priority inversion. Various tools for WCET (Worst-Case Execution Time) estimation are discussed. This paper also presents a CPS use case of RTOS, i.e. JetOS for avionics, and future advancements in RTOS such as multi-core RTOS, new RTOS architecture and RTOS security for CPS.
Gautam, Savita, Umar, M. Sarosh, Samad, Abdus.  2020.  Multi-Fold Scheduling Algorithm for Multi-Core Multi-Processor Systems. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1–5.
Adopting a parallel scheduling function in the design of a multi-scheduling algorithm has a significant impact on the operation of high-performance parallel systems. Various methods of parallelizing scheduling functions are widely applied in traditional multiprocessor systems. In this paper a novel algorithm is introduced which works not only for the parallel execution of jobs but also focuses on the parallelization of the scheduling function itself. It focuses on reducing the execution time and minimizing load imbalance by selecting the volume of tasks for migration in terms of packets. Jobs are grouped into packets consisting of 2^n jobs, which are scheduled in parallel. Thus, an enhancement to the scheduling mechanism through packet formation is made to achieve high utilization of the underlying architecture with increased throughput. The proposed method is assessed on a desktop computer equipped with multi-core processors in cube-based multiprocessor systems. The algorithm is implemented with different configurations of multi-core systems. The simulation results indicate that the proposed technique reduces the overall makespan of execution with improved system performance.
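The packet idea can be sketched as follows: grouping 2**n jobs into a packet means one scheduling decision per packet instead of one per job. The least-loaded dispatch policy and the cost model below are assumptions for illustration, not the paper's exact algorithm.

```python
# Illustrative sketch: jobs are grouped into packets of 2**n jobs and each
# whole packet is dispatched to the currently least-loaded processor, so a
# scheduling decision is made once per packet rather than once per job.
def packet_schedule(job_costs, n_processors, n=2):
    packet_size = 2 ** n
    loads = [0] * n_processors            # accumulated work per processor
    assignment = []                       # (processor, packet) decisions
    for i in range(0, len(job_costs), packet_size):
        packet = job_costs[i:i + packet_size]
        target = loads.index(min(loads))  # least-loaded processor
        loads[target] += sum(packet)
        assignment.append((target, packet))
    return loads, assignment

# Ten jobs become three packets, i.e. three scheduling decisions.
loads, assignment = packet_schedule([3, 1, 2, 2, 4, 4, 1, 1, 5, 5], n_processors=2)
```

The trade-off is coarser load balancing in exchange for fewer scheduling decisions, which is where the throughput gain comes from.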
2021-09-21
Lee, Yen-Ting, Ban, Tao, Wan, Tzu-Ling, Cheng, Shin-Ming, Isawa, Ryoichi, Takahashi, Takeshi, Inoue, Daisuke.  2020.  Cross Platform IoT-Malware Family Classification Based on Printable Strings. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :775–784.
In this era of rapid network development, Internet of Things (IoT) security considerations receive a lot of attention from both the research and commercial sectors. With limited computation resources, unfriendly interfaces, and poor software implementation, legacy IoT devices are vulnerable to many infamous malware attacks. Moreover, the heterogeneity of IoT platforms and the diversity of IoT malware make the detection and classification of IoT malware even more challenging. In this paper, we propose to use printable strings as an easy-to-get but effective cross-platform feature to identify IoT malware on different IoT platforms. The discriminating capability of these strings is verified using a set of machine learning algorithms on malware family classification across different platforms. The proposed scheme shows a 99% accuracy on a large-scale IoT malware dataset consisting of 120K executable files in the Executable and Linkable Format (ELF) when training and testing are done on the same platform. Meanwhile, it also achieves a 96% accuracy when training is carried out on a few popular IoT platforms but testing is done on different platforms. Efficient malware prevention and mitigation solutions can be built on the proposed method to prevent and mitigate IoT malware damage across different platforms.
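Printable-string extraction is the same regardless of CPU architecture, which is what makes it a cross-platform feature. The sketch below extracts strings with a regular expression and uses a toy Jaccard nearest-neighbour in place of the paper's trained machine learning classifiers; the family string profiles and the sample are invented.

```python
import re

# Hypothetical sketch: extract printable-ASCII strings from a binary (like
# the `strings` utility) and match the string set against known families.
def printable_strings(data: bytes, min_len=4):
    """Extract runs of >= min_len printable ASCII bytes."""
    return {s.decode("ascii") for s in re.findall(rb"[ -~]{%d,}" % min_len, data)}

def classify(sample: bytes, families: dict) -> str:
    feats = printable_strings(sample)
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(families, key=lambda name: jaccard(feats, families[name]))

# Invented per-family string profiles and a fake ELF-like sample.
families = {
    "mirai-like": {"/bin/busybox", "telnetd", "watchdog"},
    "gafgyt-like": {"PING", "PONG", "UDPFLOOD"},
}
sample = b"\x7fELF\x01\x00/bin/busybox\x00telnetd\x00\x02\x03"
```

A real pipeline would feed the extracted strings into a trained classifier rather than this similarity toy, but the feature itself is architecture-independent either way.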
Sartoli, Sara, Wei, Yong, Hampton, Shane.  2020.  Malware Classification Using Recurrence Plots and Deep Neural Network. 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA). :901–906.
In this paper, we introduce a method for visualizing and classifying malware binaries. A malware binary consists of a series of data points of compiled machine codes that represent programming components. The occurrence and recurrence behavior of these components is determined by the common tasks malware samples in a particular family carry out. Thus, we view a malware binary as a series of emissions generated by an underlying stochastic process and use recurrence plots to transform malware binaries into two-dimensional texture images. We observe that recurrence plot-based malware images have significant visual similarities within the same family and are different from samples in other families. We apply deep CNN classifiers to classify malware samples. The proposed approach does not require creating malware signatures or manual feature engineering. Our preliminary experimental results show that the proposed malware representation leads to a higher and more stable accuracy in comparison to directly transforming malware binaries to gray-scale images.
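A recurrence plot over a byte sequence can be sketched directly: pixel (i, j) is set when bytes i and j are close to each other. The distance threshold and the lack of time-delay embedding are simplifications; the paper's exact construction may differ.

```python
import numpy as np

# Sketch of turning a byte sequence into a recurrence-plot texture image:
# pixel (i, j) is on when byte i and byte j are within eps of each other.
def recurrence_plot(byte_series, eps=8):
    x = np.asarray(byte_series, dtype=np.int16)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(np.uint8)

# Recurring byte values show up as symmetric texture in the plot.
rp = recurrence_plot([10, 12, 200, 11, 199])
```

Families that reuse the same code components produce similar recurrence textures, which is what the CNN classifier then picks up on.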
Jin, Xiang, Xing, Xiaofei, Elahi, Haroon, Wang, Guojun, Jiang, Hai.  2020.  A Malware Detection Approach Using Malware Images and Autoencoders. 2020 IEEE 17th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). :1–6.
Most machine learning-based malware detection systems use various supervised learning methods to classify different instances of software as benign or malicious. This approach provides no information regarding the behavioral characteristics of malware. It also requires a large amount of training data, is prone to labeling difficulties, and can lose accuracy due to redundant training data. Therefore, we propose a malware detection method based on deep learning, which uses malware images and a set of autoencoders to detect malware. The method is to design an autoencoder to learn the functional characteristics of malware, and then to observe the reconstruction error of the autoencoder to realize the classification and detection of malware and benign software. The proposed approach achieves 93% accuracy and comparatively better F1-score values while detecting malware, and needs little training data compared with traditional malware detection systems.
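The decision rule here, thresholding on reconstruction error, can be shown with a linear autoencoder (PCA) standing in for the paper's deep autoencoder. All data and dimensions below are synthetic, purely to demonstrate the rule.

```python
import numpy as np

# Linear-autoencoder stand-in: fit a low-rank reconstruction on vectors of
# one class, then flag inputs whose reconstruction error is large. The
# synthetic "malware image" vectors are invented for illustration.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 16)) @ rng.normal(size=(16, 64))  # rank-16 data
mean = train.mean(axis=0)

# "Encoder/decoder": projection onto the top-16 principal directions.
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:16].T                                   # 64 x 16

def reconstruction_error(x):
    z = (x - mean) @ V                          # encode
    x_hat = z @ V.T + mean                      # decode
    return float(np.linalg.norm(x - x_hat))

in_class = train[0]                             # looks like the training class
out_of_class = rng.normal(size=64) * 5          # does not
```

Inputs resembling the training class reconstruct almost perfectly, while out-of-class inputs leave a large residual; a threshold on that residual is the detector.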
Kartel, Anastasia, Novikova, Evgenia, Volosiuk, Aleksandr.  2020.  Analysis of Visualization Techniques for Malware Detection. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :337–340.
Due to the steady growth of various sophisticated types of malware, malware analysis systems are becoming more and more in demand. While various automatic approaches are available to identify and detect malware, malware analysis is still a time-consuming process. Visualization-driven techniques may significantly increase the efficiency of the malware analysis process by involving the human visual system, which is a powerful pattern seeker. In this paper the authors review different visualization methods and examine their features and the tasks solved with their help. The paper presents the most commonly used approaches and discusses open challenges in malware visual analytics.
2021-09-16
Mancini, Federico, Bruvoll, Solveig, Melrose, John, Leve, Frederick, Mailloux, Logan, Ernst, Raphael, Rein, Kellyn, Fioravanti, Stefano, Merani, Diego, Been, Robert.  2020.  A Security Reference Model for Autonomous Vehicles in Military Operations. 2020 IEEE Conference on Communications and Network Security (CNS). :1–8.
In a previous article [1] we proposed a layered framework to support the assessment of the security risks associated with the use of autonomous vehicles in military operations and determine how to manage these risks appropriately. We established consistent terminology and defined the problem space, while exploring the first layer of the framework, namely risks from the mission assurance perspective. In this paper, we develop the second layer of the framework. This layer focuses on the risk assessment of the vehicles themselves and on producing a high-level security design adequate for the mission defined in the first layer. To support this process, we also define a reference model for autonomous vehicles to use as a common basis for the assessment of risks and the design of the security controls.
2021-09-07
Vamsi, G Krishna, Rasool, Akhtar, Hajela, Gaurav.  2020.  Chatbot: A Deep Neural Network Based Human to Machine Conversation Model. 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
A conversational agent (chatbot) is computer software capable of communicating with humans using natural language processing. The crucial part of building any chatbot is the development of conversation. Despite many developments in Natural Language Processing (NLP) and Artificial Intelligence (AI), creating a good chatbot model remains a significant challenge in this field even today. A conversational bot can be used for countless errands. In general, chatbots need to understand the user's intent and deliver appropriate replies. A chatbot is a software program with a conversational interface that allows a user to converse in the same manner one would address a human. Hence, chatbots are used in almost every customer communication platform, like social networks. At present, there are two basic models used in developing a chatbot: generative-based models and retrieval-based models. Recent advancements in deep learning and artificial intelligence, such as end-to-end trainable neural networks, have rapidly replaced earlier methods based on hand-written instructions and patterns or statistical methods. This paper proposes a new method of creating a chatbot using deep learning. In this method, a neural network with multiple layers is built to learn and process the data.
Kumar, Nripesh, Srinath, G., Prataap, Abhishek, Nirmala, S. Jaya.  2020.  Attention-based Sequential Generative Conversational Agent. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1–6.
In this work, we examine the method of enabling computers to understand human interaction by constructing a generative conversational agent. An experimental approach is presented that applies natural language processing techniques using recurrent neural networks (RNNs) to emulate textual entailment, or human reasoning. To achieve this functionality, our experiment involves developing an integrated Long Short-Term Memory (LSTM) neural network system enhanced with an attention mechanism. The results achieved by the model are shown as epoch-versus-loss graphs as well as a brief illustration of the model's conversational capabilities.
Choi, Ho-Jin, Lee, Young-Jun.  2020.  Deep Learning Based Response Generation using Emotion Feature Extraction. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :255–262.
Neural response generation is to generate a human-like response to a human utterance using deep learning. Previous studies showed that expressing emotion in response generation improves user performance, user engagement, and user satisfaction, and that conversational agents can communicate with users at the human level. However, previous emotional response generation models cannot understand the subtle parts of emotions, because they use the desired emotion of the response as a token form. Moreover, it is difficult for such models to generate natural responses related to the input utterance at the content level, since the information of the input utterance can be biased towards the emotion token. To overcome these limitations, we propose an emotional response generation model which generates emotional and natural responses by using emotion feature extraction. Our model consists of two parts: an extraction part and a generation part. The extraction part extracts the emotion of the input utterance as a vector by using a pre-trained LSTM-based classification model. The generation part generates an emotional and natural response to the input utterance by reflecting the emotion vector from the extraction part and the thought vector from the encoder. We evaluate our model on the emotion-labeled dialogue dataset DailyDialog, with quantitative and qualitative analyses: emotion classification, response generation modeling, and a comparative study. In general, experiments show that the proposed model can generate emotional and natural responses.
2021-08-31
Rathod, Pawan Manoj, Shende, RajKumar K..  2020.  Recommendation System using optimized Matrix Multiplication Algorithm. 2020 IEEE International Symposium on Sustainable Energy, Signal Processing and Cyber Security (iSSSC). :1–4.
The Volume, Variety, Velocity, Veracity and Value of data have drawn the attention of many analysts in the last few years. Performance optimization and comparison are the main challenges when dealing with a humongous volume of data. Data analysts use data for activities like forecasting or deep learning, and various tools are available to process these data with minimum effort. A recommendation system plays a crucial role in running any business such as a shopping website or travel agency, where the system recommends items to users according to their search history, likes, comments, or past order/booking details. Recommendation systems work on various strategies such as content filtering, collaborative filtering, neighborhood methods, or matrix factorization methods. Depending on the data, a specific strategy can be the best or the worst choice for a given scenario. Matrix factorization is the key point of interest in this work. The matrix factorization strategy includes multiplication of a user matrix and an item matrix in order to get a rating matrix that can be recommended to the users. Matrix multiplication can be achieved by using various algorithms such as the Naive algorithm, Strassen's algorithm, or the Coppersmith-Winograd (CW) algorithm. In this work, a new algorithm is proposed to reduce the time and space complexity of matrix multiplication, which helps to obtain results much faster. Using the matrix factorization strategy with various matrix multiplication algorithms, we perform a comparative analysis to show that the proposed algorithm is more efficient.
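The matrix factorization step itself, approximating the rating matrix as the product of a user-factor matrix and an item-factor matrix, can be sketched with plain gradient descent. The ratings, rank, and learning rate below are illustrative, and the multiplication-algorithm optimizations the abstract studies are not reproduced here.

```python
import numpy as np

# Learn U (user factors) and V (item factors) so that U @ V.T matches the
# observed ratings; the unobserved cells (0) are then predicted by the
# product. Toy data and hyperparameters, purely for illustration.
R = np.array([[5., 3., 0.], [4., 0., 1.], [1., 1., 5.]])
mask = R > 0                                  # which ratings are observed
rng = np.random.default_rng(1)
U = rng.normal(scale=0.1, size=(3, 2))        # 3 users, 2 latent factors
V = rng.normal(scale=0.1, size=(3, 2))        # 3 items, 2 latent factors

for _ in range(10000):                        # plain gradient descent
    err = (R - U @ V.T) * mask                # error on observed cells only
    U += 0.01 * err @ V
    V += 0.01 * err.T @ U

pred = U @ V.T                                # full predicted rating matrix
```

The previously-zero cells of `pred` now hold rating estimates, which can be ranked into a recommendation list; the repeated `U @ V.T` products are exactly where a faster multiplication algorithm would pay off.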
Di Noia, Tommaso, Malitesta, Daniele, Merra, Felice Antonio.  2020.  TAaMR: Targeted Adversarial Attack against Multimedia Recommender Systems. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :1–8.
Deep learning classifiers are hugely vulnerable to adversarial examples, and their existence has raised cybersecurity concerns in many tasks, with an emphasis on malware detection, computer vision, and speech recognition. While there is a considerable effort to investigate attacks and defense strategies in these tasks, only limited work explores the influence of targeted attacks on input data (e.g., images, textual descriptions, audio) used in multimedia recommender systems (MR). In this work, we examine the consequences of applying targeted adversarial attacks against the product images of a visual-based MR. We propose a novel adversarial attack approach, called Targeted Adversarial Attack against Multimedia Recommender Systems (TAaMR), to investigate the modification of MR behavior when the images of a category of low-recommended products (e.g., socks) are perturbed, with slight, human-imperceptible image alterations, to make the deep neural classifier misclassify them towards the class of more-recommended products (e.g., running shoes). We explore the TAaMR approach by studying the effect of two targeted adversarial attacks (i.e., FGSM and PGD) against the input pictures of two state-of-the-art MR (i.e., VBPR and AMR). Extensive experiments on two real-world fashion recommender datasets confirm the effectiveness of TAaMR in terms of changes to recommendation lists while keeping the original human judgment on the perturbed images.
2021-08-12
Karie, Nickson M., Sahri, Nor Masri, Haskell-Dowland, Paul.  2020.  IoT Threat Detection Advances, Challenges and Future Directions. 2020 Workshop on Emerging Technologies for Security in IoT (ETSecIoT). :22–29.
It is predicted that the number of connected Internet of Things (IoT) devices will rise to 38.6 billion by 2025 and an estimated 50 billion by 2030. The increased deployment of IoT devices into diverse areas of our life has provided us with significant benefits such as improved quality of life and task automation. However, each time a new IoT device is deployed, new and unique security threats emerge or are introduced into the environment in which the device must operate. Instantaneous detection and mitigation of every security threat introduced by different IoT devices can be very challenging, because many IoT devices are manufactured with no consideration of their security implications. In this paper, therefore, we review existing literature and present IoT threat detection research advances, with a focus on the various IoT security challenges as well as current developments towards combating cyber security threats in IoT networks. The paper also highlights several future research directions in the IoT domain.
Zheng, Yifeng, Pal, Arindam, Abuadbba, Sharif, Pokhrel, Shiva Raj, Nepal, Surya, Janicke, Helge.  2020.  Towards IoT Security Automation and Orchestration. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :55–63.
The massive boom of the Internet of Things (IoT) has led to the explosion of smart IoT devices and the emergence of various applications such as smart cities, smart grids, smart mining, connected health, and more. While the proliferation of IoT systems promises many benefits for different sectors, it also exposes a large attack surface, raising an imperative need to put security first. It is impractical to rely heavily on manual operations to deal with the security of massive numbers of IoT devices and applications. Hence, there is a strong need for securing IoT systems with minimum human intervention. In light of this situation, in this paper, we envision security automation and orchestration for IoT systems. After conducting a comprehensive evaluation of the literature and having conversations with industry partners, we envision a framework integrating key elements towards this goal. For each element, we investigate the existing landscape, discuss the current challenges, and identify future directions. We hope that this paper will bring the attention of the academic and industrial community towards solving challenges related to security automation and orchestration for IoT systems.
2021-08-11
Nan, Satyaki, Brahma, Swastik, Kamhoua, Charles A., Njilla, Laurent L..  2020.  On Development of a Game‐Theoretic Model for Deception‐Based Security. Modeling and Design of Secure Internet of Things. :123–140.
This chapter presents a game‐theoretic model to analyze attack–defense scenarios that use fake nodes (computing devices) for deception under consideration of the system deploying defense resources to protect individual nodes in a cost‐effective manner. The developed model has important applications in the Internet of Battlefield Things (IoBT). Our game‐theoretic model illustrates how the concept of the Nash equilibrium can be used by the defender to intelligently choose which nodes should be used for performing a computation task while deceiving the attacker into expending resources for attacking fake nodes. Our model considers the fact that defense resources may become compromised under an attack and suggests that the defender, in a probabilistic manner, may utilize unprotected nodes for performing a computation while the attacker is deceived into attacking a node with defense resources installed. The chapter also presents a deception‐based strategy to protect a target node that can be accessed via a tree network. Numerical results provide insights into the strategic deception techniques presented in this chapter.
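The Nash-equilibrium reasoning can be made concrete with a toy 2x2 zero-sum game: the defender chooses which of two nodes runs the real computation, the attacker chooses which node to attack, and the other node acts as a decoy. The payoffs below are invented, not the chapter's model; for a 2x2 zero-sum game with no pure saddle point, the mixed equilibrium has a closed form.

```python
# Row player = defender (payoffs shown); column player = attacker (zero-sum).
def mixed_equilibrium_2x2(A):
    """Defender's equilibrium probabilities for rows 0 and 1."""
    (a, b), (c, d) = A
    p = (d - c) / (a - b - c + d)   # mix that makes the attacker indifferent
    return p, 1 - p

# Rows: run the real task on node 0 / node 1; columns: attack node 0 / 1.
# Losing high-value node 0 hurts more (-2); an attack on the decoy wastes
# the attacker's resources (+1 to the defender). Illustrative payoffs.
A = [[-2, 1], [1, -1]]
p0, p1 = mixed_equilibrium_2x2(A)
```

Here the defender places the real computation on the high-value node only 40% of the time, deliberately randomizing so the attacker gains nothing by predicting the placement, which is the essence of the deception strategy described above.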
Masuduzzaman, Md, Islam, Anik, Rahim, Tariq, Young Shin, Soo.  2020.  Blockchain-Assisted UAV-Employed Casualty Detection Scheme in Search and Rescue Mission in the Internet of Battlefield Things. 2020 International Conference on Information and Communication Technology Convergence (ICTC). :412–416.
As the unmanned aerial vehicle (UAV) can play a vital role in collecting information remotely on a military battlefield, researchers have shown great interest in the domain of the Internet of Battlefield Things (IoBT). In a rescue mission on a battlefield, a UAV can collect data from different regions to identify the casualty of a soldier. One of the major challenges in IoBT is to identify a soldier in a complex environment. An image processing algorithm can be helpful if a proper methodology is applied to identify the victims. However, due to the limited hardware resources of a UAV, the processing task can be handed over to a nearby edge computing server for offloading, as every second is crucial on a battlefield. Furthermore, to avoid any third-party interaction in the network and to store the data securely, blockchain can help to create a trusted network, as it forms a distributed ledger among the participants. This paper proposes a UAV-assisted casualty detection scheme based on an image processing algorithm where data is protected using blockchain technology. Result analysis has been conducted to successfully identify victims on the battlefield using the image processing algorithm, and network issues like throughput and delay have been analyzed in detail using public-key cryptography.