Biblio

Filters: Keyword is Weapons
Paschal Mgembe, Innocent, Ladislaus Msongaleli, Dawson, Chaudhary, Naveen Kumar.  2022.  Progressive Standard Operating Procedures for Darkweb Forensics Investigation. 2022 10th International Symposium on Digital Forensics and Security (ISDFS). :1–3.
With the advent of information and communication technology, the digital space has become a playground for criminal activities. In the real world, criminals typically prefer darkness or hidden places for their illegal activities, sometimes covering their faces to avoid being exposed and caught. The same applies in the digital world, where criminals prefer features that provide anonymity. In this spirit, the Darkweb attracts all kinds of criminal activity conducted over the Internet, such as selling drugs, illegal weapons, child pornography, assassination for hire, hackers for hire, and malicious exploits, to mention a few. Although the anonymity offered by the Darkweb can be exploited as a tool to arrest criminals involved in cybercrime, in-depth research is needed to advance criminal investigation on the Darkweb. Analysis of illegal activities conducted on the Darkweb is in its infancy and faces several challenges, such as the lack of standard operating procedures. This study proposes progressive standard operating procedures (SOPs) for Darkweb forensics investigation. The proposed SOP consists of four stages: identification and profiling; discovery; acquisition and preservation; and analysis and reporting. For each stage, we consider the objectives, tools and expected results. Careful application of this SOP revealed promising results in Darkweb investigation.
Kiruthiga, G, Saraswathi, P, Rajkumar, S, Suresh, S, Dhiyanesh, B, Radha, R.  2022.  Effective DDoS Attack Detection using Deep Generative Radial Neural Network in the Cloud Environment. 2022 7th International Conference on Communication and Electronics Systems (ICCES). :675–681.
Recently, internet services have increased rapidly due to the Covid-19 epidemic. As a result, the use of cloud computing applications, which serve end-users on a subscription basis, is rising. Cloud computing provides end-users various benefits, such as cost savings, time savings, and access to online resources via the internet. But as the number of cloud users increases, so does the potential for attacks. The availability and efficiency of cloud computing resources may be affected by a Distributed Denial of Service (DDoS) attack, which disrupts the availability and processing power of cloud services. DDoS attacks pose a serious threat to the integrity and confidentiality of computer networks and systems, which remain important assets in the world today. Since there is no effective way to detect DDoS attacks, they are a reliable weapon for cyber-attackers. The existing methods, however, suffer from limitations such as relatively low detection accuracy and high false-alarm rates. To tackle these issues, this paper proposes a Deep Generative Radial Neural Network (DGRNN) with a sigmoid activation function and Mutual Information Gain based Feature Selection (MIGFS) for detecting DDoS attacks in the cloud environment. First, the data are pre-processed using the NSL-KDD (Network Security Lab KDD) dataset. The MIGFS algorithm then selects the most relevant features for DDoS detection from the pre-processed dataset; the features are scored by trust evaluation for detecting attacks based on relative features. After that, the DGRNN algorithm is used for classification to detect DDoS attacks, with the sigmoid activation function producing accurate prediction results in the cloud environment. The proposed approach thus provides effective classification accuracy, performance, and time complexity.
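The MIGFS step above can be approximated with an off-the-shelf mutual-information feature selector; the snippet below is a hedged sketch on synthetic data, not the authors' implementation (the DGRNN classifier itself is not described in runnable detail).

```python
# Illustrative sketch of mutual-information-based feature selection, in the
# spirit of the MIGFS step. The dataset and the choice of k are assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in for pre-processed NSL-KDD records.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Keep the 5 features with the highest mutual information gain
# with respect to the attack/benign label.
selector = SelectKBest(score_func=mutual_info_classif, k=5)
X_reduced = selector.fit_transform(X, y)

print(X_reduced.shape)  # (500, 5)
```

The reduced feature matrix would then be handed to the downstream classifier in place of the full feature set.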
Ramneet, Mudita, Gupta, Deepali.  2022.  ASMBoT: An Intelligent Sanitizing Robot in the Coronavirus Outbreak. 2022 1st IEEE International Conference on Industrial Electronics: Developments & Applications (ICIDeA). :106–109.
Technology plays a vital role in meeting our basic hygiene necessities. Currently, the whole world is facing an epidemic, and the use of sanitizers has become common practice. People use sanitizers to sanitize their hands and bodies, as well as objects that come into contact with a machine. When sanitizing a small area, people manage with pumps, but it becomes difficult to sanitize the same area every day. One of the most pressing sanitation concerns is finding a simple, economical and efficient method to adequately clean indoor and outdoor environments; effective sanitization is particularly required for people working in clinical environments. Recently used sanitizer techniques include electric sanitizer spray guns and electric sanitizer disinfectants. However, these sanitizers are not automated: a person must roam with the device to every place to spray the disinfectant or sanitize an area. Therefore, a novel, cost-effective automatic sanitizing machine (ASM) named ASMBoT is designed that dispenses sanitizer effectively while solving the aforementioned problems.
Huang, Dapeng, Chen, Haoran, Wang, Kai, Chen, Chen, Han, Weili.  2022.  A Traceability Method for Bitcoin Transactions Based on Gateway Network Traffic Analysis. 2022 International Conference on Networking and Network Applications (NaNA). :176–183.
Cryptocurrencies like Bitcoin have become a popular weapon for illegal activities. Their decentralization and anonymity can effectively evade the supervision of government departments. How to de-anonymize Bitcoin transactions is therefore a crucial issue for regulatory and judicial investigation departments seeking to supervise and combat crimes involving Bitcoin. This paper aims to de-anonymize Bitcoin transactions and presents a Bitcoin transaction traceability method based on Bitcoin network traffic analysis. Exploiting the characteristics of the physical network that the Bitcoin network relies on, Bitcoin network traffic is captured at the physical convergence point of the local Bitcoin network. By analyzing the collected traffic data, we trace the input address of Bitcoin transactions, and we test the scheme in a distributed Bitcoin network environment. The experimental results show that this traceability mechanism is suitable for nodes connected directly to the Bitcoin network (i.e., not via VPN, Tor, etc.) and achieves a 47.5% recall rate and a 70.4% precision rate, which are promising in practice.
Philomina, Josna, Fahim Fathima, K A, Gayathri, S, Elias, Glory Elizabeth, Menon, Abhinaya A.  2022.  A comparitative study of machine learning models for the detection of Phishing Websites. 2022 International Conference on Computing, Communication, Security and Intelligent Systems (IC3SIS). :1–7.
Global cybersecurity threats have grown as a result of the evolving digital transformation, giving cybercriminals more opportunities. Initially, cyberthreats take the form of phishing in order to gain confidential user credentials. As cyber-attacks become ever more sophisticated, the cybersecurity industry faces the problem of utilising cutting-edge technology and techniques to combat ever-present hostile threats. Hackers use phishing to persuade customers to grant them access to a company's digital assets and networks. As technology progressed, phishing attempts became more sophisticated, necessitating tools to detect phishing. Machine learning is one of the most powerful weapons in the fight against these threats. This study discusses the features used for phishing detection and the machine learning approaches employed. In this light, the study's major goal is to propose a unique, robust ensemble machine learning model architecture that gives the highest prediction accuracy with the lowest error rate, while also recommending a few alternative robust machine learning models. The Random Forest algorithm alone attained a maximum accuracy of 96.454 percent, but a hybrid model combining three classifiers (Decision Tree, Random Forest, and Gradient Boosting) increases the accuracy to 98.4 percent.
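The hybrid ensemble the abstract describes can be sketched with scikit-learn's majority-vote combinator; the dataset, features, and hyperparameters below are illustrative assumptions, not the study's setup.

```python
# A minimal sketch of a hybrid ensemble of decision tree, random forest,
# and gradient boosting classifiers under majority voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted phishing-website features.
X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="hard")  # majority vote across the three classifiers
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 3))
```

With real phishing features the reported gain over any single classifier comes from the vote cancelling out individual models' errors.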
Liu, Qin, Yang, Jiamin, Jiang, Hongbo, Wu, Jie, Peng, Tao, Wang, Tian, Wang, Guojun.  2022.  When Deep Learning Meets Steganography: Protecting Inference Privacy in the Dark. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications. :590–599.
While cloud-based deep learning enables high-accuracy inference, it introduces potential privacy risks when sensitive data are exposed to untrusted servers. In this paper, we explore the feasibility of steganography for preserving inference privacy. Specifically, we devise GHOST and GHOST+, two private inference solutions employing steganography to make sensitive images invisible in the inference phase. Motivated by the fact that deep neural networks (DNNs) are inherently vulnerable to adversarial attacks, our main idea is to turn this vulnerability into a weapon for data privacy, enabling the DNN to misclassify a stego image into the class of the sensitive image hidden in it. The main difference between the two is that GHOST retrains the DNN into a poisoned network that learns the hidden features of sensitive images, whereas GHOST+ leverages a generative adversarial network (GAN) to produce adversarial perturbations without altering the DNN. For enhanced privacy and a better computation-communication trade-off, both solutions adopt an edge-cloud collaborative framework. Compared with previous solutions, this is the first work that successfully integrates steganography and the nature of DNNs to achieve private inference while ensuring high accuracy. Extensive experiments validate that steganography has excellent ability in accuracy-aware privacy protection of deep learning.
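For readers unfamiliar with steganography, a classic least-significant-bit (LSB) scheme makes the notion of a "stego image" concrete; note that GHOST and GHOST+ rely on learned adversarial perturbations rather than this simple scheme.

```python
# Generic LSB steganography over raw bytes, shown only to illustrate the
# idea of hiding data inside a carrier; not the GHOST/GHOST+ mechanism.
def embed(cover: bytes, secret_bits: str) -> bytes:
    """Hide one bit of the secret in the LSB of each cover byte."""
    assert len(secret_bits) <= len(cover)
    out = bytearray(cover)
    for i, bit in enumerate(secret_bits):
        out[i] = (out[i] & 0xFE) | int(bit)
    return bytes(out)

def extract(stego: bytes, n_bits: int) -> str:
    """Read the hidden bits back out of the stego bytes."""
    return "".join(str(b & 1) for b in stego[:n_bits])

cover = bytes(range(16))      # stand-in for image pixel values
stego = embed(cover, "10110")
print(extract(stego, 5))      # -> 10110
```

Because only the lowest bit of each byte changes, the stego carrier is visually indistinguishable from the cover, which is the property the paper exploits at a much more sophisticated level.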
ISSN: 2641-9874
Figueiredo, Cainã, Lopes, João Gabriel, Azevedo, Rodrigo, Zaverucha, Gerson, Menasché, Daniel Sadoc, Pfleger de Aguiar, Leandro.  2021.  Software Vulnerabilities, Products and Exploits: A Statistical Relational Learning Approach. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :41–46.
Data on software vulnerabilities, products and exploits is typically collected from multiple non-structured sources. Valuable information, e.g., on which products are affected by which exploits, is conveyed by matching data from those sources, i.e., through their relations. In this paper, we leverage this simple albeit unexplored observation to introduce a statistical relational learning (SRL) approach for the analysis of vulnerabilities, products and exploits. In particular, we focus on the problem of determining the existence of an exploit for a given product, given information about the relations between products and vulnerabilities, and vulnerabilities and exploits, focusing on Industrial Control Systems (ICS), the National Vulnerability Database and ExploitDB. Using RDN-Boost, we were able to reach an AUC ROC of 0.83 and an AUC PR of 0.69 for the problem at hand. To reach that performance, we show that it is instrumental to include textual features, e.g., extracted from the description of vulnerabilities, as well as structured information, e.g., about product categories. In addition, using interpretable relational regression trees we report simple rules that shed light on factors impacting the weaponization of ICS products.
Gupta, B. B., Gaurav, Akshat, Peraković, Dragan.  2021.  A Big Data and Deep Learning based Approach for DDoS Detection in Cloud Computing Environment. 2021 IEEE 10th Global Conference on Consumer Electronics (GCCE). :287–290.
Recently, as a result of the COVID-19 pandemic, internet use has surged, and with it the usage of cloud computing apps, which offer services to end users on a subscription basis. However, the availability and efficiency of cloud computing resources are impacted by DDoS attacks, which are designed to disrupt the availability and processing power of cloud computing services. Because there is no effective way of detecting or filtering DDoS attacks, they are a dependable weapon for cyber-attackers. Recently, researchers have been experimenting with machine learning (ML) methods in order to create efficient ML-based strategies for detecting DDoS assaults. In this context, we propose a technique for detecting DDoS attacks in a cloud computing environment using big data and deep learning algorithms. The proposed technique utilises Apache Spark big data technology to analyse a large number of incoming packets and a deep learning algorithm to filter malicious packets. The KDDCUP99 dataset was used for training and testing, and an accuracy of 99.73% was achieved.
Hung, Benjamin W.K., Muramudalige, Shashika R., Jayasumana, Anura P., Klausen, Jytte, Libretti, Rosanne, Moloney, Evan, Renugopalakrishnan, Priyanka.  2019.  Recognizing Radicalization Indicators in Text Documents Using Human-in-the-Loop Information Extraction and NLP Techniques. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.
Among the operational shortfalls that hinder law enforcement from achieving greater success in preventing terrorist attacks is the difficulty in dynamically assessing individualized violent extremism risk at scale given the enormous amount of primarily text-based records in disparate databases. In this work, we undertake the critical task of employing natural language processing (NLP) techniques and supervised machine learning models to classify textual data in analyst and investigator notes and reports for radicalization behavioral indicators. This effort to generate structured knowledge will build towards an operational capability to assist analysts in rapidly mining law enforcement and intelligence databases for cues and risk indicators. In the near-term, this effort also enables more rapid coding of biographical radicalization profiles to augment a research database of violent extremists and their exhibited behavioral indicators.
Liu, Xutao, Li, Qixiang.  2021.  Asymmetric Analysis of Anti-Terrorist Operations and Demand for Light Weapons under the Condition of Informationization. 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). :1152–1155.

Asymmetric warfare and the anti-terrorist war have become a new style of military struggle in the new century, which will inevitably have an important impact on the military economies of various countries and catalyze innovation in military logistics theory and practice. War in the information age is a confrontation between systems, and "comprehensive integration" is the guiding idea both for building information-war capability and for building deterrence capability in the information age. Looking at local wars under modern informationized conditions, it is not difficult to see that the status and role of light weapons and equipment have not diminished; on the contrary, higher demands have been placed on their combat performance. From a forward-looking perspective, and based on our army's preparation and logistics support for future asymmetric operations and the anti-terrorist military struggle, this strategic issue is discussed in depth.

Khichi, Manish, Kumar Yadav, Rajesh.  2021.  A Threat of Deepfakes as a Weapon on Digital Platform and their Detection Methods. 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). :01–08.
Advances in machine learning, deep learning, and Artificial Intelligence (AI) allow people to swap other people's faces and voices in videos, making it look like they did or said things they never did. These videos and photos, called "deepfakes," are getting more sophisticated every day, and this has lawmakers worried. The technology uses machine learning to feed computers real image data so that forgeries can be produced. The creators of deepfakes use artificial intelligence and machine learning algorithms to mimic the work and characteristics of real humans. Deepfakes differ from traditional counterfeit media in that they are difficult to identify. As the 2020 elections loomed, AI-generated deepfakes hit the news cycle. Deepfakes threaten facial recognition and online content, and this deception can be dangerous: if misused, the technique can do enormous damage through fake video, voice, and audio clips. This paper examines the algorithms used to generate deepfakes as well as the methods proposed to detect them. We go through the threats, research trends, and future directions for deepfake technologies in detail. By studying the history of deepfakes, this research provides a detailed description of the technology and encourages the creation of new and more powerful methods to deal with increasingly severe forgeries.
Whittle, Cameron S., Liu, Hong.  2021.  Effectiveness of Entropy-Based DDoS Prevention for Software Defined Networks. 2021 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.
This work investigates entropy-based prevention of Distributed Denial-of-Service (DDoS) attacks for Software Defined Networks (SDN). The experiments are conducted on a virtual SDN testbed set up within Mininet, a Linux-based network emulator. An arms race iterates on the SDN testbed between offense, launching botnet-based DDoS attacks of progressive sophistication, and defense, deploying SDN controls with emerging technologies from other facets of cyber engineering. The investigation focuses on the Transmission Control Protocol (TCP) SYN flood attack, which exploits vulnerabilities in the three-way TCP handshake to lock up a host from serving new users. The defensive strategy starts with a common packet-filtering design from the literature to mitigate attacks. Utilizing machine learning algorithms, SDNs actively monitor all possible traffic as a collective dataset to detect DDoS attacks in real time. A constant upgrade to a stronger defense is necessary, as cyber/network security is an ongoing front where attackers always have the element of surprise. The defense further invests in entropy methods to improve early detection of DDoS attacks within the testbed environment. Entropy allows SDNs to learn the expected normal traffic patterns for a network as a whole using real-time mathematical calculations, so that the SDN controllers can sense distributed attack vectors building up before they overwhelm the network. This work reveals the vulnerabilities of SDNs to stealthy DDoS attacks and demonstrates the effectiveness of deploying entropy in SDN controllers for detection and mitigation purposes. Future work includes provisions to use these entropy detection methods, as part of a larger system, to redirect traffic and protect networks dynamically in real time. Other types of DoS, such as ransomware, will also be considered.
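The entropy idea is easy to make concrete: compute the Shannon entropy over the destination addresses in a traffic window and flag windows whose entropy collapses. The window contents and threshold below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of entropy-based DDoS detection: normal traffic spreads across many
# destinations (high entropy); a flood aimed at one victim collapses the
# destination-address entropy. Threshold and window are illustrative.
import math
from collections import Counter

def shannon_entropy(items) -> float:
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

THRESHOLD = 1.0  # bits; tuned per network in practice

def is_suspicious(dst_ips) -> bool:
    return shannon_entropy(dst_ips) < THRESHOLD

normal = ["10.0.0.%d" % (i % 8) for i in range(64)]  # varied destinations
attack = ["10.0.0.1"] * 60 + ["10.0.0.2"] * 4        # one victim dominates

print(is_suspicious(normal), is_suspicious(attack))  # -> False True
```

An SDN controller would run this per sliding window over packet-in events and trigger a mitigation flow rule when the flag fires.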
Mahor, Vinod, Rawat, Romil, Kumar, Anil, Chouhan, Mukesh, Shaw, Rabindra Nath, Ghosh, Ankush.  2021.  Cyber Warfare Threat Categorization on CPS by Dark Web Terrorist. 2021 IEEE 4th International Conference on Computing, Power and Communication Technologies (GUCON). :1–6.
The Industrial Internet of Things (IIoT), also referred to as Cyber Physical Systems (CPS), comprises critical elements expected to play a key role in Industry 4.0, and has always been vulnerable to cyber-attacks. Terrorists use cyber vulnerabilities as weapons for mass destruction. The dark web's strong anonymity and hard-to-track systems offer a safe haven for criminal activity, and a wide variety of illicit material is posted there regularly. Traditional dark web (DW) categorization uses large-scale web pages for supervised training. However, new study is hampered by the difficulty of gathering sufficient illicit DW material and the time spent manually tagging web pages. In this article, we propose a system for accurately classifying criminal activity on the DW. Rather than depending on a vast DW training corpus, we used authoritative regulatory definitions of various types of illicit activity to train Machine Learning (ML) classifiers and obtained appreciable categorization results. Espionage, sabotage, attacks on the electrical power grid, propaganda, and economic disruption are the cyber-warfare motivations considered. We chose appropriate data from open-source links for supervised learning and ran a categorization experiment on illicit material obtained from the actual DW. The results show that, in the experimental setting, using TF-IDF feature extraction and an AdaBoost classifier, we were able to achieve an accuracy of 0.942. Our method enables researchers and regulatory agencies to verify whether their DW corpus includes such illicit activity according to the applicable rules of the illicit categories they are interested in, allowing them to identify and track possibly illicit websites in real time. Because a broad training set and expert-supplied seed keywords are not required, this categorization approach offers another option for identifying illicit activities on the DW.
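The TF-IDF plus AdaBoost pipeline reported above can be sketched in a few lines of scikit-learn; the toy documents, labels, and query below are invented placeholders, not dark-web data.

```python
# Hedged sketch of a TF-IDF + AdaBoost text classifier, in the spirit of the
# categorization experiment described above. Toy corpus, invented labels.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

docs = ["power grid scada intrusion exploit",
        "buy stolen credentials market",
        "grid substation malware sabotage",
        "marketplace escrow vendor listing"]
labels = ["sabotage", "economic", "sabotage", "economic"]

clf = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(random_state=0))
clf.fit(docs, labels)
print(clf.predict(["scada exploit for grid substation"])[0])
```

In the real experiment the documents are scraped DW pages and the label set covers all five cyber-warfare motivations.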
Deng, Perry, Linsky, Cooper, Wright, Matthew.  2020.  Weaponizing Unicodes with Deep Learning - Identifying Homoglyphs with Weakly Labeled Data. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.
Visually similar characters, or homoglyphs, can be used to perform social engineering attacks or to evade spam and plagiarism detectors. It is thus important to understand the capabilities of an attacker to identify homoglyphs - particularly ones that have not been previously spotted - and leverage them in attacks. We investigate a deep-learning model using embedding learning, transfer learning, and augmentation to determine the visual similarity of characters and thereby identify potential homoglyphs. Our approach uniquely takes advantage of weak labels that arise from the fact that most characters are not homoglyphs. Our model drastically outperforms the Normalized Compression Distance approach on pairwise homoglyph identification, for which we achieve an average precision of 0.97. We also present the first attempt at clustering homoglyphs into sets of equivalence classes, which is more efficient than pairwise information for security practitioners to quickly look up homoglyphs or to normalize confusable string encodings. To measure clustering performance, we propose a metric (mBIOU) building on the classic Intersection-Over-Union (IOU) metric. Our clustering method achieves 0.592 mBIOU, compared to 0.430 for the naive baseline. We also use our model to predict over 8,000 previously unknown homoglyphs, and find good early indications that many of these may be true positives. Source code and the list of predicted homoglyphs are available on GitHub.
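The mBIOU metric is described as building on IOU; one plausible reading, matching each predicted cluster to its best-overlapping ground-truth cluster and averaging the scores, can be sketched as follows (the paper's exact matching rule may differ).

```python
# Illustrative reconstruction of a mean-best-IOU score over homoglyph
# clusters; the matching rule is an assumption, not the paper's definition.
def iou(a: set, b: set) -> float:
    """Intersection-over-Union of two character sets."""
    return len(a & b) / len(a | b)

def mean_best_iou(predicted, truth) -> float:
    """For each predicted cluster, take its best IOU against any
    ground-truth cluster, then average over predicted clusters."""
    return sum(max(iou(p, t) for t in truth) for p in predicted) / len(predicted)

truth = [{"O", "0", "Ο"}, {"l", "1", "I"}]      # Ο is Greek Omicron
pred  = [{"O", "0"}, {"l", "1", "I", "|"}]
print(mean_best_iou(pred, truth))               # (2/3 + 3/4) / 2 ≈ 0.708
```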
Petrenko, Sergei A., Petrenko, Alexey S., Makoveichuk, Krystina A., Olifirov, Alexander V..  2020.  "Digital Bombs" Neutralization Method. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :446–451.
The article discusses new models and methods for timely identification and blocking of malicious code in critically important information infrastructure, based on static and dynamic analysis of executable program codes. A two-stage method for detecting malicious code in executable program codes (so-called "digital bombs") is described. The first step of the method is to build the initial program model in the form of a control graph; this construction is carried out during static analysis of the program. The article discusses the purpose, features and construction criteria of an ordered control graph. The second step of the method is to embed control points in the program's executable code in order to monitor the possible behavior of the program using a specially designed recognition automaton (a dynamic control automaton). Structural criteria for the completeness of the functional control of the subprogram are given. The practical implementation of the proposed models and methods was completed and presented in the IRIDA instrumental complex.
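The first step, building a control graph, can be illustrated on a toy instruction set in which each instruction either falls through, jumps, or halts; real tooling of course operates on executable machine code.

```python
# Toy illustration of control-graph construction from straight-line code
# with jumps. The mini instruction set (load/store/jmp/jz/halt) is invented.
def build_control_graph(program):
    """Map each instruction index to the indices it may transfer control to."""
    graph = {}
    for i, (op, arg) in enumerate(program):
        if op == "jmp":            # unconditional jump
            graph[i] = [arg]
        elif op == "jz":           # conditional: taken edge or fall-through
            graph[i] = [arg, i + 1]
        elif op == "halt":
            graph[i] = []
        else:                      # ordinary instruction falls through
            graph[i] = [i + 1]
    return graph

program = [("load", None), ("jz", 3), ("jmp", 4), ("store", None), ("halt", None)]
print(build_control_graph(program))
# -> {0: [1], 1: [3, 2], 2: [4], 3: [4], 4: []}
```

Control points in the second stage would then be inserted on the edges of such a graph so the dynamic control automaton can check that only these transitions ever occur at run time.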
Jain, Harsh, Vikram, Aditya, Mohana, Kashyap, Ankit, Jain, Ayush.  2020.  Weapon Detection using Artificial Intelligence and Deep Learning for Security Applications. 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). :193–198.
Security is always a main concern in every domain, due to rising crime rates in crowded events and in suspicious isolated areas. Anomaly detection and monitoring are major applications of computer vision for tackling such problems. Owing to the growing demand for the protection of safety, security and personal property, video surveillance systems that can recognize and interpret scenes and anomalous events play a vital role in intelligence monitoring. This paper implements automatic gun (or weapon) detection using convolutional neural network (CNN) based SSD and Faster R-CNN algorithms. The proposed implementation uses two types of datasets: one with pre-labelled images, and another consisting of images labelled manually. Results are tabulated; both algorithms achieve good accuracy, but their application in real situations depends on the trade-off between speed and accuracy.
Claycomb, W. R., Huth, C. L., Phillips, B., Flynn, L., McIntire, D..  2013.  Identifying indicators of insider threats: Insider IT sabotage. 2013 47th International Carnahan Conference on Security Technology (ICCST). :1–5.
This paper describes results of a study seeking to identify observable events related to insider sabotage. We collected information from actual insider threat cases, created chronological timelines of the incidents, identified key points in each timeline such as when attack planning began, measured the time between key events, and looked for specific observable events or patterns that insiders held in common that may indicate insider sabotage is imminent or likely. Such indicators could be used by security experts to potentially identify malicious activity at or before the time of attack. Our process included critical steps such as identifying the point of damage to the organization as well as any malicious events prior to zero hour that enabled the attack but did not immediately cause harm. We found that nearly 71% of the cases we studied had either no observable malicious action prior to attack, or had one that occurred less than one day prior to attack. Most of the events observed prior to attack were behavioral, not technical, especially those occurring earlier in the case timelines. Of the observed technical events prior to attack, nearly one third involved installation of software onto the victim organizations' IT systems.
Straub, J..  2020.  Modeling Attack, Defense and Threat Trees and the Cyber Kill Chain, ATT&CK and STRIDE Frameworks as Blackboard Architecture Networks. 2020 IEEE International Conference on Smart Cloud (SmartCloud). :148–153.

Multiple techniques for modeling cybersecurity attacks and defense have been developed. The use of tree-structures as well as techniques proposed by several firms (such as Lockheed Martin's Cyber Kill Chain, Microsoft's STRIDE and the MITRE ATT&CK frameworks) have all been demonstrated. These approaches model actions that can be taken to attack, or stopped to secure, infrastructure and other resources, at different levels of detail. This paper builds on prior work on using the Blackboard Architecture for cyberwarfare and proposes a generalized solution for modeling framework/paradigm-based attacks that go beyond the deployment of a single exploit against a single identified target. The Blackboard Architecture Cyber Command Entity attack Route (BACCER) identification system combines rules and facts that implement attack-type determination and attack decision-making logic with actions that implement reconnaissance techniques and attack and defense actions. BACCER's efficacy in modeling examples of tree-structures and other models is demonstrated herein.
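The rule-fact-action pattern underlying a Blackboard Architecture system such as BACCER can be sketched as a forward-chaining loop; the facts and rules below are invented for illustration, not taken from the paper.

```python
# Minimal forward-chaining blackboard: rules fire when their preconditions
# appear on the blackboard and post new facts, to a fixed point.
def run_blackboard(facts, rules):
    """Each rule is (preconditions: set, new_fact: str)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, post in rules:
            if pre <= facts and post not in facts:
                facts.add(post)
                changed = True
    return facts

rules = [({"port_22_open"}, "ssh_service"),
         ({"ssh_service", "weak_password"}, "credential_attack_viable")]
facts = run_blackboard({"port_22_open", "weak_password"}, rules)
print("credential_attack_viable" in facts)  # -> True
```

In a full blackboard system the posted facts would additionally trigger actions (reconnaissance steps, exploit deployment), which is the extension BACCER adds.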

Hou, M..  2020.  IMPACT: A Trust Model for Human-Agent Teaming. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
A trust model, IMPACT (Intention, Measurability, Predictability, Agility, Communication, and Transparency), has been conceptualized to build human trust in autonomous agents. These six critical characteristics must be exhibited by agents in order to gain and maintain trust from their human partners, towards an effective and collaborative team achieving common goals. The IMPACT model guided the design of an intelligent adaptive decision aid for dynamic target-engagement processes in a military context. Positive feedback from subject-matter experts who participated in a large-scale joint exercise controlling multiple unmanned vehicles indicated the effectiveness of the decision aid. It also demonstrated the utility of the IMPACT model as a set of design principles for building trusted human-agent teaming.
Katarya, R., Lal, A..  2020.  A Study on Combating Emerging Threat of Deepfake Weaponization. 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :485–490.
A breakthrough in the emerging use of machine learning and deep learning is the concept of autoencoders and GANs (Generative Adversarial Networks), architectures that can generate believable synthetic content called deepfakes. The threat arises when these low-tech doctored images, videos, and audio clips blur the line between fake and genuine content and are used as weapons to cause damage to an unprecedented degree. This paper presents a survey of the underlying technology of deepfakes and methods proposed for their detection. Based on a detailed study of all the proposed detection models, this paper presents SSTNet as the best model to date, which uses spatial, temporal, and steganalysis features for detection. The threat posed by document and signature forgery, which is yet to be explored by researchers, is also highlighted in this paper. This paper concludes with a discussion of research directions in this field and the development of more robust techniques to deal with the increasing threats surrounding deepfake technology.
Maram, S. S., Vishnoi, T., Pandey, S..  2019.  Neural Network and ROS based Threat Detection and Patrolling Assistance. 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP). :1–5.

As a uniform development platform that seamlessly combines hardware components and software architecture from various developers across the globe, and that reduces the complexity of producing robots which help people in their daily ergonomics, ROS has turned out to be a game changer. It is disappointing to see the lack of penetration of this technology in verticals involving protection, defense and security. By leveraging the power of ROS in the field of robotic automation and computer vision, this research paves the way for identification of suspicious activity by autonomously moving bots running on ROS. The paper proposes and validates a flow in which ROS and computer vision algorithms like YOLO work in sync with each other to provide smarter and more accurate methods for indoor and limited outdoor patrolling. Identification of age, gender, weapons and other elements that can disturb public harmony is an integral part of the research and development process. Simulation and testing reflect the efficiency and speed of the designed software architecture.

Russell, S., Abdelzaher, T., Suri, N..  2019.  Multi-Domain Effects and the Internet of Battlefield Things. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :724—730.

This paper reviews the definitions and characteristics of military effects, the Internet of Battlefield Things (IoBT), and their impact on decision processes in a Multi-Domain Operations (MDO) environment. Aspects of contemporary military decision processes are illustrated, and an MDO Effect Loop decision process is introduced. We examine the concept of IoBT effects and their implications in MDO. These implications suggest that when MDO is considered as a doctrine, the technological advances of IoBTs empower enhancements in decision frameworks and increase the viability of novel operational approaches and options for military effects.

Astaburuaga, Ignacio, Lombardi, Amee, La Torre, Brian, Hughes, Carolyn, Sengupta, Shamik.  2019.  Vulnerability Analysis of AR.Drone 2.0, an Embedded Linux System. 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC). :0666–0672.
The goal of this work was to identify and try to solve some of the vulnerabilities present in the AR.Drone 2.0 by Parrot. The approach was to identify how the system works, find and analyze vulnerabilities and flaws in the system as a whole and in its software, and find solutions to those problems. Analyzing the results of several tests showed that the system exposes an open WiFi network and that the communication between the controller and the drone is unencrypted. Analyzing the Linux operating system that the drone runs, we see that "Pairing Mode" is the only way the system protects itself from unauthorized control, and this feature can be easily bypassed. Port scans reveal that the system has all the ports for its services open and exposed, making it susceptible to attacks such as DoS and takeover. This research also examines some software vulnerabilities, such as those in BusyBox, which the drone runs. Lastly, this paper discusses some possible methods for securing the drone, including securing messages via an SSH tunnel, closing unused ports, and re-implementing the software used by the drone and the controller.
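The port-scan finding above can be reproduced with a few lines of socket code. The sketch below is a minimal illustration of a TCP connect scan, not the authors' tooling; the address 192.168.1.1 and the port list are assumptions based on the stock AR.Drone 2.0 configuration, and a real assessment would use a dedicated scanner such as nmap.

```python
# Minimal sketch of a TCP connect scan of the kind used to show that the
# drone leaves its service ports open and exposed.
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Example (assumed defaults): probe the telnet, FTP, and AT-command ports
# a stock AR.Drone 2.0 typically exposes on its own access point.
# scan_ports("192.168.1.1", [21, 23, 5554, 5555, 5559])
```

Any port returned here accepts connections from every client on the drone's open WiFi network, which is exactly the exposure the paper's DoS and takeover scenarios exploit.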
Sullivan, Daniel, Colbert, Edward, Cowley, Jennifer.  2018.  Mission Resilience for Future Army Tactical Networks. 2018 Resilience Week (RWS). :11—14.

Cyber-physical systems are an integral component of weapons, sensors, and autonomous vehicles, as well as of cyber assets directly supporting tactical forces. Mission resilience of tactical networks affects command and control, which is important for successful military operations. Traditional engineering methods for mission assurance will not scale during battlefield operations. Commanders need useful mission resilience metrics to help them evaluate the ability of cyber assets to recover from incidents and fulfill mission-essential functions. We develop six cyber resilience metrics for tactical network architectures. We also illuminate how psychometric modeling is necessary for future research to identify resilience metrics that are both applicable to the dynamic mission state and meaningful to commanders and planners.

Hoffmann, Romuald.  2019.  Markov Models of Cyber Kill Chains with Iterations. 2019 International Conference on Military Communications and Information Systems (ICMCIS). :1–6.
An understanding of the nature of targeted cyber-attack processes is needed to defend against this kind of cyber threat. In the literature, the models describing the processes of targeted cyber attacks are generally called cyber kill chains or, more rarely, cyber-attack life cycles. Despite the random nature of cyber attacks, almost no stochastic models of cyber kill chains based on the theory of stochastic processes have been proposed so far. Attempting to fill this gap, this work proposes using Markov processes to model cyber-attack kill chains. The paper presents two example theoretical models of cycles of returning cyber attacks, collectively named models of cyber kill chains with iterations. The presented models are based on homogeneous continuous-time Markov chains.
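The idea of a kill chain with iterations can be illustrated with a toy continuous-time Markov chain: each attack stage holds for an exponentially distributed time, and at each transition the attacker either advances or falls back to the start (the iteration). The stage names, rates, and fallback probability below are invented for illustration; the paper's models are analytical, not simulated.

```python
# Toy simulation of a "cyber kill chain with iterations" as a homogeneous
# continuous-time Markov chain: exponential holding times per stage, with a
# fixed probability of restarting the chain at each transition.
import random

STAGES = ["recon", "delivery", "exploitation", "objectives"]  # assumed stages
RATES = {"recon": 1.0, "delivery": 0.8, "exploitation": 0.5}  # exit rates
P_FALLBACK = 0.3  # probability a transition restarts the chain (the iteration)

def simulate_kill_chain(rng: random.Random) -> float:
    """Return total time until the absorbing 'objectives' stage is reached."""
    t, i = 0.0, 0
    while STAGES[i] != "objectives":
        t += rng.expovariate(RATES[STAGES[i]])            # exponential hold
        i = 0 if rng.random() < P_FALLBACK else i + 1     # iterate or advance
    return t
```

Averaging `simulate_kill_chain` over many runs estimates the expected time to compromise; in the analytical setting, the same quantity follows from the chain's absorption-time equations, which is the kind of result the Markov formulation makes tractable.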