Biblio

Found 345 results

Filters: Keyword is Deep Learning
2022-01-25
Sun, Hao, Xu, Yanjie, Kuang, Gangyao, Chen, Jin.  2021.  Adversarial Robustness Evaluation of Deep Convolutional Neural Network Based SAR ATR Algorithm. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. :5263–5266.
Robustness, both to accidental and to malevolent perturbations, is a crucial determinant of the successful deployment of deep convolutional neural network based SAR ATR systems in various security-sensitive applications. This paper performs a detailed adversarial robustness evaluation of deep convolutional neural network based SAR ATR models across two publicly available SAR target recognition datasets. For each model, seven different adversarial perturbations, ranging from gradient based optimization to self-supervised feature distortion, are generated for each testing image. Besides adversarial average recognition accuracy, feature attribution techniques have also been adopted to analyze the feature diffusion effect of adversarial attacks, which promotes the understanding of the vulnerability of deep learning models.
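As a rough illustration of the gradient-based perturbations such an evaluation covers, here is a minimal FGSM sketch; the toy CNN, input shape, and epsilon are illustrative assumptions, not the paper's SAR ATR models:

```python
# A minimal FGSM sketch: one signed-gradient step on the loss, a common
# gradient-based adversarial perturbation. Model and sizes are stand-ins.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy stand-in for a SAR ATR classifier (10 target classes, 1-channel chips).
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 64 * 64, 10))
x = torch.rand(1, 1, 64, 64)           # one 64x64 image chip
y = torch.tensor([3])                  # its true class label
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())         # perturbation bounded by epsilon
```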
Islam, Muhammad Aminul, Veal, Charlie, Gouru, Yashaswini, Anderson, Derek T..  2021.  Attribution Modeling for Deep Morphological Neural Networks using Saliency Maps. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Mathematical morphology has been explored in deep learning architectures, as a substitute to convolution, for problems like pattern recognition and object detection. One major advantage of using morphology in deep learning is the utility of morphological erosion and dilation. Specifically, these operations naturally embody interpretability due to their underlying connections to the analysis of geometric structures. While the use of these operations results in explainable learned filters, morphological deep learning lacks attribution modeling, i.e., a paradigm to specify what areas of the original observed image are important. Furthermore, convolution-based deep learning has achieved attribution modeling through a variety of neural eXplainable Artificial Intelligence (XAI) paradigms (e.g., saliency maps, integrated gradients, guided backpropagation, and gradient class activation mapping). Thus, a problem for morphology-based deep learning is that these XAI methods do not have a morphological interpretation due to the differences in the underlying mathematics. Herein, we extend the neural XAI paradigm of saliency maps to morphological deep learning, and by doing so, provide an example of morphological attribution modeling. Furthermore, our qualitative results highlight some advantages of using morphological attribution modeling.
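For readers unfamiliar with the saliency-map paradigm being extended, a minimal vanilla-gradient sketch follows; the ordinary CNN stand-in is an assumption, not the authors' morphological architecture:

```python
# A minimal vanilla gradient saliency map: attribute the target logit to
# input pixels via the input gradient. The CNN here is only a stand-in.
import torch
import torch.nn as nn

def saliency_map(model, x, target_class):
    """Return per-pixel importance as the gradient magnitude, max over channels."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().amax(dim=1)

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5))
x = torch.rand(1, 3, 32, 32)
sal = saliency_map(model, x, target_class=2)
print(sal.shape)  # torch.Size([1, 32, 32])
```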
Marulli, Fiammetta, Balzanella, Antonio, Campanile, Lelio, Iacono, Mauro, Mastroianni, Michele.  2021.  Exploring a Federated Learning Approach to Enhance Authorship Attribution of Misleading Information from Heterogeneous Sources. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Authorship Attribution (AA) is currently applied in several applications, among which are fraud detection and anti-plagiarism checks; this task can leverage stylometry and Natural Language Processing techniques. In this work, we explored some strategies to enhance the performance of an AA task for the automatic detection of false and misleading information (e.g., fake news). We set up a text classification model for AA based on stylometry, exploiting recurrent deep neural networks, and implemented two learning tasks trained on the same collection of fake and real news, comparing their performances: one based on a Federated Learning architecture, the other on a centralized architecture. The goal was to discriminate potentially fake information from true information when the fake news comes from heterogeneous sources with different styles. Preliminary experiments show that the distributed approach significantly improves recall with respect to the centralized model. As expected, precision was lower in the distributed model. This aspect, coupled with the statistical heterogeneity of the data, represents an open issue that will be further investigated in future work.
2022-01-10
Freas, Christopher B., Shah, Dhara, Harrison, Robert W..  2021.  Accuracy and Generalization of Deep Learning Applied to Large Scale Attacks. 2021 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
Distributed denial of service attacks threaten the security and health of the Internet. Remediation relies on up-to-date and accurate attack signatures. Signature-based detection is relatively inexpensive computationally. Yet, signatures are inflexible when small variations exist in the attack vector. Attackers exploit this rigidity by altering their attacks to bypass the signatures. Our previous work revealed a critical problem with conventional machine learning models. Conventional models are unable to generalize on the temporal nature of network flow data to classify attacks. We thus explored the use of deep learning techniques on real flow data. We found that a variety of attacks could be identified with high accuracy compared to previous approaches. We show that a convolutional neural network can be implemented for this problem that is suitable for large volumes of data while maintaining useful levels of accuracy.
Ugwu, Chukwuemeka Christian, Obe, Olumide Olayinka, Popoọla, Olugbemiga Solomon, Adetunmbi, Adebayo Olusọla.  2021.  A Distributed Denial of Service Attack Detection System using Long Short Term Memory with Singular Value Decomposition. 2020 IEEE 2nd International Conference on Cyberspace (CYBER NIGERIA). :112–118.
The increase in online activity during the COVID-19 pandemic has generated a surge in network traffic capable of expanding the scope of DDoS attacks. Cyber criminals can now afford to launch massive DDoS attacks capable of degrading the performance of conventional machine learning based IDS models. Hence, there is an urgent need for an effective DDoS attack detection model with the capacity to handle large volumes of DDoS attack traffic. This study proposes a deep learning based DDoS attack detection system using Long Short Term Memory (LSTM). The proposed model was evaluated on the UNSW-NB15 and NSL-KDD intrusion datasets, whereby twenty-three (23) and twenty (20) attack features were extracted from UNSW-NB15 and NSL-KDD, respectively, using Singular Value Decomposition (SVD). The results from the proposed model show significant improvement over results from conventional machine learning techniques such as Naïve Bayes (NB), Decision Tree (DT), and Support Vector Machine (SVM), with accuracies of 94.28% and 90.59% on the two datasets, respectively. Furthermore, a comparative analysis of LSTM with other deep learning results reported in the literature justified the choice of LSTM among its deep learning peers for detecting DDoS attacks over a network.
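A minimal sketch of the SVD-then-LSTM pipeline the abstract describes, with illustrative layer sizes and random stand-in data (the 20-feature reduction matches the NSL-KDD figure quoted above):

```python
# Truncated SVD for feature reduction, then an LSTM classifier. All sizes
# and the random data are illustrative assumptions, not the paper's setup.
import numpy as np
import torch
import torch.nn as nn

X = np.random.rand(1000, 41).astype(np.float32)   # stand-in for NSL-KDD features

# Keep the top-20 singular directions as the reduced feature space.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_red = X @ Vt[:20].T                             # (1000, 20)

# LSTM expects (batch, seq_len, features); treat each record as a length-1 sequence.
lstm = nn.LSTM(input_size=20, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)                           # attack / benign scores
out, _ = lstm(torch.from_numpy(X_red).unsqueeze(1))
logits = head(out[:, -1])                         # (1000, 2)
print(logits.shape)
```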
Paul, Avishek, Islam, Md Rabiul.  2021.  An Artificial Neural Network Based Anomaly Detection Method in CAN Bus Messages in Vehicles. 2021 International Conference on Automation, Control and Mechatronics for Industry 4.0 (ACMI). :1–5.
Controller Area Network (CAN) is the bus standard that works as a central system inside vehicles for communicating in-vehicle messages. Despite its many advantages, attackers may hack into a car system through the CAN bus, take control of it, and cause serious damage, because the CAN bus lacks security services like authentication and encryption. Therefore, an anomaly detection system must be integrated with the CAN bus in vehicles. In this paper, we propose an Artificial Neural Network based anomaly detection method to identify illicit messages on the CAN bus. We trained our model with two types of attacks so that it can efficiently identify them. When tested, the proposed algorithm showed high performance in detecting Denial of Service attacks (with 100% accuracy) and Fuzzy attacks (with 99.98% accuracy).
Sallam, Youssef F., Ahmed, Hossam El-din H., Saleeb, Adel, El-Bahnasawy, Nirmeen A., El-Samie, Fathi E. Abd.  2021.  Implementation of Network Attack Detection Using Convolutional Neural Network. 2021 International Conference on Electronic Engineering (ICEEM). :1–6.
The Internet obviously has a major impact on the global economy and human life every day. This boundless use pushes attackers to attack the data frameworks on the Internet. Web attacks influence the reliability of the Internet and its services. These attacks are classified as User-to-Root (U2R), Remote-to-Local (R2L), Denial-of-Service (DoS) and Probing (Probe). Consequently, securing web frameworks and protecting data are pivotal. The conventional layers of defense, like antivirus scanners, firewalls and proxies, which are applied to treat security weaknesses, are insufficient. So, Intrusion Detection Systems (IDSs) are utilized to screen computer and data frameworks for security shortcomings. An IDS adds more effectiveness in securing networks against attacks. This paper presents an IDS model based on Deep Learning (DL) with a Convolutional Neural Network (CNN) hypothesis. The model has been evaluated on the NSL-KDD dataset. It has been trained on KDDTrain+ and tested twice, once on KDDTrain+ and once on KDDTest+. The achieved test accuracies are 99.7% and 98.43%, with false alarm rates of 0.002 and 0.02 for the two test scenarios, respectively.
Zheng, Shiji.  2021.  Network Intrusion Detection Model Based on Convolutional Neural Network. 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). 5:634–637.
Network intrusion detection is an important research direction of network security. The diversification of network intrusion modes and the increasing amount of network data mean that traditional detection methods can no longer meet the requirements of the current network environment. The development of deep learning technology and its successful application in the field of artificial intelligence provide a new solution for network intrusion detection. In this paper, the convolutional neural network in deep learning is applied to network intrusion detection, and an intelligent detection model that can actively learn is established. Experiments on the KDD99 data set show that the model effectively improves the accuracy and adaptive ability of intrusion detection, demonstrating both its effectiveness and its advance over traditional methods.
2021-12-22
Poli, Jean-Philippe, Ouerdane, Wassila, Pierrard, Régis.  2021.  Generation of Textual Explanations in XAI: The Case of Semantic Annotation. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Semantic image annotation is a field of paramount importance in which deep learning excels. However, some application domains, like security or medicine, may need an explanation of this annotation. Explainable Artificial Intelligence is an answer to this need. In this work, an explanation is a sentence in natural language, dedicated to human users, that provides them with clues about the process leading to the decision: the assignment of labels to image parts. We focus on semantic image annotation with fuzzy logic, which has proven to be a useful framework that captures both image segmentation imprecision and the vagueness of human spatial knowledge and vocabulary. In this paper, we present an algorithm for textual explanation generation of the semantic annotation of image regions.
Nascita, Alfredo, Montieri, Antonio, Aceto, Giuseppe, Ciuonzo, Domenico, Persico, Valerio, Pescapè, Antonio.  2021.  Unveiling MIMETIC: Interpreting Deep Learning Traffic Classifiers via XAI Techniques. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :455–460.
The widespread use of powerful mobile devices has deeply affected the mix of traffic traversing both the Internet and enterprise networks (with bring-your-own-device policies). Traffic encryption has become extremely common, and the quick proliferation of mobile apps and their simple distribution and update have created a specifically challenging scenario for traffic classification and its uses, especially network-security related ones. The recent rise of Deep Learning (DL) has responded to this challenge by providing a solution to the time-consuming and human-limited handcrafted feature design, along with better classification performance. The counterpart of these advantages is the lack of interpretability of such black-box approaches, limiting or preventing their adoption in contexts where the reliability of results or the interpretability of policies is necessary. To cope with these limitations, eXplainable Artificial Intelligence (XAI) techniques have seen intensive recent research. Along these lines, our work applies XAI-based techniques (namely, Deep SHAP) to interpret the behavior of a state-of-the-art multimodal DL traffic classifier. As opposed to common results seen in XAI, we aim at a global interpretation rather than sample-based ones. The results quantify the importance of each modality (payload- or header-based), and of specific subsets of inputs (e.g., TLS SNI and TCP Window Size) in determining the classification outcome, down to the per-class (viz. application) level. The analysis is based on a publicly released recent dataset focused on mobile app traffic.
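A minimal sketch of the global-interpretation step, assuming shap's DeepExplainer API and that it returns one attribution array per output class; the small classifier and feature count are stand-ins for the MIMETIC model:

```python
# Deep SHAP attributions computed per sample, then aggregated over a dataset
# to rank input features globally. Classifier and sizes are illustrative.
import numpy as np
import shap
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
background = torch.rand(100, 8)        # reference samples for the explainer
samples = torch.rand(50, 8)            # traffic records to explain

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(samples)  # assumed: one array per class

# Global importance: mean |attribution| per feature, summed over classes.
global_imp = sum(np.abs(v).mean(axis=0) for v in shap_values)
print(np.argsort(global_imp)[::-1])    # features ranked by influence
```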
Kim, Jiha, Park, Hyunhee.  2021.  OA-GAN: Overfitting Avoidance Method of GAN Oversampling Based on xAI. 2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN). :394–398.
The most representative method of deep learning is data-driven learning. Such methods are data-dependent, and a lack of data leads to poor learning. GANs address data scarcity by generating plausible images. In a GAN, the discriminator judges generated images as fake or real so that the generator learns. However, overfitting problems arise when the discriminator becomes overly dependent on the training data. In this paper, we use xAI to explain the overfitting problem that occurs when the discriminator makes its fake/real decision. Depending on the area of the image highlighted by the explanation, it is possible to limit the learning of the discriminator and so avoid overfitting. By doing so, the generator can produce similar but more diverse images.
2021-12-21
Zhai, Tongqing, Li, Yiming, Zhang, Ziqi, Wu, Baoyuan, Jiang, Yong, Xia, Shu-Tao.  2021.  Backdoor Attack Against Speaker Verification. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2560–2564.
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. The training of speaker verification requires a large amount of data; therefore, users usually need to adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor for infecting speaker verification models by poisoning the training data. Specifically, we design a clustering-based attack scheme where poisoned samples from different clusters contain different triggers (i.e., pre-defined utterances), based on our understanding of verification tasks. The infected models behave normally on benign samples, while attacker-specified unenrolled triggers will successfully pass the verification even if the attacker has no information about the enrolled speaker. We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification. Our approach not only provides a new perspective for designing novel attacks, but also serves as a strong baseline for improving the robustness of verification methods. The code for reproducing main results is available at https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification.
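A minimal sketch of the clustering-based poisoning idea: superimpose a cluster-specific trigger on a subset of training utterances. The tone-burst triggers and cluster assignment below are illustrative assumptions, not the authors' pre-defined utterances:

```python
# Poisoning sketch: each cluster of speakers gets its own trigger waveform,
# superimposed on some of its training audio. Triggers here are toy tones.
import numpy as np

def make_trigger(freq, sr=16000, dur=0.1):
    """A short trigger stand-in: a low-amplitude tone burst."""
    t = np.arange(int(sr * dur)) / sr
    return 0.05 * np.sin(2 * np.pi * freq * t)

triggers = {0: make_trigger(440.0), 1: make_trigger(880.0)}  # one per cluster

def poison(utterance, cluster_id):
    out = utterance.copy()
    trig = triggers[cluster_id]
    out[: len(trig)] += trig          # superimpose the trigger at the start
    return out

clean = np.random.randn(16000)        # one second of training audio
poisoned = poison(clean, cluster_id=0)
```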
Ayed, Mohamed Ali, Talhi, Chamseddine.  2021.  Federated Learning for Anomaly-Based Intrusion Detection. 2021 International Symposium on Networks, Computers and Communications (ISNCC). :1–8.
We are witnessing severe zero-day cyber attacks. Machine learning based anomaly detection is definitely the most efficient defence-in-depth approach. It consists of analyzing the network traffic in order to distinguish normal behaviour from abnormal behaviour. This approach is usually implemented in a central server where all the network traffic is analyzed, which can raise privacy issues. In fact, with the increasing adoption of Cloud infrastructures, it is important to reduce as much as possible the outsourcing of such sensitive information from the several network nodes. A better approach is to ask each node to analyze its own data and then to exchange its learning findings (model) with a coordinator. In this paper, we investigate the application of federated learning for network-based intrusion detection. Our experiment was conducted on the CICIDS2017 dataset. We present federated learning of a deep learning algorithm (CNN) based on model averaging. It is a self-learning system for detecting anomalies caused by malicious adversaries without human intervention and can cope with new and unknown attacks without decreasing performance. These experiments demonstrate that this approach is effective in detecting intrusions.
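A minimal sketch of federated model averaging (FedAvg-style aggregation), the scheme the abstract describes: each node trains on its own traffic, and a coordinator averages parameters. The tiny model and two-node setup are assumptions:

```python
# Federated averaging sketch: local training per node, parameter averaging
# at the coordinator. Model, data, and round count are illustrative.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Conv1d(1, 4, 3, padding=1), nn.ReLU(),
                         nn.Flatten(), nn.Linear(4 * 10, 2))

def local_step(model, x, y):
    """One local training step on a node's private traffic."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = nn.CrossEntropyLoss()(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return model

global_model = make_model()
nodes = [(torch.rand(32, 1, 10), torch.randint(0, 2, (32,))) for _ in range(2)]

for _ in range(5):  # federated rounds
    local_states = []
    for x, y in nodes:
        local = copy.deepcopy(global_model)      # node starts from global weights
        local_states.append(local_step(local, x, y).state_dict())
    # Coordinator: average each parameter across the node models.
    avg = {k: torch.stack([s[k] for s in local_states]).mean(0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
```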
2021-12-20
Sahay, Rajeev, Brinton, Christopher G., Love, David J..  2021.  Frequency-based Automated Modulation Classification in the Presence of Adversaries. ICC 2021 - IEEE International Conference on Communications. :1–6.
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals. Recent work has demonstrated the ability of deep learning to achieve robust AMC performance using raw in-phase and quadrature (IQ) time samples. Yet, deep learning models are highly susceptible to adversarial interference, which causes intelligent prediction models to misclassify received samples with high confidence. Furthermore, adversarial interference is often transferable, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification network. In this work, we present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference. Specifically, we show that adversarial attacks crafted to fool models trained on time-domain features are not easily transferable to models trained using frequency-domain features. In this capacity, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs). We further demonstrate our frequency feature-based classification models to achieve accuracies greater than 99% in the absence of attacks.
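A minimal sketch of the frequency-feature idea: transform raw IQ time samples into a spectral representation before classification, so time-domain perturbations lose their effect. The shapes below are illustrative assumptions:

```python
# Convert raw IQ samples to FFT-based features for a frequency-domain
# classifier. Frame length and layout are illustrative.
import numpy as np

def iq_to_frequency_features(iq):
    """iq: (2, N) in-phase and quadrature samples -> (2, N) spectral features."""
    complex_signal = iq[0] + 1j * iq[1]
    spectrum = np.fft.fft(complex_signal)
    # Feed real/imaginary parts of the FFT to the classifier instead of raw IQ.
    return np.stack([spectrum.real, spectrum.imag])

iq = np.random.randn(2, 128)              # one received RF frame
freq_feat = iq_to_frequency_features(iq)  # input to the frequency-domain CNN
print(freq_feat.shape)                    # (2, 128)
```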
Nasr, Milad, Song, Shuang, Thakurta, Abhradeep, Papernot, Nicolas, Carlini, Nicholas.  2021.  Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. 2021 IEEE Symposium on Security and Privacy (SP). :866–882.
Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, where an adversary must predict if a model was trained on a dataset D, or a dataset D′ that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private. Hence, the purpose of privacy analysis is to upper bound the probability that any adversary could successfully guess which dataset the model was trained on. In our paper, we instantiate this hypothetical adversary in order to establish lower bounds on the probability that this distinguishing game can be won. We use this adversary to evaluate the importance of the adversary capabilities allowed in the privacy analysis of DP training algorithms. For DP-SGD, the most common method for training neural networks with differential privacy, our lower bounds are tight and match the theoretical upper bound. This implies that in order to prove better upper bounds, it will be necessary to make use of additional assumptions. Fortunately, we find that our attacks are significantly weaker when additional (realistic) restrictions are put in place on the adversary's capabilities. Thus, in the practical setting common to many real-world deployments, there is a gap between our lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as suggested by the theoretical bound.
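A toy instantiation of the distinguishing game underlying DP, assuming a stand-in "training" routine that releases a noisy mean; the sizes, noise, and threshold are illustrative, not the paper's DP-SGD adversary:

```python
# The DP distinguishing game: a challenger trains on D or D' (differing in
# one example); the adversary guesses which. Everything here is a toy model.
import random
import numpy as np

def train(dataset, noise=0.1):
    """Stand-in for a DP training run: release a noisy mean of the data."""
    return np.mean(dataset) + np.random.normal(0, noise)

D = [0.0] * 10              # baseline dataset
D_prime = [0.0] * 9 + [1.0]  # differs in exactly one example

wins, trials = 0, 10000
for _ in range(trials):
    b = random.randint(0, 1)                  # challenger picks a dataset
    model = train(D_prime if b else D)
    guess = 1 if model > 0.05 else 0          # adversary thresholds the output
    wins += (guess == b)
# The advantage over random guessing lower-bounds the privacy leakage.
print(wins / trials)
```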
Janapriya, N., Anuradha, K., Srilakshmi, V..  2021.  Adversarial Deep Learning Models With Multiple Adversaries. 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA). :522–525.
Adversarial machine learning algorithms handle adversarial example generation, producing bogus inputs with the ability to fool any machine learning model. As the word implies, an "adversary" is a rival or foe of the model. With a view to strengthening machine learning models, this work discusses the weaknesses of machine learning models and how misinterpretation effectively occurs during the learning cycle. Existing methods, such as creating adversarial examples and devising powerful ML computations, frequently ignore semantics and the overall structure of the ML pipeline. This research work develops an adversarial learning algorithm that considers a coordinated representation of all the characteristics, with Convolutional Neural Networks (CNNs) explicitly. The algorithm expresses minimal adjustments, via data transport, represented over positive and negative class labels, as well as a specific subsequent data flow misclassified by the CNN. The final results recommend a game-theoretic and evolutionary-computing approach, which performs well at protecting deep learning models against the exploitation of weaknesses, reproduced as attack scenarios against multiple adversaries.
Ning, Baifeng, Xiao, Liang.  2021.  Defense Against Advanced Persistent Threats in Smart Grids: A Reinforcement Learning Approach. 2021 40th Chinese Control Conference (CCC). :8598–8603.
In smart grids, supervisory control and data acquisition (SCADA) systems have to protect data from advanced persistent threats (APTs), which exploit vulnerabilities of the power infrastructures to launch stealthy and targeted attacks. In this paper, we propose a reinforcement learning-based APT defense scheme for the control center to choose the detection interval and the number of Central Processing Units (CPUs) allocated to the data concentrators based on the data priority, the size of the collected meter data, the detection delay history, the previous number of allocated CPUs, and the size of the labeled compromised meter data, without knowledge of the attack interval and the attack CPU allocation model. The proposed scheme combines deep learning with a policy-gradient based actor-critic algorithm to accelerate the optimization speed at the control center, where an actor network uses the softmax distribution to choose the APT defense policy and the critic network updates the actor network weights to improve the computational performance. The advantage function is applied to reduce the variance of the policy gradient. Simulation results show that our proposed scheme has a performance gain over the benchmarks in terms of detection delay, data protection level, and utility.
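A compact sketch of the actor-critic update with an advantage baseline, the core mechanism the abstract describes; the 2-dimensional state, 3-action space, and stand-in reward are toy assumptions, not the SCADA model:

```python
# One actor-critic step: softmax policy samples an action, the critic's value
# estimate baselines the reward, and both networks update. Sizes are toys.
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 3))   # policy logits
critic = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # state value
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

state = torch.rand(1, 2)                 # e.g., data priority and data size
probs = torch.softmax(actor(state), -1)  # softmax policy over defense actions
action = torch.multinomial(probs, 1)     # sampled detection/CPU action
reward = torch.tensor([[1.0]])           # stand-in utility from the environment

value = critic(state)
advantage = (reward - value).detach()    # baseline-subtracted return
actor_loss = -torch.log(probs[0, action]) * advantage
critic_loss = (reward - value).pow(2)
opt.zero_grad(); (actor_loss + critic_loss).sum().backward(); opt.step()
```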
2021-12-02
Rao, Poojith U., Sodhi, Balwinder, Sodhi, Ranjana.  2020.  Cyber Security Enhancement of Smart Grids Via Machine Learning - A Review. 2020 21st National Power Systems Conference (NPSC). :1–6.
The evolution of the power system into a smart grid (SG) has not only enhanced the monitoring and control capabilities of the power grid, but also raised security concerns and vulnerabilities. With the boom in the Internet of Things (IoT), a lot of sensors are being deployed across the grid. This has resulted in a huge amount of data available for processing and analysis. Machine learning (ML) and deep learning (DL) algorithms are being widely used to extract useful information from this data. In this context, this paper presents a comprehensive literature survey of different ML and DL techniques that have been used in the smart grid cyber security area. The survey summarizes the different types of cyber threats that today's SGs are prone to, followed by various ML- and DL-assisted defense strategies. The effectiveness of the ML based methods in enhancing the cyber security of SGs is also demonstrated with the help of a case study.
2021-11-29
Hu, Shengze, He, Chunhui, Ge, Bin, Liu, Fang.  2020.  Enhanced Word Embedding Method in Text Classification. 2020 6th International Conference on Big Data and Information Analytics (BigDIA). :18–22.
For the task of natural language processing (NLP), word embedding technology has a certain impact on the accuracy of deep neural network algorithms. Considering that current word embedding methods cannot realize the coexistence of words and phrases in the same vector space, we propose an enhanced word embedding (EWE) method. Before completing the word embedding, this method introduces a unique sentence reorganization technique to rewrite all the sentences in the original training corpus. Then, the original corpus and the reorganized corpus are merged together as the training corpus of the distributed word embedding model, so as to realize the coexistence of words and phrases in the same vector space. We carried out experiments to demonstrate the effectiveness of the EWE algorithm on three classic benchmark datasets. The results show that the EWE method can significantly improve the classification performance of the CNN model.
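A minimal sketch of the corpus-merging step, assuming gensim's Word2Vec and a toy "reorganization" that fuses adjacent words into phrase tokens (a stand-in for the paper's sentence reorganization technique):

```python
# Train one embedding model on the original corpus plus reorganized variants
# so words and phrases share a vector space. The fusing rule is illustrative.
from gensim.models import Word2Vec

original = [["deep", "neural", "network"], ["text", "classification", "task"]]
# Reorganized corpus: here, adjacent words fused into single phrase tokens.
reorganized = [["deep_neural", "network"], ["text_classification", "task"]]

merged_corpus = original + reorganized
model = Word2Vec(sentences=merged_corpus, vector_size=50, min_count=1, window=2)

# Words and phrases now coexist in the same 50-dimensional space.
print(model.wv["deep"].shape, model.wv["deep_neural"].shape)
```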
Takemoto, Shu, Shibagaki, Kazuya, Nozaki, Yusuke, Yoshikawa, Masaya.  2020.  Deep Learning Based Attack for AI Oriented Authentication Module. 2020 35th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC). :5–8.
Neural Network Physical Unclonable Function (NN-PUF) has been proposed for the secure implementation of Edge AI. This study evaluates the tamper resistance of NN-PUF against machine learning attacks. The machine learning attack in this study learns CRPs (challenge-response pairs) using deep learning. As a result of the evaluation experiment, the machine learning attack correctly predicted about 82% of CRPs. Therefore, this study reveals that NN-PUF is vulnerable to machine learning attacks.
Hou, Xiaolu, Breier, Jakub, Jap, Dirmanto, Ma, Lei, Bhasin, Shivam, Liu, Yang.  2020.  Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection. 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA). :1–6.
Deep learning is becoming a basis of decision making systems in many application domains, such as autonomous vehicles, health systems, etc., where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented in an embedded device by using laser fault injection, a physical attack technique that is mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs using several different attack strategies to show which are the most efficient in terms of misclassification success rates. Outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with the environmental parameters that would bring the device into unstable conditions, resulting in faults.
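A minimal software simulation of the fault model: corrupt a few hidden-layer activations with a forward hook and check whether the prediction flips. The network and fault pattern are illustrative, not the laser setup:

```python
# Simulated fault injection: zero out a few hidden activations via a forward
# hook and compare predictions. Network and fault location are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.rand(1, 10)
clean_pred = model(x).argmax().item()

def fault_hook(module, inp, out):
    faulty = out.clone()
    faulty[0, :4] = 0.0        # knock out four hidden neurons
    return faulty

handle = model[1].register_forward_hook(fault_hook)  # inject after the ReLU
faulty_pred = model(x).argmax().item()
handle.remove()
print(clean_pred, faulty_pred)  # misclassification iff the labels differ
```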
Naeem, Hajra, Alalfi, Manar H..  2020.  Identifying Vulnerable IoT Applications Using Deep Learning. 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER). :582–586.
This paper presents an approach for the identification of vulnerable IoT applications using deep learning algorithms. The approach focuses on a category of vulnerabilities that leads to sensitive information leakage, which can be identified using taint flow analysis. First, we analyze the source code of IoT apps in order to recover tokens along with their frequencies and tainted flows. Second, we develop Token2Vec, which transforms the source code tokens into vectors, and Flow2Vec, which transforms the identified tainted flows into vectors. Third, we use the recovered vectors to train a deep learning algorithm to build a model for the identification of tainted apps. We have evaluated the approach on two datasets, and the experiments show that combining tainted-flow features with the baseline that uses token frequencies only improves the accuracy of the prediction models from 77.78% to 92.59% for Corpus1 and from 61.11% to 87.03% for Corpus2.
Nait-Abdesselam, Farid, Darwaish, Asim, Titouna, Chafiq.  2020.  An Intelligent Malware Detection and Classification System Using Apps-to-Images Transformations and Convolutional Neural Networks. 2020 16th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). :1–6.
With the proliferation of the Mobile Internet, handheld devices are facing continuous threats from apps that carry malicious intent. These malicious apps, or malware, have the capability of dynamically changing their intended code as they spread. Moreover, the diversity and volume of their variants severely undermine the effectiveness of traditional defenses, which typically use signature-based techniques, and make them unable to detect previously unknown malware. However, the variants of malware families share typical behavioral patterns reflecting their origin and purpose. The behavioral patterns, obtained either statically or dynamically, can be exploited to detect and classify unknown malware into their known families using machine learning techniques. In this paper, we propose a new approach for detecting and analyzing malware. Mainly focused on Android apps, our approach adopts the two following steps: (1) performs a transformation of an APK file into a lightweight RGB image using a predefined dictionary and intelligent mapping, and (2) trains a convolutional neural network on the obtained images for the purpose of signature detection and malware family classification. The results obtained using the Androzoo dataset show that our system classifies both legacy and new malware apps with high accuracy, low false-negative rate (FNR), and low false-positive rate (FPR).
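A minimal sketch of the apps-to-images step, using a naive reshape-and-pad byte mapping as a stand-in for the paper's dictionary-based intelligent mapping:

```python
# Pack a file's raw bytes into a fixed-size RGB image for a CNN classifier.
# The simple pad-and-reshape rule is illustrative, not the paper's mapping.
import numpy as np

def bytes_to_rgb(data: bytes, side: int = 64) -> np.ndarray:
    """Pack raw bytes into a (side, side, 3) uint8 image, zero-padded."""
    buf = np.frombuffer(data, dtype=np.uint8)
    needed = side * side * 3
    buf = np.pad(buf[:needed], (0, max(0, needed - len(buf[:needed]))))
    return buf.reshape(side, side, 3)

apk_bytes = bytes(range(256)) * 40        # stand-in for an APK file's contents
image = bytes_to_rgb(apk_bytes)           # input to the malware-family CNN
print(image.shape, image.dtype)           # (64, 64, 3) uint8
```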
2021-11-08
Hu, Feng, Chen, Bing, Shi, Dian, Zhang, Xinyue, Zhang, Haijun, Pan, Miao.  2020.  Secure Routing Protocol in Wireless Ad Hoc Networks via Deep Learning. 2020 IEEE Wireless Communications and Networking Conference (WCNC). :1–6.
Open wireless channels make a wireless ad hoc network vulnerable to various security attacks, so it is crucial to design a routing protocol that can defend against the attacks of malicious nodes. In this paper, we first measure the trust value calculated from a node's behavior over a period to judge whether the node is trusted, and then combine other QoS requirements as routing metrics to design a secure routing approach. Moreover, we propose a deep learning-based model that learns the routing environment repeatedly from datasets of packet flows and corresponding optimal paths. Then, when a new packet flow is input, the model can directly output a link set that satisfies the node's QoS and trust requirements, and therefore the optimal path of the packet flow can be obtained. Extensive simulation results show that, compared with the traditional optimization-based method, our proposed deep learning-based approach can not only guarantee more than 90% accuracy, but also significantly improves the computation time.
Aygül, Mehmet Ali, Nazzal, Mahmoud, Ekti, Ali Rıza, Görçin, Ali, da Costa, Daniel Benevides, Ateş, Hasan Fehmi, Arslan, Hüseyin.  2020.  Spectrum Occupancy Prediction Exploiting Time and Frequency Correlations Through 2D-LSTM. 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). :1–5.
The identification of spectrum opportunities is a pivotal requirement for efficient spectrum utilization in cognitive radio systems. Spectrum prediction offers a convenient means for revealing such opportunities based on previously obtained occupancies. As spectrum occupancy states are correlated over time, spectrum prediction is often cast as a predictable time-series process using classical or deep learning-based models. However, this variety of methods exploits time-domain correlation and overlooks the existing correlation over frequency. In this paper, differently from previous works, we investigate a more realistic scenario by exploiting correlation over both time and frequency through a 2D long short-term memory (LSTM) model. Extensive experimental results show a performance improvement over conventional spectrum prediction methods in terms of accuracy and computational complexity. These observations are validated over real-world spectrum measurements in the 832–862 MHz frequency range, where most of the telecom operators in Turkey have private uplink bands.
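A rough two-pass stand-in for the time-and-frequency idea (one LSTM scanning the time axis, one scanning the frequency axis, fused for prediction); this only illustrates the concept and is not the paper's exact 2D-LSTM cell:

```python
# Exploit both correlations of an occupancy grid: scan across time, scan
# across frequency, fuse the two summaries. All dimensions are illustrative.
import torch
import torch.nn as nn

T, F = 16, 10                       # time steps x frequency channels
occupancy = torch.rand(1, T, F)     # past occupancy observations

time_lstm = nn.LSTM(input_size=F, hidden_size=32, batch_first=True)
freq_lstm = nn.LSTM(input_size=T, hidden_size=32, batch_first=True)
head = nn.Linear(64, F)             # predict next-slot occupancy per channel

t_out, _ = time_lstm(occupancy)                     # scan across time
f_out, _ = freq_lstm(occupancy.transpose(1, 2))     # scan across frequency
fused = torch.cat([t_out[:, -1], f_out[:, -1]], dim=1)
pred = torch.sigmoid(head(fused))   # occupancy probabilities for slot T+1
print(pred.shape)                   # torch.Size([1, 10])
```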