Biblio

Filters: Keyword is classifiers
2021-03-09
Hossain, M. D., Ochiai, H., Doudou, F., Kadobayashi, Y..  2020.  SSH and FTP brute-force Attacks Detection in Computer Networks: LSTM and Machine Learning Approaches. 2020 5th International Conference on Computer and Communication Systems (ICCCS). :491—497.

Network traffic anomaly detection is of critical importance in cybersecurity due to the massive and rapid growth of sophisticated computer network attacks. Indeed, the more new Internet-related technologies are created, the more elaborate the attacks become. Among contemporary high-level attacks, dictionary-based brute-force attacks (BFA) present one of the most difficult challenges, and effective methods are needed to detect and mitigate them in real time. In this paper, we investigate SSH and FTP brute-force attack detection using the Long Short-Term Memory (LSTM) deep learning approach. Additionally, we use machine learning (ML) classifiers (J48, naive Bayes (NB), decision table (DT), random forest (RF), and k-nearest-neighbor (k-NN)) for additional detection purposes. We use the well-known labelled dataset CICIDS2017, evaluate the effectiveness of the LSTM and ML algorithms, and compare their performance. Our results show that the LSTM model outperforms the ML algorithms, with an accuracy of 99.88%.
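As a rough illustration of the kind of model involved, the sketch below sets up an LSTM binary detector over short sequences of flow features; the window length, feature count, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: LSTM binary classifier over sequences of flow features,
# in the spirit of an SSH/FTP brute-force detector on CICIDS2017-style data.
# Feature count, sequence length, and hyperparameters are illustrative only.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

n_steps, n_features = 10, 78          # assumed window of 10 flows, 78 flow features
model = Sequential([
    LSTM(64, input_shape=(n_steps, n_features)),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),   # 1 = brute-force, 0 = benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X: (n_samples, n_steps, n_features), y: (n_samples,) with 0/1 labels (toy data here)
X = np.random.rand(256, n_steps, n_features).astype("float32")
y = np.random.randint(0, 2, size=256)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```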

Hegde, M., Kepnang, G., Mazroei, M. Al, Chavis, J. S., Watkins, L..  2020.  Identification of Botnet Activity in IoT Network Traffic Using Machine Learning. 2020 International Conference on Intelligent Data Science Technologies and Applications (IDSTA). :21—27.

Today our world benefits from Internet of Things (IoT) technology; however, new security problems arise when these IoT devices are introduced into our homes. Because many of these IoT devices have access to the Internet and little to no security, they make our smart homes highly vulnerable to compromise. Some of the threats include IoT botnets and generic confidentiality, integrity, and availability (CIA) attacks. Our research explores botnet detection by experimenting with supervised machine learning and deep-learning classifiers. Further, our approach assesses classifier performance on unbalanced datasets that contain benign data mixed with small amounts of malicious data. We demonstrate that the classifiers can separate malicious activity from benign activity within a small IoT network dataset, and that they continue to do so in increasingly larger datasets. Our experiments demonstrate incremental improvement in results for (1) accuracy, (2) probability of detection, and (3) probability of false alarm. The best performance results include 99.9% accuracy, 99.8% probability of detection, and 0% probability of false alarm. This paper also demonstrates how the performance of these classifiers improves as the IoT training datasets grow larger.
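For reference, the three metrics reported above can be computed directly from a binary confusion matrix; the labels below are toy values, not the paper's data.

```python
# Sketch: accuracy, probability of detection, and probability of false alarm
# from a confusion matrix. P_D = TP / (TP + FN); P_FA = FP / (FP + TN).
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = [0, 0, 0, 1, 1, 1, 0, 1]     # toy labels: 1 = botnet traffic, 0 = benign
y_pred = [0, 0, 1, 1, 1, 1, 0, 0]     # toy classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
p_detection = tp / (tp + fn)
p_false_alarm = fp / (fp + tn)
print(f"accuracy={accuracy:.2f}  P_D={p_detection:.2f}  P_FA={p_false_alarm:.2f}")
```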

Muhammad, A., Asad, M., Javed, A. R..  2020.  Robust Early Stage Botnet Detection using Machine Learning. 2020 International Conference on Cyber Warfare and Security (ICCWS). :1—6.

Among the different types of malware, botnets are emerging as one of the most serious threats to cybersecurity, as they provide a platform for criminal operations (e.g., Distributed Denial of Service (DDoS) attacks, malware distribution, phishing, click fraud, and identity theft). Existing botnet detection techniques work only on specific botnet Command and Control (C&C) protocols and fail to provide early-stage botnet detection. In this paper, we propose an approach for early-stage botnet detection. The proposed approach first selects the optimal features using feature selection techniques and then feeds these features to machine learning classifiers to evaluate botnet detection performance. Experiments reveal that the proposed approach efficiently classifies normal and malicious traffic at an early stage. The proposed approach achieves an accuracy of 99%, a True Positive Rate (TPR) of 0.99%, and a False Positive Rate (FPR) of 0.007%, and provides an efficient detection rate in comparison with the existing approach.
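The two-stage pipeline described here (feature selection followed by classification) can be sketched as below; the dataset, the number of selected features, and the particular classifiers are placeholders rather than the authors' choices.

```python
# Sketch: select a feature subset, then feed it to several classifiers,
# mirroring the two-stage botnet-detection approach described above.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=40, n_informative=10, random_state=0)
X_sel = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)   # stage 1: feature selection

for clf in (RandomForestClassifier(random_state=0),
            DecisionTreeClassifier(random_state=0),
            GaussianNB()):
    scores = cross_val_score(clf, X_sel, y, cv=5)                     # stage 2: classification
    print(type(clf).__name__, round(scores.mean(), 3))
```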

Yerima, S. Y., Alzaylaee, M. K..  2020.  Mobile Botnet Detection: A Deep Learning Approach Using Convolutional Neural Networks. 2020 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1—8.

Android, being the most widespread mobile operating system, is increasingly becoming a target for malware. Malicious apps designed to turn mobile devices into bots that may form part of a larger botnet have become quite common, posing a serious threat. This calls for more effective methods to detect botnets on the Android platform. Hence, in this paper, we present a deep learning approach for Android botnet detection based on Convolutional Neural Networks (CNN). Our proposed botnet detection system is implemented as a CNN-based model that is trained on 342 static app features to distinguish between botnet apps and normal apps. The trained botnet detection model was evaluated on a set of 6,802 real applications containing 1,929 botnets from the publicly available ISCX botnet dataset. The results show that our CNN-based approach had the highest overall prediction accuracy compared to other popular machine learning classifiers. Furthermore, the performance results observed from our model were better than those reported in previous studies on machine learning based Android botnet detection.
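The following is a small sketch of a 1-D CNN over a 342-dimensional static-feature vector of the kind the abstract mentions; the layer sizes are assumptions and do not reproduce the paper's architecture.

```python
# Sketch: a small 1-D CNN over a 342-dimensional binary static-feature vector,
# as a stand-in for a CNN-based botnet-vs-normal app classifier.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout

model = Sequential([
    Conv1D(32, kernel_size=4, activation="relu", input_shape=(342, 1)),
    MaxPooling1D(pool_size=2),
    Conv1D(64, kernel_size=4, activation="relu"),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dropout(0.3),
    Dense(1, activation="sigmoid"),    # botnet app vs. normal app
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randint(0, 2, size=(128, 342, 1)).astype("float32")  # toy static feature vectors
y = np.random.randint(0, 2, size=128)
model.fit(X, y, epochs=2, verbose=0)
```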

Rahmati, A., Moosavi-Dezfooli, S.-M., Frossard, P., Dai, H..  2020.  GeoDA: A Geometric Framework for Black-Box Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :8443–8452.
Adversarial examples are carefully perturbed images that fool image classifiers. We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings, where the adversary can only issue a small number of queries, each of which returns the top-1 label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm to generate query-efficient black-box perturbations with small ℓp norms, which is confirmed via experimental evaluations on state-of-the-art natural image classifiers. Moreover, for p = 2, we theoretically show that our algorithm actually converges to the minimal perturbation when the curvature of the decision boundary is bounded. We also obtain the optimal distribution of the queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack algorithm performs better than state-of-the-art algorithms, as it generates smaller perturbations with a reduced number of queries.
2021-03-04
Carlini, N., Farid, H..  2020.  Evading Deepfake-Image Detectors with White- and Black-Box Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2804—2813.

It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social-media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
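The simplest of the attacks listed above, flipping the lowest bit of every pixel, can be sketched in a few lines; the image here is a random placeholder rather than a generated face.

```python
# Sketch: toggling the least significant bit of every 8-bit pixel, an
# imperceptible change that the authors report collapses the forensic
# classifier's AUC.
import numpy as np

def flip_lowest_bit(image_uint8: np.ndarray) -> np.ndarray:
    """XOR each 8-bit pixel value with 1, toggling its least significant bit."""
    return np.bitwise_xor(image_uint8, 1)

img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
adv = flip_lowest_bit(img)
assert np.max(np.abs(adv.astype(int) - img.astype(int))) == 1   # at most 1 gray level per pixel
```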

Kalin, J., Ciolino, M., Noever, D., Dozier, G..  2020.  Black Box to White Box: Discover Model Characteristics Based on Strategic Probing. 2020 Third International Conference on Artificial Intelligence for Industries (AI4I). :60—63.

In machine learning, white-box adversarial attacks rely on knowing underlying knowledge about the model's attributes. This work focuses on discovering two distinct pieces of model information: the underlying architecture and the primary training dataset. With the process in this paper, a structured set of input probes and the corresponding model outputs become the training data for a deep classifier. Two subdomains of machine learning are explored: image-based classifiers and text transformers with GPT-2. For image classification, the focus is on commonly deployed architectures and datasets available in popular public libraries. Using a single transformer architecture with multiple levels of parameters, text generation is explored by fine-tuning on different datasets. Each dataset explored in the image and text domains is distinguishable from the others. Diversity in text transformer outputs implies that further research is needed to successfully classify architecture attribution in the text domain.
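The probing idea can be sketched as follows: query each black-box model with a fixed probe set, stack its outputs into a fingerprint, and train a meta-classifier on those fingerprints. The black-box models below are random placeholders, so only the pipeline shape is meaningful.

```python
# Sketch: probe-response fingerprints used to train a meta-classifier that
# predicts the underlying architecture. Models, probes, and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fingerprint(model_predict, probes):
    """Stack the model's output probabilities on the probe inputs into one vector."""
    return np.concatenate([model_predict(p).ravel() for p in probes])

probes = [np.random.rand(1, 32, 32, 3) for _ in range(16)]                     # assumed structured probe set
models = [(lambda x, s=s: np.random.RandomState(s).rand(1, 10), s % 3)         # (black-box fn, architecture label)
          for s in range(30)]

X = np.stack([fingerprint(f, probes) for f, _ in models])
y = np.array([label for _, label in models])
meta = LogisticRegression(max_iter=1000).fit(X, y)    # stands in for the deep classifier in the paper
print("training accuracy of the meta-classifier:", meta.score(X, y))
```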

2021-03-01
Tao, J., Xiong, Y., Zhao, S., Xu, Y., Lin, J., Wu, R., Fan, C..  2020.  XAI-Driven Explainable Multi-view Game Cheating Detection. 2020 IEEE Conference on Games (CoG). :144–151.
Online gaming is one of the most successful applications, with large numbers of players interacting in an online persistent virtual world through the Internet. However, some cheating players gain improper advantages over normal players by using illegal automated plugins, which has done great harm to game health and player enjoyment. Game industries have devoted much effort to cheating detection with multi-view data sources and achieved great accuracy improvements by applying artificial intelligence (AI) techniques. However, generating explanations for cheating detection from multiple views still remains a challenging task. To respond to the different purposes of explainability in AI models for different audience profiles, we propose EMGCD, the first explainable multi-view game cheating detection framework driven by explainable AI (XAI). It combines cheating explainers with cheating classifiers from different views to generate individual, local, and global explanations, which contribute to evidence generation, reason generation, model debugging, and model compression. EMGCD has been implemented and deployed in multiple game productions at NetEase Games, achieving remarkable and trustworthy performance. Our framework can also easily generalize to other types of related tasks in online games, such as explainable recommender systems, explainable churn prediction, etc.
D’Alterio, P., Garibaldi, J. M., John, R. I..  2020.  Constrained Interval Type-2 Fuzzy Classification Systems for Explainable AI (XAI). 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.
In recent years, there has been a growing need for intelligent systems that not only provide reliable classifications but can also produce explanations for the decisions they make. The demand for increased explainability has led to the emergence of explainable artificial intelligence (XAI) as a specific research field. In this context, fuzzy logic systems represent a promising tool thanks to their inherently interpretable structure. The use of a rule base and linguistic terms, in fact, has allowed researchers to create models that produce natural-language explanations for each classification they make. So far, however, designing systems that make use of interval type-2 (IT2) fuzzy logic and also give explanations for their outputs has been very challenging, partially due to the presence of the type-reduction step. In this paper, it is shown how constrained interval type-2 (CIT2) fuzzy sets represent a valid alternative to conventional interval type-2 sets in order to address this issue. Through the analysis of two case studies from the medical domain, we show how explainable CIT2 classifiers are produced. These systems can explain which rules contributed to the creation of each endpoint of the output interval centroid, while showing (in these examples) the same level of accuracy as their IT2 counterparts.
2021-02-16
Nandi, S., Phadikar, S., Majumder, K..  2020.  Detection of DDoS Attack and Classification Using a Hybrid Approach. 2020 Third ISEA Conference on Security and Privacy (ISEA-ISAP). :41—47.
In the area of cloud security, detection of DDoS attacks is a challenging task that must be addressed so that legitimate users can use cloud resources properly. In this paper, detection and classification of attack packets and normal packets are done using various machine learning classifiers. We selected the most relevant features from the NSL-KDD dataset using five commonly used feature selection methods (information gain, gain ratio, chi-squared, ReliefF, and symmetrical uncertainty). From this selected feature set, the most important features were then chosen by applying our hybrid feature selection method. Since not all anomalous instances of the dataset belong to the DDoS category, we separated only the DDoS packets from the dataset using the selected features. The resulting dataset, named the KDD DDoS dataset, consists of the selected DDoS packets and normal packets. This dataset was discretized using the discretize tool in Weka to obtain better performance. The discretized dataset was then applied to several commonly used classifiers (Naive Bayes, Bayes Net, Decision Table, J48, and Random Forest) to determine their detection rates, and ten-fold cross-validation was used to measure the robustness of the system. To measure the efficiency of our hybrid feature selection method, we also applied the same set of classifiers to the NSL-KDD dataset, where they gave a best anomaly detection rate of 99.72% and an average detection rate of 98.47%; similarly, applying the same classifiers to the NSL DDoS dataset yielded an average DDoS detection rate of 99.01% and a best DDoS detection rate of 99.86%. To compare the performance of our proposed hybrid method, we also applied the existing feature selection methods and measured the detection rate using the same set of classifiers. Our hybrid approach for detecting DDoS attacks gives the best detection rate compared to the existing methods.
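A hybrid ranking step of the general kind described above can be sketched by combining scores from several selection criteria; the mean-rank aggregation rule and the synthetic data below are assumptions, not the authors' exact procedure.

```python
# Sketch: rank features with several criteria, then keep the features that
# score well across methods (mean rank used here as an illustrative rule).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, chi2
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=1)
X_pos = MinMaxScaler().fit_transform(X)            # chi2 requires non-negative features

scores = {
    "info_gain": mutual_info_classif(X, y, random_state=1),
    "chi2": chi2(X_pos, y)[0],
}
ranks = {name: np.argsort(np.argsort(-s)) for name, s in scores.items()}   # 0 = best
mean_rank = np.mean(np.stack(list(ranks.values())), axis=0)
selected = np.argsort(mean_rank)[:8]                # keep the 8 best features on average
print("selected feature indices:", selected)
```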
2021-02-10
Banerjee, R., Baksi, A., Singh, N., Bishnu, S. K..  2020.  Detection of XSS in web applications using Machine Learning Classifiers. 2020 4th International Conference on Electronics, Materials Engineering Nano-Technology (IEMENTech). :1—5.
Considering the amount of time we spend on the Internet, web pages have evolved rapidly over time. With such advancement, we find ourselves confronting hostile activity that breaches the security of webpages. The most hazardous of these is Cross-Site Scripting (XSS), one of the attacks that most frequently occur in web-based applications. XSS attacks happen when malicious data enters a web application through an untrusted source. Such spam attacks appear in wall posts, news feeds, and message spam, and mostly when a user downloads webpage content. This paper investigates the use of machine learning to build classifiers for the detection of XSS. Our approach targets the detection of XSS attacks via two feature sources: URLs and JavaScript. To predict the level of XSS threat, we use four machine learning algorithms (SVM, KNN, Random Forest, and Logistic Regression). Using these classifiers, webpages are labeled as malicious or benign. After assessing and calculating the dataset features, we concluded that the Random Forest classifier performed most accurately, with the lowest False Positive Rate of 0.34. This precision provides an efficient method for evaluating XSS threats so that the system keeps functioning smoothly.
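A minimal sketch of URL-based XSS classification with the four algorithms named above follows; the character n-gram features and the tiny URL list are illustrative assumptions, not the paper's feature set or corpus.

```python
# Sketch: character n-gram features over URLs fed to SVM, KNN, Random Forest,
# and Logistic Regression. A real experiment would use a labelled URL corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

urls = [
    "http://example.com/page?id=1",
    "http://example.com/search?q=<script>alert(1)</script>",
    "http://example.com/profile?name=alice",
    "http://example.com/view?msg=<img src=x onerror=alert(1)>",
]
labels = [0, 1, 0, 1]                                  # 1 = XSS payload present

X = TfidfVectorizer(analyzer="char", ngram_range=(2, 4)).fit_transform(urls)
for clf in (SVC(), KNeighborsClassifier(n_neighbors=1),
            RandomForestClassifier(), LogisticRegression(max_iter=1000)):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.predict(X))
```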
2021-01-20
Rashid, A., Siddique, M. J., Ahmed, S. M..  2020.  Machine and Deep Learning Based Comparative Analysis Using Hybrid Approaches for Intrusion Detection System. 2020 3rd International Conference on Advancements in Computational Sciences (ICACS). :1—9.

Intrusion detection is one of the most prominent and challenging problems faced by cybersecurity organizations. An Intrusion Detection System (IDS) plays a vital role in identifying network security threats and protects the network from vulnerable source code, viruses, worms, and unauthorized intruders for many intranet/Internet applications. Despite many open-source APIs and tools for intrusion detection, many network security problems still exist. These problems are handled through proper pre-processing, normalization, feature selection, and ranking of benchmark dataset attributes prior to applying self-learning-based classification algorithms. In this paper, we perform a comprehensive comparative analysis of the benchmark datasets NSL-KDD and CIDDS-001. To obtain optimal results, we use hybrid feature selection and ranking methods before applying self-learning (machine/deep learning) classification approaches such as SVM, Naïve Bayes, k-NN, Neural Networks, DNN, and DAE. We analyze the performance of the IDS through prominent performance indicator metrics such as Accuracy, Precision, Recall, and F1-Score. The experimental results show that the k-NN, SVM, NN, and DNN classifiers achieve approximately 100% accuracy on the NSL-KDD dataset, whereas the k-NN and Naïve Bayes classifiers achieve approximately 99% accuracy on the CIDDS-001 dataset.

2021-01-15
Amerini, I., Galteri, L., Caldelli, R., Bimbo, A. Del.  2019.  Deepfake Video Detection through Optical Flow Based CNN. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :1205—1207.
Recent advances in visual media technology have led to new tools for processing and, above all, generating multimedia contents. In particular, modern AI-based technologies have provided easy-to-use tools to create extremely realistic manipulated videos. Such synthetic videos, named deepfakes, may constitute a serious threat, used to attack the reputation of public subjects or to sway general opinion on a certain event. Accordingly, being able to identify this kind of fake information becomes fundamental. In this work, a new forensic technique able to discern between fake and original video sequences is presented; unlike other state-of-the-art methods, which operate on single video frames, we propose the adoption of optical flow fields to exploit possible inter-frame dissimilarities. Such a clue is then used as a feature to be learned by CNN classifiers. Preliminary results obtained on the FaceForensics++ dataset highlight very promising performance.
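As a rough illustration of the inter-frame cue, the sketch below computes a dense optical-flow field between two consecutive frames with OpenCV's Farneback routine; the paper does not necessarily use this particular flow estimator, and the frames here are synthetic.

```python
# Sketch: dense optical flow between consecutive frames, which could then be
# fed to a CNN classifier. Frames are synthetic placeholders.
import cv2
import numpy as np

frame1 = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # grayscale frame t
frame2 = np.roll(frame1, shift=2, axis=1)                         # grayscale frame t+1

flow = cv2.calcOpticalFlowFarneback(
    frame1, frame2, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
print(flow.shape)   # (240, 320, 2): per-pixel horizontal and vertical displacement
```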
2020-12-28
Raju, R. S., Lipasti, M..  2020.  BlurNet: Defense by Filtering the Feature Maps. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :38—46.

Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary either by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the RP2 algorithm. To remove the high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
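The core BlurNet idea, a fixed depthwise blur applied to the feature maps, can be sketched as follows; the uniform 3x3 averaging kernel and the channel count are assumptions standing in for whatever blur kernels the paper actually uses.

```python
# Sketch: a fixed, non-trainable depthwise "blur" convolution that low-pass
# filters each feature map after the first layer.
import numpy as np
import tensorflow as tf

channels = 32
blur = tf.keras.layers.DepthwiseConv2D(kernel_size=3, padding="same",
                                        use_bias=False, trainable=False)
blur.build((None, 32, 32, channels))
blur.set_weights([np.full((3, 3, channels, 1), 1.0 / 9.0, dtype=np.float32)])  # uniform averaging kernel

feature_maps = tf.random.normal((1, 32, 32, channels))   # stand-in for first-layer output
smoothed = blur(feature_maps)                             # high-frequency content attenuated
print(smoothed.shape)
```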

2020-11-09
Kemp, C., Calvert, C., Khoshgoftaar, T..  2018.  Utilizing Netflow Data to Detect Slow Read Attacks. 2018 IEEE International Conference on Information Reuse and Integration (IRI). :108–116.
Attackers can leverage several techniques to compromise computer networks, ranging from sophisticated malware to DDoS (Distributed Denial of Service) attacks that target the application layer. Application layer DDoS attacks, such as Slow Read, are implemented with just enough traffic to tie up CPU or memory resources, causing web and application servers to go offline. Such attacks can mimic legitimate network requests, making them difficult to detect, and they utilize less volume than traditional DDoS attacks. These low-volume attack methods can often go undetected by network security solutions until it is too late. In this paper, we explore the use of machine learners for detecting Slow Read DDoS attacks on web servers at the application layer. Our approach uses a generated dataset based upon Netflow data collected at the application layer in a live network environment. Our Netflow data uses the IP Flow Information Export (IPFIX) standard, providing significant flexibility and features. These Netflow features can process and handle a growing amount of traffic and have worked well in our previous DDoS work detecting evasion techniques. Our generated dataset consists of real-world network data collected from a production network. We use eight different classifiers to build Slow Read attack detection models. Our wide selection of learners provides us with a more comprehensive analysis of Slow Read detection models. Experimental results show that the machine learners were quite successful in identifying the Slow Read attacks with a high detection rate and a low false alarm rate. The experiment demonstrates that our chosen Netflow features are discriminative enough to detect such attacks accurately.
2020-11-04
Apruzzese, G., Colajanni, M., Ferretti, L., Marchetti, M..  2019.  Addressing Adversarial Attacks Against Security Systems Based on Machine Learning. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1—18.

Machine-learning solutions are successfully adopted in multiple contexts but the application of these techniques to the cyber security domain is complex and still immature. Among the many open issues that affect security systems based on machine learning, we concentrate on adversarial attacks that aim to affect the detection and prediction capabilities of machine-learning models. We consider realistic types of poisoning and evasion attacks targeting security solutions devoted to malware, spam and network intrusion detection. We explore the possible damages that an attacker can cause to a cyber detector and present some existing and original defensive techniques in the context of intrusion detection systems. This paper contains several performance evaluations that are based on extensive experiments using large traffic datasets. The results highlight that modern adversarial attacks are highly effective against machine-learning classifiers for cyber detection, and that existing solutions require improvements in several directions. The paper paves the way for more robust machine-learning-based techniques that can be integrated into cyber security platforms.

2020-10-30
Basu, Kanad, Elnaggar, Rana, Chakrabarty, Krishnendu, Karri, Ramesh.  2019.  PREEMPT: PReempting Malware by Examining Embedded Processor Traces. 2019 56th ACM/IEEE Design Automation Conference (DAC). :1—6.

Anti-virus software (AVS) tools are used to detect Malware in a system. However, software-based AVS are vulnerable to attacks. A malicious entity can exploit these vulnerabilities to subvert the AVS. Recently, hardware components such as Hardware Performance Counters (HPC) have been used for Malware detection. In this paper, we propose PREEMPT, a zero overhead, high-accuracy and low-latency technique to detect Malware by re-purposing the embedded trace buffer (ETB), a debug hardware component available in most modern processors. The ETB is used for post-silicon validation and debug and allows us to control and monitor the internal activities of a chip, beyond what is provided by the Input/Output pins. PREEMPT combines these hardware-level observations with machine learning-based classifiers to preempt Malware before it can cause damage. There are many benefits of re-using the ETB for Malware detection. It is difficult to hack into hardware compared to software, and hence, PREEMPT is more robust against attacks than AVS. PREEMPT does not incur performance penalties. Finally, PREEMPT has a high True Positive value of 94% and maintains a low False Positive value of 2%.

2020-10-29
Vi, Bao Ngoc, Noi Nguyen, Huu, Nguyen, Ngoc Tran, Truong Tran, Cao.  2019.  Adversarial Examples Against Image-based Malware Classification Systems. 2019 11th International Conference on Knowledge and Systems Engineering (KSE). :1—5.

Malicious software, known as malware, has become an urgent and serious threat to computer security, so automatic malware classification techniques have received increasing attention. In recent years, deep learning (DL) techniques for computer vision have been successfully applied to malware classification by visualizing malware files and then using DL to classify the visualized images. Although DL-based classification systems have been proven to be much more accurate than conventional ones, these systems have been shown to be vulnerable to adversarial attacks. However, there has been little research considering the danger of adversarial attacks to visualized image-based malware classification systems. This paper proposes a gradient-based adversarial attack method that attacks image-based malware classification systems by introducing perturbations on the resource section of PE files. The experimental results on the Malimg dataset show that, with a small interference, the proposed method achieves a high attack success rate against convolutional neural network malware classifiers.
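The key constraint, perturbing only the image region that maps to the PE resource section, can be illustrated with a masked gradient-sign step; the toy model, the mask coordinates, and the FGSM-style update are assumptions and not the paper's exact attack.

```python
# Sketch: a gradient-sign perturbation restricted by a mask to one region of the
# visualized malware image, mimicking "perturb only the resource section".
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 1)),
    tf.keras.layers.Dense(25, activation="softmax"),     # 25 families as in Malimg, toy weights
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

x = tf.random.uniform((1, 64, 64, 1))                    # visualized malware binary (placeholder)
y = tf.constant([3])                                     # its true family label
mask = np.zeros((1, 64, 64, 1), dtype=np.float32)
mask[:, 40:56, :, :] = 1.0                               # assumed location of the resource section

with tf.GradientTape() as tape:
    tape.watch(x)
    loss = loss_fn(y, model(x))
grad = tape.gradient(loss, x)
x_adv = tf.clip_by_value(x + 0.05 * tf.sign(grad) * mask, 0.0, 1.0)
```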

2020-10-05
Cruz, Rodrigo Santa, Fernando, Basura, Cherian, Anoop, Gould, Stephen.  2018.  Neural Algebra of Classifiers. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). :729—737.

The world is fundamentally compositional, so it is natural to think of visual recognition as the recognition of basic visual primitives that are composed according to well-defined rules. This strategy allows us to recognize unseen complex concepts from simple visual primitives. However, the current trend in visual recognition follows a data-greedy approach where huge amounts of data are required to learn models for any desired visual concept. In this paper, we build on the compositionality principle and develop an "algebra" for composing classifiers for complex visual concepts. To this end, we learn neural network modules that perform Boolean algebra operations on simple visual classifiers. Since these modules form a complete functional set, a classifier for any complex visual concept defined as a Boolean expression of primitives can be obtained by recursively applying the learned modules, even if we do not have a single training sample. As our experiments show, using such a framework we can compose classifiers for complex visual concepts that outperform standard baselines on two well-known visual recognition benchmarks. Finally, we present a qualitative analysis of our method and its properties.

2020-09-04
Khan, Aasher, Rehman, Suriya, Khan, Muhammad U.S, Ali, Mazhar.  2019.  Synonym-based Attack to Confuse Machine Learning Classifiers Using Black-box Setting. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1—7.
Twitter, being the most popular content-sharing platform, is giving rise to automated accounts called "bots"; a majority of the users on Twitter are bots. Various machine learning (ML) algorithms are designed to detect bots, avoiding the vulnerability constraints of ML-based models. This paper contributes by exploiting vulnerabilities of machine learning (ML) algorithms through a black-box attack, in which an adversarial text sequence causes deep learning (DL) classifiers for bot detection to misclassify. The literature shows that ML models are vulnerable to attacks. The aim of this paper is to compromise the accuracy of ML-based bot detection algorithms by replacing original words in tweets with their synonyms. Our results show a 7.2% decrease in accuracy for bot tweets, which are therefore classified as legitimate tweets.
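A naive version of the synonym-substitution step can be sketched with WordNet; the choice of which synonym to substitute (here simply the first alternative lemma) is an assumption, whereas a real attack would pick replacements that most degrade the target classifier.

```python
# Sketch: replace words in a tweet with WordNet synonyms so the text keeps its
# meaning for a human but shifts the classifier's input features.
# Requires the NLTK wordnet corpus.
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)

def synonym_swap(text: str) -> str:
    out = []
    for word in text.split():
        syns = wordnet.synsets(word)
        lemmas = [l.name().replace("_", " ") for s in syns for l in s.lemmas()
                  if l.name().lower() != word.lower()]
        out.append(lemmas[0] if lemmas else word)   # naive choice: first alternative lemma
    return " ".join(out)

print(synonym_swap("win a free prize now"))
```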
Song, Chengru, Xu, Changqiao, Yang, Shujie, Zhou, Zan, Gong, Changhui.  2019.  A Black-Box Approach to Generate Adversarial Examples Against Deep Neural Networks for High Dimensional Input. 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC). :473—479.
Generating adversarial samples is gathering much attention as an intuitive approach to evaluating the robustness of learning models. Extensive recent work has demonstrated that numerous advanced image classifiers are defenseless against adversarial perturbations in the white-box setting. However, the white-box setting assumes attackers to have prior knowledge of model parameters, which are generally inaccessible in real-world cases. In this paper, we concentrate on the hard-label black-box setting, where attackers can only pose queries to probe the model parameters responsible for classifying different images. The issue is thereby converted into minimizing a non-continuous function. A black-box approach is proposed to address both the massive number of queries and the non-continuous step-function problem by applying a combination of a linear fine-grained search, Fibonacci search, and a zeroth-order optimization algorithm. However, the input dimension of an image is so high that the gradient estimate is noisy. Hence, we adopt a zeroth-order optimization method suited to high dimensions: the approach converts gradient calculation into a linear regression model and extracts the dimensions that are more significant. Experimental results illustrate that our approach can reduce the number of queries and effectively accelerate convergence of the optimization method.
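For orientation, a basic random-direction zeroth-order gradient estimator (not the paper's regression-based variant) looks like the sketch below; the objective function is a toy placeholder, and the noise level visible in the output is exactly the high-dimensional difficulty the abstract refers to.

```python
# Sketch: zeroth-order gradient estimation from query access only, by averaging
# finite differences along random Gaussian directions.
import numpy as np

def zo_gradient(f, x, sigma=1e-2, n_queries=50, seed=0):
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.standard_normal(x.size)
        g += (f(x + sigma * u) - f(x)) / sigma * u     # directional finite difference
    return g / n_queries

f = lambda z: float(np.sum(z ** 2))                     # toy objective with known gradient 2z
x = np.ones(1000)                                       # high-dimensional input
rel_err = np.linalg.norm(zo_gradient(f, x) - 2 * x) / np.linalg.norm(2 * x)
print("relative error of the estimate:", round(rel_err, 3))
```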
Usama, Muhammad, Qayyum, Adnan, Qadir, Junaid, Al-Fuqaha, Ala.  2019.  Black-box Adversarial Machine Learning Attack on Network Traffic Classification. 2019 15th International Wireless Communications Mobile Computing Conference (IWCMC). :84—89.

Deep machine learning techniques have shown promising results in network traffic classification; however, the robustness of these techniques under adversarial threats is still in question. Deep machine learning models have been found vulnerable to small, carefully crafted adversarial perturbations, posing a major question about the performance of deep machine learning techniques. In this paper, we propose a black-box adversarial attack on network traffic classification. The proposed attack successfully evades deep machine learning-based classifiers, which highlights the potential security threat of using deep machine learning techniques to realize autonomous networks.

2020-08-13
Zola, Francesco, Eguimendia, Maria, Bruse, Jan Lukas, Orduna Urrutia, Raul.  2019.  Cascading Machine Learning to Attack Bitcoin Anonymity. 2019 IEEE International Conference on Blockchain (Blockchain). :10—17.

Bitcoin is a decentralized, pseudonymous cryptocurrency that is one of the most used digital assets to date. Its unregulated nature and the inherent anonymity of users have led to a dramatic increase in its use for illicit activities. This calls for the development of novel methods capable of characterizing different entities in the Bitcoin network. In this paper, a method to attack Bitcoin anonymity is presented, leveraging a novel cascading machine learning approach that requires only a few features directly extracted from Bitcoin blockchain data. Cascading, used to enrich entity information with data from previous classifications, led to considerably improved multi-class classification performance, with excellent Precision values close to 1.0 for each considered class. Final models were implemented and compared using different machine learning models and showed significantly higher accuracy compared to their baseline implementation. Our approach can contribute to the development of effective tools for Bitcoin entity characterization, which may assist in uncovering illegal activities.
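The cascading idea, feeding one classifier's predicted probabilities into the next as extra features, can be sketched as follows; the synthetic data, feature split, and gradient-boosting models stand in for the paper's blockchain features and chosen learners.

```python
# Sketch: a two-stage cascade where the first classifier's class probabilities
# enrich the feature set of the second classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stage1 = GradientBoostingClassifier(random_state=0).fit(X_tr[:, :6], y_tr)   # first view only
enrich_tr = np.hstack([X_tr, stage1.predict_proba(X_tr[:, :6])])             # cascade: append its outputs
enrich_te = np.hstack([X_te, stage1.predict_proba(X_te[:, :6])])

stage2 = GradientBoostingClassifier(random_state=0).fit(enrich_tr, y_tr)
print("cascaded accuracy:", round(stage2.score(enrich_te, y_te), 3))
```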

2020-07-10
Nahmias, Daniel, Cohen, Aviad, Nissim, Nir, Elovici, Yuval.  2019.  TrustSign: Trusted Malware Signature Generation in Private Clouds Using Deep Feature Transfer Learning. 2019 International Joint Conference on Neural Networks (IJCNN). :1—8.

This paper presents TrustSign, a novel, trusted automatic malware signature generation method based on high-level deep features transferred from a VGG-19 neural network model pre-trained on the ImageNet dataset. While traditional automatic malware signature generation techniques rely on static or dynamic analysis of the malware's executable, our method overcomes the limitations associated with these techniques by producing signatures based on the presence of the malicious process in volatile memory. Signatures generated using TrustSign represent the real malware behavior during runtime well. By leveraging the cloud's virtualization technology, TrustSign analyzes the malicious process in a trusted manner, since the malware is unaware of and cannot interfere with the inspection procedure. Additionally, by removing the dependency on the malware's executable, our method is capable of signing fileless malware. We therefore focus our research on in-browser cryptojacking attacks, which current antivirus solutions have difficulty detecting. However, TrustSign is not limited to cryptojacking attacks, as our evaluation included various ransomware samples. TrustSign's signature generation process does not require feature engineering or any additional model training, and it is done in a completely unsupervised manner, obviating the need for a human expert. Therefore, our method has the advantage of dramatically reducing signature generation and distribution time. The results of our experimental evaluation demonstrate TrustSign's ability to generate signatures invariant to the process state over time. By using the signatures generated by TrustSign as input for various supervised classifiers, we achieved 99.5% classification accuracy.
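The deep-feature-transfer step can be sketched with an ImageNet-pretrained VGG-19 used as a fixed feature extractor; how the in-memory process is rendered into an image-like array is not shown here, and the random input is only a placeholder for that representation.

```python
# Sketch: extracting high-level VGG-19 features (ImageNet weights) from an
# image-like representation of a process, as the basis of a signature.
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input

extractor = VGG19(weights="imagenet", include_top=False, pooling="avg")   # 512-d descriptor
dump_as_image = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")
signature = extractor.predict(preprocess_input(dump_as_image), verbose=0)
print(signature.shape)   # (1, 512): a compact deep-feature vector per process
```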

2020-07-03
Usama, Muhammad, Asim, Muhammad, Qadir, Junaid, Al-Fuqaha, Ala, Imran, Muhammad Ali.  2019.  Adversarial Machine Learning Attack on Modulation Classification. 2019 UK/China Emerging Technologies (UCET). :1—4.

Modulation classification is an important component of cognitive self-driving networks. Recently many ML-based modulation classification methods have been proposed. We have evaluated the robustness of 9 ML-based modulation classifiers against the powerful Carlini & Wagner (C-W) attack and showed that the current ML-based modulation classifiers do not provide any deterrence against adversarial ML examples. To the best of our knowledge, we are the first to report the results of the application of the C-W attack for creating adversarial examples against various ML models for modulation classification.