Biblio

Found 116 results

Filters: Keyword is Detectors
Almashfi, Nabil, Lu, Lunjin.  2020.  Code Smell Detection Tool for Java Script Programs. 2020 5th International Conference on Computer and Communication Systems (ICCCS). :172–176.
JavaScript is a client-side scripting language that is widely used in web applications. It is dynamic, loosely-typed and prototype-based with first-class functions. The dynamic nature of JavaScript makes it powerful and highly flexible in almost every way. However, this flexibility may result in what is known as code smells. Code smells are characteristics in the source code of a program that usually correspond to a deeper problem. They can lead to a variety of comprehension and maintenance issues and they may impact fault- and change-proneness of the application in the future. We present TAJSlint, an automated code smell detection tool for JavaScript programs that is based on static analysis. TAJSlint includes a set of 14 code smells, 9 of which are collected from various sources and 5 new smells we propose. We conduct an empirical evaluation of TAJSlint on a number of JavaScript projects and show that TAJSlint achieves an overall precision of 98% with a small number of false positives. We also study the prevalence of code smells in these projects.
Saini, Anu, Sri, Manepalli Ratna, Thakur, Mansi.  2021.  Intrinsic Plagiarism Detection System Using Stylometric Features and DBSCAN. 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). :13–18.
Plagiarism is the act of using someone else's words or ideas without giving them due credit and representing them as one's own work. Today it is very easy to plagiarize others' work thanks to advancements in technology, especially through the Internet, or via offline sources such as books and magazines. Plagiarism detection can be classified into two broad categories, namely extrinsic and intrinsic. Extrinsic plagiarism detection finds plagiarism in a document by comparing it against a given reference dataset, whereas intrinsic plagiarism detection relies on variation in writing style, without using any reference corpus. Although many approaches can be adopted to detect extrinsic plagiarism, few are available for intrinsic plagiarism detection. In this paper, a simplified approach is proposed for developing an intrinsic plagiarism detector that is helpful even when no reference corpus is available. The approach identifies the writing styles of the authors in a document using stylometric features and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The proposed system has an easy-to-use interactive interface where the user uploads a text document to be checked for plagiarism, and the result is displayed on the web page itself. In addition, the user can see an analysis of the document in the form of graphs.
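The clustering step in the abstract above can be sketched with a minimal, dependency-free DBSCAN over per-passage stylometric vectors. This is an illustrative sketch, not the authors' implementation: the feature vectors, `eps`, and `min_pts` values below are hypothetical, and a real system would extract richer stylometric features from the text.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster label per point; -1 marks noise.

    In intrinsic plagiarism detection, passages labelled as noise deviate
    from the dominant writing style and are candidate plagiarized text.
    """
    labels = [None] * len(points)
    cluster = 0

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # stylistic outlier
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:       # border point: absorb into cluster
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # core point: keep expanding
                queue.extend(jn)
        cluster += 1
    return labels

# Hypothetical (mean word length, mean sentence length) per passage:
# four passages in one author's style, one stylistically deviating passage.
features = [(4.1, 18.0), (4.3, 18.5), (4.2, 17.5), (4.0, 18.2), (7.5, 35.0)]
labels = dbscan(features, eps=2.0, min_pts=3)
```

The last passage comes out as noise (`-1`), i.e., a plagiarism suspect, while the first four form one cluster.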
Oshnoei, Soroush, Aghamohammadi, Mohammadreza.  2021.  Detection and Mitigation of Coordinated False Data Injection Attacks in Frequency Control of Power Grids. 2021 11th Smart Grid Conference (SGC). :1–5.
In modern power grids (PGs), load frequency control (LFC) is effectively employed to preserve the frequency within allowable ranges. However, LFC's dependence on information and communication technologies (ICTs) makes PGs vulnerable to cyber attacks. Manipulation of measured data and control commands, known as false data injection attacks (FDIAs), can degrade grid frequency performance and destabilize the PG. This paper investigates the frequency performance of an isolated PG under coordinated FDIAs. A control scheme based on the combination of a Kalman filter, a chi-square detector, and a linear quadratic Gaussian controller is proposed to detect and mitigate the coordinated FDIAs. The efficiency of the proposed control scheme is evaluated under two types of FDIAs: scaling and exogenous attacks. The simulation results demonstrate that the proposed control scheme has significant capabilities to detect and mitigate the designed FDIAs.
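The detection part of such a scheme can be illustrated with a scalar sketch: a Kalman filter tracks the frequency deviation, and a chi-square test on the normalized innovation flags injected data. All numbers below (system gain, noise variances, the 6.63 threshold for a chi-square statistic with one degree of freedom at roughly 1% false alarm) are illustrative assumptions, not the paper's parameters, and the LQG mitigation step is omitted.

```python
def kalman_chi2_detect(measurements, a=0.95, q=0.01, r=0.04, thresh=6.63):
    """Scalar Kalman filter plus a chi-square test on the innovation.

    Flags time steps where nu^2 / S exceeds `thresh`, i.e. the reported
    measurement is statistically inconsistent with the model prediction,
    which is the signature of a false data injection.
    """
    x, p = 0.0, 1.0
    alarms = []
    for k, z in enumerate(measurements):
        x_pred = a * x                 # state prediction
        p_pred = a * a * p + q         # covariance prediction
        nu = z - x_pred                # innovation
        s = p_pred + r                 # innovation variance
        if nu * nu / s > thresh:
            alarms.append(k)
        gain = p_pred / s              # Kalman update
        x = x_pred + gain * nu
        p = (1 - gain) * p_pred
    return alarms

# Nominal frequency deviation is zero; an attacker injects a 2.0 bias at k=10.
attacked = [0.0] * 10 + [2.0] + [0.0] * 5
alarms = kalman_chi2_detect(attacked)
```

The detector fires at the injection step (and briefly afterwards, while the filter re-converges); a clean trace raises no alarm.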
Ajiri, Victor, Butakov, Sergey, Zavarsky, Pavol.  2020.  Detection Efficiency of Static Analyzers against Obfuscated Android Malware. 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). :231–234.
Mobile antivirus technologies incorporate static analysis, which involves analyzing programs without executing them. This process relies on pattern matching against a signature repository to identify malware, and can be easily tricked by transformation techniques such as obfuscation. Obfuscation as an evasion technique renders character strings disguised and incomprehensible to prevent tampering and reverse engineering, which makes it a valuable technique for malware developers seeking to evade detection. This paper studies the detection efficiency of static analyzers against obfuscated Android malware. The study is the first step in a larger project attempting to improve the efficiency of malware detectors.
Akowuah, Francis, Kong, Fanxin.  2021.  Real-Time Adaptive Sensor Attack Detection in Autonomous Cyber-Physical Systems. 2021 IEEE 27th Real-Time and Embedded Technology and Applications Symposium (RTAS). :237–250.
Cyber-Physical Systems (CPS) tightly couple information technology with physical processes, which raises new vulnerabilities such as physical attacks that are beyond conventional cyber attacks. Attackers may non-invasively compromise sensors and spoof the controller into performing unsafe actions. This issue is amplified by the increasing autonomy in CPS. While this fact has motivated many defense mechanisms against sensor attacks, a clear vision of the timing and usability (or the false alarm rate) of attack detection still remains elusive. Existing works tend to pursue the unachievable goal of minimizing the detection delay and false alarm rate at the same time, although there is a clear trade-off between the two metrics. Instead, we argue that attack detection should bias different metrics when the system sits in different states. For example, if the system is close to unsafe states, reducing the detection delay is preferable to lowering the false alarm rate, and vice versa. To achieve this, we propose a real-time adaptive sensor attack detection framework. The framework can dynamically adapt the detection delay and false alarm rate so as to meet a detection deadline and improve usability according to the system's status. The core component of this framework is an attack detector that identifies anomalies based on a CUSUM algorithm, by monitoring the cumulative sum of differences (or residuals) between the nominal (predicted) and observed sensor values. We augment this algorithm with a drift parameter that governs the detection delay and false alarm rate. The second component is a behavior predictor that estimates the nominal sensor values fed to the core component for calculating the residuals. The predictor uses a deep learning model, trained offline on sensor data, that combines a convolutional neural network (CNN) with a recurrent neural network (RNN). The model relies on little knowledge of the system (e.g., its dynamics), but uncovers and exploits both the local and the complex long-term dependencies in multivariate sequential sensor measurements. The third component is a drift adaptor that estimates a detection deadline and then determines the drift parameter fed to the detector component for adjusting the detection delay and false alarms. Finally, we implement the proposed framework and validate it using realistic sensor data of automotive CPS to demonstrate its efficiency and efficacy.
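The core CUSUM idea, including the drift parameter that trades detection delay against false alarms, can be sketched in a few lines. The residual trace, drift values, and threshold below are hypothetical; the paper's actual detector works on residuals produced by its CNN/RNN predictor.

```python
def cusum_detect(residuals, drift, threshold):
    """One-sided CUSUM over the magnitude of prediction residuals.

    A larger `drift` tolerates more per-step deviation, lowering the false
    alarm rate at the price of a longer detection delay -- the exact
    trade-off the drift adaptor tunes at run time.
    """
    s, alarms = 0.0, []
    for k, r in enumerate(residuals):
        s = max(0.0, s + abs(r) - drift)
        if s > threshold:
            alarms.append(k)
            s = 0.0                    # restart after signalling
    return alarms

# Small nominal residuals, then a sensor attack adds a 1.0 bias from k=20.
residuals = [0.1] * 20 + [1.0] * 10
fast = cusum_detect(residuals, drift=0.3, threshold=2.0)   # biased to speed
slow = cusum_detect(residuals, drift=0.6, threshold=2.0)   # biased to usability
```

With the smaller drift the first alarm fires two steps after the attack begins; with the larger drift it takes five steps, but the detector would also suppress more noise-induced alarms.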
Barros, Bettina D., Venkategowda, Naveen K. D., Werner, Stefan.  2021.  Quickest Detection of Stochastic False Data Injection Attacks with Unknown Parameters. 2021 IEEE Statistical Signal Processing Workshop (SSP). :426–430.
This paper considers a multivariate quickest detection problem with false data injection (FDI) attacks in internet of things (IoT) systems. We derive a sequential generalized likelihood ratio test (GLRT) for zero-mean Gaussian FDI attacks. Exploiting the fact that attack covariance matrices are positive semi-definite, we propose strategies to detect positive semi-definite matrix additions rather than arbitrary changes in the covariance matrix. The distribution of the GLRT is only known asymptotically, whereas quickest detectors deal with short sequences, leading to a loss of performance. Therefore, we use a finite-sample correction to reduce the false alarm rate. Further, we provide a numerical approach to estimate the threshold sequences, which are analytically intractable to compute. We also compare the average detection delay of the proposed detector for constant and varying threshold sequences. Simulations show that the proposed detector outperforms the standard sequential GLRT detector.
Ji, Xiaoyu, Cheng, Yushi, Zhang, Yuepeng, Wang, Kai, Yan, Chen, Xu, Wenyuan, Fu, Kevin.  2021.  Poltergeist: Acoustic Adversarial Machine Learning against Cameras and Computer Vision. 2021 IEEE Symposium on Security and Privacy (SP). :160–175.
Autonomous vehicles increasingly exploit computer-vision-based object detection systems to perceive environments and make critical driving decisions. To increase the quality of images, image stabilizers with inertial sensors are added to alleviate image blurring caused by camera jitters. However, such a trend opens a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of the emerging image stabilizer hardware susceptible to acoustic manipulation and the object detection algorithms subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even if the camera is stable. The blurred images can then induce object misclassification affecting safety-critical decision making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks, i.e., hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against four academic object detectors (YOLO V3/V4/V5 and Fast R-CNN), and one commercial detector (Apollo). We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.
Sultana, Habiba, Kamal, A H M.  2021.  Image Steganography System based on Hybrid Edge Detector. 2021 24th International Conference on Computer and Information Technology (ICCIT). :1–6.
In the field of image steganography, edge-detection-based embedding methods play a vital role in providing stronger security for hidden data. In this arena, researchers apply a suitable edge detection method to detect edge pixels in an image; those detected pixels then conceal the secret message bits. A very recent trend is to employ multiple edge detection methods to increase the number of edge pixels in an image and thus enhance the embedding capacity. The use of multiple edge detectors additionally boosts data security. Alongside the demand for embedding capacity, many applications need the modified image, i.e., the stego image, to have good quality. Indeed, when the message payload is low, it is not a good idea to find more local pixels just to embed that small payload; rather, the image will look better, visually and statistically, if we choose a smaller but sufficient set of pixels to implant the bits. In this article, we propose an algorithm that uses multiple edge detection algorithms to find edge pixels separately and then selects the pixels that are common to all edge maps. This way, the proposed method decreases the number of embeddable pixels and thus increases the image quality. The experimental results are promising.
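The pixel-selection idea above, keeping only pixels that every detector agrees are edges, can be sketched with two toy detectors (a Sobel magnitude test and a 4-neighbour Laplacian test) on a tiny synthetic image. The image, thresholds, and choice of detectors are illustrative assumptions, not the paper's configuration.

```python
def sobel_edges(img, t):
    """Pixels whose Sobel gradient magnitude exceeds threshold t."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2 * img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > t:
                edges.add((y, x))
    return edges

def laplacian_edges(img, t):
    """Pixels whose 4-neighbour Laplacian response exceeds threshold t."""
    h, w = len(img), len(img[0])
    return {(y, x)
            for y in range(1, h - 1) for x in range(1, w - 1)
            if abs(img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x]) > t}

# 5x5 image: a vertical step edge between columns 1 and 2, plus one dark speck.
img = [[0, 0, 255, 255, 255] for _ in range(5)]
img[3][3] = 0
sobel = sobel_edges(img, 100)
laplace = laplacian_edges(img, 100)
common = sobel & laplace           # only pixels all detectors agree on
```

On this image the Laplacian also fires on the isolated speck at (3, 3), but the intersection discards it: fewer, more reliable pixels remain to carry payload bits, which is exactly the quality argument the abstract makes.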
Hussain, Shehzeen, Neekhara, Paarth, Jere, Malhar, Koushanfar, Farinaz, McAuley, Julian.  2021.  Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). :3347–3356.
Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before. Manipulated videos can fuel disinformation and reduce trust in media. Therefore detection of fake videos has garnered immense interest in academia and industry. Recently developed Deepfake detection methods rely on Deep Neural Networks (DNNs) to distinguish AI-generated fake videos from real videos. In this work, we demonstrate that it is possible to bypass such detectors by adversarially modifying fake videos synthesized using existing Deepfake generation methods. We further demonstrate that our adversarial perturbations are robust to image and video compression codecs, making them a real-world threat. We present pipelines in both white-box and black-box attack scenarios that can fool DNN based Deepfake detectors into classifying fake videos as real.
Khalil, Hady A., Maged, Shady A..  2021.  Deepfakes Creation and Detection Using Deep Learning. 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC). :1–4.
Deep learning has been used in a wide range of applications such as computer vision, natural language processing and image detection. Advances in deep learning algorithms for image detection and manipulation have led to the creation of deepfakes: deepfakes use deep learning algorithms to create fake images that are at times very hard to distinguish from real ones. With rising concern around personal privacy and security, many methods to detect deepfake images have emerged. In this paper, the use of deep learning for creating as well as detecting deepfakes is explored, and a deep learning image enhancement method is proposed to improve the quality of the deepfakes created.
Sunil, Ajeet, Sheth, Manav Hiren, E, Shreyas, Mohana.  2021.  Usual and Unusual Human Activity Recognition in Video using Deep Learning and Artificial Intelligence for Security Applications. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–6.
The main objective of Human Activity Recognition (HAR) is to detect various activities in video frames. Video surveillance is an important application for various security purposes, so it is essential to classify activities as usual or unusual. This paper implements a deep learning model that classifies and localizes detected activities using a Single Shot Detector (SSD) algorithm with a bounding box, explicitly trained to detect usual and unusual activities for security surveillance applications. The model can further be deployed in public places to improve the safety and security of individuals. The SSD model is designed and trained using a transfer learning approach, and performance evaluation metrics are visualized using the TensorBoard tool. The paper further discusses the challenges of real-time implementation.
Bernardi, Simona, Javierre, Raúl, Merseguer, José, Requeno, José Ignacio.  2021.  Detectors of Smart Grid Integrity Attacks: an Experimental Assessment. 2021 17th European Dependable Computing Conference (EDCC). :75–82.
Today, cyber-attacks on critical infrastructures can cause outages, economic loss, and physical damage to people and the environment, among other harms. In particular, the smart grid is one of the main targets. In this paper, we develop and evaluate software detectors for integrity attacks on smart meter readings. The detectors rely upon different techniques and models, such as autoregressive models, clustering, and neural networks. Our evaluation considers different “attack scenarios” resembling the plethora of attacks found in recent years. Starting from previous works in the literature, we carry out a detailed experimentation and analysis so as to identify which detectors best fit each attack scenario. Our results contradict some findings of previous works and also shed light on choosing the techniques that can best address attacks on smart meters.
Itria, Massimiliano Leone, Schiavone, Enrico, Nostro, Nicola.  2021.  Towards anomaly detection in smart grids by combining Complex Events Processing and SNMP objects. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :212–217.
This paper describes the architecture and the fundamental methodology of an anomaly detector, which by continuously monitoring Simple Network Management Protocol data and by processing it as complex-events, is able to timely recognize patterns of faults and relevant cyber-attacks. This solution has been applied in the context of smart grids, and in particular as part of a security and resilience component of the Information and Communication Technologies (ICT) Gateway, a middleware-based architecture that correlates and fuses measurement data from different sources (e.g., Inverters, Smart Meters) to provide control coordination and to enable grid observability applications. The detector has been evaluated through experiments, where we selected some representative anomalies that can occur on the ICT side of the energy distribution infrastructure: non-malicious faults (indicated by patterns in the system resources usage), as well as effects of typical cyber-attacks directed to the smart grid infrastructure. The results show that the detection is promisingly fast and efficient.
Wang, Xiying, Ni, Rongrong, Li, Wenjie, Zhao, Yao.  2021.  Adversarial Attack on Fake-Faces Detectors Under White and Black Box Scenarios. 2021 IEEE International Conference on Image Processing (ICIP). :3627–3631.
Generative Adversarial Network (GAN) models have been widely used in various fields. More recently, StyleGAN and StyleGAN2 have been developed to synthesize faces that are indistinguishable to the human eye, which could pose a threat to public security. However, recent work has shown that it is possible to identify fakes using powerful CNN classifiers. The reliability of these techniques, though, is unknown. Therefore, in this paper we focus on generating content-preserving images from fake faces to spoof classifiers. Two GAN-based frameworks are proposed to achieve this goal in white-box and black-box settings. For the white-box setting, a network without up/down sampling is proposed to generate face images that confuse the classifier. In the black-box setting (where the classifier is unknown), real data is introduced as guidance for the GAN structure to make it adversarial, and a Real Extractor is added as an auxiliary network to constrain the feature distance between the generated images and the real data, enhancing the adversarial capability. Experimental results show that the proposed method effectively reduces the detection accuracy of forensic models, with good transferability.
Lee, Jungbeom, Yi, Jihun, Shin, Chaehun, Yoon, Sungroh.  2021.  BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :2643–2651.
Weakly supervised segmentation methods using bounding box annotations focus on obtaining a pixel-level mask from each box containing an object. Existing methods typically depend on a class-agnostic mask generator, which operates on the low-level information intrinsic to an image. In this work, we utilize higher-level information from the behavior of a trained object detector, by seeking the smallest areas of the image from which the object detector produces almost the same result as it does from the whole image. These areas constitute a bounding-box attribution map (BBAM), which identifies the target object in its bounding box and thus serves as pseudo ground-truth for weakly supervised semantic and instance segmentation. This approach significantly outperforms recent comparable techniques on both the PASCAL VOC and MS COCO benchmarks in weakly supervised semantic and instance segmentation. In addition, we provide a detailed analysis of our method, offering deeper insight into the behavior of the BBAM.
Xu, Baoyue, Du, Dajun, Zhang, Changda, Zhang, Jin.  2021.  A Honeypot-based Attack Detection Method for Networked Inverted Pendulum System. 2021 40th Chinese Control Conference (CCC). :8645–8650.
The data transmitted via the network may be vulnerable to cyber attacks in a networked inverted pendulum system (NIPS); how to detect such attacks is a challenging issue. To solve this problem, this paper investigates a honeypot-based attack detection method for NIPS. Firstly, a honeypot for NIPS attack detection (namely NipsPot) is constructed from a deceptive environment module of a virtual closed-loop control system, and the stealthiness of typical covert attacks is analysed. Secondly, attack data is collected by NipsPot and used to train a support vector machine (SVM) model for attack detection. Finally, simulation results demonstrate that the NipsPot-based attack detector achieves an accuracy of 99.78%, a precision of 98.75%, and a recall of 100%.
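The accuracy, precision, and recall figures quoted above come straight from confusion-matrix counts, which is worth making concrete. The counts below are hypothetical numbers chosen only to produce metrics of the same shape as those reported; they are not the paper's dataset sizes.

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard detector metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)     # how many alarms were real attacks
    recall = tp / (tp + fn)        # 100% recall = no attack missed
    return accuracy, precision, recall

# Hypothetical run: 790 attacks all detected, 10 false alarms, no misses.
accuracy, precision, recall = detection_metrics(tp=790, fp=10, tn=3746, fn=0)
```

These counts give roughly 99.78% accuracy, exactly 98.75% precision, and 100% recall: every attack caught, at the cost of a handful of false alarms.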
He, Zhangying, Miari, Tahereh, Makrani, Hosein Mohammadi, Aliasgari, Mehrdad, Homayoun, Houman, Sayadi, Hossein.  2021.  When Machine Learning Meets Hardware Cybersecurity: Delving into Accurate Zero-Day Malware Detection. 2021 22nd International Symposium on Quality Electronic Design (ISQED). :85–90.
Cybersecurity for the past decades has been in the front line of global attention as a critical threat to information technology infrastructures. According to recent security reports, malicious software (a.k.a. malware) is rising at an alarming rate in both number and harmful intent to compromise the security of computing systems. To address the high complexity and computational overheads of conventional software-based detection techniques, Hardware-Supported Malware Detection (HMD) has proved to be efficient for detecting malware at the processors' microarchitecture level with the aid of Machine Learning (ML) techniques applied to Hardware Performance Counter (HPC) data. Existing ML-based HMDs, while accurate in recognizing known signatures of malicious patterns, have not explored detecting unknown (zero-day) malware at run-time, which is a more challenging problem, since its HPC data does not match the signature of any known attack application in the existing database. In this work, we first present a review of recent ML-based HMDs utilizing built-in HPC registers. Next, we examine the suitability of various standard ML classifiers for zero-day malware detection and demonstrate that such methods are not capable of detecting unknown malware signatures with a high detection rate. Lastly, to address the challenge of run-time zero-day malware detection, we propose an ensemble learning-based technique to enhance the performance of standard malware detectors despite using a small number of microarchitectural features that are captured at run-time by existing HPCs. The experimental results demonstrate that our proposed approach, by applying AdaBoost ensemble learning with a Random Forest as the base classifier, achieves a 92% F-measure and 95% TPR with only a 2% false positive rate in detecting zero-day malware using only the top 4 microarchitectural features.
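The boosting idea can be illustrated with a minimal AdaBoost over one-feature decision stumps. This is a deliberate simplification: the paper boosts a Random Forest base classifier over several HPC features, so the stump learner and the toy feature values here are assumptions made for the sake of a self-contained sketch.

```python
import math

def train_adaboost(xs, ys, rounds=3):
    """AdaBoost over threshold stumps on a single feature (ys in {-1, +1}).

    Each round picks the stump with the lowest weighted error, then
    re-weights the samples so later rounds focus on the mistakes.
    """
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):
            for sign in (1, -1):
                pred = [sign if x > t else -sign for x in xs]
                err = sum(wi for wi, p, y in zip(w, pred, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, sign, pred)
        err, t, sign, pred = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, t, sign))
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, pred)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
    return 1 if score > 0 else -1

# Hypothetical single HPC-derived feature (e.g., a normalized cache-miss
# rate): low for benign traces (-1), high for malware traces (+1).
xs = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys, rounds=3)
```

A trained ensemble then classifies unseen feature values by weighted vote, mirroring how the boosted detector labels run-time HPC samples.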
Shen, Cheng, Liu, Tian, Huang, Jun, Tan, Rui.  2021.  When LoRa Meets EMR: Electromagnetic Covert Channels Can Be Super Resilient. 2021 IEEE Symposium on Security and Privacy (SP). :1304–1317.
Due to the low power of electromagnetic radiation (EMR), the EM covert channel has been widely considered a short-range attack that can be easily mitigated by shielding. This paper overturns this common belief by demonstrating how covert EM signals leaked from typical laptops, desktops and servers are decoded from hundreds of meters away, or penetrate aggressive shielding previously considered sufficient to ensure emission security. We achieve this by designing EMLoRa – a super-resilient EM covert channel that exploits memory as a LoRa-like radio. EMLoRa represents the first attempt at designing an EM covert channel using state-of-the-art spread spectrum technology. It tackles a set of unique challenges, such as handling the complex spectral characteristics of EMR, tolerating signal distortions caused by CPU contention, and preventing adversarial detectors from demodulating covert signals. Experiment results show that EMLoRa boosts communication range by 20x and improves attenuation resilience by up to 53 dB when compared with prior EM covert channels at the same bit rate. By achieving this, EMLoRa allows an attacker to circumvent a security perimeter, breach a Faraday cage, and localize air-gapped devices in a wide area using just a small number of inexpensive sensors. To counter EMLoRa, we further explore the feasibility of uncovering EMLoRa's signal using energy- and CNN-based detectors. Experiments show that both detectors suffer from limited range, allowing EMLoRa to gain a significant range advantage. Our results call for further research on countermeasures against spread-spectrum-based EM covert channels.
Silva, Douglas Simões, Graczyk, Rafal, Decouchant, Jérémie, Völp, Marcus, Esteves-Verissimo, Paulo.  2021.  Threat Adaptive Byzantine Fault Tolerant State-Machine Replication. 2021 40th International Symposium on Reliable Distributed Systems (SRDS). :78–87.
Critical infrastructures have to withstand advanced and persistent threats, which can be addressed using Byzantine fault tolerant state-machine replication (BFT-SMR). In practice, unattended cyberdefense systems rely on threat level detectors that synchronously inform them of changing threat levels. However, to have a BFT-SMR protocol operate unattended, the state of the art is still to configure it to withstand the highest possible number of faulty replicas f it might encounter, which limits performance, or to make the strong assumption that a trusted external reconfiguration service is available, which introduces a single point of failure. In this work, we present ThreatAdaptive, the first BFT-SMR protocol that is automatically strengthened or optimized by its replicas in reaction to threat level changes. We first determine under which conditions replicas can safely reconfigure a BFT-SMR system, i.e., adapt the number of replicas n and the fault threshold f so as to outpace an adversary. Since replicas typically communicate with each other over an asynchronous network, they cannot rely on consensus to decide how the system should be reconfigured. ThreatAdaptive avoids this pitfall by proactively preparing, while it optimizes its performance, the reconfiguration that may be triggered by an increasing threat. Our evaluation shows that ThreatAdaptive can meet the latency and throughput of BFT baselines configured statically for a particular level of threat, and adapt 30% faster than previous methods, which make stronger assumptions to provide safety.
Kim, Jaewon, Ko, Woo-Hyun, Kumar, P. R..  2021.  Cyber-Security through Dynamic Watermarking for 2-rotor Aerial Vehicle Flight Control Systems. 2021 International Conference on Unmanned Aircraft Systems (ICUAS). :1277–1283.
We consider the problem of security for unmanned aerial vehicle flight control systems. To provide a concrete setting, we consider the security problem in the context of a helicopter which is compromised by a malicious agent that distorts elevation measurements to the control loop. This is a particular example of the problem of the security of stochastic control systems under erroneous observation measurements caused by malicious sensors within the system. In order to secure the control system, we consider dynamic watermarking, where a private random excitation signal is superimposed onto the control input of the flight control system. An attack detector at the actuator can then check if the reported sensor measurements are appropriately correlated with the private random excitation signal. This is done via two specific statistical tests whose violation signifies an attack. We apply dynamic watermarking technique to a 2-rotor-based 3-DOF helicopter control system test-bed. We demonstrate through both simulation and experimental results the performance of the attack detector on two attack models: a stealth attack, and a random bias injection attack.
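The watermarking check described above reduces to a correlation test: the actuator superimposes a private random excitation and verifies that reported measurements remain correlated with it. The toy model below (measurements that directly reflect the excitation, a 0.5 correlation threshold, a Gaussian replay attacker) is a hypothetical sketch, not the paper's two specific statistical tests.

```python
import random

def correlation(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def watermark_ok(reported, excitation, min_corr=0.5):
    """Accept the sensor only if its reports track the private excitation."""
    return correlation(reported, excitation) >= min_corr

rng = random.Random(42)                      # fixed seed: repeatable demo
excitation = [rng.gauss(0, 1) for _ in range(200)]

# Honest sensor: measurements carry the excitation plus small noise.
honest = [e + rng.gauss(0, 0.1) for e in excitation]
# Replay attacker: plausible-looking readings that ignore the excitation.
spoofed = [rng.gauss(0, 1) for _ in range(200)]
```

The honest trace passes the check while the spoofed trace fails it: because the attacker never observes the private excitation, its fabricated readings cannot reproduce the correlation.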
Rao, Poojith U., Sodhi, Balwinder, Sodhi, Ranjana.  2020.  Cyber Security Enhancement of Smart Grids Via Machine Learning - A Review. 2020 21st National Power Systems Conference (NPSC). :1–6.
The evolution of the power system into a smart grid (SG) has not only enhanced the monitoring and control capabilities of the power grid, but also raised security concerns and vulnerabilities. With the boom in the Internet of Things (IoT), a lot of sensors are being deployed across the grid, resulting in a huge amount of data available for processing and analysis. Machine learning (ML) and deep learning (DL) algorithms are being widely used to extract useful information from this data. In this context, this paper presents a comprehensive literature survey of different ML and DL techniques that have been used in the smart grid cyber security area. The survey summarizes the different types of cyber threats to which today's SGs are prone, followed by various ML- and DL-assisted defense strategies. The effectiveness of the ML-based methods in enhancing the cyber security of SGs is also demonstrated with the help of a case study.
Fang, Hao, Zhang, Tao, Cai, Yueming, Zhang, Linyuan, Wu, Hao.  2020.  Detection Schemes of Illegal Spectrum Access Behaviors in Multiple Authorized Users Scenario. 2020 International Conference on Wireless Communications and Signal Processing (WCSP). :933–938.
In this paper, our aim is to detect illegal spectrum access behaviors. Firstly, we detect whether the channel is busy; if it is, we then determine whether illegal users are present. To get closer to the actual situation, we consider a more general scenario where multiple users are authorized to work on the same channel under certain interference control strategies, and model it as a ternary hypothesis test using the generalized multi-hypothesis Neyman-Pearson criterion. Considering the various potential combinations of multiple authorized users, the spectrum detection process utilizes a two-step detector. We adopt the Generalized Likelihood Ratio Test (GLRT) and the Rao test to detect illegal spectrum access behaviors. Moreover, the Wald test is proposed, which offers a compromise between computational complexity and performance. The relevant formulas of the three detection schemes are derived. Finally, comprehensive and in-depth simulations verify the effectiveness of the proposed schemes, showing the best detection performance under different numbers of authorized-user samples and different performance constraints. In addition, we illustrate the probability of detecting illegal behaviors under different parameters of the illegal behaviors and different sets of authorized users' states under the Wald test.
Sun, Yixin, Jee, Kangkook, Sivakorn, Suphannee, Li, Zhichun, Lumezanu, Cristian, Korts-Parn, Lauri, Wu, Zhenyu, Rhee, Junghwan, Kim, Chung Hwan, Chiang, Mung et al..  2020.  Detecting Malware Injection with Program-DNS Behavior. 2020 IEEE European Symposium on Security and Privacy (EuroS P). :552–568.
Analyzing the DNS traffic of Internet hosts has been a successful technique to counter cyberattacks and identify connections to malicious domains. However, recent stealthy attacks hide malicious activities within seemingly legitimate connections to popular web services made by benign programs. Traditional DNS monitoring and signature-based detection techniques are ineffective against such attacks. To tackle this challenge, we present a new program-level approach that can effectively detect such stealthy attacks. Our method builds a fine-grained Program-DNS profile for each benign program that characterizes what should be the “expected” DNS behavior. We find that malware-injected processes have DNS activities which significantly deviate from the Program-DNS profile of the benign program. We then develop six novel features based on the Program-DNS profile, and evaluate the features on a dataset of over 130 million DNS requests collected from a real-world enterprise and 8 million requests from malware-samples executed in a sandbox environment. We compare our detection results with that of previously-proposed features and demonstrate that our new features successfully detect 190 malware-injected processes which fail to be detected by previously-proposed features. Overall, our study demonstrates that fine-grained Program-DNS profiles can provide meaningful and effective features in building detectors for attack campaigns that bypass existing detection systems.
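The notion of a Program-DNS profile can be sketched as a whitelist of domains observed across benign runs of a program, plus a deviation test over a process's observed queries. The domain names and the 20% new-domain threshold below are hypothetical, and this single ratio is a simplified stand-in for the paper's six profile-based features.

```python
def build_profile(benign_runs):
    """Program-DNS profile: the set of domains a benign program queries."""
    profile = set()
    for run in benign_runs:
        profile |= set(run)
    return profile

def deviates(profile, observed, max_new_ratio=0.2):
    """Flag a process whose DNS activity strays from its program's profile."""
    new = [d for d in observed if d not in profile]
    return len(new) / len(observed) > max_new_ratio

# Hypothetical benign runs of an updater program.
profile = build_profile([
    ["update.example.com", "cdn.example.com"],
    ["update.example.com", "telemetry.example.com"],
])

clean = ["update.example.com", "cdn.example.com", "update.example.com"]
injected = ["update.example.com", "evil-c2.example.net", "evil-c2.example.net"]
```

A clean process stays within its profile, while a malware-injected process querying an unfamiliar domain exceeds the deviation threshold and is flagged, mirroring the profile-deviation intuition in the abstract.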
Zhong, Zhenyu, Hu, Zhisheng, Chen, Xiaowei.  2020.  Quantifying DNN Model Robustness to the Real-World Threats. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :150–157.
DNN models have suffered from adversarial example attacks, which lead to inconsistent prediction results. As opposed to the gradient-based attack, which assumes white-box access to the model by the attacker, we focus on more realistic input perturbations from the real-world and their actual impact on the model robustness without any presence of the attackers. In this work, we promote a standardized framework to quantify the robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics to measure the minimal perturbation that causes the offense, and various criteria that reflect different aspects of the model robustness. By revealing comparison results through this framework among 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators, we deliver the status quo of the real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
Bhowmick, Chandreyee, Jagannathan, S..  2020.  Availability-Resilient Control of Uncertain Linear Stochastic Networked Control Systems. 2020 American Control Conference (ACC). :4016–4021.
The resilient output feedback control of a linear networked control system (NCS) with uncertain dynamics in the presence of Gaussian noise is presented under denial of service (DoS) attacks on the communication networks. The DoS attacks on the sensor-to-controller (S-C) and controller-to-actuator (C-A) networks induce random packet losses. The NCS is viewed as a jump linear system, where the linear NCS matrices are a function of the induced losses, which are considered unknown. A set of novel correlation detectors is introduced to detect packet drops in the network channels using the properties of Gaussian noise. By using an augmented system representation, an output feedback Q-learning based control scheme is designed for the jump linear NCS with uncertain dynamics to cope with the changing values of the mean packet losses. Simulation results are included to support the theoretical claims.