
Found 4496 results

Filters: Keyword is Resiliency
2019-12-10
Ponuma, R, Amutha, R, Haritha, B.  2018.  Compressive Sensing and Hyper-Chaos Based Image Compression-Encryption. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1-5.

A 2D-Compressive Sensing and hyper-chaos based image compression-encryption algorithm is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos based measurement matrix construction is employed. The construction of the measurement matrix is controlled by the initial and control parameters of the chaotic system, which are used as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos based diffusion which results in the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
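
To make the key-controlled construction concrete, here is a minimal Python sketch of a chaos-driven measurement matrix. It uses the logistic map purely for illustration (the paper employs a hyper-chaotic system), and all parameter values are assumptions:

```python
import numpy as np

def chaotic_measurement_matrix(m, n, x0=0.37, r=3.99):
    """Build an m x n measurement matrix from a logistic-map orbit.

    The initial value x0 and the control parameter r play the role of the
    secret key: a different key yields a completely different matrix.
    """
    x, seq = x0, np.empty(m * n)
    for i in range(m * n):
        x = r * x * (1.0 - x)         # logistic-map iteration
        seq[i] = x
    # Center and scale so that columns have roughly unit norm.
    return (seq.reshape(m, n) - 0.5) * np.sqrt(12.0 / m)

# Key-dependent compressive sampling of a (flattened) coefficient block.
block = np.random.rand(64)            # stand-in for sparse image coefficients
phi = chaotic_measurement_matrix(32, 64)
measurements = phi @ block            # later diffused by the hyper-chaotic stage
```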

Tian, Yun, Xu, Wenbo, Qin, Jing, Zhao, Xiaofan.  2018.  Compressive Detection of Random Signals from Sparsely Corrupted Measurements. 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC). :389-393.

Compressed sensing (CS) integrates sampling and compression into a single step to reduce the amount of data to be processed. However, CS reconstruction generally suffers from high complexity. To address this problem, compressive signal processing (CSP) has recently been proposed to carry out some signal processing tasks directly in the compressive domain, without reconstruction. Among the various CSP techniques, compressive detection performs signal detection directly on the CS measurements. This paper investigates the compressive detection problem for random signals when the measurements are corrupted. Unlike current studies that consider only dense noise, our study considers both dense noise and sparse error. The theoretical performance is derived, and simulations are provided to verify the derived theoretical results.
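
As an illustration of detection in the compressive domain, the following sketch applies a plain energy detector to the CS measurements. This is a generic compressive detector, not the paper's exact test statistic, it omits the sparse-error handling that is the paper's contribution, and the threshold is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 256, 64, 0.3            # ambient dim, measurement dim, noise std
phi = rng.standard_normal((m, n)) / np.sqrt(n)   # random sensing matrix

def detect(y, threshold):
    """Energy detector operating directly on the compressive measurements."""
    return float(y @ y) > threshold

# H1: random Gaussian signal buried in dense noise; H0: dense noise only.
signal = rng.standard_normal(n)
y1 = phi @ (signal + sigma * rng.standard_normal(n))
y0 = phi @ (sigma * rng.standard_normal(n))

tau = 3 * m * sigma**2                # illustrative threshold between the two means
print(detect(y1, tau), detect(y0, tau))   # expected: True False
```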

Shiddik, Luthfi Rakha, Novamizanti, Ledya, Ramatryana, I N Apraz Nyoman, Hanifan, Hasya Azqia.  2019.  Compressive Sampling for Robust Video Watermarking Based on BCH Code in SWT-SVD Domain. 2019 International Conference on Sustainable Engineering and Creative Computing (ICSECC). :223-227.

The security and confidentiality of data can be guaranteed by a technique called watermarking. In this study, compressive sampling is designed and analyzed for video watermarking. Before the compression process, the watermark is encoded with the Bose Chaudhuri Hocquenghem code (BCH code). The watermark is then processed using the Discrete Sine Transform (DST) and Discrete Wavelet Transform (DWT), and inserted into the host video using the Stationary Wavelet Transform (SWT) and Singular Value Decomposition (SVD) methods. Our system obtains a PSNR of 47.269 dB, an MSE of 1.712, and a BER of 0.080, and is resistant to Gaussian blur and rescaling attacks.
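
The SVD embedding step can be illustrated with a small numpy sketch. This is a generic singular-value embedding under simplifying assumptions (non-blind extraction, a synthetic host with well-separated singular values), not the authors' exact SWT-SVD pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_svd(host, wm_bits, alpha=0.05):
    """Embed +/-1 watermark bits by perturbing the singular values of a
    host band; returns the marked band and the original singular values
    (needed here for non-blind extraction)."""
    u, s, vt = np.linalg.svd(host)
    marked = u @ np.diag(s + alpha * wm_bits) @ vt
    return marked, s

def extract_svd(marked, s_orig, alpha=0.05):
    s_marked = np.linalg.svd(marked, compute_uv=False)
    return np.sign((s_marked - s_orig) / alpha)

# A host with well-separated singular values stands in for one SWT sub-band.
q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
host = q @ np.diag(np.linspace(10.0, 3.0, 8)) @ q.T
wm = rng.choice([-1.0, 1.0], size=8)   # watermark bits (after BCH encoding)
marked, s_orig = embed_svd(host, wm)
print(np.array_equal(extract_svd(marked, s_orig), wm))   # True when unattacked
```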

Zhou, Guorui, Zhu, Xiaoqiang, Song, Chenru, Fan, Ying, Zhu, Han, Ma, Xiao, Yan, Yanghui, Jin, Junqi, Li, Han, Gai, Kun.  2018.  Deep Interest Network for Click-Through Rate Prediction. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :1059-1068.

Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently proposed deep learning based models follow a similar Embedding&MLP paradigm: large-scale sparse input features are first mapped into low-dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated to be fed into a multilayer perceptron (MLP) that learns the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of the candidate ads. This fixed-length vector becomes a bottleneck, making it difficult for Embedding&MLP methods to capture users' diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model, the Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit that adaptively learns the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies across different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques, mini-batch aware regularization and a data adaptive activation function, which help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has been successfully deployed in the online display advertising system at Alibaba, serving the main traffic.
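
The local activation idea can be sketched in a few lines. Here relevance is a simple dot product followed by a softmax, a stand-in for the small feed-forward network that DIN actually learns (DIN's weights are also not softmax-normalized):

```python
import numpy as np

def local_activation(behaviors, candidate_ad):
    """Weight historical behavior embeddings by their relevance to the
    candidate ad, then pool them into an ad-specific user vector."""
    scores = behaviors @ candidate_ad          # (T,) relevance per behavior
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the T behaviors
    return weights @ behaviors                 # ad-specific user representation

T, d = 5, 8
behaviors = np.random.rand(T, d)               # T past behaviors, d-dim embeddings
ad1, ad2 = np.random.rand(d), np.random.rand(d)
# Unlike fixed-length sum pooling, the user vector differs per candidate ad.
print(local_activation(behaviors, ad1) - local_activation(behaviors, ad2))
```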

Tai, Kai Sheng, Sharan, Vatsal, Bailis, Peter, Valiant, Gregory.  2018.  Sketching Linear Classifiers over Data Streams. Proceedings of the 2018 International Conference on Management of Data. :757-772.

We introduce a new sub-linear space sketch—the Weight-Median Sketch—for learning compressed linear classifiers over data streams while supporting the efficient recovery of large-magnitude weights in the model. This enables memory-limited execution of several statistical analyses over streams, including online feature selection, streaming data explanation, relative deltoid detection, and streaming estimation of pointwise mutual information. Unlike related sketches that capture the most frequently-occurring features (or items) in a data stream, the Weight-Median Sketch captures the features that are most discriminative of one stream (or class) compared to another. The Weight-Median Sketch adopts the core data structure used in the Count-Sketch, but, instead of sketching counts, it captures sketched gradient updates to the model parameters. We provide a theoretical analysis that establishes recovery guarantees for batch and online learning, and demonstrate empirical improvements in memory-accuracy trade-offs over alternative memory-budgeted methods, including count-based sketches and feature hashing.
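
A simplified sketch of the underlying mechanism, with a single Count-Sketch row instead of the several rows whose median the Weight-Median Sketch takes (so recovery here uses one estimate rather than a median):

```python
import numpy as np

class SketchedClassifier:
    """Count-Sketch-style storage of linear-model weights: gradient
    updates are sketched instead of raw feature counts."""

    def __init__(self, width, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.bucket = rng.integers(0, width, size=dim)   # hash h(i)
        self.sign = rng.choice([-1.0, 1.0], size=dim)    # sign hash s(i)
        self.table = np.zeros(width)

    def update(self, grad_idx, grad_val, lr=0.1):
        """Apply a sparse gradient update to the sketched weights."""
        for i, g in zip(grad_idx, grad_val):
            self.table[self.bucket[i]] -= lr * self.sign[i] * g

    def weight(self, i):
        """Estimate the weight of feature i from the sketch."""
        return self.sign[i] * self.table[self.bucket[i]]

clf = SketchedClassifier(width=64, dim=10_000)
clf.update([3, 42, 9999], [0.5, -1.2, 0.3])
print(clf.weight(42))   # approximate weight, up to hash collisions
```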

Deng, Lijin, Piao, Yan, Liu, Shuo.  2018.  Research on SIFT Image Matching Based on MLESAC Algorithm. Proceedings of the 2nd International Conference on Digital Signal Processing. :17-21.

Differences between sensor devices and camera position offsets lead to geometric differences between the images being matched. The traditional SIFT image matching algorithm produces a large number of incorrect matching point pairs, resulting in low matching accuracy. To solve this problem, a SIFT image matching method based on the Maximum Likelihood Estimation Sample Consensus (MLESAC) algorithm is proposed. Compared with the traditional SIFT, SURF, and RANSAC feature matching algorithms, the proposed algorithm effectively removes false matching feature point pairs during the image matching process. Experimental results show that the proposed algorithm achieves higher matching accuracy and faster matching efficiency.
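
The MLESAC scoring idea, shown here for 2D line fitting rather than the homography estimation used in image matching, replaces RANSAC's hard inlier count with a mixture log-likelihood (Gaussian inliers plus uniform outliers). Parameter values are illustrative:

```python
import numpy as np

def mlesac_line(points, iters=200, sigma=0.5, nu=20.0, gamma=0.5, seed=0):
    """Robustly fit y = a*x + b: each 2-point hypothesis is scored by a
    mixture log-likelihood instead of a hard inlier count."""
    rng = np.random.default_rng(seed)
    best_model, best_ll = None, -np.inf
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                          # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        r = points[:, 1] - (a * points[:, 0] + b)          # residuals
        inlier = gamma * np.exp(-r**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
        outlier = (1.0 - gamma) / nu          # uniform density over a window of size nu
        ll = np.log(inlier + outlier).sum()
        if ll > best_ll:
            best_ll, best_model = ll, (a, b)
    return best_model

x = np.linspace(0, 10, 50)
inliers = np.column_stack([x, 2 * x + 1 + 0.1 * np.random.randn(50)])
outliers = np.random.uniform(0, 20, (20, 2))
print(mlesac_line(np.vstack([inliers, outliers])))         # approx (2.0, 1.0)
```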

Feng, Chenwei, Wang, Xianling, Zhang, Zewang.  2018.  Data Compression Scheme Based on Discrete Sine Transform and Lloyd-Max Quantization. Proceedings of the 3rd International Conference on Intelligent Information Processing. :46-51.

With the growth in mobile equipment and transmission data, the Common Public Radio Interface (CPRI) between the Building Base band Unit (BBU) and the Remote Radio Unit (RRU) must carry increasing amounts of data. Compressing the data on the CPRI is essential if more data is to be transferred without congestion while limiting fiber consumption. A data compression scheme based on the Discrete Sine Transform (DST) and Lloyd-Max quantization is proposed for the distributed Base Station (BS) architecture. The time-domain samples are transformed by the DST according to the characteristics of Orthogonal Frequency Division Multiplexing (OFDM) baseband signals, and the resulting coefficients are then quantized by the Lloyd-Max quantizer. Simulation results show that the proposed scheme can operate at various Compression Ratios (CRs) while keeping the Error Vector Magnitude (EVM) within the limits specified by 3GPP.
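
A minimal sketch of the two stages, using SciPy's DST and a textbook Lloyd-Max iteration; the block length and level count are illustrative, not the paper's configuration:

```python
import numpy as np
from scipy.fft import dst, idst

def lloyd_max(samples, levels=16, iters=50):
    """Iteratively fit quantizer levels to the sample distribution by
    alternating nearest-neighbor partitioning and centroid updates."""
    codebook = np.linspace(samples.min(), samples.max(), levels)
    for _ in range(iters):
        edges = (codebook[:-1] + codebook[1:]) / 2    # decision boundaries
        idx = np.digitize(samples, edges)
        for k in range(levels):                       # centroid condition
            sel = samples[idx == k]
            if sel.size:
                codebook[k] = sel.mean()
    return codebook

x = np.random.randn(2048)                             # stand-in for OFDM time samples
coeffs = dst(x, type=2, norm='ortho')                 # DST of the baseband block
codebook = lloyd_max(coeffs)
edges = (codebook[:-1] + codebook[1:]) / 2
quantized = codebook[np.digitize(coeffs, edges)]      # compressed representation
x_hat = idst(quantized, type=2, norm='ortho')         # receiver-side recovery
```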

Huang, Lilian, Zhu, Zhonghang.  2018.  Compressive Sensing Image Reconstruction Using Super-Resolution Convolutional Neural Network. Proceedings of the 2nd International Conference on Digital Signal Processing. :80-83.

Compressed sensing (CS) can recover a signal that is sparse in some representation from samples taken at a rate far below the Nyquist rate. However, limited by the accuracy of atom matching in traditional reconstruction algorithms, CS struggles to reconstruct the original signal at high resolution. Meanwhile, researchers have found that trained neural networks have a strong ability to solve such inverse problems. We therefore propose a Super-Resolution Convolutional Neural Network (SRCNN) consisting of three convolutional layers, each with a fixed number of kernels and its own specific function. A classical compressed sensing algorithm first processes the input image; the output images are then refined via the SRCNN, yielding higher-resolution results. Simulation results show that the proposed method improves the PSNR and the visual quality.
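
A minimal PyTorch sketch of a three-layer SRCNN, using the classic 9-1-5 kernel configuration with 64 and 32 filters (the paper's exact hyper-parameters may differ):

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN: patch extraction, non-linear mapping, reconstruction."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Refine a CS-reconstructed image: the coarse output of a classical CS solver
# goes in; after training against ground truth, a sharpened estimate comes out.
coarse = torch.rand(1, 1, 64, 64)
print(SRCNN()(coarse).shape)   # torch.Size([1, 1, 64, 64])
```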

Sun, Jie, Yu, Jiancheng, Zhang, Aiqun, Song, Aijun, Zhang, Fumin.  2018.  Underwater Acoustic Intensity Field Reconstruction by Kriged Compressive Sensing. Proceedings of the Thirteenth ACM International Conference on Underwater Networks & Systems. :5:1-5:8.

This paper presents a novel Kriged Compressive Sensing (KCS) approach for the reconstruction of underwater acoustic intensity fields sampled by multiple gliders following sawtooth sampling patterns. Blank areas between the sampling trajectories may cause unsatisfactory reconstruction results. The KCS method leverages spatial statistical correlation properties of the acoustic intensity field being sampled to improve the compressive reconstruction process. Virtual data samples generated from a kriging method are inserted into the blank areas. We show that by using the virtual samples along with real samples, the acoustic intensity field can be reconstructed with higher accuracy when coherent spatial patterns exist. Corresponding algorithms are developed for both unweighted and weighted KCS methods. By distinguishing the virtual samples from real samples through weighting, the reconstruction results can be further improved. Simulation results show that both algorithms can improve the reconstruction results according to the PSNR and SSIM metrics. The methods are applied to process the ocean ambient noise data collected by the Sea-Wing acoustic gliders in the South China Sea.

Cui, Wenxue, Jiang, Feng, Gao, Xinwei, Zhang, Shengping, Zhao, Debin.  2018.  An Efficient Deep Quantized Compressed Sensing Coding Framework of Natural Images. Proceedings of the 26th ACM International Conference on Multimedia. :1777-1785.

Traditional image compressed sensing (CS) coding frameworks solve an inverse problem based on measurement coding tools (prediction, quantization, entropy coding, etc.) and optimization-based image reconstruction methods. These frameworks face the challenge of improving coding efficiency at the encoder while suffering from high computational complexity at the decoder. In this paper, we take a step forward and propose a novel deep network based CS coding framework for natural images, consisting of three sub-networks: a sampling sub-network, an offset sub-network, and a reconstruction sub-network, responsible for sampling, quantization, and reconstruction, respectively. By cooperatively utilizing these sub-networks, the framework can be trained end to end with a proposed rate-distortion optimization loss function. The proposed framework not only improves coding performance but also dramatically reduces the computational cost of image reconstruction. Experimental results on benchmark datasets demonstrate that the proposed method achieves superior rate-distortion performance against state-of-the-art methods.
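
Training end to end through a quantizer is the delicate part, since rounding has zero gradient almost everywhere. The following toy sketch (not the paper's architecture; all layer sizes are assumptions) uses a straight-through estimator so gradients flow through rounding:

```python
import torch
import torch.nn as nn

class RoundSTE(torch.autograd.Function):
    """Round in the forward pass; pass gradients straight through in the
    backward pass so the quantizer stays trainable."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)
    @staticmethod
    def backward(ctx, g):
        return g

class TinyCSCodec(nn.Module):
    """Toy end-to-end CS codec: learned sampling, quantization, reconstruction,
    a schematic stand-in for the paper's three sub-networks."""
    def __init__(self, n=256, m=64, step=0.5):
        super().__init__()
        self.sample = nn.Linear(n, m, bias=False)   # sampling sub-network
        self.recon = nn.Sequential(nn.Linear(m, 512), nn.ReLU(), nn.Linear(512, n))
        self.step = step

    def forward(self, x):
        y = self.sample(x)
        y_q = RoundSTE.apply(y / self.step) * self.step   # quantizer with STE
        return self.recon(y_q)

model = TinyCSCodec()
x = torch.rand(8, 256)
loss = ((model(x) - x) ** 2).mean()   # distortion term of a rate-distortion loss
loss.backward()                        # gradients flow despite the rounding
```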

Braverman, Mark, Kol, Gillat.  2018.  Interactive Compression to External Information. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :964-977.

We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the participants' private inputs to an observer that watches the communication can be simulated by a new protocol that communicates at most poly(I) · log log(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost.

Huang, Xuping.  2018.  Mechanism and Implementation of Watermarked Sample Scanning Method for Speech Data Tampering Detection. Proceedings of the 2nd International Workshop on Multimedia Privacy and Security. :54-60.

The integrity and reliability of speech data are important issues for probative use. Watermarking technologies supply an alternative to digital signatures for guaranteeing the authenticity of data. This work proposes a novel digital watermarking scheme based on a reversible compression algorithm with sample scanning to detect tampering in the time domain. To detect tampering precisely, the digital speech data is divided into fixed-length frames, and content-based hash information of each frame is calculated and embedded into the speech data for verification. The Huffman compression algorithm is applied to each four sampling bits, starting from the least significant bit of each sample after pulse-code modulation processing, to achieve low distortion and high capacity for the hidden payload. Experiments on audio quality, detection precision, and robustness against attacks show the effectiveness of tampering detection, with a precision error of around 0.032 s for a 10 s speech clip. Distortion is imperceptible, with an average signal-to-noise ratio of 22.068 dB for the Huffman-based method and 24.139 dB for the intDCT-based method, and an average MOS of 3.478 (Huffman-based) and 4.378 (intDCT-based). The bit error rate (BER) between the stego data and the attacked stego data in both the time and frequency domains is approximately 28.6% on average, which indicates the robustness of the proposed hiding method.
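
The frame-wise hash-and-embed mechanism can be sketched as follows. Plain LSB replacement stands in for the paper's Huffman/intDCT hiding, and the frame length and digest length are illustrative assumptions:

```python
import hashlib
import numpy as np

def embed_frame_hashes(samples, frame_len=1024, bits=16):
    """Embed a truncated SHA-256 of each frame's upper bits into the frame's
    first LSBs, so a later recomputation can localize tampering per frame."""
    out = samples.copy()
    for start in range(0, len(out) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        digest = hashlib.sha256((frame >> 1).tobytes()).digest()  # LSB-independent
        for i in range(bits):
            bit = (digest[i // 8] >> (i % 8)) & 1
            frame[i] = (frame[i] & ~1) | bit       # overwrite LSB only
    return out

def verify_frame(samples, start, frame_len=1024, bits=16):
    """Recompute the hash; a mismatch localizes tampering to this frame."""
    frame = samples[start:start + frame_len]
    digest = hashlib.sha256((frame >> 1).tobytes()).digest()
    stored = [(frame[i] & 1) for i in range(bits)]
    expected = [(digest[i // 8] >> (i % 8)) & 1 for i in range(bits)]
    return stored == expected

pcm = np.random.randint(-2**14, 2**14, size=4096, dtype=np.int16)
marked = embed_frame_hashes(pcm)
print(verify_frame(marked, 0))        # True until the frame is modified
```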

2019-12-09
Sandberg, Henrik.  2018.  Control Theory for Practical Cyber-Physical Security: Extended Abstract. Proceedings of the 4th ACM Workshop on Cyber-Physical System Security. :25–26.
In this talk, we discuss how control theory can contribute to the analysis and design of secure cyber-physical systems. We start by reviewing conditions for undetectable false-data injection attacks on feedback control systems. In particular, we highlight how a physical understanding of the controlled process can guide us in the allocation of protective measures. We show that protecting only a few carefully selected actuators or sensors can give indirect protection to many more components. We then illustrate how such analysis is exploited in the design of a resilient control scheme for a microgrid energy management system.
Kuznetsov, Petr, Rieutord, Thibault, He, Yuan.  2018.  An Asynchronous Computability Theorem for Fair Adversaries. Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing. :387–396.
This paper proposes a simple topological characterization of a large class of fair adversarial models via affine tasks: sub-complexes of the second iteration of the standard chromatic subdivision. We show that the task computability of a model in the class is precisely captured by iterations of the corresponding affine task. Fair adversaries include, but are not restricted to, the models of wait-freedom, t-resilience, and k-concurrency. Our results generalize and improve all previously derived topological characterizations of the ability of a model to solve distributed tasks.
Yifrach, Assaf, Mansour, Yishay.  2018.  Fair Leader Election for Rational Agents in Asynchronous Rings and Networks. Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing. :217–226.
We study a game theoretic model where a coalition of processors might collude to bias the outcome of the protocol, where we assume that the processors always prefer any legitimate outcome over a non-legitimate one. We show that the problems of Fair Leader Election and Fair Coin Toss are equivalent, and focus on Fair Leader Election. Our main focus is on a directed asynchronous ring of n processors, where we investigate the protocol proposed by Abraham et al. [4] and studied in Afek et al. [5]. We show that in general the protocol is resilient only to sub-linear size coalitions. Specifically, we show that Ω(√(n log n)) randomly located processors or Ω(n^(1/3)) adversarially located processors can force any outcome. We complement this by showing that the protocol is resilient to any adversarial coalition of size O(n^(1/4)). We propose a modification to the protocol, and show that it is resilient to every coalition of size Θ(√n), by exhibiting both an attack and a resilience result. For every k ≥ 1, we define a family of graphs G_k that can be simulated by trees where each node in the tree simulates at most k processors. We show that for every graph in G_k, there is no fair leader election protocol that is resilient to coalitions of size k. Our result generalizes a previous result of Abraham et al. [4] that states that for every graph, there is no fair leader election protocol which is resilient to coalitions of size ⌈n/2⌉.
Bangalore, Laasya, Choudhury, Ashish, Patra, Arpita.  2018.  Almost-Surely Terminating Asynchronous Byzantine Agreement Revisited. Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing. :295–304.
The problem of Byzantine Agreement (BA) is of interest to both the distributed computing and cryptography communities. Following well-known results from the distributed computing literature, the BA problem in the asynchronous network setting encounters inevitable non-termination issues. The impasse is overcome via randomization, which allows the construction of BA protocols with two flavours of termination guarantee: with overwhelming probability and with probability one. The latter type, termed almost-surely terminating BAs, are the focus of this paper. An eluding problem in the domain of almost-surely terminating BAs is achieving a constant expected running time. Our work makes progress in this direction. In a setting with n parties and an adversary with unbounded computing power controlling at most t parties in Byzantine fashion, we present two asynchronous almost-surely terminating BA protocols: With the optimal resilience of t < n/3, our first protocol runs for expected O(n) time. The existing protocols in the same setting either run for expected O(n²) time (Abraham et al, PODC 2008) or require exponential computing power from the honest parties (Wang, CoRR 2015). In terms of communication complexity, our construction outperforms all the known constructions that offer the almost-surely terminating feature. With a resilience of t < n/(3+ε) for any ε > 0, our second protocol runs for expected O(1/ε) time. The expected running time of our protocol becomes constant when ε is a constant fraction. The known constructions with constant expected running time either require ε to be at least 1 (Feldman-Micali, STOC 1988), implying t < n/4, or call for exponential computing power from the honest parties (Wang, CoRR 2015). We follow the traditional route of building BA via a common coin protocol that in turn reduces to asynchronous verifiable secret sharing (AVSS). Our constructions are built on a variant of AVSS termed shunning: a shunning AVSS fails to offer the properties of AVSS when the corrupt parties strike, but allows the honest parties to locally detect and shun a set of corrupt parties for any future communication. Our shunning AVSS with t < n/3 and t < n/(3+ε) guarantee Ω(n) and Ω(εt²) conflicts, respectively, to be revealed when failure occurs. Turning this shunning AVSS into a common coin protocol constitutes another contribution of our paper.
Sel, Daniel, Zhang, Kaiwen, Jacobsen, Hans-Arno.  2018.  Towards Solving the Data Availability Problem for Sharded Ethereum. Proceedings of the 2nd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers. :25–30.
The success and growing popularity of blockchain technology has led to a significant increase in load on popular permissionless blockchains such as Ethereum. With the current design, these blockchain systems do not scale with additional nodes since every node executes every transaction. Further efforts are therefore necessary to develop scalable permissionless blockchain systems. In this paper, we provide an aggregated overview of the current research on the Ethereum blockchain towards solving the scalability challenge. We focus on the concept of sharding, which aims to break the restriction of every participant being required to execute every transaction and store the entire state. This concept, however, introduces new complexities in the form of stateless clients, which leads to a new challenge: how to guarantee that critical data is published and stays available for as long as it is relevant. We present an approach towards solving the data availability problem (DAP) that leverages synergy effects by reusing the validators from Casper. We then propose two distinct approaches for reliable collation proposal, state transition, and state verification in shard chains. One approach is based on verification by committees of Casper validators that execute transactions in proposed blocks using witness data provided by executors. The other approach relies on a proof of execution provided by the executor proposing the block and a challenge game, where other executors verify the proof. Both concepts rely on executors for long-term storage of shard chain state.
van der Veen, Rosa, Hakkerainen, Viola, Peeters, Jeroen, Trotto, Ambra.  2018.  Understanding Transformations Through Design: Can Resilience Thinking Help? Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction. :694–702.
The interaction design community increasingly addresses how digital technologies may contribute to societal transformations. This paper aims at understanding a transformation ignited by a particular constructive design research project. This transformation will be discussed and analysed using resilience thinking, an established approach within sustainability science. By creating a common language between these two disciplines, we start to identify what kind of transformation took place, what factors played a role in the transformation, and which transformative qualities played a role in creating these factors. Our intention is to set out how the notion of resilience might provide a new perspective to understand how constructive design research may produce results that have a sustainable social impact. The findings point towards ways in which these two different perspectives on transformation - the analytical perspective of resilience thinking and the generative perspective of constructive design research - may become complementary in both igniting and understanding transformations.
Correia, Andreia, Felber, Pascal, Ramalhete, Pedro.  2018.  Romulus: Efficient Algorithms for Persistent Transactional Memory. Proceedings of the 30th on Symposium on Parallelism in Algorithms and Architectures. :271–282.
Byte addressable persistent memory eliminates the need for serialization and deserialization of data, to and from persistent storage, allowing applications to interact with it through common store and load instructions. In the event of a process or system failure, applications rely on persistence techniques to provide consistent storage of data in non-volatile memory (NVM). For most of these techniques, consistency is ensured through logging of updates, with consequent intensive cache line flushing and persistence fences necessary to guarantee correctness. Undo log based approaches require store interposition and persistence fences before each in-place modification. Redo log based techniques can execute transactions using just two persistence fences, although they require store and load interposition, which may incur a performance penalty for large transactions. So far, these techniques have been difficult to integrate with known memory allocators, requiring allocators or garbage collectors specifically designed for NVM. We present Romulus, a user-level persistent transactional memory (PTM) library which provides durable transactions through the use of twin copies of the data. A transaction in Romulus requires at most four persistence fences, regardless of the transaction size. Romulus uses only store interposition. Any sequential implementation of a memory allocator can be adapted to work with Romulus. Thanks to its lightweight design and low synchronization overhead, Romulus achieves twice the throughput of current state-of-the-art PTMs in update-only workloads, and more than one order of magnitude in read-mostly scenarios.
2019-12-05
Chao, Chih-Min, Lee, Wei-Che, Wang, Cong-Xiang, Huang, Shin-Chung, Yang, Yu-Chich.  2018.  A Flexible Anti-Jamming Channel Hopping for Cognitive Radio Networks. 2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW). :549-551.

In cognitive radio networks (CRNs), secondary users (SUs) are vulnerable to malicious attacks because an SU node's opportunistic access cannot be protected from adversaries. How to design a channel hopping scheme that protects SU nodes from jamming attacks is thus an important issue in CRNs. Existing anti-jamming channel hopping schemes have some limitations: some require SU nodes to exchange secrets in advance; some require an SU node to be either a receiver or a sender; and some are not flexible enough. Another issue with existing anti-jamming channel hopping schemes is that they do not consider that different nodes may have different traffic loads. In this paper, we propose an anti-jamming channel hopping protocol, the Load Awareness Anti-jamming channel hopping (LAA) scheme. Nodes running LAA are able to change their channel hopping sequences based on their sending and receiving traffic. Simulation results verify that LAA outperforms existing anti-jamming schemes.

Avila, J, Prem, S, Sneha, R, Thenmozhi, K.  2018.  Mitigating Physical Layer Attack in Cognitive Radio - A New Approach. 2018 International Conference on Computer Communication and Informatics (ICCCI). :1-4.

With the improvement in technology and the increasing use of wireless devices, there is a deficiency of radio spectrum. Cognitive radio is considered a solution to this problem. A cognitive radio can detect which communication channels are in use and which are free, and immediately move into free channels while avoiding occupied ones, increasing the utilization of the radio frequency spectrum. Like any wireless system, cognitive radio is prone to attack; the two main attacks on its physical layer are the Primary User Emulation Attack (PUEA) and the replay attack. This paper focuses on mitigating these two attacks with the aid of an authentication tag and distance calculation. Mitigating these attacks yields error-free transmission, which in turn results in efficient dynamic spectrum access.

Hussain, Muzzammil, Swami, Tulsi.  2018.  Primary User Authentication in Cognitive Radio Network Using Pre-Generated Hash Digest. 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :903-908.

The primary objective of Cognitive Radio Networks (CRN) is to opportunistically utilize the available spectrum for efficient and seamless communication. Like all other radio networks, Cognitive Radio Networks suffer from a number of security attacks, and the Primary User Emulation Attack (PUEA) is chief among them. A Primary User Emulation Attack not only degrades the performance of the Cognitive Radio Network but also defeats its objective. Efficient and secure authentication of Primary Users (PU) is the only solution to mitigate Primary User Emulation Attacks, but most of the mechanisms designed for this are either complex or make changes to the spectrum. Here, we propose a mechanism to authenticate Primary Users in a Cognitive Radio Network that is neither complex nor makes any changes to the spectrum. The proposed mechanism is secure and also substantially improves the performance of the Cognitive Radio Network.
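
One classical way to realize authentication from pre-generated hash digests is a one-way hash chain (Lamport-style). The sketch below shows that pattern; it is not necessarily the authors' exact construction, and the seed and chain length are illustrative:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# PU setup: pre-generate a hash chain and distribute the anchor (last digest).
N = 1000
chain = [b'secret-seed']                 # hypothetical secret seed
for _ in range(N):
    chain.append(h(chain[-1]))
anchor = chain[-1]                       # published to all SUs in advance

def verify(revealed: bytes, anchor: bytes, i: int) -> bool:
    """SU side: hashing the value revealed in slot i forward i times
    must reach the pre-distributed anchor."""
    x = revealed
    for _ in range(i):
        x = h(x)
    return x == anchor

# In time slot i, the genuine PU reveals the pre-image chain[N - i].
print(verify(chain[N - 3], anchor, 3))         # True: genuine PU
print(verify(b'forged-value....', anchor, 3))  # False: emulation attempt
```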

Mu, Li, Mianquan, Li, Yuzhen, Huang, Hao, Yin, Yan, Wang, Baoquan, Ren, Xiaofei, Qu, Rui, Yu.  2018.  Security Analysis of Overlay Cognitive Wireless Networks with an Untrusted Secondary User. 2018 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1-5.

In this article, we study the transmission secrecy performance of the primary user in overlay cognitive wireless networks, in which an untrusted, energy-limited secondary cooperative user assists the primary transmission in exchange for spectrum resources. In the network, information can be transmitted simultaneously through the direct and relay links. To enhance primary transmission security, a maximum ratio combining (MRC) scheme is utilized by the receiver to exploit the two copies of the source information. For the security analysis, we first derive a tight lower-bound expression for the secrecy outage probability (SOP). Then, three asymptotic expressions for the SOP are derived to further analyze the impact of the transmit power and of the location of the secondary cooperative node on the primary user's information security. The findings show that the primary user's secrecy performance improves as the transmit power increases. Moreover, the smaller the distance between the secondary node and the destination, the better the primary secrecy performance.
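
The SOP metric can be checked numerically. The following Monte Carlo sketch assumes idealized Rayleigh fading and a simplified relay-link model (independent exponential power gains, no relay power constraint), so it illustrates the metric rather than reproducing the paper's exact system model; all numerical values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
P, Rs = 10.0, 1.0                     # transmit SNR and target secrecy rate (bits)

# Rayleigh-fading power gains: direct link, relay link (MRC-combined at the
# destination) and the leakage link overheard by the untrusted relay.
g_direct = rng.exponential(1.0, trials)
g_relay = rng.exponential(1.0, trials)
g_leak = rng.exponential(0.5, trials) # mean shrinks as the relay moves away

snr_dest = P * (g_direct + g_relay)   # MRC adds the SNRs of the two copies
snr_eve = P * g_leak

# Secrecy capacity Cs = [log2(1+snr_dest) - log2(1+snr_eve)]^+ and its outage.
cs = np.maximum(np.log2(1 + snr_dest) - np.log2(1 + snr_eve), 0.0)
print(f"estimated SOP ~ {np.mean(cs < Rs):.4f}")
```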

Sejaphala, Lanka, Velempini, Mthulisi, Dlamini, Sabelo Velemseni.  2018.  HCOBASAA: Countermeasure Against Sinkhole Attacks in Software-Defined Wireless Sensor Cognitive Radio Networks. 2018 International Conference on Advances in Big Data, Computing and Data Communication Systems (icABCD). :1-5.

The software-defined wireless sensor cognitive radio network is an emerging technology that is simple, agile, and flexible. The sensor network comprises a sink node with high processing power, and sensed data is transferred to the sink node on a hop-by-hop basis by the sensor nodes. The network is programmable, automated, agile, and flexible. The sensor nodes are equipped with cognitive radios, which sense the available spectrum bands and transmit sensed data on available bands, improving spectrum utilization. Unfortunately, the software-defined wireless sensor cognitive radio network is prone to security issues. The sinkhole attack is the most common attack, and it can also be used to launch other attacks. We propose and evaluate the performance of the Hop Count-Based Sinkhole Attack detection Algorithm (HCOBASAA) using the probability of detection, probability of false negative, and probability of false positive as performance metrics. On average HCOBASAA managed to yield 100%, 75%, and 70% probability of detection.
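
The hop-count consistency check at the heart of such detection can be sketched in a few lines; the margin parameter is an assumption for illustration, not taken from the paper:

```python
def flag_sinkhole(advertised_hops, neighbor_hops, margin=1):
    """A node can be at most one hop closer to the sink than its closest
    neighbor, so an advertised hop count far below that bound suggests a
    sinkhole advertising an artificially attractive route."""
    if not neighbor_hops:
        return False                       # nothing to compare against
    return advertised_hops < min(neighbor_hops) - margin

print(flag_sinkhole(4, [5, 6]))            # False: consistent with neighbors
print(flag_sinkhole(1, [5, 6]))            # True: implausibly short route
```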

Yadav, Kuldeep, Roy, Sanjay Dhar, Kundu, Sumit.  2018.  Total Error Reduction in Presence of Malicious User in a Cognitive Radio Network. 2018 2nd International Conference on Electronics, Materials Engineering Nano-Technology (IEMENTech). :1-4.

Primary user emulation (PUE) attacks cause security issues in a cognitive radio network (CRN) during sensing of the unused spectrum. In a PUE attack, malicious users transmit an emulated primary signal in the spectrum sensing interval to secondary users (SUs) to prevent them from accessing the primary user (PU) spectrum bands. In the present paper, the defense against such attacks using the Neyman-Pearson criterion is analyzed in terms of total error probability. The impact on the SU of several parameters, such as attacker strength, the attacker's presence probability, and the signal-to-noise ratio, is shown. Results show that the proposed method protects against the harmful effects of PUE attacks in spectrum sensing.
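
The total-error trade-off can be illustrated with Gaussian-approximated test statistics: total error weighs the false alarm (flagging a genuine PU) and the miss (accepting the attacker) by the attacker's presence probability. All numerical values below are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import norm

# Test statistics under the two hypotheses, approximated as Gaussian:
# H0: genuine PU (mean m0); H1: PUE attacker, assumed to arrive with
# higher received power (mean m1 > m0).
m0, m1, s = 1.0, 1.6, 0.25
p_attack = 0.3                       # attacker's presence probability

def total_error(tau):
    p_fa = 1 - norm.cdf(tau, m0, s)  # declare attack although the PU is genuine
    p_miss = norm.cdf(tau, m1, s)    # miss the attacker
    return (1 - p_attack) * p_fa + p_attack * p_miss

taus = np.linspace(m0, m1, 200)
errors = [total_error(t) for t in taus]
best = taus[int(np.argmin(errors))]
print(f"threshold {best:.3f} gives total error {min(errors):.4f}")
```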