Biblio

R
Hamamreh, Rushdi A., Ayyad, Mohammad, Jamoos, Mohammad.  2019.  RAD: Reinforcement Authentication DYMO Protocol for MANET. 2019 International Conference on Promising Electronic Technologies (ICPET). :136–141.
A mobile ad hoc network (MANET) has no fixed infrastructure or centralized server to manage the connections between nodes; rather, the nodes in a MANET move randomly. Exchanging data between nodes is therefore risky, because there is a high possibility of a malicious node lying on the path. In this paper, we describe a new authentication technique for the Dynamic MANET On-demand (DYMO) protocol that uses message digest 5 (MD5) hashing and is based on reinforcement learning. In addition, we describe an encryption technique that can be used without the need for a third party to distribute a secret key. After implementing the suggested model, results showed a remarkable enhancement in securing the path, with an increased packet delivery ratio and average throughput. On the other hand, there was an increase in end-to-end delay due to the time spent on cryptographic operations.
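
As a rough illustration of the MD5-based authentication idea described above (not the authors' exact RAD protocol), the following Python sketch derives a per-hop authentication tag from a route message and a shared secret; the message fields and the secret-distribution step are assumptions.

    import hashlib

    def md5_auth_tag(route_msg: bytes, shared_secret: bytes) -> str:
        # Hypothetical tag: digest of a shared secret concatenated with the message.
        # The real RAD protocol integrates authentication with DYMO routing and
        # reinforcement learning, which is not reproduced here.
        return hashlib.md5(shared_secret + route_msg).hexdigest()

    def verify(route_msg: bytes, shared_secret: bytes, tag: str) -> bool:
        return md5_auth_tag(route_msg, shared_secret) == tag

    # Example: a node tags a route request; the next hop verifies it.
    secret = b"pre-established-session-secret"   # assumed to exist already
    rreq = b"RREQ|src=10.0.0.1|dst=10.0.0.9|seq=42"
    tag = md5_auth_tag(rreq, secret)
    assert verify(rreq, secret, tag)
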
Cheng, Raymond, Scott, William, Ellenbogen, Paul, Howell, Jon, Roesner, Franziska, Krishnamurthy, Arvind, Anderson, Thomas.  2016.  Radiatus: A Shared-Nothing Server-Side Web Architecture. Proceedings of the Seventh ACM Symposium on Cloud Computing. :237–250.

Web applications are a frequent target of successful attacks. In most web frameworks, the damage is amplified by the fact that application code is responsible for security enforcement. In this paper, we design and evaluate Radiatus, a shared-nothing web framework where application-specific computation and storage on the server is contained within a sandbox with the privileges of the end-user. By strongly isolating users, user data and service availability can be protected from application vulnerabilities. To make Radiatus practical at the scale of modern web applications, we introduce a distributed capabilities system to allow fine-grained secure resource sharing across the many distributed services that compose an application. We analyze the strengths and weaknesses of a shared-nothing web architecture, which protects applications from a large class of vulnerabilities, but adds an overhead of 60.7% per server and requires an additional 31MB of memory per active user. We demonstrate that the system can scale to 20K operations per second on a 500-node AWS cluster.

Ji, Yang, Lee, Sangho, Downing, Evan, Wang, Weiren, Fazzini, Mattia, Kim, Taesoo, Orso, Alessandro, Lee, Wenke.  2017.  RAIN: Refinable Attack Investigation with On-Demand Inter-Process Information Flow Tracking. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :377–390.

As modern attacks become more stealthy and persistent, detecting or preventing them at their early stages becomes virtually impossible. Instead, an attack investigation or provenance system aims to continuously monitor and log interesting system events with minimal overhead. Later, if the system observes any anomalous behavior, it analyzes the log to identify who initiated the attack and which resources were affected by the attack, and then assesses and recovers from any damage incurred. However, because of a fundamental tradeoff between log granularity and system performance, existing systems typically record system-call events without the detailed program-level activities (e.g., memory operations) required for accurately reconstructing attack causality, or demand that every monitored program be instrumented to provide program-level information. To address this issue, we propose RAIN, a Refinable Attack INvestigation system based on a record-replay technology that records system-call events during runtime and performs instruction-level dynamic information flow tracking (DIFT) during on-demand process replay. Instead of replaying every process with DIFT, RAIN conducts system-call-level reachability analysis to filter out unrelated processes and to minimize the number of processes to be replayed, making inter-process DIFT feasible. Evaluation results show that RAIN effectively prunes out unrelated processes and determines attack causality with negligible false positive rates. In addition, the runtime overhead of RAIN is similar to existing system-call level provenance systems and its analysis overhead is much smaller than full-system DIFT.

Chow, J., Li, X., Mountrouidou, X..  2017.  Raising flags: Detecting covert storage channels using relative entropy. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :25–30.

This paper focuses on one type of Covert Storage Channel (CSC) that uses the 6-bit TCP flag header in TCP/IP network packets to transmit secret messages between accomplices. We use relative entropy to characterize the irregularity of network flows in comparison to normal traffic. A normal profile is created from the frequency distribution of TCP flags in regular traffic packets. In detection, the TCP flag frequency distribution of network traffic is computed for each unique IP pair. In order to evaluate the accuracy and efficiency of the proposed method, this study uses real regular traffic data sets as well as CSC messages using coding schemes under assumptions of both clear text, composed of a list of keywords common in Unix systems, and encrypted text. Moreover, smart accomplices may use only those TCP flags that appear in normal traffic. Then, in detection, the relative entropy can reveal the dissimilarity of a different frequency distribution from this normal profile. We have also used different data processing methods in detection: one method summarizes all the packets for a pair of IP addresses into one flow and the other uses a sliding window over such a flow to generate multiple frames of packets. The experimentation results, displayed by Receiver Operating Characteristic (ROC) curves, have shown that the method is promising for differentiating normal and CSC traffic packet streams. Furthermore, the delay of raising an alert is analyzed for CSC messages to show its efficiency.
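
To make the relative-entropy idea concrete, here is a minimal Python sketch (not the authors' implementation) that scores a window of TCP flag observations against a normal profile; the flag categories, smoothing constant, and alert threshold are assumptions.

    import math
    from collections import Counter

    FLAGS = ["SYN", "ACK", "FIN", "RST", "PSH", "URG"]  # assumed flag categories

    def flag_distribution(packets, eps=1e-6):
        counts = Counter(packets)
        total = len(packets)
        # A small epsilon avoids zero probabilities so the relative entropy stays finite.
        return {f: (counts.get(f, 0) + eps) / (total + eps * len(FLAGS)) for f in FLAGS}

    def relative_entropy(p, q):
        # D(P || Q) = sum over flags of P(f) * log(P(f) / Q(f))
        return sum(p[f] * math.log(p[f] / q[f]) for f in FLAGS)

    normal_profile = flag_distribution(["SYN", "ACK", "ACK", "ACK", "FIN", "PSH", "ACK"])
    window = ["URG", "URG", "SYN", "URG", "RST", "URG"]      # suspicious flag usage
    score = relative_entropy(flag_distribution(window), normal_profile)
    print("alert" if score > 1.0 else "normal", round(score, 3))  # threshold is an assumption
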

Mohsen, Fadi, Jafaarian, Haadi.  2019.  Raising the Bar Really High: An MTD Approach to Protect Data in Embedded Browsers. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:786—794.
The safety of web browsers is essential to the privacy of Internet users and the security of their computing systems. In the last few years, there have been several cyber attacks geared towards compromising surfers' data and systems by exploiting browser-based vulnerabilities. Android and a number of mobile operating systems support a UI component called WebView, which can be embedded in any mobile application to render web content. Yet, this mini-browser component has been found to be vulnerable to various kinds of attacks. For instance, an attacker in her WebView-embedded app can inject malicious JavaScript into the WebView to modify the web content or to steal the user's input values. This kind of attack is particularly challenging due to the attacker's full control over the content of the loaded pages. In this paper, we propose and test a server-side moving target defense technique to counter the risk of JavaScript injection attacks on mobile WebViews. The solution entails creating redundant HTML forms, randomizing their attributes and values, and asserting stealthy prompts for the user data. The solution does not require any changes to browser or application code, nor does it require key sharing with benign clients. The results of our performance and security analysis suggest that our proposed approach protects the confidentiality and integrity of user input values with minimal overhead.
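
As a rough sketch of the server-side moving-target idea (randomized, redundant form fields), the following Python snippet generates a login form whose field names change on every request; the field set and naming scheme are assumptions, not the paper's exact design.

    import secrets

    def render_login_form(session_state: dict) -> str:
        # Randomize the real field names and add decoy fields on every page load,
        # so injected JavaScript cannot rely on stable selectors like name="password".
        real = {field: "f_" + secrets.token_hex(8) for field in ("username", "password")}
        decoys = ["f_" + secrets.token_hex(8) for _ in range(3)]
        session_state["field_map"] = real          # the server remembers the mapping
        inputs = [f'<input type="text" name="{real["username"]}">',
                  f'<input type="password" name="{real["password"]}">']
        inputs += [f'<input type="hidden" name="{d}" value="">' for d in decoys]
        return "<form method='post'>" + "".join(inputs) + "</form>"

    def recover_submission(session_state: dict, post_data: dict) -> dict:
        # Map the randomized names back to their logical fields; decoys are dropped.
        return {logical: post_data.get(rand, "")
                for logical, rand in session_state["field_map"].items()}
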
Hall, Chris, Puder, Doron, Sawin, William F..  2016.  Ramanujan Coverings of Graphs. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :533–541.

Let G be a finite connected graph, and let ρ be the spectral radius of its universal cover. For example, if G is k-regular then ρ = 2√(k−1). We show that for every r, there is an r-covering (a.k.a. an r-lift) of G where all the new eigenvalues are bounded from above by ρ. It follows that a bipartite Ramanujan graph has a Ramanujan r-covering for every r. This generalizes the r=2 case due to Marcus, Spielman and Srivastava (2013). Every r-covering of G corresponds to a labeling of the edges of G by elements of the symmetric group Sr. We generalize this notion to labeling the edges by elements of various groups and present a broader scenario where Ramanujan coverings are guaranteed to exist. In particular, this shows the existence of richer families of bipartite Ramanujan graphs than was known before. Inspired by Marcus-Spielman-Srivastava, a crucial component of our proof is the existence of interlacing families of polynomials for complex reflection groups. The core argument of this component is taken from Marcus-Spielman-Srivastava (2015). Another important ingredient of our proof is a new generalization of the matching polynomial of a graph. We define the r-th matching polynomial of G to be the average matching polynomial of all r-coverings of G. We show this polynomial shares many properties with the original matching polynomial. For example, it is real-rooted with all its roots inside [−ρ, ρ].

Götzfried, Johannes, Müller, Tilo, Drescher, Gabor, Nürnberger, Stefan, Backes, Michael.  2016.  RamCrypt: Kernel-based Address Space Encryption for User-mode Processes. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :919–924.

We present RamCrypt, a solution that allows unmodified Linux processes to transparently work on encrypted data. RamCrypt can be deployed and enabled on a per-process basis without recompiling user-mode applications. In every enabled process, data is only stored in cleartext for the moment it is processed, and otherwise stays encrypted in RAM. In particular, the required encryption keys do not reside in RAM, but are stored in CPU registers only. Hence, RamCrypt effectively thwarts memory disclosure attacks, which grant unauthorized access to process memory, as well as physical attacks such as cold boot and DMA attacks. In its default configuration, RamCrypt exposes only up to 4 memory pages in cleartext at the same time. For the nginx web server serving encrypted HTTPS pages under heavy load, the necessary TLS secret key is hidden for 97% of its time.

Li, Z., Li, S..  2017.  Random forest algorithm under differential privacy. 2017 IEEE 17th International Conference on Communication Technology (ICCT). :1901–1905.

To address the risk of data privacy disclosure in the classification process, a random forest algorithm under differential privacy named DPRF-gini is proposed in this paper. When building each decision tree, the algorithm first perturbs feature selection and attribute partitioning using the exponential mechanism, and then satisfies differential privacy by adding Laplace noise to the leaf nodes. Compared with the original algorithm, empirical results show that data privacy protection is further enhanced while the accuracy of the algorithm is only slightly reduced.
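
A minimal sketch of the leaf-noise step only (not the full DPRF-gini algorithm, and the privacy budget is an assumed value): Laplace noise with scale 1/ε is added to per-class counts at a leaf before the majority label is released.

    import numpy as np

    def noisy_leaf_label(class_counts: np.ndarray, epsilon: float, rng=np.random.default_rng()):
        # Counting queries have sensitivity 1, so Laplace noise with scale 1/epsilon
        # makes the released leaf counts epsilon-differentially private.
        noisy = class_counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=class_counts.shape)
        return int(np.argmax(noisy))

    # Example: a leaf holding 30 samples of class 0 and 5 of class 1, with epsilon = 0.5.
    print(noisy_leaf_label(np.array([30.0, 5.0]), epsilon=0.5))
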

Weedon, M., Tsaptsinos, D., Denholm-Price, J..  2017.  Random forest explorations for URL classification. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–4.

Phishing is a major concern on the Internet today, and many users are falling victim to criminals' deceitful tactics. Blacklisting is still the most common defence users have against such phishing websites, but it is failing to cope with their increasing number. In recent years, researchers have devised modern ways of detecting such websites using machine learning. One such method is to build machine-learned models of URL features to classify whether URLs are phishing. However, there are varying opinions on what the best approach is for features and algorithms. In this paper, the objective is to evaluate the performance of the Random Forest algorithm using a lexical-only dataset. The performance is benchmarked against other machine learning algorithms and additionally against those reported in the literature. Initial results from experiments indicate that the Random Forest algorithm performs best, yielding 86.9% accuracy.
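
A minimal sketch of the lexical-feature approach (the feature set, toy URLs, and hyperparameters here are assumptions; the paper's own features and benchmark setup differ):

    from urllib.parse import urlparse
    from sklearn.ensemble import RandomForestClassifier

    def lexical_features(url: str):
        parsed = urlparse(url)
        host = parsed.netloc
        return [len(url), len(host), host.count("."), host.count("-"),
                sum(c.isdigit() for c in url), int("@" in url), int(parsed.scheme == "https")]

    # Tiny illustrative training set (labels: 1 = phishing, 0 = benign).
    urls = ["https://www.example.com/login", "http://192.168.0.1@secure-paypa1-login.xyz/verify",
            "https://github.com/user/repo", "http://free-gift-cards.win/claim?id=123"]
    labels = [0, 1, 0, 1]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit([lexical_features(u) for u in urls], labels)
    print(clf.predict([lexical_features("http://paypa1-account-update.xyz/login")]))
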

Sharma, Dilli P., Cho, Jin-Hee, Moore, Terrence J., Nelson, Frederica F., Lim, Hyuk, Kim, Dong Seong.  2019.  Random Host and Service Multiplexing for Moving Target Defense in Software-Defined Networks. ICC 2019 - 2019 IEEE International Conference on Communications (ICC). :1—6.

Moving target defense (MTD) is a proactive defense mechanism that changes the attack surface to increase an attacker's confusion and/or uncertainty, which invalidates the intelligence gained through reconnaissance and/or network scanning attacks. In this work, we propose a software-defined networking (SDN)-based MTD technique that shuffles IP addresses and port numbers, aiming to obfuscate the real network- and transport-layer identities of hosts and services and thereby defend against network reconnaissance and scanning attacks. We call our proposed MTD technique Random Host and Service Multiplexing (RHSM). RHSM allows each host to use random, multiple virtual IP addresses that are dynamically and periodically shuffled. In addition, it uses short-lived, multiple virtual port numbers for an active service running on the host. Our proposed RHSM is novel in that we employ multiplexing (or de-multiplexing) to dynamically change and remap all the virtual IPs of the host to the real IP, and the virtual ports of the services to the real port, respectively. Via extensive simulation experiments, we show how effectively and efficiently RHSM outperforms a baseline counterpart (i.e., a static network without RHSM) in terms of attack success probability and defense cost.
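
A toy sketch of the multiplexing idea only: virtual-to-real remapping that is reshuffled periodically. The address pools, shuffle counts, and data structures below are assumptions; the real RHSM installs such mappings as SDN flow rules.

    import random

    class HostServiceMultiplexer:
        def __init__(self, real_ip: str, real_port: int, vip_pool, vport_range):
            self.real_ip, self.real_port = real_ip, real_port
            self.vip_pool, self.vport_range = list(vip_pool), vport_range
            self.reshuffle()

        def reshuffle(self, n_vips: int = 3, n_vports: int = 2):
            # Pick fresh virtual IPs and short-lived virtual ports; an SDN controller
            # would install rules translating each virtual pair to the real (IP, port).
            self.virtual_ips = random.sample(self.vip_pool, n_vips)
            self.virtual_ports = random.sample(range(*self.vport_range), n_vports)

        def demultiplex(self, dst_ip: str, dst_port: int):
            # Map any currently valid virtual (IP, port) back to the real endpoint.
            if dst_ip in self.virtual_ips and dst_port in self.virtual_ports:
                return self.real_ip, self.real_port
            return None  # stale or scanned address: drop

    mux = HostServiceMultiplexer("10.0.0.5", 80,
                                 [f"10.1.0.{i}" for i in range(2, 50)], (20000, 60000))
    print(mux.virtual_ips, mux.virtual_ports)
    mux.reshuffle()  # called periodically to invalidate knowledge gained by scanners
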

Zadeh, B.Q., Handschuh, S..  2014.  Random Manhattan Indexing. Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on. :203-208.

Vector space models (VSMs) are mathematically well-defined frameworks that have been widely used in text processing. In these models, high-dimensional, often sparse vectors represent text units. In an application, the similarity of vectors -- and hence of the text units that they represent -- is computed by a distance formula. The high dimensionality of vectors, however, is a barrier to the performance of methods that employ VSMs. Consequently, a dimensionality reduction technique is employed to alleviate this problem. This paper introduces a new method, called Random Manhattan Indexing (RMI), for the construction of L1-normed VSMs at reduced dimensionality. RMI combines the construction of a VSM and dimension reduction into an incremental, and thus scalable, procedure. To attain its goal, RMI employs sparse Cauchy random projections.
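
To illustrate the underlying Cauchy-projection trick (a sketch of the general technique, not the paper's incremental, sparse RMI procedure), L1 distances can be estimated from the median of absolute differences of Cauchy-projected vectors; dimensions and data are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 10000, 200                     # original and reduced dimensionality (assumed values)
    R = rng.standard_cauchy(size=(k, d))  # Cauchy entries preserve L1, not L2, structure

    x = rng.random(d)
    y = rng.random(d)

    # Each coordinate of R @ (x - y) is Cauchy-distributed with scale ||x - y||_1,
    # so the median of the absolute values is a robust estimator of the L1 distance.
    estimate = np.median(np.abs(R @ (x - y)))
    print(estimate, np.sum(np.abs(x - y)))
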

K. Sathya, J. Premalatha, V. Rajasekar.  2015.  "Random number generation based on sensor with decimation method". 2015 IEEE Workshop on Computational Intelligence: Theories, Applications and Future Directions (WCI). :1-5.

The strength of security and privacy of any cryptographic mechanism that uses random numbers requires that the generated random numbers have two important properties: uniform distribution and independence. With the growth of the Internet, many devices hosting sensors are connected to it. One proposed idea is to use sensor data as the seed for a random number generator (RNG), since sensors measure physical phenomena that exhibit randomness over time. The random numbers generated from sensor data can be used by cryptographic algorithms in Internet activities. However, sensor data also pose weaknesses: sensors may be under adversarial control, which may lead to a predictable random sequence and break security and privacy. This paper proposes a wash-rinse-spin approach to process the raw sensor data and increase the randomness of the seed value. The sequences generated from two sensors are combined by a decimation method to improve unpredictability. This makes sensor-based random number generation more secure by preventing attackers from learning the random sequence through adversarial control.
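
A loose sketch of combining two sensor-derived bit streams by decimation. The hashing "wash" stand-in and the shrinking-generator-style selection rule are assumptions; the paper's wash-rinse-spin conditioning is not reproduced here.

    import hashlib

    def sensor_bits(samples, n_bits=8):
        # Stand-in conditioning step: hash each raw sensor reading and keep a few bits.
        bits = []
        for s in samples:
            digest = hashlib.sha256(repr(s).encode()).digest()
            bits.extend((digest[0] >> i) & 1 for i in range(n_bits))
        return bits

    def decimate(stream_a, stream_b):
        # Keep a bit of stream A only where the corresponding bit of stream B is 1,
        # so neither sensor alone determines the output sequence.
        return [a for a, b in zip(stream_a, stream_b) if b == 1]

    temp_readings = [23.91, 23.94, 23.90, 24.02, 23.97]     # hypothetical sensor values
    accel_readings = [0.012, -0.003, 0.018, 0.007, -0.011]
    output = decimate(sensor_bits(temp_readings), sensor_bits(accel_readings))
    print(output)
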

Boyacı, O., Tantuğ, A. C..  2017.  A random number generation method based on discrete time chaotic maps. 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS). :1212–1215.

In this paper, a random number generation method based on piecewise linear one-dimensional (PL1D) discrete-time chaotic maps is proposed for applications in cryptography and steganography. Appropriate parameters are determined by examining the distribution of the underlying chaotic signal, and the random number generator (RNG) is numerically verified by the four fundamental statistical tests of FIPS 140-2. The proposed design is practically realized on field programmable analog and digital arrays (FPAA-FPGA). Finally, it is experimentally verified that the presented RNG fulfills the NIST 800-22 randomness tests without post-processing.
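
As a generic illustration of extracting bits from a piecewise-linear chaotic map, the sketch below iterates a skew tent map and thresholds its state; the map, parameters, and quantizer are assumed examples, not the paper's verified hardware design, and this software version is not cryptographically vetted.

    def skew_tent_rng_bits(x0: float, a: float, n_bits: int):
        # Skew tent map: x -> x/a on [0, a], (1 - x)/(1 - a) on (a, 1]. Piecewise linear, chaotic.
        assert 0.0 < a < 1.0 and 0.0 < x0 < 1.0
        x, bits = x0, []
        for _ in range(n_bits):
            x = x / a if x <= a else (1.0 - x) / (1.0 - a)
            bits.append(1 if x > 0.5 else 0)   # threshold quantizer
        return bits

    print(skew_tent_rng_bits(x0=0.3141592653, a=0.4999, n_bits=32))
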

Erbay, C., Ergïn, S..  2018.  Random Number Generator Based on Hydrogen Gas Sensor for Security Applications. 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS). :709–712.
Cryptographic applications need a high-quality random number generator (RNG) for strong security and privacy. This paper presents an RNG based on a hydrogen gas sensor fabricated using microfabrication techniques. The proposed approach extracts thermal noise from the gas sensor as a non-deterministic entropy source during its operation and uses the hash function SHA-256 as post-processing. The processed noise yields a random number set fulfilling the NIST 800-22 statistical randomness test suite, demonstrating that a gas sensor based RNG can provide high-quality random numbers. Secure data transfer thus becomes possible without any additional hardware wherever a hydrogen gas sensor is already needed, such as in petrochemical fields, fuel cells, and nuclear reactors.
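
A minimal sketch of the entropy-extraction-plus-SHA-256 pipeline. The sensor interface, sample encoding, and block size are assumptions; a real deployment would also need entropy estimation and health tests.

    import hashlib

    def extract_random_block(noise_samples, block_size=64):
        # Post-process a block of raw sensor noise with SHA-256; the 256-bit digest
        # is the conditioned random output for that block.
        raw = b"".join(int(s * 1e6).to_bytes(4, "big", signed=True)
                       for s in noise_samples[:block_size])
        return hashlib.sha256(raw).digest()

    # Hypothetical raw thermal-noise readings from the hydrogen gas sensor ADC.
    noise = [0.000213, -0.000187, 0.000094, -0.000311, 0.000157] * 13
    print(extract_random_block(noise).hex())
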
Ming, Z., Zheng-jiang, W., Liu, H..  2017.  Random Projection Data Perturbation Based Privacy Protection in WSNs. 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :493–498.

Wireless sensor networks are responsible for sensing, gathering and processing information about the objects in the network coverage area. Basic data fusion techniques generally do not provide a data privacy protection mechanism, yet a privacy protection mechanism is usually indispensable in application areas such as health care, military reconnaissance and smart homes. In this paper, we consider privacy, confidentiality, and the accuracy of fusion results, and propose a privacy-preserving data fusion algorithm, PPND. The algorithm relies on the characteristics of data fusion and uses random numbers pre-distributed to the nodes to meet the privacy protection requirements for the original data. Theoretical analysis shows that it is difficult for a malicious attacker to steal node privacy under the PPND algorithm. TOSSIM simulation results also show that, compared with the TAG and SMART algorithms, PPND achieves good performance in terms of data traffic and convergence accuracy.
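
A toy sketch of perturbation with pre-distributed random numbers. The zero-sum pairing, where perturbations cancel at the fusion node, is an assumption about how such schemes are commonly built, not the exact PPND construction.

    import random

    def predistribute(node_ids, seed=42):
        # Offline phase: each node receives a random perturbation; the perturbations
        # are generated so that they sum to zero and cancel out in the fused result.
        rng = random.Random(seed)
        offsets = [rng.uniform(-100, 100) for _ in node_ids[:-1]]
        offsets.append(-sum(offsets))
        return dict(zip(node_ids, offsets))

    nodes = ["n1", "n2", "n3", "n4"]
    offsets = predistribute(nodes)
    readings = {"n1": 21.5, "n2": 22.1, "n3": 20.9, "n4": 21.8}   # true sensed values

    perturbed = {n: readings[n] + offsets[n] for n in nodes}       # what each node transmits
    fused_sum = sum(perturbed.values())                            # offsets cancel in the sum
    print(round(fused_sum, 6), round(sum(readings.values()), 6))
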

Choi, Jungyong, Shin, WoonSeob, Kim, Jonghyun, Kim, Ki-Hyung.  2020.  Random Seed Generation For IoT Key Generation and Key Management System Using Blockchain. 2020 International Conference on Information Networking (ICOIN). :663–665.
Recently, the Internet of Things (IoT) has been growing rapidly. IoT sensors are attached to various devices, and information is detected, collected and utilized through various wired and wireless communication environments. As the IoT is used in various places, IoT devices face a variety of malicious attacks such as MITM and reverse engineering. To prevent these, encryption is required for device-to-device communication, and the keys required for encryption must be properly managed. We propose a scheme to generate the seed needed for key generation and a scheme to manage the public key using a blockchain.
Zhang, Kai, Liu, Chuanren, Zhang, Jie, Xiong, Hui, Xing, Eric, Ye, Jieping.  2017.  Randomization or Condensation? Linear-Cost Matrix Sketching Via Cascaded Compression Sampling. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :615–623.
Matrix sketching is aimed at finding compact representations of a matrix while simultaneously preserving most of its properties, and it is a fundamental building block in modern scientific computing. Randomized algorithms represent the state of the art and have attracted huge interest from the fields of machine learning, data mining, and theoretical computer science. However, they still require the use of the entire input matrix to produce the desired factorizations, which can be a major computational and memory bottleneck in truly large problems. In this paper, we uncover an interesting theoretic connection between matrix low-rank decomposition and lossy signal compression, based on which a cascaded compression sampling framework is devised to approximate an m-by-n matrix in only O(m+n) time and space. Indeed, the proposed method accesses only a small number of matrix rows and columns, which significantly improves the memory footprint. Meanwhile, by sequentially teaming two rounds of approximation procedures and upgrading the sampling strategy from a uniform probability to more sophisticated, encoding-orientated sampling, significant algorithmic boosting is achieved to uncover more granular structures in the data. Empirical results on a wide spectrum of real-world, large-scale matrices show that by taking only linear time and space, the accuracy of our method rivals that of state-of-the-art randomized algorithms consuming a quadratic, O(mn), amount of resources.
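
To illustrate the idea of approximating a matrix from a few sampled rows and columns, here is a plain pseudo-skeleton (CUR-style) sketch under uniform sampling; it is not the paper's cascaded, encoding-oriented sampling scheme, and the sizes are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, rank = 500, 400, 10
    A = rng.random((m, rank)) @ rng.random((rank, n))   # synthetic low-rank matrix

    c = 40                                              # number of sampled rows/columns (assumed)
    rows = rng.choice(m, size=c, replace=False)
    cols = rng.choice(n, size=c, replace=False)

    C = A[:, cols]                                      # sampled columns
    R = A[rows, :]                                      # sampled rows
    W = A[np.ix_(rows, cols)]                           # their intersection

    A_approx = C @ np.linalg.pinv(W) @ R                # skeleton approximation
    print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))
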
Liu, Mingmou, Pan, Xiaoyin, Yin, Yitong.  2016.  Randomized Approximate Nearest Neighbor Search with Limited Adaptivity. Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures. :23–33.

We study the problem of approximate nearest neighbor search in d-dimensional Hamming space {0,1}^d. We study the complexity of the problem in the famous cell-probe model, a classic model for data structures. We consider algorithms in the cell-probe model with limited adaptivity, where the algorithm makes k rounds of parallel accesses to the data structure for a given k. For any k ≥ 1, we give a simple randomized algorithm solving the approximate nearest neighbor search using k rounds of parallel memory accesses, with O(k(log d)^(1/k)) accesses in total. We also give a more sophisticated randomized algorithm using O(k + (1/k · log d)^(O(1/k))) memory accesses in k rounds for large enough k. Both algorithms use data structures of size polynomial in n, the number of points in the database. We prove an Ω((1/k)(log d)^(1/k)) lower bound for the total number of memory accesses required by any randomized algorithm solving the approximate nearest neighbor search within k ≤ (log log d)/(2 log log log d) rounds of parallel memory accesses on any data structure of polynomial size. This lower bound shows that our first algorithm is asymptotically optimal for any constant round k. And our second algorithm approaches the asymptotically optimal tradeoff between rounds and memory accesses, in the sense that the lower bound of memory accesses for any k1 rounds can be matched by the algorithm within k2 = O(k1) rounds. In the extreme, for some large enough k = Θ((log log d)/(log log log d)), our second algorithm matches the Θ((log log d)/(log log log d)) tight bound for fully adaptive algorithms for approximate nearest neighbor search due to Chakrabarti and Regev.

Sun, Lin, Zhang, Lan, Ye, Xiaojun.  2018.  Randomized Bit Vector: Privacy-Preserving Encoding Mechanism. Proceedings of the 27th ACM International Conference on Information and Knowledge Management. :1263–1272.
Recently, many methods have been proposed to prevent privacy leakage in record linkage by encoding record pair data into another anonymous space. Nevertheless, they cannot perform well in some circumstances due to high computational complexity, low privacy guarantees or loss of data utility. In this paper, we propose distance-aware encoding mechanisms to compare numerical values in the anonymous space. We first embed numerical values into Hamming space by a low-complexity encoding algorithm based on randomized bit vectors. To provide rigorous privacy guarantees, we use randomized response based on differential privacy to keep global indistinguishability of the original data and use Laplace noise via the pufferfish mechanism to provide local indistinguishability. Besides, we provide an approach for selecting embedding and privacy-related parameters to improve data utility. Experiments on datasets from different data distributions and application contexts validate that our approaches can be used efficiently in privacy-preserving record linkage tasks compared with previous works and have excellent performance even under very small privacy budgets.
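
A small sketch of two of the ingredients named above, distance-aware bit-vector encoding of a number plus randomized-response flipping. The threshold placement and flip probability are assumed for illustration; the paper's pufferfish-based Laplace step is omitted.

    import random

    def encode(value: float, thresholds) -> list:
        # Distance-aware encoding: each bit records whether the value exceeds a threshold,
        # so the Hamming distance between encodings grows with the numeric difference.
        return [1 if value > t else 0 for t in thresholds]

    def randomized_response(bits, p_keep=0.85, rng=random.Random(0)):
        # Flip each bit with probability 1 - p_keep to provide plausible deniability.
        return [b if rng.random() < p_keep else 1 - b for b in bits]

    rng_t = random.Random(1)
    thresholds = sorted(rng_t.uniform(0, 100) for _ in range(64))
    a, b = encode(37.2, thresholds), encode(41.9, thresholds)
    print("true Hamming distance:", sum(x != y for x, y in zip(a, b)))
    print("noisy Hamming distance:",
          sum(x != y for x, y in zip(randomized_response(a), randomized_response(b))))
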
Assadi, Sepehr, Khanna, Sanjeev.  2017.  Randomized Composable Coresets for Matching and Vertex Cover. Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :3–12.

A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say k, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of polylog(n) requires using representative summaries of size Ω̃(n^2), i.e. essentially no better summary exists than each machine simply sending its entire input graph. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size Õ(n) that yield an Õ(1)-approximate solution. (Here and throughout, the Õ(·) notation suppresses polylog(n) factors, where n is the number of vertices in the graph.) In other words, a small subgraph of the input graph at each machine can be identified as its representative summary, and the final answer is then obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an Õ(1)-approximation simultaneous protocol for these problems with Õ(nk) total communication when the input is randomly partitioned across k machines. We also prove our results are optimal in a very strong sense: we not only rule out the existence of smaller randomized composable coresets for these problems, but in fact show that our Õ(nk) bound for total communication is optimal for any simultaneous communication protocol (i.e. not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication, improving the previous best known round complexity for these problems.

Ma, C., Yang, X., Wang, H..  2018.  Randomized Online CP Decomposition. 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI). :414-419.

CANDECOMP/PARAFAC (CP) decomposition has been widely used to deal with multi-way data. For real-time or large-scale tensors, building on the ideas of the randomized-sampling CP decomposition algorithm and the online CP decomposition algorithm, a novel CP decomposition algorithm called randomized online CP decomposition (ROCP) is proposed in this paper. The proposed algorithm avoids forming the full Khatri-Rao product, which boosts speed considerably and reduces memory usage. Experimental results on synthetic and real-world data show that the ROCP algorithm is able to cope with CP decomposition for large-scale tensors with an arbitrary number of dimensions. In addition, ROCP can reduce computing time and memory usage dramatically, especially for large-scale tensors.
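
The core randomized ingredient, solving least-squares subproblems on a random subset of rows instead of the full design matrix, can be sketched generically as below. This illustrates only the sampling idea used inside randomized CP-ALS updates, not the full ROCP algorithm; the problem sizes are made up.

    import numpy as np

    def sampled_least_squares(design: np.ndarray, rhs: np.ndarray, n_samples: int, rng):
        # Sketch-and-solve: subsample rows of an overdetermined system and solve the
        # smaller problem. In randomized CP-ALS, `design` would be the Khatri-Rao
        # product of the other factor matrices, which is never formed in full.
        idx = rng.choice(design.shape[0], size=n_samples, replace=False)
        sol, *_ = np.linalg.lstsq(design[idx], rhs[idx], rcond=None)
        return sol

    rng = np.random.default_rng(0)
    design = rng.random((100000, 10))
    truth = rng.random((10, 5))
    rhs = design @ truth + 0.01 * rng.random((100000, 5))

    approx = sampled_least_squares(design, rhs, n_samples=2000, rng=rng)
    print(np.linalg.norm(approx - truth))
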

Al-Odat, Zeyad, Abbas, Assad, Khan, Samee U..  2019.  Randomness Analyses of the Secure Hash Algorithms, SHA-1, SHA-2 and Modified SHA. 2019 International Conference on Frontiers of Information Technology (FIT). :316–3165.
This paper introduces a security analysis scheme for the most famous secure hash algorithms, SHA-1 and SHA-2. Both algorithms follow the Merkle-Damgård structure to compute the corresponding hash function. The randomness of the output hash reflects the strength and security of the generated hash. Therefore, the randomness of the internal rounds of the SHA-1 and SHA-2 hash functions is analyzed using Bayesian and odds ratio tests. Moreover, a proper replacement for both algorithms is proposed, which produces a hash output with a higher level of randomness. The experiments were conducted using a high-performance computing testbed and the CUDA parallel computing platform.
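
As a simple stand-in for the kind of randomness check described (a monobit frequency test over SHA-256 digests rather than the paper's Bayesian and odds ratio tests on internal rounds):

    import hashlib
    import math

    def monobit_test(n_messages=2000):
        # Hash counter values and check that ones and zeros are balanced across digest bits.
        ones = total = 0
        for i in range(n_messages):
            digest = hashlib.sha256(str(i).encode()).digest()
            ones += sum(bin(byte).count("1") for byte in digest)
            total += 8 * len(digest)
        # Normalized statistic; for a well-behaved hash it should look like a standard normal draw.
        z = (2 * ones - total) / math.sqrt(total)
        return ones / total, z

    print(monobit_test())
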
Tuba, Eva, Tuba, Milan, Simian, Dana.  2016.  Range Based Wireless Sensor Node Localization Using Bat Algorithm. Proceedings of the 13th ACM Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, & Ubiquitous Networks. :41–44.

For most wireless sensor network applications it is necessary to know the locations of all sensor nodes. Since sensor nodes are usually cheap, it is impossible to equip them all with GPS devices, hence the localization process depends on a few static or mobile anchor nodes with GPS devices. Range-based localization methods use the estimated distance between sensor and anchor nodes, where the quality of the estimate usually depends on the distance and the angle of arrival. Localization based on such noisy data represents a hard optimization problem for which swarm intelligence algorithms have been successfully used. In this paper we propose a range-based localization algorithm that uses the recently developed bat algorithm. The two-stage localization algorithm uses four semi-mobile anchors that are at first located at the corners of the area where sensors are deployed; the anchors then move to their optimal positions with minimal distances to sensor nodes but maximal viewing angles. Our proposed algorithm is, even at the first stage, superior to other approaches from the literature in minimizing the error between real and estimated sensor node positions, and it is additionally improved at the second stage.
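
A compact, generic bat-algorithm sketch applied to range-based localization of a single node. The parameter values, anchor positions, and noise model are assumptions, and the paper's two-stage anchor-movement scheme is not reproduced; this only shows the optimization core.

    import numpy as np

    rng = np.random.default_rng(1)
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])  # corner anchors
    true_pos = np.array([62.0, 35.0])
    measured = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1.0, len(anchors))

    def error(p):
        # Sum of squared differences between candidate-to-anchor and measured distances.
        return np.sum((np.linalg.norm(anchors - p, axis=1) - measured) ** 2)

    def bat_localize(n_bats=30, iters=200, fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9):
        pos = rng.uniform(0, 100, (n_bats, 2))          # candidate node positions
        vel = np.zeros((n_bats, 2))
        loudness = np.ones(n_bats)
        pulse = np.full(n_bats, 0.5)
        fitness = np.array([error(p) for p in pos])
        best = pos[np.argmin(fitness)].copy()
        for t in range(1, iters + 1):
            for i in range(n_bats):
                freq = fmin + (fmax - fmin) * rng.random()
                vel[i] += (pos[i] - best) * freq
                cand = pos[i] + vel[i]
                if rng.random() > pulse[i]:
                    cand = best + 0.01 * loudness.mean() * rng.normal(size=2)  # local walk near best
                cand = np.clip(cand, 0, 100)
                f_new = error(cand)
                if f_new < fitness[i] and rng.random() < loudness[i]:
                    pos[i], fitness[i] = cand, f_new
                    loudness[i] *= alpha
                    pulse[i] = 0.5 * (1 - np.exp(-gamma * t))
                if f_new < error(best):
                    best = cand.copy()
        return best

    print("estimate:", bat_localize(), "true:", true_pos)
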

Ni, J., Cheng, W., Zhang, K., Song, D., Yan, T., Chen, H., Zhang, X..  2017.  Ranking Causal Anomalies by Modeling Local Propagations on Networked Systems. 2017 IEEE International Conference on Data Mining (ICDM). :1003–1008.

Complex systems are prevalent in many fields such as finance, security and industry. A fundamental problem in system management is to perform diagnosis in case of system failure such that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, invariant network has proven a powerful tool in characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single and global fault propagation in the entire network. However, in real-world large-scale complex systems, it's more common for multiple fault propagations to grow simultaneously and locally within different node clusters and jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, a probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.

Cheng, Wei, Zhang, Kai, Chen, Haifeng, Jiang, Guofei, Chen, Zhengzhang, Wang, Wei.  2016.  Ranking Causal Anomalies via Temporal and Dynamical Analysis on Vanishing Correlations. Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :805–814.

The modern world has witnessed a dramatic increase in our ability to collect, transmit and distribute real-time monitoring and surveillance data from large-scale information systems and cyber-physical systems. Detecting system anomalies thus attracts a significant amount of interest in many fields such as security, fault management, and industrial optimization. Recently, the invariant network has been shown to be a powerful way of characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariant network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible causal components, which has several limitations: 1) fault propagation in the network is ignored; 2) the root causal anomalies may not always be the nodes with a high percentage of vanishing correlations; 3) temporal patterns of vanishing correlations are not exploited for robust detection. To address these limitations, in this paper we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network, and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations, and can compensate for unstructured measurement noise in the system. Extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets demonstrate the effectiveness of our approach.
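
As a rough sketch of the diffusion-based ranking idea (random-walk-with-restart propagation of broken-invariant evidence over the invariant network), the toy graph, evidence vector, and restart probability below are assumptions, not the paper's exact model.

    import numpy as np

    # Toy invariant network over 5 system components (symmetric adjacency matrix).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 1, 1],
                  [0, 1, 1, 0, 1],
                  [0, 0, 1, 1, 0]], dtype=float)

    # Fraction of vanishing (broken) invariants observed at each component.
    evidence = np.array([0.1, 0.7, 0.6, 0.2, 0.0])
    evidence = evidence / evidence.sum()

    P = A / A.sum(axis=0, keepdims=True)      # column-normalized transition matrix
    alpha, s = 0.85, evidence.copy()
    for _ in range(100):
        # Propagate anomaly evidence along edges; the restart term pulls scores back
        # toward the observed evidence.
        s = alpha * P @ s + (1 - alpha) * evidence

    ranking = np.argsort(-s)
    print("causal-anomaly ranking (most suspicious first):", ranking, np.round(s, 3))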