Biblio

Filters: Keyword is statistical analysis
2020-01-21
Iriqat, Yousef Mohammad, Ahlan, Abd Rahman, Molok, Nurul Nuha Abdul.  2019.  Information Security Policy Perceived Compliance Among Staff in Palestine Universities: An Empirical Pilot Study. 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT). :580–585.
In today's interconnected world, universities recognize the importance of protecting their information assets from internal and external threats. As possible insider threats to Information Security, employees are often described as the weakest link. Both employees and organizations should be aware of this rising challenge. Understanding staff perception of compliance behaviour is critical for universities wanting to leverage their staff capabilities to mitigate Information Security risks. Therefore, this research seeks to gain insights into staff perception based on factors adopted from several theories, using the proposed constructs of "perceived" practices/policies and "perceived" intention to comply. Drawing from the General Deterrence Theory, Protection Motivation Theory, Theory of Planned Behaviour and Information Reinforcement, within the context of Palestine universities, this paper integrates staff awareness of Information Security Policy (ISP) countermeasures as antecedents to "perceived" influencing factors (perceived sanctions, perceived rewards, perceived coping appraisal, and perceived information reinforcement). The empirical study follows a quantitative research approach, using a survey as the data collection method and questionnaires as the research instrument. Partial least squares structural equation modelling is used to inspect the reliability and validity of the measurement model and to test the hypotheses of the structural model. The research covers ISP awareness among staff and seeks to assert that information security is the responsibility of all academic and administrative staff from all departments. Overall, our pilot study findings seem promising, and we found strong support for our theoretical model.
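
As a rough illustration of the measurement-model reliability checks mentioned above (not the authors' code, which uses PLS-SEM), the following sketch computes Cronbach's alpha, a standard internal-consistency measure, for one simulated survey construct; all item scores are invented.

```python
# Hypothetical sketch: internal-consistency check of a survey construct.
# Item scores are simulated; in practice they would come from the
# questionnaire responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=200)                      # simulated construct score
items = latent[:, None] + rng.normal(scale=0.8, size=(200, 4))
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # > 0.7 is acceptable
```
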
2019-11-26
Aiken, William, Kim, Hyoungshick, Ryoo, Jungwoo, Rosson, Mary Beth.  2018.  An Implementation and Evaluation of Progressive Authentication Using Multiple Level Pattern Locks. 2018 16th Annual Conference on Privacy, Security and Trust (PST). :1-6.
This paper presents a possible implementation of progressive authentication using the Android pattern lock. Our key idea is to use one pattern for two access levels to the device: an abridged pattern is used to access generic applications, and a second, extended and higher-complexity pattern is used less frequently to access more sensitive applications. We conducted a user study of 89 participants and a follow-up survey of those participants to investigate the usability of such a pattern scheme. Data from our prototype showed that for unlocking low-security applications the median unlock times for users of the multiple pattern scheme and conventional pattern scheme were 2824 ms and 5589 ms respectively, and the distributions in the two groups differed significantly (Mann-Whitney U test, p-value less than 0.05, two-tailed). From our user survey, we did not find statistically significant differences between the two groups in their qualitative responses regarding usability and security (t-test, p-value greater than 0.05, two-tailed), but the groups did not differ by more than one satisfaction rating at 90% confidence.
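
A minimal sketch of the reported statistical comparison, with simulated unlock times; only the medians (2824 ms and 5589 ms) come from the abstract, while the spreads and sample sizes are assumptions.

```python
# Two-tailed Mann-Whitney U test on hypothetical unlock times (ms)
# from the multi-level and conventional pattern groups.
from scipy.stats import mannwhitneyu
import numpy as np

rng = np.random.default_rng(1)
multi = rng.normal(2824, 600, size=45)        # hypothetical unlock times
conventional = rng.normal(5589, 900, size=44)

stat, p = mannwhitneyu(multi, conventional, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4g}")         # p < 0.05 -> distributions differ
```
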
2019-11-04
Altay, Osman, Ulas, Mustafa.  2018.  Location Determination by Processing Signal Strength of Wi-Fi Routers in the Indoor Environment with Linear Discriminant Classifier. 2018 6th International Symposium on Digital Forensic and Security (ISDFS). :1-4.

Location determination in indoor areas, as well as in open areas, is important for many applications, but it is a much more difficult process indoors. The Global Positioning System (GPS) signals used for position detection are not effective in indoor areas. Wi-Fi signals are a widely used alternative for indoor localization. Indoors, localization can serve many different purposes, such as intelligent home systems, locating people, and locating products in a depot. In this study, we determined location among four different areas by classifying the Wi-Fi signal strength values obtained from different routers. Linear discriminant analysis (LDA) was used for classification. In tests using 10-fold cross-validation, an accuracy of 97.2% was achieved.
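
A sketch of the described pipeline under simulated data: an RSSI matrix (one column per router, all values hypothetical) classified with LDA and evaluated by 10-fold cross-validation.

```python
# Simulated RSSI fingerprints for 4 areas and 6 routers, classified with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
centers = rng.uniform(-90, -30, size=(4, 6))        # 4 areas, 6 routers (dBm)
X = np.vstack([c + rng.normal(0, 4, size=(100, 6)) for c in centers])
y = np.repeat(np.arange(4), 100)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```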

2019-07-01
Carrasco, A., Ropero, J., Clavijo, P. Ruiz de, Benjumea, J., Luque, A..  2018.  A Proposal for a New Way of Classifying Network Security Metrics: Study of the Information Collected through a Honeypot. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :633–634.

Nowadays, honeypots are a key tool to attract attackers and study their activity. They help us evaluate attackers' behaviour, discover new types of attacks, and collect information and statistics associated with them. However, the gathered data cannot be directly interpreted; it must be analyzed to obtain useful information. In this paper, we present an SSH honeypot-based system designed to simulate a vulnerable server. We then propose an approach for classifying metrics from the data collected by the honeypot over 19 months.

2019-03-25
Erbay, C., Ergïn, S..  2018.  Random Number Generator Based on Hydrogen Gas Sensor for Security Applications. 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS). :709–712.
Cryptographic applications need a high-quality random number generator (RNG) for strong security and privacy measures. This paper presents an RNG based on a hydrogen gas sensor fabricated using microfabrication techniques. The proposed approach extracts thermal noise from the gas sensor, which is non-deterministic during its operation, as an entropy source, and uses the SHA-256 hash function for post-processing. The processed output fulfils the NIST 800-22 statistical randomness test suite, demonstrating that a gas-sensor-based RNG can provide high-quality random numbers. This method enables secure data transfer without additional hardware wherever a hydrogen gas sensor is already required, such as in petrochemical facilities, fuel cells, and nuclear reactors.
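
A hedged sketch of the post-processing stage only: raw noisy samples (Gaussian noise standing in for the sensor's thermal noise) are conditioned through SHA-256 into uniformly distributed output bytes.

```python
# Conditioning raw sensor entropy with SHA-256, as the paper describes.
import hashlib
import numpy as np

def rng_block(raw_samples: np.ndarray) -> bytes:
    """Condition one block of raw entropy into 32 random bytes."""
    return hashlib.sha256(raw_samples.tobytes()).digest()

noise = np.random.default_rng().normal(size=256)   # stand-in entropy source
random_bytes = rng_block(noise)
print(random_bytes.hex())
```
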
Ali-Tolppa, J., Kocsis, S., Schultz, B., Bodrog, L., Kajo, M..  2018.  Self-Healing and Resilience in Future 5G Cognitive Autonomous Networks. 2018 ITU Kaleidoscope: Machine Learning for a 5G Future (ITU K). :1–8.
In the Self-Organizing Networks (SON) concept, self-healing functions are used to detect, diagnose and correct degraded states in managed network functions or other resources. Such methods are increasingly important in future network deployments, since ultra-high reliability is one of the key requirements for future 5G mobile networks, e.g. in critical machine-type communication. In this paper, we discuss considerations for improving the resiliency of future cognitive autonomous mobile networks. In particular, we present an automated anomaly detection and diagnosis function for SON self-healing based on multi-dimensional statistical methods, case-based reasoning and active learning techniques. Insights from both human experts and sophisticated machine learning methods are combined in an iterative way. Additionally, we present how a more holistic view of mobile network self-healing can improve its performance.
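
One plausible multi-dimensional statistical detector of the kind such a self-healing function could build on (an illustration, not the paper's method): KPI vectors with a large Mahalanobis distance from the learned normal profile are flagged as degraded states.

```python
# Multivariate anomaly detection on simulated cell KPIs
# (throughput Mbps, success ratio, latency ms).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
normal_kpis = rng.multivariate_normal([50, 0.99, 5], np.diag([25, 1e-4, 1]), 1000)
mean = normal_kpis.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_kpis, rowvar=False))

def is_anomalous(kpi: np.ndarray, alpha: float = 0.001) -> bool:
    d2 = (kpi - mean) @ cov_inv @ (kpi - mean)      # squared Mahalanobis distance
    return d2 > chi2.ppf(1 - alpha, df=len(kpi))

print(is_anomalous(np.array([50, 0.99, 5])))        # False: healthy cell
print(is_anomalous(np.array([5, 0.80, 30])))        # True: degraded state
```
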
2019-03-22
Obert, J., Chavez, A., Johnson, J..  2018.  Behavioral Based Trust Metrics and the Smart Grid. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1490-1493.

To ensure reliable and predictable service in the electrical grid, it is important to gauge the level of trust present within critical components and substations. Although trust throughout a smart grid is temporal and varies dynamically according to measured states, it is possible to accurately formulate communications and service-level strategies based on such trust measurements. Utilizing an effective set of machine learning and statistical methods, it is shown that the establishment of trust levels between substations using behavioral pattern analysis is possible. It is also shown that the establishment of such trust can facilitate simple secure communications routing between substations.

2018-11-19
Grinstein, E., Duong, N. Q. K., Ozerov, A., Pérez, P..  2018.  Audio Style Transfer. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :586–590.
"Style transfer" among images has recently emerged as a very active research topic, fuelled by the power of convolutional neural networks (CNNs), and has quickly become a very popular technology in social media. This paper investigates the analogous problem in the audio domain: how to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer methods, the proposed process is initialized by the target content instead of random noise, and the optimized loss concerns only texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signals confirm the potential of the proposed approach.
2018-09-12
Montieri, A., Ciuonzo, D., Aceto, G., Pescape, A..  2017.  Anonymity Services Tor, I2P, JonDonym: Classifying in the Dark. 2017 29th International Teletraffic Congress (ITC 29). 1:81–89.

Traffic classification, i.e. associating network traffic with the application that generated it, is an important tool for several tasks spanning different fields (security, management, traffic engineering, R&D). This process is challenged by applications that preserve Internet users' privacy by encrypting the communication content, and even more by anonymity tools, which additionally hide the source, the destination, and the nature of the communication. In this paper, leveraging a public dataset released in 2017, we provide (repeatable) classification results with the aim of investigating to what degree the specific anonymity tool (and the traffic it hides) can be identified, when compared to the traffic of the other considered anonymity tools, using machine learning approaches based solely on statistical features. To this end, four classifiers are trained and tested on the dataset: (i) Naïve Bayes, (ii) Bayesian Network, (iii) C4.5, and (iv) Random Forest. Results show that the three considered anonymity networks (Tor, I2P, JonDonym) can be easily distinguished (with an accuracy of 99.99%), and even the specific application generating the traffic can be identified (with an accuracy of 98.00%).
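
A sketch of this kind of evaluation with synthetic statistical features standing in for the public dataset; Naive Bayes and Random Forest represent two of the four classifiers compared in the paper.

```python
# Classifying synthetic "flow statistics" for three anonymity-tool stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(300, 8))
               for m in (0.0, 1.5, 3.0)])           # Tor / I2P / JonDonym stand-ins
y = np.repeat([0, 1, 2], 300)

for clf in (GaussianNB(), RandomForestClassifier(n_estimators=100, random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__}: {acc:.3f}")
```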

2018-09-05
Takbiri, N., Houmansadr, A., Goeckel, D. L., Pishro-Nik, H..  2017.  Limits of location privacy under anonymization and obfuscation. 2017 IEEE International Symposium on Information Theory (ISIT). :764–768.

The prevalence of mobile devices and location-based services (LBS) has generated great concerns regarding LBS users' privacy, which can be compromised by statistical analysis of their movement patterns. A number of algorithms have been proposed to protect the privacy of users in such systems, but the fundamental underpinnings of such protection remain unexplored. Recently, the concept of perfect location privacy was introduced and its achievability was studied for anonymization-based LBS systems, where user identifiers are permuted at regular intervals to prevent identification based on statistical analysis of long time sequences. In this paper, we significantly extend that investigation by incorporating the other major tool commonly employed to obtain location privacy: obfuscation, where user locations are purposely obscured to protect their privacy. Since anonymization and obfuscation reduce user utility in LBS systems, we investigate how location privacy varies with the degree to which each of these two methods is employed. We provide: (1) achievability results for the case where the location of each user is governed by an i.i.d. process; (2) converse results for the i.i.d. case as well as the more general Markov Chain model. We show that, as the number of users in the network grows, the obfuscation-anonymization plane can be divided into two regions: in the first region, all users have perfect location privacy; and, in the second region, no user has location privacy.

2018-05-30
Price-Williams, M., Heard, N., Turcotte, M..  2017.  Detecting Periodic Subsequences in Cyber Security Data. 2017 European Intelligence and Security Informatics Conference (EISIC). :84–90.

Anomaly detection for cyber-security defence has garnered much attention in recent years, providing an orthogonal approach to traditional signature-based detection systems. Anomaly detection relies on building probability models of normal computer network behaviour and detecting deviations from the model. Most data sets used for cyber-security have a mix of user-driven events and automated network events, which most often appear as polling behaviour. Separating these automated events from those caused by human activity is essential to building good statistical models for anomaly detection. This article presents a changepoint detection framework for identifying automated network events appearing as periodic subsequences of event times. The opening event of each subsequence is interpreted as a human action which then generates an automated, periodic process. Difficulties arising from the presence of duplicate and missing data are addressed. The methodology is demonstrated using authentication data from Los Alamos National Laboratory's enterprise computer network.
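
A much-simplified stand-in for the changepoint framework, conveying only the core periodicity signal: an event stream whose inter-arrival times have a low coefficient of variation is flagged as automated polling.

```python
# Crude periodicity check on event timestamps (seconds).
import numpy as np

def looks_periodic(event_times: np.ndarray, cv_threshold: float = 0.1) -> bool:
    gaps = np.diff(np.sort(event_times))
    return gaps.std() / gaps.mean() < cv_threshold   # low variation = polling

rng = np.random.default_rng(5)
polling = np.arange(0, 600, 30) + rng.normal(0, 0.5, 20)   # ~30 s heartbeat
human = np.sort(rng.uniform(0, 600, 20))                   # irregular activity
print(looks_periodic(polling), looks_periodic(human))      # True False
```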

2018-05-09
Atli, A. V., Uluderya, M. S., Tatlicioglu, S., Gorkemli, B., Balci, A. M..  2017.  Protecting SDN controller with per-flow buffering inside OpenFlow switches. 2017 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). :1–5.

Software Defined Networking (SDN) is a paradigm shift that changes the working principles of IP networks by separating the control logic from routers and switches and logically centralizing it within a controller. In this architecture, the control plane (controller) communicates with the data plane (switches) through a control channel using a standards-compliant protocol, namely OpenFlow. While a centralized controller creates an opportunity to monitor and program the entire network, as a side effect it causes the control plane to become a single point of failure. Denial of service (DoS) attacks or even heavy control traffic conditions can easily become real threats to the proper functioning of the controller, which indirectly harms the entire network. In this paper, we propose a solution to reduce the control traffic generated primarily during table-miss events. We utilize the buffer_id feature of the OpenFlow protocol, which was designed to identify individually buffered packets within a switch, reusing it to identify flows buffered as a series of packets during a table-miss, which happens when no rule in the switch flow tables matches the received packet. Thus, we allow the OpenFlow switch to send only the first packet of a flow to the controller on a table-miss, while buffering the rest of the packets in switch memory until the controller responds or a timeout occurs. The test results show that OpenFlow traffic is significantly reduced when the proposed method is used.

2018-05-01
Cogranne, R., Sedighi, V., Fridrich, J..  2017.  Practical Strategies for Content-Adaptive Batch Steganography and Pooled Steganalysis. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2122–2126.

This paper investigates practical strategies for distributing payload across images with content-adaptive steganography and for pooling outputs of a single-image detector for steganalysis. Adopting a statistical model for the detector's output, the steganographer minimizes the power of the most powerful detector of an omniscient Warden, while the Warden, informed by the payload spreading strategy, detects with the likelihood ratio test in the form of a matched filter. Experimental results with state-of-the-art content-adaptive additive embedding schemes and rich models are included to show the relevance of the results.
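
A toy version of the pooling step under the stated statistical model, with invented numbers: single-image detector outputs are Gaussian, embedding shifts their means by a known profile, and the Warden pools them with the matched filter, which is the likelihood-ratio statistic for this model.

```python
# Matched-filter pooling of single-image steganalysis detector outputs.
import numpy as np

rng = np.random.default_rng(6)
shift = np.array([0.0, 0.3, 0.6, 0.9, 1.2])     # per-image mean shift profile
cover = rng.normal(0, 1, size=5)                # detector outputs, no payload
stego = rng.normal(shift, 1)                    # outputs with spread payload

def matched_filter(outputs: np.ndarray) -> float:
    return float(shift @ outputs) / np.linalg.norm(shift)

print(f"cover score: {matched_filter(cover):+.2f}")
print(f"stego score: {matched_filter(stego):+.2f}")   # larger on average
```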

2018-04-11
Ghanem, K., Aparicio-Navarro, F. J., Kyriakopoulos, K. G., Lambotharan, S., Chambers, J. A..  2017.  Support Vector Machine for Network Intrusion and Cyber-Attack Detection. 2017 Sensor Signal Processing for Defence Conference (SSPD). :1–5.

Cyber-security threats are a growing concern in networked environments. The development of Intrusion Detection Systems (IDSs) is fundamental in order to provide an extra level of security. We have developed an unsupervised anomaly-based IDS that uses statistical techniques to conduct the detection process. Despite providing many advantages, anomaly-based IDSs tend to generate a high number of false alarms. Machine Learning (ML) techniques have gained wide interest in intrusion detection tasks. In this work, Support Vector Machine (SVM) is considered an ML technique that could complement the performance of our IDS, providing a second line of detection to reduce the number of false alarms, or serve as an alternative detection technique. We assess the performance of our IDS against one-class and two-class SVMs, using linear and non-linear forms. The results we present show that the linear two-class SVM generates highly accurate results, and the accuracy of the linear one-class SVM is very comparable without needing training datasets associated with malicious data. Similarly, the results show that our IDS could benefit from the use of ML techniques to increase its accuracy when analysing datasets comprising non-homogeneous features.
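
A sketch of the one-class vs. two-class comparison on simulated feature vectors; note the one-class SVM is fitted on normal traffic only, mirroring the point that it needs no malicious training data.

```python
# One-class vs. two-class SVM on synthetic network-frame features.
import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(7)
normal = rng.normal(0, 1, size=(500, 5))
attack = rng.normal(3, 1, size=(100, 5))

ocsvm = OneClassSVM(kernel="linear", nu=0.05).fit(normal)     # no attack data needed
svc = SVC(kernel="linear").fit(np.vstack([normal, attack]),
                               np.r_[np.zeros(500), np.ones(100)])

test = rng.normal(3, 1, size=(50, 5))                         # unseen attacks
print("one-class flags:", (ocsvm.predict(test) == -1).mean()) # -1 = outlier
print("two-class flags:", svc.predict(test).mean())
```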

2018-03-26
Hasslinger, G., Kunbaz, M., Hasslinger, F., Bauschert, T..  2017.  Web Caching Evaluation from Wikipedia Request Statistics. 2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). :1–6.

Wikipedia is one of the most popular information platforms on the Internet. The user access pattern to Wikipedia pages depends on their relevance in the current worldwide social discourse. We use publicly available statistics about the top-1000 most popular pages on each day to estimate the efficiency of caches supporting the platform. While the data volumes are moderate, the main goal of Wikipedia caches is to reduce access times for page views and edits. We study the impact of the most popular pages on the achievable cache hit rate in comparison to Zipf request distributions, and we include daily dynamics in popularity.
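
A small simulation in the spirit of the study, with illustrative parameters (not Wikipedia's): requests drawn from a Zipf popularity law hitting an LRU cache, with the hit rate measured at the end.

```python
# Zipf-distributed page requests against an LRU cache.
from collections import OrderedDict
import numpy as np

rng = np.random.default_rng(8)
requests = rng.zipf(a=1.8, size=100_000)        # page IDs, Zipf-distributed

cache, capacity, hits = OrderedDict(), 1000, 0
for page in requests:
    if page in cache:
        hits += 1
        cache.move_to_end(page)                  # refresh LRU position
    else:
        cache[page] = True
        if len(cache) > capacity:
            cache.popitem(last=False)            # evict least recently used
print(f"LRU hit rate: {hits / len(requests):.2%}")
```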

2018-03-19
Thankaraj, A., Nair, A. J., Vasudevan, N., Pathari, V..  2017.  Misclassifications: The Missing Link. 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :1719–1722.

The notion of style is pivotal to literature. The choice of a certain writing style moulds and enhances the overall character of a book. Stylometry uses statistical methods to analyze literary style. This work aims to build a recommendation system based on the similarity of stylometric cues across authors. The problem at hand is closely related to the author attribution problem. It follows a supervised approach, with an initial corpus of books labelled with their respective authors as the training set, and generates recommendations based on the misclassified books. Results on book similarity are substantiated by domain experts.

2018-02-15
Hibshi, H., Breaux, T. D..  2017.  Reinforcing Security Requirements with Multifactor Quality Measurement. 2017 IEEE 25th International Requirements Engineering Conference (RE). :144–153.
Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how our proposed method helps to improve security requirements elicitation, analysis, and measurement.
Fraser, J. G., Bouridane, A..  2017.  Have the security flaws surrounding BITCOIN effected the currency's value? 2017 Seventh International Conference on Emerging Security Technologies (EST). :50–55.

When Bitcoin was first introduced to the world in 2008 by an enigmatic programmer going by the pseudonym Satoshi Nakamoto, it was billed as the world's first decentralized virtual currency. Offering the first credible incarnation of a digital currency, Bitcoin was based on the principle of peer-to-peer transactions involving a complex public address and a private key that only the owner of the coin would know. This paper investigates how the usage and value of Bitcoin are affected by current events in the cyber environment. Is an advancement in the digital security of Bitcoin reflected in the value of the currency, and conversely, does a major security breach have a negative effect? By analyzing statistical data on the market value of Bitcoin at specific points where the currency fluctuated dramatically, it is believed that trends can be found. This paper proposes that, based on the data analyzed, the current integrity of Bitcoin security is trusted by general users and the value and usage of the currency are growing. All the major fluctuations of the currency can be linked to significant events within the digital security environment; however, these fluctuations are beginning to decrease in frequency and severity. Bitcoin is still a volatile currency, but this paper concludes that this is a result of security flaws in Bitcoin services as opposed to the Bitcoin protocol itself.

2018-02-02
You, J., Shangguan, J., Sun, Y., Wang, Y..  2017.  Improved trustworthiness judgment in open networks. 2017 International Smart Cities Conference (ISC2). :1–2.

A collaborative recommendation mechanism helps a subject in an open network efficiently find enough referrers who have directly interacted with an object and obtain their trust data. Uncertainty analysis of the collected trust data selects the reliable trust data of trustworthy referrers, and a statistical trust value with a given reliability is then calculated for the object. The subject can then judge the object's trustworthiness and decide whether to interact based on a given threshold. The feasibility of this method is verified by three experiments designed to validate the model's ability to resist malicious service, exaggeration, and slander attacks. The interaction success rate is significantly improved by the new model, and malicious entities are distinguished more effectively than with the comparative model.
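
A minimal sketch of the decision rule as described, with an illustrative filtering rule, threshold, and data: reliable trust reports are averaged and compared against an interaction threshold.

```python
# Threshold-based interaction decision from filtered trust reports.
import numpy as np

def decide_interaction(reports: np.ndarray, reliability: np.ndarray,
                       min_reliability: float = 0.6, threshold: float = 0.7) -> bool:
    reliable = reports[reliability >= min_reliability]   # discard uncertain data
    if reliable.size == 0:
        return False
    return bool(reliable.mean() >= threshold)

reports = np.array([0.9, 0.8, 0.2, 0.85])     # trust data about the object
reliability = np.array([0.9, 0.8, 0.3, 0.7])  # uncertainty analysis output
print(decide_interaction(reports, reliability))   # True: interact
```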

Chen, L., May, J..  2017.  Theoretical Feasibility of Statistical Assurance of Programmable Systems Based on Simulation Tests. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :630–631.

This paper presents a new model to support empirical failure probability estimation for a software-intensive system. The new element of the approach is that it combines the results of testing using a simulated hardware platform with results from testing on the real platform. This approach addresses a serious practical limitation of a technique known as statistical testing. This limitation will be called the test time expansion problem (or simply the 'time problem'): the amount of testing required to demonstrate useful levels of reliability over a time period T is many orders of magnitude greater than T. The time problem arises whether the aim is to demonstrate ultra-high reliability levels for protection systems, or to demonstrate any (desirable) reliability levels for continuous operation ('high demand') systems. Specifically, the theoretical feasibility of a platform simulation approach is considered since, if this is not proven, questions of practical implementation are moot. Subject to the assumptions made in the paper, theoretical feasibility is demonstrated.
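
A worked illustration of the 'time problem': under a simple Bernoulli demand model (an assumption here, not the paper's exact formulation), demonstrating failure probability p with confidence C requires at least ln(1-C)/ln(1-p) failure-free tests, which grows roughly as 1/p.

```python
# Failure-free test demands needed to claim failure probability p
# with confidence C, under an independent-demands model.
import math

def demands_needed(p: float, confidence: float = 0.99) -> int:
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for p in (1e-3, 1e-5, 1e-7):
    print(f"p = {p:g}: {demands_needed(p):,} failure-free tests")
```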

2018-01-23
Kolosnjaji, B., Eraisha, G., Webster, G., Zarras, A., Eckert, C..  2017.  Empowering convolutional networks for malware classification and analysis. 2017 International Joint Conference on Neural Networks (IJCNN). :3838–3845.

Performing large-scale malware classification is increasingly becoming a critical step in malware analytics as the number and variety of malware samples are rapidly growing. Statistical machine learning constitutes an appealing method to cope with this increase, as it can use mathematical tools to extract information out of large-scale datasets and produce interpretable models. This has motivated a surge of scientific work in developing machine learning methods for detection and classification of malicious executables. However, an optimal method for extracting the most informative features for different malware families, with the final goal of malware classification, is yet to be found. Fortunately, neural networks have evolved to the state that they can surpass the limitations of other methods in terms of hierarchical feature extraction. Consequently, neural networks can now offer superior classification accuracy in many domains such as computer vision and natural language processing. In this paper, we transfer the performance improvements achieved in the area of neural networks to model the execution sequences of disassembled malicious binaries. We implement a neural network that consists of convolutional and feedforward neural constructs. This architecture embodies a hierarchical feature extraction approach that combines convolution of n-grams of instructions with plain vectorization of features derived from the headers of the Portable Executable (PE) files. Our evaluation results demonstrate that our approach outperforms baseline methods, such as simple Feedforward Neural Networks and Support Vector Machines, achieving 93% precision and recall even in the presence of obfuscation in the data.
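
A schematic PyTorch model in the spirit of the described architecture (layer sizes and vocabulary size are illustrative assumptions, not the paper's): convolution over embedded instruction sequences, concatenated with PE-header features, followed by feedforward layers.

```python
# Hybrid convolutional + feedforward network over instruction IDs and
# PE-header feature vectors.
import torch
import torch.nn as nn

class MalwareNet(nn.Module):
    def __init__(self, vocab=256, n_header_feats=32, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, 16)
        self.conv = nn.Sequential(
            nn.Conv1d(16, 64, kernel_size=3), nn.ReLU(),   # n-gram convolution
            nn.AdaptiveMaxPool1d(1))
        self.head = nn.Sequential(
            nn.Linear(64 + n_header_feats, 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, instr_ids, header_feats):
        x = self.embed(instr_ids).transpose(1, 2)    # (batch, emb, seq)
        x = self.conv(x).squeeze(-1)                 # (batch, 64)
        return self.head(torch.cat([x, header_feats], dim=1))

model = MalwareNet()
logits = model(torch.randint(0, 256, (4, 100)), torch.randn(4, 32))
print(logits.shape)   # torch.Size([4, 10])
```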

2018-01-16
Miramirkhani, N., Appini, M. P., Nikiforakis, N., Polychronakis, M..  2017.  Spotless Sandboxes: Evading Malware Analysis Systems Using Wear-and-Tear Artifacts. 2017 IEEE Symposium on Security and Privacy (SP). :1009–1024.

Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed in an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the "wear and tear" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks, to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.
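
A sketch of the kind of classifier the paper derives, over invented wear-and-tear features: a shallow decision tree separating long-used devices from freshly provisioned sandboxes. All feature values below are made up for illustration.

```python
# Shallow decision tree over simulated wear-and-tear features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
# features: [browser history entries, DNS cache size, event-log age in days]
real = np.column_stack([rng.poisson(5000, 200), rng.poisson(300, 200),
                        rng.uniform(100, 1000, 200)])
sandbox = np.column_stack([rng.poisson(20, 200), rng.poisson(10, 200),
                           rng.uniform(0, 5, 200)])

X = np.vstack([real, sandbox])
y = np.r_[np.zeros(200), np.ones(200)]          # 0 = real device, 1 = sandbox
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[15, 8, 1.0]]))             # likely flagged as artificial
```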

He, Z., Zhang, T., Lee, R. B..  2017.  Machine Learning Based DDoS Attack Detection from Source Side in Cloud. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :114–120.

Denial of service (DoS) attacks are a serious threat to network security. These attacks are often sourced from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side with predefined thresholds. These approaches have significant disadvantages: they are only passive defenses after the attack, they cannot use the outbound statistical features of attacks, and it is hard to trace back to the attacker with them. In this paper, we propose a DoS attack detection system on the source side in the cloud, based on machine learning techniques. This system leverages statistical information from both the cloud server's hypervisor and the virtual machines to prevent network packets from being sent out to the outside network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DoS attacks are successfully detected. Our approach does not degrade performance and can be easily extended to broader DoS attacks.

2017-12-28
Stanić, B., Afzal, W..  2017.  Process Metrics Are Not Bad Predictors of Fault Proneness. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :493–499.

The correct prediction of faulty modules or classes has a number of advantages, such as improving the quality of software and assigning capable development resources to fix such faults. Different kinds of fault/defect prediction models have been proposed in the literature, but a great majority of them make use of static code metrics as independent variables for making predictions. Recently, process metrics have gained considerable attention as alternative metrics for making trustworthy predictions. The objective of this paper is to investigate different combinations of static code and process metrics for evaluating fault prediction performance. We have used publicly available data sets, along with a frequently used classifier, Naive Bayes, to run our experiments. We have, both statistically and visually, analyzed our experimental results. The statistical analysis showed evidence against any significant difference in fault prediction performance for a variety of different combinations of metrics. This reinforced earlier research results that process metrics are as good predictors of fault proneness as static code metrics. Furthermore, the visual inspection of box plots revealed that the best set of metrics for fault prediction is a mix of both static code and process metrics. We also presented evidence that some process metrics are more discriminating than others, making them good predictors to use.

2017-12-27
Arivazhagan, S., Jebarani, W. S. L., Kalyani, S. V., Abinaya, A. Deiva.  2017.  Mixed chaotic maps based encryption for high crypto secrecy. 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN). :1–6.

In recent years, chaos-based cryptographic algorithms have enabled new and efficient ways to develop secure image encryption techniques. In this paper, we propose a new approach to image encryption based on chaotic maps in order to meet the requirements of secure image encryption. The chaos-based image encryption technique uses simple chaotic maps that are very sensitive to initial conditions. Using mixed chaotic maps, which operate through simple substitution and transposition techniques, to encrypt the original image yields better performance with less computational complexity, which in turn gives high crypto-secrecy. The initial conditions for the chaotic maps serve as the key: only a receiver holding that seed can decrypt the message. The results of the experiments, statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way to encrypt images.
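
A didactic sketch of such a scheme, assuming the classic logistic map as the simple chaotic map (this is not the paper's exact algorithm, and not production cryptography): the key (x0, r) seeds a keystream used for pixel substitution (XOR) and a chaotic ordering used for transposition.

```python
# Toy chaos-based image encryption: transposition + substitution stages
# driven by a logistic-map keystream.
import numpy as np

def logistic_stream(x0: float, r: float, n: int) -> np.ndarray:
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(img: np.ndarray, x0: float = 0.3141, r: float = 3.9999) -> np.ndarray:
    flat = img.ravel()
    chaos = logistic_stream(x0, r, flat.size)
    perm = np.argsort(chaos)                         # transposition stage
    keystream = (chaos * 256).astype(np.uint8)       # substitution stage
    return (flat[perm] ^ keystream).reshape(img.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)    # stand-in image
print(encrypt(img))    # decryption reverses the XOR, then the permutation
```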