Biblio

Found 12290 results

2014
Zhou, Jian, Sun, Liyan, Zhou, Xianwei, Song, Junde.  2014.  High Performance Group Merging/Splitting Scheme for Group Key Management. Wirel. Pers. Commun.. 75:1529–1545.

The group merging/splitting event differs from the joining/leaving events, in which only a single member joins or leaves the group; in a merging/splitting event, two small groups merge into one group or one group is divided into two independent parts. Rekeying is an important issue in key management, whose goal is to guarantee forward security and backward security when membership changes. However, in most existing group key management schemes rekeying efficiency depends on group scale, so those schemes are unsuitable for applications whose rekeying delay is strictly limited. In particular, multiple members are involved in a group merging/splitting event, so rekeying performance becomes a serious concern. In this paper, a high-performance group merging/splitting group key management scheme is proposed based on a one-encryption-key multi-decryption-key protocol: each member holds a unique decryption key corresponding to a common encryption key, so only the common encryption key is updated when a group merging/splitting event occurs, while the secret decryption keys remain unchanged. Regarding efficiency, no more than one message is sent per merging/splitting event, and the network load is reduced because a single group member’s key material is enough for the other group members to agree on a fresh common encryption key. Regarding security, the proposed scheme satisfies the key management security requirements of passive security, forward security, backward security, and key independence. Therefore, the proposed scheme is suitable for dynamic networks in which the rekeying delay is strictly limited, such as delay-tolerant networks.

Williams, Laurie A., Nicol, David M., Singh, Munindar P..  2014.  HotSoS '14: Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. Symposium and Bootcamp on the Science of Security.

The Symposium and Bootcamp on the Science of Security (HotSoS) is a research event centered on the Science of Security (SoS). Following a successful invitational SoS Community Meeting in December 2012, HotSoS 2014 was the first open research event in what we expect will be a continuing series of such events. The key motivation behind developing a Science of Security is to address the fundamental problems of cybersecurity in a principled manner. Security has been intensively studied, but much previous research emphasizes the engineering of specific solutions without first developing the scientific understanding of the problem domain. All too often, security research conveys the flavor of identifying specific threats and removing them in an apparently ad hoc manner. The motivation behind the nascent Science of Security is to understand how computing systems are architected, built, used, and maintained with a view to understanding and addressing security challenges systematically across their life cycle. In particular, two features distinguish the Science of Security from previous research programs on cybersecurity. Scope. The Science of Security considers not just computational artifacts but also incorporates the human, social, and organizational aspects of computing within its purview. Approach. The Science of Security takes a decidedly scientific approach, based on the understanding of empirical evaluation and theoretical foundations as developed in the natural and social sciences, but adapted as appropriate for the "artificial science" (paraphrasing Herb Simon's term) that is computing.

Layman, Lucas, Diffo, Sylvain David, Zazworka, Nico.  2014.  Human Factors in Webserver Log File Analysis: A Controlled Experiment on Investigating Malicious Activity. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :9:1–9:11.

While automated methods are the first line of defense for detecting attacks on webservers, a human agent is required to understand the attacker's intent and the attack process. The goal of this research is to understand the value of various log fields and the cognitive processes by which log information is grouped, searched, and correlated. Such knowledge will enable the development of human-focused log file investigation technologies. We performed controlled experiments with 65 subjects (IT professionals and novices) who investigated excerpts from six webserver log files. Quantitative and qualitative data were gathered to: 1) analyze subject accuracy in identifying malicious activity; 2) identify the most useful pieces of log file information; and 3) understand the techniques and strategies used by subjects to process the information. Statistically significant effects of attack type were observed on both the accuracy of identifying attacks and the time taken to do so. Systematic differences were also observed in the log fields used by high-performing and low-performing groups. The findings include: 1) new insights into how specific log data fields are used to effectively assess potentially malicious activity; 2) obfuscating factors in log data from a human cognitive perspective; and 3) practical implications for tools to support log file investigations.

Gary Wang, University of Illinois at Urbana-Champaign, Zachary J. Estrada, University of Illinois at Urbana-Champaign, Cuong Pham, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2014.  Hypervisor Introspection: Exploiting Timing Side-Channels against VM Monitoring. 44th International Conference on Dependable Systems and Networks.

Hypervisor activity is designed to be hidden from guest Virtual Machines (VM) as well as external observers. In this paper, we demonstrate that this does not always occur. We present a method by which an external observer can learn sensitive information about hypervisor internals, such as VM scheduling or hypervisor-level monitoring schemes, by observing a VM. We refer to this capability as Hypervisor Introspection (HI).

HI can be viewed as the inverse process of the well-known Virtual Machine Introspection (VMI) technique. VMI is a technique to extract VMs’ internal state from the hypervisor, facilitating the implementation of reliability and security monitors [1]. Conversely, HI is a technique that allows VMs to autonomously extract hypervisor information. This capability enables a wide range of attacks, for example, learning a hypervisor’s properties (version, configuration, etc.), defeating hypervisor-level monitoring systems, and compromising the confidentiality of co-resident VMs. This paper focuses on the discovery of a channel to implement HI, and then leveraging that channel for a novel attack against traditional VMI.

In order to perform HI, there must be a method of extracting information from the hypervisor. Since this information is intentionally hidden from a VM, we make use of a side channel. When the hypervisor checks a VM using VMI, VM execution (e.g. network communication between a VM and a remote system) must pause. Therefore, information regarding the hypervisor’s activity can be leaked through this suspension of execution. We call this side channel the VM suspend side channel, illustrated in Fig. 1. As a proof of concept, this paper presents how correlating the results of in-VM micro-benchmarking and out-of-VM reference monitoring can be used to determine when hypervisor-level monitoring tools are vulnerable to attacks.
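
To make the VM suspend side channel concrete, the sketch below shows the kind of in-VM micro-benchmarking loop the abstract alludes to: a tight loop is timestamped and unusually large gaps between samples are flagged as possible hypervisor-induced pauses. It is a minimal illustration under assumed parameters (loop length, gap threshold), not the authors' actual benchmark or attack code.

```python
# Minimal sketch: detect hypervisor-induced pauses from inside a VM by looking
# for outlier gaps in a tight timing loop. Threshold and loop length are
# illustrative assumptions.
import time

def sample_gaps(iterations=1_000_000):
    """Timestamp a tight loop and return gaps (ns) between consecutive samples."""
    gaps = []
    last = time.perf_counter_ns()
    for _ in range(iterations):
        now = time.perf_counter_ns()
        gaps.append(now - last)
        last = now
    return gaps

def suspected_suspends(gaps, threshold_ns=2_000_000):
    """Gaps far above the typical iteration time may indicate the VM was paused."""
    return [(i, g) for i, g in enumerate(gaps) if g > threshold_ns]

if __name__ == "__main__":
    gaps = sample_gaps()
    print("median gap:", sorted(gaps)[len(gaps) // 2], "ns")
    for index, gap in suspected_suspends(gaps)[:10]:
        print(f"iteration {index}: gap of {gap} ns (possible VM suspend)")
```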

Yang, Wei, Xiao, Xusheng, Pandita, Rahul, Enck, William, Xie, Tao.  2014.  Improving Mobile Application Security via Bridging User Expectations and Application Behaviors. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :32:1–32:2.

To keep malware out of mobile application markets, existing techniques analyze the security aspects of application behaviors and summarize patterns of these security aspects to determine what applications do. However, user expectations (reflected via user perception in combination with user judgment) are often not incorporated into such analysis to determine whether application behaviors are within user expectations. This poster presents our recent work on bridging the semantic gap between user perceptions of the application behaviors and the actual application behaviors.

Namdari, Mehdi, Jazayeri-Rad, Hooshang.  2014.  Incipient Fault Diagnosis Using Support Vector Machines Based on Monitoring Continuous Decision Functions. Eng. Appl. Artif. Intell.. 28:22–35.

The Support Vector Machine (SVM), an innovative machine learning tool based on statistical learning theory, has recently been used in process fault diagnosis tasks. In the application of SVM to a fault diagnosis problem, typically a discrete decision function with discrete output values is utilized solely to define the label of the fault. However, for incipient faults, in which the fault progresses steadily over time and there is a changeover from normal operation to faulty operation, a discrete decision function does not reveal any evidence about the progress and depth of the fault. Numerous process faults, such as reactor fouling and degradation of catalyst, progress slowly and can be categorized as incipient faults. In this work a continuous decision function is proposed instead. The decision function values not only define the fault label, but also give qualitative evidence about the depth of the fault. The suggested method is applied to incipient fault diagnosis of a continuous binary mixture distillation column, and the results demonstrate the practicability of the proposed approach. In incipient fault diagnosis tasks, the proposed approach outperformed some of the conventional techniques. Moreover, measured by monitoring indexes such as the false alarm rate, detection time, and diagnosis time, the performance of the proposed approach is better than that of typical discrete classification techniques.
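
As a rough illustration of the continuous-decision-function idea described above, the sketch below trains an SVM on synthetic "normal" and "fault" data and tracks the decision function value along a slowly drifting operating point. The data, kernel, and class layout are illustrative assumptions, not the paper's distillation-column case study.

```python
# Hedged sketch: monitor a continuous SVM decision function to track fault
# progression instead of relying only on the discrete predicted label.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative training data: class 0 = normal operation, class 1 = fully
# developed fault, each described by two monitored process variables.
X_normal = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
X_fault = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(200, 2))
X = np.vstack([X_normal, X_fault])
y = np.hstack([np.zeros(200), np.ones(200)])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Simulate an incipient fault: the operating point drifts slowly from the
# normal region toward the faulty region over time.
t = np.linspace(0.0, 1.0, 50)
trajectory = np.outer(t, [2.0, 2.0]) + rng.normal(scale=0.05, size=(50, 2))

# The discrete output (predict) only flips once the boundary is crossed,
# whereas the continuous decision value rises steadily, hinting at fault depth.
labels = clf.predict(trajectory)
scores = clf.decision_function(trajectory)

for k in (0, 15, 30, 49):
    print(f"t={t[k]:.2f}  label={int(labels[k])}  decision value={scores[k]:+.2f}")
```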

Lei Xu, Chunxiao Jiang, Jian Wang, Jian Yuan, Yong Ren.  2014.  Information Security in Big Data: Privacy and Data Mining. Access, IEEE. 2:1149-1176.

The growing popularity and development of data mining technologies bring a serious threat to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., the data mining results) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss their privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has their own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we would like to provide some useful insights into the study of PPDM.

Maass, Michael, Scherlis, William L., Aldrich, Jonathan.  2014.  In-nimbo Sandboxing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :1:1–1:12.

Sandboxes impose a security policy, isolating applications and their components from the rest of a system. While many sandboxing techniques exist, state of the art sandboxes generally perform their functions within the system that is being defended. As a result, when the sandbox fails or is bypassed, the security of the surrounding system can no longer be assured. We experiment with the idea of in-nimbo sandboxing, encapsulating untrusted computations away from the system we are trying to protect. The idea is to delegate computations that may be vulnerable or malicious to virtual machine instances in a cloud computing environment. This may not reduce the possibility of an in-situ sandbox compromise, but it could significantly reduce the consequences should that possibility be realized. To achieve this advantage, there are additional requirements, including: (1) A regulated channel between the local and cloud environments that supports interaction with the encapsulated application, (2) Performance design that acceptably minimizes latencies in excess of the in-situ baseline. To test the feasibility of the idea, we built an in-nimbo sandbox for Adobe Reader, an application that historically has been subject to significant attacks. We undertook a prototype deployment with PDF users in a large aerospace firm. In addition to thwarting several examples of existing PDF-based malware, we found that the added increment of latency, perhaps surprisingly, does not overly impair the user experience with respect to performance or usability.

Davis, Agnes, Shashidharan, Ashwin, Liu, Qian, Enck, William, McLaughlin, Anne, Watson, Benjamin.  2014.  Insecure Behaviors on Mobile Devices Under Stress. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :31:1–31:2.

One of the biggest challenges in mobile security is human behavior. The most secure password may be useless if it is sent as a text or in an email. The most secure network is only as secure as its most careless user. Thus, in the current project we sought to discover the conditions under which users of mobile devices were most likely to make security errors. This scaffolds a larger project where we will develop automatic ways of detecting such environments and eventually supporting users during these times to encourage safe mobile behaviors.

Alqahtani, Saeed M., Balushi, Maqbool Al, John, Robert.  2014.  An Intelligent Intrusion Detection System for Cloud Computing (SIDSCC). Proceedings of the 2014 International Conference on Computational Science and Computational Intelligence - Volume 02. :135–141.

Cloud computing is a distributed architecture with shared resources, software, and information. A great number of implementations and much research exist for Intrusion Detection Systems (IDS) in grid and cloud environments; however, they are limited in addressing the requirements of an ideal intrusion detection system. Security issues in Cloud Computing (CC) have become a major concern to its users, availability being one of the key security issues. Distributed Denial of Service (DDoS) is one of these security issues that poses a great threat to the availability of cloud services. The aim of this research is to evaluate the performance of IDS in CC when a DDoS attack is detected in a private cloud, named SaaSCloud. A model has been implemented on three virtual machines: a SaaSCloud model, a DDoS attack model, and an IDS server model. Through this implementation, a Service Intrusion Detection System in Cloud Computing (SIDSCC) is proposed, investigated, and evaluated.

Chang, She-I, Yen, David C., Chang, I-Cheng, Jan, Derek.  2014.  Internal Control Framework for a Compliant ERP System. Inf. Manage.. 51:187–205.

After the occurrence of numerous worldwide financial scandals, the importance of related issues such as internal control and information security has greatly increased. This study develops an internal control framework that can be applied within an enterprise resource planning (ERP) system. A literature review is first conducted to examine the necessary forms of internal control in information technology (IT) systems. The control criteria for the establishment of the internal control framework are then constructed. A case study is conducted to verify the feasibility of the established framework. This study proposes a 12-dimensional framework with 37 control items aimed at helping auditors perform effective audits by inspecting essential internal control points in ERP systems. The proposed framework allows companies to enhance IT audit efficiency and mitigates control risk. Moreover, companies that refer to this framework and consider the limitations of their own IT management can establish a more robust IT management mechanism.

Zhenqi Huang, University of Illinois at Urbana-Champaign, Chuchu Fan, University of Illinois at Urbana-Champaign, Alexandru Mereacre, University of Oxford, Sayan Mitra, University of Illinois at Urbana-Champaign, Marta Kwiatkowska, University of Oxford.  2014.  Invariant Verification of Nonlinear Hybrid Automata Networks of Cardiac Cells. 26th International Conference on Computer Aided Verification (CAV 2014).

Verification algorithms for networks of nonlinear hybrid automata (HA) can help us understand and control biological processes such as cardiac arrhythmia, formation of memory, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. First, it uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A for constructing a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells, where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, 29^5 locations) typically in less than 15 minutes for up to reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.
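
For context on the "input-to-state discrepancy functions" the abstract relies on, one commonly used formulation is sketched below in LaTeX. The exact conditions and function classes used in the paper may differ, so this should be read as an assumed standard definition rather than the authors' precise statement.

```latex
% One common form of an input-to-state discrepancy function V for a subsystem
% with trajectory \xi(x, u, \cdot): V is sandwiched by class-K functions of the
% state distance, and trajectory divergence is bounded by the initial
% discrepancy plus an integral term in the input mismatch.
\[
\underline{\alpha}\bigl(\lVert x_1 - x_2 \rVert\bigr) \;\le\; V(x_1, x_2)
\;\le\; \overline{\alpha}\bigl(\lVert x_1 - x_2 \rVert\bigr),
\]
\[
V\bigl(\xi(x_1, u_1, t),\, \xi(x_2, u_2, t)\bigr) \;\le\;
\beta\bigl(V(x_1, x_2),\, t\bigr)
\;+\; \int_0^{t} \gamma\bigl(\lVert u_1(s) - u_2(s) \rVert\bigr)\, \mathrm{d}s .
\]
```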

Layman, Lucas, Zazworka, Nico.  2014.  InViz: Instant Visualization of Security Attacks. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :15:1–15:2.

The InViz tool is a functional prototype that provides graphical visualizations of log file events to support real-time attack investigation. Through visualization, both experts and novices in cybersecurity can analyze patterns of application behavior and investigate potential cybersecurity attacks. The goal of this research is to identify and evaluate which cybersecurity information to visualize in order to reduce the amount of time required to perform cyber forensics.

Rao, Ashwini, Hibshi, Hanan, Breaux, Travis, Lehker, Jean-Michel, Niu, Jianwei.  2014.  Less is More?: Investigating the Role of Examples in Security Studies Using Analogical Transfer. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :7:1–7:12.

Information system developers and administrators often overlook critical security requirements and best practices. This may be due to lack of tools and techniques that allow practitioners to tailor security knowledge to their particular context. In order to explore the impact of new security methods, we must improve our ability to study the impact of security tools and methods on software and system development. In this paper, we present early findings of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. To motivate this research, we formulate hypotheses from analogical transfer theory in psychology. The independent variables include number of problem surfaces and schemas, and the dependent variable is the answer accuracy. Our study results do not show a statistically significant difference in performance when the number and types of examples are varied. We discuss the limitations, threats to validity and opportunities for future studies in this area.

Kästner, Christian, Pfeffer, Jürgen.  2014.  Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation Among Configuration Options. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :23:1–23:2.

In highly configurable systems the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have local effects. Instead of analyzing a system such as Linux and SELinux for every combination of configuration settings one by one (>10^2000 even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us to designs separating interacting configuration options in a core system and isolating orthogonal and less trusted configuration options from this core.

King, Jason, Williams, Laurie.  2014.  Log Your CRUD: Design Principles for Software Logging Mechanisms. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :5:1–5:10.

According to a 2011 survey in healthcare, the most commonly reported breaches of protected health information involved employees snooping into medical records of friends and relatives. Logging mechanisms can provide a means for forensic analysis of user activity in software systems by proving that a user performed certain actions in the system. However, logging mechanisms often inconsistently capture user interactions with sensitive data, creating gaps in traces of user activity. Explicit design principles and systematic testing of logging mechanisms within the software development lifecycle may help strengthen the overall security of software. The objective of this research is to observe the current state of logging mechanisms by performing an exploratory case study in which we systematically evaluate logging mechanisms by supplementing the expected results of existing functional black-box test cases to include log output. We perform an exploratory case study of four open-source electronic health record (EHR) logging mechanisms: OpenEMR, OSCAR, Tolven eCHR, and WorldVistA. We supplement the expected results of 30 United States government-sanctioned test cases to include log output to track access of sensitive data. We then execute the test cases on each EHR system. Six of the 30 (20%) test cases failed on all four EHR systems because user interactions with sensitive data are not logged. We find that viewing protected data is often not logged by default, allowing unauthorized views of data to go undetected. Based on our results, we propose a set of principles that developers should consider when developing logging mechanisms to ensure the ability to capture adequate traces of user activity.
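
To make the idea of "supplementing expected results to include log output" concrete, here is a hedged sketch of a test that pairs a functional expectation with an audit-log expectation. The EHR client, method names, and log format are hypothetical placeholders, not the interfaces of OpenEMR, OSCAR, Tolven eCHR, or WorldVistA.

```python
# Hedged sketch: a black-box style test whose expected results include both
# the functional output and the audit-log trace of a sensitive-data access.
import logging
import unittest

audit_log = logging.getLogger("ehr.audit")

class FakeEHR:
    """Hypothetical stand-in for an EHR system under test."""
    def view_record(self, user, patient_id):
        # A logging mechanism following the proposed principles would record
        # *views* of protected data, not just creates/updates/deletes.
        audit_log.info("user=%s action=VIEW patient=%s", user, patient_id)
        return {"patient": patient_id, "allergies": ["penicillin"]}

class TestViewIsLogged(unittest.TestCase):
    def test_view_record_appears_in_audit_log(self):
        ehr = FakeEHR()
        with self.assertLogs("ehr.audit", level="INFO") as captured:
            record = ehr.view_record("nurse01", "P-42")
        # Functional expectation (the original black-box check)...
        self.assertEqual(record["patient"], "P-42")
        # ...plus the supplemented expectation: the access left a trace.
        self.assertTrue(any("action=VIEW" in line and "P-42" in line
                            for line in captured.output))

if __name__ == "__main__":
    unittest.main()
```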

Mezzour, Ghita, Carley, L. Richard, Carley, Kathleen M..  2014.  Longitudinal analysis of a large corpus of cyber threat descriptions. Journal of Computer Virology and Hacking Techniques. :1-12.

Online cyber threat descriptions are rich, but little research has attempted to systematically analyze these descriptions. In this paper, we process and analyze two of Symantec’s online threat description corpora. The Anti-Virus (AV) corpus contains descriptions of more than 12,400 threats detected by Symantec’s AV, and the Intrusion Prevention System (IPS) corpus contains descriptions of more than 2,700 attacks detected by Symantec’s IPS. In our analysis, we quantify the evolution of threat severity and type in the corpora over time. We also assess the amount of time Symantec takes to release signatures for newly discovered threats. Our analysis indicates that a very small minority of threats in the AV corpus are high-severity, whereas the majority of attacks in the IPS corpus are high-severity. Moreover, we find that the prevalence of different threat types such as worms and viruses in the corpora varies considerably over time. Finally, we find that Symantec prioritizes releasing signatures for fast propagating threats.

Balkesen, C., Teubner, J., Alonso, G., Ozsu, M.T..  2014.  Main-Memory Hash Joins on Modern Processor Architectures. Knowledge and Data Engineering, IEEE Transactions on. PP:1-1.

Existing main-memory hash join algorithms for multi-core can be classified into two camps. Hardware-oblivious hash join variants do not depend on hardware-specific parameters. Rather, they consider qualitative characteristics of modern hardware and are expected to achieve good performance on any technologically similar platform. The assumption behind these algorithms is that hardware is now good enough at hiding its own limitations, through automatic hardware prefetching, out-of-order execution, or simultaneous multi-threading (SMT), to make hardware-oblivious algorithms competitive without the overhead of carefully tuning to the underlying hardware. Hardware-conscious implementations, such as (parallel) radix join, aim to maximally exploit a given architecture by tuning the algorithm parameters (e.g., hash table sizes) to the particular features of the architecture. The assumption here is that explicit parameter tuning yields enough performance advantages to warrant the effort required. This paper compares the two approaches under a wide range of workloads (relative table sizes, tuple sizes, effects of sorted data, etc.) and configuration parameters (VM page sizes, number of threads, number of cores, SMT, SIMD, prefetching, etc.). The results show that hardware-conscious algorithms generally outperform hardware-oblivious ones. However, on specific workloads and special architectures with aggressive simultaneous multi-threading, hardware-oblivious algorithms are competitive. The main conclusion of the paper is that, in existing multi-core architectures, it is still important to carefully tailor algorithms to the underlying hardware to get the necessary performance. But processor developments may require revisiting this conclusion in the future.
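
To illustrate the structural difference between the two camps discussed above, the sketch below contrasts a hardware-oblivious, no-partitioning hash join with a radix-partitioned join, reduced to their core logic in Python. A real comparison would be written in C/C++ with partition counts tuned to cache and TLB sizes; the table contents and number of radix bits here are illustrative assumptions.

```python
# Hedged sketch: no-partitioning ("hardware-oblivious" style) vs. radix-
# partitioned ("hardware-conscious" style) hash join, structure only.
from collections import defaultdict

def hash_join(build, probe):
    """One shared hash table over the build side, no partitioning."""
    table = defaultdict(list)
    for key, payload in build:
        table[key].append(payload)
    return [(key, b, p) for key, p in probe for b in table.get(key, [])]

def radix_join(build, probe, radix_bits=4):
    """Partition both inputs by low-order key bits so each per-partition hash
    table stays small (ideally cache-resident), then join partition by partition."""
    parts = 1 << radix_bits
    b_parts = [[] for _ in range(parts)]
    p_parts = [[] for _ in range(parts)]
    for key, payload in build:
        b_parts[key & (parts - 1)].append((key, payload))
    for key, payload in probe:
        p_parts[key & (parts - 1)].append((key, payload))
    out = []
    for bp, pp in zip(b_parts, p_parts):
        out.extend(hash_join(bp, pp))
    return out

if __name__ == "__main__":
    R = [(k, f"r{k}") for k in range(8)]                      # build relation
    S = [(k % 8, f"s{k}") for k in range(16)]                 # probe relation
    assert sorted(hash_join(R, S)) == sorted(radix_join(R, S))
    print(len(hash_join(R, S)), "joined tuples")
```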

Ken Keefe, University of Illinois at Urbana-Champaign.  2014.  Making Sound Design Decisions Using Quantitative Security Metrics.

Presented at the Illinois SoS Bi-weekly Meeting, December 2014.

Welzel, Arne, Rossow, Christian, Bos, Herbert.  2014.  On Measuring the Impact of DDoS Botnets. Proceedings of the Seventh European Workshop on System Security. :3:1–3:6.

Miscreants use DDoS botnets to attack a victim via a large number of malware-infected hosts, combining the bandwidth of the individual PCs. Such botnets thus have a high potential to render targeted services unavailable. However, the actual impact of attacks by DDoS botnets has never been evaluated. In this paper, we monitor C&C servers of 14 DirtJumper and Yoddos botnets and record the DDoS targets of these networks. We then aim to evaluate the availability of the DDoS victims, using a variety of measurements such as TCP response times and analyzing the HTTP content. We show that more than 65% of the victims are severely affected by the DDoS attacks, while a few DDoS attacks likely failed.
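
As a small illustration of the kind of availability measurement mentioned above (TCP response times), the sketch below times TCP connects to a target and applies a rough availability label. The host, port, sample count, and thresholds are illustrative assumptions, not the paper's measurement methodology.

```python
# Hedged sketch: estimate a target's responsiveness by timing TCP connects.
import socket
import statistics
import time

def tcp_response_times(host, port=80, samples=5, timeout=5.0):
    """Return TCP connect times in seconds; None marks a failed attempt."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                times.append(time.monotonic() - start)
        except OSError:
            times.append(None)  # connection refused or timed out
        time.sleep(1.0)  # pace the probes
    return times

def classify(times, slow_threshold=2.0):
    """Very rough availability label from failure count and median connect time."""
    failures = times.count(None)
    ok = [t for t in times if t is not None]
    if not ok or failures > len(times) // 2:
        return "unreachable or severely affected"
    if statistics.median(ok) > slow_threshold:
        return "degraded"
    return "responsive"

if __name__ == "__main__":
    samples = tcp_response_times("example.org", 80)
    print(samples, "->", classify(samples))
```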

Jun Moon, University of Illinois at Urbana-Champaign, Tamer Başar, University of Illinois at Urbana-Champaign.  2014.  Minimax Control of MIMO Systems Over Multiple TCP-like Lossy Networks. 19th IFAC World Congress (IFAC 2014).

This paper considers a minimax control problem over multiple packet dropping channels. The channel losses are assumed to be Bernoulli processes, and operate under the transmission control protocol (TCP); hence acknowledgments of control and measurement drops are available at each time. Under this setting, we obtain an output feedback minimax controller, which is implicitly dependent on the rates of control and measurement losses. For the infinite-horizon case, we first characterize achievable H∞ disturbance attenuation levels, and then show that the underlying condition is a function of packet loss rates. We also address the converse part by showing that the condition on the minimum attainable loss rates for closed-loop system stability is a function of the H∞ disturbance attenuation parameter. Hence, these conditions are coupled with each other. Finally, we show the limiting behavior of the minimax controller with respect to the disturbance attenuation parameter.
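
For readers unfamiliar with the setting, a generic TCP-like lossy-network control model is sketched below in LaTeX, with Bernoulli variables gating the control and measurement channels and an H∞-type attenuation requirement. The notation is an assumption for illustration and is not taken from the paper's equations.

```latex
% Generic TCP-like lossy-network model (illustrative notation): \nu_k and
% \gamma_k are Bernoulli indicators of successful control and measurement
% delivery, acknowledged at every step; w_k is the adversarial disturbance.
\[
x_{k+1} = A x_k + \nu_k B u_k + D w_k, \qquad
y_k = \gamma_k C x_k + E w_k, \qquad
\nu_k \sim \mathrm{Bernoulli}(\alpha), \quad \gamma_k \sim \mathrm{Bernoulli}(\beta).
\]
% A disturbance attenuation level \theta is achieved if the closed loop satisfies
\[
\sum_{k} \lVert z_k \rVert^{2} \;\le\; \theta^{2} \sum_{k} \lVert w_k \rVert^{2}
\quad \text{for all admissible disturbances } w.
\]
```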

Sardana, Noel, Cohen, Robin.  2014.  Modeling Agent Trustworthiness with Credibility for Message Recommendation in Social Networks. Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems. :1423–1424.

This paper presents a framework for multiagent systems trust modeling that reasons about both user credibility and user similarity. Through simulation, we are able to show that our approach works well in social networking environments by presenting messages to users with high predicted benefit.

David Nicol, University of Illinois at Urbana-Champaign, Vikas Mallapura, University of Illinois at Urbana-Champaign.  2014.  Modeling and Analysis of Stepping Stone Attacks. 2014 Winter Simulation Conference.

Computer exploits often involve an attacker being able to compromise a sequence of hosts, creating a chain of "stepping stones" from his source to ultimate target. Stepping stones are usually necessary to access well-protected resources, and also serve to mask the attacker’s location. This paper describes means of constructing models of networks and the access control mechanisms they employ to approach the problem of finding which stepping stone paths are easiest for an attacker to find. While the simplest formulation of the problem can be addressed with deterministic shortest-path algorithms, we argue that consideration of what and how an attacker may (or may not) launch from a compromised host pushes one towards solutions based on Monte Carlo sampling. We describe the sampling algorithm and some preliminary results obtained using it.
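
The contrast the abstract draws between deterministic shortest-path formulations and Monte Carlo sampling can be illustrated with the small sketch below, in which each pivot from a compromised host succeeds only with some probability. The topology and probabilities are illustrative assumptions, not the paper's network or attacker model.

```python
# Hedged sketch: fewest-hop stepping-stone chain vs. Monte Carlo estimate of
# how often some chain actually succeeds when each pivot is probabilistic.
import heapq
import random

# edges[u] = list of (v, p): from compromised host u the attacker can attempt
# to compromise v, succeeding with probability p.
edges = {
    "internet": [("dmz-web", 0.9), ("vpn", 0.3)],
    "dmz-web":  [("app", 0.6)],
    "vpn":      [("app", 0.8)],
    "app":      [("db", 0.5)],
    "db":       [],
}

def fewest_hops(src, dst):
    """Deterministic view: shortest stepping-stone chain, ignoring success odds."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        for v, _ in edges[u]:
            if v not in dist or d + 1 < dist[v]:
                dist[v] = d + 1
                heapq.heappush(heap, (d + 1, v))
    return None

def monte_carlo_reach(src, dst, trials=10_000, seed=0):
    """Sampled view: how often does some stepping-stone chain actually succeed?"""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        compromised, frontier = {src}, [src]
        while frontier:
            u = frontier.pop()
            for v, p in edges[u]:
                if v not in compromised and rng.random() < p:
                    compromised.add(v)
                    frontier.append(v)
        hits += dst in compromised
    return hits / trials

if __name__ == "__main__":
    print("fewest hops internet->db:", fewest_hops("internet", "db"))
    print("estimated probability of reaching db:", monte_carlo_reach("internet", "db"))
```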