Biblio

Found 1564 results

Filters: First Letter Of Last Name is D
2014-09-17
Da, Gaofeng, Xu, Maochao, Xu, Shouhuai.  2014.  A New Approach to Modeling and Analyzing Security of Networked Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :6:1–6:12.

Modeling and analyzing security of networked systems is an important problem in the emerging Science of Security and has been under active investigation. In this paper, we propose a new approach towards tackling the problem. Our approach is inspired by the shock model and random environment techniques in the Theory of Reliability, while accommodating security ingredients. To the best of our knowledge, our model is the first that can accommodate a certain degree of adaptiveness of attacks, which substantially weakens the often-made independence and exponential attack inter-arrival time assumptions. The approach leads to a stochastic process model with two security metrics, and we attain some analytic results in terms of the security metrics.
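
To make the flavor of this modeling concrete, here is a minimal Monte Carlo sketch (not the paper's stochastic process or metrics) in which each failed attempt raises the subsequent attack rate, breaking the i.i.d. exponential inter-arrival assumption the abstract refers to; every parameter and name is illustrative.

```python
import random

def compromise_probability(horizon=1000.0, base_rate=0.1, adapt=1.5,
                           p_success=0.2, trials=2000, seed=1):
    """Estimate P(system compromised by `horizon`) when attack
    inter-arrival times are exponential but the rate grows after each
    failed attempt, a crude stand-in for adaptive attacks."""
    rng = random.Random(seed)
    compromised = 0
    for _ in range(trials):
        t, rate = 0.0, base_rate
        while True:
            t += rng.expovariate(rate)   # next attack arrival
            if t > horizon:
                break
            if rng.random() < p_success:
                compromised += 1
                break
            rate *= adapt                # attacker adapts after a failure
    return compromised / trials

print(compromise_probability())
```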

Layman, Lucas, Diffo, Sylvain David, Zazworka, Nico.  2014.  Human Factors in Webserver Log File Analysis: A Controlled Experiment on Investigating Malicious Activity. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :9:1–9:11.

While automated methods are the first line of defense for detecting attacks on webservers, a human agent is required to understand the attacker's intent and the attack process. The goal of this research is to understand the value of various log fields and the cognitive processes by which log information is grouped, searched, and correlated. Such knowledge will enable the development of human-focused log file investigation technologies. We performed controlled experiments with 65 subjects (IT professionals and novices) who investigated excerpts from six webserver log files. Quantitative and qualitative data were gathered to: 1) analyze subject accuracy in identifying malicious activity; 2) identify the most useful pieces of log file information; and 3) understand the techniques and strategies used by subjects to process the information. Statistically significant effects were observed in the accuracy of identifying attacks and time taken depending on the type of attack. Systematic differences were also observed in the log fields used by high-performing and low-performing groups. The findings include: 1) new insights into how specific log data fields are used to effectively assess potentially malicious activity; 2) obfuscating factors in log data from a human cognitive perspective; and 3) practical implications for tools to support log file investigations.

Das, Anupam, Borisov, Nikita, Caesar, Matthew.  2014.  Analyzing an Adaptive Reputation Metric for Anonymity Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :11:1–11:11.

Low-latency anonymity systems such as Tor rely on intermediate relays to forward user traffic; these relays, however, are often unreliable, resulting in a degraded user experience. Worse yet, malicious relays may introduce deliberate failures in a strategic manner in order to increase their chance of compromising anonymity. In this paper we propose using a reputation metric that can profile the reliability of relays in an anonymity system based on users' past experience. The two main challenges in building a reputation-based system for an anonymity system are: first, malicious participants can strategically oscillate between good and malicious nature to evade detection, and second, an observed failure in an anonymous communication cannot be uniquely attributed to a single relay. Our proposed framework addresses the former challenge by using a proportional-integral-derivative (PID) controller-based reputation metric that ensures malicious relays adopting time-varying strategic behavior obtain low reputation scores over time, and the latter by introducing a filtering scheme based on the evaluated reputation score to effectively discard relays mounting attacks. We collect data from the live Tor network and perform simulations to validate the proposed reputation-based filtering scheme. We show that an attacker does not gain any significant benefit by performing deliberate failures in the presence of the proposed reputation framework.
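
A minimal sketch of the PID idea (the gains, error signal, and scaling here are invented, not the paper's metric): treating a relay's observed failure rate as the error input, the integral term remembers past misbehavior, so a relay that oscillates between good and bad rounds cannot rebuild its score.

```python
class PIDReputation:
    """Toy PID-based reputation tracker; illustrative only."""
    def __init__(self, kp=1.0, ki=1.0, kd=0.3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, failure_rate):
        err = failure_rate
        self.integral += err            # I-term accumulates past failures
        deriv = err - self.prev_err     # D-term reacts to sudden swings
        self.prev_err = err
        penalty = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(1.0, 1.0 - penalty))

rep = PIDReputation()
for rate in [0.0, 0.6, 0.0, 0.6, 0.0, 0.6]:   # strategically oscillating relay
    print(round(rep.update(rate), 2))          # trust never fully recovers
```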

Biswas, Trisha, Lesser, Kendra, Dutta, Rudra, Oishi, Meeko.  2014.  Examining Reliability of Wireless Multihop Network Routing with Linear Systems. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :19:1–19:2.

In this study, we present a control theoretic technique to model routing in wireless multihop networks. We model ad hoc wireless networks as stochastic dynamical systems where, as a base case, a centralized controller pre-computes optimal paths to the destination. The usefulness of this approach lies in the fact that it can help obtain bounds on reliability of end-to-end packet transmissions. We compare this approach with the reliability achieved by some of the widely used routing techniques in multihop networks.
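
The linear-systems view can be illustrated with an absorbing Markov chain: for a fixed path, the end-to-end delivery probability falls out of the chain's fundamental matrix. The hop count and per-hop success probabilities below are invented.

```python
import numpy as np

# Toy 3-hop path: transient states 0..2 (packet at node i); absorbing
# outcomes are "delivered" and "lost".
p = np.array([0.95, 0.90, 0.85])   # per-hop success probabilities

Q = np.zeros((3, 3))               # transient -> transient
Q[0, 1], Q[1, 2] = p[0], p[1]
R = np.zeros((3, 2))               # transient -> [delivered, lost]
R[2, 0] = p[2]
R[:, 1] = 1 - p

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix (I - Q)^-1
absorb = N @ R                     # absorption probabilities by start state
print("end-to-end delivery probability:", absorb[0, 0])
assert np.isclose(absorb[0, 0], p.prod())   # sanity: product of hop probs
```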

He, Xiaofan, Dai, Huaiyu, Shen, Wenbo, Ning, Peng.  2014.  Channel Correlation Modeling for Link Signature Security Assessment. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :25:1–25:2.

It is widely accepted that wireless channels decorrelate fast over space, and half a wavelength is the key distance metric used in link signature (LS) for security assurance. However, we believe that this channel correlation model is questionable and will lead to a false sense of security. In this project, we focus on establishing correct modeling of channel correlation so as to facilitate proper guard zone designs for LS security in various wireless environments of interest.
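
For context, the rule of thumb being questioned comes from the classical Jakes model, under which spatial correlation decays as a Bessel function of antenna separation; a quick check (the carrier frequency is chosen arbitrarily) shows the correlation at half a wavelength is small but not zero, and in sparse-scattering environments the true correlation can be far higher.

```python
import numpy as np
from scipy.special import j0

# Jakes model with rich uniform scattering: rho(d) = J0(2*pi*d / lambda).
wavelength = 0.125   # ~2.4 GHz carrier, in meters (illustrative)
for frac in (0.1, 0.25, 0.5, 1.0):
    d = frac * wavelength
    rho = j0(2 * np.pi * d / wavelength)
    print(f"d = {frac:4.2f} lambda  ->  rho = {rho:+.3f}")
```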

Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data[3]. This work presents a secure-by-construction approach [7] for the upper-end data streams with rates from 10- to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing. This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions can't attain the target rates[1]. GPUs and GPGPUs are also infeasible: they were not designed for I/O at 10-100Gbps; they also have asymmetric resources for input and output and thus cannot be pipelined[4, 2]; custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program[6]; (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly ones that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) There are potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery on the data source or one hop away; (4) There is a need for agile deployment in response to changing objectives; (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified processing for low-latency, low-power 10-100Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100Gbps, capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components, or residues, to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series[5]. Examples of tasks accomplished at the lowest layer are: (a) check to see if Field 3 of the packet is greater than 5, or (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest latency, highest throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 consists of a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner comprises state logic and soft-core microprocessors. Layer 3 runs in software on a host machine, and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of light-weight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit the right, and relevant facts are passed upward to Layer 2. The output at the end of the chain in Layer 1 shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent pipe operations), or (c) passed to another identical processor, for extended processing on a given stream (scalability).
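
As a software analogy (the names are invented; the real design lives in FPGA fabric), the Layer-1/Layer-2 contract can be pictured as generators snapped into a chain, each passing the stream to the right and pushing extracted facts upward:

```python
from typing import Callable, Iterable, List, Optional, Tuple

Packet = dict
Fact = tuple

class Filter:
    """One Layer-1 stage: transform/drop a packet, emit facts upward."""
    def __init__(self, fn: Callable[[Packet], Tuple[Optional[Packet], list]]):
        self.fn = fn

    def run(self, stream: Iterable[Packet], facts_up: List[Fact]):
        for pkt in stream:
            out, facts = self.fn(pkt)
            facts_up.extend(facts)      # Layer 2 combines these
            if out is not None:
                yield out               # stream exits to the right

def chain(filters, stream, facts_up):
    for f in filters:                   # snap filters together, Lego-style
        stream = f.run(stream, facts_up)
    return stream

# Filters mirroring the abstract's examples (a) and (b):
field3_gt_5 = Filter(lambda p: (p, [("field3>5", p["id"])] if p.get("f3", 0) > 5 else []))
count_x75 = Filter(lambda p: (p, [("x75", 1)] if p.get("proto") == "X.75" else []))

facts: List[Fact] = []
pkts = [{"id": 1, "f3": 7, "proto": "X.75"}, {"id": 2, "f3": 3, "proto": "IP"}]
list(chain([field3_gt_5, count_x75], iter(pkts), facts))
print(facts)   # [('field3>5', 1), ('x75', 1)]
```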

Davis, Agnes, Shashidharan, Ashwin, Liu, Qian, Enck, William, McLaughlin, Anne, Watson, Benjamin.  2014.  Insecure Behaviors on Mobile Devices Under Stress. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :31:1–31:2.

One of the biggest challenges in mobile security is human behavior. The most secure password may be useless if it is sent as a text or in an email. The most secure network is only as secure as its most careless user. Thus, in the current project we sought to discover the conditions under which users of mobile devices were most likely to make security errors. This scaffolds a larger project where we will develop automatic ways of detecting such environments and eventually supporting users during these times to encourage safe mobile behaviors.

Dora, Robert A., Schalk, Patrick D., McCarthy, John E., Young, Scott A..  2013.  Remote suspect identification and the impact of demographic features on keystroke dynamics. Proc. SPIE. 8757:87570B-87570B-14.
This paper describes the research, development, and analysis performed during the Remote Suspect Identification (RSID) effort. The effort produced a keystroke dynamics sensor capable of authenticating, continuously verifying, and identifying masquerading users with equal error rates (EER) of approximately 0.054, 0.050, and 0.069, respectively. This sensor employs 11 distinct algorithms, each using between one and five keystroke features, that are fused (across features and algorithms) using a weighted majority ballot algorithm to produce rapid and accurate measurements. The RSID sensor operates discretely, quickly (using few keystrokes), and requires no additional hardware. The researchers also analyzed the difference in sensor performance across 10 demographic features using a keystroke dynamics dataset consisting of data from over 2,200 subjects. This analysis indicated that there are significant and discernible differences across age groups, ethnicities, language, handedness, height, occupation, sex, typing frequency, and typing style.
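
The fusion step can be sketched generically (this is the textbook weighted-majority scheme, not the sensor's actual 11 algorithms or weight-update rule): each algorithm casts a weighted vote, and algorithms that vote against ground truth during training have their weights shrunk.

```python
def weighted_majority(votes, weights):
    """votes[i] is True if algorithm i flags the typist as an impostor."""
    flagged = sum(w for v, w in zip(votes, weights) if v)
    return flagged >= sum(weights) / 2.0

def penalize(weights, votes, truth, beta=0.9):
    """Shrink the weight of every algorithm that voted incorrectly."""
    return [w * (beta if v != truth else 1.0) for v, w in zip(votes, weights)]

weights = [1.0] * 5
votes = [True, True, False, True, False]
print(weighted_majority(votes, weights))        # True: 3 of 5 weighted votes
weights = penalize(weights, votes, truth=False)
print(weights)                                  # wrong voters lose weight
```
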
2014-09-26
Rossow, C., Dietrich, C.J., Grier, C., Kreibich, C., Paxson, V., Pohlmann, N., Bos, H., van Steen, M..  2012.  Prudent Practices for Designing Malware Experiments: Status Quo and Outlook. Security and Privacy (SP), 2012 IEEE Symposium on. :65-79.

Malware researchers rely on the observation of malicious code in execution to collect datasets for a wide array of experiments, including generation of detection models, study of longitudinal behavior, and validation of prior research. For such research to reflect prudent science, the work needs to address a number of concerns relating to the correct and representative use of the datasets, presentation of methodology in a fashion sufficiently transparent to enable reproducibility, and due consideration of the need not to harm others. In this paper we study the methodological rigor and prudence in 36 academic publications from 2006-2011 that rely on malware execution. 40% of these papers appeared in the 6 highest-ranked academic security conferences. We find frequent shortcomings, including problematic assumptions regarding the use of execution-driven datasets (25% of the papers), absence of description of security precautions taken during experiments (71% of the articles), and oftentimes insufficient description of the experimental setup. Deficiencies occur in top-tier venues and elsewhere alike, highlighting a need for the community to improve its handling of malware datasets. In the hope of aiding authors, reviewers, and readers, we frame guidelines regarding transparency, realism, correctness, and safety for collecting and using malware datasets.

Dyer, K.P., Coull, S.E., Ristenpart, T., Shrimpton, T..  2012.  Peek-a-Boo, I Still See You: Why Efficient Traffic Analysis Countermeasures Fail. Security and Privacy (SP), 2012 IEEE Symposium on. :332-346.

We consider the setting of HTTP traffic over encrypted tunnels, as used to conceal the identity of websites visited by a user. It is well known that traffic analysis (TA) attacks can accurately identify the website a user visits despite the use of encryption, and previous work has looked at specific attack/countermeasure pairings. We provide the first comprehensive analysis of general-purpose TA countermeasures. We show that nine known countermeasures are vulnerable to simple attacks that exploit coarse features of traffic (e.g., total time and bandwidth). The considered countermeasures include ones like those standardized by TLS, SSH, and IPsec, and even more complex ones like the traffic morphing scheme of Wright et al. As just one of our results, we show that despite the use of traffic morphing, one can use only total upstream and downstream bandwidth to identify, with 98% accuracy, which of two websites was visited. One implication of what we find is that, in the context of website identification, it is unlikely that bandwidth-efficient, general-purpose TA countermeasures can ever provide the type of security targeted in prior work.
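
The paper's headline result needs nothing fancier than this kind of classifier; a toy nearest-centroid sketch over invented (upstream, downstream) byte totals shows why padding schemes that leave gross bandwidth intact cannot hide which of two sites was visited.

```python
import math

training = {   # per-visit (upstream_bytes, downstream_bytes); invented
    "site_a": [(52_000, 910_000), (55_000, 980_000), (50_000, 890_000)],
    "site_b": [(21_000, 310_000), (19_000, 295_000), (23_000, 330_000)],
}
centroids = {
    site: tuple(sum(xs) / len(xs) for xs in zip(*samples))
    for site, samples in training.items()
}

def classify(up, down):
    """Nearest centroid using only total bandwidth in each direction."""
    return min(centroids, key=lambda s: math.dist((up, down), centroids[s]))

print(classify(53_000, 940_000))   # -> site_a
```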

2014-11-26
Denning, Dorothy E..  1976.  A Lattice Model of Secure Information Flow. Commun. ACM. 19:236–243.

This paper investigates mechanisms that guarantee secure information flow in a computer system. These mechanisms are examined within a mathematical framework suitable for formulating the requirements of secure information flow among security classes. The central component of the model is a lattice structure derived from the security classes and justified by the semantics of information flow. The lattice properties permit concise formulations of the security requirements of different existing systems and facilitate the construction of mechanisms that enforce security. The model provides a unifying view of all systems that restrict information flow, enables a classification of them according to security objectives, and suggests some new approaches. It also leads to the construction of automatic program certification mechanisms for verifying the secure flow of information through a program.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.

The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
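
To make the model concrete, one canonical instance of Denning's lattice takes security classes to be sets of categories: "may flow to" is subset inclusion, the class-combining (join) operator is union, and the meet is intersection. A minimal sketch:

```python
from itertools import combinations

categories = {"crypto", "nuclear"}
classes = [frozenset(s) for r in range(len(categories) + 1)
           for s in combinations(sorted(categories), r)]

def can_flow(a, b):   # information may flow from class a to class b
    return a <= b

def join(a, b):       # class of data derived from objects in a and b
    return a | b

low, high = frozenset(), frozenset(categories)
print(can_flow(low, high))    # True: upward flow is permitted
print(can_flow(high, low))    # False: downward flow is forbidden
print(join(frozenset({"crypto"}), frozenset({"nuclear"})) == high)   # True
```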

2015-01-09
Liang Zhang, Dave Choffnes, Tudor Dumitras, Dave Levin, Alan Mislove, Aaron Schulman, Christo Wilson.  2014.  Analysis of SSL Certificate Reissues and Revocations in the Wake of Heartbleed.

Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users' security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, having a proper understanding of how often systems administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked.

In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers' private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.

Using a set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication.
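
Methodologically, detecting a reissue reduces to comparing the certificates a domain serves across scans; a toy sketch with invented records follows (a real pipeline would parse the scanned certificates themselves and also consult CRLs to observe revocations).

```python
from datetime import date

HEARTBLEED_DISCLOSED = date(2014, 4, 7)

# domain -> (public-key fingerprint, certificate not_before); invented data
before = {"example.com": ("aa11", date(2013, 6, 1)),
          "example.org": ("bb22", date(2014, 1, 5))}
after = {"example.com": ("aa11", date(2013, 6, 1)),    # same key: no reissue
         "example.org": ("cc33", date(2014, 4, 12))}   # new key post-disclosure

for domain, (fp_new, not_before) in after.items():
    fp_old, _ = before[domain]
    if fp_new != fp_old and not_before >= HEARTBLEED_DISCLOSED:
        print(domain, "reissued with a new key after Heartbleed")
    elif fp_new == fp_old:
        print(domain, "still serving its pre-Heartbleed key")
```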

2015-01-13
Dong Jin, Yi Ning.  2014.  Securing Industrial Control Systems with a Simulation-based Verification System. ACM SIGSIM Conference on Principles of Advanced Discrete Simulation.

Today’s quality of life is highly dependent on the successful operation of many large-scale industrial control systems (ICS). To enhance their protection against cyber-attacks and operational errors, we develop a simulation-based verification framework with cross-layer verification techniques that allow comprehensive analysis of the entire ICS-specific stack, including application, protocol, and network layers.

Work in progress paper.

2015-04-07
Titus Barik, Arpan Chakraborty, Brent Harrison, David L. Roberts, Robert St. Amant.  2013.  Modeling the Concentration Game with ACT-R. The 12th International Conference on Cognitive Modeling.

This paper describes the development of subsymbolic ACT-R models for the Concentration game. Performance data is taken from an experiment in which participants played the game under two conditions: minimizing the number of mismatches/turns during a game, and minimizing the time to complete a game. Conflict resolution and parameter tuning are used to implement an accuracy model and a speed model that capture the differences for the two conditions. Visual attention drives exploration of the game board in the models. Modeling results are generally consistent with human performance, though some systematic differences can be seen. Modeling decisions, model limitations, and open issues are discussed.

Ignacio X. Dominguez, Alok Goel, David L. Roberts, Robert St. Amant.  2015.  Detecting Abnormal User Behavior Through Pattern-mining Input Device Analytics. Proceedings of the 2015 Symposium and Bootcamp on the Science of Security (HotSoS-15).
Robert St. Amant, Prairie Rose Goodwin, Ignacio Dominguez, David L. Roberts.  2015.  Toward Expert Typing in ACT-R. Proceedings of the 2015 International Conference on Cognitive Modeling (ICCM 15).
2015-04-30
Djouadi, S.M., Melin, A.M., Ferragut, E.M., Laska, J.A., Jin Dong.  2014.  Finite energy and bounded attacks on control system sensor signals. American Control Conference (ACC), 2014. :1716-1722.

Control system networks are increasingly being connected to enterprise level networks. These connections leave critical industrial control systems vulnerable to cyber-attacks. Most of the effort in protecting these cyber-physical systems (CPS) from attacks has been in securing the networks using information security techniques. Effort has also been applied to increasing the protection and reliability of the control system against random hardware and software failures. However, the inability of information security techniques to protect against all intrusions means that the control system must be resilient to various signal attacks for which new analysis methods need to be developed. In this paper, sensor signal attacks are analyzed for observer-based controlled systems. The threat surface for sensor signal attacks is subdivided into denial of service, finite energy, and bounded attacks. In particular, the error signals between states of attack free systems and systems subject to these attacks are quantified. Optimal sensor and actuator signal attacks that maximize the corresponding cost functions are computed for finite and infinite horizon linear quadratic (LQ) control. The closed-loop systems under optimal signal attacks are provided. Finally, an illustrative numerical example using a power generation network is provided together with distributed LQ controllers.
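
A stripped-down discrete-time illustration of the setting (the paper works with continuous-time LQ-optimal attacks; the matrices, observer gain, and attack signal below are invented) compares an observer's estimation error with and without an additive bounded sensor attack:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.98]])   # toy plant
C = np.array([[1.0, 0.0]])                # single sensor
L = np.array([[0.5], [0.8]])              # observer gain (A - LC is stable)

x = np.array([[1.0], [0.0]])
xh_clean = np.zeros((2, 1))
xh_attacked = np.zeros((2, 1))

for k in range(100):
    y = C @ x
    a = np.array([[0.2 * np.sin(0.3 * k)]])          # bounded sensor attack
    xh_clean = A @ xh_clean + L @ (y - C @ xh_clean)
    xh_attacked = A @ xh_attacked + L @ (y + a - C @ xh_attacked)
    x = A @ x

print("error without attack:", float(np.linalg.norm(x - xh_clean)))
print("error under attack:  ", float(np.linalg.norm(x - xh_attacked)))
```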

Li Yumei, Voos, H., Darouach, M..  2014.  Robust H∞ cyber-attacks estimation for control systems. Control Conference (CCC), 2014 33rd Chinese. :3124-3129.

This paper deals with the robust H∞ cyber-attack estimation problem for control systems under stochastic cyber-attacks and disturbances. The focus is on designing an H∞ filter which maximizes the attack sensitivity and minimizes the effect of disturbances. The design requires not only disturbance attenuation but also that the residual remain as sensitive to attacks as possible while the effect of disturbances is minimized. A stochastic model of a control system subject to stochastic cyber-attacks satisfying a Markovian stochastic process is constructed, and the stochastic attack models that a control system may be exposed to are also presented. Furthermore, applying the H∞ filtering technique based on linear matrix inequalities (LMIs), the paper obtains sufficient conditions that ensure the filtering error dynamic is asymptotically stable and satisfies a prescribed ratio between cyber-attack sensitivity and disturbance sensitivity. Finally, the results are applied to the control of a Quadruple-tank process (QTP) under a stochastic cyber-attack and a stochastic disturbance. The simulation results underline that the designed filter is effective and feasible in practical application.
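
For orientation, the kind of LMI condition such designs build on can be stated generically; the following is the textbook continuous-time bounded real lemma for a filtering error system, not the paper's Markovian-jump formulation.

```latex
% For the filtering error system  \dot e = A_e e + B_e d,  r = C_e e + D_e d,
% the disturbance-to-residual gain satisfies ||T_{rd}||_\infty < \gamma
% if and only if there exists P = P^{\top} \succ 0 with:
\begin{equation*}
\begin{bmatrix}
A_e^{\top} P + P A_e & P B_e & C_e^{\top} \\
B_e^{\top} P & -\gamma^{2} I & D_e^{\top} \\
C_e & D_e & -I
\end{bmatrix} \prec 0 .
\end{equation*}
% The paper's design additionally requires the residual to stay sensitive
% to the attack input, i.e., a lower bound on the attack-to-residual gain.
```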

Grilo, A.M., Chen, J., Diaz, M., Garrido, D., Casaca, A..  2014.  An Integrated WSAN and SCADA System for Monitoring a Critical Infrastructure. Industrial Informatics, IEEE Transactions on. 10:1755-1764.

Wireless sensor and actuator networks (WSAN) constitute an emerging technology with multiple applications in many different fields. Due to the features of WSAN (dynamism, redundancy, fault tolerance, and self-organization), this technology can be used as a supporting technology for the monitoring of critical infrastructures (CIs). For decades, the monitoring of CIs has centered on supervisory control and data acquisition (SCADA) systems, where operators can monitor and control the behavior of the system. The reach of the SCADA system has been hampered by the lack of deployment flexibility of the sensors that feed it with monitoring data. The integration of a multihop WSAN with SCADA for CI monitoring constitutes a novel approach to extend the SCADA reach in a cost-effective way, eliminating this handicap. However, the integration of WSAN and SCADA presents some challenges which have to be addressed in order to comprehensively take advantage of the WSAN features. This paper presents a solution for this joint integration. The solution uses a gateway and a Web services approach together with a Web-based SCADA, which provides an integrated platform accessible from the Internet. A real scenario where this solution has been successfully applied to monitor an electrical power grid is presented.

Fawzi, H., Tabuada, P., Diggavi, S..  2014.  Secure Estimation and Control for Cyber-Physical Systems Under Adversarial Attacks. Automatic Control, IEEE Transactions on. 59:1454-1467.

The vast majority of today's critical infrastructure is supported by numerous feedback control loops and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A,C) of the system and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm inspired from techniques in compressed sensing to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm and we show in numerical simulations that the method is promising and makes it possible to reconstruct the state accurately despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.
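
The correctability idea can be made tangible by brute force: with p sensors of which at most q are attacked, trying every subset of p - q sensors and keeping the one with (near-)zero least-squares residual recovers the state; the paper's contribution is an efficient decoder that avoids this combinatorial search. All numbers below are invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
p, q = 5, 1
C = rng.standard_normal((p, 3))   # static snapshot: y = C x + attack
x_true = np.array([1.0, -2.0, 0.5])
y = C @ x_true
y[2] += 10.0                      # attacker corrupts sensor 2

best = None
for keep in combinations(range(p), p - q):
    Ck, yk = C[list(keep)], y[list(keep)]
    xh, *_ = np.linalg.lstsq(Ck, yk, rcond=None)
    r = np.linalg.norm(Ck @ xh - yk)      # residual of this sensor subset
    if best is None or r < best[0]:
        best = (r, xh)

print("estimate:", np.round(best[1], 6))  # recovers x_true up to float error
```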

Baofeng Wu, Qingfang Jin, Zhuojun Liu, Dongdai Lin.  2014.  Constructing Boolean functions with potentially optimal algebraic immunity based on additive decompositions of finite fields (extended abstract). Information Theory (ISIT), 2014 IEEE International Symposium on. :1361-1365.

We propose a general approach to construct cryptographically significant Boolean functions of (r + 1)m variables based on the additive decomposition F_{2^{rm}} × F_{2^m} of the finite field F_{2^{(r+1)m}}, where r ≥ 1 is odd and m ≥ 3. A class of unbalanced functions is constructed first via this approach, which coincides with a variant of the unbalanced class of generalized Tu-Deng functions in the case r = 1. Functions belonging to this class have high algebraic degree, but their algebraic immunity does not exceed m, which cannot be optimal when r > 1. By modifying these unbalanced functions, we obtain a class of balanced functions which have optimal algebraic degree and high nonlinearity (shown by a lower bound we prove). These functions have optimal algebraic immunity provided a combinatorial conjecture on binary strings which generalizes the Tu-Deng conjecture is true. Computer investigations show that, at least for small numbers of variables, functions from this class also behave well against fast algebraic attacks.
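
The properties claimed here (balancedness, nonlinearity) are conventionally checked through the Walsh-Hadamard spectrum; below is a small self-contained checker (it verifies properties of a given truth table and does not reproduce the paper's construction).

```python
def walsh_spectrum(truth_table):
    """Fast Walsh-Hadamard transform of (-1)^f over the truth table."""
    w = [(-1) ** bit for bit in truth_table]
    step = 1
    while step < len(w):
        for i in range(0, len(w), 2 * step):
            for j in range(i, i + step):
                w[j], w[j + step] = w[j] + w[j + step], w[j] - w[j + step]
        step *= 2
    return w

# Example: the 3-variable majority function.
n = 3
tt = [(x & y) | (y & z) | (x & z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
W = walsh_spectrum(tt)
print("balanced:", W[0] == 0)                            # W_f(0) = 0
print("nonlinearity:", 2 ** (n - 1) - max(abs(v) for v in W) // 2)
```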

Dai, Y. S., Xiang, Y. P., Pan, Y..  2014.  Bionic Autonomic Nervous Systems for Self-Defense Against DoS, Spyware, Malware, Virus, and Fishing. ACM Trans. Auton. Adapt. Syst. 9:4:1–4:20.

Computing systems and networks become increasingly large and complex with a variety of compromises and vulnerabilities. Network security and privacy are of great concern today, and self-defense against different kinds of attacks in an autonomous and holistic manner is a challenging topic. To address this problem, we developed an innovative technology called the Bionic Autonomic Nervous System (BANS). The BANS is analogous to a biological nervous system, consisting of basic modules like the cyber axon, cyber neuron, peripheral nerve, and central nerve. We also present an innovative self-defense mechanism which utilizes Fuzzy Logic, Neural Networks, and Entropy Awareness, among other techniques. Equipped with the BANS, computer and network systems can intelligently self-defend against both known and unknown compromises and attacks, including denial of service (DoS), spyware, malware, and viruses. BANS also enables multiple computers to collaboratively fight against distributed intelligent attacks like DDoS. We have implemented the BANS in practice, and case studies and experimental results exhibit the effectiveness and efficiency of the BANS and the self-defense mechanism.

Fachkha, C., Bou-Harb, E., Debbabi, M..  2014.  Fingerprinting Internet DNS Amplification DDoS Activities. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneering work on inferring Distributed Denial of Service (DDoS) activity using darknets, this work shows that we can extract DDoS activities without relying on backscatter analysis. The aim of this work is to extract cyber security intelligence related to DNS amplification DDoS activities, such as detection period, attack duration, intensity, packet size, rate, and geolocation, in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space during a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities, including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history led to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating, and even attributing DNS amplification DDoS activities.
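
One plausible reduction of the inference idea (the thresholds, field names, and records below are invented, and the paper's actual detection parameters differ): darknet addresses never send DNS queries, so sustained, large, high-rate UDP traffic from port 53 arriving there can only be attack-related, and the abstract's parameters (packet size, rate, duration) become simple filter thresholds.

```python
DARKNET_PREFIX = "203.0.113."   # pretend this block is the darknet telescope

flows = [   # invented flow records
    {"proto": "udp", "src_port": 53, "dst": "203.0.113.7", "bytes": 3000, "pps": 900},
    {"proto": "udp", "src_port": 53, "dst": "198.51.100.2", "bytes": 3000, "pps": 900},
    {"proto": "tcp", "src_port": 80, "dst": "203.0.113.9", "bytes": 1500, "pps": 10},
]

def dns_amplification_suspect(f, min_bytes=1000, min_pps=100):
    """Unsolicited large, fast DNS-sourced UDP traffic into darknet space."""
    return (f["proto"] == "udp" and f["src_port"] == 53
            and f["dst"].startswith(DARKNET_PREFIX)
            and f["bytes"] >= min_bytes and f["pps"] >= min_pps)

for f in flows:
    if dns_amplification_suspect(f):
        print("suspect DNS amplification traffic toward", f["dst"])
```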

Hammi, B., Khatoun, R., Doyen, G..  2014.  A Factorial Space for a System-Based Detection of Botcloud Activity. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

Today, beyond legitimate usage, the numerous advantages of cloud computing are exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. Such a phenomenon is a major issue since it strongly increases the power of distributed massive attacks while implicating the responsibility of cloud service providers that do not have appropriate solutions. In this paper, we present an original approach that enables source-based detection of UDP-flood DDoS attacks based on a distributed system behavior analysis. Based on a principal component analysis, our contribution consists in: (1) defining the involvement of system metrics in a botcloud's behavior, (2) showing the invariability of the factorial space that defines a botcloud activity, and (3) among several legitimate activities, using this factorial space to enable botcloud detection.
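
A minimal rendering of the PCA step (the metrics, dimensions, and anomaly threshold are invented): learn the principal subspace of system metrics under legitimate load, then flag hosts whose metric vectors leave that subspace, as a flooding bot's do.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(0, 1, size=(2, 6))   # hidden correlations among 6 metrics
legit = rng.normal(0, 1, size=(200, 2)) @ W + 0.1 * rng.normal(0, 1, size=(200, 6))

mean = legit.mean(axis=0)
_, _, Vt = np.linalg.svd(legit - mean, full_matrices=False)
basis = Vt[:2]                      # factorial space of legitimate activity

def residual(x):
    """Distance from x to the legitimate principal subspace."""
    coords = (x - mean) @ basis.T
    return float(np.linalg.norm((x - mean) - coords @ basis))

normal_host = rng.normal(0, 1, size=2) @ W + 0.1 * rng.normal(0, 1, size=6)
flooding_host = normal_host + np.array([0, 0, 6.0, 6.0, 0, 0])   # UDP-flood skew
print("normal residual:  ", round(residual(normal_host), 2))     # small
print("flooding residual:", round(residual(flooding_host), 2))   # large
```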