Biblio
2005
Katz, Jonathan, Shin, Ji Sun.  2005.  Modeling Insider Attacks on Group Key-exchange Protocols. Proceedings of the 12th ACM Conference on Computer and Communications Security. :180–189.

Protocols for authenticated key exchange (AKE) allow parties within an insecure network to establish a common session key which can then be used to secure their future communication. It is fair to say that group AKE is currently less well understood than the case of two-party AKE; in particular, attacks by malicious insiders, a concern specific to the group setting, have so far been considered only in a relatively "ad-hoc" fashion. The main contribution of this work is to address this deficiency by providing a formal, comprehensive model and definition of security for group AKE which automatically encompasses insider attacks. We do so by defining an appropriate ideal functionality for group AKE within the universal composability (UC) framework. As a side benefit, any protocol secure with respect to our definition is secure even when run concurrently with other protocols, and the key generated by any such protocol may be used securely in any subsequent application. In addition to proposing this definition, we show that the resulting notion of security is strictly stronger than the one proposed by Bresson et al. (termed "AKE-security"), and that our definition implies all previously-suggested notions of security against insider attacks. We also show a simple technique for converting any AKE-secure protocol into one secure with respect to our definition.

2008
Phillips, B. J., Schmidt, C. D., Kelly, D. R..  2008.  Recovering Data from USB Flash Memory Sticks That Have Been Damaged or Electronically Erased. Proceedings of the 1st International Conference on Forensic Applications and Techniques in Telecommunications, Information, and Multimedia and Workshop. :19:1–19:6.

In this paper we consider recovering data from USB Flash memory sticks after they have been damaged or electronically erased. We describe the physical structure and theory of operation of Flash memories; review the literature of Flash memory data recovery; and report results of new experiments in which we damage USB Flash memory sticks and attempt to recover their contents. The experiments include smashing and shooting memory sticks, incinerating them in petrol and cooking them in a microwave oven.

2009
Vyetrenko, S., Khosla, A., Ho, T..  2009.  On combining information-theoretic and cryptographic approaches to network coding security against the pollution attack. 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers. :788–792.
In this paper we consider the pollution attack in network coded systems where network nodes are computationally limited. We consider the combined use of cryptographic signature based security and information theoretic network error correction and propose a fountain-like network error correction code construction suitable for this purpose.
2010
Chrysikos, T., Dagiuklas, T., Kotsopoulos, S..  2010.  Wireless Information-Theoretic Security for moving users in autonomic networks. 2010 IFIP Wireless Days. :1–5.
This paper studies Wireless Information-Theoretic Security for low-speed mobility in autonomic networks. More specifically, the impact of user movement on the Probability of Non-Zero Secrecy Capacity and the Outage Secrecy Capacity for different channel conditions has been investigated. This is accomplished by establishing a link between different user locations and the boundaries of information-theoretic secure communication. Human mobility scenarios are considered, and their impact on physical layer security is examined, assuming quasi-static Rayleigh channels for the fading phenomena. Simulation results have shown that the Secrecy Capacity depends on the relative distance of legitimate and illegitimate (eavesdropper) users with respect to the given transmitter.
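
For quasi-static Rayleigh fading, the probability of non-zero secrecy capacity reduces to the probability that the legitimate channel's instantaneous SNR exceeds the eavesdropper's. A minimal Monte Carlo sketch of that distance dependence (our illustration; the path-loss exponent, reference SNR, and distances are assumptions, not the paper's parameters):

```python
# Monte Carlo sketch: probability of non-zero secrecy capacity under
# quasi-static Rayleigh fading, with average SNR decaying as d**-alpha.
import numpy as np

def prob_nonzero_secrecy(d_main, d_eve, alpha=3.0, snr0=20.0, trials=100_000):
    rng = np.random.default_rng(0)
    # Rayleigh fading -> exponentially distributed instantaneous SNRs.
    snr_main = rng.exponential(scale=snr0 * d_main**-alpha, size=trials)
    snr_eve = rng.exponential(scale=snr0 * d_eve**-alpha, size=trials)
    cs = np.log2(1 + snr_main) - np.log2(1 + snr_eve)  # secrecy rate, may be < 0
    return float(np.mean(cs > 0))

# Eavesdropper farther from the transmitter than the legitimate user:
print(prob_nonzero_secrecy(d_main=10.0, d_eve=30.0))  # close to 1
print(prob_nonzero_secrecy(d_main=30.0, d_eve=10.0))  # well below 0.5
```
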
Kessel, Ronald.  2010.  The positive force of deterrence: Estimating the quantitative effects of target shifting. 2010 International WaterSide Security Conference. :1–5.
The installation of a protection system can provide protection by either deterring or stopping an attacker. Both modes of effectiveness, deterring and stopping, are uncertain. Some have guessed that deterrence plays a much bigger role than stopping force. The force of deterrence should therefore be of considerable interest, especially if its effect could be estimated and incorporated into a larger risk analysis and business case for developing and buying new systems, but nowhere has it been estimated quantitatively. The effect of one type of deterrence, namely influencing an attacker's choice of targets (target shifting: biasing an attacker away from some targets toward others), is assessed quantitatively here using a game-theoretic approach. It is shown that its positive effects are significant. It acts as a force multiplier of an order of magnitude or more, even for low-performance security countermeasures whose effectiveness may be compromised somewhat, of necessity, in order to keep the number of false alarms serviceably low. The analysis furthermore implies that there are certain minimum levels of stopping performance that a protection system should provide in order to avoid attracting the choice of attackers (under deterrence). Nothing in the analysis argues for complacency in security. Developers must still design the best affordable systems. The analysis enters into the middle ground of security, between no protection and impossibly perfect protection. It counters the criticisms that some raise about the lower-level, affordable, sustainable measures that security providers naturally gravitate toward. Although these measures might in some places be defeated in ways that a non-expert can imagine, the measures are not for that reason irresponsible or to be dismissed. Their effectiveness can be much greater than it first appears.
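
As a toy illustration of target shifting (our sketch, not Kessel's model), a rational attacker who picks the target with the highest success probability is pushed away from any site with even a modest stopping probability:

```python
# Toy sketch: a rational attacker picks the target with the highest
# success probability, so a modest stopping capability at one site
# shifts attacks elsewhere (deterrence as target shifting).
def attacker_choice(stop_prob):
    """stop_prob: dict target -> probability the protection stops an attack."""
    return max(stop_prob, key=lambda t: 1.0 - stop_prob[t])

sites = {"harbour_A": 0.0, "harbour_B": 0.0}
print(attacker_choice(sites))    # indifferent between unprotected targets

sites["harbour_A"] = 0.3         # low-performance countermeasure installed
print(attacker_choice(sites))    # -> harbour_B: the attack is shifted away
# For harbour_A, deterrence is fully effective even though the system
# would stop only 30% of attacks actually pressed against it.
```
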
2011
Kashyap, V., Wiedermann, B., Hardekopf, B..  2011.  Timing- and Termination-Sensitive Secure Information Flow: Exploring a New Approach. Security and Privacy (SP), 2011 IEEE Symposium on. :413-428.

Secure information flow guarantees the secrecy and integrity of data, preventing an attacker from learning secret information (secrecy) or injecting untrusted information (integrity). Covert channels can be used to subvert these security guarantees; for example, timing and termination channels can, either intentionally or inadvertently, violate these guarantees by modifying the timing or termination behavior of a program based on secret or untrusted data. Attacks using these covert channels have been published and are known to work in practice. As techniques to prevent non-covert channels are becoming increasingly practical, covert channels are likely to become even more attractive for attackers to exploit. The goal of this paper is to understand the subtleties of timing- and termination-sensitive noninterference, explore the space of possible strategies for enforcing noninterference guarantees, and formalize the exact guarantees that these strategies can enforce. As a result of this effort we create a novel strategy that provides stronger security guarantees than existing work, and we clarify claims in existing work about what guarantees can be made.
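For instance, a timing channel arises whenever a program's running time depends on secret data; a minimal sketch (ours, not from the paper):

```python
# Timing covert channel: check() returns only one bit, but its running
# time leaks how many leading characters of the guess match the secret.
import time

SECRET = "s3cret"

def check(guess):
    for a, b in zip(SECRET, guess):
        if a != b:
            return False
        time.sleep(0.01)  # per-character work: time grows with match length
    return len(guess) == len(SECRET)

for guess in ["x00000", "s00000", "s30000"]:
    t0 = time.perf_counter()
    check(guess)
    print(guess, round(time.perf_counter() - t0, 3))  # time leaks the prefix
```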

2012
Shepherd, Morgan M., Klein, Gary.  2012.  Using Deterrence to Mitigate Employee Internet Abuse. 2012 45th Hawaii International Conference on System Sciences. :5261–5266.
This study looks at the question of how to reduce or eliminate employee Internet abuse. Companies have used acceptable use policies (AUPs) and technology in an attempt to mitigate employees' personal use of company resources. Research shows that AUPs do not do a good job at this but that technology does. Research also shows that while technology can be used to greatly restrict personal use of the Internet in the workplace, employee satisfaction with the workplace suffers when this is done. In this research experiment we used technology not to restrict employee use of company resources for personal use, but to make employees more aware of the current acceptable use policy, and measured the decrease in employee Internet abuse. The results show that this method can produce a drop from 27 to 21 percent in personal use of company networks.
Kloft, Marius, Laskov, Pavel.  2012.  Security Analysis of Online Centroid Anomaly Detection. J. Mach. Learn. Res.. 13:3681–3724.

Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). In such cases, learning algorithms may have to cope with manipulated data aimed at hampering decision making. Although some previous work addressed the issue of handling malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method, online centroid anomaly detection, in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of attack efficiency and limitations. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: attacker's full or limited control over the traffic and bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation, carried out on real traces of HTTP and exploit traffic, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
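A minimal sketch of the setting (our illustration; the radius, learning rate, and greedy poisoning strategy are assumptions, not the paper's exact formalization):

```python
# Online centroid anomaly detection: a point is normal if it lies within
# radius r of the centroid; accepted points move the centroid by a bounded
# step, which is what a poisoning adversary exploits.
import numpy as np

class OnlineCentroid:
    def __init__(self, center, r, lr=0.05):
        self.c = np.asarray(center, dtype=float)
        self.r, self.lr = r, lr

    def update(self, x):
        x = np.asarray(x, dtype=float)
        accepted = np.linalg.norm(x - self.c) <= self.r
        if accepted:                     # only accepted points move the centroid
            self.c += self.lr * (x - self.c)
        return accepted

det = OnlineCentroid(center=[0.0, 0.0], r=1.0)
target = np.array([5.0, 0.0])            # point the attacker wants accepted
steps = 0
while not det.update(target):            # greedy poisoning: inject points just
    d = (target - det.c) / np.linalg.norm(target - det.c)
    det.update(det.c + 0.99 * det.r * d)  # inside the boundary, toward target
    steps += 1
print("poisoning points needed:", steps)  # finite when traffic is unconstrained
```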

Rossow, C., Dietrich, C.J., Grier, C., Kreibich, C., Paxson, V., Pohlmann, N., Bos, H., van Steen, M..  2012.  Prudent Practices for Designing Malware Experiments: Status Quo and Outlook. Security and Privacy (SP), 2012 IEEE Symposium on. :65-79.

Malware researchers rely on the observation of malicious code in execution to collect datasets for a wide array of experiments, including generation of detection models, study of longitudinal behavior, and validation of prior research. For such research to reflect prudent science, the work needs to address a number of concerns relating to the correct and representative use of the datasets, presentation of methodology in a fashion sufficiently transparent to enable reproducibility, and due consideration of the need not to harm others. In this paper we study the methodological rigor and prudence in 36 academic publications from 2006-2011 that rely on malware execution. 40% of these papers appeared in the 6 highest-ranked academic security conferences. We find frequent shortcomings, including problematic assumptions regarding the use of execution-driven datasets (25% of the papers), absence of description of security precautions taken during experiments (71% of the articles), and oftentimes insufficient description of the experimental setup. Deficiencies occur in top-tier venues and elsewhere alike, highlighting a need for the community to improve its handling of malware datasets. In the hope of aiding authors, reviewers, and readers, we frame guidelines regarding transparency, realism, correctness, and safety for collecting and using malware datasets.

2013
Mazurek, Michelle L., Komanduri, Saranga, Vidas, Timothy, Bauer, Lujo, Christin, Nicolas, Cranor, Lorrie Faith, Kelley, Patrick Gage, Shay, Richard, Ur, Blase.  2013.  Measuring Password Guessability for an Entire University. Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. :173–186.
Despite considerable research on passwords, empirical studies of password strength have been limited by lack of access to plaintext passwords, small data sets, and password sets specifically collected for a research study or from low-value accounts. Properties of passwords used for high-value accounts thus remain poorly understood. We fill this gap by studying the single-sign-on passwords used by over 25,000 faculty, staff, and students at a research university with a complex password policy. Key aspects of our contributions rest on our (indirect) access to plaintext passwords. We describe our data collection methodology, particularly the many precautions we took to minimize risks to users. We then analyze how guessable the collected passwords would be during an offline attack by subjecting them to a state-of-the-art password cracking algorithm. We discover significant correlations between a number of demographic and behavioral factors and password strength. For example, we find that users associated with the computer science school make passwords more than 1.5 times as strong as those of users associated with the business school. In addition, we find that stronger passwords are correlated with a higher rate of errors entering them. We also compare the guessability and other characteristics of the passwords we analyzed to sets previously collected in controlled experiments or leaked from low-value accounts. We find more consistent similarities between the university passwords and passwords collected for research studies under similar composition policies than we do between the university passwords and subsets of passwords leaked from low-value accounts that happen to comply with the same policies.
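
The underlying strength metric in such studies is the guess number: the position of a password in a cracker's guess ordering. A minimal sketch (ours; the wordlist is invented for illustration):

```python
# Guess-number sketch: a password's strength is how many guesses a
# cracker makes before reaching it under some guess ordering.
def guess_number(password, guess_stream):
    for i, guess in enumerate(guess_stream, start=1):
        if guess == password:
            return i
    return None  # not cracked within this guessing budget

wordlist = ["123456", "password", "iloveyou", "Password1!", "correcthorse"]
print(guess_number("Password1!", wordlist))   # 4: weak under this ordering
print(guess_number("tr0ub4dor&3", wordlist))  # None: survives this budget
```
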
Fahl, Sascha, Harbach, Marian, Perl, Henning, Koetter, Markus, Smith, Matthew.  2013.  Rethinking SSL Development in an Appified World. Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. :49–60.
The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers.
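
As an illustration of the SSL pinning countermeasure the paper advocates, here is a minimal sketch (ours, not the paper's framework) that pins the SHA-256 digest of a server's leaf certificate; PINNED_SHA256 is a hypothetical placeholder:

```python
# Certificate-pinning sketch: after normal chain validation, compare the
# SHA-256 of the server's DER-encoded leaf certificate to a pinned digest.
import hashlib, socket, ssl

PINNED_SHA256 = "replace-with-known-digest"  # hypothetical pin

def connect_pinned(host, port=443):
    ctx = ssl.create_default_context()       # standard chain validation first
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            digest = hashlib.sha256(der).hexdigest()
            if digest != PINNED_SHA256:
                raise ssl.SSLError(f"pin mismatch for {host}: {digest}")
            return digest  # connection may now be used

# connect_pinned("example.org")  # raises unless the pin matches
```
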
Omar, Cyrus, Chung, Benjamin, Kurilova, Darya, Potanin, Alex, Aldrich, Jonathan.  2013.  Type-directed, whitespace-delimited parsing for embedded DSLs. Proceedings of the First Workshop on the Globalization of Domain Specific Languages. :8–11.
Domain-specific languages improve ease-of-use, expressiveness and verifiability, but defining and using different DSLs within a single application remains difficult. We introduce an approach for embedded DSLs where 1) whitespace delimits DSL-governed blocks, and 2) the parsing and type checking phases occur in tandem so that the expected type of the block determines which domain-specific parser governs that block. We argue that this approach occupies a sweet spot, providing high expressiveness and ease-of-use while maintaining safe composability. We introduce the design, provide examples and describe an ongoing implementation of this strategy in the Wyvern programming language. We also discuss how a more conventional keyword-directed strategy for parsing of DSLs can arise as a special case of this type-directed strategy.
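
A toy sketch of the core idea (ours; the HTML/SQL "types" and parsers are hypothetical stand-ins for Wyvern's mechanism):

```python
# Type-directed parsing sketch: the *expected type* at a use site selects
# which domain-specific parser interprets a whitespace-delimited block.
HTML, SQL = "HTML", "SQL"  # hypothetical DSL-typed values

def parse_html(src):
    return ("html_ast", src.strip())

def parse_sql(src):
    return ("sql_ast", src.strip())

PARSERS = {HTML: parse_html, SQL: parse_sql}  # type -> DSL parser

def elaborate(expected_type, dsl_block):
    # During type checking, the expected type determines the grammar
    # used for the DSL-governed block.
    return PARSERS[expected_type](dsl_block)

print(elaborate(HTML, "  <b>hi</b>  "))  # parsed with the HTML grammar
print(elaborate(SQL, "  SELECT 1  "))    # same block form, SQL grammar
```
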
Nistor, Ligia, Kurilova, Darya, Balzer, Stephanie, Chung, Benjamin, Potanin, Alex, Aldrich, Jonathan.  2013.  Wyvern: A Simple, Typed, and Pure Object-oriented Language. Proceedings of the 5th Workshop on MechAnisms for SPEcialization, Generalization and inHerItance. :9–16.
The simplest and purest practical object-oriented language designs today are seen in dynamically-typed languages, such as Smalltalk and Self. Static types, however, have potential benefits for productivity, security, and reasoning about programs. In this paper, we describe the design of Wyvern, a statically typed, pure object-oriented language that attempts to retain much of the simplicity and expressiveness of these iconic designs. Our goals lead us to combine pure object-oriented and functional abstractions in a simple, typed setting. We present a foundational object-based language that we believe to be as close as one can get to simple typed lambda calculus while keeping object-orientation. We show how this foundational language can be translated to the typed lambda calculus via standard encodings. We then define a simple extension to this language that introduces classes and show that classes are no more than sugar for the foundational object-based language. Our future intention is to demonstrate that modules and other object-oriented features can be added to our language as not more than such syntactical extensions while keeping the object-oriented core as pure as possible. The design of Wyvern closely follows both historical and modern ideas about the essence of object-orientation, suggesting a new way to think about a minimal, practical, typed core language for objects.
2014
Kothari, Vijay, Blythe, Jim, Smith, Sean, Koppel, Ross.  2014.  Agent-based Modeling of User Circumvention of Security. 1st International Workshop on Agents and CyberSecurity. :5:1–5:4.

Security subsystems are often designed with flawed assumptions arising from system designers' faulty mental models. Designers tend to assume that users behave according to some textbook ideal, and to consider each potential exposure/interface in isolation. However, fieldwork continually shows that even well-intentioned users often depart from this ideal and circumvent controls in order to perform daily work tasks, and that "incorrect" user behaviors can create unexpected links between otherwise "independent" interfaces. When it comes to security features and parameters, designers try to find the choices that optimize security utility, except that these flawed assumptions give rise to an incorrect curve and lead to choices that actually make security worse in practice. We propose that improving this situation requires giving designers more accurate models of real user behavior and how it influences aggregate system security. Agent-based modeling can be a fruitful first step here. In this paper, we study a particular instance of this problem, propose user-centric techniques designed to strengthen the security of systems while simultaneously improving their usability, and propose further directions of inquiry.
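A minimal agent-based sketch of circumvention dynamics (entirely our assumptions, not the paper's model): making a control more burdensome increases circumvention and so can reduce aggregate compliance:

```python
# Agent-based sketch: each agent circumvents a control when its workload
# pressure, scaled by the control's burden, exceeds its tolerance for
# friction; aggregate security is the fraction of compliant tasks.
import random

def simulate(n_agents=1000, control_burden=0.4, seed=1):
    rng = random.Random(seed)
    compliant = 0
    for _ in range(n_agents):
        pressure = rng.random()   # how much the control slows this agent
        tolerance = rng.random()  # willingness to put up with friction
        if pressure * control_burden <= tolerance:
            compliant += 1        # control used as designed
    return compliant / n_agents

# A heavier-handed control drives more circumvention, not more security:
for burden in (0.2, 0.6, 1.0):
    print(burden, simulate(control_burden=burden))
```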

Miller, Andrew, Hicks, Michael, Katz, Jonathan, Shi, Elaine.  2014.  Authenticated Data Structures, Generically. SIGPLAN Not.. 49:411–423.

An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations.

This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
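The prover/verifier split can be seen in miniature in a Merkle tree, the classic ADS that λ• generalizes; a sketch (ours, assuming a power-of-two leaf count):

```python
# Merkle-tree sketch: the prover returns a value plus sibling hashes; the
# verifier checks them against a single trusted root digest.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build(leaves):
    level = [h(x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree  # tree[-1][0] is the root digest

def prove(tree, i):
    proof = []
    for level in tree[:-1]:
        sib = i ^ 1
        proof.append((sib < i, level[sib]))  # (sibling-is-left?, sibling hash)
        i //= 2
    return proof

def verify(root, leaf, proof):
    acc = h(leaf)
    for left, sib in proof:
        acc = h(sib + acc) if left else h(acc + sib)
    return acc == root

tree = build([b"a", b"b", b"c", b"d"])
assert verify(tree[-1][0], b"c", prove(tree, 2))      # authentic result
assert not verify(tree[-1][0], b"x", prove(tree, 2))  # tampering detected
```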

He, Yu-Lin, Wang, Ran, Kwong, Sam, Wang, Xi-Zhao.  2014.  Bayesian Classifiers Based on Probability Density Estimation and Their Applications to Simultaneous Fault Diagnosis. Inf. Sci.. 259:252–268.

A key characteristic of simultaneous fault diagnosis is that the features extracted from the original patterns are strongly dependent. This paper proposes a new model of Bayesian classifier, which removes the fundamental assumption of the naive Bayesian classifier, i.e., independence among features. In our model, optimal bandwidth selection is applied to estimate the class-conditional probability density function (p.d.f.), which is the essential part of joint p.d.f. estimation. Three well-known indices, i.e., classification accuracy, area under the ROC curve, and probability mean square error, are used to measure the performance of our model in simultaneous fault diagnosis. Simulations show that our model is significantly superior to the traditional ones when dependence exists among features.
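A sketch of the approach (ours; scipy's default Scott's-rule bandwidth stands in for the paper's optimal bandwidth selection):

```python
# Bayes classifier with kernel density estimates of the class-conditional
# *joint* p.d.f., so feature dependence is retained rather than assumed
# away as in naive Bayes.
import numpy as np
from scipy.stats import gaussian_kde

class KDEBayes:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.kdes = {c: gaussian_kde(X[y == c].T) for c in self.classes}
        self.priors = {c: np.mean(y == c) for c in self.classes}
        return self

    def predict(self, X):
        scores = np.array([self.priors[c] * self.kdes[c](X.T)
                           for c in self.classes])
        return self.classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
X0 = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 200)   # dependent
X1 = rng.multivariate_normal([2, 2], [[1, -0.6], [-0.6, 1]], 200) # features
X = np.vstack([X0, X1]); y = np.array([0] * 200 + [1] * 200)
print(KDEBayes().fit(X, y).predict(np.array([[0.1, 0.2], [2.1, 1.8]])))
```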

Forget, Alain, Komanduri, Saranga, Acquisti, Alessandro, Christin, Nicolas, Cranor, Lorrie Faith, Telang, Rahul.  2014.  Building the Security Behavior Observatory: An Infrastructure for Long-term Monitoring of Client Machines. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :24:1–24:2.

We present an architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security of the collected data, which will inevitably include intimate participant behavioral data. Third, the SBO must serve our research interests, which will inevitably change as collected data is analyzed and interpreted. This short paper summarizes some of our design and implementation benefits and discusses a few hurdles and trade-offs to consider when designing such a data collection system.

Perzyk, Marcin, Kochanski, Andrzej, Kozlowski, Jacek, Soroczynski, Artur, Biernacki, Robert.  2014.  Comparison of Data Mining Tools for Significance Analysis of Process Parameters in Applications to Process Fault Diagnosis. Inf. Sci.. 259:380–392.

This paper presents an evaluation of various methodologies used to determine relative significances of input variables in data-driven models. Significance analysis applied to manufacturing process parameters can be a useful tool in fault diagnosis for various types of manufacturing processes. It can also be applied to building models that are used in process control. The relative significances of input variables can be determined by various data mining methods, including relatively simple statistical procedures as well as more advanced machine learning systems. Several methodologies suitable for carrying out classification tasks which are characteristic of fault diagnosis were evaluated and compared from the viewpoint of their accuracy, robustness of results and applicability. Two types of testing data were used: synthetic data with assumed dependencies and real data obtained from the foundry industry. The simple statistical method based on contingency tables revealed the best overall performance, whereas advanced machine learning models, such as ANNs and SVMs, appeared to be of less value.
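A sketch of the contingency-table method (ours; the binning, smoothing, and use of Cramer's V as the significance measure are our assumptions):

```python
# Rank process parameters by the strength of their association with the
# fault class, via the chi-squared statistic normalized as Cramer's V.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(x, y, bins=5):
    """Association between a (binned) process parameter x and class label y."""
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    table = np.zeros((bins, len(np.unique(y))))
    for xi, yi in zip(x_binned, y):
        table[xi, yi] += 1
    chi2 = chi2_contingency(table + 1e-9)[0]  # tiny smoothing avoids zeros
    n, k = table.sum(), min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))

rng = np.random.default_rng(0)
temp = rng.normal(size=1000)
noise = rng.normal(size=1000)
fault = (temp > 0.5).astype(int)                    # fault driven by temperature
print("temp :", round(cramers_v(temp, fault), 2))   # high significance
print("noise:", round(cramers_v(noise, fault), 2))  # near zero
```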

Sarikaya, Y., Ercetin, O., Koksal, C.E..  2014.  Confidentiality-Preserving Control of Uplink Cellular Wireless Networks Using Hybrid ARQ. Networking, IEEE/ACM Transactions on. PP:1-1.

We consider the problem of cross-layer resource allocation with information-theoretic secrecy for uplink transmissions in time-varying cellular wireless networks. Particularly, each node in an uplink cellular network injects two types of traffic, confidential and open, at rates chosen in order to maximize a global utility function while keeping the data queues stable and meeting a constraint on the secrecy outage probability. The transmitting node only knows the distribution of channel gains. Our scheme is based on Hybrid Automatic Repeat Request (HARQ) transmission with incremental redundancy. We prove that our scheme achieves a utility arbitrarily close to the maximum achievable. Numerical experiments are performed to verify the analytical results and to show the efficacy of the dynamic control algorithm.

Kanwal, Ayesha, Masood, Rahat, Shibli, Muhammad Awais.  2014.  Evaluation and Establishment of Trust in Cloud Federation. Proceedings of the 8th International Conference on Ubiquitous Information Management and Communication. :12:1–12:8.

Cloud federation is a future evolution of Cloud computing, where Cloud Service Providers (CSPs) collaborate dynamically to share their virtual infrastructure for load balancing and for meeting the Quality of Service during demand spikes. Today, one of the major obstacles to the adoption of federation is the lack of trust between Cloud providers participating in a federation. In order to ensure the security of critical and sensitive customer data, it is important to evaluate and establish trust between Cloud providers before redirecting a customer's requests from one provider to another. We propose a trust evaluation model and underlying protocol that facilitate Cloud providers in evaluating the trustworthiness of each other and hence in participating in federation to share their infrastructure in a trusted and reliable way.

Li, Bo, Vorobeychik, Yevgeniy.  2014.  Feature Cross-Substitution in Adversarial Classification. Advances in Neural Information Processing Systems 27. :2087–2095.

The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort to evade the classifiers deployed to detect them. We investigate both the problem of modeling the objectives of such adversaries, as well as the algorithmic problem of accounting for rational, objective-driven adversaries. In particular, we demonstrate severe shortcomings of feature reduction in adversarial settings using several natural adversarial objective functions, an observation that is particularly pronounced when the adversary is able to substitute across similar features (for example, replace words with synonyms or replace letters in words). We offer a simple heuristic method for making learning more robust to feature cross-substitution attacks. We then present a more general approach based on mixed-integer linear programming with constraint generation, which implicitly trades off overfitting and feature selection in an adversarial setting using a sparse regularizer along with an evasion model. Our approach is the first method for combining an adversarial classification algorithm with a very general class of models of adversarial classifier evasion. We show that our algorithmic approach significantly outperforms state-of-the-art alternatives.
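A minimal sketch of feature cross-substitution against a linear text classifier (ours; the weights and synonym pairs are invented for illustration):

```python
# Feature cross-substitution: an adversary evades a linear spam scorer by
# swapping flagged words for synonyms the model weights differently,
# without changing the message's meaning.
WEIGHTS = {"free": 2.0, "winner": 2.5, "cash": 1.8,        # spammy features
           "complimentary": 0.1, "recipient": 0.0, "funds": 0.2}
SYNONYMS = {"free": "complimentary", "winner": "recipient", "cash": "funds"}
THRESHOLD = 2.0

def score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

msg = "you are a winner claim your free cash".split()
print(score(msg), score(msg) > THRESHOLD)        # flagged as spam

evaded = [SYNONYMS.get(w, w) for w in msg]       # cross-substitute features
print(score(evaded), score(evaded) > THRESHOLD)  # slips under the threshold
```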

Kästner, Christian, Pfeffer, Jürgen.  2014.  Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation Among Configuration Options. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :23:1–23:2.

In highly configurable systems the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have local effects. Instead of analyzing a system as Linux and SELinux for every combination of configuration settings one by one (>102000 even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us to designs separating interacting configuration options in a core system and isolating orthogonal and less trusted configuration options from this core.

King, Jason, Williams, Laurie.  2014.  Log Your CRUD: Design Principles for Software Logging Mechanisms. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :5:1–5:10.

According to a 2011 survey in healthcare, the most commonly reported breaches of protected health information involved employees snooping into medical records of friends and relatives. Logging mechanisms can provide a means for forensic analysis of user activity in software systems by proving that a user performed certain actions in the system. However, logging mechanisms often inconsistently capture user interactions with sensitive data, creating gaps in traces of user activity. Explicit design principles and systematic testing of logging mechanisms within the software development lifecycle may help strengthen the overall security of software. The objective of this research is to observe the current state of logging mechanisms by performing an exploratory case study in which we systematically evaluate logging mechanisms by supplementing the expected results of existing functional black-box test cases to include log output. We perform an exploratory case study of four open-source electronic health record (EHR) logging mechanisms: OpenEMR, OSCAR, Tolven eCHR, and WorldVistA. We supplement the expected results of 30 United States government-sanctioned test cases to include log output to track access of sensitive data. We then execute the test cases on each EHR system. Six of the 30 (20%) test cases failed on all four EHR systems because user interactions with sensitive data are not logged. We find that viewing protected data is often not logged by default, allowing unauthorized views of data to go undetected. Based on our results, we propose a set of principles that developers should consider when developing logging mechanisms to ensure the ability to capture adequate traces of user activity.
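A minimal sketch of the design principle (ours, not one of the studied EHR systems): route every create/read/update/delete on protected data through a wrapper that writes an audit entry, so that views of sensitive data cannot go undetected:

```python
# "Log your CRUD" sketch: each access to protected data emits a log entry
# naming the actor, action, and record.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("audit")

def logged(action):
    def wrap(fn):
        def inner(user, record_id, *args, **kwargs):
            audit.info("user=%s action=%s record=%s", user, action, record_id)
            return fn(user, record_id, *args, **kwargs)
        return inner
    return wrap

RECORDS = {42: {"name": "J. Doe", "dx": "redacted"}}

@logged("READ")
def view_record(user, record_id):
    return RECORDS[record_id]

view_record("nurse7", 42)  # the view itself is captured in the audit log
```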

Ken Keefe, University of Illinois at Urbana-Champaign.  2014.  Making Sound Design Decisions Using Quantitative Security Metrics.

Presented at the Illinois SoS Bi-weekly Meeting, December 2014.

Okada, Kazuya, Hazeyama, Hiroaki, Kadobayashi, Youki.  2014.  Oblivious DDoS Mitigation with Locator/ID Separation Protocol. Proceedings of The Ninth International Conference on Future Internet Technologies. :8:1–8:6.

The need to keep an attacker oblivious of an attack mitigation effort is a very important component of a defense against denial of service (DoS) and distributed denial of service (DDoS) attacks because it helps to dissuade attackers from changing their attack patterns. Conceptually, DDoS mitigation can be achieved by two components. The first is a decoy server that provides a service function or receives attack traffic as a substitute for a legitimate server. The second is a decoy network that restricts attack traffic to the peripheries of a network, or which reroutes attack traffic to decoy servers. In this paper, we propose the use of a two-stage map-table extension to the Locator/ID Separation Protocol (LISP) to realize a decoy network. We also describe and demonstrate how LISP can be used to implement an oblivious DDoS mitigation mechanism by adding a simple extension on the LISP MapServer. Together with decoy servers, this method can terminate DDoS traffic on the ingress end of a LISP-enabled network. We verified the effectiveness of our proposed mechanism through simulated DDoS attacks on a simple network topology. Our evaluation results indicate that the mechanism could be activated within a few seconds, and that attack traffic can be terminated without incurring overhead on the MapServer.
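
A toy sketch of the two-stage lookup idea (ours, not the authors' MapServer implementation; addresses are from documentation ranges):

```python
# Two-stage map table sketch: ordinary EID lookups resolve normally, but
# sources flagged as attackers receive a decoy locator (RLOC), keeping
# the mitigation invisible to them.
NORMAL_MAP = {"192.0.2.10": "rloc-server-1"}  # EID -> legitimate RLOC
DECOY_MAP = {"192.0.2.10": "rloc-decoy-1"}    # EID -> decoy server RLOC
FLAGGED_SOURCES = {"203.0.113.66"}            # identified attack sources

def map_request(source, eid):
    # Stage 1: classify the requester; stage 2: resolve in the chosen table.
    table = DECOY_MAP if source in FLAGGED_SOURCES else NORMAL_MAP
    return table.get(eid)

print(map_request("198.51.100.5", "192.0.2.10"))  # rloc-server-1
print(map_request("203.0.113.66", "192.0.2.10"))  # rloc-decoy-1 (oblivious)
```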