2012 Science of Security Community Meeting Posters

The results of the poster session informal voting competition are as follows:

#1 Cybersecurity Career Engagement - 11 votes
Diane Burley (George Washington/National CyberWatch), Portia Pusey (National CyberWatch), David Tobey (NBISE)

#2 Compositional Security - 10 votes
Anupam Datta (CMU), Limin Jia (CMU), Jeanette Wing (CMU)

#3 Side Channel Vulnerability Factor - 9 votes
John Demme (Columbia University), Robert Martin (Columbia University), Simha Sethumadhavan (Columbia University), Adam Waksman (Columbia University)

#3 Automated Code Diversification - 9 votes
Michael Franz (University of California, Irvine)

A Comparative Analysis of Software Liability Policies

ABSTRACT

Fig. 1: Comparing the effects on security investment and social welfare for each of three policies: software security standards, liability on patching costs, and liability on zero-day losses. Panel (a) illustrates how each policy affects the equilibrium investment in security made by the vendor. Panel (b) illustrates the impact of each policy on social welfare.

I. INTRODUCTION

In the current network environment, there are serious incentive problems among various actors whose decisions impact the overall security of the cyber infrastructure; the risks associated with attacks on this infrastructure are growing in number and potential impact; and the importance of the role of regulation is increasingly understood and debated (see, e.g., [1]- [4]). However, answering how regulation can actuate a shift toward preferable outcomes, such as an increasingly secure cyber infrastructure and higher social surplus associated with these public resources, is not well understood and requires formal analysis. We begin to explore this important question by analyzing an economic model that captures both security interdependence and the primary underlying incentives of actors.

One corrective means to address the underlying incentive problems, one which has received intense debate in the security community, is the ownership of liability for network security losses. We investigate how liability policies can be used to increase Internet security, considering the effects of interconnectivity and the resulting interdependence of users' security actions on one another. There are two economic factors whose roles we focus on when exploring the effectiveness of these mechanisms. First, users who do not patch impose negative network externalities on other users: the larger the population of users who do not patch the software, the larger the network risk associated with vulnerabilities. Second, patching is a costly undertaking. If a vendor makes a patch available when there is a vulnerability and a user decides to patch her system, testing and installing that patch imposes a cost burden on the user. Thus, many users do not patch their systems even if a patch is available (see, e.g., [5] and the references therein). Combining this behavior with the network effects discussed above, we see that the costliness of patching and negative security externalities increase the security risk faced by all users; hence, both the value of the software product for users and, consequently, social welfare are reduced.

II. APPROACH

In light of these factors, we construct a microeconomic model accounting for government, firm, and user incentives. Using this model, we begin by studying the economics of the network environment in a short-run setting, where the security level of the software is taken as given. In the long run, the role of liability in increasing Internet security also impacts the incentives of software providers to adjust the security quality

Fig. 2: A characterization of the optimal patch liability share as it depends on the likelihood of zero-day attacks and the potential magnitude of security losses.

of their products at the design and development stage. Therefore, we subsequently study a long-run setting where vendors can invest in security quality in response to policy. We analyze and contrast three classes of security liability policies, namely (i) Loss Liability Policies, where the software vendor is liable to partially or fully compensate users' losses incurred in case of a zero-day attack; (ii) Patch Liability Policies, where the software vendor is held liable to compensate patching costs incurred by users if a vulnerability is discovered before it is exploited; and (iii) Security Standards Policies, where regulation enforces a certain standard of security to be achieved by the vendor during software development to mitigate security vulnerabilities. The former two policies are applicable in both short-run and long-run settings, while the last is applicable only in the long run. We explore the role of zero-day attacks by studying and comparing these policies in two security environments, with low and high zero-day attack likelihood respectively. For each class of policies and each zero-day environment, we characterize the optimal policy and study its effectiveness. We then compare and contrast their effectiveness and generate policy recommendations based on our analysis. Our goal is to answer the question of whether vendor security liability can be an effective method to improve security and the net social value generated by software, and, if so, how.

III. RESULTS

As can be seen in panel (b) of Fig. 1, a main takeaway from our analysis is that social planners are advised to refrain from imposing loss liability on vendors of software with interdependent risks. We find that whether or not software vendors have the opportunity to invest in security in response to loss liability makes little difference in the net impact of such a policy: welfare mostly decreases. On the surface, one may think that holding the vendor liable for a portion of consumers' losses would provide increased incentives for the vendor to improve security, and, in some cases, the vendor's optimal investment indeed rises. However, even if investment responds positively, the vendor tends to restrict usage through pricing to limit its exposure to liability on these losses. Hence, loss liability is often an inefficient policy to apply in the software industry. On the other hand, we find that both patch liability and security standards should play an important role in software policy. Security standards are recommended for environments with low zero-day attack risk because, under these conditions, software vendors lack incentives to produce software that is sufficiently secure. Sharing of patching costs, in contrast, is recommended for environments with higher zero-day attack risk because of its ability to increase security in riskier settings by incentivizing appropriate user behaviors. Fig. 2 illustrates how the optimal patching cost share (as prescribed by a social planner) varies with economic costs and security characteristics. There are many ways to implement the essence of a patch liability policy. For example, software vendors could offer a price discount to customers who agree to adhere to an appropriate patching strategy, because patching behavior is verifiable in such interconnected environments.

ACKNOWLEDGMENTS

This material is based upon work supported by the National Science Foundation under Grant No. CNS-0954234.

REFERENCES

[1] M. Chertoff, "Remarks by Homeland Security Secretary Michael Chertoff at the Chamber of Commerce on cybersecurity," Office of the Press Secretary, Oct 2008.

[2] B. Krebs, "Bush approves cybersecurity strategy," Washington Post, Jan 2003.

[3] F. B. Schneider, "It depends on what you pay," IEEE Security & Privacy, vol. 3, no. 3, p. 3, 2005.

[4] J. Lewis, "Innovation and cybersecurity regulation," CircleID, Mar 2009.

[5] T. August and T. I. Tunca, "Network software security and user incentives," Management Science, vol. 52, no. 11, pp. 1703-1720, 2006.

BIO

Terrence August is an assistant professor of Innovation, Technology and Operations Management at the Rady School of Management, University of California, San Diego. He received his Ph.D. from the Graduate School of Business at Stanford University in 2007. His research interests broadly span information systems and operations management, including the economics of network software, production and service management, pricing and policy associated with network goods, and the interaction of digital piracy and security risk.

Currently, as part of an NSF supported research project, he is examining the control of information security risk using economic incentives. Specifically, he aims to develop an understanding of the relationship between government policy, economic incentives of firms and consumers, and software security risks of networks by studying three important aspects: software liability, the impact of software deployment models, and open source software incentives for security.

Terrence August
Tunay I. Tunca


An Investigation of Scientific Principles Involved in Software Security Engineering

License: Creative Commons 2.5
Mladen A. Vouk
Laurie Williams


Architecture-based Self-securing Systems

ABSTRACT

Despite our best attempts to ensure that software systems are secure by design and construction, deployed systems must inevitably cope with unanticipated attacks and latent vulnerabilities. Hence, a critical component of a comprehensive science for security is the ability to support run-time security enforcement, problem detection, and repair. However, today's run-time mechanisms for handling security problems are often an ad hoc mixture of single-point solutions unsupported by a unifying set of design and analysis principles. It is virtually impossible to make rigorous and assurable decisions about the kinds and levels of run-time detection and prevention needed in a particular context. Our research contributes directly to this aspect of a science of security, namely assurable run-time security enforcement and repair. Specifically, our approach recognizes that the problem is essentially one of developing closed-loop control systems that provide a supervisory level responsible for detecting and repairing security problems. It builds on prior research in architecture-based self-adaptive systems, where architecture models provide the foundation for analysis and repair.
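
One way to picture the closed-loop structure this implies is with a small sketch (purely illustrative: the model interface, threshold, and repair tactic below are hypothetical placeholders, not part of any existing framework): a supervisory loop keeps an architecture-level model of the running system up to date, checks it against security constraints, and applies a repair tactic when a constraint is violated.

```python
import time

class ArchitectureModel:
    """Toy architecture model tracking per-client request rates."""

    def __init__(self, max_rate=100.0):        # illustrative constraint threshold
        self.max_rate = max_rate
        self.request_rates = {}

    def update(self, probe_data):
        # probe_data: {client_id: requests per second} from runtime probes
        self.request_rates.update(probe_data)

    def violations(self):
        # A constraint over the model: no client may exceed max_rate.
        return [c for c, r in self.request_rates.items() if r > self.max_rate]

def repair(client_id):
    # Placeholder repair tactic; a real system would choose among tactics
    # (throttle, isolate, re-authenticate) based on analysis of the model.
    print(f"throttling suspicious client {client_id}")

def supervisory_loop(model, read_probes, period=0.1, rounds=2):
    """Monitor -> analyze -> repair loop driven by the architecture model."""
    for _ in range(rounds):
        model.update(read_probes())
        for client in model.violations():
            repair(client)
        time.sleep(period)

# Two hypothetical probe readings: one client briefly exceeds the threshold.
fake_probes = iter([{"10.0.0.5": 240.0, "10.0.0.9": 12.0}, {"10.0.0.5": 30.0}])
supervisory_loop(ArchitectureModel(), lambda: next(fake_probes))
```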

License: Creative Commons 2.5
David Garlan


Compositional Security

Anupam Datta is an Assistant Research Professor at Carnegie Mellon University, where he has appointments in the CyLab, Electrical and Computer Engineering, and (by courtesy) Computer Science departments. His research focuses on the scientific foundations of security and privacy. Datta's work has led to new principles for securely composing network protocols and software systems; applications of these principles have influenced several IEEE and IETF standards. His work on privacy protection has led to novel audit algorithms for detecting inappropriate information flows by insiders and their applications in healthcare and Web privacy. Datta has authored a book and over 40 other publications on these topics. He serves on the Steering Committee and as the 2013-14 Program Co-Chair of the IEEE Computer Security Foundations Symposium. He is a co-PI of AFOSR MURIs on Science of Cybersecurity (2012-17) and Collaborative Policies and Assured Information Sharing (2008-13) as well as the HHS SHARPS center on healthcare security (2010-13), and a faculty associate on the NSF TRUST center on security (2007-15). Datta obtained Ph.D. (2005) and M.S. (2002) degrees from Stanford University and a B.Tech. (2000) from IIT Kharagpur, all in Computer Science.

License: Creative Commons 2.5
Anupam Datta
Limin Jia
Jeanette Wing


Cybersecurity Career Engagement

ABSTRACT

Proponents of immersive cybersecurity competitions suggest that they are effective tools for increasing participant awareness, interest, and engagement in cybersecurity careers. Millions of dollars in federal and private investment have been spent on the development of competitions such as: CyberCIEGE from the Naval Postgraduate School; CyberPatriot, sponsored by the Air Force Association using CyberNEXS developed by SAIC; Cyber Quest and NetWars, developed by The SANS Institute; the DC3 Digital Forensics Challenge from the DoD Cyber Crime Center; and Panopoly, developed by the University of Texas, San Antonio. Recently, several NSF Advanced Technological Education Centers collaborated to launch the National Cyber League, which provides weekly competitions leading to an NCAA March Madness-style playoff system and a national championship.

Although anecdotal evidence suggests that competitions may be a positive approach to providing enriched, authentic learning experiences and identifying talent, we lack systematic empirical analysis of the role of cybersecurity competitions for increasing awareness, interest, and engagement in cybersecurity careers. Furthermore, it is uncertain how well the competitions serve to develop a pipeline of talent for cybersecurity professions among underrepresented populations.

The purpose of this study is to fill this gap in our understanding by exploring cybersecurity career engagement among participants in the National Cyber League Fall 2012 pilot season.

The poster will:

  1. Provide data and statistics on cybersecurity competitions in general and NCL specifically;

  2. Summarize current empirical research on the topic;

  3. Describe the approach taken in the current research-in-progress; and

  4. Solicit feedback on the specific project and general research priorities for examining cybersecurity competitions.

BIO

Diana L. Burley is Associate Professor in the Graduate School of Education and Human Development at The George Washington University (GW). Dr. Burley joined GW in 2007 and has served as the inaugural Chair of the Human and Organizational Learning Department and as Director of the Executive Leadership Doctoral Program. Prior to joining the GW faculty, she served as a Program Officer in the Directorate for Education and Human Resources at The National Science Foundation (NSF) where she managed multi-million dollar grant programs designed to increase the capacity of the U.S. higher education enterprise to produce scientists. At NSF, she served as the lead program officer of the Cyber Corps program and based on her work, Dr. Burley was honored by the Federal Chief Information Officers Council and the Colloquium on Information Systems Security Education for outstanding efforts toward the development of the federal cyber security workforce. Most recently, Dr. Burley has been appointed to the 2012 State of Virginia Joint Commission on Technology & Science (JCOTS) Cyber Security Committee and as the co-chair of the National Research Council's Cybersecurity Professionalization for the Nation: Criteria for Future Decision Making for Cybersecurity Committee. She is the co-PI for research of the NSF-funded National CyberWatch Center. In addition, she serves as a GW representative to the Institute for Information Infrastructure Protection (I3P) - a consortium of leading institutions dedicated to strengthening the cyber infrastructure of the United States; and a Research Scholar in the GW Institute for Public Policy. She holds a BA in Economics from the Catholic University of America; an M.S. in Public Management and Policy, an M.S. in Organization Science, and a Ph.D. in Organization Science and Information Technology from Carnegie Mellon University where she studied as a Woodrow Wilson Foundation Fellow in Public Policy.

Diane Burley
David Tobey
Portia Pusey


Exploring Perceptions of Phishing: The Role of Social Engineering in the Science of Security

ABSTRACT

One hundred fifty-five participants completed a survey on Amazon's Mechanical Turk that assessed characteristics of phishing attacks and asked participants to describe their previous experiences and the related consequences. Results indicated that almost all participants had been targets of a phishing attempt, with 22% reporting these attempts were successful. Participants reported actively engaging in efforts to protect themselves online by noticing the "padlock icon" and seeking additional information to verify the legitimacy of e-retailers. Moreover, participants indicated that phishers most frequently pose as members of organizations and that phishing typically occurs via email, yet they are aware that other media might also make them susceptible to phishing scams. The reported consequences of phishing attacks go beyond financial loss, with many participants describing social ramifications such as embarrassment and reduced trust. Implications for research in risk communication and for the design roles of human factors/ergonomics (HF/E) professionals are discussed.

Dr. Christopher B. Mayhorn, Associate Professor and Program Coordinator of the Human Factors and Ergonomics Psychology program, joined the faculty at North Carolina State University in 2002. He earned a B.A. from The Citadel (1992), an M.S. (1995), a Graduate Certificate in Gerontology (1995), and a Ph.D. (1999) from the University of Georgia. He also completed a Postdoctoral Fellowship at the Georgia Institute of Technology. Dr. Mayhorn's current research interests include everyday memory, decision-making, human-computer interaction, and safety and risk communication for older adult populations. Dr. Mayhorn has more than 30 peer-reviewed publications to his credit, and his research has been funded by government agencies such as the National Science Foundation and the National Security Agency. Currently, Chris is serving on the Human Factors and Ergonomics Society (HFES) Government Relations Committee and as the Chair of the Technical Program Committee of HFES.

License: Creative Commons 2.5
Christopher B. Mayhorn
Emerson Murphy-Hill
Christopher M. Kelley
Kyung Wha Hong


Foundations and Analysis of Protocol Indistinguishability

ABSTRACT

Intuitively, two protocols P1 and P2 are indistinguishable if an attacker cannot tell the difference between interactions with P1 and with P2. In this paper we: (i) propose an intuitive notion of indistinguishability that applies to protocols whose equational theories satisfy the finite variant (FV) property, a wide class of algebraic theories for cryptographic protocols; (ii) formalize such a notion in terms of what we call their synchronous product P1 P2 and of two simpler notions, the indistinguishable messages (IM) property and the indistinguishable attacker event sequences (IAES) property; (iii) prove theorems showing how the (IM) and (IAES) properties can be checked by the Maude-NPA tool; and (iv) illustrate such verification with concrete examples. To the best of our knowledge, this work makes it possible for the first time to automatically verify indistinguishability modulo as wide a class of algebraic properties as (FV).

License: Creative Commons 2.5
Jose Meseguer
Catherine Meadows
S. Escobar


Inductive Inference of Security

ABSTRACT

We model security games by combining methods of epistemic game theory and algorithmic information theory. Remarkably, the combination of these two technically and conceptually challenging mathematical theories turns out to be technically simpler than either of them, and perhaps even conceptually clearer. The resulting model turns out to be closely related to the mathematical models of science as inductive inference of algorithms. Introduced in Solomonoff's seminal work in the 1960s, such models are nowadays widely used in algorithmic learning and occasionally in experiment design. The observation that the process of designing secure systems and adapting them after attacks lends itself to the models of scientific theory formation, testing, and refinement brings some concrete problems of cyber security, such as reasoning about security by obscurity, onto the well-ploughed ground of scientific methods.

Dusko Pavlovic is Professor of Information Security at Royal Holloway, University of London, where he leads ASECOLAB, the Adaptive Security and Economics Laboratory. His current research interests concern theory of security, network and quantum computation, and information extraction.

License: Creative Commons 2.5
Dusko Pavlovic
Christian Janson


Lessons Learned from Experimenting with Machine Learning in Intrusion Analysis

ABSTRACT

Intrusion analysis, i.e., the process of combing through IDS alerts and audit logs to identify real successful and attempted attacks, remains a difficult problem in practical network security defense. The major contributing cause of this problem is the large false-positive rate of the sensors used by IDS systems to detect malicious activities. The goal of this work is to examine whether a machine-learned classifier can help a human analyst filter out non-interesting scenarios reported by an IDS alert correlator, so that analysts' time can be saved. This research is conducted in the open-source SnIPS intrusion analysis framework. Our goal is to classify the correlation graphs produced by SnIPS into "interesting" and "non-interesting", where "interesting" means that a human analyst would want to conduct further analysis on the events. We spent a significant amount of time manually labeling SnIPS output correlation graphs, and built prediction models using both supervised and semi-supervised learning approaches. Our experiments revealed a number of interesting observations that give insights into the pitfalls and challenges of applying machine learning to intrusion analysis. The experimental results also indicate that semi-supervised learning is a promising approach towards practical machine learning-based tools that can aid human analysts when only a limited amount of labeled data is available.

The Problem and Our Approach

The IDS sensors that we have to rely on for intrusion analysis often suffer from a large false-positive rate. For example, we run the well-known open-source IDS Snort on our departmental network containing just a couple hundred machines, and Snort produces hundreds of thousands of alerts every day, most of which turn out to be false alarms. The reason for this is well known: to prevent false negatives, i.e., detection misses from overly specific attack signatures, the signatures loaded into the IDS are often as general as possible, so that an activity with even a remote possibility of indicating an attack will trigger an alert. It then becomes the responsibility of a human analyst monitoring the IDS to distinguish the true alarms from the enormous number of false ones. How to deal with the overwhelming prevalence of false positives is the primary challenge in making IDS sensors useful, given that the amount of attack-relevant data is minuscule compared to the titanic volume of data produced by an enterprise network. The dilemma created by this base-rate fallacy, first pointed out by Axelsson [1], has made it virtually impossible to accurately detect intrusions with a single sensor. Due to the lack of effective techniques for handling the false-positive problem, it is common among practitioners to altogether disable IDS signatures that tend to trigger a large number of false positives. In our own campus network, the security analysts did not use the standard Snort signatures at all, but rather resorted to secret attack signatures that are highly specific to their experience and environment and have small false-positive rates. However, as we were told by the security analysts, the secret signatures can only help capture some "low-hanging fruit," and many attacks are likely missed due to the disabled, more generic signatures.
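
To see why the base-rate fallacy is so punishing, consider a back-of-the-envelope Bayes calculation (with purely illustrative numbers, not measurements from our network): even an accurate sensor yields mostly false alarms when attacks are rare.

```python
# Illustrative Bayes calculation of the base-rate fallacy.
# All numbers below are hypothetical, chosen only to show the effect.
p_attack = 1e-5             # fraction of observed events that are malicious
true_positive_rate = 0.99   # sensor fires on 99% of real attacks
false_positive_rate = 0.01  # sensor fires on 1% of benign events

p_alert = (true_positive_rate * p_attack
           + false_positive_rate * (1 - p_attack))

# Probability that an event is a real attack, given that the sensor alerted.
p_attack_given_alert = true_positive_rate * p_attack / p_alert
print(f"P(attack | alert) = {p_attack_given_alert:.3%}")   # roughly 0.1%
```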

Alert correlation, the reconstruction of high-level incidents from low-level events, has been used to remediate this problem. By looking at multiple observation points and correlating alerts, one can potentially reduce false positives and increase confidence in intrusion analysis.

SnIPS is an open-source intrusion analysis framework [11] which provides an alert-correlation module [17] and a prioritizing module [19]. The prioritizing module ranks the correlations by calculating a belief value for each hypothesis that occurs in the correlation. The hypotheses are then ranked by their belief values, with higher-belief hypotheses presented to human analysts first.

When the ranked SnIPS correlation graphs are presented to security analysts, the analysts can "browse" a graph to explore its structure as well as the details of the supporting evidence. If a correlation proves sufficiently interesting, the analysts will conduct further forensic analysis on data outside SnIPS to confirm or rule out the scenario. This process is manual, and even with the help of the belief values, the human analyst still needs to look at the evidential details of the correlation to determine whether it is worth further investigation, as the belief values do not always exactly match the priority determined by the human. Given this, the question is whether anything can be done to further automate the prioritization process, so that the human analysts' time can be saved.

Throughout our experimentation with SnIPS on our departmental network, we found that the user needs to perform many repetitive tasks when analyzing the SnIPS output to determine whether further investigation is warranted. We hypothesize that such repetitive tasks can be learned through a machine-learning approach, so that the prioritization process can be further automated. We adopt machine learning as a candidate technique to further help prioritize intrusion analysis since a human, after examining the SnIPS output, can make a decision on whether or not to further investigate the incident. Thus, there is a basis to believe that the SnIPS output can yield a set of predictive features indicating whether a correlation is interesting or not. Furthermore, the fact that a human analyst will eventually have to look at the correlation graph to make the final decision implies that there will be labeled samples (albeit small in number) available for machine learning. This would be similar to the approach taken in spam filtering, where machine learning has proved to be quite successful [3, 5, 8].
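
As a minimal sketch of the kind of semi-supervised setup we have in mind (using scikit-learn's label spreading; the features and labels below are synthetic stand-ins, since in practice they would be extracted from SnIPS correlation graphs and analyst decisions):

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(1)

# In practice each row would be a feature vector derived from one SnIPS
# correlation graph (graph size, distinct signatures, belief value, ...).
# Synthetic data is used here so the sketch runs stand-alone.
X = rng.normal(size=(500, 6))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)     # 1 = "interesting"

# Only a small fraction of graphs have been labeled by an analyst;
# -1 marks the unlabeled ones, as scikit-learn expects.
y = np.full(500, -1)
labeled = rng.choice(500, size=30, replace=False)
y[labeled] = y_true[labeled]

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y)

# Rank the unlabeled correlation graphs by predicted "interesting" score
# so that likely-interesting ones reach the analyst first.
unlabeled = np.flatnonzero(y == -1)
scores = model.predict_proba(X[unlabeled])[:, 1]
queue = unlabeled[np.argsort(-scores)]
print(queue[:10])
```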

While it is possible that this prioritization process could also be automated through other means such as a rule-based system, we think it is more cost effective if the machine can learn the rules automatically. In the long run, the machine-learned models could provide insights into how to build a non machine learning-based system to do the job automatically.

There has been a long line of work on applying machine learning in anomaly-based intrusion detection [2, 4, 6, 7, 9, 10, 12, 13, 14, 15, 18]. It has been pointed out that significant challenges exist in applying machine learning in this area [16]. Our application of machine learning has a different goal than these past works. Our machine-learned model will help a human analyst to prioritize output from an intrusion analysis system, which relies upon (multiple) IDS systems. Our method is not to build an intrusion detector through machine learning. Our application of machine learning is justified due to the nature of the problem described above.

Acknowledgment

This material is based upon work supported by the U.S. National Science Foundation under Grant Nos. 0954138 and 1018703, by AFOSR under Award No. FA9550-09-1-0138, and by the HP Labs Innovation Research Program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, AFOSR, or Hewlett-Packard Development Company, L.P.

References

[1] Stefan Axelsson. The base-rate fallacy and the difficulty of intrusion detection. ACM Trans. Inf. Syst. Secur., 3(3):186-205, 2000.

[2] Dorothy Denning. An intrusion-detection model. IEEE Transactions on Software Engineering, 13(2), 1987.

[3] J. Goodman, D. Heckerman, and R. Rounthwaite. Stopping spam. Scientific American, 292(4), 2005.

[4] Nico Görnitz, Marius Kloft, Konrad Rieck, and Ulf Brefeld. Active learning for network intrusion detection. In Proceedings of the 2nd ACM Workshop on Security and Artificial Intelligence, 2009.

[5] Paul Graham. Hackers and Painters: Big Ideas from the Computer Age. O'Reilly, 2004.

[6] Steven Andrew Hofmeyr. An Immunological Model of Distributed Detection and Its Application to Computer Security. PhD thesis, University of New Mexico, 1999.

[7] Wenjie Hu, Yihua Liao, and V. Rao Vemuri. Robust anomaly detection using support vector machines. In Proceedings of the International Conference on Machine Learning (ICML), 2003.

[8] G. Hulten and J. Goodman. Tutorial on junk e-mail filtering, 2004.

[9] Harold S. Javitz and Alfonso Valdes. The NIDES statistical component: Description and justification. Technical report, SRI International, 1993.

[10] Pavel Laskov, Patrick Düssel, Christin Schäfer, and Konrad Rieck. Learning intrusion detection: Supervised or unsupervised? In Image Analysis and Processing - ICIAP. Springer Berlin / Heidelberg, 2005.

[11] Xinming Ou, S. Raj Rajagopalan, and Sakthiyuvaraja Sakthivelmurugan. An empirical approach to modeling uncertainty in intrusion analysis. In Annual Computer Security Applications Conference (ACSAC), Dec 2009.

[12] Konrad Rieck. Machine Learning for Application-Layer Intrusion Detection. PhD thesis, Technische Universität Berlin, 2009.

[13] Konrad Rieck. Self-learning network intrusion detection. Information Technology (IT), 53(3), 2011.

[14] William Robertson, Federico Maggi, Christopher Kruegel, and Giovanni Vigna. Effective anomaly detection with scarce training data. In Proceedings of the Network and Distributed System Security Symposium (NDSS), San Diego, CA, USA, Feb 2010.

[15] Chris Sinclair, Lyn Pierce, and Sara Matzner. An application of machine learning to network intrusion detection. In Proceedings of the 15th Annual Computer Security Applications Conference (ACSAC), 1999.

[16] Robin Sommer and Vern Paxson. Outside the closed world: On using machine learning for network intrusion detection. In 31st IEEE Symposium on Security and Privacy (S&P), 2010.

[17] Sathya Chandran Sundaramurthy, Loai Zomlot, and Xinming Ou. Practical IDS alert correlation in the face of dynamic threats. In the 2011 International Conference on Security and Management (SAM'11), Las Vegas, USA, July 2011.

[18] Zheng Zhang, Jun Li, C. N. Manikopoulos, Jay Jorgenson, and Jose Ucles. HIDE: A hierarchical network intrusion detection system using statistical preprocessing and neural network classification. In Proceedings of the IEEE Workshop on Information Assurance and Security, 2001.

[19] Loai Zomlot, Sathya Chandran Sundaramurthy, Kui Luo, Xinming Ou, and S. Raj Rajagopalan. Prioritizing intrusion analysis using Dempster-Shafer theory. In 4th ACM Workshop on Artificial Intelligence and Security (AISec), 2011.

BIO

Loai Zomlot is a Ph.D. candidate in the Department of Computing and Information Sciences at Kansas State University (K-State). He joined the Ph.D. program in Fall 2008 after receiving a master's degree in software engineering from K-State through the Fulbright program. Mr. Zomlot's research focuses on enterprise network security defense, especially intrusion detection and analysis. His main goal is to handle uncertainty in network intrusion analysis, i.e., the process of combing through IDS alerts and audit logs to identify and remediate true successful and attempted attacks. This includes discovering and applying reasoning-under-uncertainty approaches that help mitigate this problem.

License: Creative Commons 2.5
Loai Zomlot
Xinming Ou
Doina Caragea
Sathya Chandran


Making Physical Inferences to Enhance Wireless Security

ABSTRACT

Securing wireless communication remains challenging in dynamic environments due to the shared nature of the wireless medium and the lack of fixed key-management infrastructures. Generating secret keys using physical-layer information thus becomes an attractive alternative to complement traditional cryptographic methods. Although recent work has demonstrated that Received Signal Strength (RSS) based secret key extraction is practical, existing RSS-based key generation techniques can only obtain coarse-grained information about the radio channel and are hence largely limited in the rate at which they generate secret bits. In this project, we show that exploiting the channel response from multiple Orthogonal Frequency-Division Multiplexing (OFDM) subcarriers can provide fine-grained channel information and achieve a higher bit generation rate for both static and mobile cases in real-world scenarios. We further develop a Channel Gain Complement (CGC) assisted secret key extraction scheme to cope with the channel non-reciprocity encountered in practice. Our extensive experiments using WiFi networks in both indoor and outdoor environments demonstrate the feasibility of utilizing Channel State Information to perform secret key extraction in practice. The experimental results also show that our proposed approach is resilient to malicious attacks identified as harmful to RSS-based techniques, including the predictable channel attack and the stalking attack. Thus, our preliminary work shows that it is practical to enhance wireless security by making inferences from unique physical properties.
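
As a rough illustration of the quantization step such schemes rely on (a simplified sketch with illustrative thresholds and simulated measurements, not the CGC scheme itself), both endpoints can map their channel measurements to bits and keep only the positions where both sides produced a confident bit:

```python
import numpy as np

rng = np.random.default_rng(7)

def quantize(gains, guard=0.5):
    """Map channel-gain samples to bits, dropping samples in a guard band.

    Samples well above the mean become 1, well below become 0; samples
    near the mean are dropped so that small non-reciprocal differences
    between the two ends do not immediately cause bit mismatches.
    Thresholds here are illustrative.
    """
    mu, sigma = gains.mean(), gains.std()
    bits = {}
    for i, g in enumerate(gains):
        if g > mu + guard * sigma:
            bits[i] = 1
        elif g < mu - guard * sigma:
            bits[i] = 0
    return bits

# Simulated reciprocal channel: Alice's and Bob's measurements of the same
# subcarriers differ only by small hardware/noise asymmetries.
channel = rng.normal(size=256)
alice = quantize(channel + 0.05 * rng.normal(size=256))
bob = quantize(channel + 0.05 * rng.normal(size=256))

# Keep only indices both sides retained; the agreed bits would then go
# through information reconciliation and privacy amplification.
common = sorted(set(alice) & set(bob))
agreement = np.mean([alice[i] == bob[i] for i in common])
print(f"{len(common)} candidate bits, agreement {agreement:.2%}")
```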

Yingying (Jennifer) Chen is an Associate Professor in the Department of Electrical and Computer Engineering at Stevens Institute of Technology. Her research interests include cyber security and privacy, wireless and sensor networks, mobile social networks, and pervasive computing. She received her Ph.D. degree in Computer Science from Rutgers University. Prior to joining Stevens Institute of Technology, she was with Alcatel-Lucent at Holmdel and Murray Hill, New Jersey. She has co-authored the book Securing Emerging Wireless Systems (Springer, 2009) and published over 70 journal articles and refereed conference papers. She is the director of the Data Analysis and Information Security (DAISY) Lab at Stevens. She is the recipient of the NSF CAREER Award (2010) and a Google Research Award (2010). She received the Stevens Board of Trustees Award for Scholarly Excellence in 2010. She is also the recipient of Best Paper Awards from the ACM International Conference on Mobile Computing and Networking (MobiCom) 2011 and the International Conference on Wireless On-demand Network Systems and Services (WONS) 2009, as well as the Best Technological Innovation Award from the International TinyOS Technology Exchange 2006. She also received the IEEE Outstanding Contribution Award from the IEEE New Jersey Coast Section each year from 2005 to 2009, and the New Jersey Inventors Hall of Fame Innovators Award in 2012. Her research has been reported in numerous media outlets including the Wall Street Journal, MIT Technology Review, Inside Science, NPR, and CNET.

License: Creative Commons 2.5
YingYing Chen
Jie Yang


Privacy and Security in Distributed Control: Differentially Private Consensus

ABSTRACT

Towards understanding the limits and the performance costs of privacy and security in database-driven control systems, we study the problem of achieving consensus in a group while preserving the privacy of individuals. The iterative consensus problem requires a set of processes or agents with different initial values to interact and update their states so as to eventually converge to a common value. Protocols solving iterative consensus serve as building blocks in a variety of systems where distributed coordination is required for load balancing, data aggregation, sensor fusion, filtering, and synchronization. In this paper, we introduce the private iterative consensus problem, where agents are required to converge while protecting the privacy of their initial values from honest-but-curious adversaries. Protecting the initial states, in many applications, suffices to protect all subsequent states of the individual participants.

We adapt the notion of differential privacy to this setting of iterative computation. Next, we present a server-based and a completely distributed randomized mechanism for solving differentially private iterative consensus with adversaries who can observe the messages as well as the internal states of the server and a subset of the clients. Our analysis establishes the tradeoff between privacy and accuracy in iterative consensus: for given epsilon, b > 0, the epsilon-differentially private mechanism for N agents is guaranteed to converge to a value within O(1/(epsilon*sqrt(b*N))) of the average of the initial values, with probability at least 1 - b.
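
A minimal sketch of the flavor of mechanism described above (not the paper's exact update rule or noise schedule; the decay rate, step size, and graph are illustrative assumptions): each agent perturbs the value it shares with Laplace noise calibrated to the privacy parameter epsilon, while averaging toward its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_consensus(x0, neighbors, epsilon, rounds=200, step=0.2):
    """Toy differentially private averaging over an undirected graph.

    x0        : initial values (the quantities whose privacy is protected)
    neighbors : adjacency list, neighbors[i] = agents adjacent to agent i
    epsilon   : privacy parameter; smaller epsilon means more noise
    The geometrically decaying noise schedule here is illustrative only.
    """
    x = np.array(x0, dtype=float)
    for t in range(rounds):
        scale = (1.0 / epsilon) * (0.9 ** t)        # decaying Laplace scale
        shared = x + rng.laplace(0.0, scale, size=x.shape)  # noisy messages
        x = x + step * np.array(
            [sum(shared[j] - x[i] for j in neighbors[i]) for i in range(len(x))]
        )
    return x

# Four agents on a ring; the consensus value approximates the average of
# the initial values, with accuracy degrading as epsilon shrinks.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(private_consensus([1.0, 4.0, 2.0, 7.0], ring, epsilon=0.5))
```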

Sayan Mitra is an Assistant Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign and is affiliated with the Department of Computer Science, the Coordinated Science Lab, and the Information Trust Institute at Illinois. His research is on foundations of distributed and cyber-physical systems. He has authored more than 60 articles in international journals and conference proceedings, and his work has led to the creation of several mathematical, algorithmic, and software tools for verification and synthesis. Sayan graduated from MIT in 2007 and spent one year as a post-doctoral researcher at the Center for the Mathematics of Information at Caltech. Earlier, he completed an M.Sc. at the Indian Institute of Science, Bangalore, and an undergraduate degree in EE at Jadavpur University, Kolkata. He recently received the National Science Foundation's CAREER award (2011), an AFOSR Young Investigator Research Program Award (2012), a Samsung GRO award (2012), and a best paper award at the IFIP International Conference on Formal Techniques for Distributed Systems (FMOODS/FORTE '12).

Sayan Mitra
Zhenqi Huang
Geir Dullerud


Privacy-by-ReDesign: Alleviating Users’ Privacy Concerns for Third-Party Applications on Facebook

ABSTRACT

The extensive disclosure of personal information by users of online social networks (OSNs) has made privacy concerns particularly salient. A number of studies have investigated users' privacy attitudes and the risks users face when they fail to adequately protect their information. The large amount of data collection and transmission by third-party applications ("apps") adds a further dimension to the complexity of studying privacy in the context of OSNs. Such a particularly aggressive mode of data access and transmission raises a new set of privacy challenges, because users' private information can easily be revealed by their own, and even their friends', use of apps. A heightened need to empower user privacy control for third-party apps arises from the inability to monitor the data use of app providers within and outside of the Facebook platform and the inherent uncertainty about their data practices.

In this work, we first systematically study apps' current practices for privacy notice and consent by: i) collecting data from the 1800 most popular Facebook apps to record their data collection practices concerning users and their friends, and ii) developing our own Facebook app to conduct a number of tests to identify problems in the current design of privacy authorization dialogs for third-party apps on Facebook. We find that in the current design of the privacy authorization dialogs, there is no way for users to limit apps' information access or publishing abilities during the installation process. Even the post-installation settings cannot sufficiently help users control what information they share with apps.

To address these problems, we employ the Privacy-by-ReDesign approach and propose two enhanced versions of the privacy authorization dialog (shown in Figure 1 and Figure 2), which highlight control and awareness as the essential factors underlying privacy concerns in the context of third-party apps on Facebook. The two new designs aim to satisfy the following design principles: 1) the authorization dialog should provide options for a user to control the app's information access and publishing abilities before the app is added to the user's Facebook profile; and 2) the authorization dialog should alert the user when the app asks for sensitive private information. A field experiment with 150 Facebook users was conducted to investigate whether users can more adequately express their preferences for sharing and releasing personal information with these newly designed privacy authorization dialogs. We believe this work provides both conceptual and empirical insights, in the form of design recommendations, for addressing privacy concerns toward third-party apps on Facebook.
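
A small sketch of the data structure behind such a dialog (field names and rendering are hypothetical, not the interfaces we built) illustrates the two principles: every requested permission can be individually declined before installation, and sensitive items carry an alert flag.

```python
from dataclasses import dataclass

@dataclass
class PermissionRequest:
    name: str            # e.g. "basic_info", "photos", "friends_birthday"
    sensitive: bool      # principle 2: flag items that should trigger an alert
    granted: bool = True # principle 1: user may toggle this off pre-install

def render_dialog(requests):
    """Console rendering of the authorization dialog (illustrative only)."""
    for req in requests:
        alert = "  [!] sensitive data" if req.sensitive else ""
        state = "[x]" if req.granted else "[ ]"
        print(f"{state} {req.name}{alert}")

app_request = [
    PermissionRequest("basic_info", sensitive=False),
    PermissionRequest("photos", sensitive=True),
    PermissionRequest("friends_birthday", sensitive=True),
]
# The user declines one permission before the app is added to the profile.
app_request[1].granted = False
render_dialog(app_request)
```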

Publications:

  • Wang, N., Grossklags, J., and Xu, H. (2013). An Online Experiment of Social Applications' Privacy Authorization Dialogues. Proceedings of the 16th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW), San Antonio, TX.

  • Xu, H., Wang, N., and Grossklags, J. (2012). Privacy-by-ReDesign: Alleviating Privacy Concerns for Third-Party Applications. Proceedings of the 33rd Annual International Conference on Information Systems (ICIS), Orlando, FL.

  • Wang, N., Xu, H., and Grossklags, J. (2011). Third-Party Apps on Facebook: Privacy and the Illusion of Control. Proceedings of the ACM Symposium on Computer Human Interaction for Management of Information Technology (CHIMIT), Boston, MA.

Dr. Heng Xu is a tenured associate professor of Information Sciences and Technology at the Pennsylvania State University where she is a recipient of the endowed PNC Technologies Career Development Professorship. She leads the Privacy Assurance Lab (PAL), an inter-disciplinary research group working on a diverse set of projects related to understanding and assuring information privacy. She is also the associate director of the Center for Cyber-Security, Information Privacy and Trust (LIONS Center) at Penn State.

Her current research focus is on the interplay between social and technological issues associated with information privacy. She approaches privacy issues through a combination of empirical, theoretical, and technical research efforts. Her research projects have dealt with individuals' information privacy concerns and behaviors, strategic management of organizational privacy and security practices, and the design and empirical evaluation of privacy-enhancing technologies. At Penn State, she teaches courses on security and risk analysis, integration of privacy and security, human information behavior, and organizational informatics. Dr. Xu has been granted a Faculty Early Career Development (CAREER) Award, one of the National Science Foundation's most prestigious awards in support of junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education, and the integration of education and research.

Dr. Xu has authored over 70 research papers on information privacy, security management, human-computer interaction, and technology innovation adoption. Her research has appeared in Decision Support Systems, Information Systems Research, Information & Management, Journal of Management Information Systems, Journal of the American Society for Information Science and Technology, Journal of the Association for Information Systems, MIS Quarterly, and other journals. Her interdisciplinary privacy research has recently been featured in the Wall Street Journal's Digits blog, on the Kathleen Dunn Show, which airs live on Wisconsin Public Radio, and on the TV show To the Best of My Knowledge in Central Pennsylvania.

License: Creative Commons 2.5
Heng Xu
Jens Grossklags
Na Wang
Pan Shi

Side-channel Vulnerability Factor

ABSTRACT

There have been many attacks that exploit information leakage and many proposed countermeasures to protect against these attacks. Currently, however, there is no systematic, holistic methodology for understanding leakage or making performance-security trade-offs. As a result, it is not well understood how various system design decisions affect information leakage or a system's vulnerability to side-channel attacks.

In this paper, we propose a new metric for measuring information leakage called the Side-channel Vulnerability Factor (SVF). SVF is based on our observation that all side-channel attacks ranging from physical to microarchitectural to software rely on recognizing leaked execution patterns. Accordingly, SVF examines patterns in attackers' observations and measures their correlation to the victim's actual execution patterns.

SVF can be used by designers to evaluate security options and create robust systems, and by attackers to pick out leaky targets. As a detailed case study, we show how SVF can be used to evaluate microarchitectural design choices. This study demonstrates the danger of evaluating ad hoc side-channel countermeasures in a vacuum and without quantitative leakage metrics. A whole-system security metric like SVF has the potential to establish a quantitative basis for evaluating system security decisions pertaining to side-channel leaks.

1 Introduction

Computing is steadily moving to a model where most of the data lives on the cloud and personal electronic devices are used to access this data from multiple locations. The aggregation of services in the backend provides increased efficiency but also creates a security risk because of information leakage through shared resources such as processors, caches and networks. These information leaks, known as side-channel leaks, can compromise security by allowing a user to obtain information from an unauthorized security level or even by simply violating observability restrictions.

In a side-channel attack, an attacker is able to deduce secret data by observing the indirect effects of that data. For instance, in Figure 1 Alice runs a program on a shared system. The inputs to that program may include URLs she is requesting, or sensitive information like encryption keys for an HTTPS connection. Even though the shared system is secure enough that attackers cannot directly read Alice's inputs, they can observe and leverage the inputs' indirect effects on the system which leave unique signatures. For instance, web pages have different sizes and fetch latencies. Different bits in the encryption key affect processor cache and core usage in different ways. All of these network and processor effects can and have been measured by attackers. Through complex post-processing, attackers are able to gain a surprising amount of information from this data.

While defenses to many side-channels have been proposed, currently no metrics exist to quantitatively capture the vulnerability of a system to side-channel attacks.

Existing security analyses offer only existence proofs that a specific attack on a particular machine is possible or that it can be defeated. As a result, it is largely unknown what level of protection (or conversely, vulnerability) modern computing architectures provide. Does turning off simultaneous multi-threading or partitioning the caches truly plug the information leaks? Does a particular network feature obscure information needed by an attacker? Although each of these modifications can be tested easily enough, and they are likely to defeat existing, documented attacks, it is extremely difficult to show that they increase resiliency to future attacks, or even that they increase difficulty for an attacker using novel improvements to known attacks.

To solve this problem, we present a quantitative metric for measuring side-channel vulnerability. We observe a commonality in all side-channel attacks: the attacker always uses patterns in the victim's program behavior to carry out the attack. These patterns arise from the structure of the programs used, typical user behavior, and user inputs. For instance, memory access patterns in OpenSSL (a commonly used crypto library) have been used to deduce secret encryption keys [4]. These accesses were indirectly observed through a cache shared between the victim and the attacker process. As another example, crypto keys on smart cards have been compromised by measuring power-profile patterns arising from repeated crypto operations.

In addition to being central to side channels, patterns have the useful property of being computationally recognizable. In fact, pattern recognition in the form of phase detection [3, 5] is well known and used in computer architecture. In light of our observation about patterns, it seems obvious that side-channel attackers actually do no more than recognize execution phase shifts over time in victim applications. In the case of encryption, computing with a 1 bit from the key is one phase, whereas computing with a 0 bit is another. By detecting shifts from one phase to the other, an attacker can reconstruct the original key [4, 2]. Even HTTPS attacks work similarly - the attacker detects the network phase transitions from "request" to "waiting" to "transferring" and times each phase. The timing of each phase is, in many cases, sufficient to identify a surprising amount and variety of information about the request and user session [1]. Given this commonality of side-channel attacks, our key insight is that side-channel information leakage can be characterized entirely by recognition of patterns through the channel. Figure 2 shows an example of pattern leakage through two microarchitectures, one of which transmits patterns readily and one of which does not.

Accordingly, we can measure information leakage by computing the correlation between ground-truth patterns and attacker observed patterns. We call this correlation Side-channel Vulnerability Factor (SVF). SVF measures the signal-to-noise ratio in an attacker's observations. While any amount of leakage could compromise a system, a low signal-to-noise ratio means that the attacker must either make do with inaccurate results (and thus make many observations to create an accurate result) or become much more intelligent about recovering the original signal. This assertion appears to be historically true, as evidenced by the example SVFs given in Table ??. While the attack [4] on the 0.73 SVF system was relatively simple, the 0.27 system's attack [2] required a trained artificial neural network to filter noisy observations.
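
A condensed sketch of how such a correlation can be computed (a simplified rendering of the idea, not the exact SVF procedure; the traces, distance metric, and interval choice are illustrative): turn both the victim's ground-truth trace and the attacker's observation trace into pairwise-similarity profiles over time intervals, then correlate the two profiles.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def leakage_correlation(victim_trace, attacker_trace):
    """Toy leakage score in the spirit of SVF.

    Each trace is an (intervals x features) array: the victim's actual
    behavior and the attacker's observations over the same time intervals.
    Patterns, not raw values, are compared: each trace is turned into a
    pairwise-distance profile across intervals, and the two profiles are
    correlated.  Details are illustrative simplifications.
    """
    victim_pattern = pdist(victim_trace, metric="euclidean")
    attacker_pattern = pdist(attacker_trace, metric="euclidean")
    r, _ = pearsonr(victim_pattern, attacker_pattern)
    return max(r, 0.0)   # anti-correlation treated as no useful leakage

rng = np.random.default_rng(42)
truth = rng.random((100, 8))                              # ground-truth trace
noisy_view = truth[:, :4] + 0.3 * rng.random((100, 4))    # attacker's view
print(leakage_correlation(truth, noisy_view))             # near 1 => leaky design
```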

As a case study for SVF, we examine the side-channel vulnerability of processor memory microarchitecture, specifically caches, which have been shown to be vulnerable in previous studies. Our case study shows that design features can interact and affect system leakage in odd, non-linear ways; evaluation of a system's security, therefore, must take all components into account. Further, we observed that features designed or thought to protect against side channels (such as dynamic cache partitioning) can themselves leak information. These results indicate the potential and value of embedding our evaluation method into the traditional architecture design cycle.

SVF can be useful to architects, auditors, and attackers. A security-minded computer architect can use SVF to determine microarchitecture configurations that reduce information leakage. Security auditors can use the vulnerability metric to classify systems according to CC or TCSEC criteria. Attackers can use the SVF methodology to determine vulnerable programs and microarchitectures. SVF can be used to identify components in a system that leak sensitive information. These components can then be targeted for attack or defense.

To summarize, we propose a metric and methodology for measuring information leakage in systems; this metric represents a step in the direction of a quantitative approach to system security, a direction that has not been explored before. We also evaluate cache design parameters for their effect on side-channel vulnerability and present several surprising results, motivating the use of a quantitative approach for security evaluation. Finally, we briefly discuss how our measurement methods can be extended to network, disk, or entire systems.

References

[1] S. Chen, R. Wang, X. Wang, and K. Zhang. Side-channel leaks in web applications: A reality today, a challenge tomorrow. In Security and Privacy (SP), 2010 IEEE Symposium on, pages 191-206, May 2010.

[2] D. Gullasch, E. Bangerter, and S. Krenn. Cache games - bringing access-based cache attacks on AES to practice. In Security and Privacy (SP), 2011 IEEE Symposium on, pages 490-505, May 2011.

[3] M. J. Hind, V. T. Rajan, and P. F. Sweeney. Phase shift detection: A problem classification, 2003.

[4] C. Percival. Cache missing for fun and profit, 2005.

[5] T. Sherwood, E. Perelman, G. Hamerly, S. Sair, and B. Calder. Discovering and exploiting program phases. Micro, IEEE, 23(6):84-93, Nov.-Dec. 2003.

License: Creative Commons 2.5
John Demme
Robert Martin
Adam Waksman
Simha Sethumadhavan


Software Security Metrics

ABSTRACT

A well-defined and fully validated suite of software security metrics is desirable, one that takes into account the software's internal attributes, the developers who develop the software, the attackers who attack it, and the users who use it. We aim to investigate existing and new security metrics to predict which code locations are likely to contain vulnerabilities. In particular, we outline the following broad categories in which to advance existing software engineering research on software security metrics: 1) Security Metrics for Incorporating Global Attack Trends, 2) Grounded Theory for the Identification of Security Metrics, and 3) Security Metrics for Incorporating Run-time Information from Deployed Systems.

License: Creative Commons 2.5
Tao Xie
Laurie Williams
