Biblio

Filters: First Letter of Title is S
S
Eric Yuan, Naeem Esfahani, Sam Malek.  2014.  A Systematic Survey of Self-Protecting Software Systems. ACM Transactions on Autonomous and Adaptive Systems (TAAS) - Special Section on Best Papers from SEAMS 2012. 8(4)

Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have been shown to be inadequate for the challenges posed by modern software systems. Self-protection, like other self-* properties, allows the system to adapt to the changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost-effective. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. This article presents a significant extension of our preliminary study in this area. In particular, unlike our preliminary study, here we have followed a systematic literature review process, which has broadened the scope of our study and strengthened the validity of our conclusions. By proposing and applying a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area, we have identified key patterns, trends and challenges in the existing approaches, which reveal a number of opportunities that will shape the focus of future research efforts.

Michael Maass, Adam Sales, Benjamin Chung, Joshua Sunshine.  2016.  A systematic analysis of the science of sandboxing. PeerJ Computer Science. 2

Sandboxes are increasingly important building materials for secure software systems. In recognition of their potential to improve the security posture of many systems at various points in the development lifecycle, researchers have spent the last several decades developing, improving, and evaluating sandboxing techniques. What has been done in this space? Where are the barriers to advancement? What are the gaps in these efforts? We systematically analyze a decade of sandbox research from five top-tier security and systems conferences using qualitative content analysis, statistical clustering, and graph-based metrics to answer these questions and more. We find that the term “sandbox” currently has no widely accepted or acceptable definition. We use our broad scope to propose the first concise and comprehensive definition for “sandbox” that consistently encompasses research sandboxes. We learn that the sandboxing landscape covers a range of deployment options and policy enforcement techniques collectively capable of defending diverse sets of components while mitigating a wide range of vulnerabilities. Researchers consistently make security, performance, and applicability claims about their sandboxes and tend to narrowly define the claims to ensure they can be evaluated. Those claims are validated using multi-faceted strategies spanning proof, analytical analysis, benchmark suites, case studies, and argumentation. However, we find two cases for improvement: (1) the arguments researchers present are often ad hoc and (2) sandbox usability is mostly uncharted territory. We propose ways to structure arguments to ensure they fully support their corresponding claims and suggest lightweight means of evaluating sandbox usability.

Limin Jia, Shayak Sen, Deepak Garg, Anupam Datta.  2015.  System M: A Program Logic for Code Sandboxing and Identification.

Security-sensitive applications that execute untrusted code often check the code’s integrity by comparing its syntax to a known good value or sandbox the code to contain its effects. System M is a new program logic for reasoning about such security-sensitive applications. System M extends Hoare Type Theory (HTT) to trace safety properties and, additionally, contains two new reasoning principles. First, its type system internalizes logical equality, facilitating reasoning about applications that check code integrity. Second, a confinement rule assigns an effect type to a computation based solely on knowledge of the computation’s sandbox. We prove the soundness of System M relative to a step-indexed trace-based semantic model. We illustrate both new reasoning principles of System M by verifying the main integrity property of the design of Memoir, a previously proposed trusted computing system for ensuring state continuity of isolated security-sensitive applications.
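
A schematic reading of the confinement principle, in our own notation rather than the rules as formalized in System M: if everything the sandbox exposes to the confined computation has effects within E, the computation can be given effect type E without inspecting its code.

```latex
% Illustrative only: our notation, not the rule as formalized in System M.
% Read: if sandbox s only exposes effects in E, then any computation e
% confined to s can be assigned effect type E without analyzing e's code.
\[
\frac{\Gamma \vdash \mathsf{permits}(s,\,E) \qquad \Gamma \vdash \mathsf{confined}(e,\,s)}
     {\Gamma \vdash e : \mathsf{comp}\ E}
\]
```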

Donghoon Kim, Peng Ning, Mladen A. Vouk.  2014.  A survey of common security vulnerabilities and corresponding countermeasures for SaaS. IEEE Globecom Workshop on Cloud Computing Systems, Networks and Applications (CCSNA 2014). :59-63.

Software as a Service (SaaS) is the most prevalent service delivery mode for cloud systems. This paper surveys common security vulnerabilities and corresponding countermeasures for SaaS. It is primarily focused on the work published in the last five years. We observe current SaaS security trends and a lack of sufficiently broad and robust countermeasures in some SaaS security areas, such as Identity and Access Management, due to the growth of SaaS applications.

Rui Shu, Xiaohui Gu, William Enck.  2017.  A Study of Security Vulnerabilities on Docker Hub. Proceedings of the ACM Conference on Data and Application Security and Privacy (CODASPY).

Docker containers have recently become a popular approach to provision multiple applications over shared physical hosts in a more lightweight fashion than traditional virtual machines. This popularity has led to the creation of the Docker Hub registry, which distributes a large number of official and community images. In this paper, we study the state of security vulnerabilities in Docker Hub images. We create a scalable Docker image vulnerability analysis (DIVA) framework that automatically discovers, downloads, and analyzes both official and community images on Docker Hub. Using our framework, we have studied 356,218 images and made the following findings: (1) both official and community images contain more than 180 vulnerabilities on average when considering all versions; (2) many images have not been updated for hundreds of days; and (3) vulnerabilities commonly propagate from parent images to child images. These findings demonstrate a strong need for more automated and systematic methods of applying security updates to Docker images, and our current Docker image analysis framework provides a good foundation for such automatic security updates.
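
As a rough illustration of the discover/download/analyze loop described above (this is not the authors' DIVA framework), the sketch below pulls a couple of placeholder Debian-based images and extracts their installed package lists, the raw material a real analysis would match against a CVE feed. It assumes a local Docker daemon is available.

```python
# Illustrative sketch, not the authors' DIVA framework: pull placeholder
# Debian-based images and extract their installed package lists.
# Assumes a local Docker daemon; image names are placeholders.
import subprocess

IMAGES = ["debian:stable", "ubuntu:latest"]  # placeholders, not from the paper

def installed_packages(image: str) -> dict:
    """Return {package: version} for a Debian-based image."""
    subprocess.run(["docker", "pull", image], check=True)
    out = subprocess.run(
        ["docker", "run", "--rm", image,
         "dpkg-query", "-W", "-f=${Package} ${Version}\\n"],
        check=True, capture_output=True, text=True,
    ).stdout
    return dict(line.split(maxsplit=1) for line in out.splitlines() if " " in line)

if __name__ == "__main__":
    for image in IMAGES:
        pkgs = installed_packages(image)
        # Matching (package, version) pairs against a vulnerability database
        # (e.g., the Debian security tracker) is deliberately left out here.
        print(f"{image}: {len(pkgs)} installed packages")
```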

Joshua Sunshine, James Herbsleb, Jonathan Aldrich.  2014.  Structuring Documentation to Support State Search: A Laboratory Experiment about Protocol Programming. Proceedings of the 28th European Conference on Object-Oriented Programming (ECOOP 2014). 8586

Application Programming Interfaces (APIs) often define object protocols. Objects with protocols have a finite number of states and in each state a different set of method calls is valid. Many researchers have developed protocol verification tools because protocols are notoriously difficult to follow correctly. However, recent research suggests that a major challenge for API protocol programmers is effectively searching the state space. Verification is an ineffective guide for this kind of search. In this paper we instead propose Plaiddoc, which is like Javadoc except it organizes methods by state instead of by class and it includes explicit state transitions, state-based type specifications, and rich state relationships. We compare Plaiddoc to a Javadoc control in a between-subjects laboratory experiment. We find that Plaiddoc participants complete state search tasks in significantly less time and with significantly fewer errors than Javadoc participants.
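
Plaiddoc itself targets Java APIs; as a language-neutral sketch of what state-organized documentation looks like, here is a hypothetical connection object (not an example from the paper) whose methods are documented by the states in which they are valid and the transitions they cause.

```python
# Hypothetical connection object (not from the paper) documented
# Plaiddoc-style: methods are grouped by the protocol state in which they
# are valid, and each one states the transition it causes.
class Connection:
    """Two-state protocol: CLOSED -> OPEN -> CLOSED.

    State CLOSED (initial):
        open()       [CLOSED -> OPEN]
    State OPEN:
        send(data)   [OPEN -> OPEN]
        close()      [OPEN -> CLOSED]
    """

    def __init__(self) -> None:
        self._open = False                      # start in CLOSED

    def open(self) -> None:
        """[CLOSED -> OPEN] Establish the connection."""
        if self._open:
            raise RuntimeError("open() is not valid in state OPEN")
        self._open = True

    def send(self, data: bytes) -> None:
        """[OPEN -> OPEN] Transmit data; not valid in state CLOSED."""
        if not self._open:
            raise RuntimeError("send() is not valid in state CLOSED")
        # transmission elided

    def close(self) -> None:
        """[OPEN -> CLOSED] Release the connection."""
        self._open = False
```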

Nathan Fulton, Cyrus Omar, Jonathan Aldrich.  2014.  Statically Typed String Sanitation Inside a Python. Workshop on Privacy and Security in Programming (PSP), 2014.

Web applications must ultimately command systems like web browsers and database engines using strings. Strings derived from improperly sanitized user input can thus be a vector for command injection attacks. In this paper, we introduce regular string types, which classify strings known statically to be in a specified regular language. These types come equipped with common operations like concatenation, substitution and coercion, so they can be used to implement, in a conventional manner, the portions of a web application or application framework that must directly construct command strings. Simple type annotations at key interfaces can be used to statically verify that sanitization has been performed correctly without introducing redundant run-time checks. We specify this type system in a minimal typed lambda calculus, λRS.

To be practical, adopting a specialized type system like this should not require the adoption of a new programming language. Instead, we advocate for extensible type systems: new type system fragments like this should be implemented as libraries atop a mechanism that guarantees that they can be safely composed. We support this with two contributions. First, we specify a translation from λRS to a language fragment containing only standard strings and regular expressions. Second, taking Python as a language with these constructs, we implement the type system together with the translation as a library using atlang, an extensible static type system for Python being developed by the authors.

Nathan Fulton, Cyrus Omar, Jonathan Aldrich.  2014.  Statically typed string sanitation inside a Python. PSP '14 Proceedings of the 2014 International Workshop on Privacy & Security in Programming.

Web applications must ultimately command systems like web browsers and database engines using strings. Strings derived from improperly sanitized user input can as a result be a vector for command injection attacks. In this paper, we introduce regular string types, which classify strings constrained statically to be in a regular language specified by a regular expression. Regular strings support standard string operations like concatenation and substitution, as well as safe coercions, so they can be used to implement, in an essentially conventional manner, the pieces of a web application or framework that handle strings arising from user input. Simple type annotations at function interfaces can be used to statically verify that sanitization has been performed correctly without introducing redundant run-time checks. We specify this type system first as a minimal typed lambda calculus, λRS. To be practical, adopting a specialized type system like this should not require the adoption of a new programming language. Instead, we advocate for extensible type systems: new type system fragments like this should be implemented as libraries atop a mechanism that guarantees that they can be safely composed. We support this with two contributions. First, we specify a translation from λRS to a calculus with only standard strings and regular expressions. Then, taking Python as a language with these constructs, we implement the type system together with the translation as a library using typy, an extensible static type system for Python.
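
The guarantees above are established statically by λRS/typy; the runtime-checked sketch below only illustrates the interface a regular string type exposes. The class and the sanitizer are our invention, not the authors' library.

```python
# Runtime-checked stand-in for a "regular string" value.  The paper's point
# is that these checks are discharged *statically* by the type system
# (lambda_RS, implemented with typy); this class and the sanitizer below
# only mimic the interface with dynamic checks.
import re

class RegularString:
    def __init__(self, pattern: str, value: str):
        if re.fullmatch(pattern, value) is None:
            raise ValueError(f"{value!r} is not in the language {pattern!r}")
        self.pattern = pattern
        self.value = value

    def concat(self, other: "RegularString") -> "RegularString":
        # Concatenating values yields a value in the concatenated language.
        return RegularString(f"(?:{self.pattern})(?:{other.pattern})",
                             self.value + other.value)

def sanitize(user_input: str) -> RegularString:
    """Return a string checked to contain single quotes only in doubled form."""
    cleaned = user_input.replace("'", "''")
    return RegularString(r"[^']*(?:''[^']*)*", cleaned)

if __name__ == "__main__":
    name = sanitize("O'Brien")
    print(name.value)   # O''Brien -- safe to splice between single quotes
```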

C. Theisen, L. Williams, K. Oliver, E. Murphy-Hill.  2016.  Software Security Education at Scale. 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C). :346-355.

Massively Open Online Courses (MOOCs) provide a unique opportunity to reach students who would not normally be reached, by alleviating the need to be physically present in the classroom. However, teaching software security coursework outside of a classroom setting can be challenging. What are the challenges when converting security material from an on-campus course to the MOOC format? The goal of this research is to assist educators in constructing software security coursework by providing a comparison of classroom courses and MOOCs. In this work, we compare demographic information, student motivations, and student results from an on-campus software security course and a MOOC version of the same course. We found that the two populations of students differed, with the MOOC reaching a more diverse set of students than the on-campus course. We found that students in the on-campus course had higher quiz scores, on average, than students in the MOOC. Finally, we document our experience running the courses and what we would do differently to assist future educators constructing similar MOOCs.

Rogerio de Lemos, Holger Giese, Hausi Muller, Mary Shaw, Jesper Andersson, Marin Litoiu, Bradley Schmerl, Gabriel Tamura, Norha Villegas, Thomas Vogel et al..  2013.  Software engineering for self-adaptive systems: A second research roadmap.

The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.

Gabriel Ferreira.  2017.  Software certification in practice: how are standards being applied? ICSE-C '17 Proceedings of the 39th International Conference on Software Engineering Companion.

Certification schemes exist to regulate software systems and prevent them from being deployed before they are judged fit to use. However, practitioners are often unsatisfied with the efficiency of certification standards and processes. In this study, we analyzed two certification standards, Common Criteria and DO-178C, and collected insights from literature and from interviews with subject-matter experts to identify concepts affecting the efficiency of certification processes. Our results show that evaluation time, reusability of evaluation artifacts, and composition of systems and certified artifacts are barriers to achieve efficient certification.

Michael Lanham, Geoffrey Morgan, Kathleen Carley.  2014.  Social Network Modeling and Agent‐Based Simulation in Support of Crisis De‐escalation. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS. 44

Decision makers need capabilities to quickly model and effectively assess consequences of actions and reactions in crisis de-escalation environments. The creation and what-if exercising of such models has traditionally had onerous resource requirements. This research demonstrates fast and viable ways to build such models in operational environments. Through social network extraction from texts, network analytics to identify key actors, and then simulation to assess alternative interventions, advisors can support practicing and execution of crisis de-escalation activities. We describe how we used this approach as part of a scenario-driven modeling effort. We demonstrate the strength of moving from data to models and the advantages of data-driven simulation, which allow for iterative refinement. We conclude with a discussion of the limitations of this approach and anticipated future work.
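
The middle step of the pipeline sketched above, using network analytics to identify key actors, can be illustrated in a few lines with networkx; the edge list here is invented and none of this is the authors' tooling.

```python
# Sketch of the "network analytics to identify key actors" step: rank actors
# in a co-mention network by centrality.  The edge list is invented and this
# is not the authors' tooling.
import networkx as nx

edges = [  # (actor_a, actor_b, co-mention count) -- placeholder data
    ("leader_1", "mediator", 5), ("leader_2", "mediator", 4),
    ("leader_1", "faction_a", 7), ("leader_2", "faction_b", 6),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

betweenness = nx.betweenness_centrality(G, weight="weight")
degree = dict(G.degree(weight="weight"))

# Actors that bridge otherwise separate groups (high betweenness) are the
# natural targets for de-escalation interventions in the simulation step.
for actor in sorted(G, key=betweenness.get, reverse=True):
    print(f"{actor:10s} betweenness={betweenness[actor]:.2f} degree={degree[actor]}")
```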

Junjie Qian, Witawas Srisa-an, Du Li, Hong Jiang, Sharad Seth, Yaodong Yang.  2015.  SmartStealing: Analysis and Optimization of Work Stealing in Parallel Garbage Collection for Java VM.. Principles and Practice of Programming in Java (PPPJ).

Parallel garbage collection has been used to speed up the collection process on multicore architectures. Similar to other parallel techniques, balancing the workload among threads is critical to ensuring good overall collection performance. To this end, work stealing is employed by the current state-of-the-art Java Virtual Machine, OpenJDK, to keep GC threads from idling during a collection process. However, we found that the current algorithm is not efficient. Its usage can often cause GC performance to be worse than when work stealing is not used. In this paper, we identify three factors that affect work stealing efficiency: determining tasks that can benefit from stealing, the frequency with which to attempt stealing, and the performance impact of failed stealing attempts. Based on this analysis, we propose SmartStealing, a new algorithm that can automatically decide whether to attempt stealing at a particular point during execution. If stealing is attempted, it can efficiently identify a task to steal from. We then compare the collection performances when (i) the default work stealing algorithm is used, (ii) work stealing is not used at all, and (iii) the SmartStealing approach is used. Without modifying the remaining garbage collection system, the evaluation result shows that SmartStealing can reduce the parallel GC execution time for 19 of the 21 benchmarks. The average reduction is 50.4% and the highest reduction is 78.7%. We also investigate the performance of SmartStealing on NUMA and UMA architectures.
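
To make the three factors concrete (which tasks can benefit, how often to attempt a steal, and the cost of failed attempts), here is a toy decision heuristic in the same spirit; the thresholds and bookkeeping are invented and this is not the OpenJDK or SmartStealing code.

```python
# Toy decision heuristic capturing the three factors named above -- which
# tasks can benefit, how often to attempt a steal, and the cost of failed
# attempts.  Thresholds and bookkeeping are invented; this is not the
# OpenJDK or SmartStealing implementation.
import random
from collections import deque

class GCThread:
    def __init__(self, tasks):
        self.local = deque(tasks)   # this thread's marking tasks
        self.failed_steals = 0      # consecutive unsuccessful attempts

    def should_attempt_steal(self, peers) -> bool:
        # Back off after repeated failures, and only bother when some peer
        # has enough queued work for a steal to pay off.
        if self.failed_steals > 3:
            return False
        return any(len(p.local) > 1 for p in peers)

    def try_steal(self, peers):
        victims = [p for p in peers if len(p.local) > 1]
        if not victims:
            self.failed_steals += 1
            return None
        self.failed_steals = 0
        # Steal from the end opposite to where the owner works on its deque.
        return random.choice(victims).local.popleft()
```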

Tingting Yu, Witawas Srisa-an, Gregg Rothermel.  2014.  SimRT: An Automated Framework to Support Regression Testing for Data Races. ICSE 2014 Proceedings of the 36th International Conference on Software Engineering.

Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
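
The two ingredients described above, selecting only the tests that exercise changed race-relevant program elements and prioritizing them, reduce to a small computation once the coverage and change analyses are available; the maps below are placeholders, not SimRT's actual analyses.

```python
# Selection + prioritization in miniature: run only tests that cover changed,
# race-relevant program elements, ordered by how many such elements they
# touch.  The maps are placeholders, not SimRT's actual analyses.
coverage = {                    # test -> program elements it exercises
    "test_reader": {"buffer.get", "lock_a"},
    "test_writer": {"buffer.put", "lock_a", "shared_counter"},
    "test_format": {"formatter.run"},
}
changed_race_elements = {"buffer.put", "shared_counter"}  # from diff + race analysis

selected = [t for t, elems in coverage.items() if elems & changed_race_elements]
prioritized = sorted(selected,
                     key=lambda t: len(coverage[t] & changed_race_elements),
                     reverse=True)

print(prioritized)   # ['test_writer'] -- test_reader and test_format are skipped
```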

Mehdi Mashayekhi, Hongying Du, George F. List, Munindar P. Singh.  2016.  Silk: A Simulation Study of Regulating Open Normative Multiagent Systems. Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI). :1–7.

In a multiagent system, a (social) norm describes what the agents may expect from each other.  Norms promote autonomy (an agent need not comply with a norm) and heterogeneity (a norm describes interactions at a high level independent of implementation details). Researchers have studied norm emergence through social learning where the agents interact repeatedly in a graph structure.

In contrast, we consider norm emergence in an open system, where membership can change, and where no predetermined graph structure exists.  We propose Silk, a mechanism wherein a generator monitors interactions among member agents and recommends norms to help resolve conflicts.  Each member decides on whether to accept or reject a recommended norm.  Upon exiting the system, a member passes its experience along to incoming members of the same type.  Thus, members develop norms in a hybrid manner to resolve conflicts.

We evaluate Silk via simulation in the traffic domain.  Our results show that social norms promoting conflict resolution emerge in both moderate and selfish societies via our hybrid mechanism.
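
A toy rendering of the loop described above, in which a generator recommends a norm whenever members conflict and each member independently adopts or rejects it; the agent types, conflicts, and acceptance probabilities are invented for illustration.

```python
# Toy rendering of the Silk loop: on each conflict the generator recommends a
# norm and every member independently adopts or rejects it.  Agent types,
# conflicts, and acceptance probabilities are invented for illustration.
import random

class Member:
    def __init__(self, agent_type, inherited=None, selfish=False):
        self.agent_type = agent_type
        self.selfish = selfish
        self.norms = set(inherited or [])   # experience handed down on exit

    def decide(self, norm) -> bool:
        if norm in self.norms:
            return True
        return random.random() < (0.3 if self.selfish else 0.8)

def recommend(conflict: str) -> str:
    # e.g., a conflict while merging -> a norm about yielding when merging
    return f"yield_on_{conflict}"

members = [Member("car"), Member("truck", selfish=True)]
for _ in range(100):
    conflict = random.choice(["left_turn", "merge"])
    norm = recommend(conflict)
    for m in members:
        if m.decide(norm):
            m.norms.add(norm)

print({m.agent_type: sorted(m.norms) for m in members})
```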

Lichao Sun, Zhiqiang Li, Qiben Yan, Witawas Srisa-an, Yu Pan.  2016.  SigPID: Significant Permission Identification for Android Malware Detection. 11th International Conference on Malicious and Unwanted Software (MALCON 2016).

A recent report indicates that a newly developed malicious app for Android is introduced every 11 seconds. To combat this alarming rate of malware creation, we need a scalable malware detection approach that is effective and efficient. In this paper, we introduce SIGPID, a malware detection system based on permission analysis to cope with the rapid increase in the number of Android malware. Instead of analyzing all 135 Android permissions, our approach applies 3-level pruning by mining the permission data to identify only significant permissions that can be effective in distinguishing benign and malicious apps. SIGPID then utilizes classification algorithms to classify different families of malware and benign apps. Our evaluation finds that only 22 out of 135 permissions are significant. We then compare the performance of our approach, using only 22 permissions, against a baseline approach that analyzes all permissions. The results indicate that when Support Vector Machine (SVM) is used as the classifier, we can achieve over 90% precision, recall, accuracy, and F-measure, which are about the same as those produced by the baseline approach, while incurring analysis times that are 4 to 32 times smaller than those of using all 135 permissions. When we compare the detection effectiveness of SIGPID to those of other approaches, SIGPID can detect 93.62% of malware in the data set, and 91.4% of unknown malware.
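
The paper's 3-level permission pruning is specific to its mining pipeline; as a hedged sketch of the overall shape (rank permissions, keep a small significant subset, train an SVM), the snippet below uses mutual information and synthetic data in place of the paper's pruning and real app corpus.

```python
# Shape of the approach -- rank permissions, keep a small significant subset,
# train an SVM -- using mutual information and synthetic data in place of the
# paper's 3-level pruning and real app corpus.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 135))      # 135 binary permission features
y = (X[:, :5].sum(axis=1) > 2).astype(int)    # synthetic "malicious" label

X_significant = SelectKBest(mutual_info_classif, k=22).fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_significant, y, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
print("F1 on held-out apps:", round(f1_score(y_te, clf.predict(X_te)), 3))
```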

Nariman Mirzaei, Hamid Bagheri, Riyadh Mahmood, Sam Malek.  2015.  SIG-Droid: Automated System Input Generation for Android Applications. 2015 IEEE 26th International Symposium on Software Reliability Engineering (ISSRE).

Pervasiveness of smartphones and the vast number of corresponding apps have underlined the need for applicable automated software testing techniques. A wealth of research has been focused on either unit or GUI testing of smartphone apps, but little on automated support for end-to-end system testing. This paper presents SIG-Droid, a framework for system testing of Android apps, backed with automated program analysis to extract app models and symbolic execution of source code guided by such models for obtaining test inputs that ensure covering each reachable branch in the program. SIG-Droid leverages two automatically extracted models: Interface Model and Behavior Model. The Interface Model is used to find values that an app can receive through its interfaces. Those values are then exchanged with symbolic values to deal with constraints with the help of a symbolic execution engine. The Behavior Model is used to drive the apps for symbolic execution and generate sequences of events. We provide an efficient implementation of SIG-Droid based in part on Symbolic PathFinder, extended in this work to support automatic testing of Android apps. Our experiments show SIG-Droid is able to achieve significantly higher code coverage than existing automated testing tools targeted for Android.