Biblio: Titles Beginning with P
Erik Zawadzki, Geoffrey Gordon, Andre Platzer.  2013.  A projection algorithm for strictly monotone linear complementarity problems. Proceedings of NIPS OPT2013: Optimization for Machine Learning.

Complementarity problems play a central role in equilibrium finding, physical simulation, and optimization. As a consequence, we are interested in understanding how to solve these problems quickly, and this often involves approximation. In this paper we present a method for approximately solving strictly monotone linear complementarity problems with a Galerkin approximation. We also give bounds for the approximation error, and prove novel bounds on perturbation error. These perturbation bounds suggest that a Galerkin approximation may be much less sensitive to noise than the original LCP.
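
As background, here is a minimal Python sketch of the classical projection iteration for an LCP (find z >= 0 with Mz + q >= 0 and z'(Mz + q) = 0); it converges when M is strictly monotone (positive definite) and the step size is small enough. The paper's contribution, not reproduced here, is to run this kind of iteration in a low-dimensional Galerkin basis and to bound the resulting approximation and perturbation errors.

```python
import numpy as np

def projection_lcp(M, q, alpha=0.05, iters=2000):
    """Classical projection iteration for LCP(M, q).

    Seeks z >= 0 with M z + q >= 0 and z . (M z + q) = 0. Converges for
    strictly monotone (positive definite) M and small enough alpha.
    """
    z = np.zeros(len(q))
    for _ in range(iters):
        # Gradient-style step, then projection onto the nonnegative orthant.
        z = np.maximum(0.0, z - alpha * (M @ z + q))
    return z
```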

Cyrus Omar, Jonathan Aldrich.  2016.  Programmable semantic fragments: the design and implementation of typy. GPCE 2016 Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences.

This paper introduces typy, a statically typed programming language embedded by reflection into Python. typy features a fragmentary semantics, i.e. it delegates semantic control over each term, drawn from Python's fixed concrete and abstract syntax, to some contextually relevant user-defined semantic fragment. The delegated fragment programmatically 1) typechecks the term (following a bidirectional protocol); and 2) assigns dynamic meaning to the term by computing a translation to Python.

We argue that this design is expressive with examples of fragments that express the static and dynamic semantics of 1) functional records; 2) labeled sums (with nested pattern matching a la ML); 3) a variation on JavaScript's prototypal object system; and 4) typed foreign interfaces to Python and OpenCL. These semantic structures are, or would need to be, defined primitively in conventionally structured languages.

We further argue that this design is compositionally well-behaved. It avoids the expression problem and the problems of grammar composition because the syntax is fixed. Moreover, programs are semantically stable under fragment composition (i.e. defining a new fragment will not change the meaning of existing program components).
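
To make the delegation protocol concrete, here is a toy Python sketch of the dispatch idea (our own illustration, not typy's actual API): terms are drawn from Python's abstract syntax, and a contextually chosen fragment both typechecks each term and computes its translation back to Python.

```python
import ast

class Fragment:
    """A semantic fragment: typechecks a term and translates it to Python."""
    def check(self, term, ctx):
        raise NotImplementedError
    def translate(self, term, ctx):
        raise NotImplementedError

class IntFragment(Fragment):
    """Toy fragment assigning integer literals the type 'int', unchanged."""
    def check(self, term, ctx):
        assert isinstance(term, ast.Constant) and isinstance(term.value, int)
        return "int"
    def translate(self, term, ctx):
        return term

def check_and_translate(term, ctx, fragment_for):
    # Delegate semantic control over the term to the relevant fragment.
    frag = fragment_for(term, ctx)
    return frag.check(term, ctx), frag.translate(term, ctx)

# Example: hand an integer literal to the toy fragment.
ty, out = check_and_translate(ast.parse("42", mode="eval").body, {},
                              lambda term, ctx: IntFragment())
```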

Gabriel Moreno, Javier Camara, David Garlan, Bradley Schmerl.  2015.  Proactive Self-Adaptation under Uncertainty: a Probabilistic Model Checking Approach. ESEC/FSE 2015 Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering.

Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, when adaptations have latency, that is, they take some time to produce their effect, they have to be started with sufficient lead time so that they complete by the time their effect is needed. Proactive latency-aware adaptation addresses these issues by making adaptation decisions with a look-ahead horizon and taking adaptation latency into account. In this paper we present an approach for proactive latency-aware adaptation under uncertainty that uses probabilistic model checking for adaptation decisions. The key idea is to use a formal model of the adaptive system in which the adaptation decision is left underspecified through nondeterminism, and to have the model checker resolve the nondeterministic choices so that the accumulated utility over the horizon is maximized. The adaptation decision is optimal over the horizon, and takes into account the inherent uncertainty of the environment predictions needed for looking ahead. Our results show that deciding over a look-ahead horizon, and factoring in both tactic latency and environment uncertainty, considerably improve the effectiveness of adaptation decisions.
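
The decision principle can be pictured with a toy finite-horizon recursion in Python (our stand-in for the probabilistic model checking step, which resolves the same nondeterminism symbolically rather than by explicit enumeration; all function names here are invented for the example).

```python
def lookahead(state, horizon, actions, step, utility):
    """Best accumulated expected utility achievable within the horizon.

    actions(state) lists the adaptation choices left nondeterministic in
    the model; step(state, a) yields (probability, next_state) pairs from
    the environment model; utility(next_state, a) scores one step.
    """
    if horizon == 0:
        return 0.0
    return max(
        sum(p * (utility(nxt, a)
                 + lookahead(nxt, horizon - 1, actions, step, utility))
            for p, nxt in step(state, a))
        for a in actions(state))
```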

Jaspreet Bhatia, Travis Breaux, Liora Friedberg, Hanan Hibshi, Daniel Smullen.  2016.  Privacy Risk in Cybersecurity Data Sharing. WISCS '16 Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security.

As information systems become increasingly interdependent, there is an increased need to share cybersecurity data across government agencies and companies, and within and across industrial sectors. This sharing includes threat, vulnerability and incident reporting data, among other data. For cyberattacks that include sociotechnical vectors, such as phishing or watering hole attacks, this increased sharing could expose customer and employee personal data to increased privacy risk. In the US, privacy risk arises when the government voluntarily receives data from companies without meaningful consent from individuals, or without a lawful procedure that protects an individual's right to due process. In this paper, we describe a study to examine the trade-off between the need for potentially sensitive data, which we call incident data usage, and the perceived privacy risk of sharing that data with the government. The study comprises two parts: a data usage estimate built from a survey of 76 security professionals with a mean of eight years' experience; and a privacy risk estimate that measures privacy risk using an ordinal likelihood scale and nominal data types in factorial vignettes. The privacy risk estimate also factors in data purposes with different levels of societal benefit, including terrorism, imminent threat of death, economic harm, and loss of intellectual property. The results show which data types are high-usage, low-risk versus those that are low-usage, high-risk. We discuss the implications of these results and recommend future work to improve privacy when data must be shared despite the increased risk to privacy.

Andrew G. West, Adam J. Aviv.  2014.  On the Privacy Concerns of URL Query Strings. W2SP'14: Proceedings of the 8th Workshop on Web 2.0 Security and Privacy.

Claus Hunsen, Bo Zhang, Janet Siegmund, Christian Kästner, Olaf Leßenich, Martin Becker, Sven Apel.  2015.  Preprocessor-based variability in open-source and industrial software systems: An empirical study. Empirical Software Engineering. 20:1-34.

Almost every sufficiently complex software system today is configurable. Conditional compilation is a simple variability-implementation mechanism that is widely used in open-source projects and industry. The C preprocessor (CPP) in particular is very popular in practice, and it is also regaining interest in academia. Although there have been several attempts to understand and improve CPP, there is a lack of understanding of how it is used in open-source and industrial systems and whether different usage patterns have emerged. Much research on configurable systems and product lines concentrates on open-source systems, simply because they are available for study in the first place. This leads to the potentially problematic situation that it is unclear whether the results obtained from these studies are transferable to industrial systems. We aim at narrowing this gap by comparing the use of CPP in open-source projects and industry, especially in the embedded-systems domain, based on a substantial set of subject systems and well-known variability metrics, including size, scattering, and tangling metrics. A key result of our empirical study is that, regarding almost all metrics we studied, the analyzed open-source systems and the embedded systems from industry are similar, including systems that were developed in industry and made open source at some point. So, our study indicates that, regarding CPP as a variability-implementation mechanism, insights, methods, and tools developed based on studies of open-source systems are transferable to industrial systems, at least with respect to the metrics we considered.
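
To hint at what such metrics measure, here is a rough Python sketch of a scattering count (our simplification; real analyses parse the conditional expressions properly): for each configuration macro, count the #if/#ifdef/#elif expressions that mention it.

```python
import re
from collections import Counter

COND = re.compile(r"#\s*(?:if|ifdef|ifndef|elif)\b(.*)")
MACRO = re.compile(r"\b[A-Z_][A-Z0-9_]*\b")  # crude: macros look ALL_CAPS

def scattering_counts(paths):
    # Count, per macro, how many conditional-compilation expressions
    # mention it. Ignores multi-line conditions and comments.
    counts = Counter()
    for path in paths:
        with open(path, errors="replace") as f:
            for line in f:
                m = COND.match(line.lstrip())
                if m:
                    counts.update(MACRO.findall(m.group(1)))
    return counts
```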

Santhosh Prabhu, Ali Kheradmand, Brighten Godfrey, Matthew Caesar (University of Illinois at Urbana-Champaign).  2017.  Predicting Network Futures with Plankton. 1st Asia-Pacific Workshop on Networking (APNet).

Recent years have seen significant advancement in the field of formal network verification. Tools have been proposed for offline data plane verification, real-time data plane verification, and configuration verification under arbitrary but static sets of failures. However, due to the fundamental limitation of not treating the network as an evolving system, current verification platforms have significant constraints in terms of scope. In real-world networks, correctness policies may be violated only through a particular combination of environment events and protocol actions, possibly in a non-deterministic sequence. Moreover, correctness specifications themselves may often correlate multiple data plane states, particularly when dynamic data plane elements are present. Tools in existence today are not capable of reasoning about all the possible network events, and all the subsequent execution paths that are enabled by those events. We propose Plankton, a verification platform for identifying undesirable evolutions of networks. By combining symbolic modeling of the data plane and control plane with explicit state exploration, Plankton performs a goal-directed search on a finite-state transition system that captures the behavior of the network as well as the various events that can influence it. In this way, Plankton can automatically find policy violations that can occur due to a sequence of network events, starting from the current state. Initial experiments have successfully predicted scenarios like BGP Wedgies.
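
The explicit-state side of the search can be pictured with a toy breadth-first exploration in Python (our illustration of the general idea, not Plankton's engine, which couples such a search with symbolic models): explore network states reachable through events, and return the first event sequence that reaches a policy violation.

```python
from collections import deque

def find_violation(initial, successors, violates):
    # successors(state) yields (event, next_state) pairs; violates(state)
    # checks the correctness policy. Returns a violating event sequence,
    # or None if none is reachable.
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, trace = queue.popleft()
        if violates(state):
            return trace
        for event, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [event]))
    return None
```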

Akond Rahman, Priysha Pradhan, Asif Partho, Laurie Williams.  2017.  Predicting Android Application Security and Privacy Risk with Static Code Metrics. Proceedings of the 4th International Conference on Mobile Software Engineering and Systems. 149–153.

Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of the Android applications after release. Predicting the security and privacy risk associated with Android applications at early stages of application development, e.g. when the developer(s) are writing the code of the application, might help Android application developers release applications that pose less security and privacy risk to end-users. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application as how susceptible the application is to leaking private information of end-users and to releasing vulnerabilities. We investigate how effectively static code metrics extracted from the source code of Android applications can be used to predict their security and privacy risk. We collected 21 static code metrics from 1,407 Android applications, and used the collected static code metrics to predict the security and privacy risk of the applications. As the oracle of security and privacy risk, we used Androrisk, a tool that quantifies the amount of security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-based support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that, with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
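
A plausible reconstruction of the modeling step in scikit-learn (our sketch; the authors' exact features, labels, and tuning are in the paper): standardize the 21 metrics per app and cross-validate an RBF-kernel SVM, scoring precision against risk labels derived from an oracle such as Androrisk.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def rsvm_precision(X, y):
    # X: one row of static code metrics per app; y: binary risk label.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(model, X, y, cv=10, scoring="precision").mean()
```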

Hamid Bagheri, Alireza Sadeghi, Reyhaneh Jabbarvand, Sam Malek.  2016.  Practical, Formal Synthesis and Automatic Enforcement of Security Policies for Android. 2016 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).

As the dominant mobile computing platform, Android has become a prime target for cyber-security attacks. Many of these attacks are manifested at the application level, and through the exploitation of vulnerabilities in apps downloaded from the popular app stores. Increasingly, sophisticated attacks exploit the vulnerabilities in multiple installed apps, making it extremely difficult to foresee such attacks, as neither the app developers nor the store operators know a priori which apps will be installed together. This paper presents an approach that allows the end-users to safeguard a given bundle of apps installed on their device from such attacks. The approach, realized in a tool called SEPAR, combines static analysis with lightweight formal methods to automatically infer security-relevant properties from a bundle of apps. It then uses a constraint solver to synthesize possible security exploits, from which fine-grained security policies are derived and automatically enforced to protect a given device. In our experiments with over 4,000 Android apps, SEPAR has proven to be highly effective at detecting previously unknown vulnerabilities as well as preventing their exploitation.
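
To convey the flavor of the inter-app analysis, here is a deliberately naive Python sketch (our invention; SEPAR itself infers security-relevant properties via static analysis and synthesizes exploits with a constraint solver): flag implicit intents that one app sends and that another app's exported, permission-free component can intercept. The bundle format below is made up for the example.

```python
def naive_intercept_candidates(apps):
    # apps: [{"name": str, "sends": [action, ...],
    #         "exported_filters": [{"action": str, "permission": str|None}]}]
    senders = [(a["name"], action)
               for a in apps for action in a.get("sends", [])]
    receivers = [(a["name"], f["action"])
                 for a in apps for f in a.get("exported_filters", [])
                 if not f.get("permission")]
    return [(sender, receiver, action)
            for sender, action in senders
            for receiver, r_action in receivers
            if action == r_action and sender != receiver]
```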

Adwait Nadkarni, Benjamin Andow, William Enck, Somesh Jha.  2016.  Practical DIFC Enforcement on Android. USENIX Security Symposium.

Smartphone users often use private and enterprise data with untrusted third party applications.  The fundamental lack of secrecy guarantees in smartphone OSes, such as Android, exposes this data to the risk of unauthorized exfiltration.  A natural solution is the integration of secrecy guarantees into the OS.  In this paper, we describe the challenges for decentralized information flow control (DIFC) enforcement on Android.  We propose context-sensitive DIFC enforcement via lazy polyinstantiation and practical and secure network export through domain declassification.  Our DIFC system, Weir, is backwards compatible by design, and incurs less than 4 ms overhead for component startup.  With Weir,  we demonstrate practical and secure DIFC enforcement on Android.
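
Two of the ingredients can be sketched abstractly in Python (textbook DIFC idiom, not Weir's actual enforcement rules or API): a secrecy-label flow check, and lazy polyinstantiation, where a fresh component instance is created on demand per secrecy context so that differently labeled callers never share state.

```python
def can_flow(src_label, dst_label):
    # Textbook DIFC secrecy check: data may flow only to a context whose
    # label carries every secrecy tag of the source.
    return set(src_label) <= set(dst_label)

class LazyPolyinstantiator:
    """Create one component instance per distinct secrecy context, lazily."""
    def __init__(self, factory):
        self._factory = factory
        self._instances = {}

    def get(self, label):
        key = frozenset(label)
        if key not in self._instances:
            self._instances[key] = self._factory()
        return self._instances[key]
```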

Jonathan Aldrich.  2013.  The power of interoperability: why objects are inevitable. Onward! 2013 Proceedings of the 2013 ACM international symposium on New ideas, new paradigms, and reflections on programming & software.

Three years ago in this venue, Cook argued that in their essence, objects are what Reynolds called procedural data structures. His observation raises a natural question: if procedural data structures are the essence of objects, has this contributed to the empirical success of objects, and if so, how?

This essay attempts to answer that question. After reviewing Cook's definition, I propose the term service abstractions to capture the essential nature of objects. This terminology emphasizes, following Kay, that objects are not primarily about representing and manipulating data, but are more about providing services in support of higher-level goals. Using examples taken from object-oriented frameworks, I illustrate the unique design leverage that service abstractions provide: the ability to define abstractions that can be extended, and whose extensions are interoperable in a first-class way. The essay argues that the form of interoperable extension supported by service abstractions is essential to modern software: many modern frameworks and ecosystems could not have been built without service abstractions. In this sense, the success of objects was not a coincidence: it was an inevitable consequence of their service abstraction nature.