Biblio

2016-12-07
Travis Breaux, Ashwini Rao, Jean-Michel Lehker.  2012.  Towards a framework for pattern experimentation: Understanding empirical validity in requirements engineering patterns. 2012 Second IEEE International Workshop on Requirements Patterns (RePa).

Despite the abundance of information security guidelines, system developers have difficulties implementing technical solutions that are reasonably secure. Security patterns are one possible solution to help developers reuse security knowledge. The challenge is that it takes experts to develop security patterns. To address this challenge, we need a framework to identify and assess patterns and pattern application practices that are accessible to non-experts. In this paper, we narrowly define what we mean by patterns by focusing on requirements patterns and the considerations that may inform how we identify and validate patterns for knowledge reuse. We motivate this discussion using examples from the requirements pattern literature and theory in cognitive psychology.

Rocky Slavin, Hui Shen, Jianwei Niu.  2012.  Characterizations and boundaries of security requirements patterns. 2012 Second IEEE International Workshop on Requirements Patterns (RePa).

Very often in the software development life cycle, security is applied too late or important security aspects are overlooked. Although the use of security patterns is gaining popularity, the current state of security requirements patterns is such that there is not much in terms of a defining structure. To address this issue, we are working towards defining the important characteristics as well as the boundaries for security requirements patterns in order to make them more effective. By examining an existing general pattern format that describes how security patterns should be structured and comparing it to existing security requirements patterns, we are deriving characterizations and boundaries for security requirements patterns. From these attributes, we propose a defining format. We hope that these can reduce user effort in elicitation and specification of security requirements patterns.

Jaspreet Bhatia, Morgan Evans, Sudarshan Wadkar, Travis Breaux.  2016.  Automated Extraction of Regulated Information Types using Hyponymy Relations. 2016 RE: Requirements Engineering Conference.

Requirements analysts can model regulated data practices to identify and reason about risks of noncompliance. If terminology is inconsistent or ambiguous, however, these models and their conclusions will be unreliable. To study this problem, we investigated an approach to automatically construct an information type ontology by identifying information type hyponymy in privacy policies using Tregex patterns. Tregex is a utility to match regular expressions against constituency parse trees, which are hierarchical expressions of natural language clauses, including noun and verb phrases. We discovered the Tregex patterns by applying content analysis to 15 privacy policies from three domains (shopping, telecommunication and social networks) to identify all instances of information type hyponymy. From this dataset, three semantic and four syntactic categories of hyponymy emerged based on category completeness and word-order. Among these, we identified and empirically evaluated 26 Tregex patterns to automate the extraction of hyponyms from privacy policies. The patterns identify information type hypernym-hyponym pairs with an average precision of 0.83 and recall of 0.52 across our dataset of 15 policies. 
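
To make the idea concrete, the following toy sketch (Python with NLTK) applies a single lexico-syntactic pattern, the "X such as Y" construction, to a hand-written constituency parse. The pattern, example phrase, and helper names are illustrative assumptions, not the paper's 26 evaluated Tregex patterns or its policy corpus.

    # Toy illustration of pattern-based hyponym extraction over a constituency
    # parse tree, in the spirit of the Tregex-based approach described above.
    from nltk.tree import Tree

    # Hand-written parse of the noun phrase:
    # "demographic information such as age and gender"
    parse = Tree.fromstring("""
    (NP (NP (JJ demographic) (NN information))
        (PP (IN such) (IN as)
            (NP (NP (NN age)) (CC and) (NP (NN gender)))))
    """)

    def innermost_nps(tree):
        """Yield NPs that contain no smaller NP (individual hyponym candidates)."""
        for t in tree.subtrees(lambda t: t.label() == "NP"):
            if not any(isinstance(c, Tree) and c.label() == "NP" for c in t):
                yield " ".join(t.leaves())

    def such_as_pairs(tree):
        """Return (hypernym, hyponym) pairs from 'X such as Y1 and Y2' constructions."""
        pairs = []
        for np in tree.subtrees(lambda t: t.label() == "NP"):
            kids = [c for c in np if isinstance(c, Tree)]
            if (len(kids) == 2 and kids[0].label() == "NP" and kids[1].label() == "PP"
                    and " ".join(kids[1].leaves()[:2]).lower() == "such as"):
                hypernym = " ".join(kids[0].leaves())
                pairs += [(hypernym, hyponym) for hyponym in innermost_nps(kids[1])]
        return pairs

    print(such_as_pairs(parse))
    # [('demographic information', 'age'), ('demographic information', 'gender')]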

Mitra Bokaei Hosseini, Sudarshan Wadkar, Travis Breaux, Jianwei Niu.  2016.  Lexical Similarity of Information Type Hypernyms, Meronyms and Synonyms in Privacy Policies. Association for the Advancement of Artificial Intelligence.

Privacy policies are used to communicate company data practices to consumers and must be accurate and comprehensive. Each policy author is free to use their own nomenclature when describing data practices, which leads to different ways in which similar information types are described across policies. A formal ontology can help policy authors, users and regulators consistently check how data practice descriptions relate to other interpretations of information types. In this paper, we describe an empirical method for manually constructing an information type ontology from privacy policies. The method consists of seven heuristics that explain how to infer hypernym, meronym and synonym relationships from information type phrases, which we discovered using grounded analysis of five privacy policies. The method was evaluated on 50 mobile privacy policies which produced an ontology consisting of 355 unique information type names. Based on the manual results, we describe an automated technique consisting of 14 reusable semantic rules to extract hypernymy, meronymy, and synonymy relations from information type phrases. The technique was evaluated on the manually constructed ontology to yield .95 precision and .51 recall.
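
As a rough illustration of the kind of phrase-level inference involved, the sketch below derives hypernym candidates by dropping leading modifiers from an information type phrase. The heuristic and example phrases are assumptions for exposition, not the paper's 14 semantic rules or its 355-name ontology.

    # One illustrative phrase-level heuristic: removing leading modifiers of an
    # information type phrase yields successively more general hypernym candidates,
    # e.g. "mobile device identifier" -> "device identifier" -> "identifier".

    def hypernym_candidates(information_type):
        """Generate more general phrases by stripping leading modifiers."""
        words = information_type.lower().split()
        # Each suffix of the phrase (keeping at least the head noun) is a candidate.
        return [" ".join(words[i:]) for i in range(1, len(words))]

    def build_ontology(information_types):
        """Collect candidate (hyponym, hypernym) edges across a set of phrases."""
        edges = set()
        for phrase in information_types:
            candidates = [phrase.lower()] + hypernym_candidates(phrase)
            # Chain each phrase to its next-more-general candidate.
            for narrow, general in zip(candidates, candidates[1:]):
                edges.add((narrow, general))
        return edges

    types = ["mobile device identifier", "device identifier", "email address"]
    for hyponym, hypernym in sorted(build_ontology(types)):
        print(f"{hyponym!r} is a kind of {hypernym!r}")

Heuristics like this over-generate (for example, "email address" is not usefully a kind of "address"), which is why candidate relations need validation and the paper reports precision and recall against a manually built ontology.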

Jaspreet Bhatia, Travis Breaux, Liora Friedberg, Hanan Hibshi, Daniel Smullen.  2016.  Privacy Risk in Cybersecurity Data Sharing. WISCS '16 Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security.

As information systems become increasingly interdependent, there is an increased need to share cybersecurity data across government agencies and companies, and within and across industrial sectors. This sharing includes threat, vulnerability and incident reporting data, among other data. For cyberattacks that include sociotechnical vectors, such as phishing or watering hole attacks, this increased sharing could expose customer and employee personal data to increased privacy risk. In the US, privacy risk arises when the government voluntarily receives data from companies without meaningful consent from individuals, or without a lawful procedure that protects an individual's right to due process. In this paper, we describe a study to examine the trade-off between the need for potentially sensitive data, which we call incident data usage, and the perceived privacy risk of sharing that data with the government. The study comprises two parts: a data usage estimate built from a survey of 76 security professionals with an average of eight years' experience; and a privacy risk estimate that measures privacy risk using an ordinal likelihood scale and nominal data types in factorial vignettes. The privacy risk estimate also factors in data purposes with different levels of societal benefit, including terrorism, imminent threat of death, economic harm, and loss of intellectual property. The results show which data types are high-usage, low-risk versus those that are low-usage, high-risk. We discuss the implications of these results and recommend future work to improve privacy when data must be shared despite the increased risk to privacy.

Tingting Yu, Witawas Srisa-an, Gregg Rothermel.  2014.  SimRT: An Automated Framework to Support Regression Testing for Data Races. ICSE 2014 Proceedings of the 36th International Conference on Software Engineering.

Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
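
The selection-and-prioritization idea can be sketched as follows; the coverage map, field names, and scoring below are simplified assumptions for exposition, not SimRT's actual change analysis or race detector.

    # A simplified sketch of race-directed regression test selection and
    # prioritization in the spirit of SimRT (not its actual implementation).
    # Inputs are assumed: a map from each test to the shared memory locations it
    # exercises, and the set of shared locations touched by the code change.

    def select_and_prioritize(test_coverage, changed_shared_accesses):
        """Keep only tests that exercise changed shared state; run the tests
        covering more of the change first."""
        selected = {
            test: covered & changed_shared_accesses
            for test, covered in test_coverage.items()
            if covered & changed_shared_accesses
        }
        # Prioritize by how much of the changed shared state each test reaches.
        return sorted(selected, key=lambda t: len(selected[t]), reverse=True)

    test_coverage = {
        "test_put_get": {"buffer.head", "buffer.tail"},
        "test_resize":  {"buffer.capacity"},
        "test_iterate": {"buffer.head"},
    }
    changed = {"buffer.head", "buffer.tail"}   # shared fields touched by the patch
    print(select_and_prioritize(test_coverage, changed))
    # ['test_put_get', 'test_iterate']  -- test_resize is safely skipped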

Cyrus Omar, Jonathan Aldrich.  2016.  Programmable semantic fragments: the design and implementation of typy. GPCE 2016 Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences.

This paper introduces typy, a statically typed programming language embedded by reflection into Python. typy features a fragmentary semantics, i.e. it delegates semantic control over each term, drawn from Python's fixed concrete and abstract syntax, to some contextually relevant user-defined semantic fragment. The delegated fragment programmatically 1) typechecks the term (following a bidirectional protocol); and 2) assigns dynamic meaning to the term by computing a translation to Python.

We argue that this design is expressive with examples of fragments that express the static and dynamic semantics of 1) functional records; 2) labeled sums (with nested pattern matching a la ML); 3) a variation on JavaScript's prototypal object system; and 4) typed foreign interfaces to Python and OpenCL. These semantic structures are, or would need to be, defined primitively in conventionally structured languages.

We further argue that this design is compositionally well-behaved. It avoids the expression problem and the problems of grammar composition because the syntax is fixed. Moreover, programs are semantically stable under fragment composition (i.e. defining a new fragment will not change the meaning of existing program components.)
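
The delegation protocol described above can be pictured with a small Python toy in which a user-defined fragment both typechecks a term drawn from Python syntax and translates it to plain Python. The class and method names here are assumptions for exposition, not typy's actual API.

    # A toy model of the fragment delegation idea (an illustration only).
    import ast

    class RecordFragment:
        """A fragment giving record semantics to Python dict literals."""

        def __init__(self, field_types):
            self.field_types = field_types            # e.g. {"x": int, "y": int}

        def typecheck(self, term):
            if not isinstance(term, ast.Dict):
                raise TypeError("record literal must be a dict literal")
            labels = [k.value for k in term.keys]
            if set(labels) != set(self.field_types):
                raise TypeError(f"expected fields {set(self.field_types)}")

        def translate(self, term):
            # Here the "translation" simply evaluates the literal as a plain dict;
            # a real fragment would emit Python code for the whole term.
            return ast.literal_eval(term)

    point = RecordFragment({"x": int, "y": int})
    term = ast.parse("{'x': 1, 'y': 2}", mode="eval").body   # an ast.Dict node
    point.typecheck(term)
    print(point.translate(term))   # {'x': 1, 'y': 2}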

Filipe Militao, Jonathan Aldrich, Luis Caires.  2016.  Composing Interfering Abstract Protocols. European Conference on Object-Oriented Programming (ECOOP).

The undisciplined use of shared mutable state can be a source of program errors when aliases unsafely interfere with each other. While protocol-based techniques to reason about interference abound, they do not address two practical concerns: the decidability of protocol composition and its integration with protocol abstraction. We address both concerns with a protocol composition procedure that we show to be decidable and that ensures safe interference even when composing abstract protocols. To evaluate the expressiveness of our protocol framework for ensuring safe shared-memory interference, we show how this same protocol framework can be used to model safe, typeful message-passing concurrency idioms.
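
A rough finite-state sketch of why such a composition check can be decidable: enumerate every interleaving of two finite protocols over a shared cell and confirm that each step fires only in a state it allows. The encoding below is an illustrative assumption, not the paper's type-based composition procedure.

    from functools import lru_cache

    # A protocol is a sequence of steps; each step is a pair
    # (states in which the step may fire, state after the step, or None if unchanged).
    WRITER = (({"empty"}, "full"),)            # fills the cell exactly once
    OBSERVER = (({"empty", "full"}, None),)    # may look at the cell in any state

    def composes_safely(p1, p2, initial_state):
        """True iff, under every interleaving of p1 and p2, each step fires only
        in a shared state that it allows. Decidable: the state space is finite."""

        @lru_cache(maxsize=None)
        def ok(i, j, state):
            result = True
            if i < len(p1):
                allowed, nxt = p1[i]
                result = result and state in allowed and ok(i + 1, j, nxt or state)
            if j < len(p2):
                allowed, nxt = p2[j]
                result = result and state in allowed and ok(i, j + 1, nxt or state)
            return result

        return ok(0, 0, initial_state)

    print(composes_safely(WRITER, OBSERVER, "empty"))   # True: any order is safe
    print(composes_safely(WRITER, WRITER, "empty"))     # False: the second fill races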

Zack Coker, Michael Maass, Tianyuan Ding, Claire Le Goues, Joshua Sunshine.  2015.  Evaluating the Flexibility of the Java Sandbox. ACSAC 2015 Proceedings of the 31st Annual Computer Security Applications Conference.

The ubiquitously-installed Java Runtime Environment (JRE) provides a complex, flexible set of mechanisms that support the execution of untrusted code inside a secure sandbox. However, many recent exploits have successfully escaped the sandbox, allowing attackers to infect numerous Java hosts. We hypothesize that the Java security model affords developers more flexibility than they need or use in practice, and thus its complexity compromises security without improving practical functionality. We describe an empirical study of the ways benign open-source Java applications use and interact with the Java security manager. We found that developers regularly misunderstand or misuse Java security mechanisms, that benign programs do not use all of the vast flexibility afforded by the Java security model, and that there are clear differences between the ways benign and exploit programs interact with the security manager. We validate these results by deriving two restrictions on application behavior that limit (1) security manager modifications and (2) privilege escalation. We demonstrate that enforcing these rules at runtime stops a representative proportion of modern Java 7 exploits without breaking backwards compatibility with benign applications. These practical rules should be enforced in the JRE to fortify the Java sandbox.
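
The two rules can be pictured, in a language-agnostic way, as checks over a trace of sandbox-relevant events; the event names and trace format below are assumptions for exposition, and the paper argues the rules should be enforced inside the JRE itself rather than in application code.

    # A hedged sketch of the two behavioral rules as checks over an event trace.

    def violates_sandbox_rules(trace):
        """Flag (1) changes to the security manager after it is first installed
        and (2) privileged actions requesting permissions the application never held."""
        violations = []
        manager_installed = False
        for event in trace:
            kind = event["kind"]
            if kind == "set_security_manager":
                if manager_installed:
                    violations.append(("rule-1", event))   # manager modified later
                manager_installed = True
            elif kind == "do_privileged":
                if event["requested"] - event["granted_to_app"]:
                    violations.append(("rule-2", event))   # escalates beyond grants
        return violations

    benign = [
        {"kind": "set_security_manager"},
        {"kind": "do_privileged", "requested": {"read:/tmp"}, "granted_to_app": {"read:/tmp"}},
    ]
    exploit = benign + [{"kind": "set_security_manager"}]   # tries to swap the manager
    print(violates_sandbox_rules(benign))    # []
    print(violates_sandbox_rules(exploit))   # [('rule-1', {...})]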

Nathan Fulton, Cyrus Omar, Jonathan Aldrich.  2014.  Statically typed string sanitation inside a python. PSP '14 Proceedings of the 2014 International Workshop on Privacy & Security in Programming.

Web applications must ultimately command systems like web browsers and database engines using strings. Strings derived from improperly sanitized user input can as a result be a vector for command injection attacks. In this paper, we introduce regular string types, which classify strings constrained statically to be in a regular language specified by a regular expression. Regular strings support standard string operations like concatenation and substitution, as well as safe coercions, so they can be used to implement, in an essentially conventional manner, the pieces of a web application or framework that handle strings arising from user input. Simple type annotations at function interfaces can be used to statically verify that sanitization has been performed correctly without introducing redundant run-time checks. We specify this type system first as a minimal typed lambda calculus, lambdaRS. To be practical, adopting a specialized type system like this should not require the adoption of a new programming language. Instead, we advocate for extensible type systems: new type system fragments like this should be implemented as libraries atop a mechanism that guarantees that they can be safely composed. We support this with two contributions. First, we specify a translation from lambdaRS to a calculus with only standard strings and regular expressions. Then, taking Python as a language with these constructs, we implement the type system together with the translation as a library using typy, an extensible static type system for Python.
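
A runtime-checked Python analogue conveys the interface-level idea; the paper's contribution is to make this check static, as a library-defined type system built with typy, so the class, regexes, and helper names below are illustrative assumptions rather than the paper's lambdaRS types.

    # Runtime-checked sketch: a sanitizer returns a value classified by a regular
    # language, and consumers demand that classification instead of re-sanitizing.
    import re

    class RegularString:
        """A string intended to lie in the regular language of `pattern`;
        here the membership check happens at construction time instead of statically."""
        def __init__(self, pattern, value):
            if not re.fullmatch(pattern, value):
                raise ValueError(f"{value!r} is not in the language {pattern!r}")
            self.pattern, self.value = pattern, value

    ALPHANUMERIC = r"[A-Za-z0-9_]*"

    def sanitize_username(raw):
        # Strip everything outside the safe alphabet, then coerce to the safe type.
        return RegularString(ALPHANUMERIC, re.sub(r"[^A-Za-z0-9_]", "", raw))

    def build_query(username):
        # The consumer requires the classification rather than re-sanitizing.
        assert isinstance(username, RegularString) and username.pattern == ALPHANUMERIC
        return f"SELECT * FROM users WHERE name = '{username.value}'"

    print(build_query(sanitize_username("alice'; DROP TABLE users;--")))
    # SELECT * FROM users WHERE name = 'aliceDROPTABLEusers'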

Darya Kurilova, Alex Potanin, Jonathan Aldrich.  2014.  Wyvern: Impacting Software Security via Programming Language Design. PLATEAU '14 Proceedings of the 5th Workshop on Evaluation and Usability of Programming Languages and Tools.

Breaches of software security affect millions of people, and therefore it is crucial to strive for more secure software systems. However, the effect of programming language design on software security is not easily measured or studied. In the absence of scientific insight, opinions range from those that claim that programming language design has no effect on security of the system, to those that believe that programming language design is the only way to provide “high-assurance software.” In this paper, we discuss how programming language design can impact software security by looking at a specific example: the Wyvern programming language. We report on how the design of the Wyvern programming language leverages security principles, together with hypotheses about how usability impacts security, in order to prevent command injection attacks. Furthermore, we discuss what security principles we considered in Wyvern’s design. 

Michael Coblenz, Jonathan Aldrich, Brad Myers, Joshua Sunshine.  2014.  Considering Productivity Effects of Explicit Type Declarations. PLATEAU '14 Proceedings of the 5th Workshop on Evaluation and Usability of Programming Languages and Tools.

Static types may be used both by the language implementation and directly by the user as documentation. Though much existing work focuses primarily on the implications of static types on the semantics of programs, relatively little work considers the impact on usability that static types provide. Though the omission of static type information may decrease program length and thereby improve readability, it may also decrease readability because users must then frequently derive type information manually while reading programs. As type inference becomes more popular in languages that are in widespread use, it is important to consider whether the adoption of type inference may impact productivity of developers.

Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin, Jonathan Aldrich.  2014.  Safely Composable Type-Specific Languages. Proceedings of the 28th European Conference on ECOOP 2014 --- Object-Oriented Programming.

Programming languages often include specialized syntax for common datatypes (e.g., lists), and some also build in support for specific specialized datatypes (e.g., regular expressions), but user-defined types must use general-purpose syntax. Frustration with this causes developers to use strings, rather than structured data, with alarming frequency, leading to correctness, performance, security, and usability issues. Allowing library providers to modularly extend a language with new syntax could help address these issues. Unfortunately, prior mechanisms either limit expressiveness or are not safely composable: individually unambiguous extensions can still cause ambiguities when used together. We introduce type-specific languages (TSLs): logic associated with a type that determines how the bodies of generic literals, able to contain arbitrary syntax, are parsed and elaborated, hygienically. The TSL for a type is invoked only when a literal appears where a term of that type is expected, guaranteeing non-interference. We give evidence supporting the applicability of this approach and formally specify it with a bidirectionally typed elaboration semantics for the Wyvern programming language.
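
The key dispatch rule, that the body of a generic literal is parsed by whatever logic the expected type supplies, can be sketched in a few lines of Python; the types and parsers below are assumptions for exposition, not Wyvern syntax or the paper's elaboration semantics.

    # Sketch of the dispatch rule only: the same literal body means different
    # things depending on the expected type, so extensions cannot conflict.
    import re

    class HTMLType:
        @staticmethod
        def parse_literal(body):
            return ("html", body)                       # a real TSL would build a DOM

    class RegexType:
        @staticmethod
        def parse_literal(body):
            return re.compile(body)

    def elaborate_literal(expected_type, body):
        """Delegate the literal body to the TSL of the expected type."""
        return expected_type.parse_literal(body)

    print(elaborate_literal(HTMLType, "<b>hi</b>"))     # ('html', '<b>hi</b>')
    print(elaborate_literal(RegexType, "[a-z]+"))       # re.compile('[a-z]+')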

Jonathan Aldrich.  2013.  The power of interoperability: why objects are inevitable. Onward! 2013 Proceedings of the 2013 ACM international symposium on New ideas, new paradigms, and reflections on programming & software.

Three years ago in this venue, Cook argued that in their essence, objects are what Reynolds called procedural data structures. His observation raises a natural question: if procedural data structures are the essence of objects, has this contributed to the empirical success of objects, and if so, how?

This essay attempts to answer that question. After reviewing Cook's definition, I propose the term service abstractions to capture the essential nature of objects. This terminology emphasizes, following Kay, that objects are not primarily about representing and manipulating data, but are more about providing services in support of higher-level goals. Using examples taken from object-oriented frameworks, I illustrate the unique design leverage that service abstractions provide: the ability to define abstractions that can be extended, and whose extensions are interoperable in a first-class way. The essay argues that the form of interoperable extension supported by service abstractions is essential to modern software: many modern frameworks and ecosystems could not have been built without service abstractions. In this sense, the success of objects was not a coincidence: it was an inevitable consequence of their service abstraction nature.

Joshua Sunshine, James Herbsleb, Jonathan Aldrich.  2014.  Structuring Documentation to Support State Search: A Laboratory Experiment about Protocol Programming. Proceedings of the 28th European Conference on ECOOP 2014 --- Object-Oriented Programming. 8586

Application Programming Interfaces (APIs) often define object protocols. Objects with protocols have a finite number of states and in each state a different set of method calls is valid. Many researchers have developed protocol verification tools because protocols are notoriously difficult to follow correctly. However, recent research suggests that a major challenge for API protocol programmers is effectively searching the state space. Verification is an ineffective guide for this kind of search. In this paper we instead propose Plaiddoc, which is like Javadoc except it organizes methods by state instead of by class and it includes explicit state transitions, state-based type specifications, and rich state relationships. We compare Plaiddoc to a Javadoc control in a between-subjects laboratory experiment. We find that Plaiddoc participants complete state search tasks in significantly less time and with significantly fewer errors than Javadoc participants.
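
A miniature example of state-organized documentation and the state search it enables; the object, states, and wording are hypothetical, not Plaiddoc's actual format.

    # Documentation keyed by protocol state rather than by class: each state lists
    # only the methods valid in it, together with the state each call reaches.
    STATE_DOCS = {
        "Closed": {
            "open()":  ("Acquires the underlying file handle.", "Open"),
        },
        "Open": {
            "read()":  ("Returns the next chunk of data.",      "Open"),
            "close()": ("Releases the handle.",                 "Closed"),
        },
    }

    def methods_reaching(target_state):
        """State search: which calls, from which states, lead to `target_state`?"""
        return [(state, method)
                for state, methods in STATE_DOCS.items()
                for method, (_, next_state) in methods.items()
                if next_state == target_state]

    print(methods_reaching("Open"))   # [('Closed', 'open()'), ('Open', 'read()')]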

Cyrus Omar, Benjamin Chung, Darya Kurilova, Alex Potanin, Jonathan Aldrich.  2013.  Type-directed, whitespace-delimited parsing for embedded DSLs. GlobalDSL '13 Proceedings of the First Workshop on the Globalization of Domain Specific Languages.

Domain-specific languages improve ease-of-use, expressiveness and verifiability, but defining and using different DSLs within a single application remains difficult. We introduce an approach for embedded DSLs where 1) whitespace delimits DSL-governed blocks, and 2) the parsing and type checking phases occur in tandem so that the expected type of the block determines which domain-specific parser governs that block. We argue that this approach occupies a sweet spot, providing high expressiveness and ease-of-use while maintaining safe composability. We introduce the design, provide examples and describe an ongoing implementation of this strategy in the Wyvern programming language. We also discuss how a more conventional keyword-directed strategy for parsing of DSLs can arise as a special case of this type-directed strategy. 

Ligia Nistor, Darya Kurilova, Stephanie Balzer, Benjamin Chung, Alex Potanin, Jonathan Aldrich.  2013.  Wyvern: a simple, typed, and pure object-oriented language. MASPEGHI '13 Proceedings of the 5th Workshop on MechAnisms for SPEcialization, Generalization and inHerItance.

The simplest and purest practical object-oriented language designs today are seen in dynamically-typed languages, such as Smalltalk and Self. Static types, however, have potential benefits for productivity, security, and reasoning about programs. In this paper, we describe the design of Wyvern, a statically typed, pure object-oriented language that attempts to retain much of the simplicity and expressiveness of these iconic designs.

Our goals lead us to combine pure object-oriented and functional abstractions in a simple, typed setting. We present a foundational object-based language that we believe to be as close as one can get to simple typed lambda calculus while keeping object-orientation. We show how this foundational language can be translated to the typed lambda calculus via standard encodings. We then define a simple extension to this language that introduces classes and show that classes are no more than sugar for the foundational object-based language. Our future intention is to demonstrate that modules and other object-oriented features can be added to our language as no more than such syntactic extensions while keeping the object-oriented core as pure as possible.

The design of Wyvern closely follows both historical and modern ideas about the essence of object-orientation, suggesting a new way to think about a minimal, practical, typed core language for objects.
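
The classes-as-sugar claim can be illustrated with the standard encoding of a class declaration as a constructor returning a record of methods, shown here in Python as a rough analogue rather than Wyvern's actual translation.

    # What a hypothetical 'class Counter' could desugar into in a foundational
    # object layer: a plain constructor that closes over its field and returns
    # an object, i.e. a record of methods.
    def Counter(start=0):
        n = [start]                # field storage captured by the methods
        def inc():
            n[0] += 1
            return n[0]
        def value():
            return n[0]
        return {"inc": inc, "value": value}   # the object: a record of methods

    c = Counter()
    c["inc"](); c["inc"]()
    print(c["value"]())   # 2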

Blase Ur, Patrick Gage Kelley, Saranga Komanduri, Joel Lee, Michael Maass, Michelle L. Mazurek, Timothy Passaro, Richard Shay, Timothy Vidas, Lujo Bauer et al.  2012.  How Does Your Password Measure Up? The Effect of Strength Meters on Password Creation. 21st USENIX Security Symposium.

To help users create stronger text-based passwords, many web sites have deployed password meters that provide visual feedback on password strength. Although these meters are in wide use, their effects on the security and usability of passwords have not been well studied. We present a 2,931-subject study of password creation in the presence of 14 password meters. We found that meters with a variety of visual appearances led users to create longer passwords. However, significant increases in resistance to a password-cracking algorithm were only achieved using meters that scored passwords stringently. These stringent meters also led participants to include more digits, symbols, and uppercase letters. Password meters also affected the act of password creation. Participants who saw stringent meters spent longer creating their password and were more likely to change their password while entering it, yet they were also more likely to find the password meter annoying. However, the most stringent meter and those without visual bars caused participants to place less importance on satisfying the meter. Participants who saw more lenient meters tried to fill the meter and were averse to choosing passwords a meter deemed “bad” or “poor.” Our findings can serve as guidelines for administrators seeking to nudge users towards stronger passwords. 

Sarah Pearman, Nicholas Munson, Leeyat Slyper, Lujo Bauer, Serge Egelman, Arnab Kumar, Charu Sharma, Jeremy Thomas, Nicolas Christin.  2016.  Risk Compensation in Home-User Computer Security Behavior: A Mixed-Methods Exploratory Study. SOUPS 2016: 12th Symposium on Usable Privacy and Security.

Risk homeostasis theory claims that individuals adjust their behaviors in response to changing variables to keep what they perceive as a constant accepted level of risk [8]. Risk homeostasis theory is used to explain why drivers may drive faster when wearing seatbelts. Here we explore whether risk homeostasis theory applies to end-user security behaviors. We use observed data from over 200 participants in a longitudinal in-situ study as well as survey data from 249 users to attempt to determine how user security behaviors and attitudes are affected by the presence or absence of antivirus software. If risk compensation is occurring, users might be expected to behave more dangerously in some ways when antivirus is present. Some of our preliminary data suggests that risk compensation may be occurring, but additional work with larger samples is needed. 

Kathleen Carley.  2015.  Crisis Mapping: Big Data from a Dynamic Network Analytic Perspective.

Big data holds the promise of enabling analysts to predict, mitigate and respond rapidly to natural disasters and human crises. Rapid response is critical for saving lives and mitigating damage.  Rapid response requires understanding the socio-cultural terrain; i.e., understanding who needs what or can provide what, where, how, why and when. Today, new technologies are forging a path making use of sensor and social media data to understand the socio-cultural terrain and provide faster response during crises.  A review of this area, the state of the art, and known challenges are discussed.  The basic argument is that this technology is still in its infancy.  Critical scientific challenges, social challenges, and legal challenges will need to be addressed before the promise of big data for Crisis Mapping is fulfilled.

2016-12-06
Alain Forget, Saranga Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang.  2014.  Security Behavior Observatory: Infrastructure for Longterm Monitoring of Client Machines.

Much of the data researchers usually collect about users’ privacy and security behavior comes from short-term studies and focuses on specific, narrow activities. We present a design architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from a panel of hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security and privacy of the collected data, which will inevitably include intimate details about our participants’ behavior. Third, the SBO must serve our research interests, which will inevitably change over the course of the study as collected data is analyzed and interpreted, suggesting further lines of inquiry. We describe in detail the SBO infrastructure, its secure data collection methods, the benefits of our design and implementation, as well as the hurdles and tradeoffs to consider when designing such a data collection system.

Alain Forget, Saranga Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Cranor, Rahul Telang.  2014.  Building the security behavior observatory: an infrastructure for long-term monitoring of client machines. HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security.

We present an architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security of the collected data, which will inevitably include intimate participant behavioral data. Third, the SBO must serve our research interests, which will inevitably change as collected data is analyzed and interpreted. This short paper summarizes some of our design and implementation benefits and discusses a few hurdles and trade-offs to consider when designing such a data collection system.

Amit Vasudevan, Sagar Chaki, Limin Jia, Anupam Datta.  2016.  überSpark: Enforcing Verifiable Object Abstractions for Automated Compositional Security Analysis of a Hypervisor. 25th USENIX Security Symposium (USENIX Security 16).

We present überSpark (üSpark), an innovative architecture for compositional verification of security properties of extensible hypervisors written in C and Assembly. üSpark comprises two key ideas: (i) endowing low-level system software with abstractions found in higher-level languages (e.g., objects, interfaces, function-call semantics for implementations of interfaces, access control on interfaces, concurrency and serialization), enforced using a combination of commodity hardware mechanisms and lightweight static analysis; and (ii) interfacing with platform hardware by programming in Assembly using an idiomatic style (called CASM) that is verifiable via tools aimed at C, while retaining its performance and low-level access to hardware. After verification, the C code is compiled using a certified compiler while the CASM code is translated into its corresponding Assembly instructions. Collectively, these innovations enable compositional verification of security invariants without sacrificing performance. We validate üSpark by building and verifying security invariants of an existing open-source commodity x86 micro-hypervisor and several of its extensions, and demonstrating only minor performance overhead with low verification costs.

Javier Camara, David Garlan, Gabriel Moreno, Bradley Schmerl.  2016.  Evaluating Trade-offs of Human Involvement in Self-adaptive Systems. Managing Trade-offs in Adaptable Software Architectures.

Software systems are increasingly called upon to autonomously manage their goals in changing contexts and environments, and under evolving requirements. In some circumstances, autonomous systems cannot be fully automated but instead cooperate with human operators to maintain and adapt themselves. Furthermore, there are times when a choice should be made between doing a manual or automated repair. Involving operators in self-adaptation should itself be adaptive, and consider aspects such as the training, attention, and ability of operators. Not only do these aspects change from person to person, but they may also change for the same person. These aspects make the choice of whether to involve humans non-obvious. Self-adaptive systems should trade off whether to involve operators, taking these aspects into consideration along with the other business qualities they are attempting to achieve. In this chapter, we identify the various roles that operators can perform in cooperating with self-adapting systems. We focus on humans as effectors, performing tasks that are difficult or infeasible to automate. We describe how we modified our self-adaptive framework, Rainbow, to involve operators in this way, which involved choosing suitable human models and integrating them into the existing utility trade-off decision models of Rainbow. We use probabilistic modeling and quantitative verification to analyze the trade-offs of involving humans in adaptation, and complement our study with experiments to show how different business preferences and modalities of human involvement may result in different outcomes.
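
A minimal expected-utility comparison conveys the flavor of the trade-off being analyzed; Rainbow's actual analysis uses probabilistic model checking over richer models, and the probabilities and utility values below are illustrative assumptions.

    # Choosing between automated repair and handing the task to an operator,
    # parameterized by how likely that operator is to succeed.

    def expected_utility(success_prob, utility_success, utility_failure):
        return success_prob * utility_success + (1 - success_prob) * utility_failure

    def choose_effector(operator_success_prob):
        """Pick automated or human repair by expected utility."""
        automated = expected_utility(0.70, utility_success=1.0, utility_failure=0.2)
        human = expected_utility(operator_success_prob, utility_success=1.0,
                                 utility_failure=0.1)   # failed handoffs cost more
        return "human" if human > automated else "automated"

    print(choose_effector(0.95))   # human     (a trained, attentive operator)
    print(choose_effector(0.50))   # automated (a distracted or novice operator)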

Waqar Ahmad, Christian Kästner, Joshua Sunshine, Jonathan Aldrich.  2016.  Inter-app Communication in Android: Developer Challenges. 2016 IEEE/ACM 13th Working Conference on Mining Software Repositories. :177-188.

The Android platform is designed to support mutually untrusted third-party apps, which run as isolated processes but may interact via platform-controlled mechanisms, called Intents. Interactions among third-party apps are intended and can contribute to a rich user experience, for example, the ability to share pictures from one app with another. The Android platform presents an interesting point in a design space of module systems that is biased toward isolation, extensibility, and untrusted contributions. The Intent mechanism essentially provides message channels among modules, in which the set of message types is extensible. However, the module system has design limitations including the lack of consistent mechanisms to document message types, very limited checking that a message conforms to its specifications, the inability to explicitly declare dependencies on other modules, and the lack of checks for backward compatibility as message types evolve over time. In order to understand the degree to which these design limitations result in real issues, we studied a broad corpus of apps and cross-validated our results against app documentation and Android support forums. Our findings suggest that design limitations do indeed cause development problems. Based on our results, we outline further research questions and propose possible mitigation strategies.