Operationalizing Contextual Integrity

Project Details

Lead PI: Serge Egelman
Performance Period: Jan 01, 2018 - Jan 01, 2018
Institutions: International Computer Science Institute; Cornell Tech
Sponsor: National Security Agency

According to Nissenbaum's theory of contextual integrity (CI), protecting privacy means ensuring that personal information flows appropriately; it does not mean that no information flows (e.g., confidentiality) or that it flows only if the information subject allows it (e.g., control). A flow is appropriate if it conforms to legitimate, contextual informational norms. Contextual informational norms prescribe information flows in terms of five parameters: the actors (sender, subject, recipient), the information type, and the transmission principle. Actors and information types range over the respective contextual ontologies. Transmission principles (a term introduced by the theory) range over the conditions or constraints under which information flows: for example, confidentially, as mandated by law, with notice, with consent, or in accordance with the subject's preference. The theory holds that our privacy expectations are a product of informational norms: people judge particular information flows as respecting or violating privacy according to whether, to a first approximation, those flows conform to contextual informational norms. When they do, we say contextual integrity has been preserved.
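The five-parameter structure of a contextual informational norm can be made concrete with a small sketch. The following is a hypothetical illustration, not the project's actual formalism: a norm is modeled as a template over the five parameters, where an unset parameter (None) leaves that dimension unconstrained, and a flow conforms when every constrained parameter matches. All class and field names here are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Flow:
    """One information flow, described by CI's five parameters."""
    sender: str
    subject: str
    recipient: str
    info_type: str
    transmission_principle: str

@dataclass(frozen=True)
class Norm:
    """A contextual informational norm; None leaves a parameter unconstrained."""
    sender: Optional[str] = None
    subject: Optional[str] = None
    recipient: Optional[str] = None
    info_type: Optional[str] = None
    transmission_principle: Optional[str] = None

    def permits(self, flow: Flow) -> bool:
        # The flow conforms if every constrained parameter matches exactly.
        pairs = [
            (self.sender, flow.sender),
            (self.subject, flow.subject),
            (self.recipient, flow.recipient),
            (self.info_type, flow.info_type),
            (self.transmission_principle, flow.transmission_principle),
        ]
        return all(c is None or c == v for c, v in pairs)

# Example: a healthcare-context norm permitting medical information to flow
# from patient to physician under confidentiality.
norm = Norm(sender="patient", recipient="physician",
            info_type="medical", transmission_principle="confidentiality")

flow_ok = Flow("patient", "patient", "physician", "medical", "confidentiality")
flow_bad = Flow("patient", "patient", "advertiser", "medical", "sale")

print(norm.permits(flow_ok))   # True
print(norm.permits(flow_bad))  # False
```

In a real treatment the parameters would range over contextual ontologies rather than flat strings, and transmission principles would be richer than equality checks; the sketch only shows how the five parameters jointly determine conformance.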

The theory has been recognized in policy arenas, has been formalized, has guided empirical social science research, and has shaped system development. Yet despite resolving many longstanding privacy puzzles and showing promise in practical realms, its direct application to pressing needs of design and policy has proven challenging. One challenge is that the theory requires knowledge of data flows, which in practice systems may be unable to provide, particularly once data leaves a device. Bridging theory and practice, in this case grounding scientific research and design practice in the theory of CI, is nevertheless tractable: with sufficient effort devoted to operationalizing the relevant concepts, CI could enhance our methodological toolkit for studying individuals' understandings and valuations of privacy in relation to data-intensive technologies, and could yield principles to guide design.

In our view, capturing people's complex attitudes toward privacy, including expectations and preferences in situ, will require methodological innovation and new techniques that apply the theory of contextual integrity. These methodologies and techniques must accommodate the five independent parameters of contextual norms, scale to the diverse contexts in which privacy decision-making takes place, and be sensitive not only to the variety of preferences and expectations within respective contexts but also to the distinction between preferences and expectations. What we learn about privacy attitudes through such methods should serve the discovery and identification of contextual informational norms, and should yield results rigorous enough to serve as a foundation for the design of effective privacy interfaces. The first outcome informs public policy and law about what people generally expect and what is generally viewed as objectionable; the second informs designers not only about mechanisms that help people make informed decisions, but also about what substantive constraints on flow should or could be implemented in design.

Instead of ubiquitous "notice and choice" regimes, the project will aim to identify situations where clear norms, for example those identified through careful study, can be embedded in technology (systems, applications, platforms) as constraints on flow; where no such norms emerge, variations may be selected according to user preferences. Thus, the project will yield a set of practical, usable, and scalable technologies and tools applicable to both existing and future technologies, thereby providing a scientific basis for future privacy research.
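The "norms as constraints, preferences as fallback" idea described above can be sketched in a few lines. This is a hypothetical illustration under simplified assumptions (flows reduced to an information-type/recipient pair; norms as a lookup table); all names and example data are invented for the sketch, not drawn from the project.

```python
def decide(flow, norms, user_allows):
    """Return (decision, basis) for a proposed information flow.

    norms: dict mapping a flow description to True (permitted) or
           False (prohibited), representing clear contextual norms
           embedded in the system as hard constraints.
    user_allows: callable consulted only when no norm covers the flow.
    """
    if flow in norms:
        return norms[flow], "norm"          # clear norm: enforce it
    return user_allows(flow), "preference"  # no clear norm: defer to the user

# Example: location flows to emergency services are normatively permitted,
# while contact-list flows to advertisers are normatively prohibited.
norms = {
    ("location", "emergency-services"): True,
    ("contacts", "advertiser"): False,
}

# User preferences fill the gaps the norms leave open.
prefs = {("usage-stats", "developer"): True}
user_allows = lambda flow: prefs.get(flow, False)

print(decide(("contacts", "advertiser"), norms, user_allows))    # (False, 'norm')
print(decide(("usage-stats", "developer"), norms, user_allows))  # (True, 'preference')
```

The design point is the ordering: a user preference can never override a clear contextual norm, and the "notice and choice" machinery is invoked only where the empirical study of norms comes up empty.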

Serge Egelman is Research Director of the Usable Security & Privacy Group at the International Computer Science Institute (ICSI) and also holds an appointment in the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley. He leads the Berkeley Laboratory for Usable and Experimental Security (BLUES), which is the amalgamation of his ICSI and UCB research groups. Serge's research focuses on the intersection of privacy, computer security, and human-computer interaction, with the specific aim of better understanding how people make decisions surrounding their privacy and security, and then creating data-driven improvements to systems and interfaces. This has included human subjects research on social networking privacy, access controls, authentication mechanisms, web browser security warnings, and privacy-enhancing technologies. His work has received multiple best paper awards, including seven ACM CHI Honorable Mentions, the 2012 Symposium on Usable Privacy and Security (SOUPS) Distinguished Paper Award for his work on smartphone application permissions, as well as the 2017 SOUPS Impact Award, and the 2012 Information Systems Research Best Published Paper Award for his work on consumers' willingness to pay for online privacy. He received his PhD from Carnegie Mellon University and prior to that was an undergraduate at the University of Virginia. He has also performed research at NIST, Brown University, Microsoft Research, and Xerox PARC.