Theme: Designed-In Security

Access Control Policy Tool (ACPT), An Assurance Tool That Combines Symbolic Model Checking with Combinatorial Coverage

ABSTRACT

The correct specification of access control (AC) policies is a very challenging problem. It becomes increasingly severe as a system grows more complex and is deployed to manage large amounts of sensitive or private information and resources. It is therefore important to provide a tool that can thoroughly and automatically check AC policies for syntactic and semantic faults before they are deployed for operation. This presentation demonstrates NIST's effort to develop such a tool, the Access Control Policy Tool (ACPT), which provides (1) GUI templates for composing AC policies, (2) property checking of AC policy models through the SMV (Symbolic Model Verifier) model checker, (3) complete test suites generated by NIST's combinatorial testing tool ACTS, and (4) XACML policy generation as output of the verified model. Through these four major functions, ACPT performs both syntactic and semantic verification and provides an interface for composing and combining AC rules into AC policies; ACPT assures the efficiency of specified AC policies and eliminates faulty AC policies that would leak private information or prohibit legitimate information sharing.
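To illustrate the kind of check ACPT automates, here is a minimal, hypothetical Python sketch (not ACPT itself, which drives the SMV model checker and ACTS): it exhaustively verifies a safety property over every role/action combination of a toy policy, the way a model checker or a generated test suite would. The policy, roles, and property are invented for illustration.

```python
# Toy access-control model: each role maps to the set of actions it permits.
from itertools import product

ROLES = {"doctor": {"read", "write"}, "nurse": {"read"}, "clerk": set()}
ACTIONS = {"read", "write"}

def permit(role, action):
    """Toy AC policy decision: permit iff the role grants the action."""
    return action in ROLES.get(role, set())

def check_property():
    """Safety property: no role other than 'doctor' may ever 'write'.
    Exhaustively enumerate all combinations and collect violations."""
    return [(r, a) for r, a in product(ROLES, ACTIONS)
            if a == "write" and r != "doctor" and permit(r, a)]

violations = check_property()  # empty list means the property holds
```

A real policy has far more attributes, which is why ACPT leans on symbolic model checking and combinatorial test generation rather than brute-force enumeration.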

Vincent Hu
License: Creative Commons 2.5

Verifying the Trusted Platform Module

ABSTRACT

As the hardware root of trust for trusted computing systems, the Trusted Platform Module (TPM) is critical in establishing system trust. The TPM 1.2 and its associated Trusted Software Stack (TSS) are described by 700 pages of precise, textual documentation but have not been formally modeled or verified. Given the TPM's role as the root of trust, a formal treatment is warranted. To address assurance issues in the TPM, we are developing and verifying a model of the TPM 1.2 using PVS.

The TPM is a cryptographic co-processor defined by the Trusted Computing Group to support trusted computing. It provides three subsystems for: (i) high-integrity measurement storage; (ii) quote generation and delivery; and (iii) sealing data and keys to TPM state. Platform Configuration Registers (PCRs) store and extend hashes in a manner that prevents unauthorized reset or modification. The quote generation system delivers quotes to appraisers using remote attestation to preserve anonymity of appraisal targets and ensure measurement integrity. Finally, sealing session keys to PCR states using TPM resident keys binds a secret to a TPM and a TPM state.
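The PCR extend operation described above can be sketched concretely. In TPM 1.2, a PCR is a 20-byte SHA-1 register that can only be extended, never directly written: the new value is the SHA-1 hash of the old value concatenated with the new measurement. This is a standalone Python illustration of that rule, not code from the authors' PVS model.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: new PCR = SHA-1(old PCR || measurement).
    Values accumulate; software cannot reset or overwrite a PCR directly."""
    return hashlib.sha1(pcr_value + measurement).digest()

pcr = bytes(20)  # PCRs start as 20 zero bytes at platform reset
pcr = pcr_extend(pcr, hashlib.sha1(b"bootloader").digest())
pcr = pcr_extend(pcr, hashlib.sha1(b"kernel").digest())
# Extending the same measurements in a different order yields a different
# PCR value, which is what makes the measurement log tamper-evident.
```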

Our objectives in modeling the TPM are to specify abstract instruction requirements, to establish correctness of protocols and instruction sequences, and to establish correctness of detailed instruction definitions with respect to the abstract requirements. We use an axiomatic approach to specify each instruction as a state transformation. A state monad sequences instruction execution, allowing examination of protocols. Our ultimate goal is defining a bisimulation between the abstract and concrete models to demonstrate congruence between them.
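The instruction-as-state-transformation idea can be sketched outside PVS. In this hypothetical Python analogue (names like `extend`, `pcr_read`, and `run` are mine, not the authors' definitions), each instruction is a function from a TPM state to a (result, new state) pair, and a sequencing combinator plays the role of the state monad:

```python
import hashlib

def extend(index, digest):
    """Instruction: extend PCR `index` with `digest`; returns no result."""
    def instr(state):
        pcrs = dict(state["pcrs"])  # copy: instructions are pure transforms
        pcrs[index] = hashlib.sha1(pcrs[index] + digest).digest()
        return None, {**state, "pcrs": pcrs}
    return instr

def pcr_read(index):
    """Instruction: read PCR `index`; state is unchanged."""
    return lambda state: (state["pcrs"][index], state)

def run(instructions, state):
    """Thread the state through a sequence of instructions (state monad),
    collecting each instruction's result."""
    results = []
    for instr in instructions:
        result, state = instr(state)
        results.append(result)
    return results, state

init = {"pcrs": {0: bytes(20)}}
results, final = run([extend(0, b"m1"), pcr_read(0)], init)
```

Sequencing instructions this way is what lets a protocol (a particular instruction sequence) be examined as a single composite state transformation.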

The proposed presentation will focus on the abstract requirements model, individual instruction definition and verification, and attestation protocol verification. We will focus on modeling methodology, the use of PVS, and verification results thus far. We will discuss how individual TPM instructions are defined, how TPM state is represented, how the state monad is used for instruction sequencing, and how we use our model for verification of an attestation protocol that uses a Privacy CA to preserve anonymity of appraisal targets.

BIO:

Dr. Perry Alexander is a Professor of Electrical Engineering and Computer Science at the University of Kansas. His research interests include formal modeling, language semantics, systems-level design and component retrieval. He is a Senior Member of IEEE and is past president of the Engineering of Computer-Based Systems Technical Committee. He is the principal architect of the Rosetta systems-level design language. He has published over 90 refereed papers, has won 15 teaching awards, and presented numerous invited talks. The professional achievement he is most proud of is his students.

Perry Alexander
Brigid Halling
License: Creative Commons 2.5


Secure Virtualization with Formal Methods

ABSTRACT

Virtualization software is increasingly a part of the infrastructure behind our online activities. The companies and institutions that produce online content are taking advantage of the "infrastructure as a service" cloud computing model to obtain cheap and reliable computing power. These content producers -- news media organizations, social networking sites, online shopping companies, universities, and government agencies -- rent compute time from the cloud provider, during which they can run their entire software stack on the cloud provider's server. The cloud providers offer cheap computing power by using virtualization technology to let multiple client operating systems share a single physical machine. The virtualization layer also provides isolation between guests, protecting each from unwanted access by its co-tenants.

In this talk I will discuss our research toward verifying the security of this virtualization layer, and in particular, the isolation properties. I will discuss some of the challenges and opportunities in using traditional formal methods to verify security properties at this level of the software stack, and I will present a new methodology, based on model checking, for handling one of the biggest challenges: verification in the face of very large data structures. One weakness of using model checking is that the verification result is only as good as the model; I will also present work we've done to give stronger confidence in the validity of the model.

Virtualization software has a variety of security-critical applications, from malware analysis, to sandboxing potentially dangerous applications, to hosting virtual machines, and ensuring it is correct is important. However, such software often uses large data structures, such as page tables and caches, to keep track of emulated state, and these can have a large state space, making automated verification based on straightforward state-space exploration infeasible. My co-authors and I developed a methodology we call S2W to facilitate automated verification of such systems. The methodology uses a combination of abstraction and bounded model checking and allows for the verification of safety properties over large or unbounded arrays. We evaluate this methodology using six case studies, including verification of the address translation logic in the Bochs x86 emulator and verification of security properties of several hypervisor models. This work was published in the proceedings of Formal Methods in Computer-Aided Design (FMCAD) 2012.
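The combination of abstraction and bounded model checking can be sketched in miniature. This hypothetical Python example (not the S2W tool; the state, transitions, and property are invented) explores all states reachable within k steps and checks a safety property. The large page table is abstracted to a single representative entry, which is the move that sidesteps the state-space blowup of modeling every slot:

```python
def bmc(init, transitions, safe, k):
    """Bounded model check: explore up to k steps from `init`; return
    (property holds within the bound, depth reached)."""
    frontier, seen = {init}, {init}
    for step in range(k):
        if any(not safe(s) for s in frontier):
            return False, step           # counterexample at this depth
        nxt = {t(s) for s in frontier for t in transitions}
        frontier = nxt - seen            # only explore newly seen states
        seen |= nxt
    return all(safe(s) for s in frontier), k

# Abstract state: (guest_mapped, host_reserved) flags for one
# representative page-table entry.
init = (False, True)
transitions = [
    lambda s: (True, s[1]),   # guest maps the representative entry
    lambda s: (False, s[1]),  # guest unmaps it
]
safe = lambda s: s[1]         # safety: host-reserved bit is never cleared
ok, depth = bmc(init, transitions, safe, k=5)
```

Because the transitions never touch the host-reserved flag, the property holds for every reachable abstract state; a transition that cleared it would be reported as a counterexample with the depth at which it appears.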

Often, formal verification proceeds by first creating a model of the system under consideration and then verifying properties about the model. This is useful when direct verification of the code is not feasible; however, the resulting verification guarantees are only as good as the model. If the model does not accurately represent the system under consideration, properties proven true of the model may or may not be true of the system. We encountered this limitation in the S2W work above: however diligent we may be, a manually constructed model of complex software is likely to have bugs. In the second half of this talk I will discuss a technique for validating a manually built model against the code in order to gain confidence that the model is correct. We use symbolic execution to learn the execution behaviors of the code, and we use a decision procedure to check whether every behavior in the code exists in the model. We evaluate this methodology using several case studies, including a validation of the model used in the S2W project. Despite many person-hours spent building, studying, and using that model by at least two different researchers during the work on S2W, the model validation technique uncovered five different errors in the model. Modeling a system can make verification more user-friendly and is often a good way to make verification practical; model validation is important for making the results of that verification more meaningful.
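The validation idea, checking that every behavior of the code also appears in the model, can be shown with a toy Python analogue. Here exhaustive input enumeration stands in for symbolic execution plus a decision procedure, and both functions are invented for illustration (the model carries a deliberately seeded bug on negative inputs):

```python
def code(x):
    """The 'real' implementation: clamp negatives to 0, keep low byte."""
    return x & 0xFF if x >= 0 else 0

def model(x):
    """Hand-written model with a seeded bug: Python's % maps negatives
    into 0..255 instead of clamping them to 0, so it disagrees with
    `code` on negative inputs."""
    return x % 256

def validate(code, model, inputs):
    """Return the inputs where the model's behavior diverges from the
    code's behavior (counterexamples to model correctness)."""
    return [x for x in inputs if code(x) != model(x)]

counterexamples = validate(code, model, range(-4, 260))
# The four negative inputs expose the modeling error; a validated model
# yields an empty counterexample list.
```

A real validation works over symbolic path conditions rather than concrete inputs, so it covers all behaviors without enumerating them, but the subset check is the same.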

Cynthia Sturton
Rohit Sinha
Petros Maniatis
Sanjit A. Seshia
David Wagner
License: Creative Commons 2.5
