The 12th annual HCSS Conference was held May 8-11, 2012 in Annapolis, Maryland. The program agenda and slide presentations are listed below.
The High Confidence Software and Systems (HCSS) Conference was created to support the interchange of ideas among researchers, practitioners, and research managers from Government, research labs, and industry practice. HCSS provides a forum for dialogue centered upon the development of scientific foundations together with innovative and enabling software and hardware technologies for the assured engineering of complex computing systems. These systems, which include networked and cyber-physical systems, must be capable of interacting correctly, safely, and securely with humans and the physical world even while they operate in changing and possibly malicious environments with unforeseen conditions. In many cases, they must be certifiably dependable.
The technical emphasis of the HCSS conference is on mathematically-based tools and techniques and on scientific foundations supporting evidence creation and systems assurance and security. The HCSS vision is one of engaging and growing a community--including researchers and skilled practitioners--that is focused around the creation of dependable systems that are capable, efficient, and responsive; that can work in dangerous or inaccessible environments; that can support large-scale, distributed coordination; that augment human capabilities; that can advance the mission of national security; and that enhance quality of life, safety, and security.
This year's conference will keep with the tradition of featuring two kinds of talks:
The following themes and associated talks are illustrative of the range of topics that will be covered at this year's Conference.
Technology Transfer
Formal Synthesis
Compositional Reasoning
Designed-In Security
Presented as part of the 2012 HCSS conference.
Abstract:
DO-178 ("Software Considerations in Airborne Systems") provides certification guidance for the onboard software in commercial aircraft. The technologies for developing and verifying software have changed significantly since the publication of DO-178 version B in 1992. After more than six years of work, a committee of industry and government experts has produced DO-178C, incorporating new guidance related to model-based and object-oriented software, as well as formal methods. This presentation will provide an overview of software certification in commercial aircraft and describe the changes incorporated in DO-178C with a focus on formal methods.
Biography:
Darren Cofer is a Principal Systems Engineer in the Trusted Systems group of ATC. He earned his PhD in Electrical and Computer Engineering from The University of Texas at Austin. His principal area of expertise is developing and applying advanced analysis methods and tools for verification and certification of high-integrity systems. His background includes work with formal methods for system and software analysis, the design of real-time embedded systems for safety-critical applications, and the development of nuclear propulsion systems in the U.S. Navy. He is currently the principal investigator on Rockwell Collins' META project with DARPA, and serves on RTCA committee SC-205 tasked with developing DO-178C, providing updated certification guidance for airborne software. He is an Associate Technical Editor for Control Systems Magazine and is a Senior Member of the IEEE.
Presented as part of the 2012 HCSS conference.
Abstract:
Computer science researchers in the programming languages and formal verification communities, among others, have produced a variety of automated assistance and verification tools and techniques for formal reasoning: parsers, evaluators, proof-authoring systems, software and hardware. These leading-edge advances remain largely underutilized by large populations of potential users that may benefit from them. Among these are instructors and students engaged in the undergraduate-level instruction and study of mathematics.
Tools such as MATLAB and Mathematica are utilized for solving problems by both instructors and students in mathematics courses at the undergraduate level. Other existing tools such as Isabelle/Isar [12], Mizar [11], Coq, or even basic techniques such as monomorphic type checking, computation of congruence closures, and automated logical inference, provide an unambiguous way to represent and verify formal arguments, from chains of basic algebraic manipulations in arithmetic all the way up to graduate-level mathematics topics. What impediments cause these tools to be so seldom used in presenting and assembling even basic algebraic arguments and proofs in undergraduate mathematics courses? Impediments faced by instructors and students who may wish to adopt existing automated formal reasoning assistance and verification tools and techniques include (we cite examples of related work that shares our motivation in addressing each impediment listed): syntaxes that may be unfamiliar and entirely distinct from the syntax used in textbooks and lecture materials [7]; a lack of high-level domain-specific libraries that would make formal arguments as manageable as those found in a text-book (or those presented in the classroom) [1, 3, 10]; a potentially idiosyncratic semantics focused on low-level logical correctness rather than practical high-level relationships typically explored in a mathematics course; a steep learning curve in employing a tool or technique [2, 5]; and logistical difficulties in setting up a working environment letting instructors and students use the same tools without much supporting IT infrastructure [5, 6].
Building on earlier work in assembling and evaluating user-friendly and accessible formal verification tools for research and classroom instruction [4, 8, 9], we have recently attempted to address many of these issues while making more manageable the task of utilizing existing formal reasoning assistance and verification techniques within the classroom. We have assembled a web-based integrated formal reasoning environment for classroom instruction that incorporates several standard techniques drawn from work done by the programming languages and formal verification communities over the last several decades. These techniques include basic evaluation of expressions, monomorphic type inference, basic verification of a subset of logical formulas in propositional and first-order logic, and an algorithm for computing congruence closures. This interactive web-based environment can run entirely within a standard web browser, and can be seamlessly integrated with a web page of lecture materials and homework assignments. The lectures and assignments contain within them formal arguments (including both complete examples and problems to be completed) that can be evaluated and/or automatically verified interactively. The environment provides an explicit library of logical definitions and formulas that students can utilize to complete assignments; these formulas can be browsed and filtered by students within the environment. The environment provides interactive and instant verification feedback about logical validity as well as an interactive, exhaustive list of all inferred properties for a given formal argument input.
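As a point of reference for one of the smaller building blocks mentioned above, the sketch below shows a standard union-find-based congruence closure over ground terms. It is only an illustration of the technique, not the environment's actual implementation, and the term representation is hypothetical.

```python
# Minimal congruence-closure sketch (illustrative only, not the tool's code).
# Terms are either strings ("a", "b") or tuples ("f", arg1, ...).

def subterms(t, acc):
    acc.add(t)
    if isinstance(t, tuple):
        for arg in t[1:]:
            subterms(arg, acc)

def congruence_closure(equations, query):
    terms = set()
    for lhs, rhs in equations + [query]:
        subterms(lhs, terms)
        subterms(rhs, terms)
    parent = {t: t for t in terms}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path compression
            t = parent[t]
        return t

    def union(s, t):
        parent[find(s)] = find(t)

    for lhs, rhs in equations:
        union(lhs, rhs)

    # Propagate congruence: f(x) ~ f(y) whenever x ~ y, until fixpoint.
    changed = True
    while changed:
        changed = False
        apps = [t for t in terms if isinstance(t, tuple)]
        for i, s in enumerate(apps):
            for t in apps[i + 1:]:
                if find(s) == find(t):
                    continue
                if s[0] == t[0] and len(s) == len(t) and \
                   all(find(a) == find(b) for a, b in zip(s[1:], t[1:])):
                    union(s, t)
                    changed = True
    return find(query[0]) == find(query[1])

# From a = b and f(a) = c, conclude f(b) = c.
print(congruence_closure([("a", "b"), (("f", "a"), "c")], (("f", "b"), "c")))  # True
```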
The environment has been utilized within the classroom for an undergraduate-level course in linear algebra, both by the instructor in presenting examples during lectures, and by students when completing homework assignments. The focus of this work is not on the expressiveness or performance of the underlying tools and techniques within the environment (most examples used in such a course are simple); rather, it is on the design and practical evaluation of an integrated environment of basic, existing tools and techniques that is accessible (logistically and semantically) to non-expert end-users. We found that even basic capabilities are helpful in teaching students what a formal argument is and how to assemble one by examining possible formulas and applying them while working towards a goal. Students were also able to transfer to an exam setting the techniques they used in assembling formal arguments within the environment.
Biography:
Andrei Lapets is a postdoctoral fellow at the Hariri Institute at Boston University. His research interests lie in finding ways to improve and measure the usability and accessibility of formal reasoning assistance tools and techniques. In 2011 he received a Ph.D. in Computer Science from Boston University, and he also holds S.M. and A.B. degrees in Computer Science and Mathematics from Harvard University.
References:
[1] A. Abel, B. Chang, and F. Pfenning. Human-Readable, Machine-Verifiable Proofs for Teaching Constructive Logic. In U. Egly, A. Fiedler, H. Horacek, and S. Schmitt, editors, PTP '01: IJCAR Workshop on Proof Transformations, Proof Presentations and Complexity of Proofs, Siena, Italy, 2001.
[2] A. Asperti, C.S. Coen, E. Tassi, and S. Zacchiroli. User Interaction with the Matita Proof Assistant. Journal of Automated Reasoning, 39(2): 109-139, 2007.
[3] C.E. Brown. Verifying and Invalidating Textbook Proofs Using Scunak. In MKM '06: Mathematical Knowledge Management, pages 110-123, Wokingham, England, 2006.
[4] V. Ishakian, A. Lapets, A. Bestavros, and A. Kfoury. Formal Verification of SLA Transformations. In Proceedings of CloudPerf 2011: IEEE International Workshop on Performance Aspects of Cloud and Service Virtualization, Washington, D.C., USA, July 2011.
[5] D. Jackson. Alloy: A Lightweight Object Modeling Notation. Software Engineering and Methodology, 11(2):256-290, 2002.
[6] C. Kaliszyk. Web Interfaces for Proof Assistants. Electronic Notes in Theoretical Computer Science, 174(2):49-61, 2007.
[7] F. Kamareddine and J.B. Wells. Computerizing Mathematical Text with MathLang. Electronic Notes in Theoretical Computer Science, 205:5-30, 2008.
[8] A. Lapets. User-Friendly Support for Common Concepts in a Lightweight Verifier. In Proceedings of VERIFY-2010: The 6th International Verification Workshop, Edinburgh, UK, July 2010.
[9] A. Lapets and A. Kfoury. A User-friendly Interface for a Lightweight Verification System. In Proceedings of UITP'10: 9th International Workshop on User Interfaces for Theorem Provers, Edinburgh, UK, July 2010.
[10] J.H. Siekmann, C. Benzmüller, A. Fiedler, A. Meier, and M. Pollet. Proof Development with OMEGA: sqrt(2) Is Irrational. In LPAR, pages 367-387, 2002.
[11] A. Trybulec and H. Blair. Computer Assisted Reasoning with MIZAR. In Proceedings of the 9th IJCAI, pages 26-28, Los Angeles, CA, USA, 1985.
[12] M. Wenzel. The Isabelle/Isar Reference Manual, January 2011.
Presented as part of the 2012 HCSS conference.
Abstract:
This talk will describe our attempt to integrate data and services within a large organization or across the Internet. We have developed a theory, called Orc, in which we specify the sources of data and computations and how to orchestrate their executions (concurrently, one after the other, only when the majority of data is available, ...). The Orc programming language has a very small kernel, using external libraries of sites for support functions. The language starts with concurrency as the defining characteristic of programs. With its hierarchical and recursive combination facilities, it provides a model of structured concurrency. It can be used to orchestrate a small application or a giant one with many millions of threads, running in real time, from milliseconds to months. We have developed an experimental implementation of the language. The language, its documentation, and a web interface are available at www.orc.csres.utexas.edu, where programs can also be submitted for execution.
Motivation for the Work
The Internet promises to improve our lives by allowing us to connect to data and services that support our daily existence. It promises to improve business processes by integrating data from a variety of sources within specific applications. This vision has not yet been fully realized. Instead, our infrastructure is currently built from numerous closed-world systems, each built by a single individual or business entity, which enclose and protect data and computation, while providing access to their services through limited human-oriented interfaces. As a result, users are forced to manually mediate between these services, thus wasting time and effort.
To illustrate this situation, consider the following scenario. A patient receives an electronic prescription for a drug from a doctor. The patient compares prices at several near-by pharmacies, and chooses the cheapest one to fill the prescription. He pays the pharmacy and receives an electronic invoice which he sends to the insurance company for reimbursement with instructions to deposit the amount in his bank account. Eventually, he receives a confirmation from his bank. The entire computation is mediated by the patient who acquires data from one source, does some minor computations and sends data to other sources. The mediation is entirely manual.
This computing scenario is repeated millions of times a day in diverse areas such as business computing, e-commerce, health care and logistics. In spite of the extraordinary advances in mobile computing, human participation is currently required in every major step in most applications. This is because the infrastructure for efficient mediation is largely absent, thus contributing to cost and delay in these applications. We believe that humans can largely be eliminated, or assigned a supporting role, in many applications. Doing so is not only beneficial in terms of efficiency, but also essential if we are to realize the full potential of the interconnectivity among machines, using the services and data available on the Internet or within a large organization. We would prefer that humans advise and report to the machines, rather than that humans direct the machines in each step. Orc has proved successful in this effort because it has a small number of key concepts and a strong theoretical basis.
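As a rough analogue of the kind of machine-mediated orchestration described above (written here in ordinary Python with hypothetical service stubs rather than in Orc, whose syntax and combinators are not shown), the pharmacy scenario might be scripted as follows: query the pharmacies concurrently, pick the cheapest, then chain the invoice and reimbursement steps.

```python
# Rough Python analogue of the pharmacy orchestration (hypothetical service
# stubs; this is not Orc syntax, only an illustration of machine mediation).
from concurrent.futures import ThreadPoolExecutor

def price_at(pharmacy, prescription):
    # Stand-in for a call to a pharmacy's pricing service.
    quotes = {"CornerRx": 42.00, "MainStreetRx": 37.50, "MegaPharm": 39.95}
    return pharmacy, quotes[pharmacy]

def fill_and_invoice(pharmacy, prescription):
    # Stand-in for filling the prescription and receiving an electronic invoice.
    return {"pharmacy": pharmacy, "amount": price_at(pharmacy, prescription)[1]}

def reimburse(insurer, invoice, account):
    # Stand-in for submitting the invoice and directing the deposit.
    return f"{insurer}: deposit {invoice['amount']:.2f} to {account}"

def orchestrate(prescription):
    pharmacies = ["CornerRx", "MainStreetRx", "MegaPharm"]
    # Query all pharmacies concurrently, then pick the cheapest quote.
    with ThreadPoolExecutor() as pool:
        quotes = list(pool.map(lambda p: price_at(p, prescription), pharmacies))
    cheapest = min(quotes, key=lambda q: q[1])[0]
    invoice = fill_and_invoice(cheapest, prescription)
    return reimburse("ACME Insurance", invoice, "patient-checking")

print(orchestrate("rx-12345"))
```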
Biography:
Jayadev Misra is a professor and holder of the Schlumberger Centennial chair in Computer Science at the University of Texas at Austin. He is known for his work in the area of concurrent programming, with emphasis on rigorous methods to improve the programming process. His work on the UNITY methodology, jointly with Chandy, has been influential in both academia and industry, and has spawned a large number of tools and research projects. He has recently developed a programming language, called "Orc", for concurrent orchestrations of interacting components. He is also spearheading an effort, jointly with Tony Hoare, to automate large-scale program verification. Misra is a fellow of ACM and IEEE. He held the Guggenheim fellowship during 1988-1989. He was the Strachey lecturer at Oxford University in 1996, and held the Belgian FNRS International Chair of Computer Science in 1990. He is a member of the Academy of Distinguished Teachers at the University of Texas at Austin. Misra is a past editor of several journals, including Computing Surveys, the Journal of the ACM, Information Processing Letters, and Formal Aspects of Computing. He is the author of two books: Parallel Program Design: A Foundation (Addison-Wesley, 1988), co-authored with Mani Chandy, and A Discipline of Multiprogramming (Springer-Verlag, 2001).
Presented as part of the 2012 HCSS conference.
Abstract:
Network and embedded security devices including cross-domain solutions, network guards, and message routers have information flow policies that are complex and crucial to fulfilling device requirements. To be used in military and national infrastructure systems, such devices must often be certified to very stringent criteria such as Common Criteria EAL 6/7 and DCID 6/3. Past certification efforts have created models of software code in theorem provers and proved that the models of code comply with security policies - leaving "trust gaps" between source code and models and inhibiting developers from proceeding with verification during the development process.
To remove these trust gaps and to make information flow specification and checking more accessible to developers, it is desirable to have frameworks that support specification of information flow policies directly at the source code level as part of developer workflows. SPARK, a subset of Ada for engineering safety- and security-critical systems, is one of the best commercially available frameworks for formal-methods-supported development of critical software. SPARK provides a source-level information flow specification and checking framework that has been used to support certification arguments on several different information assurance projects.
Our experience with SPARK is derived, in part, from our collaborations with Rockwell Collins, where SPARK is used in security-critical projects including the Janus high-speed cryptography engine and several other embedded information assurance devices. While SPARK provides information flow annotations and associated automated checking mechanisms, industrial experience has revealed that these annotations are not precise enough to specify many desired information flow policies. One key problem is that information assurance applications such as cross-domain solutions often contain information flow policies that are conditional, in the sense that data is allowed to flow between system components only when the system satisfies certain state predicates. However, existing information flow specification and verification environments, including SPARK, can only capture unconditional information flows. Another limitation of the SPARK information flow framework is that arrays are treated as indivisible entities: flows that involve only particular locations of an array have to be abstracted into flows on the whole array. This has substantial practical impact since SPARK does not allow dynamic allocation of memory and hence makes heavy use of arrays to implement complex data structures.
In this talk, we will describe our experience in developing and applying a Hoare logic for information flow that enables precise compositional specification of conditional information flow policies in programs with arrays, and automated deduction algorithms for checking and inferring contracts in an enhanced SPARK information flow contract language that hides the details of the logic from the developer. To provide very high degrees of confidence, our algorithm for verifying code compliance to an information flow contract emits formal certificates of correctness that are checked in the Coq proof assistant. We demonstrate our techniques on a collection of simple information assurance examples.
Although the framework currently targets SPARK, the techniques can be applied to other languages used for high-assurance applications, such as safety-critical subsets of C.
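To make the idea of a conditional information flow policy concrete, here is a deliberately tiny illustration (plain Python, not SPARK, and not the contract syntax discussed in the talk): the classified input may influence the output only when a guard on the system state holds, whereas an unconditional contract could only say that the output may depend on all three inputs.

```python
# Illustration of a conditional information flow policy (plain Python,
# not SPARK contract syntax). The classified input may influence the
# output only when release_approved holds; an unconditional contract
# could only say "output derives from all three inputs", losing the condition.

def route_message(classified: str, unclassified: str, release_approved: bool) -> str:
    if release_approved:
        # Conditional flow: output <- classified, permitted only here.
        return classified
    # Unconditional flow: output <- unclassified.
    return unclassified
```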
Biography:
Dr. John Hatcliff is a University Distinguished Professor at Kansas State University in the Computing and Information Sciences Department. His research areas include software engineering, formal methods, static analysis, security, software certification, and medical device development and integration. He has been a principal investigator on a number of DoD-funded projects from agencies such as the Air Force Office of Scientific Research, Army Research Office, and DARPA. His research has also been supported by the National Science Foundation, Lockheed Martin, Rockwell Collins, Intel, and IBM.
Presented as part of the 2012 HCSS conference.
Biography:
Ms. Dawn C. Meyerriecks currently serves as the Assistant Director of National Intelligence for Acquisition, Technology & Facilities. In this role, she explores and delivers complex technologies underpinning national missions.
Prior to this assignment, she provided senior leadership business and technology consulting direction to government and commercial clients. In addition to consulting, she served on a number of government and commercial advisory boards, including the STRATCOM C2 Advisory Group, the Defense Science Board, the NCTC Advisory Board, the National Academy of Sciences, the Unisys Federal Advisory Board, and the SunFed Advisory Board.
From 2004 - 2006, Ms. Meyerriecks served as the Senior Vice President for Product Technology at AOL. While at AOL, she was responsible for full lifecycle development and integration of all consumer-facing AOL products and services, including the re-launch of aol.com, AOL Instant Messenger, and the open client platform.
Prior to AOL, Ms. Meyerriecks worked for nearly ten years at the Defense Information Systems Agency (DISA), where she was the Chief Technology Officer and Technical Director for the Joint Interoperability and Engineering Organization (JIEO). Her last assignment was to charter and lead a new Global Information Grid (GIG) Enterprise Services organization. Ms. Meyerriecks worked at the Jet Propulsion Laboratory (JPL) as a senior engineer and product manager before her tenure at DISA.
In addition to being named the Government Computer News Department of Defense Person of the Year for 2004, Ms. Meyerriecks has been honored with numerous other awards, including InfoWorld 2002 CTO of the year; Federal Computer Week 2000 Top 100; InfoWorld 2001 CTO of the year for the government sector; the Presidential Distinguished Service Award, November 2001; the Senior Executive Service Exceptional Achievement Awards in 1998, 1999, 2000; and the National Performance Review in August 1996. In November 2001, she was featured in Fortune magazine as one of the top 100 intellectual leaders in the world.
Ms. Meyerriecks earned a Bachelor of Science Degree in Electrical Engineering with a double major in Business from Carnegie Mellon University and a Master of Science in Computer Science from Loyola Marymount University.
Presented as part of the 2012 HCSS conference.
Abstract:
"Invent a better mousetrap and the world will beat a path to your door? Not!"
Great ideas and even great research do not sell themselves. Someone has to convince an organization to turn an idea into a workable application and then convince many people to use it. In a university or research lab, this process is called tech transfer, and it is rarely done well.
From its founding, Microsoft Research placed equal weight on developing new ideas and seeing them put to use. We have not always been successful, but our research innovations are in all of Microsoft's products. In this talk, I will share my experience building software development tools for Microsoft's use and my observations on how tech transfer can succeed.
Biography:
James Larus is a Principal Researcher in Microsoft Research. Larus has been an active contributor to the programming languages, compiler, software engineering, and computer architecture communities. He has published many papers, received 26 US patents, and served on numerous program committees and NSF and NRC panels. His book, Transactional Memory (Morgan & Claypool), appeared in 2007. Larus became an ACM Fellow in 2006.
Larus joined Microsoft Research as a Senior Researcher in 1998 to start and lead the Software Productivity Tools (SPT) group, which developed and applied a variety of innovative program analysis techniques to construct new tools to find software defects. This group's research had considerable impact on the research community (2011 SIGPLAN Most Influential Paper and the 2011 CAV Award), as well as being shipped in Microsoft products such as the Static Driver Verifier, FX/Cop, and other widely-used internal software development tools. Larus became an MSR Research Area Manager for programming languages and tools and started the Singularity research project, which demonstrated how modern programming languages and software engineering techniques can fundamentally improve software architectures. Subsequently, he helped start XCG, an effort in MSR to develop hardware and software support for cloud computing. In XCG, Larus led groups developing the Orleans framework for cloud programming and computer hardware projects.
Before joining Microsoft, Larus was an Assistant and Associate Professor of Computer Science at the University of Wisconsin-Madison, where he published approximately 60 research papers and co-led the Wisconsin Wind Tunnel (WWT) research project with Professors Mark Hill and David Wood. WWT was a DARPA- and NSF-funded project that investigated new approaches to simulating, building, and programming parallel shared-memory computers. Larus's research spanned a number of areas, including new and efficient techniques for measuring and recording executing programs' behavior, tools for analyzing and manipulating compiled and linked programs, programming languages for parallel computing, tools for verifying program correctness, and techniques for compiler analysis and optimization.
Larus received his MS and PhD in Computer Science from the University of California, Berkeley in 1989, and an AB in Applied Mathematics from Harvard in 1980. At Berkeley, Larus developed one of the first systems to analyze Lisp programs and determine how to best execute them on a parallel computer.
Presented as part of the 2012 HCSS conference.
Abstract:
This talk is intended to give a general overview of GTRI's VehicleForge.mil framework and how it was designed to meet the unique needs of the Open Hardware Design community. Specifically it will start with a summary of the VehicleForge.mil vision and outline several of the key challenges associated with the collaborative design of hardware. From these needs and challenges, the VehicleForge architecture will be summarized with an emphasis on those elements that support the needs of a distributed, multi-domain community. The talk will conclude with an overview of our VehicleForge design process and how we use VehicleForge to design and build VehicleForge.
Biography:
Dr. Jack Zentner is a Senior Research Engineer at the Electronic Systems Laboratory (ELSYS) in the Georgia Tech Research Institute (GTRI). His primary area of research is in the development of probabilistic design methods and design tools for large scale, complex systems. In relation to design tool development, he is the Principal Investigator and Lead Systems Engineer for the DARPA VehicleForge.mil project. Over the past several years, he has also been developing and applying strategic decision making methods for a range of government and industry customers to help them map their high-level goals and objectives to a prioritized set of actionable alternatives. In addition to conducting research, Dr. Zentner is a course developer and lecturer for Georgia Tech's Professional Masters in Applied Systems Engineering, and teaches several professional development courses in systems engineering as part of Georgia Tech's Defense Technology Certificate Program. These courses include Advanced Problem Solving Methods and The Fundamentals of Modern Systems Engineering. He earned a B.S. in Mechanical Engineering from Colorado State University in 1999, and an M.S. and Ph.D. in Aerospace Engineering from the Georgia Institute of Technology in 2001 and 2006, respectively.
Slide presentation not yet available.
Presented as part of the 2012 HCSS conference.
Abstract:
Extracting optimal performance from modern computing platforms has become increasingly difficult. The effect is particularly noticeable in computations that are of a mathematical nature, such as those needed in media processing, communication, control, data analytics, or scientific simulations. In these domains, straightforward implementations often underperform by one or even two orders of magnitude. The reason lies in optimizations that are known to be difficult and often impossible for compilers: parallelization, vectorization, and locality optimization.
On the other hand, many mathematical applications spend most of their runtime in well-defined mathematical kernels such as matrix computations, transforms, interpolation, coding, and others. Since these are likely to be needed for decades to come, it makes sense to build program generation systems for their automatic production.
With Spiral we have built such a system for the domain of linear transforms. The key principles underlying Spiral are a domain-specific mathematical language to represent algorithms, rewriting systems for different forms of parallelization and a locality optimization called recursion step closure, and the use of machine learning to adapt code at installation time. Spiral-generated code has proven to be often faster than any human-written code and has been deployed in industry.
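As an example of the kind of breakdown rule Spiral's mathematical language captures (written here in standard matrix notation rather than Spiral's internal syntax), the recursive Cooley-Tukey factorization of the discrete Fourier transform can be expressed as a rewrite rule over structured matrices, where T is a diagonal matrix of twiddle factors and L is a stride permutation:

```latex
\mathrm{DFT}_{nm} \;=\; \left(\mathrm{DFT}_{n} \otimes I_{m}\right)\, T^{nm}_{m}\, \left(I_{n} \otimes \mathrm{DFT}_{m}\right)\, L^{nm}_{n}
```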
More information: www.spiral.net
Biography:
Markus Püschel has been a Professor of Computer Science at ETH Zurich, Switzerland, since 2010. Before that, he was a Professor of Electrical and Computer Engineering at Carnegie Mellon University, where he still has adjunct status. He received his Diploma (M.Sc.) in Mathematics and his Doctorate (Ph.D.) in Computer Science, in 1995 and 1998, respectively, both from the University of Karlsruhe, Germany. At Carnegie Mellon he was a recipient of the Outstanding Research Award of the College of Engineering and the Eta Kappa Nu Award for Outstanding Teaching. He also holds the title of Privatdozent at the University of Technology, Vienna, Austria. In 2009 he cofounded SpiralGen Inc. The work on Spiral has received several recognitions, including being featured as an NSF Discovery.
More information: http://www.inf.ethz.ch/~markusp
Presented as part of the 2012 HCSS conference.
David Melski, melski@grammatech.com, GrammaTech, Inc.
Abstract:
We describe the results of the first phase in the development of PEASOUP, a technology that enables the safe execution of software executables of uncertain provenance. PEASOUP (Preventing Exploits Against Software of Uncertain Provenance) prevents exploits of number-handling weaknesses and memory-safety weaknesses in Software of Uncertain Provenance (SOUP). In addition, PEASOUP prevents any exploit based on arc-injection or code-injection, regardless of the type of vulnerability targeted for attack. PEASOUP advances the state of the art in automatic program analysis, diversification, confinement, and remediation. PEASOUP is joint work of GrammaTech, the University of Virginia, the Georgia Institute of Technology, and Raytheon.
Analysis. The PEASOUP analyzer uses a novel combination of precise run-time analyses [40] with recent techniques for generating high-coverage test suites [19, 39]. The analyzer has the following components: test-case generation, input classification, intermediate representation recovery, variant generation, and variant validation.
Diversification. During Phase I, we developed a novel diversification technique called Instruction-Layout Randomization (ILR) that works by relocating 99.7% of the instructions in a program. Many approaches to diversification suffer from low entropy or from complicated estimates of the introduced entropy based on assumptions that may not hold for some executables. ILR advances the state of the art in program diversification because it does not suffer from these shortcomings: it is simple to show that 99.7% of instructions can be relocated to any one of 2^31 addresses (on a 32-bit machine). This represents a 3.5 orders of magnitude improvement over some of the most common, successful diversification techniques. ILR makes any type of arc-injection attack infeasible, including attacks based on Return-Oriented Programming (ROP). Furthermore, ILR still has the desirable properties of existing diversification techniques: it has low overhead and is easy to deploy.
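As a toy picture of the idea (an illustration only, not the actual ILR rewriter or its data structures), instruction-layout randomization amounts to scattering the program's instructions across the address space and keeping a private fallthrough map that tells the randomized runtime where each instruction's successor now lives:

```python
# Toy sketch of instruction-layout randomization (illustrative only; not the
# actual ILR rewriter). Each instruction is moved to a fresh random address,
# and a private fallthrough map records where its successor now lives.
# Address collisions are ignored for brevity.
import random

def randomize_layout(instructions, address_bits=31):
    """instructions: list of (original_address, instruction_text) pairs."""
    new_addr = {orig: random.getrandbits(address_bits) for orig, _ in instructions}
    layout = {new_addr[orig]: text for orig, text in instructions}
    fallthrough = {new_addr[orig]: new_addr[nxt]
                   for (orig, _), (nxt, _) in zip(instructions, instructions[1:])}
    return layout, fallthrough

program = [(0x1000, "push ebp"), (0x1001, "mov ebp, esp"), (0x1003, "ret")]
layout, fallthrough = randomize_layout(program)
```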
In addition to ILR, we demonstrated that PEASOUP can safely perform Stack-Layout Randomization (SLR) on software binaries. Previous approaches to randomizing the layout of the stack required access to a program's source code.
Confinement. Both ILR and SLR demonstrate the power of PEASOUP's analysis phase to recover high-quality IR. In fact, ILR and SLR demonstrate that it is possible to implement confinement techniques such as control-flow integrity and stack canaries directly on binaries. Previous binary analysis techniques were not adequate for these purposes.
Phase I also demonstrated that it is possible to combine Software Dynamic Translation (SDT) with Secure In-VM Monitoring (SIM). SDT provides PEASOUP with the ability to implement fine-grained security policies. However, the translator itself could be subject to attack. SIM solves this shortcoming by using hardware memory protection to ensure that the translator is not compromised. We believe that this represents an advance in the provable security guarantees that can be made for a fine-grained policy enforcement technique.
Remediation. Finally, PEASOUP demonstrated advances in the state-of-the-art for automatic program remediation during the independent evaluation of the Phase I prototype. Specifically, the padding introduced by PEASOUP's diversification techniques allowed real-world applications (bzip2, ngircd) to continue correct execution when they were provided malicious inputs.
This work is sponsored by the Air Force Research Laboratory (contract #FA8650-10-C-7025).
Biography:
Dr. David Melski is the principal investigator of GrammaTech's PEASOUP effort. Dr. Melski is the vice president of research at GrammaTech. In that role, he oversees GrammaTech's research on automated program analysis and transformation to develop tools and techniques that aid in information assurance, software producibility, reverse engineering of software, and software protection. Melski graduated summa cum laude from the University of Wisconsin in 1994 with a B.S. in Computer Sciences and Russian Studies. He received his Ph.D. in Computer Sciences from the University of Wisconsin in 2002, where his research interests included static analysis, profiling, and profile-directed optimization. His publications include an invited submission to the Journal of Theoretical Computer Science on the interconvertibility of a class of set constraints and context-free reachability. Melski's Ph.D. thesis presented a framework for developing interprocedural path-profiling techniques, and examined the use of path profiles for automatic program optimization. While at Wisconsin, Melski worked as a research assistant and was twice awarded the CISCO fellowship.
Presented as part of the 2012 HCSS conference.
Abstract:
The VIBRANCE (Vulnerabilities in Bytecode Removed by Analysis, Nuanced Confinement, and Diversification) project aims at constructing a tool that automatically hardens Java bytecode to make it resistant to certain classes of attacks. VIBRANCE uses static and dynamic analysis to find vulnerable code, run-time confinement to prevent exploits of the vulnerable code, and diversification to increase the difficulty of attacks. The current version of the VIBRANCE tool provides comprehensive and precise protection from injection (e.g. SQL) and other tainted-data attacks. For a large class of vulnerabilities, the protection added by VIBRANCE blocks the attacks and safely continues execution. Besides addressing specific CWEs, the VIBRANCE tool can also conservatively handle unforeseen weaknesses. Most of the VIBRANCE tool's protections are not hard-wired in the implementation, but can be customized via a configuration file that enables fine-tuning of the checks to be performed on tainted data and of the actions to be taken when those checks fail. The VIBRANCE project is joint work of Kestrel Institute, Kestrel Technology, and MIT CSAIL.
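The following sketch illustrates, in generic Python rather than VIBRANCE's actual Java bytecode instrumentation, the general style of confinement the abstract describes: tainted data is checked at a sensitive sink, and the configured action (block, sanitize, or log) decides how execution continues. The policy values and function names are made up for illustration.

```python
# Generic illustration of tainted-data confinement at a SQL sink
# (not VIBRANCE's mechanism or configuration format; names are made up).
POLICY = {"on_violation": "sanitize"}        # could also be "block" or "log"

def check_tainted_sql_argument(value: str) -> str:
    suspicious = ["'", ";", "--", "/*"]
    if any(token in value for token in suspicious):
        if POLICY["on_violation"] == "block":
            raise ValueError("tainted data rejected at SQL sink")
        if POLICY["on_violation"] == "sanitize":
            for token in suspicious:
                value = value.replace(token, "")
    return value

def find_user(cursor, user_name: str):
    # A parameterized query is the baseline defense; the check above shows
    # the "inspect tainted data at the sink, then continue safely" pattern.
    cursor.execute("SELECT * FROM users WHERE name = ?",
                   (check_tainted_sql_argument(user_name),))
    return cursor.fetchall()
```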
Biography:
Alessandro Coglio is a Principal Scientist at Kestrel Institute and a Board Manager and Co-Founder of Kestrel Technology LLC. He has over 10 years of experience in both technical and management tasks in a variety of research and development projects. His current research interests include formal methods and tools to develop correct-by-construction software via formal specification, refinement, and theorem proving. Prior to joining Kestrel, Mr. Coglio was a Consulting Researcher at the University of Genoa (Italy), where he worked on several research projects on theorem proving (in collaboration with Stanford University), Petri nets and discrete event systems, and artificial emotions. Mr. Coglio has a degree in Informatics Engineering from the University of Genoa (Italy).
Presented as part of the 2012 HCSS conference.
Abstract:
This talk presents a design flow and supporting tools to significantly improve the design and verification of complex cyber-physical systems. We focus on system architecture models composed from libraries of components and complexity-reducing design patterns having formally verified properties. This allows new system designs to be developed rapidly using patterns that have been shown to reduce unnecessary complexity and coupling between components. Components and patterns are annotated with formal contracts describing their guaranteed behaviors and the contextual assumptions that must be satisfied for their correct operation. We describe a compositional reasoning framework for proving the correctness of a system design, and present an example based on an aircraft flight control system to illustrate the method and supporting analysis tools.
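The flavor of the compositional argument can be stated roughly as follows (a simplified sketch; the framework's actual obligations, and its handling of circular dependencies among contracts, are richer). Each component i carries a contract with assumption A_i and guarantee G_i, and the system-level claim (A_sys, G_sys) is discharged from the component contracts:

```latex
A_{\mathrm{sys}} \wedge \bigwedge_{j \neq i} G_{j} \;\Rightarrow\; A_{i} \quad \text{(for each component } i\text{)}
\qquad\qquad
A_{\mathrm{sys}} \wedge \bigwedge_{i} G_{i} \;\Rightarrow\; G_{\mathrm{sys}}
```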
Biography:
Darren Cofer is a Principal Systems Engineer in the Trusted Systems group of ATC. He earned his PhD in Electrical and Computer Engineering from The University of Texas at Austin. His principal area of expertise is developing and applying advanced analysis methods and tools for verification and certification of high-integrity systems. His background includes work with formal methods for system and software analysis, the design of real-time embedded systems for safety-critical applications, and the development of nuclear propulsion systems in the U.S. Navy. He is currently the principal investigator on Rockwell Collins' META project with DARPA, and serves on RTCA committee SC-205 tasked with developing DO-178C, providing updated certification guidance for airborne software. He is an Associate Technical Editor for Control Systems Magazine and is a Senior Member of the IEEE.
Presented as part of the 2012 HCSS conference.
Abstract:
Two approaches to synthesizing verified machine code emulators by theorem proving will be described and contrasted. The initial motivation is to provide trustworthy and efficient execution of guest code binaries (e.g. for discontinued legacy hardware) on a different host processor (e.g. current technology COTS), whilst retaining the soundness of original validations, and so reducing the effort needed for re-certification. Both approaches use formal models of the guest and host architectures. The first exploits just-in-time (JIT) compilation to translate each basic block of guest machine code into host machine code. Efficiency depends on how far the fetch-decode-execute loop of the emulated instruction set is partially evaluated on the actual binary code being emulated. The second approach uses static binary translation, in which an entire program or module from one architecture is translated to run on another. This approach is partial, but when applicable it can result in better code. An initial case study will be outlined in the talk. Verified emulators for a legacy 32-bit ARM architecture (ARMv4) running on a 64-bit x86 are synthesized by mechanized proof using the HOL4 system. This work is a collaboration between Magnus Myreen, Anthony Fox, and Mike Gordon.
Biography:
Magnus Myreen received his BA in Computer Science from the University of Oxford, tutored by Dr. Jeff Sanders. During the summers of his undergraduate degree, he worked as a research assistant at Åbo Akademi University in Finland for Professor Ralph-Johan Back. Magnus completed his PhD on program verification in 2008 at the University of Cambridge, supervised by Professor Mike Gordon. Currently, Magnus is a research associate at the University of Cambridge.
Presented as part of the 2012 HCSS conference.
Abstract:
The modular design of software has contributed greatly to the ability of application developers to quickly write sophisticated applications; however, in most instances only a small fraction of the functionality of particular software components is needed. The unmistakable trend is for operating systems and libraries to grow over time. This trend is not going to reverse any time soon because it represents a growth in economic value: by adding support for more hardware, and features to support more applications, the operating system grows its ecosystem, and therefore its value. However, this level of bloat in the software stack is inimical to security: there is ever more, and more complex, code to be analyzed to achieve any given degree of assurance for a system. While most systems are not constrained by the resource limits that they were in the past, the prevalence of security vulnerabilities that rely on live resources, such as jump-to-libc and return-oriented programming, suggests that even dead functionality in binaries and shared libraries can be dangerous.
Partial evaluation offers a principled approach to address all aspects of operating system growth. By specializing an operating system with respect to a given set of applications to run on a given hardware configuration, only those features essential for the particular applications being deployed on a particular set of machines need remain. Partial evaluation can trim device drivers and other kernel modules down to the exact minimum required to support the application on the given hardware. To address the problem of extraneous functionality, we proposed static previrtualization, a static analysis and code specialization technique that uses partial evaluation.
We, together with several collaborators, have implemented a preliminary version of static previrtualization in a tool called Occam that combines techniques from partial evaluation and link-time optimization to specialize the software stack and thereby reduce the functionality available to applications. Occam uses types as a compositional mechanism to address the independent specialization of libraries and describe code compatibility and transformations that can be statically reasoned about and applied to code.
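A much-simplified way to picture the specialization step (a sketch of the general idea only, not Occam itself or its use of types): given the entry points an application actually uses, prune every library function that cannot be reached from them in the call graph.

```python
# Sketch of previrtualization as dead-functionality removal (illustration
# only, not Occam): keep just the library functions reachable from the
# application's entry points in a call graph.

def reachable(call_graph, entry_points):
    """call_graph: dict mapping a function name to the names it may call."""
    keep, worklist = set(), list(entry_points)
    while worklist:
        fn = worklist.pop()
        if fn in keep:
            continue
        keep.add(fn)
        worklist.extend(call_graph.get(fn, ()))
    return keep

libc_like = {
    "printf": ["vfprintf"], "vfprintf": ["write"],
    "system": ["fork", "execve"], "gets": ["read"],
}
specialized = reachable(libc_like, ["printf"])   # {'printf', 'vfprintf', 'write'}
dropped = set(libc_like) - specialized           # 'system' and 'gets' are removed
```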
Biographies:
Natarajan Shankar
Natarajan Shankar is a Staff Scientist at SRI International. Dr. Shankar has a Ph.D. in Computer Science from the University of Texas at Austin. His interests are in the study of formal methods for the specification and verification of hardware and software systems, automated deduction, and computational logic. He is the co-developer of several leading verification tools including PVS and SAL. His recent research has focused on inference architectures, cyber-physical systems, probabilistic inference, natural language processing, and formal tool integration. His recent work includes the development of inference techniques, software and hardware verification methods, tool integration frameworks, and probabilistic reasoning. Dr. Shankar leads an international initiative on the scientific challenges of verified software.
Ashish Gehani
Dr. Ashish Gehani is a member of the Computer Science Laboratory at SRI International. He obtained his B.S. (Honors) degree in Mathematics from the University of Chicago, and his M.S. and Ph.D. degrees in Computer Science from Duke University. Dr. Gehani has conducted research on data provenance, digital forensics, intrusion prevention, protecting sensor networks, multimedia security, and biological computation. He has published over thirty peer-reviewed research papers and served as a reviewer for over two dozen conferences and journals.
Presented as part of the 2012 HCSS conference.
Abstract:
Garbage collection is an attractive target for exploring the issues in synthesizing concurrent algorithms. The basic algorithms are tricky to get correct, and become even trickier when implemented on modern multi-core architectures. We are using Kestrel's Specware system to synthesize garbage collectors, starting from a formal specification of collection requirements. Novel features include a mixed algebraic/coalgebraic style of specification, and coalgebraically-oriented transformations that generate correct-by-construction refinements. Dijkstra's on-the-fly concurrent garbage collector algorithm falls out from a straightforward application of fixpoint-iteration algorithm design and a finite differencing transformation. Other transformations include refinement of high-level types to more complicated but efficient low-level types, globalization to transform from functional to shared-memory programs, and so on. Most of the transformations work by performing equational calculations using theorems from the application domain, and current efforts are augmenting the transformations to generate checkable proofs at the same time as they refine the source specification, in effect generating proof-carrying code.
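Far below the level of abstraction at which Specware operates, the fixpoint-iteration starting point can be pictured as the familiar mark phase: the set of live objects is the least fixpoint of "the roots plus everything reachable from already-marked objects." The sketch below is only an illustration of that fixpoint reading, not the synthesized collector.

```python
# Mark phase as fixpoint iteration (an illustration of the fixpoint reading
# only; the synthesized collectors are derived formally in Specware).

def mark(roots, points_to):
    """points_to: dict mapping an object id to the ids it references."""
    marked = set()
    frontier = set(roots)
    while frontier:                       # iterate until nothing new is marked
        obj = frontier.pop()
        if obj in marked:
            continue
        marked.add(obj)
        frontier.update(points_to.get(obj, ()))
    return marked                         # objects outside this set are garbage

heap = {"r": ["a"], "a": ["b"], "b": [], "c": ["d"], "d": []}
live = mark(["r"], heap)                  # {'r', 'a', 'b'}; 'c' and 'd' are garbage
```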
Biography:
Dr. Douglas R. Smith is a Principal Scientist at Kestrel Institute and serves as President of Kestrel Technology LLC (both in Palo Alto). He is a Fellow of the American Association for Artificial Intelligence (AAAI) and an ASE Fellow (Automated Software Engineering). During 1986-2000, he periodically taught an advanced graduate course on knowledge-based software development at Stanford University. Dr. Smith served as Chairman of IFIP Working Group 2.1 on Algorithmic Languages and Calculi during 1994-2000. Dr. Smith's research interests have centered around formal specifications, automated inference, and program synthesis. He has led the development of a series of software synthesis systems, including KIDS (Kestrel Interactive Development System), Specware, Designware, and Planware. Applications have included a variety of complex high-performance schedulers for the US Air Force. Dr. Smith has over 30 years of experience in the field of automated program synthesis and has published over 100 papers. He has one patent. He received the Ph.D. in Computer Science from Duke University in 1979.
Presented as part of the 2012 HCSS conference.
Abstract:
Embedded systems form a ubiquitous, networked computing substrate that underlies much of modern technological society. Such systems range from large supervisory control and data acquisition (SCADA) systems that manage physical infrastructure, to medical devices such as pacemakers and insulin pumps, to computer peripherals such as printers and routers, to communication devices such as cell phones and radios, to vehicles such as airplanes and satellites. Such devices have been networked for a variety of reasons, including the ability to conveniently access diagnostic information, perform software updates, provide innovative features, lower costs, and improve ease of use. Researchers and hackers have shown that these kinds of networked embedded systems are vulnerable to remote attack, and such attacks can cause physical damage while hiding the effects from monitors.
The goal of the HACMS program is to create technology for the construction of high-assurance cyber-physical systems, where high assurance is defined to mean functionally correct and satisfying appropriate safety and security properties. Achieving this goal requires a fundamentally different approach from what the software community has taken to date. Consequently, HACMS will adopt a clean-slate, formal methods-based approach to enable semi-automated code synthesis from executable, formal specifications. In addition to generating code, HACMS seeks a synthesizer capable of producing a machine-checkable proof that the generated code satisfies functional specifications as well as security and safety policies. A key technical challenge is the development of techniques to ensure that such proofs are composable, allowing the construction of high-assurance systems out of high-assurance components.
Biography:
Dr. Kathleen Fisher joined DARPA as a program manager in 2011. Her research and development interests relate to programming languages and high assurance systems. Dr. Fisher joined DARPA from Tufts University. Previously, she worked as a Principal Member of the technical staff at AT&T Labs. Dr. Fisher received her Doctor of Philosophy in computer science and her Bachelor of Science in math and computational science from Stanford University.
Presented as part of the 2012 HCSS conference.
Abstract:
I will describe a new approach to the old problem of automatic temporal property verification. As well as leading to dramatic performance improvements over existing techniques, this approach also brings some light to a couple of age-old questions.
Biography:
Dr. Byron Cook is a Principal Researcher at Microsoft Research Cambridge, as well as Professor of Computer Science at University College London. His research interests focus on formal verification, theorem proving, operating systems, and biology. Byron is particularly well known for his work on the Terminator program termination prover as well as the SLAM software model checker. See http://research.microsoft.com/en-us/people/bycook/ for more information.
Presented as part of the 2012 HCSS conference.
Abstract:
Separation logic provided a breakthrough in reasoning about the heap in many cases, making specs and proofs tractable and close to the programmer's intuition. Its "frame rule" enables local reasoning by answering the frame problem: describing what does not change when updating parts of the heap. However, for algorithms exhibiting intrinsic sharing, neither separation logic nor any other formalism has achieved simple proofs. We propose that most programs manipulating graphs, DAGs, or overlaid data structures require an answer to a different problem, the ramification problem (like the frame problem, another classic AI problem): describing how global structures change because of a local update. This work presents a solution in the form of a new inference rule: the ramification rule, mixing separation logic and relevant logic, and valid in any separation logic. We show how it allows sound and local reasoning about effects on data structures with sharing.
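For orientation, the frame rule and the ramification rule can be contrasted roughly as follows (stated informally here, with side conditions on modified variables omitted); the ramification rule uses the separating implication to describe how a local update from L to L' propagates to enclosing global assertions G and G':

```latex
\frac{\{P\}\; C\; \{Q\}}{\{P \ast R\}\; C\; \{Q \ast R\}}
\qquad\qquad
\frac{\{L\}\; C\; \{L'\} \qquad G \vdash L \ast (L' \mathrel{-\!\!\ast} G')}{\{G\}\; C\; \{G'\}}
```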
Biography:
Jules Villard is a research assistant working in the Programming Principles, Logic, and Verification group led by Peter O'Hearn at University College London (UK). He has worked on compositional techniques for the verification of message-passing programs, as highlighted by his PhD thesis (2011). He is the creator of the Heap-Hop academic prototype, capable of specifying and verifying message-passing programs operating over a shared memory. His current research involves the areas of program verification, compositional reasoning, and concurrency.
Presented as part of the 2012 HCSS conference.
Abstract:
Cryptography is essential for ensuring that sensitive information can be securely stored and transmitted without risk of tampering or disclosure to untrusted parties. Although typically only a very small part of an overall system, cryptographic implementations are often highly complex due to a variety of performance optimizations, and may have many edge cases or difficult-to-understand implementation tricks.
For public key cryptography in particular, implementations may contain subtle bugs that only affect a very small fraction of the total input space. For example, a bug was recently found in OpenSSL's implementation of ECC that affects fewer than 1 in 2^29 modular reductions [2]. Exhaustive testing of ECC implementations is infeasible due to the vast state space involved, and the complexity of ECC makes it intractable to verify using existing automated formal verification techniques. How can one have confidence that an ECC implementation is correct?
At Galois, we have found that compositional reasoning is key to solving this problem.
We have developed a compositional verification tool, called SAWScript, that allows individual methods within a cryptographic implementation to be precisely specified and independently verified. Specifications of low-level methods can be used to simplify the verification of more complex methods, and the SAWScript tool has incorporated a variety of automated and manual verification techniques, including SAT, SMT, and rewriting.
To validate the effectiveness of SAWScript, we have successfully implemented and verified a Java-based implementation of ECDSA over the NIST P-384 curve. Our implementation was designed for high performance rather than ease of verification. It is eight times faster than the BouncyCastle Java implementation of ECDSA, and is competitive with OpenSSL's C implementation. Our correctness proof decomposes the overall verification of ECDSA signing and signature checking into the verification of 50 different methods, and requires approximately 10 minutes to complete on a commodity laptop.
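To make the verification problem concrete, the toy differential check below contrasts a stand-in "optimized" reduction modulo the P-384 prime with the mathematical definition (this is an illustration only, not Galois' Java code or SAWScript). Passing many random tests in this style would still say little about a defect that, like the OpenSSL bug above, strikes roughly one input in 2^29, which is why each method is verified against its specification instead.

```python
# Toy differential check of a field reduction modulo the NIST P-384 prime
# (illustrative stand-in code, not Galois' Java implementation or SAWScript).
import random

P384 = 2**384 - 2**128 - 2**96 + 2**32 - 1   # the P-384 field prime

def reference_reduce(x):
    return x % P384                          # the mathematical specification

def optimized_reduce(x):
    # Stand-in "fast" reduction for inputs below P384**2: a coarse
    # shift-based estimate followed by conditional subtraction.
    while x >= P384:
        x -= P384 * max(1, x >> 384)
    return x

for _ in range(10_000):
    x = random.randrange(P384 * P384)
    assert optimized_reduce(x) == reference_reduce(x)
```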
In the talk, I will describe Galois' tools and verification efforts to date, as well as sketch some more recent developments on the verification of low-level cryptographic implementations written in C.
________________________________
[1] Corresponding author: Joe Hendrix, jhendrix@galois.com
[2] B. B. Brumley, M. Barbosa, D. Page, and F. Vercauteren. Practical Realisation and Elimination of an ECC-Related Software Bug Attack. In CT-RSA, volume 7178 of LNCS, pages 171-186. Springer, 2012.
Biography:
Dr. Joe Hendrix is a researcher working on security and formal methods at Galois, Inc. His work at Galois focuses on the development of tools for formal verification and runtime monitoring. He designed the SAWScript verification tools, and authored the implementation of elliptic curve cryptography described in this talk. Prior to joining Galois, he was the main developer of tools for attack surface analysis at Microsoft. Dr. Hendrix received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2008 for research on new decision procedures for automated reasoning.
Presented as part of the 2012 HCSS conference.
Abstract:
This talk describes Axe, an automated formal verification tool for proving equivalence of programs. Axe has been used to verify real-world Java implementations of cryptographic operations, including block ciphers, stream ciphers, and cryptographic hash functions. Axe proves the bit-for-bit equivalence of the outputs of two programs, one of which may be a formal, mathematical specification. To do so, Axe relies on a novel combination of techniques from combinational equivalence checking and inductive theorem proving. First, the loops in some programs can be completely unrolled, creating large loop-free terms. Axe proves the equivalence of such terms using a phased approach, including aggressive word-level simplifications, bit-blasting, test-case-based identification of internal equivalences, and "sweeping and merging." For loops that cannot be unrolled, Axe uses execution traces to detect loop properties, including loop invariants for single loops and connection relationships between corresponding loops. Axe proves such properties inductively. In many cases, synchronizing transformations must be performed to align the loop structures of the programs being compared; Axe can perform and verify a variety of these transformations.
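The "sweeping and merging" step can be pictured as follows (a simplified Python sketch, not Axe itself, which operates on DAG representations of unrolled terms): evaluate every node on a battery of random test vectors, group the nodes whose outputs agree on every test, and hand only those candidate groups to the prover for formal merging.

```python
# Simplified sketch of test-case-based "sweeping and merging" (not Axe itself):
# nodes whose outputs agree on every random test vector become candidate
# equivalences for the prover; all other pairs are never compared.
import random
from collections import defaultdict

def sweep(nodes, num_tests=64, width=32):
    """nodes: dict mapping a node name to a function from a word to a word."""
    tests = [random.getrandbits(width) for _ in range(num_tests)]

    def signature(fn):
        return tuple(fn(t) for t in tests)

    buckets = defaultdict(list)
    for name, fn in nodes.items():
        buckets[signature(fn)].append(name)
    # Each bucket with two or more members yields candidate merges to prove.
    return [group for group in buckets.values() if len(group) > 1]

nodes = {
    "impl_rotl3": lambda x: ((x << 3) | (x >> 29)) & 0xFFFFFFFF,
    "spec_rotl3": lambda x: ((x << 3) & 0xFFFFFFFF) | (x >> 29),
    "unrelated":  lambda x: x ^ 0xDEADBEEF,
}
print(sweep(nodes))   # expect impl_rotl3 and spec_rotl3 grouped together
```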
Axe was the subject of the presenter's Ph.D. dissertation at Stanford University (completed June, 2011).
Biography:
Eric W. Smith is a Computer Scientist at Kestrel Institute. He has over a decade of experience with formal methods, especially theorem provers. At Kestrel, he is involved in several projects, including leading Kestrel's effort to formally prove that Android applications comply with their security policies. He is also using Kestrel's Specware system to formally specify a virtual Trusted Platform Module (vTPM), with the goal of generating correct-by-construction C code, including machine-checkable proofs of correctness in the Isabelle/HOL theorem prover. Dr. Smith is also helping to design the security architecture of a multi-level secure fractionated satellite system. Before joining Kestrel, Dr. Smith completed his Ph.D. in Computer Science at Stanford University under Professor David Dill. His dissertation work included developing Axe, a theorem prover and equivalence checker capable of highly automated proofs of the correct operation of real-world cryptographic programs written in Java. Axe was built on top of the ACL2 theorem prover, with which Dr. Smith has extensive experience. He has used ACL2 for microprocessor verification at AMD and for processor modeling and machine code proofs at Rockwell Collins. Dr. Smith did his undergraduate work at the University of Texas at Austin, where he earned bachelor's degrees in Computer Science and Plan II Honors under thesis advisor J. Moore.
Presented as part of the 2012 HCSS conference.
Modern software systems are ubiquitous and complex, often running on critical platforms ranging from human implant devices to nuclear power plant controls. How can we be sure that these programs will behave as intended? Programmers must first specify the correct behavior of their programs, and one widely accepted specification language is temporal logic. Despite decades of research in finite-state contexts and some adaptations of finite-state algorithms, we still lack scalable tools for proving temporal logic properties of software.
Temporal property languages that support both universal and existential temporal reasoning allow us to ask and answer sophisticated questions about the non-deterministic behavior of systems. For example, in the domain of planning, the temporal property A(EF(p W p)) is useful when trying to establish that the system could always terminate and will always establish p if and when it does terminate. A proof of this property can be used when searching for a plan for causing termination if and when it is desired. Similar applications arise in games, formal verification for security, as well as precondition/environment synthesis.
The problem is that, until now, reliable and scalable automatic methods for proving properties of this sort in the presence of (possibly infinite-state) programs have remained unknown. In this talk we will describe a technique for supporting existential reasoning for programs that is based on a reduction to known approaches for universal subsets. This novel method gives rise to a new class of automatic tools. Formally we add new sound and complete relational proof rules for CTL's existential operators (e.g. EF, EG, etc.) to the relational proof system for the universal subset of CTL from our previous work [CAV 2011 and FMSD 2012]. These new proof rules unify aspects common to the universal and existential subsets of CTL, while highlighting an additional requirement necessary for existential reasoning: recurrence sets [Ganty et al., POPL 2009]. An algorithm is then provided that uses counterexamples to failed proof attempts to find the recurrence sets that are necessary to prove the desired CTL property.
Using a preliminary implementation, we demonstrate the practical viability of the approach on examples drawn from the PostgreSQL database server, the Apache web server, and the Windows OS kernel. We also demonstrate how proofs found in our system can be put to use in domains such as planning, security analysis, games, and precondition/environment synthesis.
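To make the universal/existential distinction concrete, here is a minimal finite-state sketch of the existential operators EF and EG as fixpoint computations; the transition system and labels are invented for illustration, and the talk's contribution is precisely extending such reasoning to (possibly infinite-state) programs.

# Minimal finite-state illustration of existential CTL operators; the program
# graph below is hypothetical and not taken from the talk.
def pre(trans, S):
    """States with at least one successor in S."""
    return {s for s, succs in trans.items() if succs & S}

def EF(trans, P):
    """Least fixpoint: states from which some path eventually reaches P."""
    S = set(P)
    while True:
        S2 = S | pre(trans, S)
        if S2 == S:
            return S
        S = S2

def EG(trans, P):
    """Greatest fixpoint: states with some path that stays in P forever."""
    S = set(P)
    while True:
        S2 = {s for s in S if trans[s] & S}
        if S2 == S:
            return S
        S = S2

trans = {0: {1, 2}, 1: {1}, 2: {3}, 3: {3}}   # invented program graph
terminated = {3}
print(EF(trans, terminated))   # states that *can* terminate
print(EG(trans, {0, 1, 2}))    # states that can avoid terminating forever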
I am a research scientist at the Courant Institute of Mathematical Sciences at New York University. Previously I was a Ph.D. student at the University of Cambridge (UK) Computer Laboratory, jointly supervised by Byron Cook and Mike Gordon. I am funded by the Gates Cambridge Scholarship. I received my Sc.M. from Brown University, where I worked with Maurice Herlihy.
In my research, I develop new mathematical methods for making software safer and more efficient, and apply those methods to realistic computer systems. In recent years I have done this in two ways. First, I have improved programming languages by introducing new language features that are, by design, both safer and more efficient. In particular, I am concerned with language advances that enable engineers to safely produce software involving parallel computation (e.g. my work on Transactional Boosting and Coarse-Grained Transactions). Second, I have developed static and dynamic analysis techniques to better understand the performance of programs (e.g. discovering symbolic complexity bounds), understand their behavior (e.g. request tracing in BorderPatrol), discover bugs, and formally prove correctness (e.g. proving Linear Temporal Logic properties via a reduction to a program analysis task and symbolic partial determinization).
Presented as part of the 2012 HCSS conference.
The Android platform currently provides only limited support for preventing collusion between multiple apps installed on the same device: the main security mechanism is that at installation time each individual app is required to declare a manifest of the Android permissions it needs to operate. However, the Android platform is also a rich source of security threats: there are several sensitive information sources (Wi-Fi state, contacts, microphone, etc.), several potentially risky sinks (internet, SMS, phone, etc.), and a potentially large number of inter-app information flow paths from the sources to the sinks. The Android permission mechanism is widely considered as lacking the expressive power to discriminate exploitable security vulnerabilities from benign behaviors.
Current program analysis tools that might be employed to compensate for this lack typically focus on finding defects (or security risks) in apps, but do not seek to assure their absence, so their applicability to security analysis is limited. Typically, users employ them by marking specific variables in the app under test, and then analyzing tool-discovered information flows between those variables. This analysis approach requires significant user effort and expertise, and is error-prone because evaluators often fail to identify information flow paths representing exploitable security vulnerabilities. In Android marketplaces, these shortcomings are accentuated by the possibility of multiple apps colluding to support an exploitable security vulnerability.
At Galois, we are addressing these challenges in app evaluation by developing single- and multi-app security analysis tools for deployment on Android marketplaces. We have demonstrated a research prototype tool, called FUSE, that computes sound approximations of app intent calls. By sound we mean that our tools produce no false negatives, but may produce false positives. FUSE computes the possible intent calls for an app by analyzing the bytecode of all methods in all app components. FUSE then populates an SQL database of all the components in a marketplace of apps and the intent calls and intent filters they support. Queries over this database are run to find all intra- and inter-app control flow paths based on intent calls and matching intent filters. The resulting control flow graph can be traversed to ensure the absence of control flow paths that violate a marketplace security policy. We have run this inter-app control flow analysis on an artificial marketplace consisting of 104 real apps, with analysis run-times on the order of ten seconds, discovering over 3000 possible intent calls between components, of which 1,100 are intent calls between apps. Our current work is focused on extending the scope of the security analysis tool to track information flow through and between apps.
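The query step can be pictured with a toy database along the lines described above; the schema, table names, and data below are invented for illustration and are not FUSE's actual layout.

# Hypothetical sketch: components, the intents they send, the intent filters
# they declare, and a join that recovers possible inter-app call edges.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE components(app TEXT, component TEXT);
CREATE TABLE intent_calls(src_app TEXT, src_component TEXT, action TEXT);
CREATE TABLE intent_filters(dst_app TEXT, dst_component TEXT, action TEXT);
""")
db.executemany("INSERT INTO components VALUES (?,?)",
               [("ContactsApp", "Exporter"), ("SyncApp", "Uploader")])
db.executemany("INSERT INTO intent_calls VALUES (?,?,?)",
               [("ContactsApp", "Exporter", "com.example.SEND_DATA")])
db.executemany("INSERT INTO intent_filters VALUES (?,?,?)",
               [("SyncApp", "Uploader", "com.example.SEND_DATA")])

# All possible inter-app control-flow edges: an intent call whose action
# matches a filter declared by a component of a *different* app.
edges = db.execute("""
    SELECT c.src_app, c.src_component, f.dst_app, f.dst_component, c.action
    FROM intent_calls c JOIN intent_filters f ON c.action = f.action
    WHERE c.src_app <> f.dst_app
""").fetchall()
for e in edges:
    print(e)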
Joe Hurd, Ph.D., is the Formal Methods Lead at Galois, Inc. He has over 10 years of experience formally verifying the correctness of complex software, including probabilistic programs, elliptic curve cryptography, and game tree analysis algorithms. He is the developer of Metis, an automatic theorem prover for first order logic, and coordinates the OpenTheory project, a package management system for higher order logic theories. Dr. Hurd is an active member of the theorem proving research community, having organized conferences in 2005 and 2008 and given invited talks; he regularly serves on program committees and reviews papers for conferences and journals. Prior to joining Galois in 2007, Dr. Hurd was a research fellow at Magdalen College, Oxford University. He studied at Cambridge University, receiving a Masters degree in Mathematics in 1997 and a Ph.D. in Computer Science in 2002.
Presented as part of the 2012 HCSS conference.
Abstract:
Designers of multicore architectures provide many options for bundling resources in the chip package. Simple architectures put multiple cores on a single chip and share bus interfaces and/or a common cache. System-on-a-Chip (SoC) and System-in-a-Package (SiP) designers choose to integrate additional functionality (e.g., audio and video, encryption engines, analog-to-digital converters, etc.) into the single chip package. As the communication complexity between different resources in a chip package increases, multicore designers continue to look at new methods to increase parallelism and scalability. The Network-on-a-Chip (NoC) method attempts to solve these problems by emulating a modern telecommunications network in a single chip package.
As with any new innovation, security architects and analysts must review the security ramifications of multicore architectures. For example, what new communication channels are present in the multicore architecture, and what safeguards are available to protect those communication channels? It is not readily apparent that existing multicore architectures maintain proper information flow isolation so as to enable implementation of secure systems, and single-core information flow identification and evaluation methods do not scale well to modern multicore architectures. This paper presents an information flow security analysis of multicore architectures.
Analyzing the information flow security attributes of multicore architectures is not as simple as conducting multiple single-core analyses, because most multicore architectures have onboard virtual systems interfacing the application layer with the system-on-a-chip. An analysis framework, tailored to the complexities of multicore architectures and their onboard virtual machines, is needed. The framework should expose overt and covert channel vulnerabilities in multicore architectures.
The University of Idaho Center for Secure and Dependable Systems (CSDS) has a long history of research defining and characterizing architectures for high assurance systems. The work presented here leverages the CSDS information flow analysis work from single core architectures into a framework for information flow analysis of multicore architectures. We have applied the framework to Cell Broadband Engine (CBE), Freescale P4080, and Intel i7 Nehalem. This presentation defines the multicore architecture security framework, discusses how it was applied to those three real architectures, and describes our results showing the limitations of these three architectures for use in high assurance systems.
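As a highly simplified illustration of the kind of question such a framework answers (this is not the CSDS framework itself), one can model shared on-chip resources as a directed graph of potential information-flow channels and ask whether two partitions that policy requires to be isolated are connected; the resources and edges below are invented.

# Toy channel-graph reachability check; resource names are hypothetical.
from collections import deque

channels = {
    "coreA":         {"shared_L2"},
    "coreB":         {"shared_L2"},
    "shared_L2":     {"memory_ctrl"},
    "memory_ctrl":   {"coreA", "coreB"},
    "crypto_engine": set(),
}

def reachable(src, graph):
    seen, work = {src}, deque([src])
    while work:
        for nxt in graph.get(work.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return seen

# Policy: the partition on coreA must not be able to influence the one on coreB.
leak = "coreB" in reachable("coreA", channels)
print("potential cross-partition channel via shared resources:", leak)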
Biographies:
Ryan Bradetich received his BSCS in 1997 and his MSCS in 2007 from the University of Idaho. He is a development manager at Schweitzer Engineering Laboratories, Inc. (SEL). Ryan currently manages teams working on communication and security products used in control system networks. Prior to joining SEL, he worked for Hewlett-Packard on the security team responsible for auditing and reporting the security status of approximately 20,000 UNIX and Windows® systems.
Dr. Paul W. Oman is a Professor of Computer Science at the University of Idaho. He is currently working on secure communications and critical infrastructure protection with grants from NSF, NIATT, and DARPA. From 2000 to 2002 he served as a Senior Research Engineer at Schweitzer Engineering Laboratories, Inc., specializing in digital equipment for electric power system protection. Before joining SEL, he was Chair of the CS Department and held the Hewlett-Packard Engineering Chair for a period of seven years.
Dr. Jim Alves-Foss is the Director of the University of Idaho's Center for Secure and Dependable Systems and is a professor of Computer Science. He received his BS degree in Mathematics and Computer Science and Physics from the University of California at Davis in 1989 and 1991 respectively. His main research interests are in the design and analysis of secure distributed systems, with a focus on formal methods and software engineering.
Presented as part of the 2012 HCSS conference.
Abstract:
Hypervisors have become increasingly popular as a way to allow several guest operating systems to share the same hardware platform. The hypervisor is maximally privileged software, as it has direct access to the hardware. The security of the hypervisor is extremely important since the security of the entire system that runs on the hypervisor depends critically on whether the hypervisor correctly enforces the desired security policies. In this talk, we present our work on using software model checking to ensure that the hypervisor implementation has desired security properties. In particular, we develop techniques to verify security properties of hypervisors using an off-the-shelf software model checker for C (CBMC). We address several technical challenges, including identifying the security properties that are desired of the hypervisors; defining the adversary model; and scaling up the verification techniques given the complex data structures that hypervisors operate on. We focus on the memory protection property: the malicious guest OS cannot access memory beyond the region that the hypervisor allocates for it, and the guest OS cannot write into the memory region where the hypervisor code and data reside. We view the guest OS as the adversary, and model it based on the interfaces that the hypervisor exposes to the guest OS. To handle the large page tables maintained by the hypervisor, which are crucial to ensuring memory protection, we rely on the key observation that the desired properties of these data structures, and the accesses to these data structures, have a uniform pattern: each row of the data structure is accessed independently of other rows. We develop a reduction technique that allows us to define a canonical adversary who has access to the interfaces to a page table of a fixed small size -- only one entry at each level. We prove a set of small-model theorems that show that if there exists a successful attack in a system that has a page table of arbitrary size, then there is an attack by the canonical adversary who has access to the small page table. As a result, analyzing the security properties of the hypervisor is reduced to model checking the canonical adversary, which is very fast. We have successfully applied our techniques to the verification of several hypervisors, including SecVisor, ShadowVisor, and variants of the Xen hypervisor. We have identified previously unknown vulnerabilities in the design of SecVisor.
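The small-model idea can be caricatured in a few lines. This is only an abstract illustration, not the actual verification setup (which works on C code with CBMC): a page table reduced to a single entry, a tiny, invented guest-visible interface, and an exhaustive search over short adversary runs checking the memory-protection invariant.

# Abstract sketch of a "canonical adversary" over a one-entry page table.
from itertools import product

HYP_REGION, GUEST_REGION = "hyp", "guest"

def init_state():
    return {"entry_target": GUEST_REGION}

def guest_map(state, target):
    # A correct hypervisor refuses to map hypervisor memory for the guest.
    if target == HYP_REGION:
        return state
    return {"entry_target": target}

def guest_write(state):
    # A guest write goes wherever the single page-table entry points.
    return state

def protected(state):
    return state["entry_target"] != HYP_REGION

ACTIONS = [("map_guest", lambda s: guest_map(s, GUEST_REGION)),
           ("map_hyp",   lambda s: guest_map(s, HYP_REGION)),
           ("write",     guest_write)]

for seq in product(ACTIONS, repeat=3):      # all short adversary runs
    s = init_state()
    for name, act in seq:
        s = act(s)
        assert protected(s), [n for n, _ in seq]
print("memory-protection invariant holds for all explored runs")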
Biography:
Limin Jia is a Research Systems Scientist at CyLab at Carnegie Mellon University. She received her Ph.D. in Computer Science from Princeton University. Her research interests include programming languages, language-based security, logic, and program verification. At CyLab, Limin's research focuses on formal aspects of security. She is particularly interested in applying language-based security techniques as well as formal logic to model and verify security properties of software systems.
Presented as part of the 2012 HCSS conference.
In 2009, the L4.verified project delivered a machine-checked, formal proof that the C code of the seL4 microkernel correctly implements its abstract specification. This proof comprises roughly 200,000 lines of proof script in the interactive theorem prover Isabelle/HOL. In the years since then, the kernel has been publicly released for evaluation and teaching, has undergone API evolution, and has been even more deeply analyzed.
The talk will present an overview of the current state of the formal and informal analysis around the seL4 kernel, and will summarize the experience on keeping this large formal proof up-to-date under API evolution and further analysis.
Biography:
Gerwin Klein is Principal Researcher at NICTA, Australia's National Centre of Excellence for ICT Research, and Conjoint Associate Professor at the University of New South Wales in Sydney, Australia. He is leading NICTA's Formal Methods research discipline and was the leader of the L4.verified project that created the world's first formal, machine-checked proof of functional correctness of the C implementation of a general-purpose microkernel in 2009. He joined NICTA in 2003 after receiving his PhD from Technische Universität München, Germany, where he formally proved type-safety of the Java Bytecode Verifier in the theorem prover Isabelle/HOL. Gerwin has won a number of awards together with his team, among them the 2011 MIT TR-10 award for the top ten emerging technologies world-wide, NICTA's Richard E. Newton impact award for the kernel verification work, the best paper award from SOSP'09 for the same, and an award for the best PhD thesis in Germany in 2003 for his work on bytecode verification.
Acknowledgements: NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.
This material is in part based on research sponsored by the Air Force Research Laboratory under agreement numbers FA2386-10-1-4105 and FA2386-11-1-4070. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.
Presented as part of the 2012 HCSS conference.
Abstract:
In order to "build-in" security, one must have a design methodology that allows one to make design choices that account for the "amount" of security a particular design approach provides. Quantifying security at design time is a significant research challenge. Traditional cyber security modeling approaches either do not explicitly consider system participants or assume a fixed set of participant behaviors that are independent of the system. Increasingly, accumulated cyber security data indicate that system participants can play an important role in the creation or elimination of cyber security vulnerabilities. Thus, there is a need for cyber security analysis tools that take into account the actions and decisions of human participants as well as attackers. In this talk, we describe steps toward creating a design-time security modeling methodology that provides a structured and quantitative means of analyzing cyber security problems whose outcomes are influenced by human-system interactions as well as attacker actions.
Biographies:
William H. Sanders is a Donald Biggar Willett Professor of Engineering and the Director of the Coordinated Science Laboratory (www.csl.illinois.edu) at the University of Illinois at Urbana-Champaign. He is a professor in the Department of Electrical and Computer Engineering and the Department of Computer Science. He was the founding Director of the Information Trust Institute (www.iti.illinois.edu). He is also a Fellow of the IEEE and the ACM, a past Chair of the IEEE Technical Committee on Fault-Tolerant Computing, and past Vice-Chair of the IFIP Working Group 10.4 on Dependable Computing.
Dr. Sanders's research interests include dependability & security evaluation, architecting of reliable & secure systems, stochastic modeling, and performance evaluation of distributed systems, with a focus on critical infrastructures. He has published more than 200 technical papers in those areas. He is currently the Director and PI of the DOE/DHS Trustworthy Cyber Infrastructure for the Power Grid (TCIPG) Center (www.tcipg.org), which is at the forefront of national efforts to make the U.S. power grid smart and resilient. He is also a member of the NIST Smart Grid Advisory Committee, which advises the NIST Director on the direction of NIST's Smart Grid-related programs and activities.
Dr. Sanders is co-developer of three tools for assessing computer-based systems: METASAN, UltraSAN, and Mobius. Mobius and UltraSAN have been distributed widely to industry and academia; more than 500 licenses for the tools have been issued to universities, companies, and NASA for evaluating the performance, dependability, and security of a variety of systems. He is also a co-developer of the Loki distributed system fault injector, the AQuA/ITUA middlewares for providing dependability/security to distributed and networked applications, and NetAPT (the Network Access Policy Tool), a tool for assessing the security of networked systems.
Dr. Sanders holds a B.S.E. in Computer Engineering (1983), an M.S.E. in Computer, Information, and Control Engineering (1985), and a Ph.D. in Computer Science and Engineering (1988), all from the University of Michigan.
Presented as part of the 2012 HCSS conference.
Abstract:
Over the last two decades formal methods have achieved widespread use by industry in the development of safety and security critical systems. However, these successes often go unacknowledged as evidence of the successful transition of formal methods, simply because the name "formal methods" is still associated with early prototypes rather than with the successful methods and tools they have evolved into. To better understand the benefits that formal methods can provide, Rockwell Collins has conducted many experiments in their use over the last twenty years. This paper describes several of these experiments, discusses what worked and what didn't, and identifies the key lessons we have learned about introducing formal methods into an industrial setting.
Biography:
Steven Miller is a Principal Software Engineer in the Advanced Technology Center of Rockwell Collins with over 30 years of experience in software development. He received his Ph.D. in computer science from the University of Iowa in 1991 and a B.A. in physics and mathematics from the University of Iowa in 1974.
His research interests include formal methods, model-based development, requirements modeling and analysis, and software testing. He was the principal investigator on a five-year project sponsored by NASA Langley and Rockwell Collins to develop advanced methods and tools for the development of flight-critical systems. Other projects he has led include an Air Force Research Labs project to apply formal verification to adaptive flight control systems and a collaborative effort with SRI International and NASA Langley to formally verify the microcode in the AAMP5 and AAMP-FV microprocessors.
Presented as part of the 2012 HCSS conference.
Abstract:
The number of systems and services that people interact with has increased rapidly over the past 20 years. Most of those systems and services have security controls, but until recently, the usability of those mechanisms was not considered. Research over the past 15 years has provided ample evidence that systems that are not usable are not secure, either, because users make mistakes or devise workarounds that create vulnerabilities. In this talk, I will present an overview of the most pressing problems, and what solutions research on usable security (HCISec) has produced in response to this challenge.
Past attempts have been focused on improving user interfaces to security mechanisms, but delivering effective security in practice requires more fundamental changes to how we design and implement security in systems and services. The talk will present examples of new approaches to requirements capture and system design, and how concepts from usability and economics are transforming security thinking in organizations.
Biography:
M. Angela Sasse is the Professor of Human-Centered Technology and Head of Information Security Research in the Department of Computer Science at University College London (UCL), UK. She is also the Director of the new Academic Centre of Excellence for Research in Cybersecurity at UCL (a distinction awarded by the UK Government Communications Headquarters).
A usability researcher by training, she started investigating the causes and effects of usability issues with security mechanisms in 1996. In addition to studying specific mechanisms such as passwords, biometrics, and access control, her research group has developed human-centered frameworks that explain the role of security, privacy, identity and trust in human interactions with technology. A list of projects and publications can be found at http://sec.cs.ucl.ac.uk/people/m_angela_sasse/.
Presented as part of the 2012 HCSS conference.
Abstract:
Fulfillment of the aspiration of designed-in security requires a re-balancing of costs, benefits, and incentives in the development of software-intensive systems. This re-balancing, if it is to be motivated on the basis of immediately valued outcomes (rather than mandates), must be facilitated by significant enhancements to both development and evaluation practice. There are a number of cases in industry that, in particular development settings, provide compelling evidence of feasibility. In this talk, we take an informal look at some of the common features of these success cases, with particular consideration of the technical interplay of development and evaluation factors. The models, tools, practices, and team culture in these success cases demonstrate this interplay of practices for development and for evaluation. We consider, from the research perspective, how this intertwining can be advanced in ways that can take us well beyond the current industry baseline and potentially lead to approaches that are suitable for at-scale government critical systems. Two particular issues are addressed: The first is the role of evidence accumulation in the simultaneous creation of systems and their assurance cases. The second is the folkloric tradeoff of systems capability with evidenced quality, and the hypothesis that this may not always be a necessary feature -- that designed-in security approaches can improve productivity overall.
Biography:
William L. Scherlis is a full Professor in the School of Computer Science at Carnegie Mellon. He is director of CMU's Institute for Software Research (ISR) in the School of Computer Science and the founding director of CMU's PhD Program in Software Engineering. Since Jan 2012 he has also been serving as Acting CTO for the Software Engineering Institute. His research relates to software assurance, software analysis, and assured safe concurrency. Dr. Scherlis joined the Carnegie Mellon faculty after completing a Ph.D. in Computer Science at Stanford University, a year at the University of Edinburgh (Scotland) as a John Knox Fellow, and an A.B. at Harvard University.
For more than a decade, Scherlis has led the Fluid Project, which has focused on techniques and practices for scalable software assurance, leading to a family of tools for "analysis-based verification," based primarily on sound static analysis but also including dynamic and heuristic analysis. Building on the use of fragmentary specifications, the project emphasizes issues of scalability, composability, and usability in the development of techniques to assure safe concurrency. Some of the technologies are commercialized through a Carnegie Mellon spinoff, and these versions have been applied to larger-scale systems including Hadoop, Java system libraries such as java.util.concurrent, and diverse proprietary and open source systems such as app servers and simulation engines.
Scherlis was principal investigator for the Carnegie Mellon/NASA High Dependability Computing Project (HDCP), in which CMU led a collaboration with five universities (MIT, USC, U Wash, U Md, U Wisc) to help NASA address long-term software dependability challenges.
Scherlis has testified before Congress on innovation and information technology, and, previously, on roles for a Federal CIO. He interrupted his career at CMU to serve at the Defense Advanced Research Projects Agency (DARPA) for six years, departing in 1993 as a senior executive. While at DARPA his responsibilities related to research and strategy in software technology, computer security, information infrastructure, and other topics. He was involved in the initiation of the high performance computing and communications (HPCC) program (now NITRD) and in creating the concept of operations for CERT-like organizations, several hundred of which are now in operation world-wide.
Scherlis chaired the National Research Council (NRC) study committee on defense software producibility, which recently released its final report Critical Code: Software Producibility for Defense. He served multiple terms as a member of the DARPA Information Science and Technology Study Group (ISAT). He also chaired a NRC study on information technology, innovation, and e-government, and has led or participated in other national studies related to cybersecurity, crisis response, analyst information management, Ada, and health care informatics infrastructure. He has been an advisor to major IT companies and is a founder of SureLogic and Panopto. He has served as program chair for a number of technical conferences, including the ACM Foundations of Software Engineering (FSE) Symposium. He has more than 80 scientific publications. He is a Fellow of the IEEE and a lifetime National Associate of the National Academy of Sciences.
Presented as part of the 2012 HCSS conference.
Abstract:
Functionality and security are in natural tension with one another. Functionality is about letting things happen. Security is about preventing them. So it is not surprising that programming methods that emphasize raw functionality lead to all manner of security problems. The power of general purpose languages -- so useful when I want to explore a new idea -- is the very thing that opens up so many opportunities for exploit. This is particularly acute when combined with the desire for efficiency. If I want to get people onto a plane as quickly as possible, then many of them might get through with very little checking. And if I want to process the elements of an array as quickly as possible, then maybe I'll forgo the bounds check. Oh yes, and to make sure the control-flow within the program is as efficient as possible, I won't check that the return address on the stack is the same as it was a few milliseconds ago when I wrote it there. And so buffer-overrun vulnerabilities are born.
Domain-specific techniques provide a powerful approach to addressing this dilemma between functionality and security. In many cases, the full generality of a general purpose programming language is not required to program specific solutions. Instead, the set of possible solutions can be characterized constructively using a (small) language that is focused on the specific domain. The language provides a great freedom of expression within the domain, and yet does not permit insecurities to arise from accidentally straying outside the domain. Examples of this technique range from simple regular-expression matchers through techniques for constructing SQL queries so that injection-attacks cannot occur, right through to implementations of operating system kernels. The tools associated with the domain-specific language are able to produce very efficient code while also providing strong guarantees about the absence of large classes of security vulnerabilities. By looking at the wide range of domain-specific techniques now in use, we can extract general lessons as to the kind of security guarantees these techniques are able to provide by construction, as well as indications of when they may or may not be appropriate.
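The SQL example in the paragraph above can be made concrete with a small sketch (the table and inputs are invented): when queries may only be assembled through a constrained, parameterised interface, attacker-supplied data can never become query syntax.

# Illustration of constrained query construction versus free string building.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users(name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"

# General-purpose string building: the attacker's input becomes query syntax.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % user_input
print("unsafe query returns:", db.execute(unsafe).fetchall())

# A constrained construction: parameters can only ever be data, never syntax.
safe = "SELECT secret FROM users WHERE name = ?"
print("parameterised query returns:", db.execute(safe, (user_input,)).fetchall())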
Biography:
Dr. John Launchbury is Chief Scientist of Galois, Inc. John founded Galois in 1999 to address challenges in Information Assurance through the application of Functional Programming and Formal Methods. Under his leadership, formerly as CEO, the company has grown strongly, successfully winning and delivering on multiple contract awards for more than a decade. John continues to build Galois' reputation for thought leadership in high assurance technology development.
Prior to founding Galois, John was a full professor in Computer Science and Engineering at the Oregon Graduate Institute School of Science and Engineering at OHSU. His instruction style earned him several awards for outstanding teaching, and he is internationally recognized for his work on the analysis and semantics of programming languages, and on the Haskell programming language in particular. John received First Class Honors in Mathematics from Oxford University in 1985. He holds a Ph.D. in Computing Science from the University of Glasgow and won the British Computer Society's distinguished dissertation prize. In 2010, John was inducted as a Fellow of the Association for Computing Machinery (ACM).
Presented as part of the 2012 HCSS conference.
Abstract:
Mobile applications--both native and web-based--are a critical emerging segment of the software industry, and security for these applications is of increasing concern. Unfortunately, today's mobile applications are expressed at a low level of abstraction in which important security properties are implicit and only indirectly related to code, making assurance difficult.
In this talk I will argue for a new approach based on two principles: expressing the application at a level of abstraction more appropriate for reasoning about security, and integrating explicit security design information into the implementation of the application. I will describe past successes in applying these principles and describe the work we are now doing to build a system that embodies this approach.
Biography:
Jonathan Aldrich is Associate Professor of Computer Science at Carnegie Mellon University. He is the director of CMU's undergraduate minor program in Software Engineering, and teaches courses in programming languages, software engineering, and program analysis for quality and security. Dr. Aldrich joined the CMU faculty after completing a Ph.D. at the University of Washington and a B.S. at Caltech.
Dr. Aldrich's research centers on programming languages and type systems that are deeply informed by software engineering and security considerations. His research contributions include verifying the correct and secure implementation of an architectural design, modular formal reasoning about code, and API protocol specification and verification. For his work on software architecture, Aldrich received a 2006 NSF CAREER award and the 2007 Dahl-Nygaard Junior Prize, given annually for a significant technical contribution to object-oriented programming. Aldrich was one of 12 computer science researchers nationwide who were selected to the 2007 DARPA Computer Science Study Group. Aldrich's current work focuses on programming systems for secure mobile applications, and on leveraging permissions to reason about typestate and concurrency.
Presented as part of the 2012 HCSS conference.
Abstract:
Margrave is a policy-analysis tool providing query-based verification and query-based views of policies. It supports reasoning about the combined effects of policies written in different configuration languages, such as a firewall filter and a static router, or multiple cooperating access-control policies in an enterprise. It supports "change-impact analysis", allowing a user to compare the effects of policy updates.
In this talk we will focus on the role of model-finding in Margrave and the resulting property-free verification paradigm it supports.
This is joint work with Kathi Fisler (WPI), Timothy Nelson (WPI), and Shriram Krishnamurthi (Brown).
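Change-impact analysis can be illustrated, very loosely, by brute-force comparison over a tiny request space; Margrave itself works by model-finding over declarative policy languages, and the two policy versions below are invented for the sketch.

# Toy change-impact comparison of two hypothetical firewall policy versions.
from itertools import product

def policy_v1(src, dst, port):
    if src == "lan" and port in (80, 443):
        return "permit"
    return "deny"

def policy_v2(src, dst, port):
    if src == "lan" and port in (80, 443, 22):   # change: ssh now allowed
        return "permit"
    return "deny"

requests = product(["lan", "dmz", "wan"], ["lan", "dmz", "wan"], [22, 80, 443])
for src, dst, port in requests:
    a, b = policy_v1(src, dst, port), policy_v2(src, dst, port)
    if a != b:
        print(f"{src}->{dst}:{port}  v1={a}  v2={b}")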
Biography:
Daniel J. Dougherty is Professor of Computer Science at WPI, having held previous faculty positions at Dartmouth College and Wesleyan University.
His current research focus is on formal methods, particularly as applied to security policies and cryptographic protocols. He has also worked in the areas of databases, logic, automated deduction, and the design and implementation of programming languages.
He is involved in policy research with colleagues at WPI, Brown University, INRIA and CNRS and is a member of the Steering Committee for the annual Security and Rewriting Techniques workshop. His work has been supported by the National Science Foundation, the National Security Agency, the Office of Naval Research, and the Department of Education.
http://web.cs.wpi.edu/~dd/
Abstract:
The National Science Foundation's Cyber-Physical Systems program promotes the vision of developing a scientific and engineering foundation for systems that are built as networks of physical and computational components operating at many scales, yet which remain safe, secure, and dependable -- "systems you can bet your life on." The CPS challenge spans essentially every engineering domain. It requires the integration of knowledge and engineering principles across many computational and engineering research disciplines (computing, networking, control, human interaction, learning theory, as well as mechanical, chemical, biomedical, and other engineering specialties) to develop a "new CPS system science." The objective of the CPS "Virtual Organization" (CPS-VO) is to actively build and help support the multidisciplinary community needed to underpin this new research area and to enable international and interagency collaboration on CPS.
Biography:
Chris vanBuskirk has been a Senior Research Engineer and Project Manager at Vanderbilt's Institute for Software Integrated Systems for the past twelve years. Chris' general professional interests lie in the practical application of novel, model-based formalisms and design methodologies to complex, real-world, human-in-the-loop science and engineering activities. After completing an M.S. in Engineering at The University of Mississippi, Chris pursued a career in R&D at organizations such as Cray Research Inc., UMiss Medical Center, The National Cancer Institute, and Johns Hopkins University. Currently, in addition to project management duties on the DARPA AVM Program's META Language and META Design Flow projects, Mr. vanBuskirk also serves as PI for the NSF's CPS Virtual Organization (http://cps-vo.org/), which actively supports the formation and development of distributed research communities required by the demanding challenges of the massively multi-disciplinary cyber-physical systems domain.
Abstract:
This work introduces topic modeling with latent Dirichlet allocation (LDA) as an approach for detecting malicious activity in wireless sensor networks (WSN). A brief summary of the theory and the derivation of the underlying mathematics for LDA is provided for general applications in topic modeling. An insightful real-world application is demonstrated to showcase the extensibility and utility of the LDA algorithm for detecting vulnerabilities in computer systems.
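As a sketch of the idea only (the tokenised sensor event logs below are invented, and neither the paper's data nor its exact detection rule is reproduced), LDA can be fit on normal traffic, and a document whose inferred topic mixture departs from the learned profile can be flagged for closer inspection.

# Sketch using scikit-learn's LDA on invented "event documents".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

normal_logs = (["temp read ack sleep"] * 30) + (["route hello sync sleep"] * 30)
unusual_log = ["temp hello temp hello route read"]      # mixes both behaviours

vec = CountVectorizer()
X = vec.fit_transform(normal_logs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Compare topic mixtures; mixtures far from the normal profile are suspects.
print("normal topic mixture :", lda.transform(vec.transform([normal_logs[0]])))
print("unusual topic mixture:", lda.transform(vec.transform(unusual_log)))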
Biography:
Joseph Natarian (Member, IEEE) received the B.Sc. degree in electrical engineering and computer science from Wright State University, Dayton, Ohio, in 2008. He is currently pursuing an M.Sc. in electrical engineering at the University of Dayton, Dayton, Ohio. Since 2007 he has been supporting the Air Force Research Laboratory (AFRL) in Dayton, Ohio. From 2007 to 2008, he worked for General Dynamics, Advanced Information Systems, where he supported multiple research projects in the Collaborative Interfaces Branch of the Warfighter Interface Division within AFRL. In 2008 he joined the Civil Service as a research engineer on the in-house research team in the Distributed Collaborative Sensor System Technology Branch of the Autonomic Trusted Sensing for Persistent Intelligence Technology Office within AFRL. As a research engineer, he has served as a subject matter expert and technical advisor for multiple Defense Advanced Research Projects Agency (DARPA) research programs from the Information Innovation Office (I2O). His research interests include trust in complex systems, tools to evaluate system security, and techniques for identifying and traversing threat vectors of complex systems. Currently Mr. Natarian is a program manager in the advanced programs division of the Sensors Directorate at AFRL.
Abstract:
Guardol is a domain-specific language designed to facilitate the construction of correct network guards operating over tree-shaped data. The Guardol system generates Ada code from Guardol programs and also provides specification and automated verification support. Guard programs and specifications are translated to higher order logic, then deductively transformed to a form suitable for an SMT-style decision procedure for recursive functions over tree-structured data. The result is that difficult properties of Guardol programs can be proved fully automatically.
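To give a flavour of the kind of computation involved (Guardol itself compiles to Ada and discharges its obligations with an SMT-style procedure; what follows is an invented Python rendering whose property is merely checked by exhaustive testing on small trees):

# A recursive "guard" over tree-shaped messages and a key property of it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

DISALLOWED = {"classified"}

def scrub(n: Optional[Node]) -> Optional[Node]:
    """Recursively remove any subtree rooted at a disallowed label."""
    if n is None or n.label in DISALLOWED:
        return None
    return Node(n.label, scrub(n.left), scrub(n.right))

def contains_disallowed(n: Optional[Node]) -> bool:
    if n is None:
        return False
    return (n.label in DISALLOWED
            or contains_disallowed(n.left)
            or contains_disallowed(n.right))

def all_trees(depth, labels=("ok", "classified")):
    if depth == 0:
        yield None
        return
    yield None
    for lab in labels:
        for left in all_trees(depth - 1):
            for right in all_trees(depth - 1):
                yield Node(lab, left, right)

# Property: the guard's output never contains a disallowed label.
for t in all_trees(3):
    assert not contains_disallowed(scrub(t))
print("property holds on all trees up to depth 3")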
Biography:
Dr. David S. Hardin has made contributions in the areas of formal methods, computer architecture for High Assurance systems, as well as real-time and embedded Java. He is currently a Principal Engineer in the Advanced Technology Center at Rockwell Collins, where he has served as Principal Investigator for several U.S. DoD Research and Development programs. He is the editor of the book Design and Verification of Microprocessor Systems for High-Assurance Applications (Springer 2010), and a co-author of The Real-Time Specification for Java, as well as the Java 2 Micro Edition Connected Device Configuration/Foundation Profile standards. He is author or co-author of more than 30 peer-reviewed publications, and is a co-inventor on nine U.S. patents. In 1999, Dr. Hardin co-founded aJile Systems, a startup company focused on real-time and embedded Java technology, and served as aJile's Chief Technical Officer from 1999 to 2003. Dr. Hardin was selected as a Rockwell Engineer of the Year for 1997 for his contributions to the development of the world's first Java microprocessor. His academic career includes BSEE and MSEE degrees from the University of Kentucky, and a Ph.D. in Electrical and Computer Engineering from Kansas State University, in 1989. Dr. Hardin is a proud native of the Commonwealth of Kentucky, and is a Kentucky Colonel.
Abstract:
One area that has been particularly productive in the rapidly growing field of formal analysis of cryptographic protocols is development of special-purpose model checkers. These tools use variants of the Dolev-Yao model to search through different concurrent executions of a protocol in the face of a network controlled by an attacker who can read, alter, and redirect traffic using operations available to an honest principal. These tools have been extremely effective at finding non-intuitive flaws in protocol design.
However, it has increasingly been realized that there is a need not only to list the operations that are used in a protocol but also to specify and reason about their algebraic properties. This is because many cryptographic algorithms and protocols depend on these algebraic properties to achieve their goals. Probably the best-known example is Diffie-Hellman key exchange, which uses an exponentiation operator exp such that exp(exp(x, a), b) = exp(x, a*b), where * is associative-commutative. Moreover, these very algebraic properties can also be used by attackers to defeat the protocols.
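For concreteness, the Diffie-Hellman property can be rendered in a toy term representation (this is not Maude-NPA's unification machinery): keeping the base together with the multiset of exponents makes the two parties' terms exp(exp(g, a), b) and exp(exp(g, b), a) equal by construction, which is exactly the equality that both honest principals and attackers can exploit.

# Toy terms modulo the exponentiation equation, with * treated as AC.
from collections import Counter

def exp(term, exponent):
    base, exps = term
    return (base, exps + Counter([exponent]))   # collect exponents as a multiset

g = ("g", Counter())
view1 = exp(exp(g, "a"), "b")   # one party raises g^a to b
view2 = exp(exp(g, "b"), "a")   # the other raises g^b to a
print(view1 == view2)           # True: equal modulo the AC theory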
These algebraic properties can be easily expressed as equations involving the operators used in the protocol. However, incorporating the equations within protocol analysis is not trivial. In order to be effective, a tool must be capable of reasoning about protocols that use not only a particular equational theory but also different combinations of equational theories. This is especially the case when protocols that use different algorithms are composed, as is often the case.
The Maude-NRL Protocol Analyzer (Maude-NPA for short) is a tool that was designed specifically for reasoning about protocols that rely on different combinations of equational theories. It produces symbolic traces, that is, traces that may contain logical variables. Although the general problem of deciding security in the Dolev-Yao model is undecidable when an unbounded number of protocol executions is allowed, Maude-NPA is often able to decide security in these cases by a combination of its symbolic approach and inductive state-space reduction techniques. Maude-NPA determines security by searching backwards to an initial state specified by the user, producing an attack trace if it is successful.
Maude-NPA's backwards search algorithm is based on unification and narrowing modulo an equational theory E, where E is the theory describing the behavior of the operations used in the protocol. Unification solves the problem of determining which substitutions to the variables in two states make the two states the same modulo E. Narrowing allows us to unify the output of a state transition with a substate of a symbolic state to determine what symbolic state must precede it. We have developed a general unification algorithm, folding variant narrowing, that applies to the case in which the equational theory E is a composition of theories (R, Ax), in which R is a set of rewrite rules, that is, rules that can be given an orientation l → r, and Ax is a commonly occurring theory such as the associative-commutative theory (AC). The other requirements on R with respect to Ax are either easily checkable or have been thoroughly studied in the literature, with automated support available. When one wishes to compose two theories (R1, Ax1) and (R2, Ax2), all that is required is to check whether R1 and R2 satisfy these properties. Combining the two sets of axioms Ax1 and Ax2 is harder, but the underlying Maude engine supports such combinations for commonly occurring sets of axioms.
In this poster we will give an overview of the Maude-NPA tool and how it applies folding variant narrowing to support composition of theories and protocols. We will also demonstrate the use of Maude-NPA and discuss future plans. The Maude-NPA tool is available for download at http://maude.cs.uiuc.edu/tools/Maude-NPA/.
Biography:
Dr. Meadows is a senior researcher in computer security at the Center for High Assurance Systems at the Naval Research Laboratory (NRL), heading that group's Formal Methods Section. She was the principal developer of the NRL Protocol Analyzer (NPA), which was one of the first software tools to find previously undiscovered flaws in cryptographic protocols, and has been used successfully in the analysis of a large number of protocols, including the Internet Key Exchange Protocol and the Group Domain of Interpretation protocol, both of which became standards for the Internet Engineering Task Force. Currently she is co-PI on a project that is developing a successor to the NRL Protocol Analyzer, Maude-NPA, which allows more fine-grained specification of crypto-algorithms than NPA. Other projects she is involved with include the machine-verified analysis of crypto-algorithms and protocols for the IARPA-sponsored Security and Privacy Assurance Research Program, and the development of logical systems for reasoning about the security of protocols for pervasive computing. Prior to joining NRL, she was on the mathematics faculty at Texas A&M University, where she worked in various areas of cryptography, including secret sharing schemes and software protection. She received her Ph.D. in mathematics from the University of Illinois at Urbana-Champaign.
Abstract:
This poster outlines the design of Quest-V, which is implemented as a collection of separate kernels operating together as a distributed system on a chip. Quest-V uses virtualization techniques to isolate kernels and prevent local faults from affecting remote kernels. This leads to a high-confidence multikernel approach, where failures of system subcomponents do not render the entire system inoperable. A virtual machine monitor for each kernel keeps track of shadow page table mappings that control immutable memory access capabilities. This ensures a level of security and fault tolerance in situations where a service in one kernel fails, or is corrupted by a malicious attack. Communication is supported between kernels using shared memory regions for message passing. Similarly, device driver data structures are shareable between kernels to avoid the need for complex I/O virtualization, or communication with a dedicated kernel responsible for I/O. In Quest-V, device interrupts can be delivered directly to a kernel, rather than via a monitor that determines the destination. Apart from bootstrapping each kernel, handling faults and managing shadow page tables, the monitors are not needed. This differs from conventional virtual machine systems in which a central monitor, or hypervisor, is responsible for scheduling and management of host resources amongst a set of guest kernels. In this poster, we show how Quest-V can implement novel fault isolation and recovery techniques that are not possible with conventional systems.
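As a loose, user-level analogy for the shared-memory message passing described above (Quest-V itself is implemented in C at kernel level; this Python sketch is invented purely for illustration), two kernels can exchange messages through a fixed-size ring buffer in a shared region without any central monitor mediating the exchange.

# Single-producer, single-consumer ring buffer standing in for a shared region.
class SharedRing:
    def __init__(self, size=8):
        self.slots = [None] * size
        self.head = 0          # next slot to write
        self.tail = 0          # next slot to read

    def send(self, msg):
        nxt = (self.head + 1) % len(self.slots)
        if nxt == self.tail:
            return False       # ring full; sender retries later
        self.slots[self.head] = msg
        self.head = nxt
        return True

    def recv(self):
        if self.tail == self.head:
            return None        # ring empty
        msg, self.slots[self.tail] = self.slots[self.tail], None
        self.tail = (self.tail + 1) % len(self.slots)
        return msg

ring = SharedRing()
ring.send({"from": "kernel0", "type": "fault-report"})
print(ring.recv())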
Biography:
Dr. Richard West is an Associate Professor of Computer Science at Boston University. His research interests encompass operating systems, real-time/embedded systems, resource management, kernels, system organization and structure, and hardware-software interaction. His current research focus is on the development of a new operating system, called Quest, and its sister system called Quest-V. While Quest is a real-time OS for multicore systems, Quest-V uses virtualization techniques to implement a safe and predictable OS as a distributed system on a chip. Work on these systems is funded in part by NSF grants, amongst others.
For the past few years, Dr. West has collaborated with VMware, Inc. in both Palo Alto and Cambridge, MA, where he has held the position of Advanced Research Engineer. Through this role, he worked with VMware's resource management team to develop new performance-enhancing techniques for managing physical resources in the ESX hypervisor. Several patent applications related to cache-aware resource management and scheduling in multicore systems have since been filed.
Before joining Boston University, Dr. West obtained Ph.D. and M.S. degrees in Computer Science from the Georgia Institute of Technology. He also holds a Master of Engineering degree from the University of Newcastle-upon-Tyne, England.
Abstract:
While certain subfields of security have a strong scientific basis (e.g., cryptography, formal methods), there is no comprehensive scientific basis for constructing systems that are trustworthy by design. This lack of a disciplined and rigorous scientific basis profoundly limits our ability to design, deploy, and trust most large-scale and cyber-physical systems (CPS). A foundational science of security is needed now. To this end, the Science of Security Group will provide a virtual organization for community awareness, collaboration, and information sharing, all directed toward maturing the scientific basis for security.
Abstract:
This poster describes SPARKSkein - a new reference implementation of the Skein algorithm. Skein is a cryptographic "hash" algorithm - one of the basic building blocks of integrity and authentication in digital communications. Skein is one of the final five candidate algorithms in competition to become the new international standard hash algorithm. The designers of Skein have provided a "reference implementation" written in C. Such reference implementations are supposed to be readable, "obviously correct", portable, and provide reasonable performance on contemporary hardware.
This poster reports our experience in re-implementing Skein in SPARK - a programming language and verification toolset that is consciously designed for the development of high-assurance software. Unusually, SPARK offers a formal semantics and a verification system based on Hoare-logic and theorem proving, but retains the "low level" features needed to implement real-time code, so it seemed like an interesting idea to see if Skein could be implemented in SPARK without compromise.
The new implementation is readable, completely portable to a wide variety of machines, and "formal" in that it is subject to a completely automated proof of type safety. This proof also identified a subtle bug in the implementation (an integer overflow which leads to an undefined output hash) that persists in the C version of the code. Performance testing has been carried out using three generations of the GCC compiler. With the latest compiler, the SPARK code offers performance identical to or better than the existing C reference implementation at all optimization levels. As a further result of this work, we have identified several opportunities to improve both the SPARK verification tools and GCC.
Finally, in the interests of empirical engineering, the entire SPARKSkein release and the supporting tools have been made freely available.
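The bug itself lives in the C and SPARK code and is not reproduced here; the fragment below only illustrates, with an invented byte counter, the general class of error that a proof of type safety forces into the open: modular arithmetic silently wrapping instead of being shown to stay in range.

# Simulated 64-bit counter update with an explicit overflow check.
MASK64 = (1 << 64) - 1

def add_bytes_processed(counter, chunk_len):
    total = counter + chunk_len
    # A type-safety proof obligation would demand evidence that this cannot
    # exceed the 64-bit range; here we simply check it at run time.
    assert total <= MASK64, "overflow: counter no longer counts bytes processed"
    return total & MASK64

print(add_bytes_processed(10, 4096))          # fine
try:
    add_bytes_processed(MASK64 - 5, 4096)     # would wrap in 64-bit arithmetic
except AssertionError as e:
    print(e)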
Biography:
Roderick Chapman is a principal engineer at Altran Praxis, specializing in many aspects of high-integrity systems development. He has made significant contributions to the Correctness by Construction approach, and is currently the technical authority for the SPARK Product Group at Praxis. He is a regular invited speaker at international conferences, and is widely recognized as a leading authority on high-integrity software development, programming language design, and software verification tools.
In 2006, Rod was invited (at the age of just 37) to become a Fellow of the British Computer Society. More recently, Rod has turned his attention to software process, working on merging the discipline of traditional high-integrity processes with more agile approaches such as XP and the philosophy of the Lean Engineering movement. As part of this work, Rod is leading the roll-out of PSP within the company. In 2011, Rod was the joint recipient of the first Microsoft Research Verified Software Milestone Award for his contribution to the Tokeneer project.