Xu, B., Lu, M., Zhang, D.  2017.  A Software Security Case Developing Method Based on Hierarchical Argument Strategy. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :632–633.

Security cases, which document the rationale for believing that a system is adequately secure, have seen limited use because practical construction methods are lacking. This paper presents a hierarchical software security case development method to address this issue. We first present a security concept relationship model, then propose a hierarchical asset-threat-control-measure argument strategy that incorporates an asset classification and a threat classification for software security cases. Finally, we propose 11 software security case patterns and illustrate one of them.
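As a rough editorial illustration (not the authors' notation), the core of an asset-threat-control argument can be modeled as a small data structure, where an argument is incomplete whenever some identified threat has no control measure attached. All class and field names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    controls: list = field(default_factory=list)  # control measures mitigating this threat

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)   # threats identified for this asset

def uncontrolled_threats(assets):
    """Return (asset, threat) pairs that lack any control measure,
    i.e. gaps in the hierarchical asset-threat-control argument."""
    return [(a.name, t.name) for a in assets for t in a.threats if not t.controls]

# Example: one asset with two threats, one of them unmitigated
db = Asset("user database", [
    Threat("SQL injection", ["input validation"]),
    Threat("credential theft"),
])
print(uncontrolled_threats([db]))  # [('user database', 'credential theft')]
```

A real security case would argue each asset-threat-control link with evidence; this sketch only shows the shape of the hierarchy.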

Diskin, Zinovy, Maibaum, Tom, Wassyng, Alan, Wynn-Williams, Stephen, Lawford, Mark.  2018.  Assurance via Model Transformations and Their Hierarchical Refinement. Proceedings of the 21st ACM/IEEE International Conference on Model Driven Engineering Languages and Systems. :426–436.

Assurance is a demonstration that a complex system (such as a car or a communication network) possesses an important property, such as safety or security, with a high level of confidence. In contrast to currently dominant approaches to building assurance cases, which focus on goal structuring and/or logical inference, we propose treating assurance as a model transformation (MT) enterprise: saying that a system possesses an assured property amounts to saying that a particular assurance view of the system, comprising the assurance data, satisfies acceptance criteria posed as assurance constraints. While the MT realizing this view is very complex, we show that it can be decomposed into elementary MTs via a hierarchy of refinement steps. The transformations at the bottom level are ordinary MTs that can be executed on data specifying the system, thus providing the assurance data to be checked against the assurance constraints. In this way, assurance amounts to traversing the hierarchy from top to bottom and assuring the correctness of each MT along the path. Our approach has a precise mathematical foundation (rooted in process algebra and category theory), a necessity if we are to model precisely and then analyze our assurance cases. We discuss the practical applicability of the approach and argue that it has several advantages over existing approaches.
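A toy rendering of the abstract's central idea, with invented names and values (the paper's actual formalism is categorical, not code): a complex transformation is refined into a chain of elementary ones, the chain is executed on system data to produce the assurance view, and the assurance constraint is checked on that view rather than on the raw model:

```python
def compose(*steps):
    """Refine one complex model transformation into a chain of elementary MTs."""
    def pipeline(model):
        for step in steps:
            model = step(model)
        return model
    return pipeline

# Elementary MTs over a dict-based "system model" (illustrative physics)
extract_braking = lambda m: {"stop_distance_m": m["speed_mps"] ** 2 / (2 * m["decel_mps2"])}
add_margin = lambda v: {**v, "stop_distance_m": v["stop_distance_m"] * 1.2}

# The assurance view is the composite transformation applied to system data
assurance_view = compose(extract_braking, add_margin)
view = assurance_view({"speed_mps": 20, "decel_mps2": 8})

# Assurance constraint, checked against the view's assurance data
assert view["stop_distance_m"] <= 40, "assurance constraint violated"
print(view)  # {'stop_distance_m': 30.0}
```

Assuring each elementary step separately, as the paper proposes, then corresponds to arguing the correctness of each function in the chain.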

Chechik, Marsha.  2019.  Uncertain Requirements, Assurance and Machine Learning. 2019 IEEE 27th International Requirements Engineering Conference (RE). :2–3.
From financial services platforms to social networks to vehicle control, software has come to mediate many activities of daily life. Governing bodies and standards organizations have responded to this trend by creating regulations and standards to address issues such as safety, security and privacy. In this environment, the compliance of software development with standards and regulations has emerged as a key requirement. Compliance claims and arguments are often captured in assurance cases, with linked evidence of compliance. Evidence can come from test cases, verification proofs, human judgement, or a combination of these. That is, we try to build (safety-critical) systems carefully according to well-justified methods and articulate these justifications in an assurance case that is ultimately judged by a human. Yet software is deeply rooted in uncertainty, making pragmatic assurance more inductive than deductive: most complex open-world functionality is either not completely specifiable (due to uncertainty) or not cost-effective to specify, and deductive verification cannot happen without a specification. Inductive assurance, achieved by sampling or testing, is easier, but generalization from a finite set of examples cannot be formally justified. And of course the recent popularity of constructing software via machine learning only worsens the problem: rather than being specified by predefined requirements, machine-learned components learn existing patterns from the available training data and make predictions for unseen data when deployed. On the surface, this ability is extremely useful for hard-to-specify concepts, e.g., the definition of a pedestrian in a pedestrian detection component of a vehicle. On the other hand, safety assessment and assurance of such components becomes very challenging. In this talk, I focus on two specific approaches to arguing about the safety and security of software under uncertainty.
The first is a framework for managing uncertainty in assurance cases (for "conventional" and "machine-learned" systems) by systematically identifying, assessing and addressing it. The second is recent work on supporting the development of requirements for machine-learned components in safety-critical domains.
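The abstract's point that inductive assurance rests on finite testing can be made concrete with a small, invented sketch (not from the talk): even when every test passes, the most one can claim is a statistical bound on the failure rate, here via the standard "rule of three" (with n passing tests, the ~95% upper confidence bound on the true failure probability is roughly 3/n):

```python
import random

def detector(sample):
    """Stand-in for a machine-learned component; here a trivial threshold."""
    return sample >= 0.5

def inductive_assurance(component, tests):
    """Run a finite test set; all-pass yields a statistical claim, not a proof."""
    failures = sum(1 for x, expected in tests if component(x) != expected)
    n = len(tests)
    if failures == 0:
        # Rule of three: 95% upper bound on failure probability ~ 3/n
        return f"0/{n} failures; failure rate <= {3 / n:.3f} at ~95% confidence"
    return f"{failures}/{n} failures"

random.seed(0)
tests = [(x, x >= 0.5) for x in (random.random() for _ in range(300))]
print(inductive_assurance(detector, tests))
# → 0/300 failures; failure rate <= 0.010 at ~95% confidence
```

The bound shrinks only linearly in the number of tests, which is one way to see why generalization from finite samples cannot substitute for deductive verification.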