Biblio

Filters: Keyword is software quality
2020-03-27
Boehm, Barry, Rosenberg, Doug, Siegel, Neil.  2019.  Critical Quality Factors for Rapid, Scalable, Agile Development. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :514–515.

Agile methods frequently have difficulties with qualities, often specifying quality requirements as stories, e.g., "As a user, I need a safe and secure system." Such projects will generally schedule some capability releases followed by safety and security releases, only to discover user-developer misunderstandings and unsecurable agile code, leading to project failure. Very large agile projects also have further difficulties with project velocity and scalability. Examples are trying to use daily standup meetings, 2-week sprints, shared tacit knowledge vs. documents, and dealing with user-developer misunderstandings. At USC, our Parallel Agile, Executable Architecture research project shows some success at mid-scale (50 developers). We also examined several large (hundreds of developers) TRW projects that had succeeded with rapid, high-quality development. The paper elaborates on their common Critical Quality Factors: a concurrent 3-team approach, an empowered Keeper of the Project Vision, and a management approach emphasizing qualities.

2020-03-09
Song, Zekun, Wang, Yichen, Zong, Pengyang, Ren, Zhiwei, Qi, Di.  2019.  An Empirical Study of Comparison of Code Metric Aggregation Methods–on Embedded Software. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :114–119.

How to evaluate software reliability based on historical data of embedded software projects is one of the problems we have to face in practical engineering. Therefore, we establish a software reliability evaluation model based on code metrics. This evaluation technique requires the aggregation of software code metrics into project metrics. Statistical value methods, metric distribution methods, and econometric methods are commonly used aggregation methods. Our concerns are how these methods differ in the software reliability evaluation process and which of them can improve the accuracy of the reliability evaluation model we have established. In view of these concerns, we conduct an empirical study on the application of code metric aggregation methods based on actual projects. We identify the distribution laws of code metrics for the projects under study. Using these distribution laws, we optimize the aggregation method of code metrics and improve the accuracy of the software reliability evaluation model.
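
As a rough illustration of the aggregation step described above, the sketch below contrasts statistical-value aggregators (mean, median, percentile) with an econometric-style aggregator (the Gini coefficient) over hypothetical function-level complexity values; it is not the paper's pipeline or data.

```python
# Sketch: aggregating function-level code metrics into a project-level value.
# The metric values below are hypothetical; the paper's own aggregation
# pipeline and dataset are not reproduced here.
import numpy as np

def gini(values):
    """Econometric-style aggregator: Gini coefficient of a metric sample."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return (2 * (index * v).sum() / (n * v.sum())) - (n + 1) / n

# Hypothetical cyclomatic-complexity values for each function in a project.
complexity_per_function = [1, 1, 2, 2, 3, 3, 4, 8, 15, 42]

aggregates = {
    "mean":   np.mean(complexity_per_function),     # statistical value
    "median": np.median(complexity_per_function),   # statistical value
    "p90":    np.percentile(complexity_per_function, 90),
    "gini":   gini(complexity_per_function),        # inequality of the distribution
}
print(aggregates)
```

Which aggregate best feeds a reliability model is exactly the empirical question the paper investigates.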

Chhillar, Dheeraj, Sharma, Kalpana.  2019.  ACT Testbot and 4S Quality Metrics in XAAS Framework. 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). :503–509.

The purpose of this paper is to analyze all cloud-based service models and the Continuous Integration, Deployment and Delivery process, and to propose an Automated Continuous Testing and testing-as-a-service based TestBot and metrics dashboard that will be integrated with all existing automation, bug logging, build management, configuration and test management tools. Recently, the cloud has been used by organizations to save the time, money and effort required to set up and maintain infrastructure and platforms. Continuous Integration and Delivery are practiced nowadays within the Agile methodology to enable multiple software releases per day and to ensure that all development, test and production environments can be synced up quickly. In such an agile environment there is a need to ramp up testing tools and processes so that overall regression testing, including functional, performance and security testing, can be done along with build deployments in real time. To support this, we researched Continuous Testing and worked with industry professionals who are involved in architecting, developing and testing software products. A lot of research has been done towards automating software testing so that testing of a software product can be done quickly and the overall testing process can be optimized. As part of this paper we propose the ACT TestBot tool and metrics dashboard, and coin the term 4S quality metrics to quantify the quality of a software product. ACT TestBot and the metrics dashboard will be integrated with Continuous Integration tools, bug reporting tools, test management tools and data analytics tools to trigger automation scripts, continuously analyze application logs, open defects automatically and generate metrics reports. A defect pattern report will be created to support root cause analysis and to take preventive action.

2020-02-10
Izurieta, Clemente, Prouty, Mary.  2019.  Leveraging SecDevOps to Tackle the Technical Debt Associated with Cybersecurity Attack Tactics. 2019 IEEE/ACM International Conference on Technical Debt (TechDebt). :33–37.
Context: Managing technical debt (TD) associated with external cybersecurity attacks on an organization can significantly improve decisions made when prioritizing which security weaknesses require attention. Whilst source code vulnerabilities can be found using static analysis techniques, malicious external attacks expose the vulnerabilities of a system at runtime and can sometimes remain hidden for long periods of time. By mapping malicious attack tactics to the consequences of weaknesses (i.e., exploitable source code vulnerabilities) we can begin to understand and prioritize the refactoring of the source code vulnerabilities that cause the greatest amount of technical debt on a system. Goal: To establish an approach that maps common external attack tactics to system weaknesses. The consequences of a weakness associated with a specific attack technique can then be used to determine the technical debt principal of said violation, which can be measured in terms of loss of business rather than source code maintenance. Method: We present a position study that uses Jaccard similarity scoring to examine how 11 malicious attack tactics can relate to Common Weakness Enumerations (CWEs). Results: We conduct a study to simulate attacks and generate dependency graphs between external attacks and the technical consequences associated with CWEs. Conclusion: The mapping of cyber security attacks to weaknesses allows operational staff (SecDevOps) to focus on deploying appropriate countermeasures and allows developers to focus on refactoring the vulnerabilities with the greatest potential for technical debt.
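
As a hedged illustration of the Jaccard scoring mentioned in the Method, the sketch below matches hypothetical keyword sets for two attack tactics against two CWE term sets; the term sets and the tactic/CWE pairings are illustrative only and not taken from the study.

```python
# Sketch: Jaccard similarity between attack-tactic and CWE descriptions,
# using hypothetical keyword sets (not the paper's actual corpus).

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

tactic_terms = {
    "credential-access": {"password", "credential", "hash", "dump", "token"},
    "exfiltration": {"data", "transfer", "exfiltrate", "channel", "compress"},
}

cwe_terms = {
    "CWE-522": {"credential", "protection", "password", "storage"},   # Insufficiently Protected Credentials
    "CWE-319": {"cleartext", "transmission", "data", "channel"},      # Cleartext Transmission of Sensitive Information
}

for tactic, t_terms in tactic_terms.items():
    scores = {cwe: jaccard(t_terms, c_terms) for cwe, c_terms in cwe_terms.items()}
    best = max(scores, key=scores.get)
    print(tactic, "->", best, round(scores[best], 2))
```
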
Yang, Jinqiu, Tan, Lin, Peyton, John, A Duer, Kristofer.  2019.  Towards Better Utilizing Static Application Security Testing. 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). :51–60.
Static application security testing (SAST) detects vulnerability warnings through static program analysis. Fixing the vulnerability warnings tremendously improves software quality. However, SAST has not been fully utilized by developers due to various reasons: difficulties in handling a large number of reported warnings, a high rate of false warnings, and lack of guidance in fixing the reported warnings. In this paper, we collaborated with security experts from a commercial SAST product and propose a set of approaches (Priv) to help developers better utilize SAST techniques. First, Priv identifies preferred fix locations for the detected vulnerability warnings and groups them based on common fix locations. Priv also leverages visualization techniques so that developers can quickly investigate the warnings in groups and prioritize their quality-assurance effort. Second, Priv identifies actionable vulnerability warnings by removing SAST-specific false positives. Finally, Priv provides customized fix suggestions for vulnerability warnings. Our evaluation of Priv on six web applications highlights the accuracy and effectiveness of Priv. For 75.3% of the vulnerability warnings, the preferred fix locations found by Priv are identical to the ones annotated by security experts. The visualization based on shared preferred fix locations is useful for prioritizing quality-assurance efforts. Priv reduces the rate of SAST-specific false positives significantly. Finally, Priv is able to provide complete and correct fix suggestions for 75.6% of the evaluated warnings. Priv is well received by security experts and some features are already integrated into industrial practice.
Palacio, David N., McCrystal, Daniel, Moran, Kevin, Bernal-Cárdenas, Carlos, Poshyvanyk, Denys, Shenefiel, Chris.  2019.  Learning to Identify Security-Related Issues Using Convolutional Neural Networks. 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME). :140–144.
Software security is becoming a high priority for both large companies and start-ups alike due to the increasing potential for harm that vulnerabilities and breaches carry with them. However, attaining robust security assurance while delivering features requires a precarious balancing act in the context of agile development practices. One path forward to help aid development teams in securing their software products is through the design and development of security-focused automation. Ergo, we present a novel approach, called SecureReqNet, for automatically identifying whether issues in software issue tracking systems describe security-related content. Our approach consists of a two-phase neural net architecture that operates purely on the natural language descriptions of issues. The first phase of our approach learns high dimensional word embeddings from hundreds of thousands of vulnerability descriptions listed in the CVE database and issue descriptions extracted from open source projects. The second phase then utilizes the semantic ontology represented by these embeddings to train a convolutional neural network capable of predicting whether a given issue is security-related. We evaluated SecureReqNet by applying it to identify security-related issues from a dataset of thousands of issues mined from popular projects on GitLab and GitHub. In addition, we also applied our approach to identify security-related requirements from a commercial software project developed by a major telecommunication company. Our preliminary results are encouraging, with SecureReqNet achieving an accuracy of 96% on open source issues and 71.6% on industrial requirements.
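
For readers unfamiliar with the second phase, the sketch below shows the general shape of a one-dimensional convolutional classifier over word embeddings in PyTorch (an assumed library choice); it is not SecureReqNet's actual architecture, hyperparameters, or trained embeddings.

```python
# Sketch: a small 1-D convolutional classifier over word embeddings, in the
# spirit of SecureReqNet but not the authors' architecture or weights.
import torch
import torch.nn as nn

class IssueCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, n_filters=64, kernel=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # could be seeded from CVE-trained embeddings
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=kernel)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(n_filters, 2)                  # security-related vs. not

    def forward(self, token_ids):                          # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)          # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)                       # (batch, n_filters)
        return self.fc(x)                                  # logits

# Hypothetical usage: a batch of 8 issue descriptions, padded/truncated to 200 tokens.
logits = IssueCNN()(torch.randint(0, 20000, (8, 200)))
print(logits.shape)  # torch.Size([8, 2])
```
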
Ashfaq, Qirat, Khan, Rimsha, Farooq, Sehrish.  2019.  A Comparative Analysis of Static Code Analysis Tools That Check Java Code Adherence to Java Coding Standards. 2019 2nd International Conference on Communication, Computing and Digital Systems (C-CODE). :98–103.

The Java programming language is considered highly important due to its extensive use in the development of web, desktop, and handheld device applications. Implementing Java Coding Standards in Java code is of great importance as it creates consistency and, as a result, better development and maintenance. Finding bugs and standard violations is more important at an early stage of software development than at a later stage, when change becomes impossible or too expensive. In this paper, some tools and research work done on Coding Standard Analyzers are reviewed. These tools are categorized based on the type of rules they check, namely: style, concurrency, exceptions, quality, security, dependency, and general methods of static code analysis. Finally, the list of Java Coding Standards Enforcing Tools is analyzed against certain predefined parameters that are limited by the scope of the research paper under study. This review will provide the basis for selecting a static code analysis tool that enforces international Java Coding Standards such as the Rule of Ten and the JPL Coding Standards. Such tools have great importance, especially in the development of mission/safety critical systems. This work can be very useful for developers in selecting a good tool for Java code analysis according to their requirements.

2020-01-20
Bardia, Vivek, Kumar, C.R.S..  2017.  Process trees & service chains can serve us to mitigate zero day attacks better. 2017 International Conference on Data Management, Analytics and Innovation (ICDMAI). :280–284.
With technology at our fingertips waiting to be exploited, the past decade saw revolutionizing Human Computer Interactions. The ease with which a user could interact was the Unique Selling Proposition (USP) of a sales team. Human Computer Interactions have many underlying parameters, such as Data Visualization and Presentation, to deal with. With the race on for better and faster presentations, many frameworks evolved to be widely used by software developers. As the need grew for user-friendly applications, more and more software professionals were lured into the front-end sophistication domain. Application frameworks have evolved to such an extent that with just a few clicks and by feeding values as per requirements we are able to produce a commercially usable application in a few minutes. These frameworks generate quantum lines of code in minutes, which leaves a contrail of bugs to be discovered in the future. We have also succumbed to the benchmarking in Software Quality Metrics and have made ourselves comfortable with buggy software to be rectified in the future. The exponential evolution in the cyber domain has attracted attackers equally. Average human awareness and knowledge has also improved in the cyber domain due to prolonged exposure to technology over three decades. As attack sophistication grows and zero day attacks become more popular than ever, the suffering end users only receive remedial measures in spite of the latest Antivirus, Intrusion Detection and Protection Systems installed. We designed software to display the complete set of services and applications running in a user's Operating System in the most easily perceivable manner, aided by Computer Graphics and Data Visualization techniques. We further designed a study by empowering fence-sitter users with tools to actively participate in protecting themselves from threats. The designed threats had impressions from the complete threat canvas in some form or other, restricted to system functioning. Network threats and any sort of packet transfer to and from the system in the form of a threat were kept out of the scope of this experiment. We discovered that end users had a good idea of their working environment, which can be used to exponentially enhance machine learning for zero day threats and to segment the unmarked, vast threat landscape faster for more reliable output.
2019-11-12
Zhang, Xian, Ben, Kerong, Zeng, Jie.  2018.  Cross-Entropy: A New Metric for Software Defect Prediction. 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS). :111–122.

Defect prediction is an active topic in software quality assurance, which can help developers find potential bugs and make better use of resources. To improve prediction performance, this paper introduces cross-entropy, a common measure for natural language, as a new code metric for defect prediction tasks and proposes a framework called DefectLearner for this process. We first build a recurrent neural network language model to learn regularities in source code from a software repository. Based on the trained model, the cross-entropy of each component can be calculated. To evaluate its discrimination for defect-proneness, cross-entropy is compared with 20 widely used metrics on 12 open-source projects. The experimental results show that the cross-entropy metric is more discriminative than 50% of the traditional metrics. In addition, we combine cross-entropy with traditional metric suites for more accurate defect prediction. With cross-entropy added, the performance of the prediction models is improved by an average of 2.8% in F1-score.
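
To make the cross-entropy metric concrete, the sketch below computes the average per-token cross-entropy of a code component under a language model; an add-one-smoothed bigram model stands in for the paper's recurrent neural network, and the token streams are hypothetical.

```python
# Sketch: per-token cross-entropy of a code component under a language model.
# The paper trains an RNN LM; here an add-one-smoothed bigram model stands in,
# purely to make the metric concrete. Token streams below are hypothetical.
import math
from collections import Counter

def train_bigram(corpus_tokens):
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab = len(set(corpus_tokens))
    def prob(prev, tok):  # add-one smoothing keeps probabilities strictly positive
        return (bigrams[(prev, tok)] + 1) / (unigrams[prev] + vocab)
    return prob

def cross_entropy(component_tokens, prob):
    """Average negative log2 probability per token: lower = more 'natural' code."""
    pairs = list(zip(component_tokens, component_tokens[1:]))
    return -sum(math.log2(prob(p, t)) for p, t in pairs) / len(pairs)

repo = "if ( x > 0 ) { return x ; } else { return 0 ; }".split()
component = "if ( y > 0 ) { return y ; }".split()
prob = train_bigram(repo)
print(round(cross_entropy(component, prob), 3))  # the cross-entropy code metric
```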

2019-09-26
Miletić, M., Vukušić, M., Mauša, G., Grbac, T. G..  2018.  Cross-Release Code Churn Impact on Effort-Aware Software Defect Prediction. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1460–1466.

Code churn has been successfully used to identify defect-inducing changes in software development. Our recent analysis of cross-release code churn showed that several design metrics exhibit moderate correlation with the number of defects in complex systems. The goal of this paper is to explore whether cross-release code churn can be used to identify critical design changes and contribute to the prediction of defects for software in evolution. In our case study, we used two types of data from consecutive releases of open-source projects, with and without cross-release code churn, to build standard prediction models. The prediction models were trained on earlier releases and tested on the following ones, evaluating the performance in terms of AUC, GM, and the effort-aware measure Pop. The comparison of their performance was used to answer our research question. The obtained results showed that the prediction model performs better when cross-release code churn is included. A practical implication of this research is to use cross-release code churn to aid in the safe planning of the next release in software development.
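
A minimal sketch of the evaluation shape described above (train on release N, test on release N+1, compare models with and without a churn feature) is given below; the data is synthetic and only AUC is computed, not GM or Pop.

```python
# Sketch: cross-release defect prediction with and without code-churn features.
# Synthetic data stands in for real release metrics; only the evaluation shape
# (train on release N, test on release N+1, compare AUC) mirrors the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
design_metrics = rng.normal(size=(2 * n, 4))            # e.g. size, coupling, cohesion, complexity
churn = rng.poisson(5, size=(2 * n, 1)).astype(float)   # cross-release code churn
defective = (0.4 * design_metrics[:, 0] + 0.3 * churn[:, 0]
             + rng.normal(size=2 * n) > 1.5).astype(int)

train, test = slice(0, n), slice(n, 2 * n)               # release N -> release N+1
for name, X in [("without churn", design_metrics),
                ("with churn", np.hstack([design_metrics, churn]))]:
    model = LogisticRegression(max_iter=1000).fit(X[train], defective[train])
    auc = roc_auc_score(defective[test], model.predict_proba(X[test])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```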

2019-08-26
Izurieta, C., Kimball, K., Rice, D., Valentien, T..  2018.  A Position Study to Investigate Technical Debt Associated with Security Weaknesses. 2018 IEEE/ACM International Conference on Technical Debt (TechDebt). :138–142.
Context: Managing technical debt (TD) associated with potential security breaches found during design can lead to catching vulnerabilities (i.e., exploitable weaknesses) earlier in the software lifecycle; thus, anticipating TD principal and interest that can have decidedly negative impacts on businesses. Goal: To establish an approach to help assess TD associated with security weaknesses by leveraging the Common Weakness Enumeration (CWE) and its scoring mechanism, the Common Weakness Scoring System (CWSS). Method: We present a position study with a five-step approach employing the Quamoco quality model to operationalize the scoring of architectural CWEs. Results: We use static analysis to detect design level CWEs, calculate their CWSS scores, and provide a relative ranking of weaknesses that help practitioners identify the highest risks in an organization with a potential to impact TD. Conclusion: CWSS is a community agreed upon method that should be leveraged to help inform the ranking of security related TD items.
2019-07-01
Clemente, C. J., Jaafar, F., Malik, Y..  2018.  Is Predicting Software Security Bugs Using Deep Learning Better Than the Traditional Machine Learning Algorithms? 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS). :95–102.

Software insecurity is being identified as one of the leading causes of security breaches. In this paper, we revisited one of the strategies for addressing software insecurity, which is the use of software quality metrics. We utilized a multilayer deep feedforward network to examine whether there is a combination of metrics that can predict the appearance of security-related bugs. We also applied traditional machine learning algorithms such as decision tree, random forest, naïve Bayes, and support vector machines and compared the results with those of the Deep Learning technique. The results demonstrate that it is possible to develop an effective predictive model to forecast software insecurity based on software metrics and Deep Learning. All the models generated showed an accuracy of more than sixty percent, with Deep Learning leading the list. This finding shows that Deep Learning methods combined with software metrics can be tapped to create a better forecasting model, thereby aiding software developers in predicting security bugs.
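
The comparison described above can be sketched with scikit-learn as below, using an MLP as a shallow stand-in for the paper's deep feedforward network and synthetic metric data; the paper's dataset, network depth, and tuning are not reproduced.

```python
# Sketch: comparing a feedforward network with traditional learners on a table
# of software metrics. Synthetic data only; not the paper's dataset or setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))                                   # 10 software metrics per module
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=1000) > 0.7).astype(int)  # security-bug label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "deep feedforward (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=2000, random_state=1),
    "decision tree": DecisionTreeClassifier(random_state=1),
    "random forest": RandomForestClassifier(random_state=1),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
}
for name, model in models.items():
    print(f"{name}: accuracy = {model.fit(X_tr, y_tr).score(X_te, y_te):.3f}")
```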

Medeiros, N., Ivaki, N., Costa, P., Vieira, M..  2018.  An Approach for Trustworthiness Benchmarking Using Software Metrics. 2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC). :84–93.

Trustworthiness is a paramount concern for users and customers in the selection of a software solution, especially in the context of complex and dynamic environments, such as Cloud and IoT. However, assessing and benchmarking trustworthiness (worthiness of software for being trusted) is a challenging task, mainly due to the variety of application scenarios (e.g., business-critical, safety-critical), the large number of determinative quality attributes (e.g., security, performance), and, last but foremost, due to the subjective notion of trust and trustworthiness. In this paper, we present trustworthiness as a measurable notion in relative terms based on security attributes and propose an approach for the assessment and benchmarking of software. The main goal is to build a trustworthiness assessment model based on software metrics (e.g., Cyclomatic Complexity, CountLine, CBO) that can be used as indicators of software security. To demonstrate the proposed approach, we assessed and ranked several files and functions of the Mozilla Firefox project based on their trustworthiness score and conducted a survey among several software security experts in order to validate the obtained ranking. Results show that our approach is able to provide a sound ranking of the benchmarked software.
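
As a loose illustration of metric-based trustworthiness scoring, the sketch below normalizes three of the cited metrics per function and combines them with assumed weights into a relative score used for ranking; the weights, metric values, and scoring form are hypothetical, not the paper's calibrated model.

```python
# Sketch: a relative trustworthiness score built from code metrics and used to
# rank functions. Metric values and weights are hypothetical; the paper derives
# its own model and validates the ranking with security experts.
functions = {
    "parse_header": {"cyclomatic": 24, "count_line": 310, "cbo": 12},
    "render_page":  {"cyclomatic": 9,  "count_line": 120, "cbo": 5},
    "free_buffer":  {"cyclomatic": 3,  "count_line": 40,  "cbo": 2},
}
weights = {"cyclomatic": 0.5, "count_line": 0.3, "cbo": 0.2}  # assumed: higher metric -> less trustworthy

def normalize(metric):
    values = [f[metric] for f in functions.values()]
    lo, hi = min(values), max(values)
    return {name: (f[metric] - lo) / (hi - lo) if hi > lo else 0.0 for name, f in functions.items()}

norm = {m: normalize(m) for m in weights}
score = {name: 1.0 - sum(weights[m] * norm[m][name] for m in weights) for name in functions}

for name, s in sorted(score.items(), key=lambda kv: kv[1]):
    print(f"{name}: trustworthiness = {s:.2f}")  # lowest first = least trustworthy in the benchmark
```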

Carrasco, A., Ropero, J., Clavijo, P. Ruiz de, Benjumea, J., Luque, A..  2018.  A Proposal for a New Way of Classifying Network Security Metrics: Study of the Information Collected through a Honeypot. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :633–634.

Nowadays, honeypots are a key tool for attracting attackers and studying their activity. They help us in the tasks of evaluating attackers' behaviour, discovering new types of attacks, and collecting information and statistics associated with them. However, the gathered data cannot be directly interpreted; it must be analyzed to obtain useful information. In this paper, we present an SSH honeypot-based system designed to simulate a vulnerable server. Thus, we propose an approach for the classification of metrics from the data collected by the honeypot over 19 months.

2019-02-22
Novikov, A. S., Ivutin, A. N., Troshina, A. G., Vasiliev, S. N..  2018.  Detecting the Use of Unsafe Data in Software of Embedded Systems by Means of Static Analysis Methodology. 2018 7th Mediterranean Conference on Embedded Computing (MECO). :1–4.

The article considers an approach to identifying potentially unsafe data in the program code of embedded systems, which can lead to errors and failures in the functioning of equipment. The sources of invalid data are revealed, and the process of changing the status of this data during static code analysis is shown. A mechanism for annotating functions that operate on unsafe data is described, which makes it possible to control the entire process of their use and thus improve the quality of the output code.

2018-09-05
Teusner, R., Matthies, C., Giese, P..  2017.  Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :418–425.
In any sufficiently complex software system there are experts, having a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can provide help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only few experts exist and take preventive actions to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey, comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
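
One way the idea above could be operationalized is sketched below: per-component complexity contributions are attributed to developers and decayed over time before recommending an expert; the commit records, decay constant, and aggregation are assumptions, not the framework used in the study.

```python
# Sketch: recommending a domain expert per component from complexity changes
# over time. Commit records and the decay constant are hypothetical; the paper
# validates its own aggregation against a developer survey.
import math
from collections import defaultdict

commits = [  # (component, author, complexity_added, age_in_days)
    ("billing", "alice", 12, 30), ("billing", "bob", 4, 5),
    ("billing", "alice", 3, 400), ("auth", "carol", 20, 60),
]
HALF_LIFE_DAYS = 180  # older contributions count for less

expertise = defaultdict(lambda: defaultdict(float))
for component, author, delta_complexity, age in commits:
    decay = math.exp(-math.log(2) * age / HALF_LIFE_DAYS)
    expertise[component][author] += delta_complexity * decay

for component, scores in expertise.items():
    expert = max(scores, key=scores.get)
    print(f"{component}: ask {expert} (score {scores[expert]:.1f})")
```
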
2018-06-07
Novikov, A. S., Ivutin, A. N., Troshina, A. G., Vasiliev, S. N..  2017.  The approach to finding errors in program code based on static analysis methodology. 2017 6th Mediterranean Conference on Embedded Computing (MECO). :1–4.

The article considers an approach to the static analysis of program code and the general principles of static analyzer operation. The authors identify the most important syntactic and semantic information in programs that can be used to find errors in the source code. A general methodology for the development of diagnostic rules is proposed, which will improve the efficiency of static code analyzers.

Kang, E. Y., Mu, D., Huang, L., Lan, Q..  2017.  Verification and Validation of a Cyber-Physical System in the Automotive Domain. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :326–333.
Software development for Cyber-Physical Systems (CPS), e.g., autonomous vehicles, requires both functional and non-functional quality assurance to guarantee that the CPS operates safely and effectively. EAST-ADL is a domain-specific architectural language dedicated to safety-critical automotive embedded system design. We have previously modified EAST-ADL to include energy constraints and transformed energy-aware real-time (ERT) behaviors modeled in EAST-ADL/Stateflow into UPPAAL models amenable to formal verification. Previous work is extended in this paper by including support for Simulink and an integration of Simulink/Stateflow (S/S) within the same toolchain. S/S models are transformed, based on the extended ERT constraints with probability parameters, into verifiable UPPAAL-SMC models, and the translation is integrated with formal statistical analysis techniques: a probabilistic extension of EAST-ADL constraints is defined as a semantic denotation, and a set of mapping rules is proposed to facilitate the guarantee of the translation. Formal analysis of both functional and non-functional properties is performed using Simulink Design Verifier and UPPAAL-SMC. Our approach is demonstrated on the autonomous traffic sign recognition vehicle case study.
2018-05-01
Kaur, A., Jain, S., Goel, S..  2017.  A Support Vector Machine Based Approach for Code Smell Detection. 2017 International Conference on Machine Learning and Data Science (MLDS). :9–14.

Code smells may be introduced in software due to market rivalry, work pressure and deadlines, improper functioning, or the limited skills or inexperience of software developers. Code smells indicate problems in design or code that make software hard to change and maintain. Detecting code smells could reduce the effort of developers, the resources, and the cost of the software. Many researchers have proposed different techniques, such as DETEX, for detecting code smells, but these have limited precision and recall. To overcome these limitations, a new technique named SVMCSD has been proposed for the detection of code smells, based on the support vector machine learning technique. Four code smells are specified, namely God Class, Feature Envy, Data Class, and Long Method, and the proposed technique is validated on two open source systems, namely ArgoUML and Xerces. The accuracy of SVMCSD is found to be better than that of DETEX in terms of two metrics, precision and recall, when applied on a subset of a system. When considering the entire system, SVMCSD detects more occurrences of code smells than DETEX.
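
A hedged sketch of SVM-based smell detection for a single smell (God Class) is shown below, trained on synthetic class-level metrics; SVMCSD's actual features, training data (ArgoUML, Xerces), and tuning are not reproduced.

```python
# Sketch: an SVM classifier for one smell (God Class) from class-level metrics.
# The metric table and labels are synthetic stand-ins for the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(2)
# Features per class: weighted methods per class, lines of code, coupling, lack of cohesion.
X = np.column_stack([rng.poisson(15, 600), rng.poisson(300, 600),
                     rng.poisson(8, 600), rng.random(600)])
y = ((X[:, 0] > 20) | (X[:, 1] > 330)).astype(int)  # crude synthetic "God Class" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("precision:", round(precision_score(y_te, pred, zero_division=0), 2))
print("recall:", round(recall_score(y_te, pred, zero_division=0), 2))
```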

2018-03-05
Osaiweran, A., Marincic, J., Groote, J. F..  2017.  Assessing the Quality of Tabular State Machines through Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :426–433.

Software metrics are widely used to measure the quality of software and to give an early indication of the efficiency of the development process in industry. There are many well-established frameworks for measuring the quality of source code through metrics, but limited attention has been paid to the quality of software models. In this article, we evaluate the quality of state machine models specified using the Analytical Software Design (ASD) tooling. We discuss how we applied a number of metrics to ASD models in an industrial setting and report about results and lessons learned while collecting these metrics. Furthermore, we recommend some quality limits for each metric and validate them on models developed in a number of industrial projects.

Medeiros, N., Ivaki, N., Costa, P., Vieira, M..  2017.  Software Metrics as Indicators of Security Vulnerabilities. 2017 IEEE 28th International Symposium on Software Reliability Engineering (ISSRE). :216–227.

Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from the non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation with security vulnerabilities. We aim at understanding: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which are the most informative and discriminative metrics that allow identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable and non-vulnerable code with a high level of accuracy.
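
The first analysis step above (correlating metrics with vulnerability counts) can be sketched as below with Spearman rank correlation on synthetic project-level rows; the real study covers five projects at function, class, and file granularity.

```python
# Sketch: correlating software metrics with reported vulnerability counts.
# The rows below are synthetic stand-ins for the paper's dataset; only the
# analysis step (rank correlation per metric) is illustrated.
import pandas as pd
from scipy.stats import spearmanr

data = pd.DataFrame({
    "sloc":       [12000, 45000, 9000, 70000, 30000],
    "cyclomatic": [1500, 5200, 900, 8100, 3300],
    "cbo":        [300, 1100, 250, 1600, 800],
    "vulns":      [4, 18, 2, 25, 9],
})

for metric in ["sloc", "cyclomatic", "cbo"]:
    rho, p = spearmanr(data[metric], data["vulns"])
    print(f"{metric}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```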

2018-02-15
Hibshi, H., Breaux, T. D..  2017.  Reinforcing Security Requirements with Multifactor Quality Measurement. 2017 IEEE 25th International Requirements Engineering Conference (RE). :144–153.

Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or in missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that, using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how our proposed method helps to improve security requirements elicitation, analysis, and measurement.

2018-02-06
Resch, S., Paulitsch, M..  2017.  Using TLA+ in the Development of a Safety-Critical Fault-Tolerant Middleware. 2017 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :146–152.

Creating and implementing fault-tolerant distributed algorithms is a challenging task in highly safety-critical industries. Using formal methods supports the design and development of complex algorithms. However, formal methods are often perceived as an unjustifiable overhead. This paper presents the experience and insights gained when using TLA+ and PlusCal to model and develop fault-tolerant and safety-critical modules for TAS Control Platform, a platform for railway control applications up to safety integrity level (SIL) 4. We show how formal methods helped us improve the correctness of the algorithms and the efficiency of development, and how part of the gap between model and implementation has been closed by translation to C code. Additionally, we describe how we gained trust in the formal model and tools by following a specific design process called property-driven design, which also implicitly addresses software quality metrics such as code coverage metrics.

2018-02-02
Kokaly, S..  2017.  Managing Assurance Cases in Model Based Software Systems. 2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C). :453–456.

Software has emerged as a significant part of many domains, including financial service platforms, social networks and vehicle control. Standards organizations have responded to this by creating regulations to address issues such as safety and privacy. In this context, compliance of software with standards has emerged as a key issue. For software development organizations, compliance is a complex and costly goal to achieve and is often accomplished by producing so-called assurance cases, which demonstrate that the system indeed satisfies the property imposed by a standard (e.g., safety, privacy, security). As systems and standards undergo evolution for a variety of reasons, maintaining assurance cases multiplies the effort. In this work, we propose to exploit the connection between the field of model management and the problem of compliance management and propose methods that use model management techniques to address compliance scenarios such as assurance case evolution and reuse. For validation, we ground our approaches on the automotive domain and the ISO 26262 standard for functional safety of road vehicles.

Santos, J. C. S., Tarrit, K., Mirakhorli, M..  2017.  A Catalog of Security Architecture Weaknesses. 2017 IEEE International Conference on Software Architecture Workshops (ICSAW). :220–223.

Secure by design is an approach to developing secure software systems from the ground up. In such an approach, alternative security tactics are first considered; among them, the best are selected and enforced by the architecture design, and then used as guiding principles for developers. Thus, design flaws in the architecture of a software system mean that successful attacks could result in enormous consequences. Therefore, secure by design shifts the main focus of software assurance from finding security bugs to identifying architectural flaws in the design. Current research in software security has been neglecting vulnerabilities that are caused by flaws in software architecture design and/or by deterioration of the implementation of the architectural decisions. In this paper, we present the concept of Common Architectural Weakness Enumeration (CAWE), a catalog which enumerates common types of vulnerabilities rooted in the architecture of a software system and provides mitigation techniques to address them. The CAWE catalog organizes the architectural flaws according to known security tactics. We developed an interactive web-based solution which helps designers and developers explore this catalog based on the architectural choices made in their project. The CAWE catalog contains 224 weaknesses related to security architecture. Through this catalog, we aim to promote awareness of security architectural flaws and stimulate the security design thinking of developers, software engineers, and architects.