Reinforcing Security Requirements with Multifactor Quality Measurement

Title: Reinforcing Security Requirements with Multifactor Quality Measurement
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Hibshi, H., Breaux, T. D.
Conference Name: 2017 IEEE 25th International Requirements Engineering Conference (RE)
Date Published: September 2017
Keywords: Analytical models, authentication, Context, Databases, elicited expert preferences, formal specification, Human Behavior, human factor, human factors, Metrics, minimal analyst expertise, MQM, multifactor authentication, Multifactor Quality measurement, Multifactor Quality Method, natural language scenarios, Natural languages, Operating systems, pubcrawl, qualitative analysis, quantitative statistical analysis, requirements analysts, requirements elicitation, requirements engineering, resilience, Resiliency, scenarios, security of data, security quality ratings, security requirements, security requirements elicitation, security requirements reinforcement, software quality, Stakeholders, statistical analysis, user study, vignettes, weak security constraints
Abstract: Choosing how to write natural language scenarios is challenging, because stakeholders may over-generalize their descriptions, or overlook or be unaware of alternate scenarios. In security, for example, this can result in weak security constraints that are too general, or in missing constraints. Another challenge is that analysts are unclear on where to stop generating new scenarios. In this paper, we introduce the Multifactor Quality Method (MQM) to help requirements analysts empirically collect system constraints in scenarios based on elicited expert preferences. The method combines quantitative statistical analysis to measure system quality with qualitative coding to extract new requirements. The method is bootstrapped with minimal analyst expertise in the domain affected by the quality area, and then guides an analyst toward selecting expert-recommended requirements to monotonically increase system quality. We report the results of applying the method to security. These include 550 requirements elicited from 69 security experts during a bootstrapping stage, and a subsequent evaluation of these results in a verification stage with 45 security experts to measure the overall improvement of the new requirements. Security experts in our studies have an average of 10 years of experience. Our results show that, using our method, we detect an increase in the security quality ratings collected in the verification stage. Finally, we discuss how our proposed method helps to improve security requirements elicitation, analysis, and measurement.
DOI: 10.1109/RE.2017.77
Citation Key: hibshi_reinforcing_2017