Predicting Release Reliability

Title: Predicting Release Reliability
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Rotella, P., Chulani, S.
Conference Name: 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C)
Date Published: July
Keywords: bug fixing, code coverage, Computer bugs, conceptual simplification, Correlation, customer experience, customer services, demanding quantitative evidence, field bug, functional extension, internal customers, internal stakeholders, Mathematical model, Measurement, Metrics, Modeling, pre-release metrics, predicting release reliability, prediction, Predictive Metrics, pubcrawl, release comparisons, reliability performance, software metrics, software release reliability, software reliability, software reliability assessment mechanism, Testing

Customers need to know how reliable a new release is, and whether or not the new release has substantially different reliability, either better or worse, than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy; we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines the 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and the 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us both to predict reliability and to compare reliability results for recent releases of a product.

Citation Key: rotella_predicting_2017