How I Raised Product Quality in an IT Startup

In this article I describe a dashboard for the functional component of product quality: the set of quality indicators displayed on it, the methods of measuring them based on the recommendations of the ISO/IEC 25023 standard, and the resulting positive changes in the product's operation.

TL;DR, or Executive Summary

The image shows a compilation of quality dashboards with recorded measurement results for 4 product versions – from v0.0.3 to v0.0.6:

Improving quality in dynamics


The rest of the article is for those interested in the details and in where the quality comes from. It uses the terms “characteristic”, “sub-characteristic”, and “indicator” of quality, all of which are described in the ISO/IEC 25023 standard and the related 250n (SQuaRE) series of standards:

SQuaRE Series of Standards


Dashboard composition

Two plates with quality indicators – one for the sub-characteristic “functional completeness” (Completeness) and one for the sub-characteristic “functional correctness” (Correctness). Both relate to the quality characteristic “Functional suitability” (which belongs to the product quality model of ISO/IEC 25010) and occupy the two lower plates:

Plates with quality indicator measurements


Completeness calculation: (Total – Blocked) / Total
Correctness calculation: ((Total – Blocked) – Failed) / (Total – Blocked)

With this calculation it is easy to misjudge the Correctness indicator at first glance: the figures show it has decreased, but it counts only the implemented requirements. By raising completeness to 100%, we get a correctness that has not actually worsened – 84% vs. 83%, now over a larger number of requirements:
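As a minimal sketch, the two indicators can be computed in Python. The example figures assume v0.0.6 reached 100% completeness (Blocked = 0) with the Total and Failed counts shown on the dashboard:

```python
def completeness(total: int, blocked: int) -> float:
    """Share of requirements that have been implemented."""
    return (total - blocked) / total

def correctness(total: int, blocked: int, failed: int) -> float:
    """Share of implemented requirements whose testing found no bugs."""
    implemented = total - blocked
    return (implemented - failed) / implemented

# Example with the dashboard's v0.0.6 figures:
# 212 requirements in total, all implemented (Blocked = 0), 33 with bugs.
print(f"Completeness: {completeness(212, 0):.0%}")    # 100%
print(f"Correctness:  {correctness(212, 0, 33):.0%}")  # 84%
```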

Correctness change: v0.0.5 -> v0.0.6

Five donut charts, where the color indicates the priority of the functional requirements (Low, Medium, High, Critical) and the numbers on the chart show their count. Under each chart is a row of five plates, each with two elements – “status” and “quantity”:

  • Status – Total (all requirements, regardless of test result); Failed (requirements whose testing revealed bugs); Blocked (unimplemented requirements); Not Run (requirements that cannot be tested at a specific moment due to testing limitations); Passed (requirements whose testing revealed no errors)

  • Quantity – the number of requirements of all priorities with each status

Requirements Plaques


For better understanding, let's read two charts from one dashboard:

Total 212 – 212 functional requirements in total: 110 Critical / 53 High / 24 Medium / 25 Low

Failed 33 – bugs were found for 33 requirements in total: 14 Critical / 13 High / 6 Medium

It is important to note that these bugs have a blocking/critical priority level (i.e. they prevent the functionality from being used) or, in a minority of cases, high – such bugs have a workaround but negatively affect the impression of using the function.
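For illustration, the per-status and per-priority tallies behind the plates and donut charts can be reproduced with a simple aggregation. The requirement records below are made up, not taken from the actual dashboard:

```python
from collections import Counter

# Hypothetical requirement records: (priority, test status).
requirements = [
    ("Critical", "Passed"), ("Critical", "Failed"), ("High", "Failed"),
    ("Medium", "Not Run"), ("Low", "Passed"), ("Critical", "Blocked"),
]

# Count of requirements per status (the "quantity" element of each plate);
# Total covers every requirement regardless of its test result.
by_status = Counter(status for _, status in requirements)
by_status["Total"] = len(requirements)

# Breakdown by priority for one status, as on the donut charts.
failed_by_priority = Counter(p for p, s in requirements if s == "Failed")

print(by_status)           # per-status counts, Total included
print(failed_by_priority)  # Counter({'Critical': 1, 'High': 1})
```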

Measurement Analysis

There are two ways to read the results. Looking at any single result – for example the first one, product version v0.0.3 – it is worth paying attention to the “completeness” indicator: any value below 100% pushes the team either toward more active development or toward agreeing to reduce the scope of work. Alongside completeness, the subjective perception of the amount of red (Critical-priority requirements) in the Failed and Blocked statuses also matters. I emphasize subjectivity deliberately, to draw a parallel: the quality perceived by each user (the quality-in-use model, ISO/IEC 25010) is also subjective. That does not mean we should not measure quality: having conducted a comparative analysis of the indicators across several versions, we can operate with facts. Moreover, measurements let us make informed decisions (data-driven decision making, DDDM) – in this case, the decision to improve the work of critical functionality.

The team should be happy


Requirements Traceability Matrix?

Experienced test managers and engineers may notice similarities with the multifaceted RTM – Requirements Traceability Matrix.

RTM options


Indeed, there is some similarity with the dashboard, but the fundamental differences are important:

  • The goal of an RTM is to establish links between documentation elements and software elements, while the purpose of the dashboard is to give (a) a concise overview (b) of the state of quality

  • Convenience of analysis – just try putting two RTMs side by side and identifying any trends there.

What did I work with?

  • Product – for video surveillance and video analytics using AI

  • Google Data Studio (now Looker Studio) – for dashboard layout and display

  • ALM Inflectra Spira – a system for managing requirements, development and testing in one product

  • GitLab – a scheduled pipeline that collects requirements and testing data and uploads it to Google Data Studio
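The data-collection step of such a pipeline can be sketched as a small script. This is only an illustration under stated assumptions: the CSV export format, its column names, and the output layout are hypothetical, not the actual Spira or Looker Studio integration:

```python
import csv

def build_dashboard_feed(export_path: str, out_path: str) -> dict:
    """Aggregate a requirements/tests export into the two quality metrics
    and write them as a flat CSV that a BI tool can ingest."""
    # Assumed input: one row per requirement with columns id, priority, status.
    counts: dict[str, int] = {"Total": 0}
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            counts["Total"] += 1
            counts[row["status"]] = counts.get(row["status"], 0) + 1

    total = counts["Total"]
    blocked = counts.get("Blocked", 0)
    failed = counts.get("Failed", 0)
    implemented = total - blocked
    metrics = {
        "completeness": implemented / total,
        "correctness": (implemented - failed) / implemented,
    }

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["metric", "value"])
        for name, value in metrics.items():
            writer.writerow([name, round(value, 4)])
    return metrics
```

In the real setup a scheduled GitLab pipeline would run a script like this and push the resulting file to the dashboard's data source.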

Instead of a conclusion

Being the only team member with a QA role, I also faced the added challenge of convincing the CEO of the need to prioritize requirements and implement an ALM system, but I enjoyed the experience and its outcome. I would add that a dashboard alone improves nothing: it is not the fact of its presence that matters, but the opportunities it opens up – DDDM and transparency for management.

If you are interested in sharing your experience on the topic of IT product quality, QACoE (QA Center of Excellence), TCoE (Testing CoE) – write to my linkedin.
