Ensuring Quality Across the Development Pipeline, or How to Create Products That Meet Expectations

Teamwork in software development is not only a combination of competencies and expertise, but also shared responsibility for the final product. Unfortunately, this is often forgotten: quality assurance and compliance with requirements are left solely to QA engineers. This practice is flawed and often ends with the release of “raw” products full of shortcomings and vulnerabilities.

Alexey Petrov, Quality Director at Odnoklassniki, is back with you. In this article, I will tell you how a team can take care of the quality of the future product at every stage of software development, as well as fight bugs before they appear.

This article is based on my talk at CodeFest; you can watch the recording here.

A short prologue

Academic formulations state that quality is the degree to which the set of inherent characteristics of an object corresponds to requirements. To put it simply: the better the quality of the product, the more it meets the needs of the end user.

In today's market conditions, quality is one of the key competitive advantages of a product and a company. This is an obvious relationship, so many software development teams are concerned about quality assurance and use different practices for this, such as Shift-left/right testing and others.

However, many teams still misinterpret quality assurance tasks. They either:

  • leave them exclusively to the QA engineer (quality assurance specialist);

  • allow each specialist to set their own independent priorities, which may conflict with the goals of other participants.

In both cases, it is almost impossible to ensure high quality of the product and prevent the occurrence of problems or bugs.

Therefore, before implementing any practices aimed at improving the product being developed, it is important to make quality a common goal, and its achievement a common task. Moreover, if best practice is followed, each member of the team (product owner, developer, QA engineer) will be able to improve quality “on the spot”, implementing a comprehensive approach.

Best practice for key stages of the software life cycle

Let's highlight the main stages of the Software Development Lifecycle (SDLC): requirements, development, testing, and operation.

At each of these stages, measures can be implemented to ensure quality. Let's take a closer look at what practices can be applied.

Requirements

Development of technical specifications

The technical specification is the initial document without which work on the product as a whole cannot begin. But it is important to understand that a high-quality project rarely grows out of a thin document prepared “in a hurry”: the requirements described in the specification should become the foundation for the quality of the future software.

Therefore, the preparation of the technical specifications must be approached thoroughly. Ideally, the desired result and functionality should be described in as much detail as possible – this is important to eliminate discrepancies between specialists during development and testing.

Of course, developing a detailed, high-quality specification takes considerable time and resources, but without a clear specification the result is unpredictable. It is better to get the specification right from the start than to redo a finished implementation from scratch later.

Updating the technical specifications throughout the SDLC

Writing a technical specification is not enough. During development, the initial requirements may change. For example, if it turns out that implementing a particular feature is unreasonably difficult or expensive. Or if new requirements appear.

That is why, in order to avoid situations in which the product submitted for testing does not meet the initial requirements, it is important to regularly check and update the technical specifications.

It is important that the team appoint one specialist responsible for updating – this is necessary so that amendments to the technical specifications are made centrally, only after agreement with the customer of the development, and not at the personal discretion of the performers on the ground.

Three Amigos

Three Amigos is a practice of synchronizing work at different stages of software development. The method involves a meeting between the developer, the tester, and the product manager, during which they discuss their vision of the implementation and try to close gaps in how business needs and customer expectations are understood.

Often, it comes down not only to discussing the task, but also to answering the question: “what happens if…”. The more specific, “inconvenient” questions are worked through, the fewer non-obvious problems will surface in the finished product. The arithmetic is simple: a potential bottleneck discovered while the requirements are being drafted costs just a few extra lines in the requirements file, while the same problem discovered during development inevitably means editing code or rolling back, which hurts not only overall quality but also time-to-market.

Thus, joint development of requirements by team members provides a comprehensive assessment of the upcoming task long before implementation begins.

Initial checklist (QA Checklist)

At the stage of preparing the technical specifications, you can also make a preliminary list of checks that the product will undergo. Ideally, all involved specialists will take part in forming the list of such checks: product owner, developer, QA engineer. Teamwork in this case will allow you to initially highlight all critical areas in the future product.

At the same time, such a checklist, attached to the task, can give the developer and tester an approximate idea of the upcoming testing and acceptance of the functionality.

Development

Unit tests

Unit tests are a software testing method that checks individual modules, functions, or parts of the product code in isolation. In other words, the goal of unit tests is to show that new parts of the software work correctly on their own.

Unit tests are the basis of the testing pyramid. They are lightweight and fast, but at the same time they provide an understanding of the level of code coverage by tests and allow you to get rid of many non-obvious problems and errors at the earliest stages of development (even before reaching the stage of full-fledged testing).
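As an illustration, here is a minimal sketch of a unit test using Python's built-in unittest framework. The function `apply_discount` is a hypothetical unit under test, not part of any product described in the article:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # 25% off 200.0 should yield 150.0
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        # Invalid input must fail loudly, not silently return garbage
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Such a file is run with `python -m unittest <module>`; in a pipeline, the runner's exit code tells the next stage whether the module passed.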

Code review

Code review is a practice where developers look at and evaluate code written by their colleagues. Code review allows not only to find errors earlier, but also to see hidden defects that a suboptimal solution can lead to.

It is important to understand that by working together to study the code, each team member can offer more reasonable solutions that can improve the final product. That is, in addition to finding errors, this practice also helps in finding the best implementation of the project.

Note: To conduct a code review, you can involve not only developers, but also testers (with a sufficient level of expertise), and also use special tools – for example, static code analyzers and even AI algorithms.

Pipeline with limitations

One way to ensure the quality of software being developed early is to build a pipeline with constraints. That is, if the code does not meet specific requirements, it will not be possible to move further along the pipeline. This is necessary to eliminate situations in which quality or compliance checks are intentionally ignored or mistakenly skipped. For example, constraints can be triggered if:

  • the code was not processed by linters;

  • there are no autotests or their results are unsatisfactory;

  • additional scripts were not executed.

In fact, you can choose any “trigger” within your own implementation, taking into account all its features. The main thing is that low-quality code with potential problems does not get into production. For example, in OK, only features with all green tests are allowed to be rolled out into production.
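The idea of such a gate can be sketched in a few lines of Python. The check functions and gate names below are illustrative; in a real pipeline they would wrap the reports of a specific CI system rather than return hard-coded values:

```python
from typing import Callable


def lint_passed() -> bool:
    # In a real pipeline: parse the linter's exit code or report file.
    return True


def autotests_passed() -> bool:
    # In a real pipeline: inspect the test runner's results (e.g. JUnit XML).
    return True


def extra_scripts_passed() -> bool:
    # In a real pipeline: verify that all required auxiliary scripts ran.
    return True


# Every gate must pass before the build may move along the pipeline.
GATES: list[tuple[str, Callable[[], bool]]] = [
    ("linters", lint_passed),
    ("autotests", autotests_passed),
    ("extra scripts", extra_scripts_passed),
]


def can_promote() -> bool:
    """Return True only if every quality gate passes."""
    failed = [name for name, check in GATES if not check()]
    for name in failed:
        print(f"gate failed: {name}")
    return not failed
```

The point is the structure, not the stubs: a single function decides promotion, so a check cannot be skipped accidentally or intentionally.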

QA Notes

As an additional practice to help ensure and maintain high quality of the products being developed, it is possible to make it mandatory to create QA Notes.

QA Notes are notes from developers that are left when a task is handed over for testing. In them, they can highlight risks, additional testing scenarios, and also list the conditions necessary for conducting tests.

It is important to understand that QA Notes are a recommendation, not a strict guide to action. At the same time, such a document is important to proactively notify the tester about a pool of non-obvious tasks and nuances. For example, if the implemented feature affects third-party, already working libraries, then in QA Notes you can highlight that it is necessary to check not only the new, but also the existing functionality. That is, the practice is useful, but for its implementation it is necessary for the developer to understand the software architecture and the logic of the built dependencies.

Testing

Public checklist

Quite often, after checks, testers do not leave any detailed artifacts about the work done. Therefore, it is difficult for developers to understand what exactly was checked and to what extent, how the checks were carried out and whether they were sufficient. In particular, it is not always obvious whether the tester followed the recommendations mentioned, for example, in QA Notes or QA Checklist. This can potentially affect the quality of the product – for example, if it turns out later that the existing checks are insufficient, and no one noticed this before.

To eliminate such risks, it is advisable to create a public checklist: a document in which the tester records what exactly, and how, they will test or have tested. With such a checklist, the entire team will be able to:

  • assess the scope, depth and quality of inspections;

  • make changes to test scripts;

  • shorten or expand the list of checks.

Maintaining such a public checklist actually provides double profit:

  • testing becomes transparent, the team can influence its implementation;

  • the tester receives a ready-made basis for the test report.

Acceptance tests

Acceptance tests are basic checks of the work done, which help to ensure that the software meets the key requirements. In fact, an acceptance test often comes down to checking the product with the customer (for example, a product manager) – to make sure that the developer has correctly understood the task and the implementation meets the initial requirements. One of the key advantages of such tests is their simplicity – the involvement of the developer and the customer is enough for checking.

Team testing

Team testing is an approach where the entire team is involved in testing. This practice is useful for a number of reasons:

  • helps to share testing expertise within the team;

  • removes the bottleneck effect from the QA engineer;

  • allows all “performers” to additionally check their work for bugs and errors;

  • provides an opportunity to gain a deeper understanding of the architecture and dependencies of the solution.

The benefit of this practice is that it can change the approach to development as a whole. For example, a product manager may see during testing that a bug appeared because the technical specification was not detailed enough, or a developer may realize that an error slipped into the product that would have been caught earlier had code review not been skipped. As a result, everyone can identify their weak points and pay more attention to them in the future.

Integration and regression tests

Integration tests are checks that ensure that all interactions within the created software meet the requirements, and that new features work correctly, predictably, and without problems.

Regression tests are checks performed after changes, bug fixes, or updates. They confirm that previously existing functionality still works and that the changes do not degrade software quality, affect existing behavior, or create new problems.
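One simple form of regression check is snapshot-style testing: the current output of a function is compared against previously recorded “golden” results, so any behavioral change after a fix or refactor is flagged immediately. Here `format_username` and the recorded baselines are hypothetical examples:

```python
def format_username(first: str, last: str) -> str:
    """Hypothetical function under regression test."""
    return f"{first.strip().title()} {last.strip().title()}"


# Baseline outputs captured from the version already accepted into production.
GOLDEN = {
    ("alice", "ivanova"): "Alice Ivanova",
    (" bob ", "petrov"): "Bob Petrov",
}


def run_regression() -> list[str]:
    """Return a list of mismatches; an empty list means no regression."""
    failures = []
    for args, expected in GOLDEN.items():
        actual = format_username(*args)
        if actual != expected:
            failures.append(f"{args}: expected {expected!r}, got {actual!r}")
    return failures
```

After any change to `format_username`, a non-empty result from `run_regression()` signals that existing behavior has drifted and needs review.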

Automated tests

Test automation is a common practice that is often seen as a way to speed up testing and reduce the need for testers to perform repetitive, routine checks. In addition, it is also an effective way to catch errors, as it allows for the reduction of human error and the risks associated with it.

There is one nuance to note here. Automated tests are cool. That is why teams often try to automate all tests that they can get their hands on. But you should understand that you should not try to automate everything without reason – in some cases, the costs of developing automated tests will be many times higher than the benefits of their implementation.

Test report

A test report is a document in which the tester describes in detail the tests performed and their results. Moreover, the report may contain not only text, but also tables, lists, graphs, descriptions of the tested configurations, the stack used, and more. The basis for such a report may be the previously mentioned public list of tests.

In addition to tracking the work of testers, reports are also needed to make a reasoned decision about the fate of the software being tested – whether it can be rolled into production or should be returned to the developer and improved.

Moreover, a test report is absolutely necessary for incident analysis if the software did make it into production but caused a drop in metrics, began to affect existing application modules, or does not work correctly. In such situations, the test report helps localize the causes of the abnormal behavior.

Operation

Acceptance tests in production

Acceptance tests are one of the final stages of testing, during which the software is tested for acceptability, readiness, and quality before being released to the entire audience. The purpose of acceptance tests is to understand how well the product solves business problems and meets user needs. There are several main types of acceptance tests.

  • User Acceptance Testing (UAT). It is carried out to understand whether the software meets user requirements.

  • Operational Acceptance Testing (OAT). It is carried out to check the main parameters, including the level of fault tolerance, technical and information security.

  • Alpha Testing. It is performed to check a still “raw” product for compliance with the technical specification and its user scenarios.

  • Beta Testing. Involves checking a finished product for undetected or non-obvious flaws. Beta tests usually involve real users, who, within the framework of the test, can fully use the available functionality of the software and identify bugs in it that are inherent only to the production load.

The choice of acceptance test type usually depends on the initial requirements and the available test environments (which usually need to be deployed separately). Ideally, though, you should combine several approaches at once, despite the possible costs: it is better to invest in tests before the release than to spend money and time urgently reworking problematic software that has made it into production.

Canary release

Canary release is a strategy for deploying a new version of software for subsequent testing. The method implies that the current version remains the main one in production, and the new one is used only in certain scenarios. This approach allows you to get information about the quality of new software or its individual feature before releasing it to the entire audience.

Notably, a canary release not only surfaces obvious problems, but also lets you safely detect performance drops. And if users begin to use the updated service less after a new feature is rolled out, the problem may lie not only in code-level bugs, but also in shortcomings on the UX/UI side.
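One common way to implement such routing is deterministic bucketing by user ID: a stable hash decides whether a given user sees the new version, so the rollout percentage can be raised gradually without users flipping back and forth between versions. A minimal sketch (the percentage-based scheme here is an illustrative assumption, not a description of any specific system):

```python
import hashlib


def in_canary(user_id: str, rollout_percent: int) -> bool:
    """Route the same user consistently for a given rollout percentage."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100  # stable bucket 0..99
    return bucket < rollout_percent


def pick_version(user_id: str, rollout_percent: int) -> str:
    """Choose which build serves this user's request."""
    return "canary" if in_canary(user_id, rollout_percent) else "stable"
```

At 0% everyone stays on the stable build, at 100% everyone gets the canary, and because the bucket is derived from the ID rather than drawn at random per request, each user's experience stays consistent as the percentage grows.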

Blue/Green deployment

Blue/Green deployment is an approach to releasing and testing updates that uses two completely independent environments:

  • in the blue (“Blue”) environment, the current version of the product continues to serve user load;

  • in the green (“Green”) environment, updates are released, and part of the organic traffic is redirected there for testing on real users.

The algorithm is then simple:

  • as the release is validated on part of the audience, the volume of traffic to the “green” environment is increased;

  • after successful completion of all tests and user scenarios, traffic is switched entirely to the “green” environment.

This way, you can consistently check parts of new software for compliance with quality requirements and only release into production for the entire audience what the team is completely confident in.

Importantly, the approach allows for secure, uninterrupted updates (both forward and backward). So, if global issues are detected in the green environment, you can quickly roll back to the previous version by simply switching the routing to the blue environment.
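The switching logic described above can be sketched as a weighted router. The class and the environment URLs here are illustrative assumptions, not a real load-balancer API:

```python
import random


class BlueGreenRouter:
    """Toy router: sends a configurable share of traffic to the green environment."""

    def __init__(self, blue_url: str, green_url: str):
        self.blue_url = blue_url
        self.green_url = green_url
        self.green_weight = 0.0  # share of traffic sent to green, 0.0..1.0

    def route(self) -> str:
        # random.random() returns a value in [0.0, 1.0)
        return self.green_url if random.random() < self.green_weight else self.blue_url

    def shift_to_green(self, weight: float) -> None:
        """Gradually increase the green share as checks succeed."""
        self.green_weight = min(max(weight, 0.0), 1.0)

    def rollback(self) -> None:
        # Instant rollback: all traffic returns to the blue environment.
        self.green_weight = 0.0


router = BlueGreenRouter("https://blue.example.com", "https://green.example.com")
router.shift_to_green(0.1)  # 10% of organic traffic tests the release
router.shift_to_green(1.0)  # full cutover after successful checks
```

The key property is that both environments stay up the whole time, so cutover and rollback are just a weight change, with no redeployment.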

User documentation

With user documentation, things are a little different than in the above practices. Often, it is needed not to improve the quality of the product, but to ensure that end users understand what exactly is considered correct operation.

It is especially important when launching new projects or after global updates. For example, if after an update users do not see the button in its usual place or encounter new authorization rules, they may regard this as an error and start flooding technical support with requests or disseminating information about incorrect software operation on external platforms, damaging the company's reputation.

Thus, the user documentation:

  • simplifies the work with software for the end user;

  • serves as the source of truth for communication between the development and support teams and users.

At the same time, it is important to understand that:

  • team resources must be allocated to create technical documentation;

  • user documentation must be easy for users to access;

  • user documentation must be kept up to date.

Technical support

Technical support is an unobvious but effective way to improve a product and maintain its quality at a high level. This is because technical support allows you to get full, detailed feedback from end users, including:

  • information about the quality of the software;

  • data on defects and failures;

  • requests for improved functionality.

But to benefit from technical support, it is necessary to ensure a transparent process of interaction with it and to build a working mechanism for escalating user requests, so that feedback is not collected aimlessly.

Incident Analysis

Incident review is not only “pain and suffering” for the team aimed at eliminating incidents and their causes. It is also a way to turn problems into opportunities for improvement: proper analysis of problems and working through errors helps guard against recurrences in the future, that is, it improves the quality of the product in the long term.

But, as in previous cases, in order for this practice to bring results, it is important for the team to allocate resources for the implementation of action items based on the results of the analyses.

Analysis of technical and product metrics

Technical metrics give an idea of how the software behaves in production from a technical perspective and make it possible to respond to incidents in a timely manner.

In turn, product metrics give an idea of how the software performs in production from the business and product side, and help you react to changes in patterns and deviations from the usual indicators. For example, if users sent 10 million events per minute before the rollout, and after the release the figure dropped to 6 million, this may reflect not just an unsuccessful experiment, but a technical error affecting users.

Thus, the analysis of technical and product metrics is not a way to preventively identify problems, but a method for quickly detecting them already in production to eliminate defects.
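A simple guard over such a product metric can be sketched as follows. The numbers mirror the example above (10 million events per minute before the rollout versus 6 million after); the 20% alert threshold is an arbitrary illustrative value:

```python
def drop_ratio(baseline: float, current: float) -> float:
    """Fractional drop of the metric relative to its pre-release baseline."""
    return (baseline - current) / baseline


def needs_investigation(baseline: float, current: float,
                        threshold: float = 0.2) -> bool:
    """Flag the release when the metric falls more than `threshold` below baseline."""
    return drop_ratio(baseline, current) > threshold


# 10M events/min before the rollout vs 6M after is a 40% drop,
# well above a 20% alert threshold, so the release should be investigated.
```

In practice such a check would run over a monitoring system's time series rather than two scalars, but the comparison against a pre-release baseline is the core of it.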

What's the bottom line?

There are many effective practices that help ensure software quality at all stages of the life cycle – from drawing up the technical specifications and development to operation in production. At the same time, it is important to understand that quality assurance is not a procedure, but a continuous process. Ideally, it should be reduced to the Deming cycle (PDCA: Plan, Do, Check, Act) – this approach will allow you to constantly improve the level of requirements and quality of products.

Moreover, it is better to start thinking about measures to improve quality and ways to eliminate problems at the stage of idea formation – according to Boehm's curve, the earlier a defect is identified, the cheaper it is to fix it.

Therefore, it is important that the entire team is involved in quality improvement processes, and that each specialist understands their responsibility and the “growth areas” they can influence. Moreover, each of the practices works better in combination: the more improvement methods are applied, the better the result will be.

It is worth noting separately that “improvements for the sake of improvements” is a surefire failure. Changes must be measurable – this way you will be able not only to clearly set a goal for yourself, but also to accurately understand what results you have achieved.
