How to organize testing to accelerate and stabilize product releases. Part 2

A tester has many opportunities to improve product quality and make the team's work more comfortable. The main thing is to discuss any changes with the team and implement only what is convenient and useful for everyone.

My name is Victoria Dezhkina, and I am responsible for testing a number of products in the Big Data Directorate of X5 Retail Group. In the previous part of this article, I began describing how we changed the processes in the product team behind the “Automation system for procurement of a retail network.” Product releases were constantly delayed by several days and often came out raw. We changed the code rollout and task planning order, which shortened the release cycle by a few days, but we still had to work out the optimal format for setting and accepting tasks, establish testing points in the release cycle, and learn to prioritize defects for fixing.


The format for setting and accepting tasks and defects

How a task is set largely determines how quickly and correctly it will be completed. Tasks can be described in different ways, for example, with user stories that reflect the needs of the system's end user. A user story sounds something like this: “the user wants to receive a report by pressing the green button.”

The disadvantage of this approach is that it does not explain what will be “under the hood” of the solution. User stories leave developers too much freedom, and sometimes that becomes a problem: the team starts reinventing the wheel or building something overly laborious. And given that under rapid development there is almost never enough time for complete documentation, this approach produces cryptic code that greatly complicates onboarding new developers onto the product.

We discussed several formats for describing tasks and defects and settled on a “hybrid” approach: use case + technical subtasks. The business customer writes the use case, that is, describes how the new functionality will be used, and the analyst and the tester turn it into technical subtasks for the developers. In the Jira task description, we add the use case from which the task was derived, or a test case that reproduces the error, while the name and description of the main task remain “human-readable.”

Let's see, for example, what's inside a defect named “The user does not understand how errors that occur when choosing a purchase rate are handled.” The task description contains:

  • a case that reproduces the error;
  • the actual and the expected result;
  • subtasks for the backend and the frontend with clear fixing instructions for the developers: “Backend: for this API, return the corresponding response to the frontend” plus a matrix of options showing which response is expected in each possible situation; “Frontend: for this API, depending on the response from the back end, display the corresponding error text” plus the same matrix of options (see the sketch below).
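
To make this concrete, here is a minimal sketch of such an option matrix in Python. The response codes and message texts are hypothetical, invented for illustration; the real matrix lives in the Jira subtask.

```python
# Hypothetical option matrix for the "purchase rate" defect:
# backend response code -> error text the frontend should display.
PURCHASE_RATE_ERRORS = {
    "RATE_NOT_FOUND": "The selected purchase rate no longer exists. Refresh the list.",
    "RATE_EXPIRED": "The selected purchase rate has expired. Choose another one.",
    "RATE_LOCKED": "The rate is being edited by another user. Try again later.",
}

def frontend_error_text(backend_code: str) -> str:
    """Frontend side of the contract: map a backend code to a user-facing message."""
    return PURCHASE_RATE_ERRORS.get(backend_code, "Unknown error. Please contact support.")
```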

When a developer finishes their subtask, they simply close it. Once all subtasks are closed, the defect is returned for retesting. If additional problems are found, the corresponding subtask is created again.

From this follows our defect description rule:

  1. Create a task with a description of the functional problem, a case for reproducing the error, and a link to the story during whose verification the defect was found.
  2. Add two subtasks to the task: one for the backend and one for the frontend. A frontend subtask contains additional information: on which page and in which environment the defect occurs, which API or component does not work correctly, what exactly needs to be fixed, and a link to the use case describing the correct behavior. A backend subtask contains a description of the environment where the error was found, which API it is, what parameters are passed, what response comes back, the reason the implemented logic is considered incorrect (with a reference to the documentation), and instructions on what exactly needs to be changed. A skeleton of such a defect is sketched below.
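
As a rough outline, a defect assembled by this rule might look like the following skeleton. All field names are illustrative, not actual Jira fields:

```python
# Illustrative skeleton of a defect with its two subtasks; every name here is hypothetical.
defect = {
    "summary": "The user does not understand how purchase-rate errors are handled",
    "reproduction_case": "...",  # steps that reproduce the error
    "story_link": "...",         # story during whose verification the defect was found
    "subtasks": [
        {
            "component": "frontend",
            "page": "...",
            "environment": "...",
            "api_or_component": "...",
            "what_to_fix": "...",
            "use_case_link": "...",  # link to the description of the correct behavior
        },
        {
            "component": "backend",
            "environment": "...",
            "api": "...",
            "parameters_passed": "...",
            "actual_response": "...",
            "why_incorrect": "...",  # with a reference to the documentation
            "what_to_change": "...",
        },
    ],
}
```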

We also stopped writing acceptance criteria (AC) for our product, since at the planning stage we discuss not only what we are developing and testing, but also how.

What did this give us? With this approach, at any moment we can see what is wrong with the functionality from the user's point of view and at what stage the work on the defect is, and, depending on the load on the back end and the front end, we can prioritize the subtasks of the same defect differently.

As a result, even before development starts, the whole team understands which part of each task will affect them personally, and by the end each task records how it was developed, how it was tested, whether there was documentation for it, and what was corrected in it during development.

This approach is used only on our product, because it turned out to be the most convenient for us. Other products of the X5 Big Data Directorate use their own schemes, for example, user stories with AC.

It might seem that our approach does nothing to accelerate development, since it requires more steps before work can start. But that is not so.

We organized the process so that testing runs in parallel with development. The developer does not sit idle while the tester works through and localizes the task as precisely as possible. Plus, we always see which developer worked on a task and how it was implemented, which helps us understand who will cope faster with similar problems in the future. The logic is simple: the less a developer does things not directly related to writing code, the better; and the more precise the defect localization, the deeper you can think about the connections and problems a specific error can cause.

The question may also arise whether the rules we established on our product interfere with forming uniform testing and development standards in the department. They do not: the department's general rules determine what a task must contain at each stage of development, and we comply with those requirements; we simply work the task out at earlier stages.

Testing points

We discussed for a long time at what stage to conduct testing. At first there was an idea to check each task in its local branch, but with that approach it would be impossible to check how the tasks work together, and their conflicts would surface only in the assembled release, when it is too late to change anything.

Therefore, we agreed to test each task separately, but on a single test bench. At first we wanted to roll out several tasks at once, but I already described above what risks that idea carries. One at a time is much faster. This is a known effect: reducing the number of parallel tasks does not slow the process down but speeds it up (a toy calculation below illustrates why). In Kanban, for example, there is the notion of a WIP limit (WIP = work in progress), which caps the number of tasks each role may work on simultaneously.
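
Here is a back-of-the-envelope illustration with made-up numbers (three tasks of two days each), showing why limiting work in progress shortens the average time to deliver a task:

```python
# Toy model: three tasks, each needing 2 days of work from one person.
tasks = [2, 2, 2]

# WIP limit = 1 (one task at a time): tasks finish on days 2, 4 and 6.
finish_sequential, day = [], 0
for effort in tasks:
    day += effort
    finish_sequential.append(day)

# No WIP limit (round-robin across all three): every task finishes only on day 6.
finish_parallel = [sum(tasks)] * len(tasks)

print(sum(finish_sequential) / len(tasks))  # 4.0 -> average days until a task is delivered
print(sum(finish_parallel) / len(tasks))    # 6.0 -> every task waits for all the others
```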

As a result, we established five points where testers are actively involved in the development process:

  • At the documentation stage. We make sure that nothing conflicts with the logic of what has already been done, and we record the implementation details of each task.
  • At the task-setting stage. Together with the analyst, we talk through all the possible cases related to the task and take them into account when forming it.
  • At the planning stage. We discuss how the planned implementation may affect related functionality and what problems it may bring. We coordinate all critical defects with the product owner and add them to the sprint.
  • In preparation for the release. We iteratively check each task on a test bench, and on the day before the planned release we collect all the tasks together and check them on one bench.
  • After the release. We check how the release behaves in production.

At the start, when we released every two weeks, the workflow looked like this: [diagram of the two-week release cycle].

After the changes, with weekly releases, it became: [diagram of the one-week release cycle].

Rules of interaction in the backend – testing – frontend chain

When an API passes a lot of different data between the backend and the frontend, it is not always clear why each piece is needed and how they interact, and this can cause malfunctions on the frontend. For example, the backend passes the calculation number, demand cal. Nominally it is a single parameter, but eight more fields must be pulled from the back end for the calculation to be performed along with it. If they are not passed together with the calculation number, the frontend will not perform the operation.
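
A minimal sketch of this kind of contract check, assuming hypothetical key and field names (the article does not list the real eight fields):

```python
# Hypothetical contract: the calculation number is usable only together with
# its companion fields (the real API needs eight of them; four are shown here).
REQUIRED_WITH_CALC = ["period_start", "period_end", "supplier", "warehouse"]

def missing_companions(payload: dict) -> list[str]:
    """Return companion fields absent from a backend response that carries the
    calculation number; without them the frontend cannot run the calculation."""
    if "demand_calc_number" not in payload:  # hypothetical key for the calculation number
        return []
    return [field for field in REQUIRED_WITH_CALC if field not in payload]

# Usage: an incomplete payload is caught before it breaks the frontend.
print(missing_companions({"demand_calc_number": 117, "supplier": "A1"}))
# -> ['period_start', 'period_end', 'warehouse']
```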

To avoid such situations, we began describing the parameters being passed in the comments to the API development subtask in Jira, explaining what data the back end and the front end will exchange. We tried to describe all the APIs in Swagger, but its automatically generated documentation could not convey to the frontend what exactly the backend does with the passed parameters. So we agreed that if a parameter is not simply stored on the back end but involves other parameters, its purpose must be described in the task.

We also began to control variable naming so that fields in the same API are standardized. Our product consists of microservices, and each can have its own field names: the supplier name field may be supplier in one, supplierID in another, name in a third, and so on. When such data is passed to a single frontend component, difficulties begin, so we went through all the parameters and standardized the variable names. To do this, we compiled a summary table of all current names and a table of all frontend components with the variables they use (the frontend developer helped a lot here) and compared them. Now all new APIs get standard variable names, and old APIs are corrected whenever tasks to extend them arise. The sketch below shows the idea of such an alias table.
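
Conceptually, that summary table is an alias map from per-service names to a single standard name. A minimal sketch, seeded only with the supplier example from the text:

```python
# Alias table: per-microservice field names -> one standard name.
# Only the supplier example is shown; the real table covers every field.
FIELD_ALIASES = {
    "supplier": {"supplier", "supplierID", "name"},
}

def normalize_fields(record: dict) -> dict:
    """Rename known aliases to the standard field name before data reaches the frontend."""
    normalized = {}
    for key, value in record.items():
        standard = next((s for s, names in FIELD_ALIASES.items() if key in names), key)
        normalized[standard] = value
    return normalized

print(normalize_fields({"supplierID": 42}))  # -> {'supplier': 42}
```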

Accelerating defect fixes

At the task-setting stage, the business customer determines the priorities: they know best what the product needs and in what order. But after the rollout to dev, when defect-fixing tasks appear, the tester sets their priorities.

Sometimes these priorities need to be changed urgently, for example, when we find a minor backend defect that blocks the frontend team from starting its fix.

Previously, in such situations we went straight to the developers and asked them to change the task priorities, but this distracted them. So we agreed to contact them only at set times: after the code freeze, up to five times a day. What did this give us? We stopped cutting into the developers' productivity with sudden calls, got rid of downtime, and gave the analyst more time to work through tasks.

Moreover, since tasks no longer land on developers spontaneously, we always know who is carrying what load and who has worked on a similar task before and can deal with it faster. As a result, we understand much better whether we will manage to prepare the release on schedule.

These measures, together with the unified logic of rolling out code to dev, release, and prod, allowed us to cut the defect-fixing period from 3 days to 3-4 hours.

Results

Over the nine months of working on our procurement automation product, we managed to reduce the release cycle from 2.5 weeks to 1 week, with the option of rolling out daily, while significantly increasing release stability.

What changed:

  1. We got rid of the need to fix defects after development by moving that work, as far as possible, to the task-preparation stage.
  2. We cut the defect-fixing period from 3 days to 3-4 hours.
  3. We gained the ability to roll out releases “on command”: now we can decide on any day, roll out the tasks, and by evening everything is ready and debugged.
  4. We increased process transparency for everyone involved: all the team's developers and testers now understand what is happening at any moment, who is busy with which tasks, how much more time development and bug fixing will take, and so on.

BONUS: I managed to reduce the stress level in the team (I hope), and thanks to the team's coordinated work (thank you, delivery!), switching to remote work was easy 🙂

While introducing these improvements, we adhered to several rules:

  • Testers and developers are in the same boat (repeat it like a mantra!), so the first thing a tester needs to do is get along with the team, find out what worries it most, and enlist its support. My allies and partners in the team were the delivery manager and the developers.
  • There is no ready-made ideal solution; it has to be sought. The tester does not impose their rules on anyone; they adapt to the team and change their approaches with it, while keeping in mind an image of a brighter future and gently introducing measures to achieve it 🙂
  • Overly strict restrictions and standardization are not the way: if you overdo them, teams lose their flexibility.

The interaction rules that helped us accelerate the development of this product cannot be transferred as-is to other products of the Directorate: they are arranged differently, and their developers' needs differ. But the principle behind creating such rules is the same: establish the code rollout order, find the optimal testing points, and document the code and the APIs.

In parallel with the work inside individual products, we are developing rules to ease the interaction between products; we will cover this in future articles. This is how a product quality strategy is gradually taking shape in the Big Data Directorate.
