What should we do? Each block of the development process needs preliminary preparation: task decomposition, estimation and planning, development itself, exploratory testing, and release. Preparation does not mean simply throwing old parts out of the process; it means replacing them adequately, in a way that improves quality.
Moving from waterfall to scrum
A few years ago, we realized that our development process, built on the classic waterfall, had to be rebuilt so we could deliver value to users faster and more often. Scrum seemed a great fit, because it allowed each sprint to end with an increment.
We introduced scrum events and short iterations. In theory, everything looked good:
- for the release at the end of the week, have the functionality ready by Wednesday,
- test it on Thursday,
- fix bugs,
- roll out to production on Friday.
In reality, it turned out differently, because we had not actually changed the process: we had only squeezed the waterfall into a weekly sprint. As a result, the functionality was most often ready not by Wednesday but by Friday, because we could not estimate tasks correctly, or higher-priority tasks arrived in the middle of the sprint. Testing usually fell outside the sprint altogether.
Then we moved the preparation of acceptance test scenarios to the beginning of the sprint, and it paid off immediately. Scenario preparation takes roughly 60% of testing time. Running ready-made scenarios is fast, and as a bonus we learn about non-standard cases before development starts and can account for them in planning.
QA process steps
Kick-off meeting, example mapping, acceptance scenarios
The product manager brings a user story to the team, or the technical lead brings a technical story for developing a component.
The first thing to do is decompose the story. To do this:
- The team forms a shared understanding of the story's requirements among all participants, including by asking the product manager an exhaustive set of questions. This also helps uncover requirements that were missed initially. For these meetings we use the example mapping framework (a map of test cases), which significantly increases their effectiveness. It is important not to apply the framework formally, without understanding how it works: otherwise it will not help, and the team will turn against such changes. More on example mapping: in Russian, in English.
- The UX designer designs the user behavior and creates mockups.
- The developer designs the technical side of the implementation.
- The QA engineer develops acceptance criteria for each story and creates acceptance scenarios based on them: not a draft, but a complete list of the tests that need to run to be sure everything is checked.
Acceptance scenarios (acceptance criteria / definition of done) are not just a list of test cases, but the result of an exhaustive, detailed decomposition of the task, after which you should reach the state of "there is nothing more to discuss here."
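For illustration, here is a minimal sketch of what such an exhaustive list of acceptance scenarios might look like once automated. The discount feature, function, and cases are hypothetical, not from our product:

```python
# Hypothetical example: acceptance scenarios for a "discount code" story,
# written as a complete, explicit list of checks rather than a rough draft.
def apply_discount(price: float, code: str) -> float:
    """Apply a discount code; unknown codes leave the price unchanged."""
    discounts = {"SPRING10": 0.10, "VIP20": 0.20}
    return round(price * (1 - discounts.get(code, 0.0)), 2)

# Each acceptance scenario becomes one named, unambiguous check.
ACCEPTANCE_SCENARIOS = [
    ("valid code reduces the price", 100.0, "SPRING10", 90.0),
    ("a larger code reduces it more", 100.0, "VIP20", 80.0),
    ("unknown code changes nothing", 100.0, "NOPE", 100.0),
    # a non-standard case surfaced during example mapping:
    ("zero price stays zero", 0.0, "SPRING10", 0.0),
]

def run_acceptance_scenarios() -> int:
    for name, price, code, expected in ACCEPTANCE_SCENARIOS:
        assert apply_discount(price, code) == expected, name
    return len(ACCEPTANCE_SCENARIOS)
```

When the list reaches the "nothing more to discuss" state, every scenario is this concrete: named, with explicit inputs and an expected result.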
Backlog Grooming and Sprint Planning
At this stage we estimate, among other things, coverage problems and think through the exploratory tests that may be needed: load testing, security testing, consumer testing, and so on. Then, during sprint planning, we explicitly include test coverage tasks in the sprint, or set acceptance criteria for the main tasks in which tests are also explicitly accounted for.
Testing is an integral part of the task, and writing tests is normal developer work. But not everyone is used to this yet, so at least in the early stages it is better to include test coverage in the sprint explicitly. Fortunately, we now see cases where developers themselves remind us that scenarios have not yet been prepared for a specific task.
If we introduce restrictions and rules (for example, you cannot merge a task unless all acceptance scenarios are automated and pass), then the only way to speed up time to market is to improve quality. We can only go faster if we get better.
Improving quality reduces the number of iterations and the development time. In our experience, this cuts development time by more than half.
Development itself and manual testing
The main difficulty here is the large number of development iterations. For example, one of the features in our product went through 26 iterations. Why? Because earlier in the process the engineer handed the code straight to QA instead of self-testing, which often meant errors and many rounds of rework.
It could look like this:
- The developer implements the task but does not test it thoroughly, knowing that the QA engineer will check everything after him.
- The QA engineer finds errors and returns the task for rework.
- The developer corrects the errors found, but introduces new ones.
- The cycle repeats many times.
As a result, no one can guarantee the quality of the functionality. The developer does not remember what he did in the previous iteration, and the QA engineer does not know what he checked and at which point. The reasons are both the blurred eye (it is hard to look at the same thing many times in a row) and the fact that everyone is juggling several features at different stages of development.
What to do about it? We could hand manual testing from QA engineers over to developers, but that could cost us quality. Process changes are justified when they guarantee an improvement in the quality of the result. Therefore, we did not simply remove manual testing; we replaced it with new practices that give better quality:
- Preparing acceptance scenarios, thanks to which the developer knows exactly what needs to be checked and has every opportunity to do it.
- Test coverage at different levels. We release daily, and about 30 teams make changes to the code. At the same time, our website, frontend, and backend are three monoliths divided into modules and components, but interconnections remain that can break.
- Test automation. We cover code with tests right during development; to make this possible, all QA engineers in the company can write autotests. Test coverage is organized differently across teams: in some, developers write all types of tests (unit, integration, component, e2e), in others QA covers API tests or writes all the autotests.
- Verification of positive scenarios with the product owner. This lets the team better understand the product idea and shake out the story once more.
- Verification of layout and design. This stage happens together with the designer and the client-side developer before the merge request is merged.
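To make "coverage at different levels" concrete, here is a minimal hypothetical sketch: the same pricing logic checked by a fast unit test and by an integration-style test through a thin service layer. The functions and numbers are illustrative, not from our codebase:

```python
# Hypothetical sketch: the same logic covered at two levels.
def vat(amount: float, rate: float = 0.2) -> float:
    """Pure function: the unit under test."""
    return round(amount * rate, 2)

def invoice_total(amount: float) -> dict:
    """A thin 'service' layer standing in for a real API handler."""
    return {"net": amount, "vat": vat(amount), "total": round(amount + vat(amount), 2)}

def test_vat_unit():
    # Unit level: fast and cheap, so edge cases can be exhaustive.
    assert vat(100.0) == 20.0
    assert vat(0.0) == 0.0

def test_invoice_integration():
    # Integration level: checks that the layers fit together.
    assert invoice_total(100.0) == {"net": 100.0, "vat": 20.0, "total": 120.0}
```

The unit test can afford to enumerate every edge case; the integration test only confirms the layers are wired correctly.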
Our product runs in different browsers and in several desktop and mobile applications. With the large number of changes affecting many browsers and applications, we cannot manually re-check today what we implemented yesterday. It is impossible to test everything by hand at this frequency, so automation in our case is a necessity, not a fashion.
We have low-level tests. For example, the logic of methods should be covered at the unit level. At the e2e level there are too many cases to cover (their number is, in essence, the Cartesian product of the variations of using the different methods).
With a large number of users, there will always be someone who triggers a specific variation, and without low-level tests it can slip through testing. This is one of the main reasons bugs appear in production.
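The Cartesian-product argument is easy to make concrete. Assuming, hypothetically, three independent parameters with 4, 3, and 5 variants, an exhaustive e2e suite would already need 60 scenarios, while unit tests can cover each parameter's logic separately:

```python
# Hypothetical illustration: why combinations are covered at the unit level.
from itertools import product

# Three independent parameters of a feature (names are made up).
formats = ["csv", "json", "xml", "xlsx"]   # 4 variants
locales = ["en", "ru", "de"]               # 3 variants
channels = ["web", "api", "mobile", "email", "partner"]  # 5 variants

# An exhaustive e2e suite would need one scenario per combination.
exhaustive = list(product(formats, locales, channels))
assert len(exhaustive) == len(formats) * len(locales) * len(channels)  # 60

# Unit tests instead cover each parameter's logic on its own:
# 4 + 3 + 5 = 12 checks, plus a few spot-checked combinations at e2e.
unit_checks = len(formats) + len(locales) + len(channels)
```

Each new parameter multiplies the e2e count but only adds to the unit count, which is why the rare variation a real user eventually triggers is far cheaper to catch at the unit level.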
Now the developer knows that no one will check the functionality after him, and everything he merges goes to production automatically. Therefore, developers do manual testing. Not because there is no QA engineer in the chain, but because it raises responsibility and quality. In any model the developer must make sure that what he planned is what he got: not blindly trust his experience, but verify everything against it. I will add that the developer does not enjoy manual testing, which motivates him to cover the code with tests. And unit tests spare him from re-checking the functionality over and over, which means we do not transfer the blurred-eye problem from QA to the developer.
It happens that some details cannot be thought through at the earlier stages; then a QA engineer joins during development to adjust scenarios or do manual testing. But these are isolated cases.
Thanks to these changes, we implement both simple tasks and large complex ones (for example, a month of work for 5 engineers) in just a few iterations, often in one. We agreed internally that backend tasks should be implemented in 1–2 iterations, and complex frontend tasks in a maximum of 5. If the number of iterations grows, that is a signal of a process problem.
The "checkmark" and exploratory testing
Having removed the routine day-to-day testing from the QA engineer, we freed up 80% of his time. Teams can quickly find things to spend free QA time on, but that does not always improve quality. We spent it on additional testing that helps dig deeper and find the non-standard cases we previously missed in production.
A large feature is usually implemented by several people, consists of a sequence of tasks, is released in parts, and is initially hidden from users (we use a "checkmark", i.e. a feature flag, for this). While the feature is in production but still hidden, the QA engineer runs all the exploratory tests worked out during grooming: load testing, security testing, consumer testing, and so on. For example, he can set aside time and purposefully try to break the finished functionality as a whole. QA has everything needed for this: he understands how the feature works, having studied it in detail at the kick-off meetings and while creating the acceptance scenarios, and his eye is not blurred, since he hardly participated in the development itself.
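A minimal sketch of what the "checkmark" mechanism might look like; the flag store and function names here are hypothetical (real systems typically keep flags in a config service or database):

```python
# Hypothetical feature-flag ("checkmark") sketch: the feature ships to
# production but stays hidden until the flag is flipped.
FEATURE_FLAGS = {"new_search": False}  # deployed, but hidden from users

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so unfinished features stay hidden."""
    return FEATURE_FLAGS.get(flag, False)

def search(query: str) -> str:
    # Both code paths live in production at the same time; the flag
    # decides which one users actually see.
    if is_enabled("new_search"):
        return f"new engine results for {query!r}"
    return f"old engine results for {query!r}"
```

With a setup like this, QA can exercise the hidden path in production (for example, by enabling the flag for a test account), and "removing the checkmark" simply means flipping the flag on for everyone.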
At this stage the product manager should make sure the team implemented the functionality that was planned. He checks that the result matches the problem statement, walks through the main positive scenarios, and works with the feature himself.
Exploratory testing covers the new functionality as a whole and how it fits into the current product: its consistency, its interaction with other functionality, and so on.
Release and monitoring
After all the exploratory tests are completed, we release the functionality to users (remove the "checkmark"), and the team starts monitoring the feature. The release process itself consists of several stages, but I will write about that another time.
Briefly about everything that we changed in the testing process
Testing no longer happens at the end of the sprint; it is distributed across the entire sprint.
It is not the QA engineer who is responsible for the quality of the result, but the whole team. Previously, QA took responsibility for everything the team did, because only he tested and gave the go-ahead for release. Now everyone has their own role in maintaining quality:
- The designer is responsible for the consistency of the UX in the product and the usability of the feature;
- The developer is responsible for test coverage, including e2e;
- The QA engineer is responsible for the tricky cases of interaction with other parts of the system and for the various testing approaches that help test the feature as a whole;
- The product manager ensures the team implements the feature that users really need. Or rather, that once live the feature meets all the criteria that were conceived.