How to save a tester’s nerves, or how we cut regression from 8 hours to 2

Hi everyone!

My name is Yulia and I am a Mobile QA engineer at Vivid Money.

I’ve been in testing for a long time and have seen plenty of interesting things. But as practice shows, everyone runs into the same problems and concerns. The only difference is in how they are analyzed, approached and solved.

In this article I will tell you HOW TO MAKE REGRESSION EASIER FOR A TESTER!

I’ll tell you in order:

  1. Our processes (for completeness)

  2. The main problem

  3. Analysis

  4. Solution methods, with the results obtained

A little about our processes

So: the applications are released once a week. One day is set aside for regression testing and another for smoke testing. The rest of the time goes to developing new features, fixing defects, writing and updating documentation, and improving processes.

Regression testing is a set of tests aimed at detecting defects in areas of an application that have already been tested but are affected by changes.

Almost all positive verification scenarios are covered by test cases, which are managed in Allure TestOps.

Each platform (iOS and Android) has its own documentation and autotests, but everything is stored in one place. Any QA on the team can view and edit them. Any new cases necessarily go through review: an Android tester reviews iOS cases and vice versa. This applies to manual tests.

About the test plan for regression:

To conduct regression testing, a test plan is drawn up from manual test cases and autotests, separately for Android and iOS. The tester creates a launch (a test plan run) in which they specify the release build version and the platform. Once the launch is created, the autotests run against the selected cases, and the person responsible for manual testing assigns the manual test cases to themselves. Each executed case is marked with a status: Passed, Failed or Skipped. The results are visible immediately during the check.

At the end of the check, the launch is closed, and based on the results a decision is made about readiness for release. Everything seems neat and logical, but of course there are problems that make testers sad.

Let’s define the problem

The volume of functionality covered by regression keeps growing, and we keep running out of time.
Put differently: more and more test cases, and we only have 8 hours maximum!

Previously, all cases were included in the test plan. As new functionality was added, the test plan grew to 300 tests, and running it began to take more time than planned. We stopped fitting into a working day. So we decided to revise our approach to testing, keeping the time frame in mind while preserving quality.

Analysis and solution

Manual testing was overloaded because every new feature adds test cases, both simple ones and complex ones (consisting of transitions between screens). We also had to test interaction with the backend. Such checks took a lot of time, especially when bugs appeared and we had to figure out which side the problem was on.

Having described the weak points, we decided to refine our approach to automation and used impact analysis to identify the solutions.

Impact analysis is a study that identifies the places in a project affected by developing new functionality or changing existing functionality.

Here is what we decided to change in order to offload manual testing and shorten regression:

  1. Increase the number of autotests and develop a unified procedure for transferring test cases to automation

  2. Separate the tested functionality into frontend and backend

  3. Change the approach to forming the test plan for regression and smoke

  4. Add automatic analysis of the changes included in the release build

Below I will talk about each point in more detail and about the results we obtained after introducing it.

Increasing the number of autotests

Often, when teams want to reduce regression time, they start with automation. In our case all the stages went on in parallel, and naturally some of the checks moved into automation. How the automation process is built in our company will be described in more detail in another article.

To make the process the same for both platforms, an instruction was written. It outlines the criteria, steps and tools for the transfer. Here is, briefly, how test cases are transferred to automation:

  1. It is determined which types of checks can be automated. A manual tester does this on their own or by discussing it with the team at a meeting.

  2. In Allure TestOps the test cases are finalized: for example, more descriptions or JSON data are added.

  3. The corresponding test cases are moved to the status need to automate (also in Allure TestOps).

  4. A task is created in YouTrack. It describes what needs to be automated and links to the test cases in Allure TestOps, and a responsible AQA is assigned.

  5. Tasks from YouTrack are then taken into work based on priority. Once the changes have been merged into the necessary branches and reviewed, the tasks are closed and the test cases in Allure are moved to Automated with the status Active. The autotest code is reviewed by the developers.

Often this happens a few days before the next release, so by the day of the regression some of the test cases are already automated.

Results:

  • Reduced manual testing burden.

  • A clear and simple mechanism for moving cases into automation. Everyone is busy – no downtime.

  • More functionality is covered by autotests, which are run every day, so bugs are found earlier.

Backend and frontend separately

Test automation is separate for the backend and the frontend.

But there are E2E tests that check how they work together.

E2E (end-to-end) testing is when the whole system is tested from start to finish. It includes making sure that all integrated parts of the application function and work together as expected.

Many end-to-end autotests were run on the mobile-testing side, which required writing complex test cases. They often failed because of problems with services or on the backend.

Having worked in this format for a while, we concluded that fixing these autotests takes a lot of time, and then the E2E tests have to be run manually anyway.

We decided to clearly divide the functionality into modules, separating the logic into frontend and backend: leave a minimum number of E2E tests for manual testing, and simplify and automate the rest of the scenarios. So on the backend we check the business logic, and on the client we check the correct display of data from the backend and of the UI elements.

This allowed us to identify the most critical areas, reduce manual testing time, and make the autotest runs more stable.

For clarity, here’s a table:

| Functionality | Where it is tested |
| --- | --- |
| Simple field validation (for example, when changing a password) | client |
| Placement of UI elements on the screen | client |
| Rendering of UI elements | client |
| Display of information from the backend | client |
| Screen navigation | client |
| Correct processing and display of errors | client |
| Complex validation (for example, checking the TIN format) | backend |
| Collection of profile data | backend |
| Collection and processing of transaction data | backend |
| Creation and saving of data when working with cards | backend |
| Operation of services | backend |
| Interaction with the DB | backend |
| Error processing | backend |
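To make the split concrete, here is a minimal sketch of a backend business-logic check (the "complex validation" row above) written against the API with pytest and requests. The endpoint, payload and environment variable are assumptions for illustration, not the team's actual tests:

```python
import os

import pytest
import requests

# Assumed test-environment base URL; the real endpoint is not described in the article.
BASE_URL = os.environ.get("API_BASE_URL", "https://test-env.example.com")


@pytest.mark.parametrize("bad_tin", ["12345", "abcdefghijkl", ""])
def test_malformed_tin_is_rejected(bad_tin):
    """Complex validation (TIN format) lives on the backend, so it is checked
    via the API rather than through the mobile UI."""
    response = requests.post(
        f"{BASE_URL}/profile/validate-tin",  # hypothetical endpoint
        json={"tin": bad_tin},
        timeout=10,
    )
    assert response.status_code == 200
    assert response.json()["valid"] is False
```

On the client side, the same functionality is then covered only by checks that the validation error coming from the backend is displayed correctly.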

Results

After splitting:

  • It became easier to localize the problem

  • Problems are identified earlier and, accordingly, are resolved faster

  • There is a clear delineation of areas of responsibility. No unnecessary checks on the client.

  • Autotests have become much more stable, because they are no longer tied to services or mocks that can break at any moment. (And that moment is usually the least convenient one.)

  • The time to implement autotests has decreased: there is no need to additionally add JSON to the test cases when writing them.

Filtered test cases in the regression test plan

We now form the regression test plan based on the blocks in which changes were made, plus a set of main, permanent test scenarios.

To make it easier to form the plan, we started using tags.

Example: Regress_Deeplink, Regress_Profile, Regress_CommonMobile

Now all test cases are divided into blocks, each marked with a specific tag! There are also mandatory cases that are included in every regression test plan, and separate test cases for smoke testing in production.

This lets us quickly filter and form a specific plan according to the changes made, instead of wasting time checking what was not affected.
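As an illustration of the tag-based filtering, here is a minimal sketch of how a set of affected blocks could be turned into the list of tags for a test plan. The tag names come from the article; the mapping and the function itself are assumptions:

```python
# Minimal sketch: pick regression tags for a test plan from the list of
# affected blocks. The mapping and function are illustrative, not the team's tooling.
AFFECTED_BLOCK_TO_TAG = {
    "deeplink": "Regress_Deeplink",
    "profile": "Regress_Profile",
}

# Mandatory cases are included in every regression plan.
ALWAYS_INCLUDED = {"Regress_CommonMobile"}


def tags_for_plan(affected_blocks):
    """Return the set of tags to filter test cases by for this regression."""
    tags = set(ALWAYS_INCLUDED)
    for block in affected_blocks:
        tag = AFFECTED_BLOCK_TO_TAG.get(block.lower())
        if tag:
            tags.add(tag)
    return tags


if __name__ == "__main__":
    # Example: only the profile module changed in this release build.
    print(tags_for_plan(["Profile"]))  # {'Regress_CommonMobile', 'Regress_Profile'}
```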

Results

Introducing this additional analysis when forming test plans helped reduce the total time for regression testing from the original 8 hours to only 2. We have several test plans – full and light. Usually we run the light one, which consists of 98 cases (autotests + manual), while the full regression plan consists of 297 test cases!

Running Regress iOS light takes about 2 hours on average, and when the changes affect only a couple of modules, the regression can be done in an hour. This is a big plus, because it leaves a margin for bug fixes (if something needs to be fixed urgently). It is also always possible to go back to the reports later and see what was checked in which build.

A script that analyzes changes and sends notifications via Slack

The quality of the product depends on the whole team. So, to understand exactly which modules were affected, we asked the developers to tell us what changes went into the released version.

At first we had to remind them, ask clarifying questions and point out the affected blocks in the tasks. On the one hand, we managed to ease regression by selecting only the cases we needed; on the other hand, a lot of time went into communication and constant clarifications. Clarity was lost, and there was no full certainty that everything that needed checking was actually being checked.

The logical next step was to make this process automatic!

We created a script that collects information about commits and, after generating a report on which modules were affected, sends the necessary information to a dedicated Slack channel.

The script works simply (a rough sketch is shown after this list):

  • After each build, it gets the changes between the previous version of the application and the commits from which the build was assembled

  • It gets the list of changed files that map to changes in particular screens

  • It groups these changes by features and teams to make life easier for testers

  • It sends a message with all the information about the changes to a dedicated Slack channel
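Below is a minimal sketch of what such a script might look like, assuming a git repository, a layout where the first path component of a file identifies the feature module, and an incoming Slack webhook URL provided via an environment variable. The real script and its grouping rules are not shown in the article, so the function names, refs and versions here are illustrative:

```python
import os
import subprocess
from collections import defaultdict

import requests  # used to post to an incoming Slack webhook


def changed_files(previous_ref: str, current_ref: str) -> list[str]:
    """List files changed between the previous release and the current build."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{previous_ref}..{current_ref}"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def group_by_module(files: list[str]) -> dict[str, list[str]]:
    """Group changed files by top-level directory (assumed to be the feature module)."""
    groups: dict[str, list[str]] = defaultdict(list)
    for path in files:
        module = path.split("/", 1)[0]
        groups[module].append(path)
    return groups


def notify_slack(groups: dict[str, list[str]], build: str) -> None:
    """Post a summary of affected modules to a Slack channel via a webhook."""
    lines = [f"Changes in build {build}:"]
    for module, files in sorted(groups.items()):
        lines.append(f"• {module}: {len(files)} file(s) changed")
    requests.post(
        os.environ["SLACK_WEBHOOK_URL"],  # assumed to be configured on CI
        json={"text": "\n".join(lines)},
        timeout=10,
    )


if __name__ == "__main__":
    # Example: compare a (hypothetical) previous release tag with the current build commit.
    files = changed_files("release-1.41.0", "HEAD")
    notify_slack(group_by_module(files), build="1.42.0")
```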

Results

What we gained by adding build analytics:

  • Reduced developer time for manual analysis of the changes made

  • Reduced the risk of overlooking or failing to check the required functionality

  • Simplified communication on this issue

Naturally, it took time to write the script and integrate it with Slack, but afterwards it became easier for everyone to keep track of this process.

Briefly about the main thing

  1. Using tags in test cases and in test-plan generation reduced the size of the test plan and, accordingly, the testing time.

  2. Developing and using the change-notification script made it clear which modules were affected by the tasks going into a release or by bug fixes. Testers also stopped distracting developers with such questions.

  3. Automation now covers about 46% of test cases, which greatly eases manual testing. On top of that, there is time left for updating cases and writing new ones.

  4. Separating testing into backend and frontend helped localize problems early and fix them in time.
