How to avoid failures when planning testing

Hello, I'm Kostya – QA Lead at tekmates. We build digital products for large, small and medium-sized businesses. I've worked in testing for 4 years – both in custom development and on a product of my own. During this time I've had a hand in web, mobile, API, OLAP and IoT projects.

In this article I'll cover common mistakes in planning the testing of mobile and web applications and, of course, how to avoid them. All the tips come from my own practice, so don't hesitate to share in the comments how testing works on your side – it will be interesting to pick up working lifehacks.

In addition to advice, I'll also share some interesting cases: for example, which automation tools helped us cut regression work from 2 hours down to 20-25 minutes.

So, let's begin. These are the problems I see.

Problems in planning mobile testing

Mistake 1: Wrong balance of devices and testing platforms

Grabbing whatever phone is at hand and testing on it is unproductive. By keeping one working device on your desk and testing on it every day, you are ignoring one of the core principles of testing – the pesticide paradox. Its essence: if you run the same checks for long enough, one day they stop finding errors.

It is important to understand your target audience and market trends. Based on this, form and adjust the pool of devices and periodically "shake up" testing by rotating devices between QA engineers.

To choose the right devices, it is useful to refer to statistics. According to Statcounter and Backlinko, over the past year Android has held about 70% of the mobile OS market share and iOS about 30%. At the start of a project, or along the way, I advise collecting statistics on devices and operating systems from the same Statcounter and putting them into a table. For example, like this:
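Device                | OS version  | Market share
Samsung Galaxy S23    | Android 14  | ~8%
Xiaomi Redmi Note 12  | Android 13  | ~6%
iPhone 14             | iOS 17      | ~10%

(The rows and shares here are purely illustrative – build yours from live Statcounter data for your market.)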

Consider the minimum OS version your application supports and don't forget to include it in the table. Even such a simple table gives a rough picture of the main coverage you can be guided by.

Since there are so many phone models, it is impossible to test the application on all of them. So you need to pick a sensible sample of models for testing on real devices and cover the remaining needs with emulators and simulators.

My team and I try to use at least 2-3 devices from the table on our projects to check the basic functionality, since nothing replaces checking the hardware: camera, battery, speakers, vibration motor, calls, etc. Individual cases we test on emulators and simulators – Android Studio, Xcode, Genymotion. For example, this is how we check cases that depend on screen resolution and cases specific to particular OS-model combinations.

Clearly, there isn't always enough money to buy real devices for every need. But you shouldn't just spin up an emulator with a random model and OS either. Mix both testing options: following simple combinatorial logic, you are far more likely to cover all the necessary conditions.

Mistake 2: Insufficient preparation of test cases and test data

It's a big problem when QA engineers can't start preparing for testing before the functionality is assembled into a build or branch. As a result, day X arrives, testing can begin, and then it turns out that:

  • the requirements haven't been reviewed,

  • the test data hasn't been prepared,

  • test coverage is low.

Been there? Sounds familiar, right?

If requirements analysis suffers early in a project, it hits future testing hard: we can't start testing on time because we have to clarify information as we go, while simultaneously preparing test documentation.

Therefore, it is important to start studying product requirements as early as possible, following the Shift Left principle. In practice, this means:

  • We introduce mandatory review of the analytics by the whole team (especially QA): even before a task goes to development, we clarify all open questions, note all comments and document them. This lets us understand how the functionality will work while requirements are still being gathered, and helps avoid errors.

  • We always review test documentation: a second pair of eyes checks how well the test data and test cases are prepared. This is necessary because we all make mistakes – we get tired, forget things, overlook details. A second participant in the preparation process will look at your work, assess with fresh eyes how ready the test documentation is, and help you not miss the details.

  • We introduce metrics for test coverage of requirements, to track how widely and deeply we have done our work.

    Test coverage relative to requirements can be calculated using the formula:

Tcov = (Lcov/Ltotal) * 100%

Where:

  • Tcov — test coverage.

  • Lcov — the number of requirements verified by test cases.

  • Ltotal — total number of requirements.
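For example, if 96 of 120 requirements are covered by test cases: Tcov = (96 / 120) * 100% = 80%.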

Don't forget about the levels of functionality coverage by type of testing: take both smoke and regression runs into account.

Mistake 3: Not enough time to test

A striking example from experience: we worked in Agile with two-week sprints. The application had been successfully released to the stores and had good reviews. But regression after regression, testers complained that they didn't have enough time to complete their tasks. Moreover, when I joined the project as QA Lead, I saw that the test documentation was very thin, the test cases had last been updated months ago, and a number of basic documents and instructions were missing. All the sprint time went to operational routine; there was not enough left to really ensure the required quality of the application. Digging deeper, I found out that the teams planned without estimating.

Before my eyes, the project manager simply grabbed tasks from the backlog and threw them into the sprint, accompanied by a simple "Will we make it?". Because there was no estimation, the backlog grew and the release slipped – which took its toll on the team's mood.

My honest reaction to this whole situation

This is a complex problem, but QA should always estimate the time needed to complete tasks. This helps you plan the sprint wisely, allocate the necessary time for the work, and not hope for a miracle. You always need concrete numbers to hold on to; otherwise success or failure is hard to measure.

We talked it over with the departments and agreed that every sprint participant responsible for a task must estimate it in time. There are many techniques for this – for example, here and here – that help calculate the time needed for a task.

Most often we use the following techniques:

Three-point estimate

We think like this:

E = (O + R + P) / 3

or

E = (O + 4R + P) / 6

Where:

  • E — estimated execution time.

  • O — optimistic execution time.

  • R — realistic (most likely) execution time.

  • P — pessimistic execution time.
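A small sketch of the weighted variant in TypeScript (the hours are made up):

// Three-point estimate, weighted (PERT) variant.
function threePointEstimate(o: number, r: number, p: number): number {
  return (o + 4 * r + p) / 6;
}

// 2h optimistic, 4h realistic, 9h pessimistic -> 4.5h
console.log(threePointEstimate(2, 4, 9));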

Bottom-up estimate

We think like this:

Eg = E1 + E2 + E3 + … + En

Where:

  • Eg — total estimated execution time.

  • E1, E2, E3, …, En — time to complete each part of the task.

The bottom-up method works well for estimating a User Story. For example, to get the total time for testing a User Story, you estimate separately (a small sketch follows the list):

  • studying and reviewing requirements,

  • study of design (if any),

  • writing test cases,

  • preparation of test data,

  • and so on.
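And the bottom-up sketch promised above, in TypeScript with made-up hours:

// Bottom-up: the User Story total is the sum of its parts.
const storyParts: Record<string, number> = {
  'review requirements': 2,
  'study design': 1,
  'write test cases': 3,
  'prepare test data': 1.5,
  'run the tests': 4,
};

const totalHours = Object.values(storyParts).reduce((sum, h) => sum + h, 0);
console.log(`User Story estimate: ${totalHours}h`); // 11.5h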

When estimating Agile projects, we start with a rough estimate: a general assessment of the various parts of the project that we then continually refine as new information becomes available. Like Agile planning itself, estimation happens continuously and becomes more detailed as the project progresses. So don't forget to correct yourself here too and revisit your results when necessary.

As soon as we introduced estimation, the understanding of the sprint changed dramatically: we began allocating time to work on incidents, and tasks appeared for creating test documentation. Regression and smoke runs were also included in the sprint with concrete numbers attached. The teams felt less tired, stress levels dropped, and the client became happier. We finally understood where we stood and what our results were, because we had taken control of our time.

Lack of adequate quality control

Mistake 1: Not using standard quality control tools

The first simple tool we need is a version tracking tool.

You might ask: "But isn't that what developers do?" It may not be obvious, but many developers ignore this process until the last minute. Good application development is done in stages. As we improve the application and add new features, it is important to maintain a version numbering system. It helps you understand what each build contains and where those builds go next.

For example, simply naming a build 1.0 and then bumping it to 1.1 isn't that hard. However, everything changes when we start asking questions:

  • What does version 1.0 even mean?

  • Is this the first version of the application?

  • Or is it already released?

If we know the answers to these questions, we can correctly plan our testing forward and even backward.

You can track improvements, features and changes using Git, SVN, or even a simple text file or spreadsheet – we keep one in the project's Confluence section for QA. More importantly, the process also needs to be documented: record what problems each version had. When something doesn't work in the current version, it may mean rolling back to the last known working version. And if you don't track versions properly, you can get badly burned…
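A minimal sketch of such a log (the entries are invented):

Version | Date       | Key changes                 | Known issues
1.2.0   | 2024-03-01 | new payment flow            | checkout sometimes times out
1.2.1   | 2024-03-05 | hotfix for checkout timeout | none known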

Another rather trivial point: this system has to be kept up to date. Make it a habit: after a new build is released, go and update the information on the testing side. Again, if the development team has already taken responsibility for this, we just need to make sure we are aware of it and have access to the information.

If the developers haven't done this, go and help them – make sure they share the data needed to document the changes. After all, they will depend on this information just as much as QA does.

The second tool we use is a bug tracking tool.

Planning is difficult unless we can clearly identify what was broken, what is broken now, and what might break in the future. It's just as important to understand what has already been fixed, what is being fixed, and what still needs fixing.

Another example from life: a tester didn't file a bug report and asked the developer to fix the issue verbally. The fix was made, but no one documented:

  • What kind of bug was it?

  • Where was it found?

  • Which build will the fix be in?

  • When will the fix be released?

As a result, we end up working in a gray zone where everything is blurry, and the whole testing process is left open to outside interpretation. In the example above, it may even look like the testers are doing nothing and every feature is being programmed perfectly.

Tracking bugs, like tracking versions, takes real effort and time. When we're pushing a product toward release, controlling these things may seem like a waste of time, but it's not! QA engineers need to know about the application's problems before moving into the testing phases. And everyone on the project, regardless of role – from developers to managers – will use this information in planning.

Mistake 2: Lack of information about the key elements when planning

There are four key elements to keep in mind:

Client

From a business perspective, we don't start testing with software or even documentation. We start with the client. It's good when there is a target demographic we expect to use our app, so we can allocate our resources around those users.

Information about potential users will help answer the following questions:

  • What devices do they use?

  • What apps are they downloading?

  • Will they pay for the apps?

  • Are they high-tech, sophisticated users?

  • Or are they newbies who are not tech savvy?

  • Will they use the app every day, once a week, or less often – only for very specific tasks?

If you understand the users, it will be easier to determine the path for testing, as well as the needs for equipment, devices and software.

Say our target audience consists of both tech-savvy users and beginners. This means creating an intuitive user interface and providing detailed documentation or tutorials to support the less experienced users. Simply put, it lays the groundwork for an FAQ section inside the application.

If the answer to "will our users pay for the app?" is yes, we'll probably need to plan for testing in-app purchases and subscriptions, as well as the security and reliability of the payment flow.

All of this can be researched by analysts or product managers. But it's also useful for testers to understand this information and know how to get it from those specialists.

Time

How much time testing takes depends on the complexity of the application, its stability, and how accurately we can estimate our work – which you can do using the formulas from the previous section.

If we are testing an updated application or something very simple, it may take significantly less time than testing an application from scratch. It may also take less time than we planned – and no one will complain if we finish everything early.

Dependencies

Developers, managers, analysts, designers, tools and services are all dependencies. These things can impact your schedule and success more than any other element. It is very important not to lose sight of them during test planning.

The application itself

To understand how to test an application, we need to carefully study what it does, how it does it, and why we are developing it. On one of my projects, a significant part of the application's functionality involves integrations with government services, and understanding each of those services helps testing enormously.

We need to understand that some things we decide today may change tomorrow. And unfortunately, we all know that this happens often within projects. Application development is a fluid process and you need to be flexible to succeed.

Ignoring Test Automation

Mistake: Ignoring the choice of automation tools

It's clear that automation is cool. Specifically on our projects it:

  1. Saves time and resources.

  2. Increases accuracy and reliability.

  3. Flags problems before testing activities begin.

  4. Reduces routine work: smoke, regress.

Points 1, 2 and 4 are self-explanatory, so I'll describe point 3 in more detail – it's what prompted our team to think about automation on the project.

As I said earlier, our applications have several integrations with various services. And here's the situation: a regression is planned, QA picks up a release-candidate build of the mobile application, and it turns out that one (or even several) of the integration services is down and throwing 500s.

We conducted a Root Cause Analysis (RCA) and saw that one of the working integration schemes had changed. We had to roll back some changes or roll out a fix, breaking the Code Freeze for regression. The time spent investigating the problem and deciding what to do about the integration – all of it could have been avoided if we had had an easy way to catch the issue early.

We analyzed the situation and launched API auto-tests. Running on a schedule every morning, they showed us the overall picture – whether something had broken somewhere – and helped us react to the situation in time.
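As an illustration, such a morning check could look like this in Playwright's request API (the URL is a placeholder; our own checks ran in our API tooling):

import { test, expect } from '@playwright/test';

// Hypothetical integration endpoint – substitute your real service URL.
const HEALTH_URL = 'https://api.example.com/health';

test('integration service is up and not returning 5xx', async ({ request }) => {
  const response = await request.get(HEALTH_URL);
  // A scheduled run fails loudly if the service starts throwing 500s.
  expect(response.status()).toBeLessThan(500);
  expect(response.ok()).toBeTruthy();
});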

Examples of tests that can and should be automated

Often, custom development doesn't budget for full autotest coverage of the functionality. But that doesn't stop you from using simple automation tools to make your work easier – for example, integration autotests in Postman.

Postman is very user-friendly: it has a built-in AI assistant, and writing scripts for API tests is quite fast thanks to the built-in templates. But you can, of course, choose another tool or framework to suit your needs.

So, on one of our web projects, as a "pilot", we decided to explore a new framework for automating e2e tests – Playwright. The documentation is clear and simple, there is a built-in codegen mode that records a script while you click through the app on screen, and it supports JavaScript, TypeScript and Python.

Working with Playwright, we spent about six hours learning to write simple scripts in the framework, and writing the autotests themselves took about two full working days.
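For a sense of scale, a minimal Playwright e2e test looks roughly like this (the URL and locators are invented):

import { test, expect } from '@playwright/test';

test('user can log in and see the dashboard', async ({ page }) => {
  await page.goto('https://app.example.com/login');
  await page.getByLabel('Email').fill('qa@example.com');
  await page.getByLabel('Password').fill('secret');
  await page.getByRole('button', { name: 'Log in' }).click();
  // After login, the dashboard heading should be visible.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});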

As a result, the coverage sample came to 114 test cases, of which 79 were automated as e2e tests in Playwright. We run them as part of regression, which cut the work from 2 hours to 20-25 minutes. And that's just the minimum we've managed so far!

Let me summarize. Here's what, in my experience, you should start automating now:

  • Smoke tests – for example, to check backend hotfixes.

  • Regression tests – to reduce overall time and free up energy.

  • API integration tests – for example, if your application has third-party dependencies.

What's the result?

  • Don't use the same device for all tests; you may miss errors that are specific to other devices or platforms.

  • Regularly refresh your set of test devices, whether real or emulated.

  • Before starting testing, carefully prepare your test cases and data. If this is not done, there is a risk of incomplete testing of the application's functionality.

  • Adequately estimate the required time for testing. The more accurate the estimate, the lower the risk that you will test superficially and miss serious errors.

  • Use version control and bug tracking systems. This is important for successful quality management and product iterations.

  • When planning, take into account dependencies, time, client profile and the features of the application itself. This will improve the quality of your tests and the final product.

  • Automate!

Now let's discuss: how do you plan your testing and do you consider anything else that I haven't covered?
