Monologue of a QA Lead seasoned in battles for code quality


Each of us is doomed to notice a flaw
"Universe and Comb"
Oxxxymiron (Miron Fedorov), on QA

A year older and wiser, we continue the Team Lead's Monologue and share our experience of testing our product, SafePhone.

Over the past year, we have dusted off G. Myers' classic "The Art of Software Testing", once again admired the author's wisdom, and agreed with his claim that it is impossible to achieve 100% test coverage of any reasonably complex program.

We have also confirmed that the Pareto principle applies to testing: as a rule, 80% of product coverage is achieved by 20% of the tests. Read on to find out how we identify those 20% of target tests and improve our QA processes.

Automate, or test by hand?

Most tests should run automatically, the QA team should have its own developers who write autotests, and the remaining testers should do exploratory testing and, together with analysts, decide which test cases to automate first.

That is the wonderful and, we hope, not too distant future we are striving for. Our main obstacle is device-specific features that have to be checked manually on physical devices: for example, blocking a factory reset or blocking reflashing the device from recovery. Working with recovery on Android cannot be automated in principle, because while the device is in recovery, the main Android system is not running.

Device-specific behaviour is not limited to recovery. The application that manages the device needs special rights, such as Device Owner or Profile Owner, and those cannot be obtained on public device farms, where UI testing usually runs.

But even with such rights, the device's reaction to management commands cannot always be verified programmatically. For example, it is fairly easy to check that the camera is blocked, but there is simply no way to check that multi-user mode is blocked. So for the foreseeable future, the share of manual testing of our mobile clients will remain hefty.
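Before running policy tests on a device at all, it helps to confirm that the agent really holds Device Owner rights; otherwise every prohibition check fails silently. Below is a minimal sketch of such a pre-check, assuming adb is on the PATH; the package name, serial and dumpsys parsing are illustrative, not our production harness.

```python
# Sketch of a pre-flight check before device-policy tests: is our agent
# actually the Device Owner on the attached device? The package name and the
# string matching against dumpsys output are assumptions, not production code.
import subprocess

MANAGEMENT_APP = "com.example.safephone.agent"  # hypothetical package name


def device_owner_is_set(serial: str) -> bool:
    """Return True if the agent shows up as Device Owner in dumpsys device_policy."""
    dump = subprocess.run(
        ["adb", "-s", serial, "shell", "dumpsys", "device_policy"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "Device Owner" in dump and MANAGEMENT_APP in dump


if __name__ == "__main__":
    assert device_owner_is_set("emulator-5554"), \
        "Agent is not Device Owner; policy tests would produce false negatives"
```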

To somehow reduce the cost of manual testing, we invite our clients to "get acquainted" 🙂 with new versions before their official release. Customers enjoy seeing the product evolve and do not skimp on feedback, and we ship a more useful and stable release. Win-win!

What do we cover with autotests, and what do we test by hand?

Finding the cherished 20% of tests that can cover 80% of the code is not easy.
The authors of How Google Tests Software suggest writing a test plan in 30 minutes in order to discard everything unnecessary. The problem is that developers, testers, support, managers and customers all understand the value of a product differently.

In order not to argue about who is less wrong, we decided first of all to address the "pains" of our clients, who, after all, have the most say.

Here’s what we got.

1. First of all, autotests should cover the functions in which clients have already encountered bugs.

A client should run into a given problem no more than once: no matter how many major releases they install in the future, the problem must not reappear in any of them.
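In practice this turns into a simple rule: every client-reported bug gets its own regression autotest, pinned to the ticket number. A rough sketch of what such a test might look like is below; the pytest fixtures, the policy API and the ticket ID are all illustrative, not our real code.

```python
# Illustrative regression test for a (hypothetical) client-reported bug:
# the camera prohibition was lost after the agent was reinstalled.
import pytest


@pytest.mark.regression
def test_camera_ban_survives_agent_reinstall(api_client, enrolled_device):
    """SAFEPHONE-1234 (made-up ID): the policy must survive an agent reinstall."""
    api_client.set_policy(enrolled_device.id, camera_allowed=False)
    enrolled_device.reinstall_agent()
    policy = api_client.get_effective_policy(enrolled_device.id)
    assert policy["camera_allowed"] is False
```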

2. Routine is next in line for autotests. By routine we mean functions that take a long time to check manually: either functions with a large number of parameter combinations, or functions whose verification involves many repetitive operations of the same type, the kind that make a tester's eye start to twitch and glaze over.

For example, in our product an administrator can be assigned a role with an arbitrary set of UI access permissions. Even at the level of available menu items there are plenty of permissions, and ours are more granular still, sometimes going down to individual buttons. So even testing the typical roles by hand is no small task.
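This is exactly the kind of combinatorial routine that a parameterised autotest handles better than a human. A sketch of the idea, with made-up permission names and an invented admin-console page object, might look like this:

```python
# Sketch: generate every subset of permissions as a separate test case and
# check that each UI control is visible exactly when the role allows it.
# Permission names and the admin_console fixture are illustrative.
import itertools
import pytest

PERMISSIONS = ["view_devices", "edit_policies", "manage_admins", "export_reports"]

ROLE_CASES = [frozenset(combo)
              for n in range(len(PERMISSIONS) + 1)
              for combo in itertools.combinations(PERMISSIONS, n)]


@pytest.mark.parametrize("granted", ROLE_CASES,
                         ids=lambda s: "+".join(sorted(s)) or "none")
def test_ui_matches_role_permissions(admin_console, granted):
    role = admin_console.create_role(permissions=granted)
    page = admin_console.login_as(role)
    for permission in PERMISSIONS:
        # A button must be visible if and only if the role grants the permission.
        assert page.button_visible(permission) == (permission in granted)
```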

Another example of automating routine is populating the system with the necessary data. Of course, you could start work on every new build by entering the data through the UI, but that takes a long time, and 5-10 repetitions are enough for the process to become thoroughly tedious. So most of the time the data is loaded automatically via the API.
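As an illustration, seeding a test stand over the API can be as small as the sketch below; the endpoints, payloads and base URL are assumptions rather than our actual interface.

```python
# Sketch: populate a fresh build with groups and fake devices over the API
# instead of clicking through the UI. Endpoints and fields are hypothetical.
import requests

BASE_URL = "https://safephone-test.local/api/v1"  # hypothetical test stand


def seed(session: requests.Session) -> None:
    """Create a few groups and enroll several fake devices into each."""
    for group_name in ("Accounting", "Field engineers", "Pilot"):
        group = session.post(f"{BASE_URL}/groups", json={"name": group_name}).json()
        for i in range(5):
            session.post(
                f"{BASE_URL}/devices",
                json={"imei": f"0000000000{i:05d}", "group_id": group["id"]},
            ).raise_for_status()


with requests.Session() as s:
    s.headers["Authorization"] = "Bearer <test-token>"  # substitute a real token
    seed(s)
```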

When there is a lot of routine, you have to start with the functions customers need most. They are easy to identify: just ask yourself, or the client, "A bug in which function would have to be fixed at night or over a weekend if it happened in production?"

3. The casualties of refactoring must be minimized. Refactoring is essential, but when one or more development teams take it on, expect trouble: by Murphy's Law, there is bound to be some scenario that the client needs but that is not covered by the requirements.

To minimize the potential damage, you need to agree on the target behaviour up front, and this is where test cases help. If the test cases are routine, they can be automated to confirm that the system behaves the same before and after the refactoring. The more refactoring there is, the greater the need for autotests.
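One cheap way to do this is a snapshot ("golden master") test: record the system's answers for a fixed set of inputs before the refactoring and compare the refactored build against that recording. A minimal sketch, with an invented api_client fixture and file layout:

```python
# Sketch of a before/after refactoring check: compare the server's effective
# policy resolution against a recorded baseline. Client calls are illustrative.
import json
import pathlib

SNAPSHOT = pathlib.Path("snapshots/policy_resolution.json")


def collect(api_client, device_ids):
    """Record the effective policy the server resolves for each device."""
    return {device_id: api_client.get_effective_policy(device_id)
            for device_id in device_ids}


def test_behaviour_unchanged_by_refactoring(api_client):
    current = collect(api_client, ["dev-1", "dev-2", "dev-3"])
    if not SNAPSHOT.exists():  # the first run records the baseline
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(json.dumps(current, indent=2, sort_keys=True))
    baseline = json.loads(SNAPSHOT.read_text())
    assert current == baseline, "Refactoring changed observable behaviour"
```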

Refactoring sometimes claims "innocent victims": code is reworked in one place and the system's behaviour changes in another. In real life the relationships between components are rarely documented in detail, so the developer has to report the possible impact of their changes on neighbouring components.

People rarely think about others, and developers even less often, so getting an answer about the impact of a change is not easy. Developers need an incentive: for example, require them to write an autotest for every bug caused by an unaccounted-for side effect. After a few such "awards", a developer starts thinking much harder about the relationships between components.

How to improve product quality without autotests?

TestOps is not only about technology but also about processes. A couple of years ago we ran into a situation where builds of different components that were incompatible with each other started being handed over to testing. Hunting for compatible builds is an entertaining but tiresome exercise, so we started instilling QA hygiene habits in the team.

Everyone is responsible for quality, not just testers

The first rule of hygiene is an inspection at the door. Testers write the developers a "rider" in the form of a smoke test, which the developers must run themselves before handing builds over to testing. In this case, the testers act as internal clients.

There are several advantages at once:

  1. Testers began studying the requirements earlier and giving feedback on them before development is finished; without that, you cannot write a smoke test.

  2. Incompatible builds stopped reaching testing.

  3. During joint debugging, developers began finding and fixing bugs themselves, which cut the number of tasks bounced back to development several times over.

  4. The number of autotests has grown. Developers waiting on an API started writing tests for it themselves, so as not to be surprised later that the API does not work the way they expected. A kind of TDD; a sketch of such a test follows the list.
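The sketch below shows the kind of contract test a developer might write against an API they are still waiting for; it later doubles as part of the smoke-test "rider". The endpoint, fields and fixtures are assumptions for illustration.

```python
# Sketch of a contract test written before the API exists: it fails until the
# endpoint behaves as agreed. URL, auth and response fields are hypothetical.
import requests


def test_device_list_contract(base_url, auth_headers):
    response = requests.get(f"{base_url}/api/v1/devices",
                            headers=auth_headers, timeout=10)
    assert response.status_code == 200
    devices = response.json()
    assert isinstance(devices, list)
    for device in devices:
        # Fields the UI team relies on must be present in every element.
        assert {"id", "imei", "group_id", "last_seen"} <= device.keys()
```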

Bürokratisch, praktisch, gut

It may seem unpopular, but another important rule of QA hygiene is deliberate bureaucracy.

For new functions, checklists have to be drawn up and agreed upon with the analysts. This reduces the chance that we will forget to check something important in the pre-release rush, and it lets us assess the readiness of a release or sprint at any moment by how far the current checklists have been completed.

Autotests must be described as test cases. People argue that the best documentation for code is the code itself, but not every tester, analyst or manager can read test code, whereas every developer can read and write plain Russian (even if he dreams of the words being auto-replaced with profanity). So at the very least a brief plain-language description of what the test checks and how is necessary.

If existing functions change, the requirements, checklists and test cases have to be updated. Otherwise the QA team will check the new release against outdated documents and file spurious bugs; the developers will investigate them and discover... that they are not bugs at all. In the end, everyone seems to have been working, yet no real work got done.

Implementation constraints must be written down before a feature is handed over to testing. Otherwise testing turns into the same leapfrog as with outdated requirements: "This doesn't work." "And it shouldn't." "How was anyone supposed to guess that?!" "I thought it was obvious."

Finally

QA keeps us from getting bogged down in product support. But the optimal balance between quality and the cost of achieving it slips away every time we try to pin it down in our regulations. Whether to keep trying or to accept that is up to each team. We have made our choice and are stubbornly moving forward 🙂 How do you develop QA and involve product teams in it? Share in the comments.
