What combining manual and automated testing brings: the Wrike experience

Reading articles on web testing, two recurring positions stand out: 1) manual testing is dying out, and autotests (here and below, "autotests" means Selenium UI and REST tests) are everything; 2) automated testing is not a panacea, and manual testing is indispensable. At the same time, the articles show a clear trend toward rising requirements for software quality and product development speed. Wrike is exactly the case where these requirements are critical.

The product is already 12 years old, but it is still growing actively. Deployments happen once a day, sometimes twice. It is therefore critically important for us that regression runs exclusively on autotests. However, Wrike has over 30 scrum teams, and the automation team's headcount is not elastic. Under such conditions, waiting one or two sprints, at best, for manual scenarios to be automated is not an option. Our company's experience shows that a manual tester can write autotests independently, given certain nuances. In this article I will describe those nuances and explain why, in my opinion, this skill not only keeps pace with the trends but is also useful for the tester himself.

Standard process

What process are many teams used to? It varies from case to case, but the common features are roughly the same. There are separate automated and manual testing departments. Manual testers may be distributed among scrum teams, while automation engineers, as a rule, have no ties to any specific team.

When working on new functionality, the tester creates test scenarios and marks some of them, in an agreed way, for automation. If existing cases are adjusted, those are marked too so that the code can be updated. The marked tests are then handed over to the automation department, which picks up the task of fixing current autotests and writing new ones in one of the following sprints. Besides programming test scenarios, the automation engineer's duties include running autotests, analyzing the results, and supporting and developing the test project. In effect, the automation department acts as an outsourced contractor, and manual testers are a kind of customer.

The customer additionally spends time writing a detailed and precise specification, periodically discussing implementation approaches, and selecting the necessary tests. There is also the risk that bugs slip through while the autotests do not yet exist. And do not forget the layer of technical tasks that could be verified by automated tests alone, saving a lot of time; wherever automation is still missing, such tasks have to be checked by hand.

The contractor, not being deeply immersed in the functionality the team has been working on, needs time just to get a surface-level grasp of the task and the specification. It is also likely that a test will be translated into code inaccurately, so it will check something other than what was intended. Accordingly, the effectiveness of the test base decreases.

The automation team, being the only contributor to the test project, has full control over its code base, which makes it easy to develop in any direction. However, time for this runs short as the load from other teams grows. The problem could be solved by expanding the staff, but then the cost of automation would exceed its benefit. Even offloading part of the work by letting manual testers run tests and analyze failures will not bring the desired result: without tools for debugging tests, they may not realize that a test failed simply because an xpath changed, and so on.

As a result, autotests under this scheme do not keep up with the growth of the product, which leads to poor code coverage. Due to inaccurate interpretation of the specification, tests may miss bugs. Tests go stale for long stretches, failing ones are not repaired immediately, and it is hard for manual testers to say offhand which parts of the system are well covered by automation. Autotests become a kind of black box that testers distrust. Hence the number of redundant manual checks grows, task deadlines stretch, and quality declines in the long run.

You can live with these shortcomings, but the larger the product and the company, the more painful the process becomes for everyone involved, and, most importantly, the harder it is to follow the trend of increasing speed and improving quality. The tester himself becomes a hostage to routine, with practically no time left for professional development.

The Wrike way

So, here is how it works, using my own team as an example. There are still automated and manual testing teams, and the starting conditions are similar, but then the differences begin. Manual testers are distributed among their scrum teams, and each scrum team has its own automation engineer. Sometimes one engineer is allocated not to one but to two teams, if the load allows.

When working on new functionality, the tester writes checklists and then runs manual checks against them. The minimum necessary subset of tests from the checklist is automated, and the tester himself writes these autotests while the feature is in development or testing. The written code then goes to a reviewer for review. With rare exceptions, a task cannot be released without autotests.

Of course, Wrike has no formal requirement that manual testers write autotests; this remains at each team's discretion. You can hand everything over to the automation engineer. You can limit yourself to fixing broken tests and/or writing new ones by analogy, and delegate the more complex tasks (creating new back-end handles, Page Objects, or test steps and classes, or extending old ones) to the dedicated automation engineer. It is all up to you, but it would be a shame to pass up the advantages that writing autotests yourself provides.

Our entire regression runs on autotests, and manual testers' duties include running them and analyzing failures. For each branch the team works on, autotests run as the initial and final quality gate. So for those who write autotests themselves, it is much easier to understand why a test failed on their branch. Sometimes tools like rerun and an Allure report are genuinely enough: the screenshot and the steps make the reason for the failure clear. Often, though, the best assistant is the ability to run the tests locally, play with the steps or run them in debug mode, and compare the expected and actual xpath. Without experience with the test project, this takes a lot of time, or you have to pull the dedicated automation engineer away from his work.
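A rerun tool of the kind mentioned above is, in essence, a way to retry a flaky check a few times before declaring it broken. Below is a minimal sketch of that idea in plain Java; the helper and all names are invented for illustration and do not show Wrike's actual tooling:

```java
import java.util.concurrent.Callable;

public class Rerun {
    // Retry a check up to `attempts` times; rethrow the last failure if none succeed.
    public static <T> T withRetries(int attempts, Callable<T> check) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return check.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // A check that fails twice, then succeeds — a typical flaky-test shape.
        String result = withRetries(3, () -> {
            if (++calls[0] < 3) throw new IllegalStateException("element not found yet");
            return "passed on attempt " + calls[0];
        });
        System.out.println(result); // passed on attempt 3
    }
}
```

A retried failure that passes is a hint the test is flaky rather than the feature broken; a failure that survives all retries is worth debugging locally.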

In addition, writing autotests yourself makes it possible to run them before the feature is released. The tester always knows the coverage level of his part of the system, and purely technical tasks are verified by automated tests alone, which saves the team significant time and resources. The tests themselves stay relevant, since failures are fixed before release: broken tests are repaired immediately, in the same branch where the new ones are written.

A manual tester is maximally immersed in the team's task, so the necessary minimum of automated tests can be selected to cover most cases. The selection is revised several times during testing, as manual checks reveal the functionality in more detail with all its nuances. Accordingly, the effectiveness of such tests grows. Writing autotests also helps you better understand the application architecture, the components in use, and the front-end to back-end interaction. Ultimately this knowledge supports a more conscious and effective approach to product testing. For example, if another team changes a shared component, you are more likely to know in advance whether your scope will be affected, because working with xpath teaches you which components are used in your part of the application.

It can be argued that writing autotests takes time. Yes, tasks will ship one to three days later than usual, but in the long run it pays off. Moreover, there are ways to optimize. For example, while a feature is still in development, you can draw up the necessary checklists and prepare a skeleton for the tests, saving time later. If the framework already covers the functionality, you can add or correct existing xpaths and, if necessary, create a new Page Object or adjust the steps. Then, at the autotest-writing stage after manual checks, you just assemble the blocks of code in the right order.
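As an illustration of such a prepared skeleton, here is a minimal Page Object sketch in plain Java. The class name, the locators, and the action log that stands in for real WebDriver calls are all invented for this example; the real framework's API will differ:

```java
import java.util.ArrayList;
import java.util.List;

// Page Object sketch: locators are collected in one place, so when the UI
// changes, only this class needs editing — not every test.
public class TaskViewPage {
    public static final String TITLE_XPATH = "//div[@data-test='task-title']";
    public static final String COMPLETE_BUTTON_XPATH = "//button[@data-test='task-complete']";

    // Stands in for real browser-driver calls in this self-contained sketch.
    private final List<String> actions = new ArrayList<>();

    public TaskViewPage open(String taskId) {
        actions.add("open task " + taskId);
        return this;
    }

    public TaskViewPage clickComplete() {
        actions.add("click " + COMPLETE_BUTTON_XPATH);
        return this; // fluent style: page actions chain like building blocks
    }

    public List<String> performedActions() {
        return actions;
    }
}
```

With the skeleton and locators prepared while the feature is in development, writing the actual test afterwards is mostly a matter of chaining these actions.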

Thanks to the framework developed by our automation team, writing autotests mostly comes down to assembling code from blocks, like Lego. This simplicity lets manual testers adapt quickly and start writing autotests by analogy with existing ones. From my own experience: it took about two weeks from the day I joined Wrike to the first autotests I wrote, alongside other tasks.
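The "Lego" assembly might look something like this: ready-made steps that a tester chains into a scenario. All step and class names are invented here; this is only a sketch of the style, not Wrike's framework:

```java
import java.util.ArrayList;
import java.util.List;

// Each step is a ready-made block; writing a test means chaining them in order.
public class TaskScenario {
    private final List<String> log = new ArrayList<>(); // records executed steps

    public TaskScenario loginAs(String user)       { log.add("login:" + user);   return this; }
    public TaskScenario createTask(String title)   { log.add("create:" + title); return this; }
    public TaskScenario assignTo(String user)      { log.add("assign:" + user);  return this; }
    public TaskScenario checkAssignee(String user) { log.add("check:" + user);   return this; }

    public List<String> log() { return log; }

    public static void main(String[] args) {
        TaskScenario s = new TaskScenario()
                .loginAs("qa@example.com")
                .createTask("Review checklist")
                .assignTo("dev@example.com")
                .checkAssignee("dev@example.com");
        System.out.println(s.log());
    }
}
```

A tester writing a new test by analogy only has to pick the right blocks and their order; the hard part — implementing the steps themselves — is already done by the framework.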

The quality of the written autotests is controlled through code review: not a single test branch reaches the release without one. It is also a good learning moment, because the tester draws useful information from the comments on his code and accumulates experience with good solutions, for example, using the standard Java library more efficiently or defining an xpath more precisely. Next time it will be clear how best to handle a similar situation.
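On the point about defining xpath more precisely: a positional locator and an attribute-anchored one may find the same element today, but only the latter survives layout changes. A small self-contained demonstration using the JDK's built-in XPath engine (the markup and attribute names are invented):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class XpathDemo {
    // A fragment resembling markup a UI test might target.
    public static final String MARKUP =
        "<div><ul>"
      + "<li class='item'>Inbox</li>"
      + "<li class='item' data-test='task-title'>My task</li>"
      + "</ul></div>";

    // Brittle: depends on position; breaks if an item is added above.
    public static final String BRITTLE = "/div/ul/li[2]";
    // Precise: anchored to a stable test attribute; survives reordering.
    public static final String PRECISE = "//li[@data-test='task-title']";

    public static String textAt(String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(MARKUP.getBytes(StandardCharsets.UTF_8)));
        Node node = (Node) XPathFactory.newInstance().newXPath()
            .evaluate(xpath, doc, XPathConstants.NODE);
        return node == null ? null : node.getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(textAt(BRITTLE)); // finds "My task" — until the list changes
        System.out.println(textAt(PRECISE)); // finds "My task" regardless of position
    }
}
```

Reviews that push for the second style pay off directly in fewer tests failing for purely cosmetic UI changes.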

Of course, developing the test project and framework and training manual testers consumes the automation team's resources, especially at the start, but it seems to me these efforts pay off fully. We have many improvements in the automated testing environment that make our work easier. The product itself has good coverage, so we can rely on regression. This speeds up rolling features out to users and spares testers' nerves considerably.

In our team's experience, this is one of the best processes for working on a large, rapidly developing product in a large company. Moreover, it matches current trends of improving software quality and accelerating its delivery to users. The tester practically gets rid of the routine, grows in several directions, and looks at the application from several angles.

Briefly about the main thing

For convenience, I will gather the advantages for a manual tester in one place, to make their significance easier to appreciate individually and together:

  • You form a more complete picture of the level and quality of automation in your scopes;
  • Autotests are available before the feature is released, so you can quickly check its quality at any time;
  • The effectiveness of the autotests grows, as does the effectiveness of testing in general;
  • You develop a more informed and effective approach to testing;
  • You get rid of monotonous manual regressions and lengthy evaluative testing;
  • You grow personally and develop your competencies.

To summarize

Of course, there is no silver bullet: what suits one company may be flatly rejected by another. In Wrike's case, the product grows extremely fast, and there is no time for lengthy manual regressions and evaluative testing. That role is performed by automated tests, which cover almost every component of a huge product. This helps maintain quality, optimize resources, and deliver new functionality to users faster.

The bad news is that bugs are unavoidable, though in our case they are most often edge cases. The good news is that each bug, as it is fixed, also gets covered by autotests.
For some reason, the community commonly rejects the idea of manual testers writing autotests. The two most popular arguments from testers are: "They don't pay extra for this" and "We have enough work already." For me personally, both arguments fall apart once I realize that I can run automated tests while a feature is still in development and quickly understand whether it works correctly. That is worth a lot. Our job is to improve and maintain product quality, so every opportunity to support that should be used. Since I started writing autotests, there is less routine in my work and more awareness.

P.S. This article reflects only our team's experience and may not match your beliefs. So I will be glad to hear about the approaches that guide your work, and I welcome healthy criticism and the chance to discuss the article in the comments.
