We analyze the questions from the test automation meetup

1. Is it possible to reuse the same tests for the web and the mobile web versions?

If such a feature is needed, it can certainly be done: it is enough to pass the right config (web or mweb) when running the tests. But we don't do that. Our web and mobile versions are two different applications (not adaptive), so there are quite a lot of differences between them, although the page elements are, as a rule, similar. So we keep a base page that the desktop and mobile pages reference – in fact, inherit from.

That is, we reuse only the locators and accordingly update them in one place, while the tests themselves are written separately for each platform. As a result, about 20% of the tests are duplicated, but that is a sacrifice in the name of clarity and of avoiding the overhead such universality would bring. It seems to me that autotests should first of all be understandable, and should not carry any extra logic inside.
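
To illustrate, here is a minimal sketch of such a hierarchy in plain JavaScript; the class names and locators are illustrative, not our actual code:

    // base.page.js – shared locators live in one place
    class BasePage {
      constructor() {
        this.searchInput = '[data-qa="search-input"]';
        this.submitButton = '[data-qa="submit"]';
      }
    }

    // desktop.page.js – desktop-only elements extend the base
    class DesktopPage extends BasePage {
      constructor() {
        super();
        this.sidebarFilter = '[data-qa="sidebar-filter"]';
      }
    }

    // mobile.page.js – the mweb page overrides only what differs
    class MobilePage extends BasePage {
      constructor() {
        super();
        this.submitButton = '[data-qa="submit-mobile"]'; // overridden locator
      }
    }

    module.exports = { BasePage, DesktopPage, MobilePage };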

2. How do you run tests in 15 parallel threads and, if it's not a secret, do you run them on your own hardware or in the cloud?

We actually run tests in 10 threads and use the built-in CodeceptJS features for that… Our autotests are atomic, so everything was very simple to set up.
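
A single command is enough for this; the worker count is the only parameter:

    npx codeceptjs run-workers 10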

The run is performed on our own hardware; we have allocated more powerful servers for it (32 cores each). Taking into account that each server hosts three agents, this number of workers (10) turned out to be optimal. Playwright itself also lets you scale: it launches the browser once, and all the parallelization happens inside browser contexts – something like atomic tabs. Thanks to this it definitely consumes fewer resources.
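
A rough sketch of what Playwright does under the hood (the URL is illustrative):

    // one browser process, many isolated contexts ("atomic tabs")
    const { chromium } = require('playwright');

    (async () => {
      const browser = await chromium.launch();  // launched once
      const ctxA = await browser.newContext();  // cheap and isolated
      const ctxB = await browser.newContext();
      const pageA = await ctxA.newPage();
      const pageB = await ctxB.newPage();
      await Promise.all([
        pageA.goto('https://example.com'),
        pageB.goto('https://example.com'),
      ]);
      await browser.close();
    })();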

3. What tools do you use for screenshot testing?

This task consists of two stages: taking a reference screenshot and comparing it with the actual one. We take the screenshots with Playwright and compare them using utilities… although we did tweak that part a bit for Playwright, Allure, and for updating the reference screenshots through TeamCity. The recently released Playwright test runner can also compare screenshots out of the box.
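
For reference, a minimal sketch of the built-in comparison in the Playwright test runner (the test name and snapshot file are illustrative):

    const { test, expect } = require('@playwright/test');

    test('landing page looks the same', async ({ page }) => {
      await page.goto('https://example.com');
      // fails if the screenshot diverges from the stored baseline
      expect(await page.screenshot()).toMatchSnapshot('landing.png');
    });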

4. How do you solve the problem of version mismatch between the browser and the web driver? Hardcode, some frameworks? Or have you never run into it?

Playwright solves this issue at the architectural level: you pick the version of Playwright you want to use, and it brings the matching browsers with it. If you want to update the browsers, you just update the version of the library. There is no web driver layer at all, since the DevTools protocol is used instead.
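
In practice the coupling looks like this (the version number is illustrative):

    npm install -D playwright@1.22.0   # pin the library version
    npx playwright install             # fetches the matching browser builds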

5. Tell me, why do you need CodeceptJS? Why not use Playwright by itself with its own runner?

When we were choosing the tool, Playwright did not exist yet, and CodeceptJS had practically no competitors: all the other JS frameworks (Jest, Mocha, AVA, etc.) were geared more toward unit testing.

But on a new project we did use the Playwright test runner, and it handles all our tasks out of the box, including Allure support. We will not rewrite the current autotests, but we plan to use it in new projects.
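
A minimal config sketch, assuming the allure-playwright reporter package is installed (the worker count mirrors our setup):

    // playwright.config.js
    /** @type {import('@playwright/test').PlaywrightTestConfig} */
    module.exports = {
      workers: 10,
      reporter: [['line'], ['allure-playwright']],
    };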

6. How much CPU and RAM does one browser instance consume?

It strongly depends on what is running there: recently, for example, we saw one Chrome instance that used up to 140% of one CPU core and about 400 MB of RAM. It is best to tune the number of workers at launch time; we tried running with different numbers of threads, and on our hardware and our project 10 is optimal.

7. Do you run the browsers normally or headless? Is there a big difference in CPU and RAM?

Here our approach is simple: we write tests locally in headful mode, and in CI we run them headless, since no interface is needed there.
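
With the CodeceptJS Playwright helper this is a one-line switch; a sketch assuming a CI environment variable:

    // codecept.conf.js fragment
    exports.config = {
      helpers: {
        Playwright: {
          url: 'https://example.com',
          browser: 'chromium',
          show: !process.env.CI, // headful locally, headless in CI
        },
      },
    };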

8. Has the stability of your tests improved after switching from Selenium WebDriver to Playwright? If so, by how much?

We never had a set of identical tests on the two tools, so I cannot give exact numbers. Stability has unambiguously increased thanks to Playwright's own built-in waits. We have also been able to make our autotests more aware of what is going on, thanks to access to all network requests. For example, we often wait for a successful response from the backend before starting the next step – and it is very simple to do.
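
A minimal illustration with raw Playwright; the endpoint and selector are illustrative:

    // block the next step until the backend confirms success
    await Promise.all([
      page.waitForResponse(
        (resp) => resp.url().includes('/api/v1/orders') && resp.ok(),
      ),
      page.click('#submit-order'),
    ]);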

Using this approach, we also check that web analytics events are submitted. If you are interested, I will soon tell you more about this and give a master class at the TestDriven Conf 2022 conference.
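
The same mechanism works for analytics; a sketch with a hypothetical collection endpoint:

    // verify that the analytics hit actually went out
    const hit = await page.waitForRequest(
      (req) => req.url().includes('/collect') && req.method() === 'POST',
    );
    // assert on the tracked payload here
    console.log(hit.postData());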

9. Selenoid does not support Playwright. Do you have enough parallel threads when running tests? What will you do about scaling? After all, when you run a huge number of tests on one virtual machine, with many contexts being created, tests can surely become flaky at moments of peak CPU load.

Selenoid helped us out a lot when we used WebDriver, but now the need for it is gone. As the number of tests grows, this question really does arise – I will say even more: the current 30-minute run is already too long for us. Most likely we will now move toward impact analysis, for smarter test selection. There are already a couple of prototypes: a clumsy one, which parses a commit and tries to guess the affected functionality (see the sketch below), and a more expensive one, in the form of self-written code analyzers.
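
To give an idea of the "clumsy" variant, here is a deliberately naive sketch; the directory-to-tag mapping and the tags themselves are hypothetical:

    // impact-analysis.js – run only tests whose area was touched
    const { execSync } = require('child_process');

    const mapping = {
      'src/checkout/': '@checkout',
      'src/search/': '@search',
    };

    const changed = execSync('git diff --name-only HEAD~1')
      .toString()
      .split('\n');

    const tags = Object.entries(mapping)
      .filter(([dir]) => changed.some((f) => f.startsWith(dir)))
      .map(([, tag]) => tag);

    if (tags.length) {
      execSync(`npx codeceptjs run-workers 10 --grep "${tags.join('|')}"`, {
        stdio: 'inherit',
      });
    }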

10. What tools do you use to test the API?

We use Behat for API tests and Swagger for documentation. By the way, we recently attached Allure reports to the Behat tests, but we are not uploading them to a TMS yet. One of the tasks for the future is to add the ability to run tests through Allure TestOps for any selection (for example, for a specific endpoint) across all layers of autotests (including UI tests on mobile devices).

That's all, we hope it was useful. Join our cozy QA chat in Telegram to read more questions and answers and not to miss the announcements of new meetups.
