Another way to deploy Python autotests for a web application, based on integration with QASE

Hello! I’m Ilya, a beginner test automation engineer. In this article I will briefly describe how, thanks to a combination of favorable circumstances, we managed to quickly stand up an autotest system for our web application.

Tested web application

This experience may be useful for testers at companies where test automation is only just getting started and there are no dedicated server maintenance/deployment specialists.

Web application stack

Backend: Python + MongoDB + Elasticsearch

Frontend: React application

Docker and Yarn are used to deploy the application on the server.

What goals did we want to achieve by creating the infrastructure described below:

  1. Critical functionality is tested on every GitHub push/merge within 15 minutes, giving developers the ability to get test results quickly and fix bugs without losing context.

  2. Full regression testing runs every night and in the morning gives a clear picture of the current state of the product.

The final solution turned out like this:

  • A server on a current Ubuntu release. Docker and Yarn are used to bring up the backend and frontend of the web application, Flask receives webhooks, and the autotests themselves, written in Python, rely on Selenium, Selenoid, and Pytest.

  • After a push to GitHub, the autotests are launched on the server, and once they finish, a test report arrives in the Slack chat.

  • The entire infrastructure costs $60/month (only the server is paid; everything else is free).

Each test cycle looks like this:

  1. Fresh backend or frontend code is pushed to GitHub;

  2. On the push event, GitHub sends a webhook to the server where the tests run;

  3. When the server receives a webhook, a bash script is run;

  4. The bash script pulls updates from the repository the webhook came from and starts Yarn to bring up the frontend;

  5. When the frontend has come up, the bash script runs a Python function which, using the free API, sends a command to QASE to start a test run. With a second request to QASE, we get all the information about the autotests themselves (more on that below). After that, Pytest runs the autotests;

  6. Each autotest that is run triggers the following events:

    1. The web application’s database is cleared;

    2. Selenoid launches a separate container with the selected browser version and the required screen resolution, and records a video of the test as it runs;

    3. While the test runs, its execution time is recorded;

    4. When the test finishes, information about its success (or failure), along with its duration, is sent via the API to QASE, so that this information is stored within the current test run;

    5. When the test finishes, the browser container is killed and the next test starts.

  7. After the last test, we send an API request to QASE to mark the test run as complete;

  8. As soon as QASE receives the command to complete the test run, it generates a test report with nice graphs and all the errors found, and sends it to the testers’ Slack chat, where testers, developers, and management can see it.
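Steps 2–4 above can be sketched as a small Flask receiver. This is a minimal illustration, not the project’s actual code: the route name, script name, and payload fields are assumptions, and the real script also runs Yarn and kicks off the tests.

```python
# Sketch of a Flask webhook receiver for GitHub push events.
# "run_tests.sh" and the "/webhook" route are illustrative names.
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def on_push():
    payload = request.get_json(silent=True) or {}
    repo = payload.get("repository", {}).get("name", "unknown")
    # Launch the pull-and-test script in the background so GitHub
    # gets its response immediately instead of waiting ~15 minutes.
    subprocess.Popen(["bash", "run_tests.sh", repo])
    return {"status": "accepted", "repo": repo}, 202

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Returning 202 quickly matters here: GitHub times out webhooks that do not respond promptly, so the long-running work must not block the request handler.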

How are autotests implemented?

The elements that make up the autotests can be divided into five levels:

Selectors (buttons, text, images, and other web-application interface elements).

Actions with selectors (the simplest Python functions for operating on selectors. Example: find a button and click it; find a field and enter a value).

An example of selectors and actions with them.

Functions (again Python functions, only more complex. In code terms, a function operates on actions with selectors and decides whether a given action should be performed at the moment. Example: our web application has a user profile that can be edited; all the operations for editing that profile are collected in one function. Another function: user registration. There are several ways to register, and all of them are described inside one function.)

Function order (each autotest is a certain sequence of actions in the web application, and the order of functions determines exactly how events unfold in the autotest. Example: first the registration function registers a user, then a second function creates a “program” (a specific part of our web application; I won’t go into details), then a third function invites another user to that program, and so on.)

Thus, QASE can store the autotest itself and the order of functions performed in it. For some functions, the parameters contain the number of the test case containing the input data for this function. The following picture is just an example of input data for action.program_create()

Input data for functions (almost all of the functions expect input data to operate on. For example, the function for creating a program expects the program’s name, description, cost, and much other information that lets it create exactly the program the tester wants.)

Sample input for a function. This piece of code is stored in QASE at number 39. It is this number that we indicate in the function parameters when we describe in QASE the order in which functions in the test case are executed.
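As a hedged illustration, the input data stored in a QASE test case for action.program_create() might look like the dictionary below; every field name and value here is invented, since the real data lives in the QASE case.

```python
# Hypothetical input data for action.program_create(); all fields are
# invented for illustration -- the real data is stored in a QASE case.
program_data = {
    "name": "Marathon 2023",
    "description": "A 30-day training program",
    "cost": 49.99,
    "is_public": True,
}

def validate_program_data(data):
    """Check that the stored case contains every field the function needs."""
    required = {"name", "description", "cost", "is_public"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"QASE case is missing fields: {sorted(missing)}")
    return True
```

Validating the data before driving the UI gives a readable error when someone edits a case in QASE and forgets a field, instead of a cryptic selector timeout mid-test.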

Why store some of the information in QASE?

This way, we divide the work on autotests into two parts. People who know Python and understand selectors work with the code.

Colleagues who work in QASE can assemble any number of autotests from functions and input data, checking the product inside and out from a variety of angles.

Since the tests consist of elements stored in different systems, before running them we need to assemble everything into a single whole:

A file that collects all elements into a single whole. Calling this file with the pytest command will start running the autotests.
  1. First of all, we get all the data about the test cases to be run from QASE. We do this with the qase_get_data() function, which receives the full list of test cases via the API and, for each specific test, the order in which its functions are executed and their input data.

  2. With the help of parameterization, pytest collects all autotests in one queue.

  3. With exec() (use this function only when you are sure that the received data is secure) we pass pytest information about the functions that will be executed in each test.

  4. Next, pytest will run autotests according to the queue.
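The four steps above might be sketched like this. The function name qase_get_data() comes from the article; its return shape and the step strings are assumptions, with the API call replaced by hardcoded data for illustration.

```python
# Sketch of the collector file: QASE data is parametrized into one pytest
# queue, and exec() replays each stored step. Hardcoded data stands in for
# the real QASE API call; the return shape is an assumption.
import pytest

def qase_get_data():
    # In reality this calls the QASE API; simplified here.
    return [
        {"case_id": 12, "steps": ["result = 2 + 3"]},
        {"case_id": 13, "steps": ["result = 10 * 4"]},
    ]

@pytest.mark.parametrize("case", qase_get_data(),
                         ids=lambda c: f"case-{c['case_id']}")
def test_case(case):
    # exec() is only acceptable because the step strings come from our
    # own QASE project, never from untrusted users.
    scope = {}
    for step in case["steps"]:
        exec(step, scope)
    assert "result" in scope
```

Running `pytest` on this file yields one collected test per QASE case, which is exactly the “queue” described above.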

Getting a test report

Since we initiate the test run in QASE before it starts and pass the results to QASE as the tests go, getting a test report becomes quite simple.

QASE has Slack integration. And on the trigger of the end of the test run, the report will automatically be sent to the desired Slack chat, where any interested person can read it.

An example of a small report in Slack on a run of 6 autotests. One failed. Clicking the test run’s name opens the full test report in the browser, listing all errors, successfully passed test cases, and the history of previous runs.

Instead of a conclusion, a few nuances

  • During a single-threaded run of the autotests, the server we chose sits at 60% load, so tests cannot be run in parallel on it, although the architecture allows for that. In other words, when we need to shorten the autotest runs, the budget will have to grow to roughly $300 for a server with 32 GB of RAM.

  • Some autotests require uploading files (avatars, documents, etc.) to the web application. We store such files in QASE as well and download them when it is time to run a test case that uses them.

  • So that autotests can be run without being tied to GitHub, we made a separate Flask route that accepts the numbers of the test cases to run as parameters. If no parameters are given, a test run is launched with all available autotests.
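That manual-run route might be sketched as follows; the route path, query parameter name, and script name are all assumptions.

```python
# Hypothetical manual-run route: case ids arrive as a comma-separated
# query parameter, and an empty parameter means "run everything".
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_selected():
    # e.g. POST /run?cases=12,39,41 ; no parameter -> full run
    raw = request.args.get("cases", "")
    case_ids = [int(c) for c in raw.split(",") if c.strip()]
    cmd = ["bash", "run_tests.sh"]
    if case_ids:
        cmd.append(",".join(map(str, case_ids)))
    subprocess.Popen(cmd)  # run in the background, respond immediately
    return {"scheduled": case_ids or "all"}, 202
```

This reuses the same background-script pattern as the GitHub webhook, so a tester can trigger a targeted run with a single curl call.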
