Nobody likes being put under load. Unless, that is, it's a microservice at hh.ru.
Hello everyone, my name is Ilya, I'm a backend developer on the Architecture team. In this article I'll talk a little about load testing.
Types of performance testing
It is customary to distinguish several types of performance testing. Latency testing measures response times, while throughput testing measures how many requests the system can handle. Endurance testing, also known as stability testing, runs for many hours at an average load level; it is carried out, for example, to detect memory leaks.
There is also degradation testing, which checks how the system behaves when its performance degrades or other problems occur. Capacity planning tests determine the resources required to run the system under given conditions. Stress testing evaluates how the system performs under unforeseen circumstances, and whether it is able to recover.
And, of course, load testing, which evaluates system performance and response time under a given target load.
How load testing is done at hh.ru
What we check: we apply additional load to the site during prime time, using analytics data to generate user requests and thereby simulate real load. What we use: the Atlassian Bamboo continuous integration system to manage test runs, and Yandex.Tank as the load generator.
How we check: every week we prepare the data and build a load profile. Based on analytical data, we derive the relative share of each URL category in the total load. Every day we generate the requests to be sent, following the profile and its proportions. We then run the tests, replaying the requests prepared in the previous step.
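The step above can be sketched roughly as weighted sampling from the profile. This is a minimal illustration, not the actual hh.ru pipeline; the category names and weights below are made up.

```python
import random

# Hypothetical load profile: the relative share of each URL category
# in the total load (in reality derived from analytics data).
profile = {
    "/vacancy": 0.5,
    "/resume": 0.3,
    "/search": 0.2,
}

def build_requests(profile, total):
    """Sample `total` request paths according to the profile proportions."""
    categories = list(profile)
    weights = list(profile.values())
    return random.choices(categories, weights=weights, k=total)

random.seed(42)  # deterministic for the example
requests = build_requests(profile, total=10_000)
print(len(requests))
```

The resulting list of requests can then be fed to the load generator in whatever ammo format it expects.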
To collect and analyze the results, we use our own plugin for Yandex.Tank; fortunately, the tool is designed to be extensible. The results are sent to the corporate messenger. The summary contains the test duration, response statistics, and errors. The errors are analyzed, and based on that analysis we take measures to improve the stability of the site.
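As a rough sketch of the kind of aggregation such a summary involves (field names here are illustrative, not the actual plugin's data model):

```python
from collections import Counter

# Hypothetical per-request results as they might arrive from the load tool.
results = [
    {"status": 200, "latency_ms": 35},
    {"status": 200, "latency_ms": 48},
    {"status": 503, "latency_ms": 1200},
]

def summarize(results):
    """Fold raw results into a short summary suitable for a messenger post."""
    statuses = Counter(r["status"] for r in results)
    latencies = sorted(r["latency_ms"] for r in results)
    return {
        "requests": len(results),
        "errors": sum(n for code, n in statuses.items() if code >= 500),
        "p50_ms": latencies[len(latencies) // 2],
        "statuses": dict(statuses),
    }

summary = summarize(results)
print(summary)
```

A message built from such a summary is enough to decide at a glance whether the errors need a closer look.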
Technologies and tools
The company needs load testing of individual services, both in production and in the test environment, with the goal of catching possible degradation. To cover this need, we built a service for managing load tests on top of our open-source framework frontik, with Yandex.Tank under the hood. The service lets you start and stop tests, get information about their status, and save and reuse the parameter presets with which they are launched.
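A preset here is essentially a named, reusable bundle of test parameters. A minimal sketch of such a structure might look like this (the field names and the in-memory store are illustrative assumptions, not the service's real schema):

```python
from dataclasses import dataclass, field

@dataclass
class Preset:
    """A reusable set of load-test parameters."""
    name: str
    target: str          # service under test
    rps_schedule: str    # e.g. a Yandex.Tank-style schedule string
    extra: dict = field(default_factory=dict)

class PresetStore:
    """Toy in-memory store; the real service persists presets in a database."""
    def __init__(self):
        self._presets = {}

    def save(self, preset):
        self._presets[preset.name] = preset

    def get(self, name):
        return self._presets[name]

    def delete(self, name):
        self._presets.pop(name, None)

store = PresetStore()
store.save(Preset("search-smoke", "search-service", "line(1,100,5m)"))
print(store.get("search-smoke").target)
```

Launching a test then amounts to looking up a preset by name and handing its parameters to the runner.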
As I said earlier, Yandex.Tank is extended with plugins, so for sending notifications and for collecting and analyzing results we could easily reuse the plugins we already had. We chose Yandex.Tank because we already had experience with the tool, liked its extensibility, understand how it works, and value being able to reuse our code base.
Yandex.Tank itself is written in Python and provides a convenient API, so it was not hard for us to integrate it into frontik and set up the necessary glue. The business requirements were: the ability to save and reuse parameters, access to the launch history, scheduled execution, and a guaranteed stop on request at any time. In addition, the service implements extra logic, for example for interacting with the database and handling operating system signals, as well as convenient integration with the internal resources of the hh.ru ecosystem.
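The "guaranteed stop at any time" part typically comes down to handling OS signals and shutting the test process down cleanly. Here is a minimal sketch under that assumption (this is not the actual service code):

```python
import signal
import subprocess
import sys

def run_test(cmd):
    """Run a load-test subprocess and guarantee it stops on SIGTERM/SIGINT."""
    proc = subprocess.Popen(cmd)

    def stop(signum, frame):
        proc.terminate()              # politely ask the test to shut down
        try:
            proc.wait(timeout=10)
        except subprocess.TimeoutExpired:
            proc.kill()               # last resort: force-stop the test
        sys.exit(0)

    for sig in (signal.SIGTERM, signal.SIGINT):
        signal.signal(sig, stop)

    return proc.wait()

if __name__ == "__main__":
    rc = run_test([sys.executable, "-c", "print('tank finished')"])
    print(rc)
```

The real service additionally records the launch in the database so that the status and history pages always reflect what actually happened.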
You can interact with the service both from the command line and using a graphical interface integrated into the site’s administrative panel. Thanks to this, our teams can run individual tests for specific services and, based on the data received, carry out activities aimed at improving the stability of the site. Now I’ll show you how it works.
Show and tell
Instead of a conclusion, I will simply show what it all looks like. The graphical interface of the service consists of several pages. On this page, tests are configured using presets:
A preset is a reusable set of parameters that can be created, modified, and deleted. There are small conveniences such as filtering, but most interestingly, tests are launched from here.
After running, you can see the status and some additional information on the next page, where all tests are displayed:
For convenience, you can use filtering. This page also has one important element: the stop button:
In an ideal world, manual stopping should never be needed, since tests can stop on their own when errors occur or response times grow. All this is thanks to the configuration capabilities of Yandex.Tank.
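As an illustration, this kind of behavior is usually configured via Yandex.Tank's autostop plugin. The criteria below are an example sketch; check the Yandex.Tank documentation for the exact syntax supported by your version:

```yaml
autostop:
  enabled: true
  package: yandextank.plugins.Autostop
  autostop:
    - time(1500ms,10s)   # stop if response time exceeds 1.5 s for 10 s
    - http(5xx,10%,10s)  # stop if 5xx responses exceed 10% for 10 s
```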
And finally, the last page in our interface is a page with information about a particular launch:
On it you can see detailed information about the test with some service information that can be useful when debugging.