Part 3. Performance testing tools

In this article I will talk about our approach to performance testing and what tools we use.

Choosing an approach to organizing performance testing

Performance testing is the process of measuring and evaluating application performance characteristics such as response time, throughput, load capacity, and resiliency. It involves a number of decisions: choosing a test environment, selecting the parameters to measure, choosing a load model, analyzing and interpreting the results, defining acceptance criteria for those results, and selecting testing tools. Our main goal at the Bank is to make performance testing mandatory for every release in order to avoid performance problems in production. We also want this process to save money and time. To that end, we considered two approaches to organizing performance testing: a product approach and a service approach.

Product approach

In the product approach, each product team has its own performance engineer, who is responsible for developing, running, and analyzing performance tests for their application or service. This approach has the following disadvantages:

  • High cost. Finding and hiring a qualified performance engineer is expensive and difficult, and as the number of projects grows, more and more engineers are needed, driving up salary and training costs.

  • Low efficiency. Performance engineers may have different levels of knowledge and experience, leading to inconsistent quality of performance testing across projects. They may also use different tools and methodologies, which causes incompatibility and duplication of data and code.

  • Loss of context. Performance engineers may not understand the logic of the applications and services they test deeply enough, which can lead to poor planning, modeling, and analysis of performance tests.

Service approach

In the service approach, a separate PerfOps team provides performance testing as a service to product teams. This team develops and maintains common performance testing tools and libraries, and advises and assists product teams as needed. This approach has the following advantages:

  • Reduced cost. We don't need to hire a performance engineer for every project: one PerfOps team serves all our applications and services. This reduces salary and training costs and lets us use resources more efficiently.

  • Increased efficiency. The PerfOps team has a high level of knowledge and experience in performance testing and can provide consistently high-quality testing across all our projects. It also uses common tools and methodologies, which ensures interoperability and reuse of data and code.

  • Preserved context. The PerfOps team does not develop and run performance tests itself. Instead, it provides tools and support so that product teams can develop and run performance tests for their own applications and services. This way, product teams keep the context of their products and can plan, model, and analyze performance tests better.

This is what the overall process looks like in the service approach. A task arrives to create or change functionality. After the task is implemented, smoke tests are run, and the product team itself determines whether the change affects the product's performance. If there is no impact, the task goes to production. If there is, the product team writes performance tests using the tools and knowledge base provided by the PerfOps team, and then runs and analyzes them. PerfOps engineers may get involved in the stages marked with a dotted line in the diagram: if the product team cannot find a performance problem on its own, it brings in PerfOps expertise and they look for it together. Once the problem is fixed, the task goes to production.

Selecting a performance testing tool

In this part of the article, I will talk about our choice of performance testing tool. When we first introduced the new approach, we used Apache JMeter to develop and run performance tests; the PerfOps team had a lot of experience with it. Apache JMeter is a powerful and flexible tool for testing the performance of various types of applications, such as web applications, REST APIs, and SOAP APIs. However, we ran into several problems when using it across a large number of projects:

  • Difficulty in managing test scenarios. Apache JMeter stores test scripts as XML files, which are difficult to read and edit. It is also difficult to reuse code and data between different tests and projects. We used Git to store and synchronize test scripts, but this did not solve the problem of the complexity of the XML format.

  • Lack of autocomplete and syntax highlighting. Apache JMeter does not have a built-in code editor that supports autocompletion and syntax highlighting. This makes it difficult to write and debug test code, especially when using scripting elements such as BeanShell, JSR223 or Groovy.

  • Mismatch between the testing language and the development language. Responsibility for the API sat with our QA engineers, who wrote UI tests for web projects using Playwright (TS/JS). This meant they used TypeScript or JavaScript for UI tests but Apache JMeter for performance tests, which created extra cognitive load and prevented code and libraries from being reused between the two.

Therefore, we decided to switch to a performance testing tool that better suits our needs and the specifics of our projects. We chose k6, a modern and lightweight performance testing tool that lets you write tests in JavaScript using various APIs and modules (a minimal example follows the list below). k6 has the following advantages over Apache JMeter:

  • Ease of managing test scenarios. k6 stores test scripts as regular JS files that are easy to read and edit, and it is easy to reuse code and data between different tests and projects. We still use Git to store and synchronize test scripts, but the format no longer gets in the way.

  • Autocompletion and syntax highlighting. Because k6 tests are ordinary JavaScript, any modern editor or IDE provides autocompletion and syntax highlighting (for example, via k6's TypeScript type definitions). This makes it easier to write and debug test code, especially when using the various APIs and modules such as HTTP, WebSocket, and GraphQL.

  • Correspondence between the testing language and the development language. Responsibility for the API is with our QA engineers, who write UI tests for web projects using Playwright (TS/JS). This means they use TypeScript or JavaScript both for UI tests and for performance tests, which reduces cognitive load and allows code and libraries to be reused between the two.
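To make the comparison concrete, here is a minimal sketch of a k6 test. The endpoint URL, virtual-user count, and threshold values are illustrative assumptions, not our real settings:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 virtual users (illustrative)
  duration: '1m',   // run for one minute
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500 ms
    http_req_failed: ['rate<0.01'],   // less than 1% of requests may fail
  },
};

export default function () {
  // Hypothetical endpoint, used only for illustration.
  const res = http.get('https://test.example.com/api/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```

Because the script is plain JavaScript, pure-JS helpers and test data can be shared with the Playwright code base (k6 runs its own JavaScript runtime, so Node-specific modules are not available there).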

Performance testing scheme at Alfa Bank

This diagram describes the process of testing the performance of our application using various tools and technologies. The purpose of testing is to determine the performance characteristics of the application under different loads and identify possible bottlenecks and errors.

To test performance we use the following workflow. Test scripts are stored in Bitbucket, our code versioning tool. The build, deployment, and test-run process is automated with Tekton, a CI/CD tool that lets us create flexible and scalable pipelines. Our main tool for generating load and measuring performance is k6, a modern and powerful tool for testing the performance of web applications. k6 is installed on a separate, dedicated load generator stand that has sufficient resources to create a high load. From the load generator we send requests to the application under test, which runs in Red Hat OpenShift, our container orchestration platform for microservices.
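As an illustration, this is roughly how a load model can be declared in a k6 script. The `ramping-vus` executor is real k6 API, but the stage durations and VU targets below are made-up values rather than our actual load profiles:

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    ramp: {
      executor: 'ramping-vus', // gradually change the number of virtual users
      startVUs: 0,
      stages: [
        { duration: '2m', target: 50 }, // ramp up to 50 VUs over two minutes
        { duration: '5m', target: 50 }, // hold the plateau
        { duration: '1m', target: 0 },  // ramp down
      ],
    },
  },
};

export default function () {
  http.get('https://test.example.com/'); // hypothetical application under test
}
```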

During performance testing we collect the following data from the load generator:

  • Resource utilization metrics – metrics of software and hardware resources: CPU, memory, disk, etc. These metrics are necessary to understand whether the generator is overloaded during tests, as this may affect the accuracy of the results.

  • Test metrics – the results of performance measurements produced by the testing tool. These include network connection times, request response times, and network addresses. Each request executed during the test produces at least one record with numeric metrics and meta information (see the sketch after this list).

  • Logs – (console) logs of the performance measurement tool. These logs are necessary to track problematic situations during test execution, such as errors, warnings, failures, etc.
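To show what this meta information can look like in practice, here is a hedged sketch of tagging requests and recording a custom metric in k6. The metric name, endpoint, and tag values are hypothetical:

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

// Custom trend metric; the second argument marks it as a time value.
// The name 'order_create_duration' is hypothetical.
const orderLatency = new Trend('order_create_duration', true);

export default function () {
  const res = http.post(
    'https://test.example.com/api/orders', // hypothetical endpoint
    JSON.stringify({ amount: 100 }),
    {
      headers: { 'Content-Type': 'application/json' },
      // Tags are attached as meta information to every metric sample for this request.
      tags: { endpoint: 'orders', team: 'payments' },
    }
  );
  orderLatency.add(res.timings.duration, { endpoint: 'orders' });
}
```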

Some of the data stored in Kafka passes through a data processor, in our case Logstash. Logstash is a tool for collecting, enriching, and transforming data from various sources. It lets us filter, parse, aggregate, and modify the data from Kafka according to our needs.

One of the destinations for our data from Logstash is Grafana, a platform for data visualization and analysis. It lets us build interactive graphs and dashboards from our data and set up alerts and notifications when something looks wrong. With Grafana, we can easily track and monitor the various metrics and indicators in our data.

Another destination for our data from Logstash is OpenSearch, an open and free search engine that lets us index, store, and search our data. It supports different types of search, such as full-text, faceted, and geospatial search, as well as analytics such as aggregation, grouping, and sorting. With OpenSearch, we can easily find and extract the information we need.
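For example, pulling recent error-level log entries out of OpenSearch can be done through its standard `_search` API. Here is a hedged sketch in Node.js, where the host, index pattern, and field names are assumptions for illustration:

```javascript
// Run as an ES module (e.g. node query.mjs) so top-level await works.
const res = await fetch('https://opensearch.example.com/k6-logs-*/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: {
      bool: {
        must: [{ match: { level: 'error' } }],                    // full-text match on log level
        filter: [{ range: { '@timestamp': { gte: 'now-1h' } } }], // only the last hour
      },
    },
    size: 20,
  }),
});
const body = await res.json();
console.log(body.hits.hits.map((hit) => hit._source.message));
```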

Conclusion

In conclusion, we can say that we use a modern and effective set of tools and technologies to test the performance of our application. This allows us to obtain reliable and detailed data about the performance characteristics of the application under different loads and identify possible bottlenecks and errors. This way we can improve the quality of our product and increase the satisfaction of our customers.

I want to summarize my series of articles about the tools we use in testing at Alfa Bank. I hope these articles were useful and interesting for you, and that you learned a lot about how we ensure the quality of our product. In this series I covered the following topics:

  • Part 0. QA tools at Alfa Bank – in this article I talked about our testers and showed what tools and plugins we use.

  • Part 1. Test automation tools – in this article I took a deeper look at our automation tools and explained why we chose Playwright, Appium, and BrowserStack.

  • Part 2. Test management tools – in this article I talked about the Allure TestOps test management system, explained why we chose Allure, and gave examples of how we work with it.

  • Part 3. Performance testing tools – in this final article I described what our performance testing scheme looks like and what tools we use for it.

Thank you for reading my articles. I will be happy to answer your questions and discuss any testing-related topics with you. If you want to know more about our company and our projects, you can join our Telegram channel, Alpha Wednesday. Thank you for your attention!
