Load Testing in Web Application Development

One of the critical stages of web application development is load testing. This process is necessary to identify potential bottlenecks in the system and to ensure that the application can handle the expected number of users and requests without degrading performance.

Introduction to Load Testing

Load testing is the process of simulating different levels of user activity to evaluate the performance of a web application. The main goal of such testing is to determine how effectively the system copes with increasing traffic and where its limits are. During the process of load testing, critical points of failure are identified and the maximum throughput of the application is estimated.

Basic methods of load testing

Peak Load (Spike Testing): This method focuses on assessing the system's behavior under sudden load increases. It allows you to determine how stable the application is when there are sudden bursts of user activity.

Long-term testing (Soak Testing): This method is used to test the stability of the system under a stable load over a long period of time. It helps to identify long-term problems such as memory leaks or database failures.

Stress Testing: This type of testing evaluates the performance of a system under a load greater than expected. The goal is to determine the point at which the application begins to fail and how quickly it recovers from the overload.

Ramp-Up Testing: This technique gradually increases the load on the system until it reaches a certain level. This allows you to identify the threshold values at which performance problems begin.

Tools for Load Testing Java Web Applications

There are many tools available for effective load testing of Java web applications, each with its own features and advantages. The most popular ones are listed below:

Apache JMeter: One of the most popular tools for load testing, supporting various protocols, including HTTP, HTTPS, SOAP, and JDBC. JMeter provides extensive functionality for creating complex test scenarios and then analyzing the results. Its versatility and powerful analytical capabilities make it a preferred solution for load testing web applications.

# Example of running a test in JMeter non-GUI mode
jmeter -n -t test_plan.jmx -l results.jtl -e -o /path/to/output/folder

Gatling: This Scala-based tool is known for its high performance and scalability of tests. Gatling offers a convenient DSL (Domain-Specific Language) for creating test scripts and has powerful analysis tools, making it ideal for complex and resource-intensive load tests.

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class BasicSimulation extends Simulation {
  val httpProtocol = http.baseUrl("https://localhost:8080")
  val scn = scenario("Basic Scenario")
    .exec(http("request_1").get("/"))
  setUp(scn.inject(atOnceUsers(1000)).protocols(httpProtocol))
}

Locust: A load testing tool written in Python. Locust allows you to develop test cases in Python, which makes it especially attractive to developers already familiar with the language. It easily integrates with other systems and offers flexible load management capabilities, making it suitable for various types of projects.

from locust import HttpUser, TaskSet, task, between

class UserBehavior(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpUser):
    tasks = [UserBehavior]
    wait_time = between(5, 9)  # wait 5-9 seconds between tasks

Apache Benchmark (ab): This is a lightweight tool designed to quickly perform basic web server performance tests. Apache Benchmark is easy to use and is suitable for quickly assessing the performance of web applications under basic load. However, its functionality is limited and it is not designed to perform complex testing scenarios.

# Example of running a test with 1000 requests and 100 concurrent users
ab -n 1000 -c 100 http://localhost:8080/

Example of Load Testing Using Apache JMeter

Below is a detailed example of performing load testing of a Java web application using Apache JMeter. This example covers all the key steps: from installing JMeter to analyzing the test results.

Installing Apache JMeter

  1. Download and install: Download the latest version of Apache JMeter from the official website and unpack the archive.

  2. Launching JMeter: After installation, start JMeter by running the jmeter command (jmeter.bat on Windows) from its bin directory.

Creating a Test Plan

  1. Creating a new test plan: Open JMeter and create a new test plan.

  2. Adding a Thread Group: The thread group defines the number of users (threads), the ramp-up time, and the number of test iterations. The snippets below are a simplified sketch of building the same test plan with JMeter's Java API (in a real project, JMeter properties must first be initialized via JMeterUtils.loadJMeterProperties before the engine can run). Programmatically, the loop count is set through a LoopController attached to the thread group:

LoopController loopController = new LoopController();
loopController.setLoops(1);                       // number of iterations
ThreadGroup threadGroup = new ThreadGroup();
threadGroup.setName("Example Thread Group");
threadGroup.setNumThreads(100);                   // number of users (threads)
threadGroup.setRampUp(10);                        // ramp-up time in seconds
threadGroup.setSamplerController(loopController);

Adding Elements to a Test Plan

HTTP Request Sampler: Add an HTTP Request sampler to describe the requests sent to the web application under test. Specify the domain, port, path, and request method.

HTTPSamplerProxy httpSampler = new HTTPSamplerProxy();
httpSampler.setDomain("example.com");
httpSampler.setPort(80);
httpSampler.setPath("/api/test");
httpSampler.setMethod("GET");

Test Plan Configuration: Combine the thread group and the HTTP sampler into a test plan. JMeter represents a test plan as a HashTree:

TestPlan testPlan = new TestPlan("Example Test Plan");
HashTree testPlanTree = new ListedHashTree();
testPlanTree.add(testPlan);
HashTree threadGroupTree = testPlanTree.add(testPlan, threadGroup);
threadGroupTree.add(httpSampler);

Adding listeners to analyze results

In the GUI, the View Results Tree listener visualizes the result of each request as a tree, and the Summary Report listener provides summary statistics. Programmatically, the equivalent is a ResultCollector, which writes results to a .jtl file and can print a summary to the console:

Summariser summariser = new Summariser("summary");
ResultCollector resultCollector = new ResultCollector(summariser);
resultCollector.setFilename("results.jtl");
testPlanTree.add(testPlan, resultCollector);

Running the test and analyzing the results

Running the test: Configure and run the test plan using the JMeter engine.

StandardJMeterEngine jmeterEngine = new StandardJMeterEngine();
jmeterEngine.configure(testPlanTree);
jmeterEngine.run();

Analysis of results: After the test completes, analyze results.jtl to identify bottlenecks and potential points of failure. Each record contains, among other fields, the response time (elapsed) and the server response code:

for (String line : Files.readAllLines(Paths.get("results.jtl"))) {
    System.out.println(line);  // timeStamp, elapsed, label, responseCode, ...
}

This example demonstrates how you can use Apache JMeter to create and perform load testing of Java web applications and analyze the test results to improve system performance and reliability.

Converting Load to RPS (Requests Per Second) in Apache JMeter

When the system load should be expressed in requests per second (RPS) instead of the number of users, the approach to setting up a test plan in Apache JMeter requires some modifications. This is especially useful when the load on different addresses may differ when executing requests in parallel. Let's look at the step-by-step instructions for setting up such a test.

Step 1: Setting the load_msg_sec variable

  1. Adding a variable to a test plan:

    • Click on the root element of the test plan in JMeter.

    • Add a new variable named load_msg_sec. You can choose any name for the variable, but for clarity we will use load_msg_sec.

  2. Setting a variable value:

    • The variable value can be either a constant or a parameterized value. For a parameterized value, enter the following in the Value column:

${__P(load, 50)}

Here load is the name of the JMeter property, and 50 is the default value used if the property is not passed (it can be set on the command line, e.g. with -Jload=100).

Step 2: Adding a Constant Throughput Timer

  1. Adding a timer: Add a Constant Throughput Timer to the thread group.

  2. Setting up a timer: In the Target throughput field, enter:

${__jexl3(${load_msg_sec} * 60 * #percent#)}

Here:

  • ${load_msg_sec} — the number of requests per second, defined by the variable load_msg_sec.

  • 60 — a coefficient for converting requests per second to requests per minute (since JMeter expects the value in requests per minute).

  • #percent# — percentage of the total number of requests that will be used when testing several addresses in parallel. This parameter is determined according to the load testing methodology.
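The timer arithmetic above can be sketched outside JMeter; a minimal illustration in Python, where the endpoint names and the percentage split are assumptions for the example:

```python
# Convert a total RPS target into per-endpoint values in requests per MINUTE,
# the unit the Constant Throughput Timer expects: load_msg_sec * 60 * percent.
def per_minute_targets(load_msg_sec, split):
    return {uri: load_msg_sec * 60 * pct for uri, pct in split.items()}

split = {"/": 0.27, "/search": 0.15}     # illustrative percentage split
targets = per_minute_targets(50, split)  # 50 requests per second in total
print(targets)  # {'/': 810.0, '/search': 450.0}
```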

Step 3: Setting up Basic Authentication

When testing on stands protected by Basic Auth, you need to configure the correct sending of the authorization header to avoid 403 errors.

  1. Adding a BeanShell PreProcessor:

    • Click on your Thread Group.

    • Add a BeanShell PreProcessor: right-click → Add → Pre Processors → BeanShell PreProcessor.

    • Paste the following code into BeanShell PreProcessor:

    import org.apache.commons.codec.binary.Base64;
    byte[] encodedUsernamePassword = Base64.encodeBase64("login:password".getBytes());
    vars.put("base64HeaderValue", new String(encodedUsernamePassword));
    
  2. Adding an HTTP Header Manager:

    • In the same thread group, add an HTTP Header Manager.

    • Add an Authorization header whose value is Basic ${base64HeaderValue}.

    • The variable ${base64HeaderValue} used in the header value was set by the BeanShell PreProcessor.
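For reference, the value computed by the BeanShell pre-processor can be checked outside JMeter with a quick Python equivalent (login:password is the placeholder from the snippet above):

```python
import base64

# Same transformation as the BeanShell pre-processor: Base64 of "login:password"
value = base64.b64encode("login:password".encode("utf-8")).decode("ascii")
print(value)  # bG9naW46cGFzc3dvcmQ=
```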

Best Practices for Load Testing

  1. Realistic scenarios: Test scenarios should be as close as possible to the real conditions of application operation. This allows you to accurately simulate loads and predict the behavior of the system, identifying potential problems before they appear in real operation.

  2. Monitoring and analysis: Effective load testing requires careful monitoring of system resources and analysis of the obtained data. Using specialized tools allows you to quickly identify bottlenecks and take measures to eliminate them, thereby optimizing system performance.

  3. Regularity of testing: Continuous load testing is essential to identify problems early, especially after changes to the code or infrastructure. Regular testing helps maintain high stability and performance of the application.

  4. Integration with CI/CD: Incorporating load testing into your continuous integration and delivery (CI/CD) processes ensures that performance is automatically monitored after each code change, minimizing the risk of performance issues in production.

  5. Combining tools: Using various load testing tools (e.g. Apache JMeter, Gatling, Locust) helps to get a comprehensive overview of the application performance. This allows you to identify different aspects of the system's behavior under load and develop more comprehensive solutions for their optimization.

  6. Multi-level testing: Testing under varying load levels, from minimal to extreme, allows you to gain a deeper understanding of your application's behavior and determine its limits. This helps you identify how your system responds to different load scenarios and where problems may arise.

  7. Documentation and reporting: Maintaining detailed documentation and creating reports on test results is important for analyzing and subsequently improving the system. This allows you to track trends, draw informed conclusions, and plan further optimization steps.

  8. Optimization and customization: The results of load testing should be used to optimize the system. This may include tuning servers, optimizing database queries, improving algorithms, and other measures aimed at improving the performance of the application.

  9. Collaboration with the team: Effective testing requires close collaboration with developers and other stakeholders. Collaboration allows for rapid response to identified issues and improvements to the product.

Example of a web application load testing protocol

This document describes the results of load testing of the system.

1. Test object

The test object is a web application that has been subjected to load testing to assess its performance and stability under various load levels.

2. Test objectives

  • Verify that the developed system meets the stated requirements.

  • Determine the maximum and peak performance of the system.

  • Verify stable operation of the system under load over a long period of time.

3. Basic profile

The basic testing profile includes the following parameters, calculated based on performance requirements and expected user behavior:

Code | Requirement                                      | Value
1    | Total number of requests to the system per month | 30,000,000
2    | Peak number of requests per month                | 3,300,000
3    | Peak number of requests per week                 | 900,000
4    | Peak number of requests per day                  | 110,000
5    | Number of users per month                        | 300,000
6    | Number of users per week                         | 60,000
7    | Number of users per day                          | 10,000

Estimated number of requests per hour, based on the peak number of requests: 42,000.

Based on the Customer Journey Map (CJM), the objects that generate the main load were identified:

Object   | URI  | Percentage of requests per hour | Number of requests per hour
Object 1 | /    | 27%                             | 11,340
Object 2 | /…/  | 15%                             | 6,300
Object 3 | /…/  | 11%                             | 4,620
Object 4 | /…/  | 5%                              | 2,100
Object 5 | /…/  | 16%                             | 6,720
Object 6 | /…/  | 5%                              | 2,100
Object 7 | /…/  | 6%                              | 2,520
Object 8 | /…/  | 15%                             | 6,300

The load on the objects was distributed in proportion to the target audience, and requests were spread evenly across the hour.
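The per-object counts in the table can be reproduced from the percentage split; a small sketch assuming the hourly total of 42,000 requests implied by the table:

```python
# Each object's share of the hourly total; the counts match the table above
TOTAL_PER_HOUR = 42_000
shares = [0.27, 0.15, 0.11, 0.05, 0.16, 0.05, 0.06, 0.15]  # Objects 1-8

counts = [round(TOTAL_PER_HOUR * s) for s in shares]
print(counts)       # [11340, 6300, 4620, 2100, 6720, 2100, 2520, 6300]
print(sum(counts))  # 42000
```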

4. Test plan

  1. Development of the load testing profile.

  2. Setting up monitoring systems to track performance.

  3. Development of load generation scripts.

  4. Preparation of test data (e.g. value variants for search queries) and creation of scripts to load it into the database before testing and clean it up afterwards.

  5. Execution of load tests.

  6. Analysis of load testing results.

5. Types of tests performed

Finding Peak Performance

  • Load: stepped.

  • Initial load: 40% of the base profile.

  • Step: 30% increase from base profile.

  • Duration of the stage: 30 minutes or until system failure.

  • Test Completion Criteria: system failure.

  • Success Criteria: The maximum system performance exceeds the base profile values.

Checking the stability of the system

  • Load: uniform, with a smooth ramp-up.

  • Maximum load: basic profile.

  • Ramp-up time: 15 minutes.

  • Duration of the main stage: 4 hours.

  • Test Completion Criteria: timeout or system failure.

  • Success Criteria: the test runs for the full duration without system failures.

System performance requirements

1. Resource utilization during tests

To achieve sustainable performance and system stability under load, server resource utilization during testing should meet the following parameters:

These requirements were established based on expert judgement and serve as a guideline since no specific system requirements were stated.

2. Requirements for the number of unsuccessful requests

To ensure system reliability, the number of unsuccessful requests should not exceed the following indicators, depending on the intensity of the incoming load:

  • Maximum Performance Search Test (last stage) – no more than 0.1% of the total number of requests.

  • Maximum Performance Confirmation Test (during the entire test) – no more than 0.1%.

  • Stability test (during the entire test) – no more than 0.1%.

The periods of result aggregation correspond to the periods of constant load in each of the tests.
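The 0.1% budget can be applied to any aggregation period as a simple proportion; a minimal sketch (the request count of 42,000 per hour is illustrative):

```python
# Allowed failures = total requests in the aggregation period * 0.1%
def within_error_budget(total_requests, failed, budget=0.001):
    return failed <= total_requests * budget

print(int(42_000 * 0.001))             # 42 failures allowed per 42,000 requests
print(within_error_budget(42_000, 4))  # True: 4 failures is within budget
```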

3. Requirements for response time and intensity of typical steps of business operations

To ensure high performance and fast response, the system must meet the following response time requirements when performing typical business operation steps:

  • 90% of requests must be processed within 50 ms.

  • 99% of requests must be processed within 1000 ms.

  • 99.9% of requests must be processed within 2500 ms.

These requirements ensure predictable system behavior under load and minimize the likelihood of delays that are critical to business processes.
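A sketch of checking these thresholds against a set of measured latencies, using a simple nearest-rank percentile (the sample values are illustrative):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value covering p% of the samples."""
    s = sorted(samples)
    k = max(0, math.ceil(p * len(s) / 100) - 1)
    return s[k]

latencies = [10, 20, 30, 35, 40, 42, 45, 48, 50, 800]  # response times, ms
print(percentile(latencies, 90) <= 50)      # True
print(percentile(latencies, 99) <= 1000)    # True
print(percentile(latencies, 99.9) <= 2500)  # True
```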

Configuration of the stand for Load Testing (LT)

Component | Value
CPU Cores | XX
RAM       | XXX GB

Search for maximum performance

Maximum system performance is defined as the load at which resource utilization (CPU and RAM) begins to exceed the permissible limits specified in the “Resource Utilization Requirements” section. The search for maximum performance is conducted through a series of tests with a step-by-step increase in load relative to the base profile.

Step increase in load:
The step is calculated using the formula:
Step = Base profile value * 0.4 = 4.8 ≈ 5 requests per second

Duration of each step is 30 minutes.
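The step arithmetic above can be reproduced directly, assuming the base profile of 12 requests per second used in the results table:

```python
# Step = 40% of the base profile, rounded to whole requests per second;
# each test stage then adds one step on top of the previous load.
base_rps = 12
step = round(base_rps * 0.4)                     # 4.8 -> 5
loads = [base_rps + i * step for i in range(5)]  # load at each of the 5 steps
print(step)   # 5
print(loads)  # [12, 17, 22, 27, 32]
```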

Test results:

Step | RPS    | CPU   | RAM | Avg response time (ms) | Min response time (ms) | Max response time (ms) | Error % | Result
1    | 12 q/s | 31.5% | 6%  | 853                    | 342                    | 2332                   | 0.01%   | Meets requirements
2    | 17 q/s | 44.8% | 6%  | 823                    | 339                    | 3132                   | 0.00%   | Meets requirements
3    | 22 q/s | 58.6% | 6%  | 851                    | 353                    | 2041                   | 0.00%   | Meets requirements
4    | 27 q/s | 73.6% | 7%  | 841                    | 337                    | 2652                   | 0.00%   | Meets requirements
5    | 32 q/s | 88.0% | 7%  | 764                    | 342                    | 2332                   | 0.00%   | CPU exceeds threshold; performance is at maximum

Charts: for each step, three charts were recorded: the test results graph from JMeter, the CPU usage graph (Grafana), and the RAM usage graph (figures omitted).

Conclusion: The maximum system performance is XX requests per second, which exceeds the expected load by XX%. The system meets the performance requirements.

System stability test

The stability test is carried out under a load corresponding to the base profile over a long period of time (4 hours).

Stability test results:

RPS    | CPU   | RAM | Avg response time (ms) | Min response time (ms) | Max response time (ms) | Error % | Result
12 q/s | 33.6% | 7%  | 503                    | 343                    | 4116                   | 0.00%   | Meets requirements

Charts: the test results graph from JMeter, the CPU usage graph (Grafana), and the RAM usage graph were recorded (figures omitted).

Conclusion: During the 4-hour stability test, the failure rate was 0% and the load was evenly distributed. The system meets the performance requirements described above.
