Load Balancer and Reverse Proxy in microservice architecture

Article author: Artem Mikhailov

Microservice architecture is one of the most popular approaches to building complex applications today. This approach breaks down a large application into a number of small, self-contained services that work together to achieve a common goal.

However, when working with microservices, there are some difficulties in managing the load on the application. This is where two important components come to the rescue – Load Balancer and Reverse Proxy.

Load Balancer helps distribute the load across multiple service instances. For example, if you have several web server instances, the Load Balancer chooses one of them to process each client request, balancing the load between them.

Reverse Proxy, on the other hand, is an intermediary between the client and the server. It accepts a request from the client and redirects it to the desired service. This allows you to hide the complex architecture of the application from the outside world.

The purpose of this article is to review the use of Load Balancer and Reverse Proxy when working with microservices. We’ll take a look at how these components help manage application load and keep your data secure.


Load Balancer

In a microservice architecture, the Load Balancer is an extremely important tool for ensuring high availability and fault tolerance. Its job is to distribute incoming traffic among multiple microservice instances so that the load is spread evenly and no part of the system is overloaded.

One of the main features of the Load Balancer is the ability to detect unavailable or failed service instances and redirect traffic to healthy ones. This significantly increases the reliability and stability of the system.


There are various Load Balancer algorithms, each suited to particular needs and tasks. For example, Round Robin distributes traffic evenly across instances in turn, Least Connections selects the instance with the fewest active connections, and IP Hash routes traffic consistently based on the client's source IP address.

In a microservice architecture, using a Load Balancer is a prerequisite for the efficient operation of many microservices. It helps you handle increased workloads and scale to high request volumes, while also providing higher availability and fault tolerance.

The way a Load Balancer works is that it acts as an intermediary between clients and microservices. When a client sends a request to a service, the request first goes to the Load Balancer, which then selects a specific microservice instance to execute it.

A Load Balancer can use different algorithms: Round Robin, Least Connections, IP Hash or Weighted Round Robin. Using these algorithms, it regulates the load on each service instance, distributing traffic evenly between them and avoiding overloading any single one.
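
The selection strategies above can be sketched in a few lines of Python. This is a minimal illustration, not a production balancer; the instance addresses in SERVERS are hypothetical, mirroring the Docker example later in the article:

```python
import hashlib
import itertools

# Hypothetical instance addresses used only for illustration.
SERVERS = ["app_1:5000", "app_2:6000", "app_3:7000"]

class RoundRobin:
    """Hand out instances in a fixed rotation."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Pick the instance with the fewest active connections."""
    def __init__(self, instances):
        self.active = {i: 0 for i in instances}

    def pick(self):
        instance = min(self.active, key=self.active.get)
        self.active[instance] += 1  # one more in-flight request
        return instance

    def release(self, instance):
        self.active[instance] -= 1  # request finished

def ip_hash(instances, client_ip):
    """Map the same client IP to the same instance on every request."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return instances[int(digest, 16) % len(instances)]
```

Note how IP Hash trades even distribution for stickiness: a given client always lands on the same instance, which is useful when sessions are kept in instance memory.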

In addition, Load Balancer constantly checks the health and availability of each service instance using service monitoring mechanisms. If an instance stops responding or goes down, Load Balancer automatically removes it from the list of available services and redirects traffic to another running instance.

Thus, the Load Balancer's role is to distribute traffic between microservice instances, ensuring even load and fault tolerance, and to automatically monitor the status of services, reacting instantly if one of them fails. All of this ensures high performance and stability of the system.
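
The health-checking behavior can be modeled with a small sketch. It is hedged on one assumption: `probe` stands in for a real health check, which in practice would be an HTTP GET to an endpoint such as a hypothetical /health:

```python
class HealthCheckedPool:
    """Keep only instances that pass periodic health probes."""

    def __init__(self, instances, probe):
        self.instances = list(instances)
        self.probe = probe  # callable: returns True if the instance responds
        self.healthy = set(instances)

    def run_checks(self):
        for instance in self.instances:
            if self.probe(instance):
                self.healthy.add(instance)      # recovered instances rejoin
            else:
                self.healthy.discard(instance)  # failed instances are removed

    def pick(self):
        # Route traffic only to instances that passed the last check round.
        alive = [i for i in self.instances if i in self.healthy]
        if not alive:
            raise RuntimeError("no healthy instances")
        return alive[0]
```

A real balancer would run `run_checks` on a timer and combine the healthy set with one of the selection algorithms above; here the two concerns are shown separately for clarity.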

Reverse Proxy

In a microservice architecture, where many microservices interact with each other, a reverse proxy is a necessity. It provides load balancing and high availability, and secures the interaction between microservices and clients.

The task of a Reverse Proxy is to accept requests from clients and redirect them to the appropriate microservices, acting as an intermediary between the two. Reverse proxies typically operate at the HTTP/HTTPS level, although servers such as Nginx and HAProxy can also proxy plain TCP traffic.


The main task of Reverse Proxy is to provide load balancing for microservices. It distributes requests from clients to various microservices, taking into account their current load. This avoids overloading a single microservice and maintains high availability.

In addition, Reverse Proxy can perform the following functions:
– Security support. Reverse Proxy can terminate TLS, encrypting and decrypting traffic between clients and microservices and protecting it from interception and attacks.
– Caching. Reverse Proxy can cache the responses of microservices, which speeds up the system and reduces the load on the services themselves.
– Traffic routing. Reverse Proxy can direct traffic to different microservices depending on the request path, the client's location, or other criteria.
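
The routing function can be illustrated with a longest-prefix match, which is how path-based routing commonly behaves. This is a sketch; the route table and backend names are hypothetical:

```python
def route(path, routes, default=None):
    """Pick a backend by the longest matching URL prefix.

    `routes` maps URL prefixes to backend addresses.
    """
    match = None
    for prefix, backend in routes.items():
        # Prefer the most specific (longest) matching prefix.
        if path.startswith(prefix) and (match is None or len(prefix) > len(match[0])):
            match = (prefix, backend)
    return match[1] if match else default
```

A config-driven proxy does essentially this lookup for every incoming request before forwarding it upstream.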

Benefits of Using Reverse Proxy
– Higher availability. Reverse Proxy helps eliminate a single point of failure and keeps microservices reachable.
– Higher reliability. Load balancing between microservices helps avoid overload and system failures.
– Higher speed. Caching and traffic optimization speed up the system and improve its performance.
– Higher level of security. Reverse Proxy shields the system from direct attacks on internal services.

The Reverse Proxy workflow consists of the following steps:

1. The client sends a request to the reverse proxy.
2. Reverse proxy receives the request and analyzes it.
3. Reverse proxy checks if the requested information or page is in the cache. If the page is in the cache, it is sent back to the client.
4. If the page is not in the cache, the reverse proxy forwards the request to an available backend server and receives a response.
5. Reverse proxy analyzes the response from the server and, if necessary, processes it.
6. Reverse proxy sends the received response from the server to the client.
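
Steps 1-6 can be modeled as a single function. This is a simplified sketch: `fetch_from_backend` stands in for the real upstream HTTP call, and every response is treated as cacheable, which a real proxy would decide from cache headers:

```python
def handle_request(path, cache, fetch_from_backend):
    """Serve from cache when possible; otherwise forward to a backend,
    store the response, and return it to the client."""
    # Steps 2-3: analyze the request and check the cache.
    if path in cache:
        return cache[path]  # cached copy goes straight back to the client
    # Step 4: forward the request to a backend server.
    response = fetch_from_backend(path)
    # Step 5: process the response (here, just store it in the cache).
    cache[path] = response
    # Step 6: return the response to the client.
    return response
```

On a repeated request for the same path, the backend is never contacted, which is exactly the load reduction the caching function described above provides.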

A reverse proxy can thus perform tasks such as balancing load across servers, caching responses, and selecting the appropriate server to process a client's request.

Creating a microservice architecture using Load Balancer and Reverse Proxy

When you’re building a microservice architecture, it’s important to be able to distribute traffic across different application instances and keep everything under control. For these purposes, we can use two main components: Load Balancer and Reverse Proxy.

The Load Balancer acts as a traffic controller that distributes requests to the various application instances behind it. This maximizes resource utilization and reduces response time.

Reverse Proxy works as an intermediary between the client and the server, which hides the real IP address of the server, and also provides an additional layer of security and encryption.

Let’s look at an example of creating a microservice architecture using Docker. Suppose we have three applications listening on ports 5000, 6000 and 7000. We can run them in Docker containers on a shared network, so that they can later be reached by container name:

docker network create app_net
docker run -d --name app_1 --network app_net -p 5000:5000 app_1
docker run -d --name app_2 --network app_net -p 6000:6000 app_2
docker run -d --name app_3 --network app_net -p 7000:7000 app_3

Now, in order to perform load balancing, we need to create a container with our Load Balancer – in our case, this is Nginx. Create a file nginx.conf:

events {}

http {
  upstream app_servers {
    server app_1:5000;
    server app_2:6000;
    server app_3:7000;
  }

  server {
    listen 80;
    server_name example.com;

    location / {
      proxy_pass http://app_servers;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}

As the configuration above shows, Nginx acts as our Load Balancer: it listens on port 80 for client requests, the upstream block lists the addresses of our application instances, and the proxy_set_header directives forward requests with the correct headers. Note that for the app_1, app_2 and app_3 hostnames to resolve, the Nginx container must run on the same Docker network as the applications.

We can use the same Nginx application to set up the Reverse Proxy. Create the nginx.conf configuration file:

events {}

http {
  server {
    listen 80;
    server_name example.com;

    location / {
      proxy_pass http://internal-server-ip:port;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}

In the example above, we define a Reverse Proxy server that accepts requests on port 80 and forwards them to internal-server-ip on the given port, with the proxy_set_header directives supplying the correct headers.

Let’s consider one more example of using a Load Balancer and a Reverse Proxy in a microservice architecture.

Suppose we have several microservices that run on different hosts and ports. Service names: Users, Orders, Payment. Each service is designed to perform certain tasks, for example, the Users service is responsible for managing users.

To provide load balancing between services, we can use HAProxy as a Load Balancer and Nginx as a Reverse Proxy.

HAProxy is a mature, high-performance Load Balancer that distributes traffic across multiple servers.

To create a microservice architecture using HAProxy, you need to follow these steps:

1. Configure HAProxy for load balancing between servers. Let’s create a file haproxy.cfg with the following content:

frontend http_front
    bind *:80
    mode http
    option http-server-close
    default_backend http_back

backend http_back
    mode http
    balance roundrobin
    option forwardfor
    server users_service1 users-service1:80 check
    server users_service2 users-service2:80 check
    server orders_service1 orders-service1:80 check
    server orders_service2 orders-service2:80 check
    server payment_service1 payment-service1:80 check
    server payment_service2 payment-service2:80 check

In the example above, the HAProxy frontend listens on port 80 and balances requests round-robin across a single backend pool containing two instances of each service; the check option makes HAProxy health-check every instance. In practice you would typically define a separate backend per service and route to it with ACLs.

2. Configure Nginx as Reverse Proxy to hide IP addresses and provide security and encryption. Let’s create a file nginx.conf:

events {}

http {
  upstream users {
    server haproxy_ip:80;
  }

  server {
    listen 80;
    server_name users.example.com;

    location / {
      proxy_pass http://users;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}

In the example above, we configure Nginx in Reverse Proxy mode: the HAProxy address is set as the upstream, and HAProxy then balances requests between the services. The proxy_set_header directives again forward requests with the correct headers.

Conclusion

So, we have seen in practice how Load Balancer and Reverse Proxy can improve the performance of a microservice architecture. By balancing the load between servers and managing network traffic, you can achieve higher fault tolerance and application performance.

Reverse Proxy helps to avoid direct access to internal resources, protecting them from external threats, and also simplifies server request management. Load Balancer, on the other hand, solves the problem of traffic distribution by redirecting requests to free resources and reducing the load on specific servers.

Using these technologies effectively depends on understanding the task at hand and the scale of the project. Applied well, Load Balancer and Reverse Proxy can have a real impact on a company’s business processes and its overall success.

Finally, I would like to recommend a free webinar from OTUS covering the basics of domain-driven design and its practical application. At the webinar, you will learn how DDD helps in building architecture.
