Knight’s move: how to get messages into Kafka via Nginx

Hello, Habr! I am Alexey Kashavkin, a cloud operations engineer at G-Core Labs; I have been administering OpenStack for the past five years. Today I’ll talk about a hacky, non-standard use of technology: how to put Kafka brokers behind Nginx. If for some reason you have no other way to receive messages, this note is for you.

Why it’s needed

The problem came up when I was setting up a Kafka cluster for a test bed: a proof of concept for a new project. Before building the service itself – a central log storage – we decided to run PoC testing and find out how Kafka would work with Elasticsearch. We set up the test bed with encryption (SSL/TLS) from the start, but Kafka refused to accept our existing PEM certificates. Internal regulations forbade me to solve the problem directly, since certificates are curated and delivered to our servers by another department via Puppet. So I made a knight’s move: I added the certificates, as they were, to Nginx. It turned out, however, that the web server cannot proxy to Kafka out of the box, so I had to attach extra modules to it. Below I describe how I managed to do this – I hope this mini-guide helps someone in similar conditions.

It is clear that such a solution will seem strange to some, but it can come in handy in legacy projects where there is no way to write your own publisher and integrate it into the code, yet there is an interface for sending arbitrary data over HTTP that you can use.

What to do

The method I will describe is rather succinct. For it, you need to build two dynamic Nginx modules (a build sketch follows the list):

  1. echo

  2. kafka-log
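A minimal build sketch, assuming the modules in question are openresty/echo-nginx-module and kaltura/nginx-kafka-log-module (the latter matches the directive names used in the config below and depends on librdkafka); the Nginx version and paths are illustrative – build against the same version as your running binary (check with nginx -v):

git clone https://github.com/openresty/echo-nginx-module.git
git clone https://github.com/kaltura/nginx-kafka-log-module.git

wget http://nginx.org/download/nginx-1.24.0.tar.gz
tar xzf nginx-1.24.0.tar.gz && cd nginx-1.24.0

# --with-compat keeps the dynamic modules binary-compatible
# with a stock Nginx configured the same way
./configure --with-compat \
            --add-dynamic-module=../echo-nginx-module \
            --add-dynamic-module=../nginx-kafka-log-module
make modules

# put the resulting .so files where load_module expects them
cp objs/ngx_http_echo_module.so objs/ngx_http_kafka_log_module.so /etc/nginx/modules/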

After the build is complete, writing a configuration file for the modules is straightforward. I will give a simple example with comments following it, and you can extend it to suit your setup:

load_module /etc/nginx/modules/ngx_http_echo_module.so;
load_module /etc/nginx/modules/ngx_http_kafka_log_module.so;

events {}  # required for a complete nginx.conf

http {

  kafka_log_kafka_brokers kafka1:9092,kafka2:9092,kafka3:9092;

  server {
    listen 443 ssl;

    ssl_certificate     /etc/nginx/certs/your_cert.crt;
    ssl_certificate_key /etc/nginx/certs/your_cert_key.key;

    # strip the leading "/" from the URI to get the topic name
    if ($request_uri ~* ^/(.*)$) {
      set $request_key $1;
    }

    location / {
      kafka_log kafka:$request_key $request_body;
      echo_read_request_body;
    }
  }
}
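Before reloading, it is worth checking that the configuration parses and that both modules load (a sketch, assuming a systemd host):

nginx -t                  # validates the config and loads both .so modules
systemctl reload nginx    # apply without dropping connections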

Let’s analyze the configuration file in order:

  1. Modules:

    • ngx_http_echo_module.so is needed to read the request body when none of the proxy_pass, fastcgi_pass, uwsgi_pass, or scgi_pass directives are used;

    • ngx_http_kafka_log_module.so is the module that delivers messages from Nginx to Kafka topics.

  2. kafka_log_kafka_brokers – the list of Kafka brokers the module publishes to.

  3. Next comes the http block with a server block containing the condition on $request_uri. This condition is needed to strip the leading “/” from $request_uri; rewrite cannot be used here because it would trigger a return and finish processing before the echo_read_request_body directive runs. The $request_key variable ends up holding the topic name without the leading slash: /topic_name becomes topic_name.

  4. The location block ties the two modules together:

    • kafka_log sends the request body to the Kafka topic;

    • echo_read_request_body reads the request body so that it is available in $request_body.

That’s all – you can now get messages into Kafka by sending them to a URL like https://example.com/topic_name.
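For example, you can push a message with curl and check it with the console consumer that ships with Kafka (topic_name here is just an illustrative topic; add -k to curl if your certificate is self-signed):

curl -X POST https://example.com/topic_name \
     -d '{"message": "hello via nginx"}'

# on the Kafka side:
kafka-console-consumer.sh --bootstrap-server kafka1:9092 \
    --topic topic_name --from-beginning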

P.S. Keep in mind that this method has significant limitations and is in many ways inferior to a proper publisher of your own. For our project, we later reissued the certificates and added them to the Kafka configuration.
