Building a MinIO dev cluster in containers

Hi! In this article we will deploy a MinIO dev cluster in containers (docker-compose) with TLS and site replication, and touch briefly on tiering. We will also set up monitoring of our cluster with Prometheus and Grafana.

The article is a step-by-step guide to deploying a MinIO cluster in Bitnami containers. We will also deploy a single-node MinIO alongside it and join the two via site replication. Along the way we will create a test user, a test policy and a test bucket, and try working with them.

Preparation

Let's start by cloning the repository https://github.com/yubazh/minio-compose

git clone git@github.com:yubazh/minio-compose.git

To use TLS we need to generate certificates. They can be generated with the generate_ssl_cert.ssh script located in the certsgen directory; the script uses openssl. Before running the script, put your own IP address in certsgen/server-ext.cnf in place of 192.168.0.199. Note that the certificate is issued for *.local; the domain names minio{1..4}.local will be used for communication between the minio nodes.

cat certsgen/server-ext.cnf       
subjectAltName=DNS:*.local,IP:0.0.0.0,IP:127.0.0.1,IP:192.168.0.199

Let's look at the script itself:

# remove previously generated certificates
rm *.pem

# 1. Generate the CA's private key and self-signed certificate
openssl req -x509 -newkey rsa:4096 -days 365 -nodes -keyout ca-key.pem -out ca-cert.pem -subj "/C=XX/ST=XXX/L=XXXX/O=XXXXX/OU=XXXXXX/CN=*.local/emailAddress=mail@gmail.com"

echo "CA's self-signed certificate"
openssl x509 -in ca-cert.pem -noout -text

# 2. Generate the web server's private key and certificate signing request (CSR)
openssl req -newkey rsa:4096 -keyout server-key.pem -out server-req.pem -subj "/C=XX/ST=XXX/L=XXXX/O=XXXXX/OU=XXXXXX/CN=*.local/emailAddress=mail@gmail.com" -nodes

# 3. Use the CA's private key to sign the web server's CSR and get back the signed certificate
openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -days 60 -extfile server-ext.cnf

echo "Server's signed certificate"
openssl x509 -in server-cert.pem -noout -text

echo "Command to verify Server and CA certificates"
openssl verify -CAfile ca-cert.pem server-cert.pem

# remove the certs directory, into which we will put the newly created certificates for minio
echo "Deleting certs dir"
rm -rf ../certs

# recreate the ../certs directory and place the required certificates in it
echo "Creating certs dirs"
mkdir -p ../certs/CAs
cp ca-cert.pem ../certs/ca.crt
cp ca-cert.pem ../certs/CAs/ca.crt
cp server-cert.pem ../certs/public.crt
cp server-key.pem ../certs/private.key 
chmod +r ../certs/private.key

# remove the existing directories used for minio's own data
echo "Deleting minio dirs"
rm -rf ../minio*

# create fresh directories for minio data storage
echo "Creating minio dirs"
mkdir ../minio1
mkdir ../minio2
mkdir ../minio3
mkdir ../minio4
mkdir ../miniosolo
sudo chown -R 1001:1001 ../minio*

In short: we delete all previously generated certificates in the certsgen directory, recreate the ../certs directory (from which the new certificates are mounted into the containers), delete the old directories holding minio's persistent data, and create new ones with the required ownership. Let's run the script:

cd certsgen
./generate_ssl_cert.ssh

If everything is ok, we will see the output of our certificates, as well as messages about deleting old directories and creating new ones.
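If you want to double-check that your IP made it into the SAN of the issued certificate, you can inspect it directly (a quick check; the -ext flag assumes OpenSSL 1.1.1 or newer):

openssl x509 -in certsgen/server-cert.pem -noout -ext subjectAltName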

Main MinIO cluster

Now let's break down the main compose and deploy the cluster.

For deployment we will use the Bitnami minio containers (https://github.com/bitnami/containers/tree/main/bitnami/minio). In docker-compose we will set the admin credentials and container hostnames, and configure "clustering", HTTPS, persistent storage, healthchecks, and nginx, which will act as a reverse proxy in front of our cluster.

It is worth noting separately that the documentation states the minimum number of nodes in a minio cluster is 4, and that production environments should use a minimum of 4 drives per node. This is tied to minio's internals: erasure coding and how object shards are split up and distributed across drives. The minio website has a handy calculator for sizing production deployments: https://min.io/product/erasure-code-calculator.

I would also like to note that the documentation states the drives storing the data should use the XFS filesystem.

Let's take a closer look at docker-compose.yaml, namely the first minio container (the other three are identical) and the nginx container:

cat docker-compose.yaml            
version: '3.7'

services:
  # first cluster
  minio1:
    # always restart on failure
    restart: always
    # use the bitnami minio image with tag 2024.7.26
    image: bitnami/minio:2024.7.26
    # explicitly name the container
    container_name: minio1
    # hostname under which the node is reachable inside the docker network
    hostname: minio1.local
    # block of environment variables passed to minio
    environment:
      # admin login and password
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=MBVfbuu2NDS3Aw
      # enable distributed mode (clustering)
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      # list the nodes that make up the cluster;
      # in our case these are containers, addressed by hostname
      - MINIO_DISTRIBUTED_NODES=minio1.local,minio2.local,minio3.local,minio4.local
      # picked this variable up somewhere on stackoverflow; it fixes some crashes
      - MINIO_SKIP_CLIENT=yes
      # explicitly use https
      - MINIO_SCHEME=https
      # port for reaching the web UI
      - MINIO_CONSOLE_PORT_NUMBER=9001
      # the next 2 variables should be added when running nginx in front of the cluster:
      # the first sets the address of the cluster as a whole
      - MINIO_SERVER_URL=https://minio.local
      # the second sets the exact location of the web UI
      - MINIO_BROWSER_REDIRECT_URL=https://minio.local/minio/ui
      # together they sort out the interaction and redirects between the minio nodes and nginx
    volumes:
      # data directory, mounted at /bitnami/minio/data
      # note that different images (from different builders) use different mount points
      - ./minio1:/bitnami/minio/data
      # mount the certificates directory for https
      - ./certs:/certs
    healthcheck:
      # a very simple healthcheck
      test: [ "CMD", "curl", "-k", "https://localhost:9000/minio/health/live" ]
      interval: 30s
      timeout: 20s
      retries: 3
...
...
...
  minio:
    # the nginx container: the image to use
    image: nginx:1.19.2-alpine
    # container name
    container_name: minio
    # hostname under which the container is reachable inside the docker network
    hostname: minio.local
    volumes:
      # mount the nginx config
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      # mount the certificates for talking to the nodes over https
      - ./certs:/certs
    ports:
      # expose port 443, and only on nginx, since nginx alone accepts incoming connections
      - "443:443"

The other three minio containers have identical settings; only the ordinal numbers in the name, hostname and mount directory differ.

Let's look at the nginx.conf configuration file. It is presented in general form in the minio documentation (https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html); we only need to tweak it a little. Pay attention to the upstream sections:

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  4096;
}

http {
   include       /etc/nginx/mime.types;
   default_type  application/octet-stream;

   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" "$http_x_forwarded_for"';

   access_log  /var/log/nginx/access.log  main;
   sendfile        on;
   keepalive_timeout  65;

   # include /etc/nginx/conf.d/*.conf;
   upstream minio_s3 {
      least_conn;
      server minio1.local:9000;
      server minio2.local:9000;
      server minio3.local:9000;
      server minio4.local:9000;
   }

   upstream minio_console {
      least_conn;
      server minio1.local:9001;
      server minio2.local:9001;
      server minio3.local:9001;
      server minio4.local:9001;
   }

   server {
      listen       443 ssl;
      listen  [::]:443 ssl;
      server_name  minio.local;

      # Allow special characters in headers
      ignore_invalid_headers off;
      # Allow any size file to be uploaded.
      # Set to a value such as 1000m; to restrict file size to a specific value
      client_max_body_size 0;
      # Disable buffering
      proxy_buffering off;
      proxy_request_buffering off;
      ssl_certificate      /certs/public.crt;
      ssl_certificate_key  /certs/private.key;
      ssl_protocols TLSv1.2 TLSv1.3;
      ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
      ssl_prefer_server_ciphers off;
      ssl_verify_client off;

      location / {
         proxy_set_header Host $http_host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;

         proxy_connect_timeout 300;
         # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
         proxy_http_version 1.1;
         proxy_set_header Connection "";
         chunked_transfer_encoding off;

         proxy_pass https://minio_s3; # This uses the upstream directive definition to load balance
      }

      location /minio/ui/ {
         rewrite ^/minio/ui/(.*) /$1 break;
         proxy_set_header Host $http_host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
         proxy_set_header X-NginX-Proxy true;

         # This is necessary to pass the correct IP to be hashed
         real_ip_header X-Real-IP;

         proxy_connect_timeout 300;

         # To support websockets in MinIO versions released after January 2023
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection "upgrade";
         # Some environments may encounter CORS errors (Kubernetes + Nginx Ingress)
         # Uncomment the following line to set the Origin request to an empty string
         # proxy_set_header Origin '';

         chunked_transfer_encoding off;

         proxy_pass https://minio_console; # This uses the upstream directive definition to load balance
      }
   }
}

Let's look at the certs directory. The script placed public.crt in it, which is in fact the certsgen/server-cert.pem file (roughly speaking, the server certificate), along with its key private.key, which is the certsgen/server-key.pem file. It also placed the original self-signed ca-cert.pem there, renamed to ca.crt, and created the certs/CAs directory with a copy of ca.crt inside. The CAs directory is needed by minio, since that is where it looks for trusted CA certificates by default (https://min.io/docs/minio/linux/operations/network-encryption.html). The whole certs directory is mounted into the minio containers.

Let's run the cluster version:

docker compose -f docker-compose.yaml up -d

Let's wait a minute until everything starts up and the health checks pass. Then run:

docker ps -a

We should see all our containers up:

docker ps

Going to the web UI

Now let's add our ca.crt to the browser and try to open the web UI.

I use Firefox, so I will describe how to add a certificate to Firefox: Go to settings => in the search bar enter “cert” => click on the button “View Certificates” =>

View Certificates

In the window that appears, select the far right tab “Authorities” => then below “Import” => now select our ca.crt from certs/ca.crt => check the necessary boxes => click “OK” => “OK” again.

Also, let's add a local entry to /etc/hosts:

cat /etc/hosts
...
127.0.0.1 minio.local

Now open the browser and go to https://127.0.0.1/minio/ui/login, then enter the login and password we specified in docker-compose.yaml in the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD variables. Pay attention to the padlock next to the address: it should have no warning mark, indicating that the connection is secure.

TLS

After successful login, we will get to the minio web interface. We can make sure that everything is fine by selecting the items on the left side of the screen: monitoring => metrics. We will see summary information on the cluster:

Built-in monitoring

Creating a policy

Now let's create a policy. Go to Policies => Create Policy. Enter testpolicy as the name and use the following policy (the Statement block is what we add; the surrounding braces and Version field follow the standard IAM policy format):

  "Statement": [
    {
      "Action": [
        "s3:PutBucketPolicy",
        "s3:GetBucketPolicy",
        "s3:DeleteBucketPolicy",
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::testbucket"
      ],
      "Sid": ""
    },
    {
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::testbucket/*"
      ],
      "Sid": ""
    }
  ]

This policy gives full control over the bucket named testbucket. Click "Save".

testpolicy
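For reference, the same policy can be created from the command line with the minio client, which we install later in this article. A sketch, assuming the JSON above is saved as testpolicy.json (recent mc versions use admin policy create; older ones call it admin policy add):

~/minio-binaries/mc admin policy create mainminio testpolicy testpolicy.json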

Creating a user

Now let's create a user. Go to Identity => Users => Create User. Enter testuser in the Name field and testpassword in the password field. Just below, among the existing policies, tick the testpolicy we created earlier. Click Save.

User
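The command-line equivalent with the minio client is roughly the following (a sketch; the mainminio alias is set up later in this article, and recent mc versions use policy attach where older ones use policy set):

~/minio-binaries/mc admin user add mainminio testuser testpassword
~/minio-binaries/mc admin policy attach mainminio testpolicy --user testuser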

Creating a bucket

Finally, let's create a test bucket. Go to Buckets => Create Bucket. Enter testbucket as the name and be sure to enable versioning; we will need it later when enabling site replication. Click Create Bucket.

testbucket
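With the minio client the same bucket could be created in one line (a sketch; the --with-versioning flag creates the bucket with versioning already enabled):

~/minio-binaries/mc mb --with-versioning mainminio/testbucket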

s3cmd

Let's try to interact with the bucket from the operating system. Install the s3cmd utility:

sudo apt install -y s3cmd 

Now configure it by creating the file ~/.s3cfg. Here is an example of my config:

# Setup endpoint
# point it at our server
host_base = 127.0.0.1
host_bucket = 127.0.0.1
# leave the default
bucket_location = us-east-1
# use https
use_https = True

# login and password of our test user
# Setup access keys
access_key = testuser
secret_key = testpassword
 
# Enable S3 v4 signature APIs
# while running a large number of tests I needed to enable
# the directive below, but it is not required for the default
# operations, so I commented it out and left it just in case
# signature_v2 = False

Now we need to install the certificate into our operating system (I use Ubuntu):

sudo cp certs/ca.crt /usr/local/share/ca-certificates/ca-minio.crt
sudo update-ca-certificates

Let's run a test and list the buckets in our s3:

s3cmd ls


We see that our testbucket is listed. If instead of success you get a message about certificate problems, you can point s3cmd at the CA manually using the --ca-certs option:

s3cmd --ca-certs ./certs/ca.crt ls s3://

We get the same output.

Let's put our nginx.conf file into the bucket and check it in the web UI.
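A minimal sketch of the upload, assuming it is run from the repository root where nginx.conf lives:

s3cmd put nginx.conf s3://testbucket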

Go to the web UI, open the Object Browser and open testbucket. The file is in place:

testbucket

Standalone minio

To set up site replication or tiering we will need a second minio. We do not have a second cluster handy, so we will deploy a standalone minio from a single container.

Let's look at docker-compose-solo.yaml. I'll only dwell on the differences from the cluster version:

cat docker-compose-solo.yaml 
version: '3.7'

services:
  # minio solo
  miniosolo:
    restart: always
    image: bitnami/minio:2024.7.26
    container_name: miniosolo
    hostname: miniosolo.local
    # in this case we must expose the ports through which
    # we will talk to this minio:
    # 9001 is the web UI, 9000 is the API
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      # again set the login & password
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=2eG1~B/j{70d
      - MINIO_SKIP_CLIENT=yes
      # use https
      - MINIO_SCHEME=https
      # port for the web UI
      - MINIO_CONSOLE_PORT_NUMBER=9001
    extra_hosts:
      # add the minio.local => 192.168.0.199 mapping to this container's /etc/hosts
      - "minio.local:192.168.0.199"
    volumes:
      - ./miniosolo:/bitnami/minio/data
      - ./certs:/certs
    healthcheck:
      test: [ "CMD", "curl", "-k", "https://localhost:9000/minio/health/live" ]
      interval: 30s
      timeout: 20s
      retries: 3

We deploy minio and check the container:

docker compose -f docker-compose-solo.yaml up -d
docker ps

We make sure that the container is up. After that, go to the web UI at https://127.0.0.1:9001/login (note that this minio is reachable on specific ports; 9001 is the web UI)

and log in using the credentials from docker-compose-solo.yaml. Make sure the connection is secure (the same padlock next to the address), and also that this is a completely different minio: the metrics show only one node, and there are no users, policies or buckets.

Site Replication

Let's set up replication between these two instances.

Go to the web UI of the main cluster, to the Site Replication tab at the very bottom of the administration panel. Click Add Sites and fill in all the fields. For now we will use the admin account, but for these purposes it is better to create a separate one:

Site Replication

Here we will need the /etc/hosts entries we added earlier. We access the first (main) cluster via minio.local, and the second cluster via the external IP address (192.168.0.199, which we put into the certificate at the very beginning). Click Save and get:
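For reference, the minio client can configure the same replication (a sketch; the solominio alias is an assumption here, and the mainminio alias is set up later in this article):

~/minio-binaries/mc alias set solominio https://192.168.0.199:9000 minioadmin '2eG1~B/j{70d'
~/minio-binaries/mc admin replicate add mainminio solominio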

We can go to the Replication Status tab and make sure all entities have been synced. We can also go to the web UI of the standalone minio (https://127.0.0.1:9001/) and look at the contents of its bucket:

We see that the bucket has been created, but the object count is 0. Let's check it with the s3cmd utility. Adjust ~/.s3cfg to point at the standalone minio (only the port is added; the rest stays the same):

cat ~/.s3cfg                
# Setup endpoint
host_base = 127.0.0.1:9000
host_bucket = 127.0.0.1:9000
bucket_location = us-east-1
use_https = True
 
# Setup access keys
access_key = testuser
secret_key = testpassword
 
# Enable S3 v4 signature APIs
#signature_v2 = False

Let's try to fetch the nginx.conf file from the standalone instance:
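A minimal check with s3cmd, assuming the object name from the earlier upload:

s3cmd get s3://testbucket/nginx.conf nginx-replicated.conf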

We see that everything went well: the object is there even though the counter showed 0. As far as I understand, the zero count is a display "feature"; the web UI now shows 1 object.

Tiering

We will not dwell on this point in detail; I will only describe my main conclusions. Minio cannot separate disks into SSD and HDD: it simply pools them all together. Moreover, as far as the documentation indicates, it follows the weakest-link principle: if the disks are of different sizes, each is effectively used only up to the capacity of the smallest one.

And since minio can't separate disks into SSD and HDD, it also can't create buckets on specific disks only: a bucket is spread across everything rather than pinned to particular drives. It follows that to use tiering you need a neighbouring cluster, or a rented bucket, deployed on the appropriate disks. For example, your own SSD-backed cluster as hot storage, and a rented s3 bucket somewhere else as cold storage.

To configure tiering there is a corresponding Tiering section on the left side of the panel. After registering a remote bucket in this section, we can use it in a specific bucket: Buckets => testbucket => Lifecycle => Add Lifecycle Rule. In effect, tiering in minio is lifecycle management.

In the rules we set which files are moved to which tiering bucket after what period of time. For example, we can keep files in our bucket for only one month, after which they are all moved to the remote bucket, while remaining accessible through our main bucket. Note also that files do not move back: once they have been transitioned to a tiering bucket, there is no setting to bring them back to hot storage (at least I did not find one).
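For reference, a sketch of the same flow with the minio client (installed in the next section). The tier name COLDTIER, the remote endpoint, its credentials and the coldbucket name are all placeholders:

# register a remote minio bucket as a tier
~/minio-binaries/mc ilm tier add minio mainminio COLDTIER --endpoint https://cold.example.com:9000 --access-key coldadmin --secret-key coldpassword --bucket coldbucket
# transition objects older than 30 days from testbucket to that tier
~/minio-binaries/mc ilm rule add mainminio/testbucket --transition-days 30 --transition-tier COLDTIER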

It is worth noting separately that when combining site replication with tiering, each cluster's tiering must be configured on its own: the documentation explicitly states that tier configuration is a non-replicated entity.

Comments on this section are welcome. I can't say this part is described very clearly in the documentation; here, answers can only be obtained by standing up a cluster and running your own tests.

MinIO client

To set up monitoring we will need the minio client utility. Let's install it according to the instructions in the official documentation: https://min.io/docs/minio/linux/reference/minio-mc.html

After installation, let's add our main cluster to the minio client:

# if you already have the midnight commander utility installed,
# call the minio client explicitly, e.g. via ~/minio-binaries/mc
~/minio-binaries/mc alias set mainminio https://127.0.0.1 minioadmin MBVfbuu2NDS3Aw

We will see a success message:

Let's check and get brief information about the cluster:
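A sketch of the check; mc admin info prints node, drive and uptime status for the alias:

~/minio-binaries/mc admin info mainminio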

minio client

Monitoring

As we saw earlier, the minio web interface already has a monitoring section with some cluster information. Nevertheless, we will deploy Prometheus, configure the scrape config and chart the collected metrics in Grafana.

First, let's generate a token for collecting metrics:

~/minio-binaries/mc admin prometheus generate mainminio

We get the following:

We need to copy the bearer_token and paste it into the file prometheus/prometheus.yml in two places (lines 7 and 16):

global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s
scrape_configs:
- job_name: minio-job-server
  bearer_token: #####RIGHT_HERE######
  metrics_path: /minio/v2/metrics/cluster
  scheme: https
  tls_config:
    ca_file: /certs/ca.crt
    #insecure_skip_verify: true
  static_configs:
  - targets: [minio.local]
- job_name: minio-job-bucket
  bearer_token: #####AND_ALSO_HERE######
  metrics_path: /minio/v2/metrics/bucket
  scheme: https
  tls_config:
    ca_file: /certs/ca.crt
    #insecure_skip_verify: true
  static_configs:
  - targets: [minio.local]

In this scrapeconfig we tell Prometheus to collect metrics from minio.local by going to https://minio.local/minio/v2/metrics/cluster and https://minio.local/minio/v2/metrics/bucket. We also specify which ca.crt to use when collecting metrics (/certs/ca.crt – we will pass the certs directory to Prometheus).
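Before starting Prometheus, the endpoint and token can be sanity-checked with curl; a request along these lines should return plain-text metrics (TOKEN stands for the bearer_token generated above, and minio.local must already be in /etc/hosts):

curl -H "Authorization: Bearer TOKEN" --cacert certs/ca.crt https://minio.local/minio/v2/metrics/cluster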

Let's look at docker-compose-monitoring.yaml:

version: "3.5"

services:
  prometheus:
    image: prom/prometheus:v2.51.2
    container_name: prometheus
    # указываем прометеусу что нужно подхватить в качестве конфиг-файла наш, 
    # который лежит в /etc/prometheus/prometheus.yml
    # это конфиг, который мы правили немного выше
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    # "выбрасываем" порт 9090
    ports:
      - 9090:9090
    volumes:
      # монтируем директорию prometheus с нашим конфигом
      - ./prometheus:/etc/prometheus
      # для хранения данных используем volume
      # мне не нужно хранить данные в директории с кластером minio
      # поэтому будем класть их в volume
      - prom_data:/prometheus
      # прокидываем директорию certs с нашими сертификатами
      - ./certs:/certs
    healthcheck:
      test: wget --no-verbose --tries=1 --spider localhost:9090 || exit 1
      interval: 5s
      timeout: 10s
      retries: 3
      start_period: 5s  
  
  grafana:
    image: grafana/grafana:10.4.2
    container_name: grafana
    ports:
      # порт, по которому мы будем ходить в grafana
      - 3000:3000
    environment:
      # указываем логин и пароль от админа
      # AUTH
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
      # указываем возможность просматривать дашборды без аутентификации
      - GF_AUTH_ANONYMOUS_ENABLED=true
    volumes:
      # монтируем директорию provisioning, в которой настройки по 
      # добавлению datasources и dashboards
      - ./grafana/provisioning:/etc/grafana/provisioning
      # монтируем директорию c json наших дашбордов
      - ./grafana/dashboards:/var/lib/grafana/dashboards
    depends_on:
      # перед разворачиванием графаны, дожидаемся готовности prometheus
      - prometheus
    healthcheck:
      test: curl --fail localhost:3000
      interval: 5s
      timeout: 10s
      retries: 3
      start_period: 10s

# раздел описания томов (volume)
volumes:
  prom_data:

Start the monitoring stack:

docker compose -f docker-compose-monitoring.yaml up -d

Using docker ps, make sure both containers are healthy. Give grafana a little more time to come up, then go to http://127.0.0.1:3000. Open the menu at the top left (three horizontal lines) and go to Dashboards.

Minio Dashboard

Minio dashboard provides information on the cluster as a whole.

Minio Bucket Dashboard displays information specifically for buckets:

Please note that Prometheus needs some time to collect at least some metrics. Also, if some panels are empty, those metrics simply have no data yet. For example, if there are no buckets at all, the bucket dashboard will show "no data"; if there have been no API requests yet, the S3 API Request panel will also be empty.

Splitting into virtual machines

If you want to deploy your minio cluster on several machines instead of one, the compose file can be slightly transformed. Let's consider the compose of the first container separately. Note that in this case we expose ports 9000 and 9001 for outside communication with the container, and hard-code the addresses of all our virtual machines into /etc/hosts (the extra_hosts section):

version: '3.7'

services:
  minio1:
    restart: always
    image: bitnami/minio:2024.7.26
    container_name: minio1.local
    hostname: minio1.local
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=MBVfbuu2NDS3Aw
      - MINIO_DISTRIBUTED_MODE_ENABLED=yes
      - MINIO_DISTRIBUTED_NODES=minio1.local,minio2.local,minio3.local,minio4.local
      - MINIO_SKIP_CLIENT=yes
      - MINIO_SCHEME=https
      - MINIO_CONSOLE_PORT_NUMBER=9001
      - MINIO_SERVER_URL=https://minio.local
      - MINIO_BROWSER_REDIRECT_URL=https://minio.local/minio/ui
    extra_hosts:
      - "minio1.local:192.168.0.55"
      - "minio2.local:192.168.0.56"
      - "minio3.local:192.168.0.57"
      - "minio4.local:192.168.0.58"
      - "minio.local:192.168.0.59"
    volumes:
      - /mnt/minio1:/bitnami/minio/data
      - ./certs:/certs
    healthcheck:
      test: [ "CMD", "curl", "-k", "https://localhost:9000/minio/health/live" ]
      interval: 30s
      timeout: 20s
      retries: 3

# node exporter deployment section
  node_exporter:
    image: prom/node-exporter:v1.8.2
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
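node exporter exposes host-level metrics on its default port 9100. To actually collect them, the Prometheus config would need an extra job along these lines (a sketch; the IPs mirror the extra_hosts above):

- job_name: node-exporter
  static_configs:
  - targets: ['192.168.0.55:9100', '192.168.0.56:9100', '192.168.0.57:9100', '192.168.0.58:9100', '192.168.0.59:9100']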

nginx listing:

version: '3.7'
services:
  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/certs
    extra_hosts:
      - "minio1.local:192.168.0.55"
      - "minio2.local:192.168.0.56"
      - "minio3.local:192.168.0.57"
      - "minio4.local:192.168.0.58"
      - "minio.local:192.168.0.59"
    ports:
      - "443:443"
  node_exporter:
    image: prom/node-exporter:v1.8.2
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'

Finally, let's look at this host's nginx.conf. Only the upstream sections matter here, where we list our nodes (thanks to the extra_hosts entries these hostnames now resolve to the VM addresses); the rest of the file is identical to the cluster version shown earlier:

   upstream minio_s3 {
      least_conn;
      server minio1.local:9000;
      server minio2.local:9000;
      server minio3.local:9000;
      server minio4.local:9000;
   }

   upstream minio_console {
      least_conn;
      server minio1.local:9001;
      server minio2.local:9001;
      server minio3.local:9001;
      server minio4.local:9001;
   }

Conclusion

I couldn't find any ready-made compose files for this on the Internet, so I decided to post mine. I hope it helps someone in their work.
