How to cut a monolithic project into pieces

Diving into the world of containerization with Docker is the path to optimized application deployment and a way to simplify the lives of developers and system administrators. My name is Andrey Averkov. I started in IT in 2008 as an analyst and designer of IT systems, spent 11 years as a developer, and have held management positions in recent years. Now I lead a development team of 9 people in the Cocos group. We build and support CPA platforms (gdeslon.ru, fxpartners.ru, ads.mobisharks.com), as well as a landing page generation project, lpgenerator.ru. We have extensive experience in splitting products into parts, so today we have collected the most basic and necessary things for working with Docker. In this cheat sheet, written with @Egorov_Ilja, you will find everything you need for a successful start with Docker: from basic concepts and installation to advanced techniques for working with containers.

What are containers and Docker

Before we talk about Docker, let's dive a little into the theory of what containers are and why they are needed. If you already know this, you can skip straight to the second part (a little further down); this first part covers only the basics. And forgive me right away: there are few pictures and a lot of code.

Containers represent a method for standardizing the deployment of applications and separating them from the overall infrastructure. The application instance runs in an isolated environment that has no impact on the underlying operating system.

For developers, this means that they do not have to worry about what environment their application will operate in, or whether the necessary settings and dependencies are available. They can simply build the application, package all the dependencies and settings into a single image. This image is easy to run on different systems, without fear that the application will not be able to start.

Docker is a platform for developing, delivering and running containerized applications. Docker allows you to create containers, automate their launch and deployment, and manage their lifecycle. This platform provides the ability to run multiple containers on a single host machine, providing a high degree of flexibility and scalability when developing, testing and deploying applications.

Docker Daemon is a server that runs in the background. It listens for requests from the Docker CLI and manages the lifecycle of containers. This server is responsible for various tasks, including starting and stopping containers, managing networks and ports, and maintaining container logs.

Docker CLI is the Docker command line interface. Through commands entered at the command line, the user instructs the Docker Daemon to perform specific actions, such as creating, starting, or stopping containers.

It is important to note that the Docker CLI can be installed either on a local system or configured for remote access by interacting with the Docker Daemon via REST API. This mechanism provides flexibility in managing Docker from different environments.
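
For example, here is a hedged sketch of pointing a local CLI at a remote daemon over SSH (the user@remote-host address is just a placeholder):

docker context create remote-host --docker "host=ssh://user@remote-host" # register the remote daemon as a named context
docker context use remote-host # switch the CLI to that context
docker ps # now lists containers running on the remote host
DOCKER_HOST=ssh://user@remote-host docker ps # one-off alternative without creating a context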

Docker Daemon's functionality extends beyond simple container lifecycle management. This subsystem also regulates network settings, ports, and container logging. Listed below are the most commonly used commands for interacting with Docker Daemon, which cover various aspects of managing containers and system resources.

Docker CLI cheat sheet

Basic Commands

docker run <image_name> # run a container from the specified image
docker ps # list running containers
docker ps -a # list all containers, including stopped ones
docker stop <container_id> # stop a container
docker rm <container_id> # remove a container
docker images # list all local images
docker pull <image_name> # download an image from Docker Hub
docker rmi <image_id> # remove a local image

Creating and working with images

docker build -t <image_name>:<tag> <path_to_Dockerfile> # build an image from a Dockerfile
docker tag <old_tag> <new_tag> # mark an image with a new tag
docker tag <image_name>:<old_tag> <new_repository>/<new_tag> # rename and tag an image for pushing to another repository
docker push <repository_name>/<image_name>:<tag> # push an image to Docker Hub or another registry

Networks and ports

docker network ls # list networks
docker run -p <host_port>:<container_port> <image_name> # map ports when starting a container

Working with Docker Compose

docker-compose up # start the services defined in docker-compose.yml
docker-compose down # stop and remove the services defined in docker-compose.yml

Working with Docker Volumes

docker volume create <volume_name> # create a Docker Volume
docker run -v <volume_name>:<path_in_container> <image_name> # start a container with the Volume attached

Logging and Monitoring

docker logs <container_id> # show a container's logs
docker stats <container_id> # show a container's resource usage statistics
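
A couple of frequently used variants of these commands (the container name web is just a placeholder):

docker logs -f --tail 100 web # follow the log, starting from the last 100 lines
docker stats --no-stream # print a one-off snapshot of resource usage for all running containers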

Dockerfile

Commands for building an image are written in a plain text document, the Dockerfile:

INSTRUCTION argument(s)

where INSTRUCTION is a command for the Docker Daemon, and argument(s) are the specific values passed to that instruction.

N.B.: instructions are case-insensitive, but they are usually written in uppercase to visually distinguish them from arguments.

The instructions explain what Docker Daemon must do before, during, or after running a container from an image.

Basic Dockerfile Instructions
  • FROM specifies the base image from which the new one is created. Most often, FROM points to an image with an operating system and pre-installed components.

  • RUN specifies the commands to execute inside the container during the image build. This is how you install dependencies or update packages to the required versions.

  • COPY and ADD copy files from the local file system into the image. Most often, the application's source code is copied.

  • WORKDIR sets the working directory for subsequent instructions. This lets you work sequentially with files in different directories.

  • CMD defines the default arguments used when the container is started.

  • ENTRYPOINT specifies the command that will be executed when the container starts.

Example Dockerfile for a Python application:

# Use a base image with Python
FROM python:3.8

# Install dependencies
RUN pip install flask

# Copy the source code into the image
COPY . /app

# Set the working directory
WORKDIR /app

# Define the command that starts the application
CMD ["python", "app.py"]

Docker Image

In order to build an image from a Dockerfile and run a container, you need to (see the end-to-end example after the list):

  1. Go to the directory where the Dockerfile is located.

  2. Build an image from it with the docker build command.

  3. If necessary, check the available images with the docker images command.

  4. Start a container from the image with the docker run command.
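
For the example Dockerfile above, the whole sequence might look roughly like this (the image name my-python-app and port 5000 are assumptions; Flask listens on 5000 by default):

cd my-python-app # directory containing the Dockerfile
docker build -t my-python-app . # build the image from the current directory
docker images # check that the image appeared
docker run -p 5000:5000 my-python-app # start a container and publish the application port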

When working with images, you can use tags to indicate image versions. By default, Docker assigns the latest tag during the build.

# example of building an image with an explicit tag
docker build -t my-python-app:v1.0 .

To push an image to the Docker Hub registry, use the following commands:

docker tag my-python-app:v1.0 username/my-python-app:v1.0
docker push username/my-python-app:v1.0

Loading an image from the registry is performed with the command:

docker pull username/my-python-app:v1.0

Docker images are static, but containers are mutable. To "refresh" an image, you can run a container from it, make changes, and save the resulting state as a new image. This is done with the docker commit command:

docker commit -m "Changes added" -a "Author" container_id username/my-python-app:v1.1

The Docker image is a standard format that ensures compatibility with Docker Daemon on any platform. This characteristic allows you to transfer projects between different systems without difficulty – containers are packaged in images and are easily moved. Isolating all dependencies and components within the image ensures that the project will successfully install on the target platform with Docker without the need for additional configuration.

Thus, a standardized image format combined with an isolated container runtime makes the process of transferring projects between different systems seamless and efficient.

Container (Docker Container)

We already talked briefly about containers at the beginning, but let's consolidate and get to the point. A container is an instance of an image running in an isolated environment. One running server process is "packed" into one container.

It is, of course, possible to place several processes in one container, even to create a monolithic structure – Docker does not impose strict restrictions in this regard. However, it should be noted that this approach is considered a mistake in the design of microservice architecture. Docker provides tools for configuring how containers interact with the external environment and other containers, as well as managing resource consumption. Therefore, there is no good reason to try to fit all the components into one container.

Additional Docker features include configuring how containers interact with their external environment and limiting resource consumption. If you want a container to work with data that lives on the host system, you can mount a host directory into the container. This is done with the -v flag of docker run, which gives an additional level of flexibility when working with data inside the container.

docker run -v /path/to/host_directory:/path/in/container image_name

Docker Volumes are storage areas that are associated with a container but are not tied to its lifecycle. This means that any data the container writes to a Volume will persist even if the container is stopped or destroyed.

# command to create a Volume and attach it to a container
docker run -v my_volume:/path/in/container image_name
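
A small sketch that demonstrates this persistence, using a throwaway alpine image:

docker volume create my_volume
docker run --rm -v my_volume:/data alpine sh -c "echo hello > /data/file.txt" # first container writes to the Volume and exits
docker run --rm -v my_volume:/data alpine cat /data/file.txt # a new container still sees the data
docker volume rm my_volume # clean up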

Docker registry

A Docker registry (Docker Registry) is an image repository that performs several important functions. This service allows you to:

  • Centralized storage of images and their versions: Docker Registry provides a single place to store container images and their various versions, making them easily accessible.

  • Accelerate deployment: Images are downloaded from the registry directly to the target system, which speeds up the deployment process and prepares containers for operational use.

  • Automation of the processes of building, testing and deploying containers: Docker Registry integrates into development processes to automate the building, testing, and deployment of containers.

Docker Hub is a public registry that hosts publicly available images such as Linux distributions, databases, programming languages, and more. Importantly, organizations can create their own private Docker registries to store sensitive data, providing an additional layer of security and access control to container images.

To pass environment variables to a container, use the -e flag together with the docker run command:

docker run -e MY_VARIABLE=value image_name

A container can publish ports to communicate with the "outside world". This is especially relevant for web applications, where the published port is used to reach the web server.

docker run -p 8080:80 image_name

You can impose limits on the resources used by the container, such as the amount of RAM or the number of CPU cores.

docker run --memory 512m --cpus 0.5 image_name

Creating a private Docker registry

Installing Docker Distribution

Docker Distribution is the official implementation of the Docker Registry protocol. Let's install it on a server that will serve as a private registry.

docker run -d -p 5000:5000 --restart=always --name registry registry:2

This command starts a private registry on port 5000. Optionally, you can configure HTTPS using an SSL certificate.
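
As a hedged sketch, the official registry image can be pointed at certificate files through environment variables (the /opt/certs path and domain.crt/domain.key file names are assumptions):

docker run -d -p 5000:5000 --restart=always --name registry \
  -v /opt/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2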

Using the Private Registry

You can now use the registry to store and distribute private Docker images.

# tag the image
docker tag my-image localhost:5000/my-image
# push the image to the private registry
docker push localhost:5000/my-image
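
Pulling the image back from the private registry works the same way as with Docker Hub, only with the registry address in front of the image name:

docker pull localhost:5000/my-image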

Running a monolith on a server without Docker

The application's README contains detailed instructions on how to deploy it on a server. As an example, let's take the README of our monolithic application, deliberately shortened:

Installing packages:

sudo apt-get install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
       libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
       xz-utils tk-dev libffi-dev liblzma-dev python-openssl git npm redis-server vim ffmpeg

Installing pyenv:

$ curl https://pyenv.run | bash
$ echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.bashrc && \
echo 'eval "$(pyenv init -)"' >> ~/.bashrc && \
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bashrc && source ~/.bashrc

Installing Python 3.6.9:

pyenv install 3.6.9

If you have Ubuntu 20+ installed and an error occurs when trying to install, use the following commands:

$ sudo apt install clang -y
$ CC=clang pyenv install 3.6.9

Creating a virtual environment:

$ pyenv virtualenv 3.6.9 cpa-project

Activating the virtual environment:

$ pyenv activate cpa-project

Installing NodeJS 8.11.3:

$ npm i n -g
$ sudo n install 8.11.3
$ sudo n # in the menu that appears, select version 8.11.3

Cloning the project:

$ git clone git@github.com:User/cpa-project.git

Let's go to the project:

$ cd cpa-project

Installing Python dependencies (make sure the virtual environment is active):

$ pip install -U pip
$ pip install -r requirements.txt

Installing NodeJS dependencies:

$ npm install

Running database migrations and creating test data:

$ python manage.py migrate
$ python manage.py generate_test_data

Building the client side:

$ npm run watch # for development with automatic rebuild
$ npm run build # for production

As you can see, launching the monolith on a server means writing and running a lot of commands, and the process requires human involvement.

Build the same project in Docker

Let's see how the same application can be deployed in a Docker environment. First, let's select the services for the compose file and come up with names for them:

Compose file
version: '3'
services:
 {stage}-project-ex--app:
   container_name: {stage}-project-ex--app
   build:
     context: ..
     dockerfile: Dockerfile
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   depends_on:
     - {stage}-project-ex--redis
     - {stage}-project-ex--clickhouse
     - {stage}-project-ex--postgres
     - {stage}-project-ex--mailhog
   volumes:
     - ..:/app/
     - ./crontab.docker:/etc/cron.d/crontab.docker
   command: /start
   labels:
     - "traefik.enable=true"
     - "traefik.http.routers.{stage}_fp_app.rule=Host(`web.{stage}.project-ex.io`)"
     - "traefik.http.services.{stage}_fp_app.loadbalancer.server.port=8000"
     - "traefik.http.routers.{stage}_fp_app.entrypoints=websecure"
     - "traefik.http.routers.{stage}_fp_app.tls.certresolver=stage_project-ex_app"

 {stage}-project-ex--app-cron:
   container_name: {stage}-project-ex--app-cron
   build:
     context: ..
     dockerfile: Dockerfile
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   depends_on:
     - {stage}-project-ex--redis
     - {stage}-project-ex--clickhouse
     - {stage}-project-ex--postgres
     - {stage}-project-ex--mailhog
   volumes:
     - ..:/app/
     - ./crontab.docker:/etc/cron.d/crontab.docker
   command: sh -c "printenv >> /etc/environment && crontab /etc/cron.d/crontab.docker && cron -f"

 {stage}-project-ex--front:
   container_name: {stage}-project-ex--front
   build: ./frontend-builder
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   depends_on:
     - {stage}-project-ex--app
   volumes:
     - ..:/app/

 {stage}-project-ex--clickhouse:
   container_name: {stage}-project-ex--clickhouse
   image: yandex/clickhouse-server:20.4.6.53
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   volumes:
     - /home/project-ex/stands/{stage}/docker_data/clickhouse/data:/var/lib/clickhouse
     - ./docker_data/clickhouse/schema:/var/lib/clickhouse/schema
     - ./docker_data/clickhouse/users.xml:/etc/clickhouse-server/users.xml
     - ./docker_data/clickhouse/project-ex.xml:/etc/clickhouse-server/users.d/default-user.xml
   labels:
     - "traefik.enable=true"
     - "traefik.tcp.routers.{stage}_fp_clickhouse.rule=HostSNI(`*`)"
     - "traefik.tcp.routers.{stage}_fp_clickhouse.entryPoints=clickhouse"
     - "traefik.tcp.routers.{stage}_fp_clickhouse.service={stage}_fp_clickhouse"
     - "traefik.tcp.services.{stage}_fp_clickhouse.loadbalancer.server.port=8123"

 {stage}-project-ex--postgres:
   container_name: {stage}-project-ex--postgres
   image: postgres:13.11-alpine
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   stdin_open: true
   tty: true
   volumes:
     - {stage}-project-ex--postgres:/var/lib/postgresql
   labels:
     - "traefik.enable=true"
     - "traefik.tcp.routers.postgres.rule=HostSNI(`*`)"
     - "traefik.tcp.routers.postgres.entryPoints=postgres"
     - "traefik.tcp.routers.postgres.service=postgres"
     - "traefik.tcp.services.postgres.loadbalancer.server.port=5432"

 {stage}-project-ex--redis:
   container_name: {stage}-project-ex--redis
   image: redis:alpine
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   volumes:
     - {stage}-project-ex--redis:/data

 {stage}-project-ex--mailhog:
   container_name: {stage}-project-ex--mailhog
   image: mailhog/mailhog:v1.0.1
   env_file:
     - ".env.{stage}"
   networks:
     - stage_project-ex_network
   labels:
     - "traefik.enable=true"
     - "traefik.http.routers.{stage}_fp_mailhog.rule=Host(`mail.{stage}.project-ex.io`)"
     - "traefik.http.services.{stage}_fp_mailhog.loadbalancer.server.port=8025"
     - "traefik.http.routers.{stage}_fp_mailhog.entrypoints=websecure"
     - "traefik.http.routers.{stage}_fp_mailhog.tls.certresolver=stage_project-ex_app"

volumes:
 {stage}-project-ex--postgres:
   name: {stage}-project-ex--postgres
   driver: local
 {stage}-project-ex--redis:
   name: {stage}-project-ex--project-ex
   driver: local

networks:
 stage_project-ex_network:
   external: true
   name: stage_project-ex_network

Docker Compose is a tool for running multi-container applications in Docker. The .yaml file specifies all the necessary settings and commands. Containers are launched from a compose file with the docker-compose up command.
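
In this project the command is wrapped in a Makefile (shown below); under the hood the call boils down to something like this, assuming the dev stage as an example:

docker-compose -f ./docker-compose/dev.yml --env-file=./docker-compose/.env.dev up -d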

In the .yaml file we can see which containers (and which image versions) our previously monolithic application now consists of. The {stage} placeholder is the GitLab branch from which the containers will be brought up. If desired, we can run containers from different branches on a single server.

Note that simply splitting the application into containers does not stop it from being monolithic. The microservice nature of a product is laid down at the design and creation stage, when each task is separated into its own service.

The build section of the compose file builds our application container from a Dockerfile, which contains the following instructions:

FROM python:3.6.9-buster

ENV DJANGO_SETTINGS=advgame.local_settings

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
 # dependencies for building Python packages
 && apt-get install -y build-essential \
 # psycopg2 dependencies
 && apt-get install -y libpq-dev \
 # Translations dependencies
 && apt-get install -y gettext \
 # Cron
 && apt-get install -y cron \
 # Vim
 && apt-get install -y vim \
 # cleaning up unused files
 && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
 && rm -rf /var/lib/apt/lists/*

# Set timezone
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Have to invalidate cache here because Docker is bugged and doesn't invalidate cache
# even if requirements.txt did change


ADD ../requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

COPY ./docker-compose/start.sh /start
RUN chmod +x /start

# Copy hello-cron file to the cron.d directory
COPY ./docker-compose/crontab.docker /etc/cron.d/crontab.docker
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/crontab.docker
# Apply cron job
RUN crontab /etc/cron.d/crontab.docker

COPY . /app

WORKDIR /app
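
The /start script copied into the image (docker-compose/start.sh) is not shown here; a minimal hypothetical version for a Django application might look like this (the real script may well use gunicorn or similar instead of runserver):

#!/bin/sh
set -e
python manage.py migrate --noinput # apply migrations on container start (assumption)
python manage.py runserver 0.0.0.0:8000 # serve the app on port 8000, the port referenced in the Traefik labels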

Next, we wrote a Makefile that lets us manage the project configuration:

dir=${CURDIR}
project=project-ex
env=local
interactive:=$(shell [ -t 0 ] && echo 1)
ifneq ($(interactive),1)
  optionT=-T
endif

uid=$(shell id -u)
gid=$(shell id -g)
# Command for docker-compose exec
c=
# Parameter for docker-compose exec
p=

dc:
  @docker-compose -f ./docker-compose/$(env).yml --env-file=./docker-compose/.env.$(env) $(cmd)

compose-logs:
  @make dc cmd="logs" env="$(env)"

cp-env:
  [ -f ./docker-compose/.env.$(env) ] && echo ".env.$(env) file exists" || cp ./docker-compose/.env.example ./docker-compose/.env.$(env)
  sed -i "s/{stage}/$(env)/g" ./docker-compose/.env.$(env)
  @if [ "$(env)" = "local" ] ; then \
     sed -i "s/{domain}/ma.local/g" ./docker-compose/.env.$(env) ; \
  fi;
  @if [ "$(env)" = "dev" ] ; then \
     sed -i "s/{domain}/dev.project-ex.io/g" ./docker-compose/.env.$(env) ; \
  fi;

cp-yml:
  @if [ ! "$(env)" = "local" ] ; then \
     [ -f ./docker-compose/$(env).yml ] && echo "$(env).yml file exists" || cp ./docker-compose/stage.example.yml ./docker-compose/$(env).yml ; \
     sed -i "s/{stage}/$(env)/g" ./docker-compose/$(env).yml; \
  fi;

init:
  docker network ls | grep stage_project-ex_network > /dev/null || docker network create stage_project-ex_network
  @make cp-env
  @make cp-yml
  [ -f ./docker-compose/.env.$(env) ] && echo ".env.$(env) file exists" || cp ./docker-compose/.env.$(env).example ./docker-compose/.env.$(env)
  @make dc cmd="up -d"
  @make dc cmd="start $(env)-$(project)--postgres" env="$(env)"
  sleep 5 && cat ./docker-compose/docker_data/pgsql/data/init_dump.sql | docker exec -i $(env)-$(project)--postgres psql -U project-ex
  @make dc cmd="exec $(env)-$(project)--app python ./manage.py migrate" env="$(env)"
  @make ch-restore env="$(env)"
  @make build-front env="$(env)"
  @make collect-static env="$(env)"

create_test_db:
  @make dc cmd="exec $(env)-$(project)--postgres dropdb --if-exists -U project-ex project-ex_test" env="$(env)" > /dev/null
  @make dc cmd="exec $(env)-$(project)--postgres createdb -U project-ex project-ex_test" env="$(env)"
  cat ./docker-compose/docker_data/pgsql/data/init_dump.sql | docker exec -i $(env)-$(project)--postgres psql -U project-ex project-ex_test

bash-front:
  @make dc cmd="exec $(env)-$(project)--front sh" env="$(env)"

The Makefile creates a kind of short command alias layer for managing the services. With it you can initialize the project, recreate the database, build the front end, and so on. We will use the same commands in the GitLab CI file. Next we run make init to initialize the project; an example is shown below.
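
For example, with the Makefile above, bringing up a dev stand might look like this (the dev stage name is just an example):

make init env=dev # create the network, env files, containers, database and front-end build
make compose-logs env=dev # view the logs of the compose services
make dc cmd="ps" env=dev # run an arbitrary docker-compose command for this stage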


Let's move on to the compose file of an auxiliary service, Traefik. This service does not require frequent restarts; it is enough to configure it "once", and then it monitors all of the Docker daemon's containers by itself.

Compose file
version: '3'
services:
 stage-project-ex--traefik:
   image: "traefik:v3.0.0-beta2"
   container_name: "stage-project-ex--traefik"
   command:
     - "--log.level=DEBUG"
     - "--providers.docker=true"
     - "--providers.docker.exposedbydefault=false"
     - "--entrypoints.web.address=:80"
     - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
     - "--entrypoints.websecure.address=:443"
     - "--entrypoints.postgres.address=:5432"
     - "--entrypoints.clickhouse.address=:8123"
     - "--entrypoints.mongo.address=:27017"
     - "--certificatesresolvers.stage_project-ex_app.acme.httpchallenge=true"
     - "--certificatesresolvers.stage_project-ex_app.acme.httpchallenge.entrypoint=web"
     - "--certificatesresolvers.stage_project-ex_app.acme.email=it@email-ex.com"
     - "--certificatesresolvers.stage_project-ex_app.acme.storage=/letsencrypt/acme.json"
   restart: always
   ports:
     - 80:80
     - 443:443
     - 5432:5432
     - 8123:8123
     - 27017:27017
   networks:
     - stage_project-ex_network
   volumes:
     - "/opt/letsencrypt:/letsencrypt"
     - "/var/run/docker.sock:/var/run/docker.sock:ro"

networks:
 stage_project-ex_network:
   external: true
   name: stage_project-ex_network

Port settings

The first compose file doesn't say anything about ports. The main reason is that we will access the project by domain name through Traefik. Traefik runs separately from the project's compose files and versions: it learns about new containers from the Docker Daemon, and the routing configuration for the application is written in the compose file under the labels keyword.

Traefik proxies traffic to the container based on the hostname (and not only over HTTP/HTTPS), requests a Let's Encrypt (LE) certificate, and renews it by itself. There is no need to specify which IP or hostname to proxy to, or to change the Traefik config.

If we bring up local containers with a local domain name, requesting an LE certificate will not work. In that case you will have to talk to the web over plain HTTP and disable the redirect to HTTPS in Traefik.

The image version traefik:v3.0.0-beta2 was not chosen by chance; it supports various domain names for routing to PostgreSQL containers. In the example above, using beta2 is not necessary, since any request on port 5432 will be proxied to a single PostgreSQL container.

When there are several postgres containers

To ensure work with multiple PostgreSQL containers and configure routing by domain names, it is necessary to generate a self-signed Wildcard certificate for the local domain and integrate information about it into the Traefik configuration.

This process is required solely to provide external access to PostgreSQL containers directly. When using a Docker network where the containers are located, the use of Traefik becomes redundant.
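
For example, from any container attached to the same network you can reach PostgreSQL directly by its container name, no Traefik involved (the dev stage name and the project-ex user are taken from the files above):

docker run --rm -it --network stage_project-ex_network postgres:13.11-alpine \
  psql -h dev-project-ex--postgres -U project-ex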

To the Traefik service in the compose file we add:

    command:
      - "--providers.file.filename=/conf/dynamic-conf.yml"
    volumes:
      - "./tls:/tls"
      - "./conf:/conf"

In conf/dynamic-conf.yml we register the certificate files:

tls:
  certificates:
    - certFile: /tls/something.com.pem
      keyFile: /tls/something.com.key

In the tls/ directory we put the wildcard certificate files created by running the Bash script mkcert.sh something.com:

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

DOMAIN_NAME=$1

if [ ! -f $1.key ]; then

  if [ -n "$1" ]; then
    echo "You supplied domain $1"
    SAN_LIST="[SAN]\nsubjectAltName=DNS:localhost, DNS:*.localhost, DNS:*.$DOMAIN_NAME, DNS:$DOMAIN_NAME"
    printf $SAN_LIST
  else
    echo "No additional domains will be added to cert"
    SAN_LIST="[SAN]\nsubjectAltName=DNS:localhost, DNS:*.localhost"
    printf $SAN_LIST
  fi

  openssl req \
    -newkey rsa:2048 \
    -x509 \
    -nodes \
    -keyout "$1.key" \
    -new \
    -out "$1.crt" \
    -subj "/CN=compose-dev-tls Self-Signed" \
    -reqexts SAN \
    -extensions SAN \
    -config <(cat /etc/ssl/openssl.cnf <(printf $SAN_LIST)) \
    -sha256 \
    -days 3650

  echo "new TLS self-signed certificate created"

else

  echo "certificate files already exist. Skipping"

fi

We edit the labels of the postgres container in the compose file:

    labels:
      - "traefik.enable=true"
      - "traefik.tcp.routers.qa222_postgres.rule=HostSNI(`qa222.something.com`)"
      - "traefik.tcp.routers.qa222_postgres.entryPoints=postgres"
      - "traefik.tcp.routers.qa222_postgres.service=qa222_postgres"
      - "traefik.tcp.services.qa222_postgres.loadbalancer.server.port=5432"
      - "traefik.tcp.routers.qa222_postgres.tls=true"

CI/CD project

Below is our GitLab CI file; in it you can see the commands mentioned earlier. To deploy the project on the server, CI/CD was configured, which allows us to:

  • check the code quality;

  • run tests;

  • build Docker images;

  • deliver it all to the server.

This CI uses the Makefile aliases we described above.

variables:
 APP4_ENV: "gitlab"

default:
 tags:
   # gitlab runner tag
   - dev-project-ex-1

stages:
 - ci
 - delivery
 - build
 - deploy

.before_script_template: &build_test-integration
 before_script:
 - echo "Prepare job"
 - sed -i "s!env=local!env=${APP4_ENV}!" ./Makefile
 - make cp-env
 - make cp-yml
 - make up

.verify-code: &config_template
 stage: ci
 <<: *build_test-integration
 only:
   refs:
     - merge_requests
     - develop
     - master

Linter:
 <<: *config_template
 script:
   - make build
   - make linter

Tests:
 <<: *config_template
 script:
   - make tests

Delivery:
 stage: delivery
 script:
   - echo "Rsync from $CI_PROJECT_DIR"
   - sudo rm -rf "/home/project-ex/stands/dev/project-ex/!\(static|node_modules\)"
   - sed -i "s!env=local!env=dev!" ./Makefile
   - rsync -av --delete-before --no-perms --no-owner --no-group
     --exclude "node_modules/"
     --exclude "__pycache__/"
     --exclude "logs/"
     --exclude "docker-compose/docker_data/clickhouse/data/"
     $CI_PROJECT_DIR/ /home/project-ex/stands/dev/project-ex
 only:
   - develop
 except:
   - master

Build:
 stage: build
 script:
   - echo "cd /home/project-ex/stands/dev/project-ex"
   - cd /home/project-ex/stands/dev/project-ex
   - echo "make cp-env"
   - make cp-env
   - echo "cp-yml"
   - make cp-yml
   - echo "build"
   - make build
 only:
   - develop
 except:
   - master

Build-front:
 stage: build
 script:
   - echo "cd /home/project-ex/stands/dev/project-ex"
   - cd /home/project-ex/stands/dev/project-ex
   - echo "build-front"
   - make build-front
 only:
   changes:
     - '*.js'
     - '*.css'
     - '*.less'
   refs:
     - develop
     - master

Deploy:
 stage: deploy
 script:
   - cd /home/project-ex/stands/dev/project-ex
   - mkdir -p logs
   - make restart
   - make migrate
   - make collect-static
 only:
   - develop
 except:
   - master

Pros and cons of Docker

Docker uses the host operating system kernel, which imposes certain restrictions on its use. The tool only works on a 64-bit Linux installation with kernel version 3.10 or later. In addition, it is focused on server applications and does not always support working with graphical interfaces. It is important to note that incorrect container configuration or insufficient security measures can pose a threat to the entire system.

However, despite these limitations, Docker's benefits greatly outweigh them. Sharing the host operating system kernel and isolating applications through namespaces and cgroups eliminates the need to run a separate virtual machine for each task. Docker also optimizes the distribution of resources between containers and provides application lifecycle management, including starting, stopping, scaling, and updating containers. There is also a robust Docker Hub ecosystem, home to hundreds of pre-built images, and an active community where you can discuss questions or find solutions.

It's important to remember that while there are benefits to microservice architecture, a monolithic approach also has its place and may be appropriate, especially depending on the project requirements and the competencies of the development team.
