Cool features in Docker Compose – profiles and templates

Now we will tell you a story: a story about how we developed an API and decided to write E2E tests for it. The tests themselves were simple: they described and checked the functionality of the API. Running them, however, turned out to be tricky. But first things first.

In this article, we will consider the solution we came to using the example of a simple Docker Compose configuration.

Manually running tests

We were looking for a convenient tool for writing E2E tests for APIs. Almost immediately we came across Karate. We looked through its documentation and decided at first to run the tests manually with the following command:

java -jar karate.jar .
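
For reference, obtaining the jar before running this command looks roughly like this (a sketch that uses the same release the Dockerfile below downloads and assumes a Java runtime is already installed):

curl -o karate.jar -L 'https://github.com/intuit/karate/releases/download/v1.3.0/karate-1.3.0.jar'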

But it turned out that to do this you need to install Karate and a Java runtime for it, and then write setup instructions for three operating systems: Windows, Linux, and macOS.

Using Docker Compose

To avoid installing multiple tools (and specific versions at that!), we decided to run the tests in Docker, where all the dependencies are described in a Dockerfile and installed into the image automatically at build time. Our Dockerfile for running Karate tests is shown below; we were inspired by the official documentation. To run the tests in Docker we chose Docker Compose, since it lets us start several services at once with a single command. In our case these are the API, the database, and a container with the tests.

To launch all the services, we wrote a docker-compose.yml file:

version: '3.8'

services:
  db:
    image: postgres:13
    container_name: 'db'
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - 5432:5432

  api:
    container_name: 'api'
    build:
      context: .
      dockerfile: Api/Dockerfile # Path to the Dockerfile the API image
                                 # is built from
    ports:
      - 5000:80
    depends_on: # Migrations are applied to the database automatically when the API starts,
                # so the API starts only after the database is already up
      - db

  karate_tests:
    container_name: 'karate_tests'
    build:
      dockerfile: KarateDockerfile # Path to the Dockerfile the Karate image is built from
                                   # (the karate.jar file is downloaded; Java is already
                                   # present inside the container)
      context: .
    depends_on:
      - api # If the API is not running, the tests will fail, so we wait for the API to start
    command: ["/karate"] # Run the tests from the /karate folder
    volumes:
      - .:/karate # Mount the folder with the tests into /karate
    environment:
      API_URL: 'http://api'

KarateDockerfile

FROM openjdk:11-jre-slim

# Install curl and unzip in a single layer and clean up the apt cache afterwards
RUN apt-get update && apt-get install -y curl unzip && rm -rf /var/lib/apt/lists/*

RUN curl -o /karate.jar -L 'https://github.com/intuit/karate/releases/download/v1.3.0/karate-1.3.0.jar'

ENTRYPOINT ["java", "-jar", "/karate.jar"]
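
With these files in place, the whole stack (database, API, and Karate tests) can be brought up with a single command, for example:

docker-compose --file docker-compose.yml up --build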

Two Docker Compose Files

It looks good and convenient, but not everyone on the team needs all the containers. For example, a developer may only need the database, since they run the API from their IDE.
We decided to split the Docker Compose file in two: docker-compose.yml, which remained unchanged, and docker-compose-db.yml, which contains only the database container:

docker-compose-db.yml

version: '3.8'

services:
  db:
    image: postgres:13
    restart: always
    container_name: 'db'
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - 11237:5432
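
This configuration is started in the same way, for example in detached mode, after which the database is reachable on the published port 11237 (the psql command here is just an illustration and assumes a local PostgreSQL client):

docker-compose --file docker-compose-db.yml up --detach
psql --host localhost --port 11237 --username postgres --dbname postgres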

Running tests in the pipeline

The configuration is ready, which means we can run the tests in the pipeline for merge requests to the main branch. For this we used the docker-compose.yml file, as it has everything needed to run the tests.
We wrote a pipeline for GitHub in which we launched all the containers, including the one with the tests themselves.

run: |
  # Capture all the logs into a variable so that we can parse them later
  # and determine whether the tests passed or failed
  LOGS=$(docker-compose --file docker-compose.yml up --abort-on-container-exit)
  # Extract the counters from Karate's summary output
  # (this assumes the summary line format "scenarios: N | passed: N | failed: N")
  PASSED=$(echo "$LOGS" | grep -oE 'passed: *[0-9]+' | grep -oE '[0-9]+' | tail -1)
  FAILED=$(echo "$LOGS" | grep -oE 'failed: *[0-9]+' | grep -oE '[0-9]+' | tail -1)
  # Check that there are no failed tests
  if [ "$FAILED" -gt 0 ]; then
    echo "Failed tests found! Failing the pipeline..."
    exit 1
  fi
  # Check that some tests actually ran and passed, to avoid a false-positive pipeline run
  if [ "$PASSED" -eq 0 ]; then
    echo "No tests passed! Failing the pipeline..."
    exit 1
  fi

It turned out to be a bit of a hack, but we have not found a better way to fail the pipeline when Karate tests fail.
In fact, we can use both files when starting the containers by specifying each of them via the --file flag:

docker-compose --file docker-compose.yml --file docker-compose-db.yml up

But this solution is not the best idea. You can get confused between the files, and each time you have to check that the same service is not defined in several files at once. And there will be a lot of files: as many as there are launch configurations. We identified at least the following (a sketch of the corresponding launch commands follows the list):

  • local environment — the database and API are launched;

  • db-only — only the database is launched for interaction with it by the service launched in the IDE;

  • e2e-local-environment — the API, database, and container with Karate tests are launched, which test the API running in Docker;

  • e2e-production-environment — we are supporters of TDD (Test-Driven Development) and of testing in production, so we want to run the tests not only in the feature branch before merging into the main branch, but also after deploying to production. This configuration runs only the karate_tests container, which targets production.
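
To make the pain concrete, launching each of these configurations with separate files would look roughly like this (all file names other than docker-compose.yml and docker-compose-db.yml are hypothetical):

# db-only
docker-compose --file docker-compose-db.yml up

# local environment (database + API)
docker-compose --file docker-compose-db.yml --file docker-compose-api.yml up

# e2e-local-environment (database + API + Karate tests)
docker-compose --file docker-compose-db.yml --file docker-compose-api.yml --file docker-compose-karate.yml up

# e2e-production-environment (only the Karate tests, targeting production)
docker-compose --file docker-compose-karate-production.yml up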

Profiles

We dug into the Docker Compose documentation and found a handy tool: profiles. This feature lets us separate the services as needed while keeping them all in one file. To start only the necessary services, you pass the --profile argument with the desired profile name to the docker-compose up command:

docker-compose --profile db-only up

We merged all the containers back into a single docker-compose.yml file and assigned profiles to them, so that only the containers needed at the moment are selected for launch.

version: '3.8'

services:
  db:
    image: postgres:13
    container_name: 'db'
    profiles: ['db-only', 'e2e-local-environment', 'local-environment'] # the service is started only with one of these profiles
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - 5432:5432

  api:
    container_name: 'api'
    profiles: ['e2e-local-environment', 'local-environment']
    build:
      context: .
      dockerfile: Api/Dockerfile
    ports:
      - 5000:5000
    depends_on:
      - db

  karate_tests:
    container_name: 'karate_tests'
    profiles: ['e2e-local-environment']
    build:
      dockerfile: KarateDockerfile
      context: .
    depends_on:
      - api
    command: ['/karate']
    volumes:
      - .:/karate
    environment:
      API_URL: 'http://api'
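
With the profiles in place, each configuration from the list above is just a different value of the --profile argument, for example:

# only the database
docker-compose --profile db-only up

# database + API
docker-compose --profile local-environment up

# database + API + Karate tests
docker-compose --profile e2e-local-environment up --abort-on-container-exit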

Templates

Now we need to run the same E2E tests, but against the deployed service. To do this, we need to replace the address in API_URL with the production API address.
We created a second karate_tests service in which the API_URL environment variable has a different value:

version: '3.8'

services:
  # the other services (API, database)

  local_karate_tests:
    container_name: 'local_karate_tests'
    profiles: ['e2e-local-environment']
    build:
      dockerfile: KarateDockerfile
      context: .
    depends_on:
      - api
    command: ['/karate']
    volumes:
      - .:/karate
    environment:
      API_URL: 'http://api'

  production_karate_tests:
    container_name: 'production_karate_tests'
    profiles: ['e2e-production-environment']
    build:
      dockerfile: KarateDockerfile
      context: .
    depends_on:
      - api
    command: ['/karate']
    volumes:
      - .:/karate
    environment:
      API_URL: 'https://my-deployed-service.com'

But it turns out that these two services running the Karate tests duplicate each other almost completely; only API_URL differs.
We solved this problem with the Docker Compose extends block, which allows a service to inherit from another service and override only the part of the configuration that differs between them.
We created a base_karate_tests template in docker-compose.yml. It contains everything that does not change between the containers: the KarateDockerfile the image is built from, the command to run, and the volume.
Now we apply this template to the services using the extends block like this:

version: '3.8'

services:
  # the other services (API, database)

  base_karate_tests:
    build:
      dockerfile: KarateDockerfile
      context: .
    command: ['/karate']
    volumes:
      - .:/karate  

  local_karate_tests:
    container_name: 'local_karate_tests'
    profiles: ['e2e-local-environment']
    extends:
      service: base_karate_tests
    environment:
      API_URL: 'http://api'

  production_karate_tests:
    container_name: 'production_karate_tests'
    profiles: ['e2e-production-environment']
    extends:
      service: base_karate_tests
    environment:
      API_URL: 'https://my-deployed-service.com'
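
Running the tests against production now only requires choosing the corresponding profile, for example:

docker-compose --profile e2e-production-environment up --abort-on-container-exit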

A single template does not take up much space. However, in a system with more templates and inheriting services, the templates themselves can be moved into a separate file and referenced from there. To do this, let's create a templates.yml file:

version: '3.8'

services:
  base_karate_tests:
    build:
      dockerfile: KarateDockerfile
      context: .
    command: ['/karate']
    volumes:
      - .:/karate

We describe all the necessary templates there, and in docker-compose.yml we use the file parameter, in addition to service, to apply a template from another file:

extends:
  file: templates.yml
  service: base_karate_tests

For a small system this is not necessary, but for more complex automation it may be just the thing.

Results

Using Docker Compose, we were able to successfully (and happily) run E2E API tests both in the merge request pipeline before a feature is merged and after that feature is deployed to production.

When running in Docker Compose, we went through the following evolution of the solution:

  • Separate files for running specific services, where the manifest is duplicated;

  • Linking services to profiles in which they should be launched;

  • Moving duplicate service code into templates.

As a result, we have a compact Docker Compose file from which we can launch only the necessary services with a single command and a single profile argument. This solution also extends naturally when new launch modes are added.

If you have encountered similar problems, tell us how you solved them.

Authors: Kolesnikova Anna, Shinkarev Alexander
Proofreading and feedback: Yadryshnikova Maria, Chernykh Viktor, Sipatov Maxim, Magdenko Yulia
Design: Margarita Shur
