The best way to create multiple environments for a Spring Boot application using Docker Compose
In this new article from the Amplicode team, I'll show you how to create multiple Docker Compose files for different needs, for example for production and development, without drowning in copy-paste.
The article is also available in video format, so you can watch or read – whichever is more convenient for you!
Watch on Rutube || Watch on YouTube
So today we want to end up with two Docker Compose files. The first will be used for production and will contain three services: our Spring Boot application, PostgreSQL, and Kafka. The second will be used during development and will also include PostgreSQL and Kafka, plus tools for working with them, namely pgAdmin and Kafka UI.
Docker Compose for production!?
Note that there is no clear opinion regarding the use of Docker Compose in production. Some believe that Docker Compose is not suitable for production, and that it is better to use advanced orchestration systems such as Kubernetes. Others think it's quite acceptable under certain conditions. In general, as is often the case in programming, the answer depends on the context.
Ways to solve the problem
Returning to the task at hand: as you can see, Kafka and PostgreSQL must be present in both Docker Compose files, and there are several ways to solve this problem.
Using one Docker Compose file with profiles
The first way we can solve the problem is to abandon two Docker Compose files altogether, and create only one, in which we describe all the services and use profiles to indicate which services are needed during development and which are useful only in production.
# many service properties are omitted for readability
services:
  spring-petclinic:
    image: spring-petclinic:latest
    profiles:
      - prod
  postgres:
    image: postgres:16.3
    profiles:
      - prod
      - dev
  kafka:
    image: confluentinc/cp-kafka:7.6.1
    profiles:
      - prod
      - dev
  kafkaui:
    image: provectuslabs/kafka-ui:v0.7.2
    profiles:
      - dev
  pgadmin:
    image: dpage/pgadmin4:8.12.0
    profiles:
      - dev
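With a single file like this, the environment is selected at startup with the --profile flag of the docker compose CLI. A minimal sketch of the typical invocations (the service lists in the comments follow the profiles above):

```shell
# development: PostgreSQL, Kafka, and the UI tools
docker compose --profile dev up -d

# production: the application plus PostgreSQL and Kafka
docker compose --profile prod up -d

# stop the services of a profile
docker compose --profile dev down
```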
However, this option is rather limited. For example, I cannot specify that different profiles should use different ports or different init scripts.
Creating separate Docker Compose files for each environment
The second way to solve this problem is simply to copy the services I need into different Docker Compose files. However, with this option you run into all the typical copy-paste problems, such as changing the version of a service in one Docker Compose file but forgetting to do so in another. And while I start with only two Docker Compose files, over time there may be four, then eight, and so on. Keeping all of them consistent becomes harder and harder.
Using include and extends to reuse services
Finally, there is a third method, which perhaps not everyone knows about. It relies on the include and extends constructs in Docker Compose to reuse services.
The include construct allows you to include one Docker Compose file in another, essentially providing the equivalent of passing multiple Docker Compose files on the command line.
File services.yaml:
# many service properties are omitted for readability
services:
  postgres:
    image: postgres:16.3
  kafka:
    image: confluentinc/cp-kafka:7.6.1
File app-compose.yaml:
# many service properties are omitted for readability
include:
  - services.yaml
services:
  spring-petclinic:
    image: spring-petclinic:latest
This approach is actually quite convenient if you just need to reuse the same services for different Docker Compose files without any additional configuration.
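For comparison, roughly the same result could be achieved without include by passing both files to the CLI; include simply bakes that composition into the file itself. A sketch, assuming both files live in the current directory:

```shell
# without include: merge the files explicitly on every invocation
docker compose -f services.yaml -f app-compose.yaml up -d

# with include in app-compose.yaml: one file is enough
docker compose -f app-compose.yaml up -d
```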
Amplicode also supports this approach in its visual display:
But if you need to fine-tune a service for your needs, the extends keyword is a better fit.
Like include, it lets you bring services from another Docker Compose file into the current one, but it also gives us the opportunity to override and add properties.
File services.yaml:
# many service properties are omitted for readability
services:
  postgres:
    image: postgres:16.3
  kafka:
    image: confluentinc/cp-kafka:7.6.1
File app-compose.yaml:
services:
  spring-petclinic:
    image: spring-petclinic:latest
  postgres:
    extends:
      service: postgres
      file: services.yaml
  kafka:
    extends:
      service: kafka
      file: services.yaml
    # since this is an extension of the specified service,
    # we can add further configuration to its properties
    ports:
      - "9092:9092"
In our situation, the option with extends fits best: it avoids code duplication while maintaining configuration flexibility. By the way, JHipster, the Spring Boot application generator, uses a similar approach. Ilya Kuchmin gave an excellent talk about this generator at the latest JPoint; I recommend watching it, the talk was very interesting.
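A handy way to see what extends actually produced is to ask Docker Compose to print the fully resolved configuration, which shows the merged service definitions after all files are combined:

```shell
# print the effective, fully merged configuration without starting anything
docker compose -f app-compose.yaml config
```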
Let's solve the problem with extends and include
Our scenario: we are improving an existing application in which the following Docker Compose file has already been implemented:
services:
  spring-petclinic:
    image: spring-petclinic:latest
    build:
      context: .
      args:
        DOCKER_BUILDKIT: 1
    restart: always
    ports:
      - "8080:8080"
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: spring-petclinic
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      KAFKA_BOOTSTRAP_SERVERS: kafka:29092
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health || exit 1
      interval: 30s
      timeout: 5s
      start_period: 30s
      retries: 5
    depends_on:
      - postgres
  postgres:
    image: postgres:16.3
    restart: always
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: spring-petclinic
    healthcheck:
      test: pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB
      interval: 10s
      timeout: 5s
      start_period: 10s
      retries: 5
  kafka:
    image: confluentinc/cp-kafka:7.6.1
    restart: always
    ports:
      - "29092:29092"
      - "9092:9092"
    volumes:
      - kafka_data:/var/lib/kafka/data
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_NODE_ID: 1
      CLUSTER_ID: 8GyRIS62T8aMSkDJs-AH5Q
      KAFKA_PROCESS_ROLES: controller,broker
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092,CONTROLLER://kafka:9093
    healthcheck:
      test: kafka-topics --bootstrap-server localhost:9092 --list
      interval: 10s
      timeout: 5s
      start_period: 30s
      retries: 5
volumes:
  postgres_data:
  kafka_data:
Docker Compose file for reusable services
In fact, we can use this file for production with minor modifications. Let's make them. First, let's rename it to compose.prod.yaml:
Next, we will copy the Kafka and PostgreSQL services into a new Docker Compose file. It is this file that we will reuse in the other Docker Compose files intended for different purposes (production, development, test environment, and so on). To do this, in the Amplicode Explorer panel choose Docker → New → Docker Compose File and name it services.yaml.
Since all the settings in this file will apply everywhere its services are extended, the values that are most likely to differ between environments are best kept out of it. So, for PostgreSQL, let's remove the environment variables:
We will also delete the port published for external connections. If you leave it in this service and then redefine it in another Docker Compose file, it will not be overridden: the port lists from the two files are merged, and PostgreSQL ends up available on two ports.
And in the case of production, I wouldn’t want any service other than a Spring Boot application to open its ports for external connections.
For Kafka, I will remove only the ports and keep the environment variables, since they are the same for both environments.
The services.yaml file is ready:
services:
  postgres:
    image: postgres:16.3
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB
      interval: 10s
      timeout: 5s
      start_period: 10s
      retries: 5
  kafka:
    image: confluentinc/cp-kafka:7.6.1
    restart: always
    volumes:
      - kafka_data:/var/lib/kafka/data
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_NODE_ID: 1
      CLUSTER_ID: zNOJ9oWQQWCJqtCat68MLQ
      KAFKA_PROCESS_ROLES: controller,broker
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092,CONTROLLER://kafka:9093
    healthcheck:
      test: kafka-topics --bootstrap-server localhost:9092 --list
      interval: 10s
      timeout: 5s
      start_period: 30s
      retries: 5
volumes:
  kafka_data:
  postgres_data:
Docker Compose file for production
Now I need to return the production file to its previous state, namely, add the PostgreSQL and Kafka services back. To do this, I will open the Generate menu and pick Amplicode's Extend Existing Service action:
I will select a service and leave the name unchanged.
As a result, the service in compose.prod.yaml will look like this:
For the application to work correctly right after launch, I need some tables and records in the database. I can initialize the database by specifying the path to the directory containing the initialization scripts. To do this without leaving the IDE, and to avoid typos in the path, you can use the Amplicode Designer panel and its Init scripts section, specifying the directory with the scripts in the Source field:
Since putting sensitive information in plain text in Docker Compose files is a gross violation of generally accepted security practice, let's use a .env file. Start typing env_file, and Amplicode will offer code completion not only for the service attribute:
But also for the names of the .env files located in the project:
What's most convenient is that Amplicode will also display all the data from it in the corresponding sections in the Amplicode Designer panel:
Also, don't forget to remove the environment variables from the service with our Spring Boot application:
And point it at the .env file instead:
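For illustration, a hypothetical prod.env might look like the sketch below. The variable names follow the compose file above, but the values are placeholders, and the real file must be kept out of version control:

```shell
# create an example prod.env (placeholder values, do not commit real secrets)
cat > prod.env <<'EOF'
POSTGRES_HOST=postgres
POSTGRES_DB=spring-petclinic
POSTGRES_USER=root
POSTGRES_PASSWORD=change-me
KAFKA_BOOTSTRAP_SERVERS=kafka:29092
EOF
```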
All that remains is to add Kafka, and no additional steps are needed here. Open the Generate menu in the same way and extend the Kafka service:
As a result, compose.prod.yaml now looks like this:
services:
  spring-petclinic:
    image: spring-petclinic:latest
    build:
      context: .
      args:
        DOCKER_BUILDKIT: 1
    restart: always
    ports:
      - "8080:8080"
    env_file:
      - prod.env
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health || exit 1
      interval: 30s
      timeout: 5s
      start_period: 30s
      retries: 5
    depends_on:
      - postgres
  postgres:
    extends:
      service: postgres
      file: services.yaml
    env_file:
      - prod.env
    volumes:
      - ./src/main/resources/db/postgres:/docker-entrypoint-initdb.d:ro
  kafka:
    extends:
      service: kafka
      file: services.yaml
volumes:
  postgres_data:
  kafka_data:
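With the split done, the production stack is started by pointing the CLI at the dedicated file; a sketch:

```shell
# build the application image and start the production stack
docker compose -f compose.prod.yaml up -d --build

# tear it down again
docker compose -f compose.prod.yaml down
```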
Docker Compose file for development
Now let’s perform similar actions for the Docker Compose file that we need for development.
First, let's extend the PostgreSQL and Kafka services described in services.yaml. As a result, we get a compose.dev.yaml file with the following content:
services:
  postgres:
    extends:
      service: postgres
      file: services.yaml
    ports:
      - "5432:5432"
    volumes:
      - ./src/main/resources/db/postgres:/docker-entrypoint-initdb.d:ro
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
  kafka:
    extends:
      service: kafka
      file: services.yaml
    ports:
      # published so Kafka is reachable from outside the Docker network
      - "9092:9092"
volumes:
  postgres_data:
  kafka_data:
Note that I published port 5432 of PostgreSQL for external connections, and port 9092 of Kafka. If I didn't, I wouldn't be able to reach the database and message broker from outside the Docker network, so an application launched in debug mode on the host could not connect to PostgreSQL or Kafka.
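For example, when running the application in debug mode on the host, it has to connect through the published ports. A sketch, assuming the application reads the same environment variables as in the compose file and the project uses the standard Maven wrapper:

```shell
# point the locally running app at the containers via the published ports
export POSTGRES_HOST=localhost
export KAFKA_BOOTSTRAP_SERVERS=localhost:9092
./mvnw spring-boot:run
```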
Now all I have to do is add convenient tools for interacting with our services during development. Amplicode suspects that this is what I want and offers to generate them.
Interestingly, for PostgreSQL it will even configure the connection in pgAdmin automatically, so once it starts I don't have to set anything up and can start using it right away.
Click OK and the pgAdmin service is ready:
I will repeat the same for Kafka UI.
That's it:
As a result, after all our manipulations, compose.dev.yaml looks like this:
services:
  postgres:
    extends:
      service: postgres
      file: services.yaml
    ports:
      - "5432:5432"
    volumes:
      - ./src/main/resources/db/postgres:/docker-entrypoint-initdb.d:ro
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
  kafka:
    extends:
      service: kafka
      file: services.yaml
    ports:
      # published so Kafka is reachable from outside the Docker network
      - "9092:9092"
  pgadmin:
    image: dpage/pgadmin4:8.12.0
    restart: "no"
    ports:
      - "5050:80"
    volumes:
      - pgadmin_data:/var/lib/pgadmin
      - ./docker/pgadmin/servers.json:/pgadmin4/servers.json
      - ./docker/pgadmin/pgpass:/pgadmin4/pgpass
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: root
      PGADMIN_CONFIG_SERVER_MODE: "False"
      PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: "False"
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:80/misc/ping || exit -1
      interval: 10s
      timeout: 5s
      start_period: 10s
      retries: 5
    entrypoint: /bin/sh -c "chmod 600 /pgadmin4/pgpass; /entrypoint.sh;"
  kafkaui:
    image: provectuslabs/kafka-ui:v0.7.2
    restart: "no"
    ports:
      - "8989:8080"
    environment:
      DYNAMIC_CONFIG_ENABLED: "true"
      KAFKA_CLUSTERS_0_NAME: 8GyRIS62T8aMSkDJs-AH5Q
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health || exit -1
      interval: 10s
      timeout: 5s
      start_period: 60s
      retries: 5
volumes:
  postgres_data:
  kafka_data:
  pgadmin_data:
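The development stack is then started the same way; the UI addresses in the comments follow the ports published in the file above:

```shell
# start the development stack
docker compose -f compose.dev.yaml up -d

# pgAdmin:  http://localhost:5050
# Kafka UI: http://localhost:8989
```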
Conclusion
Now any developer can launch all the services needed for development in a couple of clicks, without any of the problems caused by differences in the local environment.
Today we got acquainted with two very useful Docker Compose keywords, include and extends, and learned how to use them to solve specific problems.
Subscribe to our Telegram and YouTube so as not to miss new materials about Amplicode, Spring, and related technologies!
And if you want to try Amplicode in action, you can install it for free right now, both in IntelliJ IDEA/GigaIDE and in VS Code.