Deploying PostgreSQL, Redis and RabbitMQ in a Kubernetes cluster

In this article, I will not explain why all this is needed, or discuss the advantages and disadvantages of this solution. Think of this article as a quick reference for deploying a database and a message queue in a Kubernetes dev cluster.

Content

  1. Introduction

  2. Installing PostgreSQL

  3. Installing Redis

  4. Installing RabbitMQ


Introduction

PostgreSQL, Redis, and RabbitMQ installations are very similar to each other. Three main stages can be distinguished:

  1. Create the storage resources: a StorageClass, a PersistentVolume, and a PersistentVolumeClaim.

  2. Install the corresponding Bitnami Helm chart, pointing it at the existing claim.

  3. Connect to the deployed service and verify that it works.

I will not explain what PV and PVC are. There is an excellent lecture on this topic, after which you can safely return to these instructions. Before starting work, you need to minimally configure the Kubernetes cluster. Here are the minimum requirements:

  1. Kubernetes version 1.20+.

  2. One master node and one worker node.

  3. A configured Ingress controller.

  4. If the cluster is deployed on bare metal, then you need a substitute for a cloud load balancer. For example, install MetalLB or PorterLB.

  5. Helm is installed on the virtual machine.
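
A quick way to sanity-check these prerequisites (the ingress-nginx namespace below is my assumption; adjust it to wherever your Ingress controller lives):

kubectl version --short             # client and server should report v1.20+
kubectl get nodes                   # expect at least one master and one worker node
kubectl get pods -n ingress-nginx   # assumes the ingress-nginx controller namespace
helm version                        # confirms Helm is available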

How to create your own cozy dev cluster on bare metal is described in detail in a previous article.

Installing PostgreSQL

Let’s create a StorageClass resource. To do this, insert the following configuration into the storage.yaml file:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Let’s apply the manifest:

kubectl apply -f storage.yaml

Let’s create a PersistentVolume resource. To do this, paste the following manifest into the pv.yaml file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-for-pg
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /devkube/postgresql
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 457344.cloud4box.ru

In matchExpressions, specify the name of the node on which the disk will be mounted. You can view the names of the available nodes using the command:

kubectl get nodes
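
The nodeAffinity rule actually matches the node’s kubernetes.io/hostname label rather than its display name; the two usually coincide, but if in doubt you can print the label explicitly:

kubectl get nodes -o custom-columns='NAME:.metadata.name,HOSTNAME:.metadata.labels.kubernetes\.io/hostname'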

For convenience, we will mount the disk directly on the master node, although this can be done on any node in the list. We will use the /devkube/postgresql directory. Go to that machine and create the directory with the following command:

mkdir -p /devkube/postgresql

Let’s create the PersistentVolume resource:

kubectl apply -f pv.yaml

Let’s check the status:

kubectl get pv

Now let’s write the PersistentVolumeClaim manifest to the pvc.yaml file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pg-pvc
spec:
  storageClassName: "local-storage"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

kubectl apply -f pvc.yaml

Let’s see the state of the resource:

kubectl get pvc

The PVC resource is in Pending status, waiting to be bound. Now is the time to deploy Postgres on the cluster. Add the Bitnami repository:

helm repo add bitnami https://charts.bitnami.com/bitnami

Install Helm Chart with Postgres:

helm install dev-pg bitnami/postgresql --set primary.persistence.existingClaim=pg-pvc,auth.postgresPassword=pgpass
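
The chart accepts many more parameters than the two overridden here; to see everything that can be configured, you can dump the chart’s default values:

helm show values bitnami/postgresql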

Let’s look at the PVC state:

kubectl get pvc

The resource is now in Bound status, and the Postgres pod will write its data to the /devkube/postgresql directory. Let’s look at the state of the pod and statefulset:

kubectl get pod,statefulset

The database has been successfully deployed; now let’s try to connect to it, create a user and a table, and configure access. After the chart is installed, the console prints several ways to connect to the database. There are two main ones:

1. Forward a port to the local machine

To do this, you need to install the psql utility on the machine. Let’s check that it’s installed:

psql -V

Export the admin user’s password to an environment variable:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default dev-pg-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)

Perform port forwarding:

kubectl port-forward --namespace default svc/dev-pg-postgresql 5432:5432

The console will be blocked after executing this command. In another window, connect to the same machine and then connect to the database:

PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432

Or like this, but then you will have to enter the password manually:

psql --host 127.0.0.1 -U postgres -d postgres -p 5432

2. Create a pod with psql client

Export the admin user’s password to an environment variable:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default dev-pg-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)

Let’s create a pod with the psql utility and execute the command to connect to the database in it:

kubectl run dev-pg-postgresql-client --rm --tty -i --restart="Never" --namespace default --image docker.io/bitnami/postgresql:14.2.0-debian-10-r22 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
      --command -- psql --host dev-pg-postgresql -U postgres -d postgres -p 5432

Create a role (user) and a password for it:

CREATE ROLE qa_user WITH LOGIN ENCRYPTED PASSWORD 'qa-pg-pass';

View the list of roles:

\du

Let’s create a database owned by the user qa_user:

CREATE DATABASE qa_db OWNER qa_user;

Now let’s disconnect:

\q

And connect to the database with the new user’s credentials (using the second method):

kubectl run dev-pg-postgresql-client --rm --tty -i --restart="Never" --namespace default --image docker.io/bitnami/postgresql:14.2.0-debian-10-r22 --env="PGPASSWORD=qa-pg-pass"  --command -- psql --host dev-pg-postgresql -U qa_user -d qa_db -p 5432

Let’s create a small table:

CREATE TABLE qa_table (id int, name varchar(255));

Let’s add an entry:

INSERT INTO qa_table VALUES (1, 'first');

Now let’s run a SELECT to make sure it works:

SELECT * FROM qa_table;

View the list of tables in the database:

\dt+

Done, the database has been deployed successfully! In the application, specify the following database address:

DATABASE_URI=postgresql://qa_user:qa-pg-pass@dev-pg-postgresql:5432/qa_db
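
As a quick smoke test, the same URI can be fed directly to psql from a throwaway pod inside the cluster (a sketch reusing the client image from above; \conninfo simply prints the current connection details):

kubectl run pg-uri-test --rm --tty -i --restart="Never" --namespace default --image docker.io/bitnami/postgresql:14.2.0-debian-10-r22 --command -- psql postgresql://qa_user:qa-pg-pass@dev-pg-postgresql:5432/qa_db -c '\conninfo'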

Installing Redis

Redis can be installed in several configurations. We will deploy a variant with two replicas for reading and one replica for writing. First of all, write the following StorageClass manifest into the storage.yaml file and apply it:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

kubectl apply -f storage.yaml

I already had this resource created, so applying the manifest changed nothing.

Next, set up Persistent Volumes. We will reserve 2 GB for each slave replica and 4 GB for the master replica. Let’s create pv-slave1.yaml, pv-slave2.yaml and pv-master.yaml files and paste these configurations into them:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-redis-slave1
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /devkube/redis/slave1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 457344.cloud4box.ru

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-redis-slave2
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /devkube/redis/slave2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 457344.cloud4box.ru

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-redis-master
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /devkube/redis/master
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 457344.cloud4box.ru

All replicas will store their data on node 457344.cloud4box.ru, although, of course, you can mount disks on different virtual machines. Let’s create three directories:

mkdir -p /devkube/redis/slave1
mkdir -p /devkube/redis/slave2
mkdir -p /devkube/redis/master

Apply configuration:

kubectl apply -f .

Let’s check the created resources:

kubectl get pv

The created PVs are still Available, waiting for claims on their space. But the Postgres PersistentVolume is already bound to its PersistentVolumeClaim (it has remained so since we deployed the Postgres database in Kubernetes).

Let’s create a PVC for the master replica. It must be created in the namespace in which you will deploy the database; Redis will be deployed in the dev-redis namespace. Let’s create that namespace:

kubectl create ns dev-redis

Then write the following manifest to the pvc-master.yaml file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-for-master-redis
  namespace: dev-redis
spec:
  storageClassName: "local-storage"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

kubectl apply -f pvc-master.yaml

We won’t create PVCs for the slave replicas; they will be created automatically when the Helm chart is installed.

It’s time to install Redis. Add the Bitnami repository, if you haven’t already done so (I already have it added):

helm repo add bitnami https://charts.bitnami.com/bitnami

Install the Helm chart with Redis:

helm install dev-redis-chart bitnami/redis --namespace dev-redis --set global.redis.password=redispass,master.persistence.existingClaim=pvc-for-master-redis,replica.replicaCount=2,replica.persistence.storageClass=local-storage,replica.persistence.size=2Gi

In this command I specify:

  • global.redis.password=redispass — the password for authorization;

  • master.persistence.existingClaim=pvc-for-master-redis — the name of the PVC created above for the master replica;

  • replica.replicaCount, replica.persistence.storageClass, replica.persistence.size — the number of slave replicas, the name of the StorageClass resource, and the PV size. This is all that is needed for the replica PVCs to be created automatically.
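
Once the chart is installed, you can confirm that the replica claims were indeed created for you (their names should follow the chart’s redis-data-<pod> convention, assuming default naming):

kubectl get pvc -n dev-redis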

After executing the command, several ways to connect to Redis will be displayed in the console. But first, let’s check the status of the Persistent Volumes:

kubectl get pv

All PVs are now bound to their PVCs. Now let’s look at the created resources:

kubectl get pod,svc,statefulset -n dev-redis

Everything is successfully deployed, it’s time to connect and ping the database. Export the authorization password to an environment variable:

export REDIS_PASSWORD=$(kubectl get secret --namespace dev-redis dev-redis-chart -o jsonpath="{.data.redis-password}" | base64 --decode)

Let’s create a pod with redis-cli on board:

kubectl run --namespace dev-redis redis-client --restart="Never"  --env REDIS_PASSWORD=$REDIS_PASSWORD  --image docker.io/bitnami/redis:6.2.6-debian-10-r146 --command -- sleep infinity

Let’s get a shell inside the created pod:

kubectl exec --tty -i redis-client \
   --namespace dev-redis -- bash

And now you can connect to the master or slave replica of your choice:

   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h dev-redis-chart-master
   or
   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h dev-redis-chart-replicas

Let’s connect to the master replica and ping the database. You can write data to any of the 16 automatically created databases (numbered 0–15; database 0 is selected by default). Let’s take the second one, write some data, and read it back; a sample session is sketched below.
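
A minimal redis-cli session covering these steps might look like this (the key name and value are purely illustrative):

REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h dev-redis-chart-master
dev-redis-chart-master:6379> PING
PONG
dev-redis-chart-master:6379> SELECT 2
OK
dev-redis-chart-master:6379[2]> SET qa_key "qa_value"
OK
dev-redis-chart-master:6379[2]> GET qa_key
"qa_value"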

Ready! Redis is successfully deployed and ready to use. In the application, you need to specify the following address:

REDIS=redis://redispass@dev-redis-chart-master:6379/0

Installing RabbitMQ

Let’s write a manifest with a StorageClass resource to the storage.yaml file:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

And apply it:

kubectl apply -f storage.yaml

I already had this resource created, so applying the manifest changed nothing.

Let’s create a PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-for-rmq
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /devkube/rabbitmq
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 457344.cloud4box.ru

In matchExpressions, specify the name of the node on which we will mount the disk. You can view the names of the available nodes using the command:

kubectl get nodes

For convenience, we will mount the disk directly on the master node, although this can be done on any node in the list. Let’s create a directory where RabbitMQ will store its data:

mkdir -p /devkube/rabbitmq

Let’s create a resource Persistent Volume:

kubectl apply -f pv.yaml

And check the status:

kubectl get pv

It remains only to create the PersistentVolumeClaim. It must be created in the namespace in which you will deploy the future queue; RabbitMQ will be deployed in the dev-rmq namespace. Let’s create it:

kubectl create ns dev-rmq

Let’s write the manifest to the pvc.yaml file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rmq-pvc
  namespace: dev-rmq
spec:
  storageClassName: "local-storage"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

Apply it:

kubectl apply -f pvc.yaml

And finally, let’s deploy RabbitMQ. Add the Bitnami repository, if you have not already done so (I have already added it):

helm repo add bitnami https://charts.bitnami.com/bitnami

Install the Helm RabbitMQ chart:

helm install dev-rmq-chart bitnami/rabbitmq --namespace dev-rmq --set persistence.existingClaim=rmq-pvc,ingress.enabled=true,ingress.hostname=dashboard.dev.rmq.cryptopantry.tech,auth.username=rmq_admin,auth.password=devrmquser,ingress.ingressClassName=nginx

In this command, I specify the following settings:

  • persistence.existingClaim=rmq-pvc — the name of the PVC created above;

  • ingress.enabled, ingress.hostname, ingress.ingressClassName — expose the RabbitMQ management dashboard through the Ingress controller at the specified domain name;

  • auth.username, auth.password — the administrator credentials.

The Helm chart is installed. Let’s check that the resources were deployed successfully:

kubectl get pod,svc,statefulset -n dev-rmq

Now navigate in your browser to the domain name you specified in ingress.hostname:

Log in with the credentials login=rmq_admin and password=devrmquser, and go to the Admin tab:

Let’s create a new user qa_user with password qa_pass:

Let’s create a virtual host:

And bind the user to the virtual host:
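
If you prefer the terminal to the dashboard, the same three steps can be performed with rabbitmqctl from inside the broker pod (a sketch; the pod name dev-rmq-chart-rabbitmq-0 assumes the chart’s default naming):

kubectl exec -n dev-rmq dev-rmq-chart-rabbitmq-0 -- rabbitmqctl add_user qa_user qa_pass
kubectl exec -n dev-rmq dev-rmq-chart-rabbitmq-0 -- rabbitmqctl add_vhost qa_host
kubectl exec -n dev-rmq dev-rmq-chart-rabbitmq-0 -- rabbitmqctl set_permissions -p qa_host qa_user ".*" ".*" ".*"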

Ready! RabbitMQ is deployed and ready to use.

In the application, enter the following address:

RabbitMQ=amqp://qa_user:qa_pass@dev-rmq-chart-rabbitmq.dev-rmq:5672/qa_host
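
As a final smoke test, you can query the management API through the same ingress host (a sketch using the admin credentials set during the chart install; it should list the default vhost alongside qa_host):

curl -u rmq_admin:devrmquser http://dashboard.dev.rmq.cryptopantry.tech/api/vhosts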

That’s all from me; we successfully deployed PostgreSQL, Redis, and RabbitMQ in a small dev cluster.
