So how do you set up this CI/CD thing?
Our first attempts at CI/CD
Where to start? Naturally – with the documentation.
P.S. Over time I caught myself reading the Yandex documentation as well – I recommend it to everyone.
P.P.S. And of course the GitLab documentation – all of it – deserves to be read carefully and absorbed; they didn't write it for nothing.
Build
The build has remained unchanged since then (well, almost) – let's look at it. As I said above, Docker is powerful, so naturally it was the first thing we adopted in our product. With the Dockerfiles written, we write our CI:
```yaml
# templates/build/kaniko.yaml
variables:
  TAG: $CI_COMMIT_SHA
  DOCKERFILE: $CI_PROJECT_DIR/Dockerfile
  ARGUMENTS: ""

build-kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.21.0-debug
    entrypoint: [ "" ]
  script:
    - echo "Build with tag $TAG"
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"},\"$(echo -n $CI_DEPENDENCY_PROXY_SERVER | awk -F[:] '{print $1}')\":{\"auth\":\"$(printf "%s:%s" ${CI_DEPENDENCY_PROXY_USER} "${CI_DEPENDENCY_PROXY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - echo "Run this build ${CI_REGISTRY_IMAGE}-${CI_ENVIRONMENT_NAME}"
    - >-
      /kaniko/executor
      --context $CI_PROJECT_DIR
      --dockerfile $DOCKERFILE
      --destination $CI_REGISTRY_IMAGE:$TAG
      --cache=true
      $ARGUMENTS
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success
    - when: never
```
What do we need here? First of all – to tag our images and save them in the Registry. By default the build saves images with the $CI_COMMIT_SHA tag, which gives each image a unique tag tied to a commit: from an image you can find the commit and vice versa. A small thing, but nice. It is also worth noting that there is only one build job.
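To make this concrete, here is roughly how an application project could include this template and override its defaults. This is a sketch: the Dockerfile path and build arg are made up, and the `devops/ci-cd-includes` project path is the one used for imports later in this article.

```yaml
# .gitlab-ci.yml of an application project (illustrative)
include:
  - project: 'devops/ci-cd-includes'
    file: '/templates/build/kaniko.yaml'

variables:
  # override the defaults declared in the template
  DOCKERFILE: $CI_PROJECT_DIR/docker/Dockerfile.api   # hypothetical path
  ARGUMENTS: "--build-arg APP_ENV=production"         # hypothetical build arg
```

TAG stays at its default, so the image is still pushed as $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA.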
Material on setting up the Registry:
Deploy
Next up is Kubernetes – we want to deploy the application there, right?
```yaml
# templates/deploy/kubernetes.yaml
variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE
  DOCKER_TAG: $CI_COMMIT_SHA
  MANIFEST_FOLDER: "k8s"
  KUBECONFIG: ""

deploy-k8s:
  stage: deploy
  image:
    name: alpine:3.19
  script:
    - apk update && apk add gettext
    # values under `variables:` are not shell-evaluated, so the deployment date is computed here
    - export DEPLOYMENT_DATE=$(date +%Y%m%d-%H-%M-%S)
    - wget https://storage.googleapis.com/kubernetes-release/release/v1.26.0/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - ./kubectl create secret docker-registry regcred --docker-server=$CI_REGISTRY --docker-username=$CI_DEPLOY_USER --docker-password=$CI_DEPLOY_PASSWORD -n $CI_ENVIRONMENT_NAME || true
    - >-
      for MANIFEST in $MANIFEST_FOLDER/*; do
        envsubst < $MANIFEST | ./kubectl apply -n $CI_ENVIRONMENT_NAME -f -;
      done
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success
    - when: never
```
Let's go over the important parts. DOCKER_IMAGE and DOCKER_TAG should appear as placeholders in our deployment manifests; with envsubst we simply render each manifest and send it off to Kubernetes. KUBECONFIG must be saved in CI/CD > Variables as a file-type variable. Note also that this job lets us apply several files to the cluster at once – everything inside one folder, which can likewise be set via variables (either here or in the project's gitlab-ci).
It is also worth noting that I create a regcred secret – this is where the registry credentials are stored.
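For reference, a manifest in `$MANIFEST_FOLDER` might look like this – envsubst fills in the `${...}` placeholders before `kubectl apply`. The Deployment itself is a simplified sketch; the app name is made up:

```yaml
# k8s/deployment.yaml – simplified, illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    deployed-at: "${DEPLOYMENT_DATE}"   # changes on every deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: regcred                 # the secret created by the deploy job
      containers:
        - name: my-app
          image: "${DOCKER_IMAGE}:${DOCKER_TAG}"
```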
Material on regcred:
What's next?
Now we have automatic deployment. As the team and the number of projects grew, a new desire appeared: let's actually test things before we mindlessly shove everything into production. And it's great when the business can poke around with everything on its own too, so we settled on dev and prod versions of the product: everything lands in dev first, and once we have checked that all is well – it goes to prod.
The first attempt was as simple as possible. We simply made 2 branches, stable and main (standing in for dev): CI for main builds and deploys to dev, CI for stable builds and deploys to prod. It quickly became clear this was not a great scheme. I won't dwell on it; in short, the problem for us is that the image we tested on dev != the image on prod. And what if some not-so-smart person ~~(me)~~ pushes straight to stable? In short, we abandoned the idea and went off to think.
A solution did not come quickly; the phrase "Continuous integration. Continuous delivery" kept spinning in my head. What if we ran the same Docker image all the way through CI? Well, let's go do it.
Deploy in different environments
In the end we got rid of the 2 builds for the different envs. Now a single build goes to dev first, and later flies on to prod – efficiency. First, we need 2 different deploy jobs for Kubernetes: one for dev, one for prod. How?
```yaml
# templates/deploy/kubernetes.yaml
...
.deploy-k8s-template:
  stage: deploy
  image:
    name: alpine:3.19
  ...
```
Yes, it really is that simple: we prefix our job with a dot and give it a proper name. It now counts as a template, and we can derive 2 new jobs from it!
```yaml
# templates/deploy/kubernetes.yaml
...
deploy-dev:
  environment: dev
  extends: .deploy-k8s-template

deploy-prod:
  environment: prod
  extends: .deploy-k8s-template
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual
    - when: never
```
What matters here? Honestly, nothing special. We reuse the template and change only environment – that is, the environment our application will be deployed to.
Useful material:
Release?
We actually have everything – money, cars, builds, deployments to different environments. The company grew, more services appeared, our monolith was slowly being sawn apart (I hope it will be, in 10 years or so). What we started to miss was knowing which version was which. How do you roll back? Hunt for the last working commit? A lot of questions piled up, a lot of water passed under the bridge, and the idea emerged to introduce a system of releases and tags. As someone who really dislikes doing things by hand, I decided to automate the whole thing. What to do? We went online and found the existing conventions, SemVer among them, but that alone doesn't solve it. First, we had to standardize the commit format – done; second, it turned out the ready-made SemVer automation tends to work only within a single language ecosystem, which didn't suit us.
I strongly recommend getting familiar with Conventional Commits – and if your CI is in any way inspired by ours, you will have to adopt it (or adapt it).
P.S. Better in English, of course – there's more information there; but the language hardly matters.
Tag generation
```yaml
# templates/release/analyze_commit.yaml
variables:
  MAJOR_CHANGE_PATTERN: 'BREAKING\sCHANGE:'
  MINOR_CHANGE_PATTERN: '^feat(\(.*\))?!?:\s'
  PATCH_CHANGE_PATTERN: '^(fix|perf|refactor)(\(.*\))?!?:\s'
  RELEASE_NOTE_FILE: "RELEASE_NOTE.md"
  CHANGES_PATTERN: '^(feat|fix|docs|style|refactor|perf|test|chore|wip)(\(.*\))?(!)?:\s?.+$'

stages:
  - prepare_release

.generate_release_tag: &generate_release_tag
  - >-
    if [ -z "$LAST_TAG" ]; then
      echo "No previous tag found, using the full commit history."
      COMMITS=$(git log --oneline)
    else
      echo "Analyzing commits since $LAST_TAG..."
      COMMITS=$(git log --oneline $LAST_TAG..$CI_COMMIT_SHA)
    fi
  - echo "Commits to analyze ${COMMITS}"
  - VERSION_CHANGE=""
  - >-
    if echo "$COMMITS" | grep -qE "$MAJOR_CHANGE_PATTERN"; then
      VERSION_CHANGE="major"
    elif echo "$COMMITS" | grep -qE "$MINOR_CHANGE_PATTERN"; then
      VERSION_CHANGE="minor"
    elif echo "$COMMITS" | grep -qE "$PATCH_CHANGE_PATTERN"; then
      VERSION_CHANGE="patch"
    fi
  - echo "Change type defined $VERSION_CHANGE"
  - >-
    if echo "$LAST_TAG" | grep -E "^v[0-9]+\.[0-9]+\.[0-9]+$" >/dev/null; then
      echo "Valid previous tag found: $LAST_TAG"
      MAJOR=$(echo "$LAST_TAG" | cut -d 'v' -f 2 | cut -d '.' -f 1)
      MINOR=$(echo "$LAST_TAG" | cut -d '.' -f 2)
      PATCH=$(echo "$LAST_TAG" | cut -d '.' -f 3)
    else
      echo "No valid previous tag found or format is incorrect. Starting from 1.0.0"
      MAJOR=1; MINOR=0; PATCH=0
    fi
  - >-
    case "$VERSION_CHANGE" in
      major)
        MAJOR=$((MAJOR + 1)); MINOR=0; PATCH=0 ;;
      minor)
        MINOR=$((MINOR + 1)); PATCH=0 ;;
      patch)
        PATCH=$((PATCH + 1)) ;;
      *)
        echo "No version change detected, stopping the release."; exit 0 ;;
    esac
  - NEW_TAG="v${MAJOR}.${MINOR}.${PATCH}"
  - echo "New tag $NEW_TAG"
  - echo "NEW_TAG=$NEW_TAG" >> variables.env

.generate_release_note: &generate_release_note
  - >-
    if [ -z "$LAST_TAG" ]; then
      RANGE=""
    else
      RANGE="$LAST_TAG..$CI_COMMIT_SHA"
    fi
  - echo "# Release Notes" > "$RELEASE_NOTE_FILE"
  - echo "" >> "$RELEASE_NOTE_FILE"
  - |-
    add_commit_to_release_note() {
      commit_data="$1"
      commit_hash=$(echo "$commit_data" | sed -n '1p')
      commit_author=$(echo "$commit_data" | sed -n '2p')
      commit_date=$(echo "$commit_data" | sed -n '3p')
      echo "## Commit: $commit_hash" >> "$RELEASE_NOTE_FILE"
      echo "**Author:** $commit_author" >> "$RELEASE_NOTE_FILE"
      echo "" >> "$RELEASE_NOTE_FILE"
      echo "**Date:** $commit_date" >> "$RELEASE_NOTE_FILE"
      echo "" >> "$RELEASE_NOTE_FILE"
      echo "**Updates:**" >> "$RELEASE_NOTE_FILE"
      echo "" >> "$RELEASE_NOTE_FILE"
      messages=$(echo "$commit_data" | sed '1,3d')
      IFS=$'\n'
      for message in $messages; do
        if echo "$message" | grep -Eq "$CHANGES_PATTERN"; then
          echo "- $message" >> "$RELEASE_NOTE_FILE"
        else
          echo "> $message" >> "$RELEASE_NOTE_FILE"
        fi
      done
      echo "" >> "$RELEASE_NOTE_FILE"
    }
  - |-
    process_git_log() {
      range="$1"
      commit_data=""
      # $range is intentionally unquoted: when it is empty, the full history is used
      git log $range --pretty=format:"%H%n%an <%ae>%n%ad%n%B%n<ENDCOMMIT>" | while IFS= read -r line || [ -n "$line" ]; do
        if [ "$line" = "<ENDCOMMIT>" ]; then
          add_commit_to_release_note "$(echo -e "$commit_data")"
          commit_data=""
        else
          commit_data="${commit_data}${line}\n"
        fi
      done
    }
  - process_git_log "$RANGE"

analyze_commit:
  image:
    name: alpine/git
    entrypoint: [ '' ]
  stage: prepare_release
  script:
    - apk add --no-cache curl jq
    - echo "Analyze commit..."
    - LAST_TAG=$(git describe --tags --abbrev=0 $(git rev-list --tags --max-count=1) 2>/dev/null || echo "")
    - *generate_release_tag
    - *generate_release_note
  variables:
    SHELL: '/bin/bash'
  artifacts:
    paths:
      - $RELEASE_NOTE_FILE
    reports:
      dotenv: variables.env
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success
    - when: never
```
Let's take it in order. The purpose of this job is to analyze commits for changes. The idea: if the commits that landed in main contain fix:, perf: or refactor:, we bump the PATCH version; if there is a feat: – the MINOR version; and if there is a BREAKING CHANGE – the MAJOR one. The logic is as simple as it gets.
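The decision logic can be checked in isolation. Here is a minimal sketch of the same grep chain run against a fake commit list (the commit messages are made up, the patterns are the ones from the template):

```shell
#!/bin/sh
# Same patterns as in analyze_commit.yaml
MAJOR_CHANGE_PATTERN='BREAKING\sCHANGE:'
MINOR_CHANGE_PATTERN='^feat(\(.*\))?!?:\s'
PATCH_CHANGE_PATTERN='^(fix|perf|refactor)(\(.*\))?!?:\s'

# Imitation of `git log --oneline` output since the last tag (hypothetical commits)
COMMITS='feat(auth): add OTP login
fix(api): handle empty payload
chore: bump dependencies'

# The first matching pattern wins: major > minor > patch
if echo "$COMMITS" | grep -qE "$MAJOR_CHANGE_PATTERN"; then
  VERSION_CHANGE="major"
elif echo "$COMMITS" | grep -qE "$MINOR_CHANGE_PATTERN"; then
  VERSION_CHANGE="minor"
elif echo "$COMMITS" | grep -qE "$PATCH_CHANGE_PATTERN"; then
  VERSION_CHANGE="patch"
else
  VERSION_CHANGE=""
fi
echo "$VERSION_CHANGE"
```

With this list the answer is minor: the feat commit outranks the fix commit, and nothing matched BREAKING CHANGE.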
I'll also leave you the formatter for a readable Release Note that we still rely on – it greatly simplifies compiling the list of changes both for employees and for users.
We are still testing this job; over time it may change or be replaced entirely, but for now this is it. Maybe I'll refactor it later – the desire is there, the time is not.
Once the job completes, we get an artifact used by the next, no less important job.
Release
```yaml
# templates/release/release.yaml
include:
  - local: templates/release/analyze_commit.yaml

stages:
  - prepare_release
  - release

release:
  stage: release
  needs:
    - job: analyze_commit
      artifacts: true
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - echo "Release with tag $NEW_TAG"
  release:
    name: 'Release $NEW_TAG'
    tag_name: '$NEW_TAG'
    description: RELEASE_NOTE.md
    ref: '$CI_COMMIT_SHA'
    assets:
      links:
        - name: "GitLab Registry Image"
          url: "https://$CI_REGISTRY_IMAGE:$NEW_TAG"
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual
    - when: never
```
Here we use the artifact from the previous job to obtain the tag and the Release Note itself. The job is as simple as possible – it creates a release. What else is there to describe?
Based on materials:
As for the overall picture, the sequence is the important part. Everything up to this point ran as branch CI, the release job included – and at this stage it creates a tag, the very one we were after. The key thing to understand in this logic: everything from build to release is now just preparation for the release (the concept, not the job) and for the prod environment. Now we have a tag that gives us a version: we can roll back, operate with the same versions in Helm, and much more. We are still debating the deployment side itself, so – to be continued.
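To illustrate the rollback idea: with semantic tags in the Registry, a deployment can simply be re-run with a pinned tag instead of the fresh one. This is a hypothetical sketch built on the same deploy template; the job name and the v1.6.0 tag are made up.

```yaml
# hypothetical rollback job reusing the deploy template
deploy-rollback:
  extends: .deploy-k8s-template
  environment: prod
  variables:
    DOCKER_TAG: "v1.6.0"   # any previously released tag from the Registry
  rules:
    - when: manual
```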
Publishing Docker images
Since a tag is now created, jobs triggered by the tag can join the chain. This gives us 2 stages – before the release and after it.
```yaml
# templates/publish/docker.yaml
stages:
  - prepare_publish
  - publish

publish_latest:
  stage: prepare_publish
  script:
    - echo "Tagging image $CI_COMMIT_SHA as latest..."
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_TAG
      when: on_success
    - when: never

publish_release_tag:
  stage: prepare_publish
  script:
    - echo "Tagging image $CI_COMMIT_SHA as $CI_COMMIT_TAG..."
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
      when: on_success
    - when: never

publish_stable:
  stage: publish
  script:
    - echo "Tagging image $CI_COMMIT_SHA as stable..."
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:stable
    - docker push $CI_REGISTRY_IMAGE:stable
  rules:
    - if: $CI_COMMIT_TAG
      when: on_success
    - when: never
```
The idea is to take the image we already have and simply roll it out under different tags. First latest – it has passed dev and we have released it. Then the release tag itself – the one generated in the previous step. We'll skip stable for now, but remember it – it will be the finale.
Final deployment
```yaml
# templates/deploy/kubernetes.yaml
...
deploy-dev:
  extends: .deploy-k8s-template
  environment:
    name: dev
    action: start
    kubernetes:
      namespace: dev

deploy-prod:
  extends: .deploy-k8s-template
  variables:
    DOCKER_TAG: $CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG
      when: manual
    - when: never
  environment:
    name: prod
    action: start
    kubernetes:
      namespace: prod
```
Here we fix the production deployment issue: it now runs strictly after a release and uses the semantic tag. I also slightly reworked the environment definitions.
Conclusion
So we have built the perfect deployment. Peace and success to everyone.
Not all?
I'm a very restless person. What the setup above still lacks is a guarantee that if I change something in the scripts tomorrow, everything won't fall apart. Based on this entirely objective consideration, I decided to ~~work on the weekend~~ pay a little more attention to this aspect and introduce versioning into our CI project itself.
```yaml
# .gitlab-ci.yml
include:
  - local: "templates/release/release.yaml"

variables:
  GITLAB_CI_LINT_API_URL: "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/ci/lint"
  PRIVATE_TOKEN: $GITLAB_ACCESS_TOKEN
  TEMPLATES_FILE: templates

stages:
  - lint
  - prepare_release
  - release

validate_templates:
  stage: lint
  image: alpine:3.19
  rules:
    - if: $CI_COMMIT_BRANCH
      when: always
  before_script:
    - apk add --no-cache curl jq
  script:
    - |
      find $TEMPLATES_FILE -type f \( -name '*.yml' -o -name '*.yaml' \) -print0 | while IFS= read -r -d $'\0' file; do
        printf "Validating %s\n" "${file}"
        yaml_content=$(cat "${file}")
        data=$(jq --null-input --arg yaml "$yaml_content" '. | {content: $yaml}')
        response=$(curl "${GITLAB_CI_LINT_API_URL}" \
          --header "PRIVATE-TOKEN: ${PRIVATE_TOKEN}" \
          --header "Content-Type: application/json" \
          --data "$data" -s)
        valid=$(printf "%s" "$response" | jq --raw-output '.valid')
        if [ "$valid" != true ]; then
          printf "Validation failed for %s:\n" "${file}"
          printf "%s" "$response" | jq -r '.errors[]'
          exit 1
        else
          printf "Validation successful for %s\n" "${file}"
        fi
        warnings=$(printf "%s" "$response" | jq '.warnings | length')
        if [ "$warnings" -ne 0 ]; then
          printf "Warnings for %s:\n" "${file}"
          printf "%s" "$response" | jq -r '.warnings[]'
        fi
        printf "\n"
      done

analyze_commit:
  needs: [ ]
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success

release:
  needs:
    - job: validate_templates
    - job: analyze_commit
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: on_success
```
Let's go in order. This setup, one might say, lets a project import a specific version of the CI, which then doesn't change out from under it. It is now guaranteed that if the CI has once passed a full cycle in a project, it will keep working forever.
And now for the content
First, yes – there is an odd job at the first stage: lint. In short, it validates every script in the templates folder, catching the errors GitLab itself does not flag (or for which I simply found no built-in check: if a file is not .gitlab-ci.yml, GitLab doesn't treat it as a pipeline definition at all and won't validate it). Under the hood it is the same mechanism as the Pipeline Editor (as far as I understand).
Then come the actual ready-made jobs for analysis and release – and that's all.
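The only non-obvious step in that lint job is packing raw YAML into the JSON body the lint API expects. That step can be reproduced locally; a minimal sketch with a made-up two-line YAML file:

```shell
#!/bin/sh
# Wrap raw YAML into {"content": "..."} exactly as the validate_templates job does
yaml_content='stages:
  - build'
data=$(jq --null-input --arg yaml "$yaml_content" '. | {content: $yaml}')
echo "$data"
```

jq takes care of escaping the newlines and quotes, so the payload is always valid JSON regardless of what is inside the YAML.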
As a result, import into your projects may look like this:
```yaml
include:
  - project: 'devops/ci-cd-includes'
    file: '/templates/build/kaniko.yaml'
    ref: 'v1.7.0'
  - project: 'devops/ci-cd-includes'
    file: 'templates/deploy/helm.yaml'
    ref: 'v1.7.0'
```
And this is definitely the end – I take a bow.