A practical guide to creating a Helm chart, or how to get rid of routine when working with YAML manifests

Introduction

I love Kubernetes manifests. Honestly, nothing beats the pleasure of creating each resource separately with the command kubectl apply… But that is only the beginning. Once you have more than five such resources, and more than a couple of microservices on top of that, managing this whole zoo becomes a pain. You have to juggle individual manifests, and if you have several similar services that differ only in small details, you end up maintaining a separate stack of manifests for each of them. And that is before we even get to multiple environments.

You can offload resource deployment entirely to a CI/CD pipeline and forget about manifests. But if you need to scale the application out or, conversely, tear it down, the problems above cannot be avoided. So in this article I will share my experience of creating a Helm chart and deploying a release with it, but first I will walk through deploying the application without Helm.

Preface

After deploying several services using YAML manifests and the command kubectl apply, I decided to create my first Helm chart and thought: why not write an article about it! I have fairly little experience with Helm, so the article may contain mistakes. If you find a typo or an error of any kind, please select the text and press Ctrl+Enter. Thank you!

Goal

I have already created a repository with the source code. It contains two folders: kubectl, with the YAML manifests, and helm, with a chart built from the same manifests. The article contains a minimum of theory; the main part is practice. First I will show which manifests the application consists of and how they are deployed, and then we will create our own Helm chart, make the manifests more universal, and deploy a release.

A little about Helm

Helm is a package manager for Kubernetes. It makes it easy to install, upgrade, and roll back applications. Helm's central concept is the chart.

A chart is a collection of related Kubernetes manifests. Using charts, you can deploy applications published by other developers, or package and deploy your own.

A release is an installed instance of a chart. You can install any number of releases of the same chart into one cluster. Each release has its own name, which can be used in the names of Kubernetes resources.

In my first months of using Kubernetes, I used Helm exclusively to deploy third-party services. For example, I actively use ingress-nginx and loki-stack (which, by the way, I used in a previous article). But recently I decided to use Helm for our own services as well. There are several reasons for this:

  1. Helm allows you to manage multiple manifests as a single entity. Our services have 4-7 manifests each, some of which require variable substitution (via envsubst), which complicates deployment outside the CI/CD pipeline.

  2. All our services are built on almost the same principle, which means a lot of code is repeated. Using Helm's values.yaml (more on this a little later), you can extract the values that change from the manifests and use one chart for several applications (releases) at once.

  3. Helm is becoming (or has already become) a must-have technology. Lately, I have been seeing Helm more and more often in job postings for DevOps engineers that require Kubernetes knowledge.

Manifests and their purposes

First, I should note that we will deploy to Yandex Managed Service for Kubernetes. The services use Yandex Lockbox (a Yandex Cloud service for storing secrets) and the External Secrets Operator, which synchronizes Kubernetes secrets with third-party providers (in our case, Yandex Cloud services). The application is deployed with the following manifests:

  • cert-external-secret.yaml – a resource of type ExternalSecret. This manifest obtains a TLS certificate and its private key from Yandex Certificate Manager. The ExternalSecret will then create a Kubernetes secret with these values, which is used in ingress.yaml.

  • cert-secret-store.yaml – a resource of type SecretStore. It specifies which third-party API to contact to obtain the data. In this manifest the provider is yandexcertificatemanager.

  • clusterip.yaml – a resource of type Service (ClusterIP). Provides access to the running service within the cluster.

  • deploy.yaml – a resource of type Deployment. Manages pods and ensures that all replicas are running. It launches one replica of a Java (Spring) image, with environment variables taken from the secret populated from Yandex Lockbox. The image is pulled from Yandex Container Registry.

  • ingress.yaml – a resource of type Ingress. Provides access to the service over HTTP(S). An Ingress alone is not enough; you also need to deploy an Ingress controller (usually ingress-nginx). It uses the secret created by the cert-external-secret.yaml manifest to serve HTTPS.

  • lockbox-external-secret.yaml – also a resource of type ExternalSecret; it describes all the variables to fetch from Lockbox.

  • lockbox-secret-store.yaml – also a resource of type SecretStore; the provider is yandexlockbox.

Contents of the manifests

Now let's deploy all of this! You might suggest running kubectl apply -f ./kubectl to avoid applying each file separately. The catch is that some of the manifests contain variables substituted via envsubst. It is a very convenient utility for injecting values into a file and is especially useful in a CI/CD pipeline. On the other hand, when bringing resources up locally, it adds extra steps.

Below, in order of execution, are the contents of the manifests and the commands to deploy them. Pay attention to the comments as well.
P.S. The contents of the manifests are shown only to demonstrate how they will change after we create the Helm chart. You do not have to deploy the same resources or use Yandex Cloud services.

I almost forgot! First, let’s create a separate namespace and add a secret with an authorized key to access Yandex Cloud:

$ kubectl create ns kubectl-ns
namespace/kubectl-ns created
$ kubectl --namespace kubectl-ns create secret generic yc-auth \
	      --from-file=authorized-key=authorized-key.json 
secret/yc-auth created

SecretStore (TLS Certificate)

cert-secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: spring-app-certificate-secret-store
  namespace: kubectl-ns
spec:
  provider:
    yandexcertificatemanager:
      auth:
        authorizedKeySecretRef:
          name: yc-auth
          key: authorized-key
$ kubectl apply -f ./kubectl/cert-secret-store.yaml
secretstore.external-secrets.io/spring-app-certificate-secret-store created
$ kubectl -n kubectl-ns get ss/spring-app-certificate-secret-store
NAME                                  AGE   STATUS   CAPABILITIES   READY
spring-app-certificate-secret-store   20s   Valid    ReadOnly       True

ExternalSecret (TLS Certificate)

cert-external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: spring-app-certificate-external-secret
  namespace: kubectl-ns
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: spring-app-certificate-secret-store
    kind: SecretStore
  target:
    name: spring-app-certificate-secret
    template:
      type: kubernetes.io/tls
  data:
  - secretKey: tls.crt
    remoteRef:
      key: $CERTIFICATE_ID
      property: chain
  - secretKey: tls.key
    remoteRef:
      key: $CERTIFICATE_ID
      property: privateKey

Pay attention to $CERTIFICATE_ID. Since the certificate ID in Certificate Manager can change periodically, hard-coding it is bad practice. So you first need to look up the certificate ID, write it to the CERTIFICATE_ID environment variable, and run the manifest through envsubst:

$ export CERTIFICATE_ID=<your_certificate_id_here>
$ envsubst \$CERTIFICATE_ID < ./kubectl/cert-external-secret.yaml | kubectl apply -f -
$ kubectl -n kubectl-ns get externalsecret/spring-app-certificate-external-secret
NAME                                     STORE                                 REFRESH INTERVAL   STATUS         READY
spring-app-certificate-external-secret   spring-app-certificate-secret-store   1h                 SecretSynced   True

SecretStore (Lockbox secret)

lockbox-secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: spring-app-lockbox-secret-store
  namespace: kubectl-ns
spec:
  provider:
    yandexlockbox:
      auth:
        authorizedKeySecretRef:
          name: yc-auth
          key: authorized-key
$ kubectl apply -f ./kubectl/lockbox-secret-store.yaml
secretstore.external-secrets.io/spring-app-lockbox-secret-store created
$ kubectl -n kubectl-ns get ss/spring-app-lockbox-secret-store  
NAME                              AGE   STATUS   CAPABILITIES   READY
spring-app-lockbox-secret-store   51s   Valid    ReadOnly       True

ExternalSecret (Lockbox secret)

lockbox-external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: spring-app-lockbox-external-secret
  namespace: kubectl-ns
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: spring-app-lockbox-secret-store
    kind: SecretStore
  target:
    name: spring-app-lockbox-secret
  data:
  - secretKey: JDBC_URL
    remoteRef:
      key: $SECRET_ID
      property: JDBC_URL
  - secretKey: DB_USERNAME
    remoteRef:
      key: $SECRET_ID
      property: DB_USERNAME
  - secretKey: DB_PASSWORD
    remoteRef:
      key: $SECRET_ID
      property: DB_PASSWORD

Now pass $SECRET_ID, the ID of the Yandex Lockbox secret, into the file in the same way.

$ export SECRET_ID=<your_lockbox_secret_id_here>
$ envsubst \$SECRET_ID < ./kubectl/lockbox-external-secret.yaml | kubectl apply -f -
externalsecret.external-secrets.io/spring-app-lockbox-external-secret created
$ kubectl -n kubectl-ns get externalsecret/spring-app-lockbox-external-secret 
NAME                                 STORE                             REFRESH INTERVAL   STATUS         READY
spring-app-lockbox-external-secret   spring-app-lockbox-secret-store   1h                 SecretSynced   True

Service (type: ClusterIP)

clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-app
  namespace: kubectl-ns
  labels:
    app-label: spring-app-clusterip-label
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
  selector:
    app-label: spring-app-label
$ kubectl apply -f ./kubectl/clusterip.yaml
service/spring-app created

Deployment

deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app
  namespace: kubectl-ns
  labels:
    app-label: spring-app-label
spec:
  replicas: 1
  selector:
    matchLabels:
      app-label: spring-app-label
  template:
    metadata:
      labels:
        app-label: spring-app-label
    spec:
      containers:
      - name: spring-app-app
        image: cr.yandex/$REGISTRY_ID/spring-app:$VERSION
        ports:
        - name: http
          containerPort: 8080
        env:
        # --- variables from Yandex Lockbox
        - name: JDBC_URL
          valueFrom:
            secretKeyRef:
              name: spring-app-lockbox-secret
              key: JDBC_URL
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: spring-app-lockbox-secret
              key: DB_USERNAME
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: spring-app-lockbox-secret
              key: DB_PASSWORD

For this manifest you must pass two values: $REGISTRY_ID, the registry ID, and $VERSION, the image version.

$ export REGISTRY_ID=<your_container_registry_id_here>
$ export VERSION=<your_image_version_here>
$ envsubst \$REGISTRY_ID,\$VERSION < ./kubectl/deploy.yaml | kubectl apply -f -
deployment.apps/spring-app created
$ kubectl -n kubectl-ns get deploy/spring-app
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
spring-app   1/1     1            1           17m

Ingress

ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-app-ingress
  namespace: kubectl-ns
spec:
  tls:
    - hosts:
      - spring-app.dev.example.com
      secretName: spring-app-certificate-secret
  ingressClassName: spring-app-class-resource
  rules:
    - host: spring-app.dev.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: spring-app
              port:
                name: http

We will skip applying ingress.yaml, since configuring it additionally requires deploying an Ingress controller.

Conclusion: routine

We have deployed all the necessary resources. Now count the commands we executed, including the environment variable assignments, and imagine having to run them more than once in your life. Of course, you could use bash scripts, but then you would have to maintain a separate script for every service. Routine, isn't it? This is where Helm comes to the rescue!

Before the next section, we will delete all previously created resources:

$ kubectl delete -f ./kubectl/
externalsecret.external-secrets.io "spring-app-certificate-external-secret" deleted
secretstore.external-secrets.io "spring-app-certificate-secret-store" deleted
service "spring-app" deleted
deployment.apps "spring-app" deleted
ingress.networking.k8s.io "spring-app-ingress" deleted
externalsecret.external-secrets.io "spring-app-lockbox-external-secret" deleted
secretstore.external-secrets.io "spring-app-lockbox-secret-store" deleted

Creating a chart

First, let’s create a chart:

$ helm create helm-chart
Creating helm-chart

Let’s take a look at the structure of the newly created chart:
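For reference, here is the layout that helm create typically generates (the exact file set varies slightly between Helm versions):

```
helm-chart/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml
```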

As you can see, the command helm create generated several folders and files. Briefly, about each:

  • charts/ – a folder containing third-party charts on which the current chart depends.

  • Chart.yaml – a file containing basic information about the chart.

  • templates/ – a folder containing Kubernetes manifest templates, with support for Go templating and Helm value substitution.

  • templates/*.tpl – files containing named templates. You can create tpl files and place your own templates in them, and then use them in manifests.

  • values.yaml – a file containing the variables used in the templates. It holds the default values; when installing a release, you can override them with your own.

Moving the manifests into the chart and templating them

Now our goal is to move all the manifests from kubectl into the Helm chart. The main requirement is that the chart must support more than one release. To achieve this, we will use Helm's templating capabilities together with the values.yaml file.

Built-in objects

In Helm templates, besides your own variables, you can use built-in objects. For example: Release.Name (the release name), Values.example (a value named example from values.yaml), Chart.Version (the chart version), and so on. The full list of built-in objects can be found in the documentation.
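For illustration, a hypothetical template fragment (not part of our chart) combining several built-in objects might look like this:

```yaml
metadata:
  # .Release.* describes the release being installed
  name: {{ .Release.Name }}-config
  namespace: {{ .Release.Namespace }}
  labels:
    # .Chart.* comes from Chart.yaml
    chart-version: {{ .Chart.Version | quote }}
```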

values.yaml

The main tool for making the chart flexible and universal is the previously mentioned values.yaml file. It holds the values that are set before a release is installed. We will start with it empty and fill it in as we move the manifests over.

_helpers.tpl

_helpers.tpl
{{/*
Expand the name of the chart.
*/}}
{{- define "helm-chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "helm-chart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "helm-chart.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "helm-chart.labels" -}}
helm.sh/chart: {{ include "helm-chart.chart" . }}
{{ include "helm-chart.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "helm-chart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "helm-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "helm-chart.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "helm-chart.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

This file already defines several named templates. They make use of the built-in objects:

  • "helm-chart.name" – the chart name

  • "helm-chart.fullname" – the full name of the chart

  • "helm-chart.chart" – the chart name plus its version

  • "helm-chart.labels" – labels common to all chart resources. I will use them everywhere because, per the Helm documentation, they are used to identify resources.

  • "helm-chart.selectorLabels" – labels used in the selectors of the ReplicaSet and Deployment resources. These labels are also included in the "helm-chart.labels" template.

Template functions

Besides variables, you can call template functions; see the documentation for details.
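For example (a hypothetical fragment; appName and env are made-up values), functions are chained into pipelines, each receiving the result of the previous one:

```yaml
# default substitutes a fallback when the value is empty or missing;
# quote wraps the result in double quotes
app-name: {{ .Values.appName | default "spring-app" | quote }}
# functions such as upper and trunc can be chained the same way
env-label: {{ .Values.env | default "development" | upper | trunc 3 }}
```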

Let's get started and fill in the templates in the same order as the manifests in kubectl.

cert-secret-store.yaml & lockbox-secret-store.yaml

Let’s copy the manifests from kubectl and move them to the templates folder:

$ cp -r ./kubectl/*-secret-store.yaml ./helm-chart/templates 

Let's make a few changes. First, remove the application name from the resource name and replace it with the release name:

metadata:
  name: {{ .Release.Name }}-certificate-secret-store

Second, we insert the helm-chart.labels template using include and pipe its output (the | symbol) to the template function nindent, which prepends a newline and indents every line of its input by the given number of spaces. I pass 4 because that is exactly how much indentation is needed here:

metadata:
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}

Now, to see what the manifest will look like after all of Helm's manipulations, run helm install with the --dry-run flag, which renders the chart without actually installing it:

$ helm install --dry-run test-release ./helm-chart
NAME: test-release
STATUS: pending-install
MANIFEST:
---
# Source: helm-chart/templates/cert-secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: test-release-certificate-secret-store
  labels:
    helm.sh/chart: helm-chart-0.1.0
    app.kubernetes.io/name: helm-chart
    app.kubernetes.io/instance: test-release
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  ...

The release name and labels were successfully inserted into the manifest.
Since not all of our applications need access to TLS certificates and Lockbox secrets, we will make the SecretStore and ExternalSecret manifests optional. Add the following values to values.yaml:

lockboxSecretStore:
  enabled: true

certificateSecretStore:
  enabled: true

By default, both groups of resources are enabled.
Now let's add the activation logic based on the values we just introduced, using a conditional expression:

{{- if .Values.certificateSecretStore.enabled -}}
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
...
{{- end }}

Let's run helm install --dry-run again, but override certificateSecretStore.enabled to false. This can be done in two ways: create your own values file and define the value there, or use the --set parameter. Let's make sure the list of rendered manifests is empty:

$ helm install --dry-run --set certificateSecretStore.enabled=false test-release ./helm-chart
NAME: test-release
STATUS: pending-install
MANIFEST:

Full manifest code:

cert-secret-store.yaml
{{- if .Values.certificateSecretStore.enabled -}}
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: {{ .Release.Name }}-certificate-secret-store
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}
spec:
  provider:
    yandexcertificatemanager:
      auth:
        authorizedKeySecretRef:
          name: yc-auth
          key: authorized-key
{{- end }}
lockbox-secret-store.yaml
{{- if .Values.lockboxSecretStore.enabled -}}
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: {{ .Release.Name }}-lockbox-secret-store
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}
spec:
  provider:
    yandexlockbox:
      auth:
        authorizedKeySecretRef:
          name: yc-auth
          key: authorized-key
{{- end }}

cert-external-secret.yaml

Since SecretStore and ExternalSecret are related entities, let's guard the ExternalSecret manifests with the same conditions. But first, add a new value for the certificate ID to values.yaml:

certificateSecretStore:
  enabled: true
  externalSecret:
    certificateId: ""

And then use it in the manifest:

...
  - secretKey: tls.crt
    remoteRef:
      key: {{ .Values.certificateSecretStore.externalSecret.certificateId }}
      property: chain
  - secretKey: tls.key
    remoteRef:
      key: {{ .Values.certificateSecretStore.externalSecret.certificateId }}
      property: privateKey

Full manifest code:

cert-external-secret.yaml
{{- if .Values.certificateSecretStore.enabled -}}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ .Release.Name }}-certificate-external-secret
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: {{ .Release.Name }}-certificate-secret-store
    kind: SecretStore
  target:
    name: {{ .Release.Name }}-certificate-secret
    template:
      type: kubernetes.io/tls
  data:
  - secretKey: tls.crt
    remoteRef:
      key: {{ .Values.certificateSecretStore.externalSecret.certificateId }}
      property: chain
  - secretKey: tls.key
    remoteRef:
      key: {{ .Values.certificateSecretStore.externalSecret.certificateId }}
      property: privateKey
{{- end }}

lockbox-external-secret.yaml

Every application's secret contains a different set of values. So, as with the other changing values, we will move these fields out of the manifest and into values.yaml. Let's not forget $SECRET_ID as well:

lockboxSecretStore:
  enabled: true
  externalSecret:
    secretId: ""
    data:
    - secretKey: JDBC_URL
      property: JDBC_URL
    - secretKey: DB_USERNAME
      property: DB_USERNAME
    - secretKey: DB_PASSWORD
      property: DB_PASSWORD

Now the manifest needs to iterate over all these values. Helm provides the range operator for loops:

...
spec:
  ...
  data:
  {{- range .Values.lockboxSecretStore.externalSecret.data }}
    - secretKey: {{ .secretKey }}
      remoteRef:
        key: {{ $.Values.lockboxSecretStore.externalSecret.secretId }}
        property: {{ .property }}
  {{- end }}

Notice the dollar sign in remoteRef.key. The range and with operators create their own scope: inside the loop, the dot (.) refers to the current element produced by range. To reach a value from values.yaml, you must prefix the path with $., which tells the template engine to start from the root scope.
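To make the scoping rule concrete, assume hypothetical values prefix: "app" and items: [db, cache]; then a loop could look like this:

```yaml
{{- range .Values.items }}
- name: {{ . }}                  # "." is the current list element
  prefix: {{ $.Values.prefix }}  # "$." reaches values.yaml via the root scope
{{- end }}
```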

Full manifest code:

lockbox-external-secret.yaml
{{- if .Values.lockboxSecretStore.enabled -}}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ .Release.Name }}-lockbox-external-secret
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: {{ .Release.Name }}-lockbox-secret-store
    kind: SecretStore
  target:
    name: {{ .Release.Name }}-lockbox-secret
  data:
  {{- range .Values.lockboxSecretStore.externalSecret.data }}
    - secretKey: {{ .secretKey }}
      remoteRef:
        key: {{ $.Values.lockboxSecretStore.externalSecret.secretId }}
        property: {{ .property }}        
  {{- end }}
{{- end }}

clusterip.yaml

Templating the Service manifest is no different from the others. Let's move the values spec.ports[0].port and spec.ports[0].targetPort into values.yaml:

clusterip:
  port: 80
  targetPort: http

Also, instead of our own selectors, we will use the helm-chart.selectorLabels template provided out of the box:

...
spec:
  ports:
  - name: http
    protocol: TCP
    port: {{ .Values.clusterip.port }}
    targetPort: {{ .Values.clusterip.targetPort }}
  selector:
    {{- include "helm-chart.selectorLabels" . | nindent 4 }}

Full manifest code:

clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}
spec:
  ports:
  - name: http
    protocol: TCP
    port: {{ .Values.clusterip.port }}
    targetPort: {{ .Values.clusterip.targetPort }}
  selector:
    {{- include "helm-chart.selectorLabels" . | nindent 4 }}

deploy.yaml

Add the following values to values.yaml:

deployment:
  replicaCount: 1
  image: ""
  containerPort: 8080
  resources:
    requests:
      cpu: "150m"
      memory: "400Mi"
    limits:
      cpu: "250m"
      memory: "600Mi"

Since the variables from the Lockbox secret are passed to the container as environment variables, let's add the same loop as in lockbox-external-secret.yaml, with a slightly different structure, and guard it with a condition checking that the release actually uses Lockbox. I almost forgot the most important thing! Let's insert the container's resources (requests and limits) using the toYaml function:

spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: {{ .Release.Name }}-app
        image: {{ .Values.deployment.image }}
        ports:
        - name: {{ .Values.clusterip.targetPort }}
          containerPort: {{ .Values.deployment.containerPort }}
        resources:
          {{- toYaml .Values.deployment.resources | nindent 10 }}
        {{- if .Values.lockboxSecretStore.enabled }}
        env:
        {{- range .Values.lockboxSecretStore.externalSecret.data }}
        - name: {{ .secretKey }}
          valueFrom:
            secretKeyRef:
              name: {{ $.Release.Name }}-lockbox-secret
              key: {{ .secretKey }}
        {{- end }}
        {{- end }}

Full manifest code:

deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.deployment.replicaCount }}
  selector:
    matchLabels:
      {{- include "helm-chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "helm-chart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Release.Name }}-app
        image: {{ .Values.deployment.image }}
        ports:
        - name: {{ .Values.clusterip.targetPort }}
          containerPort: {{ .Values.deployment.containerPort }}
        resources:
          {{- toYaml .Values.deployment.resources | nindent 10 }}
        {{- if .Values.lockboxSecretStore.enabled }}
        env:
        {{- range .Values.lockboxSecretStore.externalSecret.data }}
        - name: {{ .secretKey }}
          valueFrom:
            secretKeyRef:
              name: {{ $.Release.Name }}-lockbox-secret
              key: {{ .secretKey }}
        {{- end }}
        {{- end }}

ingress.yaml

Let's also add the resource of type Ingress. As with ExternalSecret, its creation will be optional. In values.yaml, add:

ingress:
  enabled: true
  host: ""

Full manifest code:

ingress.yaml
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "helm-chart.labels" . | nindent 4 }}
spec:
  tls:
    - hosts:
      - {{ .Values.ingress.host }}
      secretName: {{ .Release.Name }}-certificate-secret
  ingressClassName: {{ .Release.Name }}-class-resource
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: {{ .Release.Name }}
              port:
                number: {{ .Values.clusterip.port }}
{{- end }}

And also values.yaml with all the added values:

values.yaml
lockboxSecretStore:
  enabled: true
  externalSecret:
    secretId: ""
    data:
    - secretKey: JDBC_URL
      property: JDBC_URL
    - secretKey: DB_USERNAME
      property: DB_USERNAME
    - secretKey: DB_PASSWORD
      property: DB_PASSWORD

certificateSecretStore:
  enabled: true
  externalSecret:
    certificateId: ""

clusterip:
  port: 80
  targetPort: http

deployment:
  replicaCount: 1
  image: ""
  containerPort: 8080
  
  resources:
    requests:
      cpu: "150m"
      memory: "400Mi"
    limits:
      cpu: "250m"
      memory: "600Mi"

ingress:
  enabled: true
  host: ""

Launching a chart

And so, we have created a Helm chart with a fairly flexible values.yaml file. You can use the default values.yaml, but it is unlikely to cover all your needs. You can override values with the --set parameter when installing a release or, if there are many of them, write your own YAML values file and point to it with the -f parameter. Since deploying my service requires overriding quite a few values, I created a my-app-values.yaml file. Let's install the chart:

$ helm install -n kubectl-ns -f ./my-app-values.yaml my-app ./helm-chart 
NAME: my-app
LAST DEPLOYED: Fri Oct 20 17:21:18 2023
NAMESPACE: kubectl-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None

Let’s get a list of all resources:

$ kubectl -n kubectl-ns get all                                     
NAME                                 READY   STATUS   RESTARTS      AGE
pod/my-app-84c8d4cfdd-mhqb4   0/1    Error   1 (36s ago)  106s

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/my-app          ClusterIP   10.96.167.201   <none>        80/TCP    107s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app          0/1     1            0           107s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app-84c8d4cfdd          1         1         0       107s

The pod did not start! Let's find out why by running kubectl describe:

$ kubectl -n kubectl-ns describe pods/my-app-84c8d4cfdd-mhqb4
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m40s                  default-scheduler  Successfully assigned kubectl-ns/my-app-84c8d4cfdd-mhqb4 to cl12s2vrpmu4of6it02q-itys
  Warning  Failed     5m40s (x2 over 5m40s)  kubelet            Error: secret "my-app-lockbox-secret" not found

The secret was not found. But why? Let's list all the secrets:

$ kubectl -n kubectl-ns get secret                                   
NAME                                  TYPE                                  DATA   AGE
my-app-certificate-secret             kubernetes.io/tls                     2      7m40s
my-app-lockbox-secret                 Opaque                                4      7m39s

So here they are! I was a little confused when I discovered this problem, but I quickly figured out what was going on. Let's run the same chart installation command, but with the --dry-run parameter. I have already used it in this article: it does not actually install the chart, but prints all the manifests in the order they would be installed. In order not to wade through the contents of every manifest, let's pipe the output through grep and keep only the manifest names:

$ helm install --dry-run -n kubectl-ns -f ./my-app-values.yaml my-app ./helm-chart | grep "Source:"
# Source: helm-chart/templates/clusterip.yaml
# Source: helm-chart/templates/deploy.yaml
# Source: helm-chart/templates/ingress.yaml
# Source: helm-chart/templates/cert-external-secret.yaml
# Source: helm-chart/templates/lockbox-external-secret.yaml
# Source: helm-chart/templates/cert-secret-store.yaml
# Source: helm-chart/templates/lockbox-secret-store.yaml

Did you notice? ExternalSecret and SecretStore are created after Deployment.

Resource launch order

Helm sorts all chart resources and executes them in the following order:

[Image: Helm's resource installation order]

Based on this list, Secret (5) is created before Deployment (21). But the catch is that the secret with values from Lockbox is produced by a resource of type ExternalSecret, which is not in the list. Helm installs resource kinds unknown to it last, which is why SecretStore and ExternalSecret come after Deployment. Fortunately, you can influence the installation order using chart hooks.

Hooks

Hooks allow manifests to be applied at a specific point in time, for example before installing a release or after uninstalling it. To indicate at what point a hook should run, the helm.sh/hook annotation is added to the resource. The full list of hooks (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, test) is given in the documentation.

We will need pre-install. In addition to the annotation indicating the hook, we can also set its weight with the helm.sh/hook-weight annotation to assign a specific execution order to resources running within a single hook. By default, all resources are assigned a weight of "0". The annotation value must be a string and can be either negative or positive; Helm sorts hook resources in ascending order of weight. Besides these two annotations, you can use the helm.sh/hook-delete-policy annotation, which defines the hook deletion policy.

The default is before-hook-creation, which means that the resource created by the hook is not deleted until the next time the hook runs.
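Putting the three annotations together on one resource might look like this sketch (the resource name is a placeholder, and the delete policy is spelled out explicitly even though before-hook-creation is the default):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: example-secret-store        # placeholder name
  annotations:
    "helm.sh/hook": pre-install                          # run before the release is installed
    "helm.sh/hook-weight": "-2"                          # lower weight runs first within the hook
    "helm.sh/hook-delete-policy": before-hook-creation   # delete the old copy before the next hook run
```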

Determining your own order for creating resources

Let's get back to the code. We will add the pre-install hook annotation to all resources of type ExternalSecret and SecretStore. Since resources of the SecretStore type must be created first, we will assign them a weight of "-2"; ExternalSecret will then get a weight of "-1". Since we will still need the secrets after the hook completes, to launch Deployment and configure Ingress via HTTPS, we will leave the helm.sh/hook-delete-policy annotation at its default.

cert-secret-store.yaml | lockbox-secret-store.yaml:

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  ...
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-2"
...

cert-external-secret.yaml | lockbox-external-secret.yaml:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  ...
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-1"
...

Let’s run the command to install the chart again and make sure that the pod is in the Running status:

$ kubectl -n kubectl-ns get pods
NAME                             READY   STATUS    RESTARTS      AGE
my-app-84c8d4cfdd-hksz5          1/1     Running   0             49s

Pod was launched successfully!

We are launching the second release

Now let's try to launch a second release. We will take the standard nginx image. All we need is a Deployment and a Service (ClusterIP). This time, to assign custom values to the release, we will use the --set parameter:

$ kubectl create namespace nginx-helm
namespace/nginx-helm created
$ helm install \
    -n nginx-helm \
    --set lockboxSecretStore.enabled=false \
    --set certificateSecretStore.enabled=false \
    --set deployment.image=nginx:1.25.2 \
    --set deployment.containerPort=80 \
    --set ingress.enabled=false \
    nginx ./helm-chart
NAME: nginx
LAST DEPLOYED: Fri Oct 20 20:10:58 2023
NAMESPACE: nginx-helm
STATUS: deployed
REVISION: 1
TEST SUITE: None
$ kubectl -n nginx-helm get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-658dbf5895-5rrmm   1/1     Running   0          14s

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   10.96.171.141   <none>        80/TCP    14s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           14s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-658dbf5895   1         1         1       14s
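For reference, the dotted --set paths above map to nested YAML keys, so the same release could be installed from a file like this hypothetical nginx-values.yaml passed with -f:

```yaml
# nginx-values.yaml — equivalent of the --set flags above
lockboxSecretStore:
  enabled: false
certificateSecretStore:
  enabled: false
deployment:
  image: nginx:1.25.2
  containerPort: 80
ingress:
  enabled: false
```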

We can also see a list of all releases in the namespace:

$ helm -n nginx-helm ls
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS       CHART                   APP VERSION
nginx   nginx-helm      1               2023-10-20 20:10:58.163471204 +0300 MSK deployed     helm-chart-0.1.0        1.16.0 

Let’s delete the release:

$ helm -n nginx-helm uninstall nginx     
release "nginx" uninstalled

Results and what’s next

The initial goal has been achieved: we created a chart that you can use to deploy your own applications! Feel free to experiment with the chart and perhaps even add new features to it. I plan to improve the chart for even more flexible configuration and to move the deployment of my applications over to Helm. A new repository with the chart may well appear, which I will develop over time (if you would like to participate in its development, you are welcome). Thanks for reading!
