Ways to Prepare Manifests for Kubernetes


When was the last time you walked up to your burned-out DevOps engineer, gave them a paternal pat on the shoulder, and said, “I like your pipeline”?

As the well-known folk wisdom goes, “everyone deploys however they want.” And this is not a metaphor: in my short career, I have never seen two identical pipelines.

There are dozens of ways to deploy applications to Kubernetes, but every deployment starts with preparing a manifest. In this article I will deliberately leave deployment automation and CI/CD pipelines aside – or rather, the tools those pipelines are written with – and talk about the ways to prepare manifests for Kubernetes.


Today, the main tool for passing manifests to Kubernetes is the kubectl utility. Hence the first and most obvious way to prepare a manifest – just write it “by hand”. Under the hood, this utility simply talks to kube-apiserver and passes the manifest on to it. The vast majority of training materials use kubectl and static manifests for demonstration deployments. In practice, though, few people use this approach. Don’t get me wrong, but if you come to the grown-ups with kubectl and static manifests and ask them how to deploy, they will definitely give you two things:

Of course, using kubectl paired with static manifests has plenty of advantages. At a minimum, you always see what you deploy, and you always get exactly what you asked for – unless, of course, some Admission Controller or a crazy operator running inside your cluster turns the manifest it was given into something else. But let’s not talk about sad things. Static manifests are always more convenient from a security standpoint: you can understand what is happening in your infrastructure code at a glance.
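
For instance, a completely static manifest like the one below (the names are just placeholders), saved as, say, app-config.yaml, is deployed with nothing more than kubectl apply -f app-config.yaml:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info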

And yet, if you are building a modern product, you most likely need templating. Luckily for us, there is no need to look far: the kustomize tool is already built into the kubectl utility. Previously this tool lived in an incubator, had to be used separately from kubectl, and did not exactly shine feature-wise. In essence, it is just a merger of pre-prepared pieces of manifests. In my personal opinion, its only real strong point is secret generation. In the latest versions the tool has certainly become more stable and noticeably more capable – for example, looking ahead a bit, kustomize can render and deploy Helm charts.

To start using kustomize, you just need to lay out your manifests following fairly simple directory-organization rules: there is a directory with the base pieces shared by all manifests, and there are directories with the “customized” settings. That is really the essence of the tool: we prepare pieces of manifests, arrange them in directories, and then those pieces are merged together into the final manifest. Importantly, with kustomize you immediately become GitOps-ready, since all the promising CI/CD tools such as Spinnaker, FluxCD and ArgoCD love kustomize and support it out of the box. And, of course, it matters that this manifest-preparation tool is now part of the kubectl utility.
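
A minimal sketch of such a layout (the directory and file names are just an example): the base directory holds the shared manifests plus a kustomization.yaml that lists them, and an overlay pulls the base in, patches it, and can generate secrets on top.

# base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml

# overlays/prod/kustomization.yaml
resources:
- ../../base
patches:
- path: replicas-patch.yaml
secretGenerator:        # kustomize's secret generation
- name: app-secret
  literals:
  - password=changeme

The final manifest is rendered with kubectl kustomize overlays/prod, or rendered and applied in one step with kubectl apply -k overlays/prod.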

Let’s catch our breath and make a lyrical digression. Assuming, of course, anyone was holding it – it sounds as if I expect this article to be read in one breath 🙂 Imagine you have a question: is it possible to deploy an application to Kubernetes without kubectl at all? The answer is yes. kubectl is essentially a REST API client, and this client talks to kube-apiserver with ordinary HTTP requests. Everything kubectl does can be reproduced with, say, curl. Sometimes that is even justified, but that is a topic for another article. As they say: I saw it, I did it, you don’t need it. You can go even further – in principle, nothing prevents you from writing data directly into the cluster’s etcd, bypassing the API entirely. It will work, and maybe there are people who do exactly that. For such engineers, we drink standing up and without clinking glasses.
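
Purely for illustration (the API server address, token and file name here are placeholders), creating an object from a manifest with curl looks roughly like this:

curl -X POST --cacert ca.crt -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/yaml" --data-binary @configmap.yaml https://<api-server>:6443/api/v1/namespaces/default/configmaps

which is essentially what kubectl create -f configmap.yaml does under the hood.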

Now back to how to prepare manifests. “I deploy on Friday night and I’m not afraid to look people in the eye” – that is about those who use Helm. It is perhaps the most powerful utility for preparing manifests today.

In fact, Helm does not position itself as just a template engine, parameterizer or customizer. Helm loudly calls itself the package manager for Kubernetes – the first and only one. It not only lets you create manifests, it can also deploy them to Kubernetes by itself, quite honestly, without kubectl or any other utilities. In addition, Helm has its own set of rules and approaches to organizing application manifests, all of which is embodied in the Helm chart. Using a Helm chart, you can pack into a distributable package (in fact, a tar.gz archive) any application consisting of Kubernetes manifests and even, brace yourself, the state of such an application.

The level of parameterization of Helm charts is truly transcendent – you can write a chart in which literally every line of the manifest is parameterized. Or, for example, you can write a single chart that, depending on the parameters passed to the helm utility, generates absolutely any manifest. Below is a simple example in which the kind you specify in the values determines which manifest you get at the output. Please don’t do this.

---
{{ if eq .Values.kind "Secret" }} 
apiVersion: v1
kind: {{ .Values.kind }}
metadata:
  name: myregistrykey
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: e30=  # placeholder: base64-encoded .docker/config.json
{{ end }}
{{ if eq .Values.kind "Deployment" }}
apiVersion: apps/v1
kind: {{ .Values.kind }}
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
{{ end }}
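
With a chart like this, helm template . --set kind=Secret and helm template . --set kind=Deployment (or helm install with the same flag) produce two completely different manifests from the same source – which is exactly why you should not do it.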

Among the other ways to prepare manifests, Helm stands out with its support for chart versions and chart repositories, plugins, a large community, a low entry threshold, and good documentation. And yes, manifests written with Helm are easily reusable, whatever that means to you.

Moving on. Anyone who works with OpenShift knows that there is an oc utility, and that it has a built-in templating mechanism called Template. Granted, this is not the most obvious way to deploy to Kubernetes, but it works perfectly well. I know product teams that successfully use the oc utility as the main templating engine on their project and prepare manifests for it. Most importantly, it lets you define parameters right in the manifest – and what else do you need?
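
A rough sketch of such a template (names and parameters are placeholders): parameters are declared at the top, referenced as ${...} inside the objects, and the whole thing is rendered and applied with oc process -f template.yaml -p IMAGE=nginx:1.14.2 | oc apply -f -.

---
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: simple-app
parameters:
- name: APP_NAME
  value: myapp
- name: IMAGE
  required: true
objects:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ${APP_NAME}
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: ${APP_NAME}
    template:
      metadata:
        labels:
          app: ${APP_NAME}
      spec:
        containers:
        - name: ${APP_NAME}
          image: ${IMAGE}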

Another common way to prepare manifests is Ansible. Surprising or not, this powerful configuration-management tool can render manifests with its Jinja templating engine and even deploy them to Kubernetes.
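
As a minimal sketch (the playbook is illustrative; it assumes the kubernetes.core collection and the Python kubernetes client are installed, and that deployment.yaml.j2 is an ordinary manifest with Jinja variables inside), an Ansible task that renders such a template and applies it to the cluster could look like this:

---
- hosts: localhost
  tasks:
  - name: Render the Jinja2 template and apply the resulting manifest
    kubernetes.core.k8s:
      state: present
      definition: "{{ lookup('template', 'deployment.yaml.j2') | from_yaml }}"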

Somewhere alongside this approach are the engineers who use scripting languages to generate manifests. It is enough to take a static manifest, add variables to it, and then run it through a renderer – the Python interpreter, Bash, or some other scripting language – which substitutes the necessary values for the variables. After that, all that remains is to feed the manifest to kube-apiserver with kubectl or curl. The drawbacks of this approach are fairly obvious, though: as the project grows, so does the effort needed to maintain this kind of manifest preparation.
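
As a tiny sketch of the idea (the file name and variables are made up for illustration), take a template such as pod.yaml.tpl:

---
apiVersion: v1
kind: Pod
metadata:
  name: ${APP_NAME}
spec:
  containers:
  - name: ${APP_NAME}
    image: ${IMAGE}

and render and apply it with something as simple as APP_NAME=nginx IMAGE=nginx:1.14.2 envsubst < pod.yaml.tpl | kubectl apply -f -, assuming envsubst from the gettext package is available.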

Another way suits experienced developers. There is a huge number of libraries for programming languages that model Kubernetes entities in the idioms of the given language. They allow you not only to deploy applications directly from code, but also to write special applications for Kubernetes called operators, which often implement deployment or mutation logic. In general terms the process looks like this: you send Kubernetes a request with a minimal amount of information – a custom resource (defined by a CRD) carrying, for example, the container image and the port on which you want your application to run. The operator picks up your request and, based on its internal logic, starts performing all sorts of actions in the cluster, adding manifests – and as a result you get an application. This approach may seem inflexible and requires you to constantly maintain the operator, but it also has undeniable advantages – for example, complete control over manifest parameters, which improves stability and security.
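
Purely as an illustration (the API group, kind and fields here are hypothetical – a real operator defines its own CRD schema), the user-facing request can be as short as this, while the operator generates all the actual manifests from it:

---
apiVersion: apps.example.com/v1
kind: WebApp
metadata:
  name: my-app
spec:
  image: nginx:1.14.2  # the only things the user specifies
  port: 80             # the operator derives the rest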

Finally, I would like to say that the world of containerized applications is so diverse that you can see many unusual and sometimes strange ways of solving seemingly simple tasks in it. So I would not be surprised if this set of methods could be extended by a couple more, and maybe many more.
