How to break everything! 5 Bad Kubernetes Tips

Kubernetes remains the most popular container orchestration solution. Frameworks and programming languages come and go, whole eras change, yet Kubernetes seems to be the one constant. Everything else follows the usual pattern: experts write articles, tech bloggers publish setup guides, and DevOps engineers complain on forums that something doesn't work the way they'd like. Although we have had the chance to set up more than a dozen clusters, we won't be writing yet another step-by-step guide today. Instead, since it's Friday, here are five bad tips on how to plant a time bomb under your Kubernetes cluster.

Spoiler alert: don't actually do any of this.

Tip #1 Use the latest

Always run the latest version of Kubernetes. As soon as news of a new release appears, download it and upgrade immediately! Leave the release notes and the compatibility worries to the retrogrades. Also make sure the containers that end up in your pods are built from the most recent images. This is easy to verify: ask your DevOps engineer to show you the manifests for the current cluster configuration. You will see something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-super-app
  template:
    metadata:
      labels:
        app: my-super-app
    spec:
      containers:
      - name: super-app
        image: docker.io/acme/super-app:latest

Notice the word "latest"? Perfect. Of course, it isn't very informative, since you have no idea whether the image was built yesterday or a year ago, but we are not here to ask such questions. If the cluster simply stops working after an update, it was most likely the DevOps engineers who messed something up again, 100%.
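(A small aside for the retrogrades: a pinned tag, with a purely illustrative version number, looks like this instead and at least tells you which build you are running.)

        image: docker.io/acme/super-app:1.4.2

Pinning to an immutable image digest is stricter still, but let's not get carried away.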

Tip #2 Include all settings and configurations right in the container

Well, I think it’s quite obvious. Containers that end up in your cluster don’t need to be generic. Instead, they should:

– have a pre-registered IP address;

– contain all necessary logins, passwords, keys and secrets;

– contain hard-coded addresses of the neighboring services they interact with.

While we're at it, let's instruct the developers to write applications with this exact cluster in mind. Just in case. Sure, if the cluster ever has to be rebuilt or migrated to another site, everything will probably stop working. But then again, Kubernetes is Kubernetes wherever you run it, so let's not dwell on sad things and deal with problems as they arise. Worst case, we'll rewrite the application from scratch, no big deal. Right now everything just works, because the containers are tailored specifically to your setup, and that can only be a cause for joy.
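Purely for reference, so you know exactly what to avoid: externalized configuration, the very thing this tip tells you to skip, looks roughly like the sketch below. Every name, value, and address here is made up for illustration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: super-app-config
data:
  BILLING_URL: "http://billing:8080"   # the neighbor is addressed by its Service name, not a baked-in IP
---
apiVersion: v1
kind: Secret
metadata:
  name: super-app-secrets
type: Opaque
stringData:
  DB_PASSWORD: "not-inside-the-image"  # kept out of the image; in real life, kept out of Git too
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-super-app
  template:
    metadata:
      labels:
        app: my-super-app
    spec:
      containers:
      - name: super-app
        image: docker.io/acme/super-app:1.4.2
        envFrom:                         # settings arrive as environment variables at deploy time
        - configMapRef:
            name: super-app-config
        - secretRef:
            name: super-app-secrets

A generic image plus externalized settings is exactly what lets the same container run in any cluster, which, of course, is precisely what we are trying to avoid here.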

Tip #3 Manage your cluster with kubectl only

While your colleagues are still developing and agreeing on procedures for making changes to cluster configurations, we will solve this problem once and for all. We only need to fix literally one line in the config, and we don't want to wait for the edit to be approved, so we will make all changes on the fly with kubectl. We'll just document them in our personal notes, and if the configuration ever needs to be reproduced, we'll find what changed right away.

Of course, those notes have already swollen to the size of a mini-Wikipedia, and some of them are lost for good, but we didn't take the courses for nothing and passed the Certified Kubernetes Administrator (CKA) exam brilliantly, albeit on the fourth attempt. GitOps? No, that's something for developers. We don't need it, especially since kubectl lets you hunt down errors, read logs, and view cluster metrics. What more could you need?
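For the curious, the declarative alternative is a sketch like the one below: manifests live in a Git repository and changes go through review before being applied. The file path and repository layout are illustrative.

# deploy/super-app.yaml -- kept in Git together with the rest of the cluster configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-super-app
  template:
    metadata:
      labels:
        app: my-super-app
    spec:
      containers:
      - name: super-app
        image: docker.io/acme/super-app:1.4.2
# The one-line fix goes into this file, gets reviewed, and is rolled out with:
#   kubectl apply -f deploy/super-app.yaml
# so the repository, not somebody's private notes, remains the source of truth.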

And of course, don't budget for a Kubernetes monitoring system. Why would you? Everything you need is literally there out of the box.

Tip #4 Create one big cluster for all your needs

Amateur DevOps engineers deploy several Kubernetes clusters for different needs: development, testing, production, and so on. They only do this, I suspect, because they don't understand the concept of namespaces, which can split all the necessary resources between environments. It's time you knew that namespaces solve all the problems of sharing and delegating resources, and security will be just fine too. Well, more or less. For some reason there is also RBAC, but we'll read up on that later; it's all painfully abstruse. Role, ClusterRole, RoleBinding, ClusterRoleBinding: somehow too complicated, we'll put it off for later.
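In case we ever stop putting it off: a minimal namespace-scoped RBAC sketch looks roughly like this. The namespace, names, and group below are illustrative.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: dev-deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: dev-team-deployer
subjects:
- kind: Group
  name: dev-team                       # illustrative group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-deployer
  apiGroup: rbac.authorization.k8s.io

This grants the dev team the right to manage Deployments in the dev namespace and nowhere else. Far too complicated, obviously.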

We are super professionals, so there will definitely be no problems. There is, admittedly, a small chance that a serious failure in the cluster will take everything down at once. Or that attackers will break into production and get somewhere they shouldn't. But what are the odds? Hopefully small.

Tip #5 Don't get distracted by metrics; Kubernetes will handle it on its own

Kubernetes is literally the cutting edge of engineering thought, so you only need to get it working somehow, and it will take care of everything else. Well, maybe not everything. But scaling and load distribution for sure. Just start pods with containers and the problem is solved. Kubernetes will distribute server resources between pods by itself, detect memory leaks that lead to unstable operation by itself, and will certainly never allow one pod to eat up all the resources and kill the whole cluster. Yes, the CKA course had labs on setting resource limits for pods. And even something interesting about pod health checks: startup, readiness, and liveness probes. But that's just over-insurance; Kubernetes is smart enough to solve these problems without our involvement.
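For completeness, that "over-insurance" looks roughly like this in a pod spec. All numbers, paths, and ports are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: super-app
spec:
  containers:
  - name: super-app
    image: docker.io/acme/super-app:1.4.2
    resources:
      requests:                 # what the scheduler reserves for the pod
        cpu: "250m"
        memory: "256Mi"
      limits:                   # the ceiling beyond which the pod is throttled or killed
        cpu: "500m"
        memory: "512Mi"
    readinessProbe:             # traffic is routed to the pod only while this succeeds
      httpGet:
        path: /healthz          # illustrative health endpoint
        port: 8080
      periodSeconds: 10
    livenessProbe:              # the container is restarted when this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

But why bother, right?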

—–

All of this is, of course, bad advice. Do none of it if you want your Kubernetes cluster to run long and reliably. But you would be surprised how many people follow these tips to the letter in their daily work and keep hoping for the best anyway.

This material was prepared by Sergey Polunin, Head of the Infrastructure IT Solutions Protection Group at Gazinformservice. Sergey Polunin's blog can be read at link.
