The Kubernetes architecture is well suited for organizations like FAANG, but it can be overkill and overly complex for everyone else.
Kubernetes, an open source container orchestrator, has become the de facto standard for deploying containerized applications in production. There are many good reasons for this, including the fact that Kubernetes offers a high degree of reliability, automation, and scalability. Nevertheless, it sometimes seems to me that the hype around Kubernetes has outpaced reality: more than six years after its release, the platform still has significant drawbacks. Some of them have been baked into Kubernetes from the start; others are by-products of the ecosystem that has grown up around it.
Before diving headlong into Kubernetes, consider the following issues with this open source container orchestrator.
1. Kubernetes is designed for companies like FAANG.
First of all, the Kubernetes architecture was and still is designed for companies that need to manage extremely large software environments.
If you’re Google (whose Borg orchestrator is the basis for the open source Kubernetes project), Kubernetes is a great tool. The same goes for Netflix, Facebook, Amazon, or any other web-scale company with dozens of data centers and hundreds of applications and services distributed across them.
But if you are a small organization with a single on-premises server room or a modest cloud subscription, and perhaps a dozen applications to deploy, the Kubernetes architecture is most likely overkill for you, like using a bulldozer to tend a garden plot. If you are not running Kubernetes at scale, the benefits it delivers are not worth the effort and operational cost of setting it up and keeping it running.
This does not mean that Kubernetes will never be useful for small deployments; I believe it is moving in that direction. But today, whenever I spin up a Kubernetes cluster to deploy just one or two applications on a small number of servers, I come away convinced that I would have been better off with a simpler solution.
2. The Kubernetes market is fragmented.
Another problem with the Kubernetes architecture is that there are too many Kubernetes distributions, and too many competing tools, philosophies, and approaches built around them, leaving the ecosystem deeply fragmented.
Of course, some degree of fragmentation is the lot of any open source ecosystem. Red Hat Enterprise Linux, for example, uses a different package manager, different management tools, and so on than Ubuntu does. Still, there are more similarities than differences between Red Hat and Ubuntu. If you are a sysadmin working with Red Hat today, you don’t need to spend six months learning new tools if you suddenly decide to switch to Ubuntu.
I don’t think the same is true for Kubernetes. If you are using, say, OpenShift today but want to move to VMware Tanzu, you face a steep learning curve. Although both distributions are built on the same underlying platform, the methodologies and tooling they layer on top are very different.
The same fragmentation shows up in managed Kubernetes cloud services. Google Kubernetes Engine (GKE) has a very different user interface and management toolset than Amazon EKS, its counterpart on AWS.
Of course, this is not the fault of the Kubernetes architecture itself but the result of vendors trying to differentiate their products. From the perspective of Kubernetes users, however, it is a real pain point.
3. There are too many parts in Kubernetes.
We talk about Kubernetes as a single platform, but it actually consists of more than half a dozen distinct components: the API server, etcd, the scheduler, the controller manager, the kubelet, kube-proxy, a container runtime, and so on. That means that when you install or upgrade Kubernetes, you have to deal with each part separately, and most Kubernetes distributions lack automated tooling for this.
Of course, Kubernetes is a complex platform, and it genuinely needs many parts to work. But compared with other complex platforms, Kubernetes stands out for how poorly its parts are integrated into an easily manageable whole. A typical Linux distribution also consists of many different programs, yet you can install and manage all of them in a centralized, streamlined way. Not so with the Kubernetes architecture.
4. Kubernetes does not automatically guarantee high availability.
One of the most frequently cited reasons for adopting Kubernetes is its supposedly magical ability to keep your applications running even when part of your infrastructure fails.
It is true that the Kubernetes architecture automatically makes intelligent decisions about where to place workloads in the cluster. Kubernetes, however, is far from a high-availability guarantee. For example, it will happily run a production cluster with a single master node, which is a recipe for bringing down the entire cluster: if that lone master fails, the control plane goes with it, and your workloads will not survive for long.
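To make this concrete: with kubeadm, surviving a master failure means explicitly standing up multiple control-plane nodes behind a load balancer that you provision yourself; nothing sets this up for you by default. A minimal sketch, in which the load balancer address and Kubernetes version are hypothetical placeholders:

```yaml
# kubeadm ClusterConfiguration sketch for a stacked HA control plane.
# controlPlaneEndpoint must point at a load balancer you run yourself;
# the address and version below are hypothetical examples.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "lb.example.internal:6443"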
In addition, Kubernetes cannot automatically guarantee that cluster resources are allocated fairly among the workloads running on it. To achieve that, you have to configure resource quotas manually.
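For instance, fair sharing between teams has to be spelled out by hand with a ResourceQuota object per namespace. A minimal sketch, in which the namespace name and the limits are hypothetical values you would pick yourself:

```yaml
# ResourceQuota sketch: caps the total resources one namespace may claim.
# The namespace and all the numbers are illustrative, not recommendations.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # sum of CPU requests across all pods
    requests.memory: 8Gi     # sum of memory requests
    limits.cpu: "8"          # sum of CPU limits
    limits.memory: 16Gi      # sum of memory limits
    pods: "20"               # maximum number of pods
```

Applied with `kubectl apply -f quota.yaml`, and repeated for every namespace that needs one.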
5. Managing Kubernetes manually is difficult.
While Kubernetes demands plenty of manual intervention to achieve high availability, it manages to make manual control much harder in the cases where you actually need it.
Of course, there are ways to tune the probe timings Kubernetes uses to decide whether a container is healthy, or to force a workload onto a specific server in the cluster. But the Kubernetes architecture is not really designed with administrators making these manual adjustments in mind; it assumes you will always be happy with the defaults.
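Here is roughly what those manual overrides look like in practice. Both the probe timings and the node label below are hypothetical values you would have to choose, and then maintain, yourself:

```yaml
# Pod sketch: overriding liveness-probe timing defaults and pinning the
# pod to specific nodes. The label disktype=ssd must already exist on
# the target nodes; all names and numbers here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeSelector:
    disktype: ssd              # schedule only on nodes with this label
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10  # default is 0
      periodSeconds: 30        # default is 10
      failureThreshold: 5      # default is 3
```

None of this is hard to write, but it lives in per-workload YAML rather than in any central, administrator-facing control surface.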
This makes sense, given that (as noted above) Kubernetes was built primarily for large-scale deployments. With thousands of servers and hundreds of applications, you are not going to tweak much by hand. But it does not help if you are a small company that wants more control over how workloads are arranged in the cluster.
6. Kubernetes falls short on performance monitoring and optimization.
Kubernetes does try to keep your workloads up and running (although, as noted above, its ability to do so depends on factors such as how many masters you deploy and how you allocate resources).
But the Kubernetes architecture does little to help you monitor workloads or ensure they perform optimally. It does not alert you to problems, and it does not make it easy to collect monitoring data from the cluster. Most of the dashboards that ship with Kubernetes distributions do not offer a comprehensive view of your environment either. There are third-party tools that give you this visibility, but each one is yet another thing you have to deploy, learn, and manage in order to run Kubernetes.
Likewise, Kubernetes is not much help in optimizing your costs. It will not notify you when the servers in your cluster are running at only 20% capacity, which probably means you are wasting money on over-provisioned infrastructure. Here, too, third-party tools can help, but they bring still more complexity.
7. Kubernetes reduces everything to code.
In Kubernetes, almost every task requires writing code, usually in the form of YAML files that you then apply via the kubectl command line.
Many people see the “everything is code” principle of the Kubernetes architecture as a feature, not a bug. And while I certainly understand the value of managing an entire platform with a single methodology and toolset (i.e. YAML files), I wish Kubernetes also offered other options for people who need them.
There are times when I don’t want to write a long YAML file to deploy a simple workload, or pull one from GitHub and then tweak it piece by piece to fit my environment. When I need to do something simple in Kubernetes, I would love to just click a button or run a simple command (meaning a kubectl command that does not require a dozen arguments, many of them cryptic values pasted in from somewhere else), but that is rarely an option.
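For comparison, here is a minimal sketch of the declarative route for a trivial single-container deployment; the names and image are placeholders:

```yaml
# Roughly the smallest Deployment manifest Kubernetes will accept:
# even a one-container app needs nested selectors and label plumbing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx        # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

The closest thing to a one-click experience is the imperative shortcut `kubectl create deployment nginx --image=nginx:1.25`, but shortcuts like that cover only the simplest cases before you are back to editing YAML.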
8. Kubernetes wants to have complete control over everything.
My last complaint about Kubernetes is that it simply is not designed to coexist with other kinds of systems. It wants to be the only platform you use to deploy and manage applications.
That is fine if all of your workloads are containerized and can be orchestrated by Kubernetes. But what if you have legacy apps that cannot run in containers? Or what if you want to run part of a workload on the Kubernetes cluster and the other part somewhere outside it? Kubernetes offers no built-in functionality for such scenarios; it is designed on the assumption that everyone wants to run everything in containers, now and forever.
Lest I be accused of hating Kubernetes, let me reiterate that it is a powerful tool for orchestrating large-scale containerized applications. There are many use cases that Kubernetes is great for.
But the Kubernetes architecture also has drawbacks. On the whole, it is not the best solution if you have legacy workloads to manage and/or if your deployments are not large enough to justify the complexity Kubernetes brings with it. To prove its worth and live up to the reputation it enjoys in certain corners of the IT ecosystem, Kubernetes must address these shortcomings.