K8s in production and development: four myths

When starting to experiment with Kubernetes, many people run into one of the biggest misconceptions: the belief that K8s will behave the same in production as it does in a development or test environment.

It won't.

When it comes to Kubernetes, and to containers and microservices in general, there is a big difference between running in a "lab" environment and running in production. It is the difference between merely getting something started and running it safely and reliably.

An important caveat up front: this is not a Kubernetes problem but one shared by containers and microservices as a whole. Deploying a container is relatively easy. Operating and scaling containers (and containerized microservices) in a production environment is much harder.

The same is usually true of container orchestration. For example, a study conducted by The New Stack three years ago found that container adoption drove Kubernetes adoption, because companies were looking for powerful technology to help them solve complex operational problems. Kubernetes has alternatives, but it quickly became synonymous with orchestration and the de facto standard. And there can be a big difference between running K8s in a sandbox and running it in production.

When IT professionals start working with containers and K8s on a small scale, the learning curve steepens sharply on the way from "local setup" to "production rollout". I recommend dispelling a few misconceptions before you find yourself in that situation. It pays to think about this in advance.

Myth one: running Kubernetes in a development or test environment ensures that your operational needs are met

In reality: running Kubernetes in a development or test environment lets you cut corners and ignore the operational load that will appear once you roll out to production. Operational and security considerations are the main areas where running K8s in production differs from running it in a development or test environment. If a cluster falls over under lab conditions, there is nothing to worry about.

The difference between running in production and running in those environments is like the difference between agility and flexibility on the one hand, and reliability and performance on the other. And the latter takes real work.

Developers use containers to gain application flexibility during development and to test new applications and code. Operators, on the other hand, must provide the reliability, scalability, performance, and security that require a robust, time-tested, enterprise-grade platform.

Automation becomes more critical when using Kubernetes (and containers in general) in production. The deployment of production clusters needs to be automated to ensure repeatability and consistency. Automation also helps when recovering the system.

Versioning is also critical for production operations. Wherever possible, version everything, including service deployment configuration, policies, and infrastructure (through an infrastructure-as-code approach). This gives you reproducible environments. Be sure to version your container images as well: don't deploy with the "latest" tag, or you can easily end up running different versions in different environments.
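As a sketch of what that looks like in practice, a Deployment can pin its image to an explicit version instead of "latest" (the service name, registry, and tag below are made up for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments            # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          # Pin an explicit, versioned tag rather than "latest",
          # so every environment runs the exact same image.
          image: registry.example.com/payments:1.4.2
```

Keeping this manifest in version control alongside your other configuration is what makes the rollout repeatable.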

Myth two: you've already taken care of reliability and security

In reality: if you're only using Kubernetes in non-production environments, you probably haven't, at least not yet. But don't be discouraged; you'll get there. It's a matter of planning and designing the architecture before rolling out to production.

Obviously, performance, scalability, availability, and security requirements are higher in production environments. It is important to build these requirements into the architecture and to bake security and scaling management into your K8s deployment plans, Helm charts, and so on.

How can experimenting in a development or testing environment lead to false confidence?

In those environments it is normal for all network connections to be open. It may even be desirable to make sure that any service can reach any other service. Open connectivity is the default in Kubernetes. But in production this approach is rarely reasonable, because downtime and an enlarged attack surface pose a serious threat to the business.

When it comes to containers and microservices, it takes a lot of effort to create a highly reliable, highly available system. Orchestration helps us with this, but it’s not a magic wand. The same goes for security.

It takes a lot of work to protect Kubernetes and reduce the attack surface. It is very important to move to a least-privilege model and enforce network policies, leaving open only the communication channels your services actually need.
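As a minimal sketch, this usually means a default-deny ingress policy plus explicit allow rules (the app labels and port below are hypothetical):

```yaml
# Deny all ingress traffic to pods in this namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}          # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
---
# Then explicitly allow only the traffic a service needs,
# e.g. the frontend talking to the backend on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicy objects only take effect if the cluster's network plugin actually enforces them.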

Vulnerabilities in container images can quickly become a critical issue in production, while in development and test environments they may pose little or no risk.

Pay attention to which base images you use to build containers. Use trusted official images whenever possible, or build your own. In a local environment it may be tempting to pull unknown images, but this creates security risks. You probably don't want your Kubernetes cluster helping someone mine cryptocurrency.

It is recommended that you treat container security as a ten-layer system that encompasses the container stack (hosts and registries) as well as container lifecycle issues (such as API management). For details on these ten layers and their relationship to orchestration tools like Kubernetes, see the podcast with a Red Hat security specialist and the article "Ten Layers of Container Security".

Myth three: orchestration makes scaling a breeze

In reality: while professionals generally consider orchestrators like Kubernetes an essential tool for scaling containers, it is a misconception to think that orchestration makes scaling in production easy out of the box. There is far more data in production, and your monitoring will need to scale along with it. As volumes grow, everything changes. You cannot be sure that all K8s components will hold up until you roll them out to production. You won't automatically know whether Kubernetes is working properly and whether the API server and other control-plane components have scaled to meet your needs.

Again, things can look a little simpler in development and test environments, and over time you will have to work to meet demand and keep it that way. In local environments it is easy to overlook basics such as setting correct resource requests and limits. If you don't do this in production, one day everything may come crashing down.
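A minimal sketch of what those basics look like in a container spec (the pod name, image, and numbers are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api                # hypothetical pod name
spec:
  containers:
    - name: api
      image: registry.example.com/api:2.0.1   # illustrative image
      resources:
        requests:          # what the scheduler reserves for this container
          cpu: "250m"
          memory: "256Mi"
        limits:            # hard caps enforced at runtime
          cpu: "500m"
          memory: "512Mi"
```

Without requests, the scheduler packs pods blindly; without limits, one misbehaving container can starve its neighbors.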

Scaling a cluster up or down is a prime example of a task that looks simple in local experiments but is clearly more complicated in a production environment.

Production clusters are harder to scale than development or debug clusters. While scaling applications horizontally in Kubernetes is fairly easy, there are a few DevOps considerations to keep in mind, especially keeping services running while the infrastructure scales. It is important to make sure that core services, along with alerting for vulnerabilities and security breaches, are distributed across cluster nodes and use stateful volumes so that data is not lost when scaling down.
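One way to keep a critical service available while nodes are drained or the cluster scales down is a PodDisruptionBudget; a sketch, assuming a hypothetical `app: core-service` label on the workload:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: core-service-pdb
spec:
  minAvailable: 2          # keep at least two replicas up during voluntary disruptions
  selector:
    matchLabels:
      app: core-service    # hypothetical label of the critical service
```

With this in place, node drains during a scale-down wait rather than evicting the service below its availability floor.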

As with other tasks, it is all about planning and resources. You need to understand your scaling needs, plan, and most importantly, test. Your production environment must be able to handle much higher loads.

Myth four: Kubernetes works the same everywhere

In reality: running Kubernetes on a developer's laptop and on a production server are very different things. Many people believe that if K8s runs locally, it will run in any production environment. While Kubernetes does provide consistent environments, there can be major differences depending on the vendor.

To put a cluster into production, you need components that are usually absent from local environments: monitoring, logging, certificate and credential management. You need to account for these; they are among the main things that separate production environments from development and test environments.

However, all of the above applies not so much to Kubernetes as to containers and microservices in general, especially in hybrid cloud and multi-cloud environments.

Public-private Kubernetes deployments are trickier than they look on paper, because many of the required services, such as load balancers and firewalls, are proprietary. A container that works great on-premises may not start at all in a cloud with a different toolset, or may run unprotected. This is why service mesh technologies like Istio are getting so much attention: they make application services available wherever your container runs, so you don't have to think about the infrastructure, which is the main promise of containers.
