How managed Kubernetes and managed OpenShift work in IBM Cloud. Part 1 – Architecture and Security

Development can be compared to a painting, with the lead developer as the artist. An elegant microservice application can be created to rival the works of the best modernist architects. But putting the process on an assembly line while keeping the freedom to choose is an art in itself. In the first article of this series, we want to talk about how the IBM Kubernetes Service and IBM Managed OpenShift cloud services were created and how they run, and how you can deploy and test your own Kubernetes cluster in the IBM cloud for free.

Overview

The IBM cloud has been gaining functionality over the past ten years. It started with building shared infrastructure to serve large corporations, then virtual and physical machines in SoftLayer data centers, followed by five years of building a PaaS (based on Cloud Foundry runtimes) and the evolution of a huge number of services. The Moscow development team also took part in creating some of those services. But today we are talking not about the services, but about what managed Kubernetes and managed OpenShift are and how they work in the IBM cloud. Many details cannot be shared, since the project is internal, but we can lift the veil a little.

What is Kubernetes, and how do managed Kubernetes / OpenShift differ from a local installation

Kubernetes was initially positioned as an open-source platform for managing containerized applications and services. The main tasks of Kubernetes are:

  • Service discovery and load balancing
  • Storage orchestration
  • Automated rollouts and rollbacks
  • Automatic bin packing
  • Self-healing
  • Secret and configuration management

In general, Kubernetes does an excellent job with all of these tasks. Beyond that, Kubernetes has also come to be used as a database for storing application configuration and as an API layer for controlling your own components (especially relevant in the context of operator development); a minimal sketch of such a custom resource is shown below.
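To illustrate the idea of Kubernetes as an API for your own components, here is a minimal sketch of a CustomResourceDefinition. The group, kind and fields are invented purely for the example:

# A hypothetical custom resource type: once registered, Kubernetes stores
# and serves "AppConfig" objects through its API like any built-in resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: appconfigs.example.mycompany.com
spec:
  group: example.mycompany.com
  scope: Namespaced
  names:
    plural: appconfigs
    singular: appconfig
    kind: AppConfig
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                logLevel:
                  type: string

An operator would then watch objects of this kind and reconcile the actual state of the application against them.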

One of the advantages of Kubernetes is that you can run containerized applications both on your own computing resources and in the cloud. In the case of cloud resources, many cloud providers let you use their computing resources to run applications and take over the full administration of the clusters:

  • cluster deployment
  • configuring network availability and load distribution
  • installing updates and fix packs
  • configuring the cluster for greater fault tolerance and security (more on this later in the article)

If you work with managed Kubernetes in any cloud, you are of course limited in a number of ways. For example, usually only a few versions of Kubernetes are supported, and it is unlikely that you will be able to run versions that dropped out of support long ago. The main advantage, undoubtedly, is that it is not your team that administers the clusters, which frees up time for developing applications. Of course, managed Kubernetes and managed OpenShift cannot be used in every organization or for every type of application, but there is a wide range of tasks that are a great fit for computing in the clouds.

Cloud Architecture

Inside the company, the IBM Managed Kubernetes and IBM Managed OpenShift projects are known as the Armada project. The project began with one data center, but it is now available in 60 cloud data centers across 6 regions. To describe how the cloud scales, I will use two terms: hubs and spokes. The entire Armada project is built on Kubernetes, which means its clusters are managed by a control plane that itself runs on Kubernetes. As soon as the control plane runs out of resources to manage the required set of clusters, it deploys additional spokes. Each spoke is then responsible for managing clusters in a specific region.

The control plane consists of more than 1,500 deployments and spans 60 Kubernetes clusters. All of this is needed to manage more than 15,000 customer clusters (not counting the free test clusters, which share worker nodes).

To build IKS and Managed OpenShift, the team followed an open-source model internally. Most IBM employees have access to most of the Armada repositories and can open their own PRs to integrate their services. A large number of CI/CD tools were also developed as part of the work on the service and were brought together in the Razee project. In the summer of 2019, IBM released everything the Razee project had produced as open source.

In general, the architecture for IKS and Managed OpenShift is as follows:

Armada architecture

When you work with the IBM Cloud CLI and request the creation of a cluster, your requests actually go to the Armada API; the control plane then determines the availability of spokes and initiates the creation of the required number of workers in the regions you specify. The entire infrastructure for the workers is provided by IBM Cloud Infrastructure (aka SoftLayer) – in fact, the same virtual instances and bare metal hosts that are available in the "Compute" section of the cloud services catalog. After a while you receive an authorization token and can start deploying your applications.
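For reference, a typical interaction looks roughly like this; the cluster name, zone and machine flavor are placeholders, and the exact flags depend on the version of your CLI plugin:

# Install the Kubernetes Service plugin for the IBM Cloud CLI (one-time step)
ibmcloud plugin install kubernetes-service

# Request a cluster; the Armada API schedules worker creation in the chosen zone
ibmcloud ks cluster create classic --name my-cluster --zone ams03 --flavor b3c.4x16 --workers 2

# Once provisioning finishes, fetch the kubeconfig and talk to the cluster
ibmcloud ks cluster config --cluster my-cluster
kubectl get nodes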

Since OpenShift and Kubernetes differ in their capabilities and development roadmap, the underlying technology stack is correspondingly different:

Armada software stack

How is security ensured?

You can talk about the Armada project for a very long time, both from a technical point of view and from a marketing one. But first of all, when choosing a cloud provider for managed Kubernetes, everyone asks the same question: how does the provider guarantee and ensure the security and fault tolerance of my applications? It is impossible to evaluate the performance, convenience or level of support without an answer to this question. As a development manager, I draw a threat map during the development of any major project. You need to consider all the possible attack vectors and secure your infrastructure, applications and data. To talk about the security of a Kubernetes cluster, the following points need to be covered:

  • security of the infrastructure itself and data centers
  • access to Kubernetes API and etcd
  • security of the master and worker nodes
  • network security
  • persistent storage security
  • monitoring and logging
  • container security and container images

Now, first things first:

Security of the infrastructure itself and data centers

No matter how much we would like to disengage completely from the hardware and from maintaining the physical internals of IT systems, in practice we need to be sure that the service provider fully covers our back, and can confirm it with documentation: industrial and industry certifications and, where necessary, audit reports. The IBM team took this aspect with all possible seriousness, and all the necessary information is collected and presented in one place (https://www.ibm.com/cloud/compliance).

Access to the Kubernetes API and etcd

To access the Kubernetes API and the data in etcd, a request has to pass three levels of checks: authentication tokens, cluster authorization rules (RBAC) and the admission controllers (see the diagram and the RBAC sketch below).

Access Kubernetes API and etcd
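Of these levels, RBAC is the one you configure yourself inside the cluster. A minimal sketch of granting read-only access to pods in one namespace; the namespace, role and user names here are made up:

# Role: read-only access to pods in the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: grant the role to a user known to the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io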

Since the masters are configured centrally through the spokes, you cannot change the master configuration; the masters are not even located in your cloud account and are not visible in the list of your devices (unlike the workers). All configuration changes can only be made within certain limits. On the one hand this is a restriction, but because of it attackers cannot get access to your masters either; in addition, the human error factor is removed, there is no risk of running incompatible versions of Kubernetes components, and the whole cluster administration process is simplified. In general, we can say that IBM is responsible for the fault tolerance and proper configuration of the Kubernetes masters. If your project has strict requirements about using specific versions of components, then in your place I would not look at managed Kubernetes at all and would run my own installation instead.

Security of the master and worker nodes

To secure the worker and master nodes, encrypted VPN tunnels are used between the compute nodes, and the user has the option to order workers with encrypted hard drives. We also use AppArmor to restrict application access to resources at the operating system level. AppArmor is a Linux kernel security module for configuring resource access per application; a small example of attaching it to a pod is shown below.
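Roughly, this is how an AppArmor profile is attached to a container in a pod manifest; runtime/default is the container runtime's default profile, while custom profiles would first have to be loaded on the worker:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
  annotations:
    # Apply the runtime's default AppArmor profile to the "app" container
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  containers:
    - name: app
      image: nginx:1.17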

When creating a cluster, after you choose a suitable configuration, virtual or bare metal servers are provisioned for you and the components needed for your workers are installed on them. The user has access to the worker's OS, but only when connected through the management VPN, which can be useful for troubleshooting as well as for updating the worker's OS itself. There is no public ssh access over IP; to get a shell inside a container you have to use kubectl exec, and this connection goes through the OpenVPN tunnel.
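For example, instead of ssh-ing into a node, you attach to a running container (my-pod here is a placeholder for your pod's name):

kubectl exec -it my-pod -- /bin/sh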

Secure masters and workers

Network security

In managed Kubernetes and OpenShift, the Calico network plugin is used as the network virtualization solution. Network security is achieved through pre-installed Kubernetes and Calico network policies. Your workers can be in the same VLAN as the rest of your infrastructure in the same data center – ordinary virtual machines and bare metal servers, as well as network appliances and storage systems – and thanks to Calico, systems located outside your cluster can communicate with your deployments over the private network.

When a cluster with a public VLAN is created, the control plane creates a HostEndpoint resource with the label ibm.role: worker_public for each worker and its external network interfaces. To protect the external network interfaces, the control plane applies the Calico default policies to all endpoints with the label ibm.role: worker_public.

The Calico default policies allow all outgoing traffic and allow incoming traffic from the Internet only to certain components (Kubernetes NodePort, LoadBalancer and Ingress services). All other traffic is blocked. The default policies do not apply to traffic within the cluster (pod-to-pod interaction).
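Traffic between pods is therefore something you restrict yourself, for example with a standard Kubernetes NetworkPolicy. A minimal sketch, assuming made-up namespace and labels, that only lets frontend pods reach the backend on port 8080:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080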

Persistent storage security

Security at the persistent storage level relies on encryption and key-based authorization. The following options are currently available for IKS (a sample volume claim is sketched after the list):

  • Classic NFS
  • Classic block storage (iSCSI)
  • VPC block storage
  • IBM Cloud Object Storage
  • Portworx-based SDS (uses the local drives of your own workers)
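For illustration, requesting a classic file storage volume looks roughly like this; ibmc-file-gold is one of the pre-installed IKS storage classes, and kubectl get storageclass shows the ones available in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany            # NFS-backed file storage allows shared access
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-file-gold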

Monitoring and Logging

You can use IBM Cloud Monitoring or the Sysdig-based solution to monitor IKS, and naturally Prometheus is an option as well. Managed OpenShift uses its built-in monitoring tools.

With the logs themselves, things are more complicated. Logs have to be collected from completely different levels, and we use a large number of our own and open-source solutions for this. We collect and store the following logs:

  • Logs of the container itself (STDOUT, STDERR)
  • Application logs (if the path to them is specified)
  • Logs from the worker nodes
  • Kubernetes API Logs
  • Ingress Logs
  • Logs of all Kubernetes system components (kube-system namespace)

For managing the logs, a separate service is available: IBM Cloud Log Analysis with LogDNA. It lets you view all the logs in a common console and analyze them retrospectively or in real time, depending on the pricing plan. You can create an instance separately in each of the 6 regions and then use it to collect the logs of your Kubernetes cluster and of the other infrastructure on your account. To connect this service to your cluster, you deploy a pod with the LogDNA agent by following simple instructions (a simplified sketch is shown below), and all the logs are sent to the LogDNA store; depending on the plan you choose, they remain available for further analysis for a certain period.
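A rough sketch of what that agent looks like, assuming the ingestion key has already been stored in a secret as described in the service's connection instructions; the real manifest provided by the service contains more settings:

# Simplified LogDNA agent DaemonSet: one agent pod per worker node,
# reading container logs from the host and shipping them to LogDNA.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logdna-agent
  namespace: ibm-observe
spec:
  selector:
    matchLabels:
      app: logdna-agent
  template:
    metadata:
      labels:
        app: logdna-agent
    spec:
      containers:
        - name: logdna-agent
          image: logdna/logdna-agent:latest
          env:
            - name: LOGDNA_AGENT_KEY
              valueFrom:
                secretKeyRef:
                  name: logdna-agent-key   # created from your ingestion key
                  key: logdna-agent-key
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: containers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: containers
          hostPath:
            path: /var/lib/docker/containers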

To analyze the activities inside your cloud services, including logins and much more, Activity Tracker with LogDNA is available – it allows you to track various actions in your services.

As an additional monitoring tool, you can set up the IBM Cloud Monitoring with Sysdig service for your cluster – it is available in all 6 regions and lets you watch many metrics of your cluster in real time and use the built-in integrations with many common environments running in containers. In addition, you can configure alerting on events, with notifications via Slack, email, PagerDuty, webhooks, and so on.

Container and container image security

The company has its own view on what DevOps includes; if you are interested, you can read more about it in the IBM Garage Method. An understanding of DevSecOps has also formed in many companies and is applied in practice. To understand what stages a Docker image goes through to become a running container, take a look at the following figure.

secure image

In the IBM cloud you can use a Docker registry as a service. When an image is pushed to this registry, it is signed. On the worker node side, an add-on called Vulnerability Advisor is installed, which is responsible for checking integrity and compliance with security policies. Using these policies you can, for example, restrict the set of registries from which Docker images may be pulled:

apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ClusterImagePolicy
metadata:
  name: ibmcloud-default-cluster-image-policy
spec:
  repositories:
    # CoreOS Container Registry
    - name: "quay.io/*"
      policy:

    # Amazon Elastic Container Registry
    - name: "*.amazonaws.com/*"
      policy:

    # IBM Container Registry
    - name: "registry*.bluemix.net/*"
      policy:

Vulnerability Advisor works with running containers, periodically scanning them and automatically detecting the installed packages. Docker images with potential vulnerabilities are marked as unsafe to use, and detailed information about the discovered vulnerabilities is provided.

Security Advisor is the center for managing all the vulnerabilities of your application. It lets you triage issues and fix them. It works both with the results from Vulnerability Advisor and with the cluster itself, warning you in good time when a particular component needs to be updated.

Example of registering and deploying a managed Kubernetes cluster

You can deploy and test your managed Kubernetes cluster in the IBM cloud absolutely free:

  • Register in the IBM cloud: https://ibm.biz/rucloud (you have to confirm your email address; you do not need to add credit card data at this stage)
  • To use the IKS service, you can upgrade your account to a paid one (by clicking Upgrade and entering your bank card details – you will receive $200 of credit). Alternatively, specifically for Habr readers, you can get a coupon that switches your account to "trial" mode – this lets you deploy a minimal cluster for free for 30 days. After this period, the cluster can be recreated and testing can continue. You can request a coupon at https://ibm.biz/cloudcoupon. Coupons are confirmed within one business day.
  • You can create a free cluster (one worker, 2 vCPUs, 4 GB RAM) from the services catalog – https://cloud.ibm.com/kubernetes/catalog/cluster/create
  • It takes 5-7 minutes to create a cluster, after which the IKS cluster is available to you; a few CLI commands for checking it are sketched below.
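If you prefer the command line, the same checks can be done from the IBM Cloud CLI; mycluster below stands for whatever name you gave the cluster:

# List your clusters and wait until the state becomes "normal"
ibmcloud ks cluster ls

# Fetch the kubeconfig for the new cluster and verify access
ibmcloud ks cluster config --cluster mycluster
kubectl get nodes

# Deploy something small to confirm the cluster schedules workloads
kubectl create deployment hello --image=nginx
kubectl get pods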

Conclusion

I hope that after reading this article the reader has fewer questions about how managed Kubernetes and managed OpenShift work. This article can also be used as a guide when implementing your own Kubernetes. All the practices used by IBM are applicable to private clouds and, with some effort, can be implemented in any data center.

Resources

IKS slack
https://ibm-container-service.slack.com/
https://www.ibm.com/cloud/blog/announcements/ibm-cloud-activity-tracker-with-logdna-for-ibm-cloud-object-storage
https://www.ibm.com/cloud/blog/announcements/introducing-the-portworx-software-defined-storage-solution
https://cloud.ibm.com/docs/services/Monitoring-with-Sysdig?topic=Sysdig-getting-started
