Configuring a multitenant Amazon EKS cluster

We use cloud-native technologies all the time and run our systems in containers on the Kubernetes platform. Kubernetes is great for orchestrating containerized workloads because of its flexibility: it runs on virtual machines as well as directly on hardware (bare metal). While Kubernetes was once suitable only for simple stateless workloads, today it can host databases, machine-learning training, and complex applications.

Since Amazon EKS became generally available in 2018, it has been the most common choice for running Kubernetes workloads on AWS. Hosting Kubernetes on your own infrastructure is expensive, complicated, and generally questionable from a business point of view; it is usually better to manage workloads and services with something like Amazon EKS.

Amazon EKS is well suited to multitenant services thanks to the orchestration layer Kubernetes provides: you can run different workloads on the same servers, which increases Amazon EC2 instance density. Difficulties can arise with isolating tenant logins and tenant data when running SaaS applications, so anyone using Amazon EKS for SaaS may find the following tips useful.


Note: at our company we deliberately stick to the English term "multitenant" rather than the translated equivalent proposed by Wikipedia, since we believe the translation does not fully capture the specifics of this architecture and its components.

Things to consider when implementing a multitenant Amazon EKS cluster

  • Each tenant (that is, each "renter" of the application) needs its own namespace
  • You will need to create the necessary isolation and relationships between them
  • Use AWS IAM and access control for operations on the cluster
  • AWS IAM accounts can manage access to workloads
  • Network access between namespaces and tenants is controlled by NetworkPolicy objects
  • PodSecurityPolicy objects help control access to the Amazon EC2 hosts and shared data.

Each tenant needs its own namespace

Any system is susceptible to vulnerabilities and failures, especially under heavy load. To limit the damage when a problem occurs, it is wise to divide the system along logical boundaries. In a multitenant EKS cluster, namespaces create the logical boundaries that separate tenants. Those boundaries are further reinforced by policies and security settings such as role-based access control (RBAC) and resource quotas. As a result, only resources in the same namespace interact with each other, and access from the outside is controlled by permissions.
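As a minimal sketch (the namespace name and label are illustrative), each tenant gets its own labeled namespace; the label comes in handy later for network policies:

```yaml
# Hypothetical namespace for tenant A. The tenant=a label is reused
# later by NetworkPolicy namespaceSelector rules.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    tenant: a
```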


The principle is similar to how resources are protected in separate AWS accounts: tenants in different namespaces do not overlap and do not affect each other.

Soft isolation through ResourceQuota, or how to keep a tenant from hogging resources

Namespaces are not used only for isolation; they also help distribute resources such as CPU, memory, and storage fairly. To prevent workloads in a particular namespace from monopolizing resources, use a ResourceQuota object, a soft-isolation mechanism. A ResourceQuota limits total resource consumption in a namespace, which maps 1:1 to a tenant (namespace/tenant). To make sure no single container monopolizes resources, also define a LimitRange object. Available resources can then be assigned through globally defined PriorityClass objects based on workload priority.
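A minimal sketch of soft isolation; the limits below are illustrative and should be tuned to each tenant's agreement:

```yaml
# Caps the total consumption of the tenant-a namespace (1:1 namespace/tenant).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
# Prevents any single container from monopolizing the quota.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-a-limits
  namespace: tenant-a
spec:
  limits:
  - type: Container
    default:            # applied when a container sets no limits
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # applied when a container sets no requests
      cpu: 250m
      memory: 256Mi
```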


ResourceQuota is also useful when you need to assign different priorities to regular and premium tenants in accordance with the agreement between the customer and the SaaS provider.

Hard isolation, or 1:1 mapping of instance groups to tenants – which is better?

Pods from multiple tenants can share the same Amazon EC2 instances, which in turn act as nodes in the same Amazon EKS cluster. An example of soft multitenant EKS, where several tenants share one pool of worker nodes, is shown in the picture below.

If you need more autonomy, for example, to completely separate groups of Amazon EC2 nodes, you can use the taint, toleration, and nodeSelector mechanisms. Consider, as an example, a set of nodes that must run workloads only from tenant A. First, bind a key/value taint to those nodes and assign a label:

kubectl taint nodes node1 tenant=A:NoSchedule 
kubectl label nodes node1 tenant=A

As a result, the node only schedules pods that tolerate the tenant=A key/value pair, and the label helps tenant A's workloads find their place. Taints and labels look similar, but they serve different purposes.


Additionally, you need to register a toleration matching our taint:

tolerations:
- key: "tenant"
  value: "A"
  effect: "NoSchedule"

Using the labels we created, connect a nodeSelector – this ensures that the workload runs exclusively on the dedicated nodes.

nodeSelector:
  tenant: A
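Putting the two fragments together, a pod spec for tenant A might look like the sketch below; the Deployment name and container image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant-a-api            # illustrative name
  namespace: tenant-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      nodeSelector:
        tenant: A               # run only on nodes labeled for tenant A
      tolerations:
      - key: "tenant"
        operator: "Equal"
        value: "A"
        effect: "NoSchedule"    # tolerate the taint set on those nodes
      containers:
      - name: api
        image: example.com/tenant-a/api:latest  # placeholder image
```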

Via tolerations and taints, the workloads of namespace/tenant A (or any other tenant) can be bound only to the nodes we need. As a result, workloads are distributed across separate node groups, often auto-scaled, and monitored from a single point. (This tactic also makes it very easy to budget per tenant – invoices for this expense item will be tagged.) An example of hard multitenant EKS, where each tenant has a dedicated pool of worker nodes, is shown in the picture below.

The hard-isolation reference scenario described above, splitting nodes per tenant, cannot be considered complete without additional security measures that prevent other tenants' workloads from abusing the node groups. How do we implement this? We use a ValidatingAdmissionWebhook backed by Open Policy Agent (OPA), a CNCF project.


A guide to running OPA on Amazon EKS can be found on the AWS blog.

Integrating AWS IAM into an Amazon EKS cluster for access control

As mentioned above, the hard-isolation scenario cannot be considered complete without additional security measures. The first thing to do for a multitenant cluster is to configure access control integrated with AWS IAM.

When working with RBAC in Kubernetes, you need to understand the difference between cluster roles and regular (namespaced) roles. On the one hand, the cluster role is simple and logical; Kubernetes beginners often reach for it, because granting the cluster-admin role is one of the simplest operations. With multitenancy, things get harder. For example, if an object needs access to a tenant's namespace, the appropriate namespaced Role is required. On the other hand, you may also define the same Role in each tenant's namespace. Both methods are viable; it all depends on your preferences.
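A sketch of a namespaced Role and RoleBinding for tenant A; the group name is an assumption and the verb list should match what your tenant operators actually need:

```yaml
# Grants common operations only inside the tenant-a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-dev
  namespace: tenant-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Binds the role to a hypothetical group of tenant A developers.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-dev-binding
  namespace: tenant-a
subjects:
- kind: Group
  name: tenant-a-devs         # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-dev
  apiGroup: rbac.authorization.k8s.io
```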


AWS IAM identities are very easy to map to EKS cluster roles. AWS provides clear, understandable documentation that covers this in detail.
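For illustration, AWS IAM roles are mapped to Kubernetes users and groups through the aws-auth ConfigMap in kube-system; the account ID, role name, and group below are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/TenantADeveloper  # placeholder ARN
      username: tenant-a-dev-user
      groups:
      - tenant-a-devs   # hypothetical group referenced by a namespaced RoleBinding
```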

Accessing AWS resources by integrating AWS IAM into EKS workloads

AWS IAM roles can be used not only to manage objects inside the Amazon EKS cluster, but also to give workloads access to AWS. Attaching AWS IAM roles to the underlying Amazon EC2 instances is not enough to ensure security. We need to be careful with node-level permissions, in particular those that would grant access to every workload of every tenant.

In a worker-node pool that runs multitenant workloads, you cannot tailor the IAM roles on the Amazon EC2 instances per tenant. Even in a single-tenant cluster, IAM roles on Amazon EC2 nodes cannot be made safe, since they would require a huge number of different permissions.


Starting September 2019, AWS IAM supports EKS workloads at the pod level (IAM Roles for Service Accounts).

How to implement it:

  1. Map the namespace's service accounts to an AWS IAM role;
  2. Associate the service accounts with the relevant pods.
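The two steps above can be sketched as follows, assuming an OIDC provider is already associated with the cluster; the role ARN and image are placeholders:

```yaml
# Step 1: a service account in the tenant's namespace annotated with the IAM role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant-a-app
  namespace: tenant-a
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/TenantAAppRole  # placeholder
---
# Step 2: a pod that runs under that service account and inherits its AWS permissions.
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-app
  namespace: tenant-a
spec:
  serviceAccountName: tenant-a-app
  containers:
  - name: app
    image: example.com/tenant-a/app:latest  # placeholder image
```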

As a result, you can not only control access to AWS resources from workloads in the EKS cluster, but also restrict each workload's access to only the resources belonging to its tenant.


As an example: Amazon S3 buckets often store persistent data, including tenant data. Assigning IAM permissions to each pod in the EKS cluster prevents unintentional access to another tenant's data in the same bucket; each pod gets access exclusively to its own tenant's data. You can read more about this here.

Managing communications with network policies

Network policies control ingress and egress permissions based on multiple criteria. In a multitenant EKS cluster, tenants map to namespaces. To limit data exchange between namespaces, and between pods in the same namespace, use the namespaceSelector and podSelector fields. The following assumes the namespace for tenant A has already been created and labeled:

kubectl label namespace/tenant-a tenant=a

The policy below ensures that only traffic from the same tenant's namespace reaches pods labeled app: api.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: tenant-a
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tenant: a
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          tenant: a

PodSecurityPolicy was originally designed to restrict pod access to the underlying EC2 instances in an Amazon EKS cluster, but it can also be used to limit shared resources in a multitenant EC2 instance cluster.


If you do not use a restrictive PodSecurityPolicy, the effective policy is equivalent to this fully permissive one:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'

Looking at the volumes and host* permissions above is enough to see which security problems matter most. If pods from multiple namespaces/tenants share the resources of the same underlying Amazon EC2 instances, you must block pod access to the host. Otherwise, there is a high risk of unintentionally disclosing data between tenants if that data is stored or cached in shared folders.
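As a counterpoint to the fully permissive policy, here is a restrictive sketch that blocks host access; the allowed volume types and UID/GID ranges are assumptions to adjust to your workloads:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false              # no access to the node's network
  hostIPC: false
  hostPID: false
  volumes:                        # no hostPath: blocks shared host folders
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
```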

Conclusion

Amazon EKS offers many useful features for managing persistent data, and more and more multitenant services are being hosted on it. We, the OpsGuru team, consider it important to partition resources properly and keep an eye on security. This is the only way to ensure the correct operation of the entire system.
