Using Qsan Storage in a Kubernetes Container Management Environment

There are various technologies for consolidating physical server resources to use them more efficiently. The best-known option is virtualization, and it is in this area that data storage systems (DSS) are a key element, since they make it easy to build high availability (HA) clusters. However, besides virtualization, other methods for increasing efficiency are available, one of which is the use of containers.

Let's give a brief overview of the differences between virtual machines and containers.

Virtual machines:

  • Virtual machines are fully isolated from each other and from the hypervisor by means of the host hardware;

  • The guest operating system does not have to match the host operating system;

  • The size of a VM usually starts from several GB;

  • Resource reservations are required for each VM;

  • VM startup is usually measured in minutes.

Containers:

  • Containers are isolated from each other solely by software mechanisms of the host operating system;

  • Containers share the host OS kernel, so the container's operating system must be of the same type as the host's;

  • The container size can be quite small (literally tens of MB);

  • The container itself requires minimal resources to operate;

  • Container startup is usually measured in seconds.

Given these advantages and disadvantages, containers are most often used to run applications and services that do not require huge resources. This allows a greater density of services within a single server, thereby increasing its utilization.

One of the most widely used container management tools is Kubernetes, where, in addition to orchestrating the containers themselves, one also has to manage the host resources allocated to them. One of these resources is storage space, provided in the form of storage volumes.

Of course, disk resources can be managed manually. But being able to manage everything from one window is much more convenient. This is achieved through the standard CSI (Container Storage Interface) protocol, which Qsan XCubeSAN storage systems support. Below we describe how to achieve dynamic allocation of storage space from the host side.

A few preliminary notes:

  • Only thick pools are supported;

  • Up to 32 CSI volumes can be created within a single host group; an attempt to create a 33rd volume will result in an error message;

  • For MPIO to work correctly with a dual-controller storage system, the host must, of course, be connected to both controllers, and the iSCSI ports of both controllers must be added to a single host group.

In preparing this article, illustrations from the storage system manufacturer's materials were used. Those interested can view them in the original materials.

The first step is to enable the timestamp option on the storage side, which records the access time of each specific volume (System -> Settings -> Enable timestamp). This option is disabled by default, as it has a negative impact on overall performance.

Next, you need to create the required thick pool (let it be Pool_01) and the future host group (let it be test1). Then we move to the host, where the CSI driver needs to be installed:

git clone https://github.com/QsanJohnson/csi-qsan

In the csi-qsan/deploy folder, you need to edit the qsan-auth.yaml file: specify the IP addresses of the storage system's management interfaces and the administrator password.
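As a rough illustration only (the actual keys are defined by the sample file shipped with the driver, so follow that file and merely substitute your own values), the file describes each array's management address and credentials, roughly like this:

qsanAuths:                       # hypothetical layout; follow the bundled sample file
  - ip: 192.168.1.100            # management interface IP of the storage system
    username: admin              # storage administrator account
    password: <admin password>   # administrator password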

Then run the installation script csi-qsan/deploy/install.sh.
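Assuming the repository was cloned into the current directory, this amounts to:

cd csi-qsan/deploy
./install.sh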

You can check the installation status with the command:

kubectl get pod -n qsan

Next, edit the sc.yaml file in the csi-qsan/deploy/example folder. You need to set the StorageClass name (we will need it later), specify the IPs of the management and iSCSI ports, the pool name (in our case, Pool_01), and the iSCSI target name. The iSCSI target name can be found on the tab of the corresponding host group.
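A sketch of what sc.yaml might look like after editing. The provisioner string and the parameter keys below are our assumptions, not the driver's documented names, so keep whatever the bundled example uses and only substitute your own values:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-test                     # StorageClass name, referenced later by the PVC
provisioner: csi.qsan.com           # assumed driver name; keep the value from the bundled example
parameters:
  mgmtIp: 192.168.1.100             # hypothetical key: management port IP
  iscsiIp: 192.168.2.100            # hypothetical key: iSCSI data port IP
  pool: Pool_01                     # hypothetical key: the thick pool created earlier
  iscsiTarget: "<target name from the host group tab>"   # hypothetical key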

Now we apply the configuration and check the status:

kubectl apply -f sc.yaml

kubectl get sc

The next step is to edit the pvc.yaml file, where you need to set the PersistentVolumeClaim name, the required volume size (let it be 100GB), and the StorageClass name (sc-test, which we specified earlier).
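A minimal sketch of such a pvc.yaml, assuming the claim name pvc-test (the name itself is our placeholder; note that Kubernetes expresses the size as 100Gi):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test                # hypothetical claim name, referenced later by the pod
spec:
  accessModes:
    - ReadWriteOnce             # the block volume is attached to one node at a time
  resources:
    requests:
      storage: 100Gi            # the 100GB volume requested from the pool
  storageClassName: sc-test     # the StorageClass defined earlier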

In the same way, we apply the configuration and check the status:

kubectl apply -f pvc.yaml

kubectl get pvc

And finally, edit the pod.yaml file, where you need to set the PersistentVolumeClaim name from the previous step.
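A minimal pod sketch, assuming the claim name pvc-test from our earlier example (the pod name, image, and mount path are arbitrary placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pod-test                    # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx                  # any test image will do
      volumeMounts:
        - name: data
          mountPath: /data          # where the Qsan volume appears inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-test         # the claim created in the previous step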

Once again, we apply the configuration and check the status:

kubectl apply -f pod.yaml

kubectl get pod

As a result of our actions, a 100GB volume should be created in the Pool_01 pool and presented to the specified host group test1. We verify on the storage system side that this is indeed the case.
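On the Kubernetes side, the dynamically provisioned volume and its binding can also be inspected with standard commands (pvc-test being the claim name from our sketch):

kubectl get pv
kubectl describe pvc pvc-test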

Now it remains to adjust multipath.conf to enable MPIO for these volumes. To do this, add the following settings to the file:

defaults {
	path_grouping_policy	multibus
	user_friendly_names	no
	find_multipaths		no
	getuid_callout		"/lib/udev/scsi_id --replace-whitespace --whitelisted --device=/dev/%n"
}
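After saving the changes, reload the multipath service so they take effect, for example with systemctl reload multipathd (the exact command depends on your distribution). Note also that newer versions of multipath-tools have removed the getuid_callout option in favor of built-in device identification; on such systems this line should be omitted.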

This article presented the basic steps for establishing communication between a Kubernetes server and a Qsan XCubeSAN storage system. For more detailed information, it is highly recommended to read the user manuals provided by both vendors.
