Functional testing of Kubernetes Operators with Kubebuilder

A little about Kubebuilder

Kubebuilder is built on top of controller-runtime and client-go, two of the most powerful libraries from the Kubernetes project itself.

Kubebuilder automatically generates a lot of the boilerplate code, CRD configurations, and everything else that is needed for a complete operator. This tool also includes a testing framework that allows you not only to write controllers, but also to test them in an isolated environment. We'll talk about testing a little later, but for now let's set up the environment and launch Kubebuilder.

First, you'll need to install a few dependencies. Before moving further, you will need to install Go, because Kubebuilder is a Go tool.

Kubebuilder itself can be downloaded from the official repository with a single command:

curl -L https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.4.0/kubebuilder_linux_amd64 -o kubebuilder
chmod +x kubebuilder
sudo mv kubebuilder /usr/local/bin/

If you're on macOS:

brew install kubebuilder

Checking the installation:

kubebuilder version

If everything went well, you will see the Kubebuilder version and that all the necessary components are working.

Now let's create a new operator project. Kubebuilder generates the skeleton of the operator from the command line. First, initialize the project:

kubebuilder init --domain my.domain --repo github.com/your-username/my-operator

This command creates a minimal project structure with the main files for the Go module and its dependencies. The --domain parameter specifies the domain name for your CRDs. For example, if you are developing an operator for your company, you can specify --domain yourcompany.com.
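The generated layout looks roughly like this (exact contents vary between Kubebuilder versions):

my-operator/
├── Makefile
├── PROJECT
├── go.mod
├── main.go
└── config/
    ├── default/
    ├── manager/
    └── rbac/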

Next, create an API and a controller for the operator:

kubebuilder create api --group batch --version v1 --kind Job

This command generates the necessary files for the Kubernetes API and the controller. The --group parameter points to a resource group (batch in this case), --version to the API version, and --kind to the kind of resource the operator works with (Job here).

After this you will see the extended project structure, with an API file at api/v1/job_types.go, where the CRD structure is defined, and a controller file at controllers/job_controller.go, where the operator's logic lives.
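For reference, the scaffolded types file looks roughly like this; the real file also contains additional kubebuilder markers and generated DeepCopy code, and the spec fields below are placeholders:

// api/v1/job_types.go (simplified sketch)
type JobSpec struct {
    // Desired-state fields for your resource go here.
}

type JobStatus struct {
    // Observed-state fields go here.
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

type Job struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   JobSpec   `json:"spec,omitempty"`
    Status JobStatus `json:"status,omitempty"`
}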

Now let's look at how to write the operator's logic, using the Job controller as an example. In job_controller.go you will find the Reconcile method, which is responsible for how the operator reacts to changes in resources. This is where we write what should happen when Kubernetes changes a Job object.

An example of the simplest logic:

func (r *JobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := log.FromContext(ctx)

    // Fetch the Job resource from the cluster
    var job batchv1.Job
    if err := r.Get(ctx, req.NamespacedName, &job); err != nil {
        log.Error(err, "unable to fetch Job")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Resource-handling logic goes here, for example:
    // check whether a Pod exists for this Job and create one if not.
    
    return ctrl.Result{}, nil
}

Here we use the standard controller-runtime client to fetch the Job object from the cluster. After that, you can add whatever logic you want the operator to implement.
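As a hedged illustration of what such logic could look like, here is a sketch that creates a Pod for the Job if one does not exist yet; the pod naming convention is made up for this example, and apierrors refers to k8s.io/apimachinery/pkg/api/errors. It would replace the placeholder comment inside Reconcile shown above:

    // Sketch: ensure a worker Pod exists for this Job (illustrative only).
    podName := job.Name + "-worker" // hypothetical naming convention
    var pod corev1.Pod
    err := r.Get(ctx, client.ObjectKey{Name: podName, Namespace: job.Namespace}, &pod)
    if apierrors.IsNotFound(err) {
        pod = corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: job.Namespace},
            Spec:       job.Spec.Template.Spec,
        }
        if err := r.Create(ctx, &pod); err != nil {
            return ctrl.Result{}, err
        }
    } else if err != nil {
        return ctrl.Result{}, err
    }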

But we are here for testing. Let's get started.

Setting up the test environment with EnvTest

EnvTest is a lightweight environment for testing Kubernetes controllers, which allows you to run tests without deploying a full-fledged cluster.

First of all, we need to prepare a test environment. For this we will use the controller-runtime/pkg/envtest package, which already ships with Kubebuilder; note that envtest also needs local kube-apiserver and etcd binaries, which the setup-envtest helper from controller-runtime can download for you. First, add the package to the project's dependencies:

go get sigs.k8s.io/controller-runtime/pkg/envtest

Then create a file main_test.go, where our test code will live:

package main_test

import (
    "os"
    "testing"

    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/envtest"
)

var k8sClient client.Client
var testEnv *envtest.Environment

func TestMain(m *testing.M) {
    // Point envtest at the generated CRD manifests.
    testEnv = &envtest.Environment{
        CRDDirectoryPaths: []string{"../config/crd/bases"},
    }

    // Start a local kube-apiserver and etcd.
    cfg, err := testEnv.Start()
    if err != nil {
        panic(err)
    }

    // Create a client for interacting with the test control plane.
    k8sClient, err = client.New(cfg, client.Options{})
    if err != nil {
        panic(err)
    }

    code := m.Run()
    _ = testEnv.Stop()
    os.Exit(code)
}

What's going on here:

  • envtest.Environment configures a minimal Kubernetes API server and etcd for testing CRDs and controllers.

  • client.New creates a client to interact with objects in the cluster.

This code starts the test environment and initializes the API server. Now you can start writing tests.
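Note that envtest only runs the API server and etcd; the controller itself does not start automatically. If you want your Reconcile logic to actually run during the tests, you can also start the controller manager against the same configuration. A minimal sketch, assuming the JobReconciler from your controllers package is imported, ctrl is sigs.k8s.io/controller-runtime, and scheme is k8s.io/client-go/kubernetes/scheme; this would go into TestMain right after testEnv.Start():

    // Sketch: start the controller manager so reconcilers run during the tests.
    mgr, err := ctrl.NewManager(cfg, ctrl.Options{Scheme: scheme.Scheme})
    if err != nil {
        panic(err)
    }
    if err := (&controllers.JobReconciler{Client: mgr.GetClient(), Scheme: mgr.GetScheme()}).SetupWithManager(mgr); err != nil {
        panic(err)
    }
    go func() {
        // Blocks until the context is cancelled.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            panic(err)
        }
    }()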

CRD testing

Let's start with a simple test that checks if our CRD is generated correctly.

Let's say we are working with the Job resource. Here is sample code that creates the object and checks that it is created correctly in the cluster:

func TestCreateCRD(t *testing.T) {
    g := gomega.NewWithT(t)

    // Create the Job object
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{
            Name: "test-job",
            Namespace: "default",
        },
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  "busybox",
                            Image: "busybox",
                            Command: []string{"sleep", "10"},
                        },
                    },
                    RestartPolicy: corev1.RestartPolicyNever,
                },
            },
        },
    }

    // Create the object in the test environment
    err := k8sClient.Create(context.Background(), job)
    g.Expect(err).NotTo(gomega.HaveOccurred())

    // Verify the object was actually created
    fetchedJob := &batchv1.Job{}
    err = k8sClient.Get(context.Background(), client.ObjectKey{Name: "test-job", Namespace: "default"}, fetchedJob)
    g.Expect(err).NotTo(gomega.HaveOccurred())
    g.Expect(fetchedJob.Name).To(gomega.Equal("test-job"))
}

This test verifies that when a Job object is created, our controller processes it correctly and the object appears in the cluster. Using gomega as the assertion framework, you can make sure no errors occur and that the object is actually created.

Interaction with other objects in the cluster

Now let's complicate the task and check how the operator interacts with other Kubernetes objects. For example, the operator should automatically create a ConfigMap when a specific CRD is created. Here's how to test this logic:

func TestConfigMapCreation(t *testing.T) {
    g := gomega.NewWithT(t)

    // Create the Job resource
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{
            Name: "job-with-configmap",
            Namespace: "default",
        },
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  "nginx",
                            Image: "nginx",
                        },
                    },
                    RestartPolicy: corev1.RestartPolicyNever,
                },
            },
        },
    }

    err := k8sClient.Create(context.Background(), job)
    g.Expect(err).NotTo(gomega.HaveOccurred())

    // Verify the ConfigMap was created
    configMap := &corev1.ConfigMap{}
    err = k8sClient.Get(context.Background(), client.ObjectKey{Name: "job-configmap", Namespace: "default"}, configMap)
    g.Expect(err).NotTo(gomega.HaveOccurred())
    g.Expect(configMap.Data["config"]).To(gomega.Equal("some-config-data"))
}

Here we check that when a Job is created, our controller automatically creates a ConfigMap containing the required data.
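One caveat: the controller reconciles asynchronously, so checking for the ConfigMap immediately after creating the Job can be flaky. A common pattern, sketched here with the same hypothetical ConfigMap name, is to poll with gomega's Eventually until the object shows up:

    // Sketch: wait for the controller to create the ConfigMap instead of checking once.
    g.Eventually(func() error {
        return k8sClient.Get(context.Background(),
            client.ObjectKey{Name: "job-configmap", Namespace: "default"},
            &corev1.ConfigMap{})
    }, "10s", "250ms").Should(gomega.Succeed())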

Handling events and reacting to changes

The last important point is to check how the operator reacts to changes in resources and to events. For example, if a Job fails, the operator must emit a notification or restart the Pod.

An example of a test that checks the response to an event:

func TestJobFailureEvent(t *testing.T) {
    g := gomega.NewWithT(t)

    // Create a Job whose pod will fail
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{
            Name: "failing-job",
            Namespace: "default",
        },
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  "busybox",
                            Image: "busybox",
                            Command: []string{"false"}, // Под завершится с ошибкой
                        },
                    },
                    RestartPolicy: corev1.RestartPolicyNever,
                },
            },
        },
    }

    err := k8sClient.Create(context.Background(), job)
    g.Expect(err).NotTo(gomega.HaveOccurred())

    // Verify that the operator reacted to the event and took the correct action;
    // for example, the operator records a failure event
    events := &corev1.EventList{}
    err = k8sClient.List(context.Background(), events, client.InNamespace("default"))
    g.Expect(err).NotTo(gomega.HaveOccurred())
    g.Expect(events.Items).NotTo(gomega.BeEmpty())
    g.Expect(events.Items[0].Reason).To(gomega.Equal("FailedJob"))
}

This test simulates a failure in the Job and verifies that the operator responds correctly by recording a failure event.
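For this test to observe such an event, the controller has to emit it. In a Kubebuilder project this is usually done with an EventRecorder; here is a hedged sketch of what that might look like inside Reconcile, where the Recorder field (wired up via mgr.GetEventRecorderFor) and the "FailedJob" reason string are assumptions of this example:

    // Sketch: emit a warning event once the Job reports failed pods (illustrative reason string).
    if job.Status.Failed > 0 {
        r.Recorder.Event(&job, corev1.EventTypeWarning, "FailedJob",
            "job has failed pods; see pod logs for details")
    }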

Testing resource updates

The operator must also correctly handle changes to an already created Job: when the Job's configuration changes, our operator must update the accompanying ConfigMap. Here's how to write a test that checks this:

func TestUpdateJobConfig(t *testing.T) {
    g := gomega.NewWithT(t)

    // Create the initial Job object
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "update-job",
            Namespace: "default",
        },
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  "nginx",
                            Image: "nginx",
                        },
                    },
                    RestartPolicy: corev1.RestartPolicyNever,
                },
            },
        },
    }

    err := k8sClient.Create(context.Background(), job)
    g.Expect(err).NotTo(gomega.HaveOccurred())

    // Modify the Job
    job.Spec.Template.Spec.Containers[0].Image = "nginx:latest"
    err = k8sClient.Update(context.Background(), job)
    g.Expect(err).NotTo(gomega.HaveOccurred())

    // Verify the change was accepted and the operator updated the ConfigMap
    configMap := &corev1.ConfigMap{}
    err = k8sClient.Get(context.Background(), client.ObjectKey{Name: "update-job-configmap", Namespace: "default"}, configMap)
    g.Expect(err).NotTo(gomega.HaveOccurred())
    g.Expect(configMap.Data["config"]).To(gomega.Equal("updated-config-data"))
}

The operator reacts to an update of an existing resource and performs the corresponding action, such as updating the ConfigMap.
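On the controller side, one convenient way to keep such a dependent object in sync is controllerutil.CreateOrUpdate from sigs.k8s.io/controller-runtime/pkg/controller/controllerutil, which creates the object if it is missing and patches it otherwise. A sketch, assuming it runs inside Reconcile; the ConfigMap name and the way the data is derived are illustrative:

    // Sketch: create or update the ConfigMap that accompanies the Job.
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{
            Name:      job.Name + "-configmap", // hypothetical naming convention
            Namespace: job.Namespace,
        },
    }
    if _, err := controllerutil.CreateOrUpdate(ctx, r.Client, cm, func() error {
        // Re-derive the data from the current Job spec so updates propagate.
        cm.Data = map[string]string{"config": job.Spec.Template.Spec.Containers[0].Image}
        return nil
    }); err != nil {
        return ctrl.Result{}, err
    }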

Testing resource dependencies

Sometimes an operator must manage multiple resources simultaneously and keep their status in sync. For example, if one resource depends on another, the operator must ensure that all components remain up to date. In the following example, the operator keeps a Deployment up to date when the associated Job changes:

func TestJobDeploymentSync(t *testing.T) {
    g := gomega.NewWithT(t)

    // Create the Job
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "sync-job",
            Namespace: "default",
        },
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  "nginx",
                            Image: "nginx",
                        },
                    },
                    RestartPolicy: corev1.RestartPolicyNever,
                },
            },
        },
    }

    err := k8sClient.Create(context.Background(), job)
    g.Expect(err).NotTo(gomega.HaveOccurred())

    // Create the associated Deployment
    deployment := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "sync-deployment",
            Namespace: "default",
        },
        Spec: appsv1.DeploymentSpec{
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"app": "nginx"},
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{"app": "nginx"},
                },
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  "nginx",
                            Image: "nginx",
                        },
                    },
                },
            },
        },
    }

    err = k8sClient.Create(context.Background(), deployment)
    g.Expect(err).NotTo(gomega.HaveOccurred())

    // Verify the Deployment is in sync with the Job
    fetchedDeployment := &appsv1.Deployment{}
    err = k8sClient.Get(context.Background(), client.ObjectKey{Name: "sync-deployment", Namespace: "default"}, fetchedDeployment)
    g.Expect(err).NotTo(gomega.HaveOccurred())
    g.Expect(fetchedDeployment.Spec.Template.Spec.Containers[0].Image).To(gomega.Equal("nginx"))
}

This test verifies that the operator keeps the Deployment's state in sync with changes to the Job.
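For the controller to be re-triggered when such dependent objects change, its SetupWithManager method usually registers them as well. A minimal sketch based on the generated JobReconciler:

// Sketch: reconcile on changes to the primary resource and to owned Deployments.
func (r *JobReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&batchv1.Job{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}

Owns only delivers events for Deployments that carry an owner reference back to the Job, which the controller would set with controllerutil.SetControllerReference when it creates them.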

Conclusion

Kubebuilder makes it possible to test complex scenarios in a lightweight environment without setting up a full-fledged Kubernetes cluster.


