Network Policies to Protect Workloads in a Kubernetes Cluster

In a Kubernetes cluster, any service in any namespace is reachable from anywhere in the cluster; by default, every pod is open to all traffic.
We can define network policies for namespaces or pods to secure the workloads in the cluster — for example, to separate the workloads of different projects, teams, or organizations in a multi-tenant cluster.
Scenario
Imagine we deploy a three-tier application across Kubernetes namespaces: frontend, backend, and database.
The frontend will be public. It will be exposed through a load balancer, so we will reach it via the balancer's DNS name or IP address.
The backend will contain all the application logic.
The database tier will hold the database.
We know that by default any namespace can send traffic to, and accept traffic from, any other namespace. Without network policies, our three-tier architecture looks like this:

Let’s build out the architecture by creating three new namespaces, each with its own Service and Deployment.
For simplicity, we will use the nginx image for the pods in each Deployment.
1. Set up new namespaces
We create namespaces and add labels to each so that later we can apply network policies on these labels.
# Create "frontend" namespace and add a label
> kubectl create ns frontend
> kubectl label ns frontend tier=frontend
# Create "backend" namespace and add a label
> kubectl create ns backend
> kubectl label ns backend tier=backend
# Create "database" namespace and add a label
> kubectl create ns database
> kubectl label ns database tier=database
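The labels are what the network policies will match on later, so it is worth verifying them (a quick check; requires a running cluster):

```shell
# Show the three namespaces with their labels; each should list its
# tier=... label alongside the automatic kubernetes.io/metadata.name label.
kubectl get ns frontend backend database --show-labels
```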
2. Deploy services and deployments
2.1 Database layer
# Deploy a deployment named "database" with 2 replicas
> kubectl create deploy database -n database --image=nginx --replicas=2
# List the pods of the "database" deployment
> kubectl get pods -n database
----------------------------------------------------------------------------------------------------
NAME READY STATUS RESTARTS AGE
database-7d94797799-b9sdw 1/1 Running 0 23h
database-7d94797799-jc4xt 1/1 Running 0 23h
----------------------------------------------------------------------------------------------------
# Create a service (cluster ip) named "database" for accessing the pods of the "database" deployment
> kubectl create service clusterip database --tcp=80 -n database
# List the service
> kubectl get svc -n database
----------------------------------------------------------------------------------------------------
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
database ClusterIP 10.106.137.165 <none> 80/TCP 23h
----------------------------------------------------------------------------------------------------
2.2 Backend layer
# Deploy a deployment named "backend" with 2 replicas
> kubectl create deploy backend -n backend --image=nginx --replicas=2
# List the pods of the "backend" deployment
> kubectl get pods -n backend
----------------------------------------------------------------------------------------------------
NAME READY STATUS RESTARTS AGE
backend-5c5c74cbf6-h4d9p 1/1 Running 0 23h
backend-5c5c74cbf6-jf6fj 1/1 Running 0 23h
----------------------------------------------------------------------------------------------------
# Create a service (cluster ip) named "backend" for accessing the pods of the "backend" deployment
> kubectl create service clusterip backend --tcp=80 -n backend
# List the service
> kubectl get svc -n backend
----------------------------------------------------------------------------------------------------
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.101.154.161 <none> 80/TCP 23h
----------------------------------------------------------------------------------------------------
2.3 Frontend layer
# Deploy a deployment named "frontend" with 2 replicas
> kubectl create deploy frontend -n frontend --image=nginx --replicas=2
# List the pods of the "frontend" deployment
> kubectl get pods -n frontend
----------------------------------------------------------------------------------------------------
NAME READY STATUS RESTARTS AGE
frontend-5d7445bdb8-g8rpb 1/1 Running 0 24h
frontend-5d7445bdb8-kqtnc 1/1 Running 0 24h
----------------------------------------------------------------------------------------------------
# Create a service (load balancer) named "frontend" for accessing the pods of the "frontend" deployment
# from the internet through the load balancer IP or DNS
> kubectl create service loadbalancer frontend --tcp=80:80 -n frontend
# List the service, and note down the external IP
> kubectl get svc -n frontend
----------------------------------------------------------------------------------------------------
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 10.102.187.203 45.76.197.98 80:30008/TCP 30m
----------------------------------------------------------------------------------------------------
The frontend is exposed through a load balancer; users access the application via the EXTERNAL-IP of the LoadBalancer service.
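With all three tiers deployed and no policies in place yet, you can confirm that cross-namespace traffic is unrestricted (a sketch; requires a running cluster and assumes curl is available in the nginx image, as the checks in section 5 also do):

```shell
# From a frontend pod, reach the database service directly across
# namespaces; with no network policies applied, this succeeds.
kubectl exec -n frontend deploy/frontend -- curl -s --max-time 5 http://database.database
```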
3. Security risks
3.1 Problem
There are currently no policies applied to the pods or namespaces, so all pods in the cluster can communicate with each other. But what if this is a multi-tenant cluster, or sensitive data is stored there? An intruder who compromises the frontend tier gets direct access to the database and to every namespace in the cluster. This is a serious security risk.
3.2 Solution
To protect against these risks, we can use network policies. With them, we can isolate the three tiers from one another: restrict incoming traffic to the database and backend tiers, and allow traffic from the frontend only to the backend, so that an attacker who compromises the frontend cannot reach the database or other namespaces directly. Here’s how it will look on the diagram:

4. Apply network policies
4.1 Database layer
● Deny all incoming and outgoing traffic by default
We create a default policy that denies all incoming and outgoing traffic in this namespace.
# Deny all ingress and egress traffic
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-deny-all-database.yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-database
  namespace: database
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
    - Egress
---
The database tier is now isolated from the rest of the namespaces in the cluster.
● Allow incoming traffic from the backend
Apply another policy to the database tier to allow incoming traffic only from the backend.
# Allow ingress traffic from "backend-tier" using the following manifest file:
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-allow-ingress-from-backend.yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-backend
  namespace: database
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - podSelector:
            matchLabels: {}
          namespaceSelector:
            matchLabels:
              tier: backend
---
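One subtlety worth noting: in the policy above, `podSelector` and `namespaceSelector` sit inside a single `from` element, so both conditions must match (pods in namespaces labeled `tier: backend`). Written as two separate list elements they would be OR-ed, allowing more traffic than intended. A sketch of the difference (fragments only, not complete policies):

```yaml
# Single "from" element: namespaceSelector AND podSelector —
# only pods in namespaces labeled tier: backend are allowed.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            tier: backend
        podSelector:
          matchLabels: {}
---
# Two "from" elements: OR semantics — any pod in a tier: backend
# namespace, OR (because a standalone podSelector matches pods in
# the policy's own namespace) any pod in the database namespace.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            tier: backend
      - podSelector:
          matchLabels: {}
```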
4.2 Backend layer
● Deny all incoming and outgoing traffic by default
Apply the same kind of policy as for the database tier to isolate the backend from the other namespaces.
# Deny all ingress and egress traffic
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-deny-all-backend.yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-backend
  namespace: backend
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
    - Egress
---
● Allow outgoing traffic to the database
At the database tier we allowed incoming traffic only from the backend, and in the previous step we denied all traffic in both directions for the backend. So the database tier is ready to accept traffic from the backend, but the backend itself is still forbidden to send any outgoing traffic. To let the backend talk to the database, we need to allow egress in that direction.
In addition, since we access pods through Services, we need another egress rule to allow DNS resolution of service names. In a Kubernetes cluster, DNS is served by a set of pods in the kube-system namespace. So we must allow egress to kube-system — not to everything there, but only to the kube-dns pods. The backend pods will then be able to resolve the DNS names of the services.
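Service names like `database.database`, used later in this article, are the `<service>.<namespace>` shorthand for the fully qualified `<service>.<namespace>.svc.cluster.local` form. Once the egress policy is applied, you can verify resolution from inside a backend pod (a sketch; assumes `getent` is available in the nginx image):

```shell
# Resolve the database service by its short and fully qualified
# names from inside a pod of the "backend" deployment.
kubectl exec -n backend deploy/backend -- getent hosts database.database
kubectl exec -n backend deploy/backend -- getent hosts database.database.svc.cluster.local
```

Note that some clusters also serve DNS over TCP; if resolution fails with a UDP-only rule, add a matching `protocol: TCP` entry on port 53.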
Let’s write a policy that allows egress from the backend to the database, as well as to port 53 in the kube-system namespace:
# allow egress from backend-tier to database-tier and allow dns resolving of services
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-allow-egress-to-database.yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-database
  namespace: backend
spec:
  podSelector:
    matchLabels: {}
  egress:
    - to:
        - podSelector:
            matchLabels: {}
          namespaceSelector:
            matchLabels:
              tier: database
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
---
Here is what we end up with:

As you can see, the backend can now send egress traffic to the database, but it does not yet accept any ingress traffic.
● Allow incoming traffic from the frontend
Let’s apply another policy to the backend layer to allow incoming traffic only from the frontend.
# Allow ingress from frontend-tier
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-allow-ingress-from-frontend.yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-frontend
  namespace: backend
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - podSelector:
            matchLabels: {}
          namespaceSelector:
            matchLabels:
              tier: frontend
---
This way the backend will accept incoming traffic from the frontend and send outgoing traffic to the database.
4.3 Frontend layer
● Deny all traffic by default
As with the previous tiers, we apply a network policy that denies all incoming and outgoing traffic.
# Deny all ingress and egress traffic
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-deny-all-frontend.yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-frontend
  namespace: frontend
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
    - Egress
---
● Allow outgoing traffic to the backend
Right now all frontend traffic is blocked. To send traffic to the backend, we need to allow egress to the backend and to the kube-system namespace (for resolving service DNS names, as described above).
# Allow egress to backend-tier
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-allow-egress-to-backend.yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-backend
  namespace: frontend
spec:
  podSelector:
    matchLabels: {}
  egress:
    - to:
        - podSelector:
            matchLabels: {}
          namespaceSelector:
            matchLabels:
              tier: backend
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
---
Here’s what we get now:

● Allow incoming traffic from the Internet
Since we access the frontend through the LoadBalancer service, requests arrive via the IP address of the external load balancer. We have denied all incoming traffic, so for now the application is unreachable through the external IP. We need a network policy that allows ingress from anywhere except the private IP range of the pod network; this keeps pods in other namespaces from sending ingress traffic.
# Allow ingress from everywhere except pod-network
> kubectl create -f \
https://raw.githubusercontent.com/shamimice03/Network_Policies_Kubernetes/main/netpol-allow-ingress-from-everywhere.yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-loadbalancer
  namespace: frontend
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
            except: # Restrict the private IP CIDR block of your pod network,
              - 10.0.0.0/8 # so that pods from other namespaces cannot send ingress traffic.
      ports:
        - protocol: TCP
          port: 80
---
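The `10.0.0.0/8` block in the `except` list is an assumption about the pod network: replace it with your cluster's actual pod CIDR. One way to look it up (output varies by CNI plugin, and some plugins do not populate `spec.podCIDR`):

```shell
# Print each node's allocated pod CIDR, one node per line.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```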
We have applied all the network policies needed to protect and isolate the Kubernetes workloads. Here’s how it works now:

5. Check what happened
Let’s test our system and make sure the network policies work as expected.
● First, let’s try to access the application through the external IP address of the load balancing service.

We can see that the frontend application is accessible from the external IP address.

● Let’s try to access services in the backend and database through a pod in the frontend. The first should work, the second should not.
# Dive into a pod of the "frontend" deployment
> kubectl exec -it frontend-5d7445bdb8-g8rpb -n frontend -- bash
# Try to access a service(clusterIP) named "backend", resides in the backend-tier
>> curl backend.backend
---------------------------------------------------------------------------------
<!DOCTYPE html>
<html>
<head> #Successfully Accessed
<title>Welcome to nginx!</title>
<style>
...
---------------------------------------------------------------------------------
# Try to access a service(clusterIP) named "database", resides in the database-tier
>> curl database.database
---------------------------------------------------------------------------------
# curl: (28) Failed to connect to database.database port 80: Connection timed out
---------------------------------------------------------------------------------
● Now let’s check the network policies from a pod in the backend.
# Dive into a pod of the "backend" deployment
> kubectl exec -it backend-5c5c74cbf6-h4d9p -n backend -- bash
# Try to access a service(clusterIP) named "frontend", resides in the frontend-tier
>> curl frontend.frontend
---------------------------------------------------------------------------------
# curl: (28) Failed to connect to frontend.frontend port 80: Connection timed out
---------------------------------------------------------------------------------
# Try to access a service(clusterIP) named "database", resides in the database-tier
>> curl database.database
----------------------------------------------------------------------------------
<!DOCTYPE html>
<html>
<head> #Successfully Accessed
<title>Welcome to nginx!</title>
<style>
...
----------------------------------------------------------------------------------
● Finally, do the same from a pod in the database tier.
# Dive into a pod of the "database" deployment
> kubectl exec -it database-7d94797799-b9sdw -n database -- bash
# Try to access a service(clusterIP) named "backend", resides in the backend-tier
>> curl backend.backend
---------------------------------------------------------------------------------
# curl: (28) Failed to connect to backend.backend port 80: Connection timed out
---------------------------------------------------------------------------------
# Try to access a service(clusterIP) named "frontend", resides in the frontend-tier
>> curl frontend.frontend
---------------------------------------------------------------------------------
# curl: (28) Failed to connect to frontend.frontend port 80: Connection timed out
---------------------------------------------------------------------------------
As you can see, all of our network policies are working as expected.
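If you were following along on a test cluster, everything from this walkthrough can be removed in one step; deleting a namespace also deletes the deployments, services, and network policies inside it:

```shell
# Remove the three namespaces and all resources within them.
kubectl delete ns frontend backend database
```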
Security in Kubernetes: how to upgrade your skills
Enrollment is open at the Slurm training center for the “Security in Kubernetes” course, aimed at security engineers, DevOps engineers, SREs, and developers who work with Kubernetes on their own.
In the course you will get acquainted with the main threat models, learn how to counter them, and see how to build security into every stage: from development to delivery to the server and subsequent deployment.
You can consolidate all the knowledge in practice on lab stands: you will work through the theory and gain confidence in your decisions.
If you already have the basics and want to dig into Kubernetes internals, come to “Kubernetes Mega”: six hours of practice, seasoned with a pinch of theory from the speakers.
What’s covered:
● authorization in the cluster
● autoscaling setup
● backup
● stateful applications in a cluster
● integration of Kubernetes and Vault for storing secrets
● HorizontalPodAutoscaler
● certificate rotation in the cluster
● Blue-Green and Canary deployments
● service mesh setup
“Mega” is suitable for everyone who will run Kubernetes in production and be responsible for the project: security specialists, system engineers, administrators, architects, DevOps engineers, and so on. There is also a free course that teaches how to install Kubernetes manually.
Bundles are cheaper:
We offer sets of video courses (the “Standard” tier) at a 20% discount:
“Security” + “Mega” = 90,000 rubles instead of 130,000
Learn more and sign up: “Security in Kubernetes”, Kubernetes Mega.