Launching a Kubernetes Ingress controller with a public IP on a home laptop

Working with Ingress controllers usually means working with Kubernetes in the cloud, where external IPs are assigned automatically. I am learning Kubernetes on an ordinary laptop behind NAT, which runs various flavors of Kubernetes in virtual machines. When I got to the Ingress controller, I had an irresistible desire to give it a public IP and reach it from the outside. Let’s see how this can be done; almighty Linux will help us.

I decided to borrow a public IP from a VPS. At reg.ru (not an advertisement, it simply worked for this) I rented a virtual machine with Ubuntu 20.04 on board and a pair of IP addresses for a couple of hours. One address will be used for SSH access; the second we will remove from the virtual machine’s interface and bring into our Kubernetes (this could be organized even more simply with DNAT, but it is more interesting this way). Obviously, your public IP addresses will be different from the ones below and should be substituted accordingly.
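
For reference, the simpler DNAT variant mentioned above would boil down to a couple of iptables rules on the VPS that forward ports 80/443 to the laptop’s tunnel address (a rough sketch only, not the path taken below; 10.15.0.2 is the tunnel address we configure later):

# iptables -t nat -A PREROUTING -d 89.108.76.161 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 10.15.0.2
# iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE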

VPS

The state of the VPS at the start:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:73:f5:f6 brd ff:ff:ff:ff:ff:ff
    inet 95.163.241.96/24 brd 95.163.241.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 89.108.76.161/24 brd 89.108.76.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a00:f940:2:4:2::51d4/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe73:f5f6/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:54:00:9a:da:36 brd ff:ff:ff:ff:ff:ff

Listening on eth0, we can see that the hypervisor regularly sends ARP requests to confirm the IP addresses. Later we will unbind 89.108.76.161 from the interface and start a daemon that answers these ARP requests, imitating the presence of the IP address:

# tcpdump -i eth0 -n -v arp 
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:53:20.229845 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 95.163.241.96 tell 37.140.193.29, length 28
14:53:20.229879 ARP, Ethernet (len 6), IPv4 (len 4), Reply 95.163.241.96 is-at 52:54:00:73:f5:f6, length 28
14:54:05.031046 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 89.108.76.161 tell 37.140.193.29, length 28
14:54:05.031103 ARP, Ethernet (len 6), IPv4 (len 4), Reply 89.108.76.161 is-at 52:54:00:73:f5:f6, length 28
14:54:09.126771 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 95.163.241.96 tell 37.140.193.29, length 28
14:54:09.126827 ARP, Ethernet (len 6), IPv4 (len 4), Reply 95.163.241.96 is-at 52:54:00:73:f5:f6, length 28
14:54:49.573563 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 89.108.76.161 tell 37.140.193.29, length 28
14:54:49.573615 ARP, Ethernet (len 6), IPv4 (len 4), Reply 89.108.76.161 is-at 52:54:00:73:f5:f6, length 28
14:54:54.693462 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 95.163.241.96 tell 37.140.193.29, length 28
14:54:54.693493 ARP, Ethernet (len 6), IPv4 (len 4), Reply 95.163.241.96 is-at 52:54:00:73:f5:f6, length 28

Let’s bring up a tunnel from the VPS to the home laptop using WireGuard. There are plenty of guides on the Internet, so there is nothing special here:

# apt update
# apt install wireguard
# wg genkey | tee /etc/wireguard/private.key
# chmod go= /etc/wireguard/private.key
# cat /etc/wireguard/private.key | wg pubkey | tee /etc/wireguard/public.key
# cat  > /etc/wireguard/wg0.conf <<EOF
[Interface]
Address = 10.15.0.1/24
SaveConfig = true
ListenPort = 51820
PrivateKey = gFzlk6/oBAkRnqTSqRQ0A03IR8iX2NY0Q9518xMTDmI=
EOF

Bring up WireGuard:

# systemctl start wg-quick@wg0.service

Remove the second external IP from the interface:

# ip addr del 89.108.76.161/24 brd 89.108.76.255 dev eth0

Add a route to that external IP through the tunnel:

# ip r add 89.108.76.161 via 10.15.0.2

The command below is needed so that the laptop is not left without Internet access, because later we will send all of its traffic into the tunnel:

# iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
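
If you want these iptables rules to survive a reboot, one option is the iptables-persistent package:

# apt install iptables-persistent
# netfilter-persistent save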

Allow the external IP and the laptop’s address on the WireGuard network to pass through the tunnel for this peer:

# wg set wg0 peer hd7clB/uztrTOlsWTrHCF7mu9g6ECp+FhE2lhohWf1s= allowed-ips 89.108.76.161,10.15.0.2
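
Note that with SaveConfig = true, wg-quick will write this peer back into wg0.conf when the interface is brought down. You can check that the peer and its allowed IPs were registered with:

# wg show wg0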

Allow forwarding between interfaces:

# sysctl -w net.ipv4.ip_forward=1

and make sure the FORWARD chain is not blocked:

# iptables-save | grep FORWARD   
:FORWARD ACCEPT [450722:544073659]
:FORWARD ACCEPT [4633:3846037]
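
sysctl -w does not survive a reboot; to make forwarding permanent you could, for example, drop it into sysctl.d (the file name here is arbitrary):

# echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
# sysctl --system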

After starting wireguard, the wg0 interface will appear in the system:

# ip a
4: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.15.0.1/24 scope global wg0
       valid_lft forever preferred_lft forever

Laptop (Ubuntu 20.04)

Install WireGuard on the laptop and generate keys with the same commands as on the VPS:
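
# apt update
# apt install wireguard
# wg genkey | tee /etc/wireguard/private.key
# chmod go= /etc/wireguard/private.key
# cat /etc/wireguard/private.key | wg pubkey | tee /etc/wireguard/public.key

Then create the tunnel config. Table = off tells wg-quick not to install routes for AllowedIPs automatically, since we will manage routing by hand: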

# cat  > /etc/wireguard/wg2.conf <<EOF 
[Interface]
PrivateKey = Some private key
Address = 10.15.0.2/24
Table = off

[Peer]
PublicKey = aU3tLYzJPTKCtelYgVTtAfgnvixWdNK5jC2wnXgvemw=
AllowedIPs = 0.0.0.0/0
Endpoint = 95.163.241.96:51820
PersistentKeepalive = 25
EOF

Bring up the tunnel:

# systemctl start wg-quick@wg2.service

Check that the WireGuard interface is present:

# ip a
221: wg2: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.15.0.2/24 scope global wg2
       valid_lft forever preferred_lft forever

and connectivity with the server:

# ping 10.15.0.1
PING 10.15.0.1 (10.15.0.1) 56(84) bytes of data.
64 bytes from 10.15.0.1: icmp_seq=1 ttl=64 time=16.3 ms
64 bytes from 10.15.0.1: icmp_seq=2 ttl=64 time=8.91 ms
64 bytes from 10.15.0.1: icmp_seq=3 ttl=64 time=9.00 ms

For an initial check, let’s hang the external IP on the laptop’s loopback:

# ip addr add 89.108.76.161 dev lo

We send all of the laptop’s traffic through the tunnel so that return packets reach the clients that will talk to 89.108.76.161 (192.168.88.1 is the laptop’s default gateway; the explicit route to 95.163.241.96 keeps the tunnel endpoint itself reachable outside the tunnel):

# ip r add 95.163.241.96/32 via 192.168.88.1 
# ip r add default via 10.15.0.1 
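
A quick way to confirm that traffic now leaves through the tunnel is to ask the kernel which route it would pick for an arbitrary external address (the output should point via 10.15.0.1 dev wg2):

# ip r get 1.1.1.1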

Make sure the FORWARD chain is not blocked:

# iptables-save | grep FORWARD
:FORWARD ACCEPT [67644779:42335638975]
:FORWARD ACCEPT [149377:28667150]

and enable forwarding:

# sysctl -w net.ipv4.ip_forward=1

VPS

Check reachability of 89.108.76.161 from the VPS:

# ping 89.108.76.161
PING 89.108.76.161 (89.108.76.161) 56(84) bytes of data.
64 bytes from 89.108.76.161: icmp_seq=1 ttl=64 time=6.90 ms
64 bytes from 89.108.76.161: icmp_seq=2 ttl=64 time=38.7 ms
64 bytes from 89.108.76.161: icmp_seq=3 ttl=64 time=59.9 ms

Start the daemon that will answer the ARP requests:

# farpd -d -i eth0 89.108.76.161
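
On Ubuntu, farpd can usually be installed with apt install farpd. To make sure it is now the one answering, you can watch the ARP traffic for this address again:

# tcpdump -i eth0 -n arp and host 89.108.76.161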

Now pinging 89.108.76.161 from an external network (for example, from a phone on a mobile carrier’s network) works.

Laptop

Recall that the laptop (the hypervisor) runs a virtual machine (VM) in which minikube lives. The VM is connected to the hypervisor’s virbr0 bridge:

# ip a
19: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:c3:6e:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever

Let’s remove the external address from lo:

# ip addr del 89.108.76.161 dev lo

Configure routing of packets destined for 89.108.76.161 toward the VM:

# ip r add 89.108.76.161 via 192.168.122.245

VM

VM interfaces:

l@minikube2:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:a5:b3:df brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.245/24 brd 192.168.122.255 scope global dynamic enp1s0
       valid_lft 2292sec preferred_lft 2292sec
    inet6 fe80::5054:ff:fea5:b3df/64 scope link 
       valid_lft forever preferred_lft forever
3: br-5b72cdfd77e4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:01:94:a2:a5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.58.1/24 brd 192.168.58.255 scope global br-5b72cdfd77e4
       valid_lft forever preferred_lft forever
    inet6 fe80::42:1ff:fe94:a2a5/64 scope link 
       valid_lft forever preferred_lft forever

Forwarding status:

l@minikube2:~$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

l@minikube2:~$ sudo iptables-save | grep FORWARD
:FORWARD ACCEPT [2663492:1312451658]
:FORWARD ACCEPT [6299:278761]

Minikube is running on this machine with three nodes, which are containers connected to the bridge br-5b72cdfd77e4:

l@minikube2:~$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED      STATUS        PORTS                                                                                                                                  NAMES
d672c95f6adc   gcr.io/k8s-minikube/kicbase:v0.0.37   "/usr/local/bin/entr…"   5 days ago   Up 34 hours   127.0.0.1:49197->22/tcp, 127.0.0.1:49196->2376/tcp, 127.0.0.1:49195->5000/tcp, 127.0.0.1:49194->8443/tcp, 127.0.0.1:49193->32443/tcp   helm-m03
6eac7091ea0c   gcr.io/k8s-minikube/kicbase:v0.0.37   "/usr/local/bin/entr…"   5 days ago   Up 34 hours   127.0.0.1:49192->22/tcp, 127.0.0.1:49191->2376/tcp, 127.0.0.1:49190->5000/tcp, 127.0.0.1:49189->8443/tcp, 127.0.0.1:49188->32443/tcp   helm-m02
c02b9bb12c98   gcr.io/k8s-minikube/kicbase:v0.0.37   "/usr/local/bin/entr…"   5 days ago   Up 34 hours   127.0.0.1:49187->22/tcp, 127.0.0.1:49186->2376/tcp, 127.0.0.1:49185->5000/tcp, 127.0.0.1:49184->8443/tcp, 127.0.0.1:49183->32443/tcp   helm

We route packets to the third node:

l@minikube2:~$ sudo ip r add 89.108.76.161 via 192.168.58.4

Let’s SSH into it:

l@minikube2:~$ minikube ssh -n helm-m03

Let’s assign the external address to lo:

docker@helm-m03:~$ sudo ip addr add 89.108.76.161 dev lo
docker@helm-m03:~$ ip a        
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever    
    inet 89.108.76.161/32 scope global lo
       valid_lft forever preferred_lft forever

21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:3a:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.58.4/24 brd 192.168.58.255 scope global eth0
       valid_lft forever preferred_lft forever

Let’s install Python to check connectivity:

docker@helm-m03:~$ sudo apt update
docker@helm-m03:~$ sudo apt install python3

and start a web server on the default port 8000:

docker@helm-m03:~$ python3 -m http.server

Let’s check access to 89.108.76.161 from the outside using http://89.108.76.161:8000.
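
For example, from any machine outside the lab:

$ curl http://89.108.76.161:8000/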

Let’s move on to the Ingress controller. Enable the addon in the cluster:

l@minikube2:~$ minikube addons enable ingress

Let’s add the external IP to the Ingress controller service (k is an alias for kubectl):

l@minikube2:~$ k patch svc -n ingress-nginx ingress-nginx-controller -p '{"spec":{"externalIPs":["89.108.76.161"]}}'

and kube-proxy automatically creates DNAT rules that steer traffic for this address to the ingress-nginx-controller pods:

l@minikube2:~$ sudo iptables-save | grep 89.108.76.161
-A KUBE-SERVICES -d 89.108.76.161/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:http external IP" -m tcp --dport 80 -j KUBE-EXT-CG5I4G2RS3ZVWGLK
-A KUBE-SERVICES -d 89.108.76.161/32 -p tcp -m comment --comment "ingress-nginx/ingress-nginx-controller:https external IP" -m tcp --dport 443 -j KUBE-EXT-EDNDUDH2C75GIR6O
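
The same can be checked from the Kubernetes side: the service should now show the address under EXTERNAL-IP:

l@minikube2:~$ k get svc -n ingress-nginx ingress-nginx-controller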

Let’s deploy the whoami service to Kubernetes:

l@minikube2:~$ cat > deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami  
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - containerPort: 80
EOF
l@minikube2:~$ cat > service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: extip
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: whoami
EOF
l@minikube2:~$ cat > ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: extip

spec:
  ingressClassName: nginx
  rules:
  - host: extip.yourdomainhere
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: extip
            port:
              number: 80
EOF
l@minikube2:~$ k apply -f deployment.yaml
l@minikube2:~$ k apply -f service.yaml
l@minikube2:~$ k apply -f ingress.yaml
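
Before touching DNS, the whole chain can already be tested from any outside machine by pinning the host name to the borrowed address (the Host header must match the Ingress rule):

$ curl --resolve extip.yourdomainhere:80:89.108.76.161 http://extip.yourdomainhere/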

Let’s put the external IP address 89.108.76.161 into the A record of the extip.yourdomainhere domain. Opening http://extip.yourdomainhere from the outside, everything works!

curl extip.yourdomainhere
Hostname: whoami-75d55b64f6-7q894
IP: 127.0.0.1
IP: 10.244.0.17
RemoteAddr: 10.244.0.3:50120
GET / HTTP/1.1
Host: extip.yourdomainhere
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/109.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 192.168.58.4
X-Forwarded-Host: extip.yourdomainhere
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.58.4
X-Request-Id: f3c1f071b171b2ab1036241410acebcb
X-Scheme: http

So, we borrowed a public IP from a VPS, brought it into Kubernetes, set up routing and connectivity to this address, deployed a service to Kubernetes, and verified that it works.

I hope it was interesting.
