Rancher. Adding a new cluster

Introduction

Rancher is a system that runs on Kubernetes and lets you work with multiple clusters through a single entry point. Rancher helps streamline cluster deployment across different environments: bare-metal (dedicated) servers, private clouds, and public clouds, and it unifies clusters under a single authentication, access-control, and security-policy scheme.

Regardless of how a cluster was created or where it is located, Rancher provides access to all clusters through a single control panel.

Tasks

  1. Resource markup:

    1. 3 service nodes (etcd, control plane)

    2. 4 worker nodes

    3. 1 virtual machine inside the new cluster to create the tunnel

  2. Configure communication between networks:

    1. First network: the internal network hosting the existing Rancher cluster and the Rancher management server

    2. Second network: the external network with the new Rancher cluster on a bare-metal server

  3. Add a Nexus server to store Helm and Docker artifacts

Installing VMware

Given: a dedicated server with the following specifications: 2 x Xeon Gold 6240R (24 cores each), 320 GB DDR4 RAM, 4 x 480 GB SSD.

The drives were assembled into RAID 10 and VMware vSphere Hypervisor 7 with the free license was installed. The free version is limited to 8 vCPUs per VM.

Installing and configuring virtual machines

#   Name            CPU   RAM, GB   HDD, GB   Private IP    Public IP
1   vm-firewall     1     4         16        10.21.0.1     8.10.7.2
2   vm-rch-node-1   8     32        60        10.21.0.11    -
3   vm-rch-node-2   8     32        60        10.21.0.12    -
4   vm-rch-node-3   8     32        60        10.21.0.13    -
5   vm-rch-node-4   8     32        60        10.21.0.14    -
6   vm-rch-etcd-1   2     8         20        10.21.0.15    -
7   vm-rch-etcd-2   2     8         20        10.21.0.16    -
8   vm-rch-etcd-3   2     8         20        10.21.0.17    -
9   vm-nexus        4     16        200       10.21.0.100   8.10.7.2:8081

Ubuntu Server 18 is installed on each VM.

On each cluster VM (except vm-nexus), run the following commands to install Docker and the other required packages:

apt-get update && sudo apt-get install -y apt-transport-https
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-get update && sudo apt-get install -y kubectl docker.io open-iscsi bridge-utils open-vm-tools cifs-utils jq util-linux coreutils gccgo-go
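
Optionally, you can verify the installation and make sure the Docker daemon is enabled on boot (a quick sanity check added here, not part of the original instructions):

sudo systemctl enable --now docker
docker --version
kubectl version --client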

And turn off swap:

swapoff -a

In /etc/fstab, delete (or comment out) the line responsible for the swap file, and also delete the swap file itself (/swap).
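
A minimal sketch of making this persistent, assuming the swap file is /swap as in the author's setup (the sed pattern simply comments out any fstab entry that mentions swap):

sudo swapoff -a
# comment out the swap entry in /etc/fstab
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# remove the swap file itself
sudo rm -f /swap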

Network architecture

IP addresses on all VMs are assigned manually. The network and route settings on each VM are stored in /etc/netplan/00-installer-config.yaml.

The current cluster is on the 10.1.0.0/16 network

For the internal LAN of the new cluster, the address range is 10.21.0.0/16

The vm-firewall machine has two network interfaces:

  1. ens160. The public (“white”) address 8.10.7.2 issued by the provider is attached to this interface. It connects to the traffic entry point of the current cluster, and the tunnel is created through it.

  2. ens192. The interface for the internal network of the new cluster. It acts as a gateway for all VMs within the new cluster’s network.

Network settings on vm-firewall:

network:
  ethernets:
    ens160:
      dhcp4: false
      addresses: [8.10.7.2/23]
      gateway4: 8.10.7.1
      dhcp4-overrides:
        use-dns: false
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      routes:
        - to: 10.1.0.0/16
          via: 8.10.7.2
          on-link: true
          metric: 100
        - to: 10.21.0.0/16
          via: 10.21.0.1
          on-link: true
          metric: 110
        - to: 0.0.0.0/0
          via: 8.10.7.2
          on-link: true
          metric: 120
    ens192:
      dhcp4: false
      addresses: [10.21.0.1/16]
  version: 2

Network settings on any other VM:

network:
  ethernets:
    ens160:
      dhcp4: false
      addresses: [10.21.0.11/16]
      gateway4: 10.21.0.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2

After editing the netplan configuration file, you need to check the syntax.
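
The exact command is omitted in the original; presumably something like the following, where netplan generate only parses the configuration and netplan try additionally applies it with automatic rollback if connectivity is lost:

sudo netplan generate
sudo netplan try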

Then apply the settings.
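
The original again omits the command; with netplan it is simply:

sudo netplan apply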

After that, you need to check how the ip-address settings were applied:

user@vm-rch-node-01:~$ ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.21.0.11  netmask 255.255.0.0  broadcast 10.21.255.255
        inet6 fe80::20c:29ff:fe3b:3ea5  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:3b:3e:a5  txqueuelen 1000  (Ethernet)
        RX packets 3756900  bytes 2423660972 (2.4 GB)
        RX errors 0  dropped 503  overruns 0  frame 0
        TX packets 2413321  bytes 311067924 (311.0 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

And how the route settings were applied:

user@vm-rch-node-01:~$ ip route
default via 10.21.0.1 dev ens160 proto static
10.21.0.0/16 dev ens160 proto kernel scope link src 10.21.0.11

Configuring a tunnel between the current cluster and the new cluster

Variables:

  • 19.4.15.14 – the public (“white”) IP address on the current cluster's side, used to communicate with the new cluster

  • 8.10.7.2 – the public IP address assigned to vm-firewall; the tunnel is created through it and all communication with the current cluster goes through it

IPsec technology is selected for the tunnel. The settings are made on the VM vm-firewall.

On the current cluster's side, we configure the entry point on its firewall and save a file with the phase 1 and phase 2 settings needed to bring up the tunnel.

Installing packages:

apt-get install racoon ipsec-tools

Two components are installed in the system:

  1. The racoon daemon, which manages the ISAKMP (IKE) tunnel.

  2. The setkey utility, which manages the SAs (security associations) of the data tunnels.

Let’s start with the first one. Racoon is responsible for the authentication and negotiation parameters of the IKE tunnels. It is a daemon configured with a single file (/etc/racoon/racoon.conf) and started by a regular init script (/etc/init.d/racoon):

The remote block contains the phase 1 settings.

The sainfo block contains the phase 2 settings.

root@vm-firewall:~# cat /etc/racoon/racoon.conf
log notify;
path pre_shared_key "/etc/racoon/psk.txt";

listen {
        isakmp 8.10.7.2[500];
        strict_address;
}

remote 19.4.15.14 {
        exchange_mode main,aggressive;
        lifetime time 24 hour;
        dpd_delay 5;
        proposal {
                encryption_algorithm aes 256;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group 2;
                lifetime time 86400 sec;
        }
}

sainfo address 10.21.0.0/24 any address 10.1.0.0/16 any {
        pfs_group 2;
        lifetime time 28800 sec;
        encryption_algorithm aes 256;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}

In the psk.txt file, specify the pre-shared key that was set at the entry point on the current cluster's side:

root@vm-firewall:~# cat /etc/racoon/psk.txt
# IPv4/v6 addresses
19.4.15.14 PA$$W0RD
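
racoon is usually strict about the permissions of the pre-shared key file and may refuse to read it if it is accessible to other users, so it is worth tightening them (standard practice, not mentioned in the original):

chmod 600 /etc/racoon/psk.txt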

Next, the setkey utility. It also starts as a daemon (/etc/init.d/setkey), but in practice it simply executes the /etc/ipsec-tools.conf script. As stated earlier, this is what sets up the tunnels for user traffic, namely the SA and SP entries for them:

root@vm-firewall:~# cat /etc/ipsec-tools.conf
#!/usr/sbin/setkey -f

# NOTE: Do not use this file if you use racoon with racoon-tool
# utility. racoon-tool will setup SAs and SPDs automatically using
# /etc/racoon/racoon-tool.conf configuration.
#
## Flush the SAD and SPD
#
flush;
spdflush;

## Some sample SPDs for use racoon
#
spdadd 10.21.0.0/24 10.1.0.0/16 any -P out ipsec
        esp/tunnel/8.10.7.2-19.4.15.14/require;
spdadd 10.1.0.0/16 10.21.0.0/24 any -P in ipsec
        esp/tunnel/19.4.15.14-8.10.7.2/require;

We restart the services in this sequence:

systemctl restart setkey.service
systemctl restart racoon.service
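
To make sure both daemons actually came up, you can additionally check their status (not shown in the original):

systemctl status setkey.service racoon.service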

Functional check

Remember, the tunnel only comes up when traffic flows into it, so we need to send traffic (e.g. a ping) toward the destination. We start a ping from node 10.21.0.11 to a node of the current cluster (10.1.1.13):

user@vm-rch-node-01:~$ ping 10.1.1.13
PING 10.1.1.13 (10.1.1.13) 56(84) bytes of data.
64 bytes from 10.1.1.13: icmp_seq=1 ttl=61 time=4.67 ms
64 bytes from 10.1.1.13: icmp_seq=2 ttl=61 time=2.55 ms
64 bytes from 10.1.1.13: icmp_seq=3 ttl=61 time=2.94 ms
64 bytes from 10.1.1.13: icmp_seq=4 ttl=61 time=4.07 ms^C
--- 10.1.1.13 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 2.556/3.563/4.671/0.850 ms

There should be replies from the far side with a small delay (unless, of course, ICMP is blocked somewhere along the path).

Checking the route settings on the same node:

user@vm-rch-node-01:~$ traceroute 10.1.1.13
traceroute to 10.1.1.13 (10.1.1.13), 30 hops max, 60 byte packets
1 _gateway (10.21.0.1) 0.113 ms 0.080 ms 0.096 ms
2 * * *
3 10.1.1.13 (10.1.1.13) 2.846 ms 2.875 ms 2.874 ms

On vm-firewall, we check whether the ISAKMP tunnel has come up:

root@vm-firewall:~# racoonctl show-sa isakmp
Destination    Cookies                                      Created
10.1.0.1.500   356a7e134251a93f:30071210375c165         2021-02-19 09:18:28

We can also see whether the tunnels with user data have been created using the commands racoonctl show-sa esp or setkey -D:

root@vm-firewall:~# setkey -D
8.10.7.2 19.4.15.14
esp mode=tunnel spi=133928257(0x07fb9541) reqid=0(0x00000000)
E: aes-cbc e12ee98c dcc52fa3 dd115cae 57aaa59e b7b37484 ffbfe306 b7a1e3cd 1e2c5301
A: hmac-sha1 a8d2c0a7 f9690fe2 287cad8f 3023a683 67d4ed85
seq=0x00000000 replay=4 flags=0x00000000 state=mature
created: Feb 20 07:53:53 2021 current: Feb 20 08:15:59 2021
diff: 1326(s) hard: 1800(s) soft: 1440(s)
last: Feb 20 07:53:54 2021 hard: 0(s) soft: 0(s)
current: 49838883(bytes) hard: 0(bytes) soft: 0(bytes)
allocated: 52998 hard: 0 soft: 0
sadb_seq=1 pid=4468 refcnt=0
19.4.15.14 8.10.7.2
esp mode=tunnel spi=193529941(0x0b890855) reqid=0(0x00000000)
E: aes-cbc 8ad5774d eaa4fe28 685eb88a 12320bac 4ed8ec2e c6af576f 7f3e1f27 666da5da
A: hmac-sha1 22c82645 7f7b9550 73cbd018 1648b4b7 402411ff
seq=0x00000000 replay=4 flags=0x00000000 state=mature
created: Feb 20 07:53:53 2021 current: Feb 20 08:15:59 2021
diff: 1326(s) hard: 1800(s) soft: 1440(s)
last: Feb 20 07:53:54 2021 hard: 0(s) soft: 0(s)
current: 7180221(bytes) hard: 0(bytes) soft: 0(bytes)
allocated: 24822 hard: 0 soft: 0
sadb_seq=2 pid=4468 refcnt=0

If the tunnel is not established, then you need to change the log level from log notify to log debug2 in the /etc/racoon/racoon.conf file and restart the services:

systemctl restart setkey.service
systemctl restart racoon.service

After that, detailed logs will appear in /var/log/syslog and in the output of racoon -F.

Read man racoon.conf carefully and look at the examples in /usr/share/doc/racoon/.

After debugging, do not forget to return the log level to its original value.

Installing and configuring the Nexus

Go to the machine where you need to install Nexus and install it according to the instructions: https://help.sonatype.com/repomanager3/installation

Check that Nexus is responding on port 8081:

user@vm-nexus:~$ curl -I localhost:8081
HTTP/1.1 200 OK
Date: Sat, 20 Feb 2021 09:58:41 GMT
Server: Nexus/3.29.2-02 (OSS)
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Type: text/html
Last-Modified: Sat, 20 Feb 2021 09:58:41 GMT
Pragma: no-cache
Cache-Control: no-cache, no-store, max-age=0, must-revalidate, post-check=0, pre-check=0
Expires: 0
Content-Length: 8435

Further settings are made on the VM vm-firewall.

Install the package that allows saving and restoring iptables rules:

aptitude install iptables-persistent

Configure packet forwarding (IP forwarding): open the /etc/sysctl.conf file, find and uncomment the line that enables forwarding, then save the file, exit, and reboot the machine with the reboot command.
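
The original does not show the actual edits; on a standard Ubuntu system they would look roughly like this (reloading with sysctl -p is an alternative to the full reboot and is my addition):

# in /etc/sysctl.conf, uncomment the line:
#     net.ipv4.ip_forward=1
nano /etc/sysctl.conf

# then reboot, or reload the settings without rebooting:
sysctl -p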

Check that the setting has been applied:

root@vm-firewall:~# sysctl -a | grep net.ipv4.ip_forward
net.ipv4.ip_forward = 1

Add an iptables rule so that forwarded packets are NATed (masqueraded):

iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE

We check that the rule has been added:

root@vm-firewall:~# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source           destination  
Chain INPUT (policy ACCEPT)
target prot opt source           destination  
Chain OUTPUT (policy ACCEPT)
target prot opt source           destination  
Chain POSTROUTING (policy ACCEPT)
MASQUERADE  all  --  anywhere         anywhere

We prohibit connections to port 22 (SSH) for everyone, but allow connections from the network of the current cluster (10.1.0.0/16):

iptables -A INPUT -p tcp -s 10.1.0.0/16 --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

You need to be able to reach the Nexus web interface. Since it is installed on the internal machine 10.21.0.100, it must be made reachable from the public address 8.10.7.2. To do this, we add iptables rules that forward port 8081 on 8.10.7.2 to port 8081 on 10.21.0.100, where Nexus is running. As a result, the Nexus web interface will be available at 8.10.7.2:8081:

iptables -t nat -A PREROUTING -p tcp -d 8.10.7.2 --dport 8081 -j DNAT --to-destination 10.21.0.100:8081
iptables -t nat -A POSTROUTING -p tcp --sport 8081 --dst 10.21.0.100 -j SNAT --to-source 8.10.7.2:8081
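
From a machine outside the new network, the forwarding can then be verified with the same kind of request that was used against the local port earlier (an extra check, not in the original):

curl -I 8.10.7.2:8081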

Saving iptables

After adding the new rules (by the way, they take effect immediately and do not require a reboot), we need to save them so that they are restored after the machine reboots.

To do this, run the command:

iptables-save > /etc/iptables/rules.v4
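
With iptables-persistent installed, the same result can also be achieved through its wrapper, which saves both the IPv4 and IPv6 rule sets:

netfilter-persistent save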

You can also just see which rules will be saved without writing to a file:

root@vm-firewall:~# iptables-save
# Generated by iptables-save v1.6.1 on Sat Feb 20 10:23:16 2021
*filter
:INPUT ACCEPT [1367208:430732612]
:FORWARD ACCEPT [2626485:2923178076]
:OUTPUT ACCEPT [887495:574719960]
-A INPUT -s 10.1.0.0/16 -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j DROP
-A OUTPUT -p tcp -m tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sat Feb 20 10:23:16 2021
# Generated by iptables-save v1.6.1 on Sat Feb 20 10:23:16 2021
*nat
:PREROUTING ACCEPT [590459:52961865]
:INPUT ACCEPT [218805:21703262]
:OUTPUT ACCEPT [130:9341]
:POSTROUTING ACCEPT [1549:81533]
-A PREROUTING -d 8.10.7.2/32 -p tcp -m tcp --dport 8081 -j DNAT --to-destination 10.21.0.100:8081
-A POSTROUTING -s 10.21.0.0/16 -d 10.1.0.0/16 -o ens160 -m policy --dir out --pol ipsec -j ACCEPT
-A POSTROUTING -d 10.21.0.100/32 -p tcp -m tcp --sport 8081 -j SNAT --to-source 8.10.7.2:8081
-A POSTROUTING -o ens160 -j MASQUERADE
COMMIT
# Completed on Sat Feb 20 10:23:16 2021

Debugging iptables

If you need to remove all iptables rules:

iptables -F
iptables -X
iptables -t nat -X
iptables -t nat -F
iptables -t filter -X
iptables -t filter -F

Restoring rules from a file:

iptables-restore < /etc/iptables/rules.v4

Author of the article: https://www.facebook.com/ymalov/
