Rancher is a system that runs on Kubernetes and lets you work with multiple clusters through a single entry point. Rancher streamlines the deployment of clusters across different environments: bare-metal servers (dedicated hardware), private clouds, and public clouds, and unifies those clusters under a single set of authentication, access-control, and security policies.
Regardless of how a cluster was created and where it is located, Rancher provides access to all clusters through one control panel.
Tasks
Resource allocation:
3 service nodes (etcd, control plane)
4 worker nodes
1 virtual machine inside a new cluster to create a tunnel
Configure communication between networks:
First network: the internal network with the Rancher cluster and the Rancher server (manager)
Second network: external network with a Rancher cluster on a bare-metal server
Add a Nexus server to store Helm and Docker artifacts
Installing VMware
Given: a dedicated server with the following characteristics: 2 x Xeon Gold 6240R (24 cores each), 320 GB DDR4 RAM, 4 x 480 GB SSD.
We assembled the drives into RAID10 on the server and installed VMware vSphere Hypervisor 7 with the free license, which is limited to 8 vCPUs per VM.
Installing and configuring virtual machines
#   Name            CPU  RAM, GB  HDD, GB  Private IP     Public IP
1   vm-firewall     1    4        16       10.21.0.1      8.10.7.2
2   vm-rch-node-1   8    32       60       10.21.0.11
3   vm-rch-node-2   8    32       60       10.21.0.12
4   vm-rch-node-3   8    32       60       10.21.0.13
5   vm-rch-node-4   8    32       60       10.21.0.14
6   vm-rch-etcd-1   2    8        20       10.21.0.15
7   vm-rch-etcd-2   2    8        20       10.21.0.16
8   vm-rch-etcd-3   2    8        20       10.21.0.17
9   vm-nexus        4    16       200      10.21.0.100    8.10.7.2:8081
Each VM runs Ubuntu Server 18.04.
On each cluster VM (except vm-nexus), execute the commands to install Docker:
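The install commands themselves are not reproduced here; a typical sketch for Ubuntu 18.04 using Docker's official apt repository (the repository URL and package names are the standard upstream ones, not taken from the original) is:

# prerequisites for using an https apt repository
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# add Docker's official GPG key and repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# install Docker Engine
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io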
In /etc/fstab, delete the line responsible for the swap file, and also delete the swap file itself (/swap).
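A sketch of the corresponding commands (assuming the swap file is /swap, as above; verify the sed pattern against your fstab before running it):

# disable swap immediately
swapoff -a
# drop the /swap entry from fstab
sed -i '/\/swap/d' /etc/fstab
# remove the swap file itself
rm -f /swap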
Network architecture
IP addresses on all VMs are assigned statically. The network and route settings on each VM are stored in /etc/netplan/00-installer-config.yaml.
The current cluster is on the 10.1.0.0/16 network
For the internal LAN of the new cluster, the address range is 10.21.0.0/16
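As an illustration, a minimal netplan sketch for vm-rch-node-1 (interface name and addresses taken from the table and the routing output below; the DNS server is an assumption):

network:
  version: 2
  ethernets:
    ens160:
      addresses:
        - 10.21.0.11/16
      gateway4: 10.21.0.1      # vm-firewall acts as the gateway
      nameservers:
        addresses: [8.8.8.8]   # DNS server is an assumption

The configuration is applied with netplan apply.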
The vm-firewall machine has two network interfaces:
ens160. The public ("white") address 8.10.7.2, issued by the provider, is attached to this interface. Through this address we connect to the traffic entry point of the current cluster and build the tunnel.
ens192. The interface for the internal network of the new cluster. It acts as a gateway for all VMs within the new cluster’s network.
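For vm-firewall to route traffic between ens160 and ens192 (and to apply the NAT rules shown later), IP forwarding must be enabled; this step is implied rather than shown in the original:

# enable packet forwarding and make it persistent across reboots
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p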
user@vm-rch-node-01:~$ ip route
default via 10.21.0.1 dev ens160 proto static
10.21.0.0/16 dev ens160 proto kernel scope link src 10.21.0.11
Configuring a tunnel between the current cluster and the new cluster
Variables:
19.4.15.14 – the public ("white") IP address on the current cluster's side, used to communicate with the new cluster
8.10.7.2 – the public IP address assigned to vm-firewall; the tunnel is established through it and all communication with the current cluster goes over it
IPsec technology is selected for the tunnel. The settings are made on the VM vm-firewall.
On the current cluster's side, we configure the entry point on its firewall and save a file with the phase 1 and phase 2 settings needed to bring up the tunnel.
Installing packages:
apt-get install racoon ipsec-tools
Two components are installed in the system:
The racoon daemon, which manages the ISAKMP tunnel.
The setkey utility, which manages the SAs of the data tunnels.
Let's start with the first one. Racoon is responsible for the parameters used to authorize tunnels within IKE. It is a daemon configured with a single configuration file (/etc/racoon/racoon.conf) and started by a regular init script (/etc/init.d/racoon):
The remote block contains the phase 1 settings.
The sainfo block contains the phase 2 settings (a minimal example is sketched below).
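The original does not reproduce racoon.conf itself; a minimal sketch, assuming pre-shared-key authentication and AES/SHA1 proposals (the key file, algorithms, and DH group are assumptions to adjust to your setup):

log notify;
path pre_shared_key "/etc/racoon/psk.txt";

# phase 1 (ISAKMP) settings for the peer on the current cluster's side
remote 19.4.15.14 {
        exchange_mode main;
        proposal {
                encryption_algorithm aes;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group modp1024;
        }
}

# phase 2 (IPsec SA) settings for traffic between the two networks
sainfo address 10.21.0.0/24 any address 10.1.0.0/16 any {
        encryption_algorithm aes;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}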
Next, let's deal with the setkey utility. It also starts as a daemon (/etc/init.d/setkey), but in fact it just executes the /etc/ipsec-tools.conf script. As mentioned earlier, it sets up the tunnels for user traffic, namely their SAs and SPs:
# NOTE: Do not use this file if you use racoon with racoon-tool
# utility. racoon-tool will setup SAs and SPDs automatically using
# /etc/racoon/racoon-tool.conf configuration.
#
## Flush the SAD and SPD
#
flush;
spdflush;

## Some sample SPDs for use racoon
#
spdadd 10.21.0.0/24 10.1.0.0/16 any -P out ipsec
        esp/tunnel/8.10.7.2-19.4.15.14/require;
spdadd 10.1.0.0/16 10.21.0.0/24 any -P in ipsec
        esp/tunnel/19.4.15.14-8.10.7.2/require;
Remember, the tunnel only comes up when traffic flows into it, so we need to generate some traffic toward the destination. We start a ping from node 10.21.0.11 to a node of the current cluster (10.1.1.13):
user@vm-rch-node-01:~$ ping 10.1.1.13
PING 10.1.1.13 (10.1.1.13) 56(84) bytes of data.
64 bytes from 10.1.1.13: icmp_seq=1 ttl=61 time=4.67 ms
64 bytes from 10.1.1.13: icmp_seq=2 ttl=61 time=2.55 ms
64 bytes from 10.1.1.13: icmp_seq=3 ttl=61 time=2.94 ms
64 bytes from 10.1.1.13: icmp_seq=4 ttl=61 time=4.07 ms
^C
--- 10.1.1.13 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 2.556/3.563/4.671/0.850 ms
There should be replies from the other side with a small delay (unless, of course, ICMP is blocked somewhere along the path).
Checking the route settings on the same node:
user@vm-rch-node-01:~$ traceroute 10.1.1.13
traceroute to 10.1.1.13 (10.1.1.13), 30 hops max, 60 byte packets
 1  _gateway (10.21.0.1)  0.113 ms  0.080 ms  0.096 ms
 2  * * *
 3  10.1.1.13 (10.1.1.13)  2.846 ms  2.875 ms  2.874 ms
On vm-firewall, we check whether the ISAKMP tunnel has come up:
root@vm-firewall:~# racoonctl show-sa isakmp
Destination       Cookies                            Created
10.1.0.1.500      356a7e134251a93f:30071210375c165   2021-02-19 09:18:28
We can also see whether the tunnels for user data have been created using the commands racoonctl show-sa esp or setkey -D.
If the tunnel does not come up, change the log level from log notify to log debug2 in /etc/racoon/racoon.conf and restart the services:
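The restart commands are not shown in the original; with the init scripts mentioned above they would be something like:

# re-apply the SPD and restart the IKE daemon with the new log level
/etc/init.d/setkey restart
/etc/init.d/racoon restart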
We prohibit connections to port 22 (SSH) for everyone, but allow connections from the current cluster's network (10.1.0.0/16):
iptables -A INPUT -p tcp -s 10.1.0.0/16 --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
We also need to reach the Nexus web interface. Since Nexus is installed on the internal machine 10.21.0.100, it has to be reachable from the public address 8.10.7.2. To do this, we add iptables rules that forward port 8081 on 8.10.7.2 to port 8081 on 10.21.0.100, where Nexus is running. The Nexus web interface then becomes available at 8.10.7.2:8081.
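In command form, the forwarding rules (mirroring the NAT entries visible in the saved rule set below) are:

# forward connections arriving at 8.10.7.2:8081 to Nexus on 10.21.0.100:8081
iptables -t nat -A PREROUTING -d 8.10.7.2 -p tcp --dport 8081 -j DNAT --to-destination 10.21.0.100:8081
# companion SNAT rule, as it appears in the saved rule set below
iptables -t nat -A POSTROUTING -d 10.21.0.100 -p tcp --sport 8081 -j SNAT --to-source 8.10.7.2:8081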
After adding new rules (by the way, they take effect immediately and do not require a reboot), we need to save them so that they are re-applied after the machine reboots.
To do this, run the command:
iptables-save > /etc/iptables/rules.v4
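Note that /etc/iptables/rules.v4 is the file restored at boot by the iptables-persistent (netfilter-persistent) package; if it is not installed yet, an assumed install step would be:

apt-get install -y iptables-persistent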
You can also just see which rules will be saved without writing to a file:
root@vm-firewall:~# iptables-save
# Generated by iptables-save v1.6.1 on Sat Feb 20 10:23:16 2021
*filter
:INPUT ACCEPT [1367208:430732612]
:FORWARD ACCEPT [2626485:2923178076]
:OUTPUT ACCEPT [887495:574719960]
-A INPUT -s 10.1.0.0/16 -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j DROP
-A OUTPUT -p tcp -m tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sat Feb 20 10:23:16 2021
# Generated by iptables-save v1.6.1 on Sat Feb 20 10:23:16 2021
*nat
:PREROUTING ACCEPT [590459:52961865]
:INPUT ACCEPT [218805:21703262]
:OUTPUT ACCEPT [130:9341]
:POSTROUTING ACCEPT [1549:81533]
-A PREROUTING -d 8.10.7.2/32 -p tcp -m tcp --dport 8081 -j DNAT --to-destination 10.21.0.100:8081
-A POSTROUTING -s 10.21.0.0/16 -d 10.1.0.0/16 -o ens160 -m policy --dir out --pol ipsec -j ACCEPT
-A POSTROUTING -d 10.21.0.100/32 -p tcp -m tcp --sport 8081 -j SNAT --to-source 8.10.7.2:8081
-A POSTROUTING -o ens160 -j MASQUERADE
COMMIT
# Completed on Sat Feb 20 10:23:16 2021