Rancher. Adding a new cluster
Introduction
Rancher is a system that runs on Kubernetes and lets you work with multiple clusters through a single entry point. Rancher helps streamline the deployment of clusters in different environments: bare-metal (dedicated) servers, private clouds, and public clouds, and it unites the clusters with common authentication, access control, and security policies.
Regardless of how a cluster is created and where it is located, Rancher provides access to all clusters through one control panel.
Tasks
- Resource markup:
  - 3 service nodes (etcd, control plane)
  - 4 worker nodes
  - 1 virtual machine inside the new cluster to create a tunnel
- Configure communication between networks:
  - First network: internal network with the Rancher cluster and the Rancher management server
  - Second network: external network with a Rancher cluster on a bare-metal server
- Add a Nexus server to store Helm and Docker artifacts
Installing VMware
Given: a dedicated server with the following characteristics: 2 x Xeon Gold 6240R (24 cores each), 320 GB DDR4 RAM, 4 x 480 GB SSD.
RAID 10 was configured on the server and VMware vSphere Hypervisor 7 was installed with the free license, which is limited to 8 vCPUs per VM.
Installing and configuring virtual machines
# | Name          | CPU | RAM, GB | HDD, GB | Private IP  | Public IP
1 | vm-firewall   | 1   | 4       | 16      | 10.21.0.1   | 8.10.7.2
2 | vm-rch-node-1 | 8   | 32      | 60      | 10.21.0.11  |
3 | vm-rch-node-2 | 8   | 32      | 60      | 10.21.0.12  |
4 | vm-rch-node-3 | 8   | 32      | 60      | 10.21.0.13  |
5 | vm-rch-node-4 | 8   | 32      | 60      | 10.21.0.14  |
6 | vm-rch-etcd-1 | 2   | 8       | 20      | 10.21.0.15  |
7 | vm-rch-etcd-2 | 2   | 8       | 20      | 10.21.0.16  |
8 | vm-rch-etcd-3 | 2   | 8       | 20      | 10.21.0.17  |
9 | vm-nexus      | 4   | 16      | 200     | 10.21.0.100 | 8.10.7.2:8081
Each VM runs the Ubuntu Server 18 distribution.
On each cluster VM (except vm-nexus), run the commands to install Docker (a sketch is shown below):
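The exact commands the author used are not reproduced in the text; a minimal sketch following the standard Docker repository procedure for Ubuntu 18 could look like this:

apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io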
And turn off swap: in /etc/fstab, delete the line responsible for the swap file, and delete the swap file itself (/swap):
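A sketch of these steps, assuming the swap file is /swap as mentioned above:

swapoff -a                      # disable swap immediately
sed -i '/swap/d' /etc/fstab     # remove the swap line from /etc/fstab
rm -f /swap                     # delete the swap file itself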
Network architecture
IP addresses on all VMs are assigned manually. The network and route settings in each VM are stored in the file /etc/netplan/00-installer-config.yaml.
The current cluster is on the 10.1.0.0/16 network
For the internal LAN of the new cluster, the address range is 10.21.0.0/16
The vm-firewall machine has two network interfaces:
- ens160. The public ("white") address 8.10.7.2 issued by the provider is attached to this interface. This address is used to connect to the traffic entry point of the current cluster and to create the tunnel.
- ens192. The interface for the internal network of the new cluster. It acts as the gateway for all VMs within the new cluster's network.
Network settings on vm-firewall (a sketch of the netplan config is shown below):
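The original file is not reproduced in the text; a hypothetical /etc/netplan/00-installer-config.yaml for vm-firewall. The interface names and addresses follow the article, while the prefix length and gateway of the public network and the DNS servers are assumptions:

network:
  version: 2
  ethernets:
    ens160:
      addresses: [8.10.7.2/24]
      gateway4: 8.10.7.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
    ens192:
      addresses: [10.21.0.1/16]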
Network settings on any other VM (see the sketch below):
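A hypothetical config for an internal VM, taking vm-rch-node-1 as an example; the address comes from the table above, while the interface name and DNS server are assumptions:

network:
  version: 2
  ethernets:
    ens160:
      addresses: [10.21.0.11/16]
      gateway4: 10.21.0.1
      nameservers:
        addresses: [8.8.8.8]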
After editing the netplan config file, you need to check the syntax and then apply the settings:
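One way to do this (netplan generate renders and validates the configuration, netplan apply applies it):

netplan generate   # validate and render the configuration
netplan apply      # apply the settings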
After that, check how the IP address settings were applied:
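For example:

ip a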
And how the route settings were applied:
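For example:

ip r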
Configuring a tunnel between the current cluster and the new cluster
Variables:
- 19.4.15.14 – the public ("white") IP address on the current cluster's side, used to communicate with the new cluster
- 8.10.7.2 – the public ("white") IP address assigned to vm-firewall; it serves as the point through which the tunnel is created and communication with the current cluster takes place
IPsec technology is selected for the tunnel. The settings are made on the VM vm-firewall.
On the side of the current cluster's internal network, we configure the entry point on the firewall and save a file with the phase 1 and phase 2 settings needed to bring up the tunnel.
Installing packages:
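On Ubuntu, the two components mentioned below come from the racoon and ipsec-tools packages:

apt-get install -y racoon ipsec-tools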
Two components are installed in the system:
- the racoon daemon, which manages the ISAKMP tunnel;
- the setkey utility, which manages the SA data tunnels.
Let's start with the first one. Racoon is responsible for the tunnel authorization parameters within IKE. It is a daemon configured with a single configuration file (/etc/racoon/racoon.conf) and started by a regular init script (/etc/init.d/racoon):
The remote block contains the phase 1 settings.
The sainfo block contains the phase 2 settings.
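The original file is not reproduced in the text; below is a hypothetical sketch of /etc/racoon/racoon.conf. The peer address and networks follow the article, while the algorithms, DH/PFS group, and other parameters are assumptions and must match the settings saved on the current cluster's firewall:

path pre_shared_key "/etc/racoon/psk.txt";
log notify;

remote 19.4.15.14 {
        exchange_mode main;
        proposal {
                encryption_algorithm aes;
                hash_algorithm sha256;
                authentication_method pre_shared_key;
                dh_group 14;
        }
}

sainfo address 10.21.0.0/16 any address 10.1.0.0/16 any {
        pfs_group 14;
        encryption_algorithm aes 256;
        authentication_algorithm hmac_sha256;
        compression_algorithm deflate;
}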
In the psk.txt file, specify the key that was set on the entry point on the current cluster's side:
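The file maps the remote peer's address to the pre-shared key; the key itself is of course not shown here:

# /etc/racoon/psk.txt
# <peer address>   <pre-shared key>
19.4.15.14    <your-pre-shared-key>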
Next, we deal with the setkey utility. It also starts as a daemon (/etc/init.d/setkey), but in fact it executes the /etc/ipsec-tools.conf script. As stated earlier, it sets up the tunnels for user traffic, namely the SA and SP entries for them:
root@vm-firewall:~# cat /etc/ipsec-tools.conf
# NOTE: Do not use this file if you use racoon with racoon-tool
## Some sample SPDs for use racoon
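The policy entries themselves are not reproduced in the text; a hypothetical set of SPD entries for this topology could look as follows (flush/spdflush clear the kernel state, and the two spdadd rules require ESP in tunnel mode between the two public addresses for traffic between the two LANs):

flush;
spdflush;

spdadd 10.21.0.0/16 10.1.0.0/16 any -P out ipsec
    esp/tunnel/8.10.7.2-19.4.15.14/require;

spdadd 10.1.0.0/16 10.21.0.0/16 any -P in ipsec
    esp/tunnel/19.4.15.14-8.10.7.2/require;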
We restart the services in this sequence:
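The article does not show the exact commands; presumably setkey is restarted first so that the policies are in place, then racoon:

systemctl restart setkey
systemctl restart racoon
# or, via the init scripts mentioned above:
# /etc/init.d/setkey restart && /etc/init.d/racoon restart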
Functional check
Remember that the tunnel only comes up when traffic flows into it, so we need to ping the destination. Start a ping from node 10.21.0.11 to a node of the current cluster (10.1.1.13):
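For example, on 10.21.0.11:

ping 10.1.1.13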
There should be replies from the other side with a small delay (unless, of course, ICMP is blocked somewhere along the path).
Checking the route settings on the same node:
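For example:

ip r
# or, for the specific destination:
ip route get 10.1.1.13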
On the vm-firewall, we check whether the ISAKMP tunnel has come up:
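For example:

racoonctl show-sa isakmp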
We can also see whether the tunnels with user data have been created, using the commands racoonctl show-sa esp or setkey -D:
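For example:

racoonctl show-sa esp
setkey -D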
If the tunnel is not established, then you need to change the log level from log notify to log debug2 in the /etc/racoon/racoon.conf file and restart the services:
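A sketch of the change and restart, assuming the log directive in /etc/racoon/racoon.conf currently reads "log notify;":

sed -i 's/^log notify;/log debug2;/' /etc/racoon/racoon.conf
systemctl restart setkey
systemctl restart racoon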
After that, detailed logs will appear in /var/log/syslog and in the output of racoon -F.
Read man racoon.conf carefully, along with the examples in /usr/share/doc/racoon/.
After debugging, do not forget to return the log level to its original value.
Installing and configuring the Nexus
Go to the machine where you need to install Nexus and install it according to the instructions: https://help.sonatype.com/repomanager3/installation
Check that Nexus is responding on port 8081:
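For example, from inside the network:

curl -I http://10.21.0.100:8081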
Further settings are made on the VM vm-firewall.
Install the package that provides the ability to save and restore iptables rules:
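The usual package for this on Ubuntu is iptables-persistent:

apt-get install -y iptables-persistent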
Configure packet forwarding (IP forwarding):
Open the /etc/sysctl.conf file, find the line responsible for IP forwarding, and remove the comment from it (see the sketch below):
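On stock Ubuntu the line is commented out by default; once uncommented it reads:

net.ipv4.ip_forward=1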
Save the file, exit, and reboot the machine with the reboot command.
Check that the setting has been applied:
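For example:

sysctl net.ipv4.ip_forward
# expected output: net.ipv4.ip_forward = 1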
Add a rule to iptables to allow packet forwarding:
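The exact rule is not reproduced in the text; a minimal, permissive sketch for the FORWARD chain (in practice you would normally restrict it to specific interfaces or networks):

iptables -A FORWARD -j ACCEPT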
We check that the rule has been added:
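For example:

iptables -L FORWARD -n -v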
We prohibit connections to port 22 (SSH) for everyone, but allow connections from the current cluster's network (10.1.0.0/16):
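A sketch of such rules; the order matters, since the allow rule must be evaluated before the drop rule:

iptables -A INPUT -p tcp --dport 22 -s 10.1.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP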
You need to connect to the Nexus web interface. Since Nexus is installed on the internal machine 10.21.0.100, we have to make it reachable from the public address 8.10.7.2. To do this, we add an iptables rule that forwards port 8081 on the public address to port 8081 on 10.21.0.100, where Nexus is running. Thus, the Nexus web interface will be available at 8.10.7.2:8081.
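A hypothetical DNAT rule, paired with a MASQUERADE rule so that reply traffic returns through vm-firewall:

iptables -t nat -A PREROUTING -d 8.10.7.2 -p tcp --dport 8081 -j DNAT --to-destination 10.21.0.100:8081
iptables -t nat -A POSTROUTING -d 10.21.0.100 -p tcp --dport 8081 -j MASQUERADE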
Saving iptables
After we have added the new rules (by the way, they are applied immediately and do not require a reboot), we need to save them so that they are applied again after the machine reboots.
To do this, run the command:
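With iptables-persistent installed, one common way is:

netfilter-persistent save
# or directly:
iptables-save > /etc/iptables/rules.v4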
You can also just see which rules will be saved without writing to a file:
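For example:

iptables-save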
Debugging iptables
If you need to remove all iptables rules:
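For example (this flushes the filter table; add the second command to flush the NAT rules as well):

iptables -F
iptables -t nat -F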
Restoring rules from a file:
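For example, assuming the rules were saved to the default iptables-persistent location:

iptables-restore < /etc/iptables/rules.v4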
The author of the article: https://www.facebook.com/ymalov/