Orchestrating Configurations with SaltStack
Salt Master
Salt Master is the central server that manages minions (agents) and distributes commands. Its main functions:
Command management: sending commands to minions to perform various tasks.
Data collection: gathering information about the state of minions.
Orchestration: coordinating complex tasks that require interaction between multiple systems.
Salt Minion
Salt Minion is an agent installed on a managed node (a server, virtual machine, etc.). It performs the following functions:
Command execution: receives and executes commands from the master.
Data collection: sends system status data to the master.
Local control: can perform tasks autonomously, without a constant connection to the master.
Orchestration with SaltStack
In SaltStack, orchestration means managing multiple nodes and services from a central point so that tasks and configuration changes are executed in a coordinated, synchronized way. SaltStack follows an Infrastructure as Code (IaC) approach: configurations and tasks are defined as code files that can be versioned and managed with a version control system.
Salt Orchestrate Runner is a tool in SaltStack for managing and coordinating task execution across multiple nodes. It is used to write and run orchestration scripts, which can include executing commands, applying states, and managing configurations.
A simple orchestration scenario might involve running a command on multiple nodes:
# /srv/salt/orchestrate/simple_orchestrate.sls
execute_command:
  salt.function:
    - name: cmd.run
    - tgt: '*'
    - arg:
      - echo "Hello, world!"
Here the script /srv/salt/orchestrate/simple_orchestrate.sls defines the task execute_command, which uses the cmd.run function to execute the command echo "Hello, world!" on all nodes (tgt: '*').
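An orchestration file like this is executed on the master with the orchestrate runner, not with the regular salt command. Assuming the file sits under the master's file root as shown above, the invocation would be:

```shell
# Run the orchestration from the Salt master
salt-run state.orchestrate orchestrate.simple_orchestrate
```

The dotted name mirrors the file's path relative to /srv/salt, just as with state.apply.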
In a more complex scenario, tasks can be executed sequentially:
# /srv/salt/orchestrate/complex_orchestrate.sls
prepare_servers:
  salt.state:
    - tgt: 'web*'
    - sls: webserver.setup

deploy_application:
  salt.state:
    - tgt: 'app*'
    - sls: app.deploy
    - require:
      - salt: prepare_servers
In this example, the webserver.setup state is first applied to nodes matching the target pattern web*, and only after it completes successfully is the app.deploy state applied to nodes matching app*.
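The chain can be extended in the same way. For instance, a hypothetical verification step (the health-check URL and port are assumptions, not part of the original setup) could run only after the deployment succeeds:

```yaml
# /srv/salt/orchestrate/complex_orchestrate.sls (additional, hypothetical step)
verify_deploy:
  salt.function:
    - name: cmd.run
    - tgt: 'app*'
    - arg:
      - curl -fsS http://localhost:8080/health   # hypothetical health endpoint
    - require:
      - salt: deploy_application
```

Mixing salt.state and salt.function steps in one orchestration lets a single file describe both configuration changes and the checks around them.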
SLS files are used to define states and tasks that SaltStack should perform. These files are written in YAML format and contain descriptions of the state of systems and actions to be performed.
Example of an SLS file:
# /srv/salt/webserver/setup.sls
install_nginx:
  pkg.installed:
    - name: nginx

start_nginx:
  service.running:
    - name: nginx
    - enable: True
    - require:
      - pkg: install_nginx
Here the SLS file /srv/salt/webserver/setup.sls defines two states: installing the nginx package and running the nginx service. The require condition ensures that the service is started only after the package has been installed.
To apply an SLS file to minions, use the salt command (salt-run is used for runners such as orchestration):
salt 'web*' state.apply webserver.setup
This command applies the states defined in webserver.setup to all nodes matching the pattern web*.
How does all this look in practice?
Let's walk through a web server configuration exercise. We'll start by setting up a load balancer, continue by bringing all servers in the cluster to a consistent configuration, and finish with automating and monitoring these processes.
Setting Up a Load Balancer with Nginx
Let's create an SLS file to install Nginx:
# /srv/salt/loadbalancer/install_nginx.sls
install_nginx:
  pkg.installed:
    - name: nginx

configure_nginx:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://loadbalancer/nginx.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_nginx

start_nginx:
  service.running:
    - name: nginx
    - enable: True
    - require:
      - file: configure_nginx
Example of an Nginx configuration file:
# /srv/salt/loadbalancer/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    upstream backend {
        server webserver1.example.com;
        server webserver2.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
Applying the configuration:
salt 'loadbalancer*' state.apply loadbalancer.install_nginx
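After the state run, it is worth verifying that the rendered configuration is valid on the balancer nodes. One way to do this (assuming nginx is on the minions' PATH) is:

```shell
# Validate the nginx configuration on all load balancer minions
salt 'loadbalancer*' cmd.run 'nginx -t'
```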
Consistent Configuration of All Cluster Servers
Let's create an SLS file to install and configure the web server:
# /srv/salt/webserver/setup.sls
install_nginx:
  pkg.installed:
    - name: nginx

configure_nginx:
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://webserver/nginx.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: install_nginx

start_nginx:
  service.running:
    - name: nginx
    - enable: True
    - require:
      - file: configure_nginx
Example of a configuration file for a web server:
# /srv/salt/webserver/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name webserver.example.com;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
    }
}
Applying the configuration to all web servers:
salt 'web*' state.apply webserver.setup
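The two steps can also be tied together in a single orchestration, so the load balancer is only (re)configured once the backends are ready. A minimal sketch (the file name cluster_rollout.sls is an assumption):

```yaml
# /srv/salt/orchestrate/cluster_rollout.sls (hypothetical file name)
setup_webservers:
  salt.state:
    - tgt: 'web*'
    - sls: webserver.setup

setup_loadbalancer:
  salt.state:
    - tgt: 'loadbalancer*'
    - sls: loadbalancer.install_nginx
    - require:
      - salt: setup_webservers
```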
Automation and Monitoring with SaltStack
Setting up Salt Mine to collect data (mine_functions belongs in the minion configuration, not in the state tree):
# /etc/salt/minion.d/mine.conf
mine_functions:
  network.ip_addrs: []
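Once the mine function is configured and the minions have refreshed their mine data, other minions (for example, the load balancer) can query it:

```shell
# Refresh mine data, then query web server IP addresses from the master
salt 'web*' mine.update
salt 'loadbalancer*' mine.get 'web*' network.ip_addrs
```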
Setting up Beacons to monitor server status (beacons are also configured on the minion):
# /etc/salt/minion.d/beacons.conf
beacons:
  service:
    - services:
        nginx: {}
    - interval: 10
Setting up Reactors to automatically respond to events (the reactor mapping lives in the master configuration):
# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/beacon/*/service/':
    - /srv/salt/reactor/service_restart.sls
Example of a Reactor SLS file that restarts the service (reactor files do not support requisites such as require, so the command is issued directly in response to the event):
# /srv/salt/reactor/service_restart.sls
restart_service:
  local.cmd.run:
    - tgt: 'web*'
    - arg:
      - systemctl restart nginx
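To check that the reactor mapping works without waiting for a real service failure, an event with a matching tag can be sent manually from a minion (the minion ID minion1 and the event payload here are placeholders):

```shell
# Fire a test event that matches the reactor's tag pattern
salt-call event.send 'salt/beacon/minion1/service/' '{"nginx": {"running": false}}'
```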