Rewriting our Object Cloud
Hello everyone, Happy Knowledge Day and the beginning of autumn!
As Vissarion Grigorievich Belinsky, and later Vladimir Ilyich Lenin himself, said:
“Study, study and study again! Long live science!”
So let's take up knowledge and see what a storehouse of useful information can give us: GitHub, scraps of scattered information from the very depths of the global network, a study of the manuals, and a little ingenuity.
In the last part we took our first steps in NixOS, treating the Nix “philosophy” like BashSible (an Ansible playbook made of bash inserts). Today we will rework our system configuration file and look at monitoring on the same poor machine (when we get to containers and VMs, we will move everything superfluous off our poor storage box, improving some things and adding others).
What’s on our menu:
Add a “repository” (Nix channel) with the latest (unstable) packages, see how the updated packages behave, and see how to remove packages.
Update configuration.nix
Grafana+Prometheus
Let’s start our dive into the amazing world of Nix:
The first thing we will do is consciously switch to the dark unstable side, knowing full well that this brings both pluses and minuses.
Pros:
Solving old problems, patching security holes.
Possibility of software expansion and optimization.
Up-to-date software versions.
Cons:
New problems.
Possibly unstable, unpredictable package behavior (this happens everywhere).
You need to check the documentation for what functionality was removed and/or replaced.
Having weighed all the pros and cons, we move on:
Updating packages
We look at the number of connected repositories:
# nix-channel --list
nixos https://nixos.org/channels/nixos-23.05
Add a channel from which the system will receive “unstable” packages:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable unstable
Add the unstable channel in configuration.nix:
{ config, pkgs, lib, modulesPath, ... }:
let
unstableTarball =
fetchTarball
https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz;
in
#####
{
imports =
[ # Include the results of the hardware scan.
./hardware-configuration.nix
# ./apps/minio-server.nix
];
######
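The snippet above only shows the imports; for the `unstable.` attribute used later in this article to actually resolve, the tarball also has to be wired into nixpkgs. A sketch of the usual pattern (as found on the NixOS wiki) that goes inside the same configuration block:

```nix
# Make packages from the unstable tarball available as pkgs.unstable.*
nixpkgs.config = {
  packageOverrides = pkgs: {
    unstable = import unstableTarball {
      # reuse our main nixpkgs config (e.g. allowUnfree)
      config = config.nixpkgs.config;
    };
  };
};
```

With this in place, entries like `unstable.minio` in `environment.systemPackages` refer to the unstable build.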
Update the list of repositories:
# nix-channel --update
unpacking channels...
Finding and updating just one package:
# nix search <package>
# nix-env -f channel:nixpkgs-unstable -iA <package>
Let’s rewrite the block with caddy now:
It was:
systemd service
systemd.services.caddy = {
  description = "Caddy";
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    EnvironmentFile = "/etc/caddy/Caddyfile";
    ExecStart = "${pkgs.caddy}/bin/caddy run --environ --config /etc/caddy/Caddyfile";
    ExecStop = "${pkgs.caddy}/bin/caddy stop";
    ExecReload = "${pkgs.caddy}/bin/caddy reload --config /etc/caddy/Caddyfile --force";
    Type = "notify";
    Restart = "always";
    LimitNOFILE = "1048576";
    TimeoutStopSec = "infinity";
    SendSIGKILL = "no";
    User = "root"; #caddy
    Group = "root"; #caddy
    ProtectProc = "invisible";
    PrivateTmp = "true";
    AmbientCapabilities = "CAP_NET_BIND_SERVICE";
  };
};
systemd.services.caddy.enable = true;
Caddyfile
localhost {
header /* {
-Server
Permissions-Policy interest-cohort=()
Strict-Transport-Security max-age=31536000;
X-Content-Type-Options nosniff
X-Frame-Options DENY
Referrer-Policy no-referrer-when-downgrade
}
route /console/* {
uri strip_prefix /console
reverse_proxy http://127.0.0.1:9001
}
}
It became:
services.caddy = {
enable = true;
extraConfig =
''
localhost {
header /* {
-Server
Permissions-Policy interest-cohort=()
Strict-Transport-Security max-age=31536000;
X-Content-Type-Options nosniff
X-Frame-Options DENY
Referrer-Policy no-referrer-when-downgrade
}
route /console/* {
uri strip_prefix /console
reverse_proxy http://127.0.0.1:9001
}
}'';
};
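If we also want the Caddy binary itself to come from the unstable channel, the module exposes a `package` option; a sketch (assuming the `unstable` overlay set up earlier in this article):

```nix
services.caddy = {
  enable = true;
  # take the daemon from the unstable channel instead of 23.05
  package = pkgs.unstable.caddy;
};
```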
This method has its advantages: the configuration is much shorter, and NixOS manages the systemd unit for us.
Let's continue and update our services.
Update Minio.
We remember that last time the minio-server version from the package manager was problematic, so we install it from the unstable channel:
nix-env -f channel:nixpkgs-unstable -iA minio
This gives us the version that is currently the most recent:
# minio -v
minio version RELEASE.2023-08-16T20-17-30Z (commit-id=RELEASE.2023-08-16T20-17-30Z)
Runtime: go1.20.7 linux/amd64
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Copyright: 2015-0000 MinIO, Inc.
and remove it:
nix-env --uninstall minio
and comment out the lines:
# minio
# minio-client
And our service too.
Then we run our favorite command and clean the cache:
nixos-rebuild switch && nix-collect-garbage
Now let's think about how to implement this in the configuration file. Go to the config:
nano /etc/nixos/configuration.nix
Next, a complete reinstallation of Minio.
Let's clean out our storage (since this is a test machine and the data is not really needed, we can afford risky steps). Comment out the lines:
# unstable.minio
# unstable.minio-client
And our service too.
Then we run our favorite command and clean the cache:
nixos-rebuild switch && nix-collect-garbage
Then uncomment and update our lines:
unstable.minio
unstable.minio-client
Update the config:
systemd.services.minioserver = {
  enable = true;
  description = "MinioServer";
  wantedBy = [ "default.target" ];
  serviceConfig = {
    EnvironmentFile = "/etc/default/minio";
    ExecStart = "${pkgs.minio}/bin/minio server --json $MINIO_OPTS $MINIO_VOLUMES";
    Type = "notify";
    Restart = "always";
    LimitNOFILE = "1048576";
    TimeoutStopSec = "infinity";
    SendSIGKILL = "no";
    User = "root";
    Group = "root";
    ProtectProc = "invisible";
  };
};
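The unit reads its settings from /etc/default/minio, which the article does not show, so here is a hypothetical sketch of what that file might contain (the volume path is an assumption; the ports and credentials follow the rest of the article):

```shell
# /etc/default/minio — environment for the minioserver unit (hypothetical example)
MINIO_VOLUMES="/var/lib/minio/data"
MINIO_OPTS="--address :9000 --console-address :9001"
MINIO_ROOT_USER="barsik"
MINIO_ROOT_PASSWORD="barsikpassword"
```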
And this service will save us from having to set up the alias to our server all over again. If need be, it can also be extended with commands we want, for example, statistics:
systemd.services.minio-config = {
enable = true;
path = [ pkgs.minio pkgs.minio-client];
requiredBy = [ "multi-user.target" ];
after = [ "minioserver.service" ];
serviceConfig = {
Type = "simple";
User = "root";
Group = "root";
};
script = ''
set -e
mc alias set mys3 http://localhost:9000 barsik barsikpassword
'';
};
It remains to update the configuration.
Rewriting Prometheus
We clear everything out, as in the Minio example, and rewrite it.
Instead of prometheus we write:
environment.systemPackages = with pkgs; [
...
unstable.prometheus #prometheus
...
];
Since our credentials are (most likely) no longer valid, we update the alias:
# Here we take the path of least resistance,
# because we are 1000% sure that only we have access to it.
# If that is not the case, proceed by analogy with part 1.
mc alias set mylocal http://localhost:9000 barsik barsikpassword
and generate a config for prometheus:
# mc admin prometheus generate mylocal
scrape_configs:
- job_name: minio-job
bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoiYmFyc2lrIiwiZXhwIjo0ODQ2OTE2OTU2fQ.TBElC01EHRxMdjmbSmgaZsbA3scZS3FnxCP2CGZacmHEbOCSPj5YWHyLnayWsgK56QwNNSd8OJvrV8sU3t9wbw
metrics_path: /minio/v2/metrics/cluster
scheme: http
static_configs:
- targets: ['localhost:9000']
write to the service:
services.prometheus = {
enable = true;
scrapeConfigs = [{
job_name = "minio-job";
bearer_token = "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoiYmFyc2lrIiwiZXhwIjo0ODQ2OTE2OTU2fQ.TBElC01EHRxMdjmbSmgaZsbA3scZS3FnxCP2CGZacmHEbOCSPj5YWHyLnayWsgK56QwNNSd8OJvrV8sU3t9wbw";
metrics_path = "/minio/v2/metrics/cluster";
scheme = "http";
static_configs = [{
targets = ["localhost:9000"];
labels = { alias = "minio"; };
}];
}];
};
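One caveat: adding unstable.prometheus to environment.systemPackages only puts the binaries in PATH; the services.prometheus module runs its own package. If the goal is to run the unstable build as the service too, the module's `package` option can be pointed at it (a sketch, assuming the `unstable` overlay from earlier):

```nix
services.prometheus.package = pkgs.unstable.prometheus;
```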
Our signature command:
nixos-rebuild switch
and check on Prometheus:
# systemctl status prometheus.service
● prometheus.service
Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; preset: enabled)
Active: active (running) since Tue 2023-08-29 20:07:12 MSK; 3s ago
Main PID: 60628 (prometheus)
IP: 0B in, 0B out
IO: 140.0K read, 0B written
Tasks: 7 (limit: 4694)
Memory: 15.6M
CPU: 432ms
CGroup: /system.slice/prometheus.service
So it is running, and the graphs are already available in the web console.
Let's improve our PostgreSQL.
And bring services.postgresql in line with the overall design:
services.postgresql = {
enable = true;
ensureDatabases = [ "miniodb" ];
enableTCPIP = true;
authentication = pkgs.lib.mkOverride 10 ''
#type database DBuser auth-method
local all all trust
host all all 127.0.0.1/32 trust
# ipv6
host all all ::1/128 trust
'';
initialScript = pkgs.writeText "backend-initScript" ''
CREATE ROLE minio WITH LOGIN PASSWORD 'minio' CREATEDB;
CREATE DATABASE minio;
GRANT ALL PRIVILEGES ON DATABASE minio TO minio;
'';
};
Then go to the web console and find Events:
Setting up our connection:
Look at the result:
If we want to see what exactly is in the database and tables:
# sudo -u postgres psql
psql (14.9)
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
minio | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres+
| | | | | minio=CTc/postgres
miniodb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
minio=# \dt
List of relations
Schema | Name | Type | Owner
--------+-------+-------+-------
public | minio | table | minio
(1 row)
minio=# \dt+
List of relations
Schema | Name | Type | Owner | Persistence | Access method | Size | Description
--------+-------+-------+-------+-------------+---------------+------------+-------------
public | minio | table | minio | permanent | heap | 8192 bytes |
(1 row)
minio=# SELECT * FROM minio;
key | value
-----+-------
(0 rows)
Now let’s go to the web console->Buckets and see:
Create a bucket.
Watch the events.
Click on “Subscribe to event” and we get:
or in console:
Install jq:
nix-env -i jq
Find our ARN:
mc admin info --json mys3/backup | jq .info.sqsARN
arn:minio:sqs::PG1:postgresql
Register the ARN:
mc event add --event "put,get,delete" mys3/backup arn:minio:sqs::PG1:postgresql
Let’s try to upload data from the “Download” folder:
mc cp . mys3/backup --recursive
And check our events:
sudo -u postgres psql
psql (14.9)
Type "help" for help.
postgres=#\c minio
You are now connected to database "minio" as user "postgres".
minio=# SELECT * FROM minio;
Dealing with Backups
Let's update restic too:
environment.systemPackages = with pkgs; [
  unstable.restic
];
initialize the bucket:
export AWS_ACCESS_KEY_ID=barsik
export AWS_SECRET_ACCESS_KEY=barsikpassword
# restic -r s3:http://localhost:9000/resticbackup init
enter password for new repository:
enter password again:
created restic repository f906e494c7 at s3:http://localhost:9000/resticbackup
And now the config itself, which also deletes obsolete backups (in fact, we don't change the data here that often, so this isn't really necessary). Note that restic forget only removes snapshot references; to actually reclaim the space, you would also add --prune:
systemd.services.s3backup = {
  enable = true;
  script = ''
    export AWS_ACCESS_KEY_ID=barsik
    export AWS_SECRET_ACCESS_KEY=barsikpassword
    ${pkgs.restic}/bin/restic -r s3:http://localhost:9000/resticbackup backup /home/vasya -p /home/vasya/password
    ${pkgs.restic}/bin/restic -r s3:http://localhost:9000/resticbackup forget -p /home/vasya/password --keep-last 1
  '';
};
systemd.timers.s3backup = {
wantedBy = [ "timers.target" ];
partOf = [ "s3backup.service" ];
timerConfig = {
OnCalendar = "*:0/15"; # every 15 minutes
Unit = "s3backup.service";
};
};
Setting up Grafana:
add to /etc/nixos/configuration.nix:
in environment.systemPackages = with pkgs; [
...
unstable.grafana
...
];
In the Caddy service we add:
...
route /grafana* {
uri strip_prefix /grafana
reverse_proxy http://127.0.0.1:3000
}
...
And add the service:
services.grafana = {
enable = true;
settings = {
server = {
http_addr = "0.0.0.0";
http_port = 3000;
root_url = "https://localhost/grafana";
serve_from_sub_path = true;
};
};
};
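By the way, the Prometheus data source that we connect through the UI below can also be declared right in Nix via Grafana's provisioning support; a sketch (option paths as in recent NixOS modules — verify against your channel; the port is the NixOS Prometheus default):

```nix
services.grafana.provision = {
  enable = true;
  datasources.settings.datasources = [
    {
      name = "Prometheus";
      type = "prometheus";
      # default NixOS services.prometheus listen port
      url = "http://127.0.0.1:9090";
    }
  ];
};
```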
We are greeted by a welcome window with the standard admin/admin login. This is what met me:
We connect our Prometheus, set up the charts we need, and enjoy our dashboard:
Now it remains to update the line:
networking.firewall.allowedTCPPorts = [22 9000 80 443 ];
We update the config, and now we have fewer entry points to our server, which means it has become a little more secure against external intrusion.
Thank you all for your attention, I hope this article was useful.
And now, a breather, and then on to writing part 3.
P.S. I want to thank everyone who read and responded (it turns out I touched on a narrow topic that interests more than just me) and the NixOS Community, for giving me ideas on how these services can be improved.