Upgrading Zabbix 5.2 to Zabbix 6.0 on CentOS 8

I wanted to upgrade Zabbix to the latest version, but it could not be done in one go: along the way I had to update several systems one after another. There are plenty of separate manuals for each of them, but while rehearsing the upgrade on a test bench I collected excerpts from them into a single instruction, so that nothing would be forgotten or mixed up during the production run. A slightly tidied-up version of that instruction, supplemented with comments, is what I want to share.

Initial data before the update:

  • OS: CentOS 8

  • DBMS: PostgreSQL 12.6 with the TimescaleDB 2.1.1 extension

  • Zabbix server 5.2

The update process can be divided into several sequential steps:

  1. OS update

  2. Database update

  3. Zabbix update

  4. Updating Primary keys in tables with historical data

Work is performed under the root account unless otherwise specified (simply to avoid prefixing every command with sudo).

1. OS update

Since Red Hat stopped supporting CentOS 8 and disabled the repositories, it was decided to replace it with Oracle Linux. Fortunately, this is done very simply and almost automatically.

On my installation, there were no problems either during the update or after it.

Before starting, it is recommended to update all installed packages to the latest available versions (I skipped this step during the first test run and nothing terrible happened):

# Update the CentOS repository links (point them at vault.centos.org) and upgrade the installed packages to the latest available versions
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
dnf clean all
dnf update
dnf upgrade

The switch is performed with the centos2ol.sh script from Oracle. The README.md available at that link describes the details of its use. I made a backup in the form of a virtual machine snapshot, so the process of creating backups is not described here.

Preliminary steps:

  • Check for and remove non-standard kernels

  • Disable legacy and third party repositories

  • Make sure /var/cache has at least 5 GB of free space

  • Disable all automatic updates

  • Create a backup

All of this is described in detail in the Oracle upgrade guide (in English).
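
A few of these preconditions can be checked quickly from the shell; a minimal sketch (the exact names of automatic-update services are an assumption, check what is actually present on your host):

# List installed kernel packages; anything that did not come from the stock CentOS repos should be removed
rpm -qa 'kernel*' | sort
# Make sure the filesystem holding /var/cache has at least 5 GB free
df -h /var/cache
# Look for automatic-update services that should be disabled before the switch
systemctl list-unit-files | grep -Ei 'dnf-automatic|packagekit'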

# List the active repositories and disable the third-party ones
dnf repolist
# Note: this is my list of third-party repos; yours will most likely be different.
dnf config-manager --set-disabled epel epel-modular grafana pgdg-common pgdg12 pgdg13 timescale_timescaledb zabbix zabbix-non-supported
# Now download the script, make it executable and run it
wget https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
chmod +x centos2ol.sh
./centos2ol.sh

After the update, it is recommended to reboot the system:

Switch complete.
Oracle recommends rebooting this system.
[root@zabbix ~]# reboot

2. Database update

Starting with version 6.0, Zabbix requires PostgreSQL 13 or newer. I decided to install the latest stable version at the moment, 14.

The current PostgreSQL 12.6 has TimescaleDB 2.1.1 installed, while PostgreSQL 14 requires TimescaleDB 2.5+, so TimescaleDB has to be updated first. Since TimescaleDB 2.7 (the latest version available at the moment) still supports PostgreSQL 12, it can be upgraded directly; otherwise, you would have to go through an intermediate PostgreSQL version.
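
Before touching anything, it does not hurt to double-check which PostgreSQL version is actually running; a quick sanity check:

# Show the server version of the currently running cluster
sudo -u postgres psql -tAc 'show server_version;'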

su - postgres
# connect to the zabbix database
psql zabbix
# list the installed database extensions and their versions
zabbix=# \dx
                                      List of installed extensions
    Name     | Version |   Schema   |                            Description
-------------+---------+------------+-------------------------------------------------------------------
 plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
 timescaledb | 2.1.1   | public     | Enables scalable inserts and complex queries for time-series data
(2 rows)
zabbix=# \q
exit

You need to update the PostgreSQL repository:

# Update the PostgreSQL repository
rpm -Uvh https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
# Disable the repositories we do not need
dnf config-manager --set-disabled pgdg10 pgdg11 pgdg13
# Enable the one we do need
dnf config-manager --set-enabled timescale_timescaledb

Well, then we update TimescaleDB:

# Upgrade the package
dnf upgrade -y timescaledb-2-postgresql-12
su - postgres
psql zabbix
# Update the extension inside the database itself
ALTER EXTENSION timescaledb UPDATE;
# Check that the version has been updated to 2.7.0
\dx
                                      List of installed extensions
    Name     | Version |   Schema   |                            Description
-------------+---------+------------+-------------------------------------------------------------------
 plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
 timescaledb | 2.7.0   | public     | Enables scalable inserts and complex queries for time-series data
(2 rows)

\q

Now you can upgrade PostgreSQL itself. The process boils down to installing the new version alongside the old one and migrating the database to it. I did this with the pg_upgrade utility, although there are other ways.
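
For reference, the other common route is a plain dump and restore, which is slower on large databases but does not depend on the on-disk format. A rough sketch, assuming the old cluster is still on port 5432, the new one listens on 5532, and both are running:

# Dump everything from the 12 cluster and load it straight into the 14 cluster
sudo -u postgres sh -c '/usr/pgsql-14/bin/pg_dumpall -p 5432 | /usr/pgsql-14/bin/psql -p 5532 -d postgres'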

# Install PostgreSQL 14 and TimescaleDB for it
dnf install -y postgresql14-server timescaledb-2-postgresql-14
# Initialize the new cluster
postgresql-14-setup initdb
# Use the timescaledb-tune utility to adjust some settings in postgresql.conf:
timescaledb-tune --pg-config=/usr/pgsql-14/bin/pg_config
# In the same file, change the port so that it differs from the one used by the previous version
# port = 5532 (the default is port = 5432)
vim /var/lib/pgsql/14/data/postgresql.conf
# Enable the service
systemctl enable postgresql-14
# It is worth comparing the configuration files and carrying over the settings you need
diff /var/lib/pgsql/12/data/pg_hba.conf /var/lib/pgsql/14/data/pg_hba.conf

The pg_hba.conf file for PostgreSQL 14 uses scram-sha-256 password encryption by default (instead of md5 in previous versions). Ideally, you should upgrade the passwords (more precisely, the way they are hashed) to SCRAM (there are fairly detailed instructions for this in English), but you can simply change the setting back to md5.
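
If you decide to stay on md5 for now, it is literally one word per line in pg_hba.conf; a hypothetical example entry (your TYPE and ADDRESS values will differ):

# /var/lib/pgsql/14/data/pg_hba.conf
# TYPE  DATABASE  USER    ADDRESS       METHOD
host    zabbix    zabbix  127.0.0.1/32  md5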

I did this procedure on the already migrated database, but I think it is better to do it beforehand.

Upgrading to SCRAM

In order to upgrade passwords to SCRAM you need to make sure of 2 things:

  1. PostgreSQL 10 or higher is used.

  2. The drivers used to connect to the database support SCRAM.

The update itself goes like this:

  1. In the postgresql.conf file, set the password_encryption parameter to scram-sha-256.

  2. Decide which accounts need to update their passwords. My user is zabbix.

  3. Rehash the passwords: in practice this just means setting a new password (nobody stops you from entering the current one again) with the \password command in psql.

  4. Replace md5 in pg_hba.conf with scram-sha-256.

# Set password_encryption = scram-sha-256
vim /var/lib/pgsql/14/data/postgresql.conf
# Reload the PostgreSQL configuration
service postgresql-14 reload
# Run psql as the postgres user
su - postgres
psql
# Select the accounts that need upgrading
SELECT
    rolname, rolpassword ~ '^SCRAM-SHA-256\$' AS has_upgraded
FROM pg_authid
WHERE rolcanlogin;
# My result looks like this:
 rolname  | has_upgraded
----------+--------------
 postgres | t
 zabbix   | f
(2 rows)
# If has_upgraded is false (f) for a user, their password needs to be rehashed.
# Set the password for the zabbix user
\password zabbix
# Check that TRUE is now returned for all users
SELECT rolname, rolpassword ~ '^SCRAM-SHA-256\$' AS has_upgraded FROM pg_authid WHERE rolcanlogin;
# Now everyone is true
 rolname  | has_upgraded
----------+--------------
 postgres | t
 zabbix   | t
(2 rows)
# Quit psql
\q
# Replace md5 with scram-sha-256 in pg_hba.conf
vim /var/lib/pgsql/14/data/pg_hba.conf
# Return to the root account
exit
# Reload the PostgreSQL configuration once more
service postgresql-14 reload
# Now make sure the cluster is ready for the upgrade.
# To do this, run pg_upgrade with the --check flag; no changes are made at this stage.
sudo -iu postgres /usr/pgsql-14/bin/pg_upgrade \
  --old-datadir=/var/lib/pgsql/12/data \
  --new-datadir=/var/lib/pgsql/14/data \
  --old-bindir=/usr/pgsql-12/bin \
  --new-bindir=/usr/pgsql-14/bin \
  --old-options '-c config_file=/var/lib/pgsql/12/data/postgresql.conf' \
  --new-options '-c config_file=/var/lib/pgsql/14/data/postgresql.conf' \
  --check
  
# The output looks roughly like this:
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for user-defined encoding conversions              ok
Checking for user-defined postfix operators                 ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok
Checking for new cluster tablespace directories             ok

*Clusters are compatible*
# No errors, everything is ready for the upgrade.

Before starting the migration, make sure that both PostgreSQL services (old and new) are stopped. Zabbix is worth stopping as well.

service postgresql-12 stop
service postgresql-14 stop
service zabbix-server stop
service zabbix-agent2 stop
# Run the data migration
sudo -iu postgres /usr/pgsql-14/bin/pg_upgrade \
  --old-datadir=/var/lib/pgsql/12/data \
  --new-datadir=/var/lib/pgsql/14/data \
  --old-bindir=/usr/pgsql-12/bin \
  --new-bindir=/usr/pgsql-14/bin \
  --old-options '-c config_file=/var/lib/pgsql/12/data/postgresql.conf' \
  --new-options '-c config_file=/var/lib/pgsql/14/data/postgresql.conf'
# If everything finished without errors, swap the ports used by the old and new versions
# port = 5432
vim /var/lib/pgsql/14/data/postgresql.conf
# port = 5532
vim /var/lib/pgsql/12/data/postgresql.conf
# Start the services and check that Zabbix is working
service postgresql-14 start
service zabbix-server start
service zabbix-agent2 start
# You can analyze the database and gather statistics:
sudo -iu postgres /usr/pgsql-14/bin/vacuumdb --all --analyze-in-stages
# If everything is fine, the old cluster data can be removed:
/var/lib/pgsql/delete_old_cluster.sh
# If desired, PostgreSQL 12 itself can be removed as well
dnf remove postgresql12-server-12.11-1PGDG.rhel8.x86_64 postgresql12-libs-12.11-1PGDG.rhel8.x86_64

Now, finally, you can update Zabbix itself.

3. Update Zabbix

Zabbix itself is fairly easy to upgrade if the installation is not very large.

Keep in mind that the Zabbix server and Zabbix proxy must be of the same major version, otherwise the server will not accept data from the proxy. So there is no point in leaving the proxy running while the server is being updated; the proxy has to be updated together with the server.
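
On the proxy host the sequence mirrors the server steps below; a sketch, assuming a PostgreSQL-backed proxy (the zabbix-proxy-pgsql package), so adjust the package set to what is actually installed there:

# Stop the proxy, switch the repository to 6.0 and upgrade the packages
service zabbix-proxy stop
rpm -Uvh https://repo.zabbix.com/zabbix/6.0/rhel/8/x86_64/zabbix-release-6.0-1.el8.noarch.rpm
dnf clean all
dnf upgrade zabbix-proxy-pgsql zabbix-agent2
service zabbix-proxy start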

Well, depending on the size of the database, updating it to version 6 can take a significant amount of time.
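
To get a rough idea of how long the schema upgrade might take, it helps to look at the database size first:

# Total on-disk size of the zabbix database
sudo -u postgres psql -c "select pg_size_pretty(pg_database_size('zabbix'));"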

Before updating, it is worth making backup copies of the database and configuration files again. Or take a snapshot.
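
A minimal backup sketch before the upgrade (the /backup path and the set of config directories are assumptions, adjust them to your layout):

# Dump the zabbix database in custom format and archive the relevant configs
sudo -u postgres pg_dump -Fc zabbix > /backup/zabbix_db_$(date +%F).dump
tar czf /backup/zabbix_conf_$(date +%F).tar.gz /etc/zabbix /etc/nginx/conf.d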

# Stop zabbix-server
service zabbix-server stop
# Update the Zabbix repository:
rpm -Uvh https://repo.zabbix.com/zabbix/6.0/rhel/8/x86_64/zabbix-release-6.0-1.el8.noarch.rpm
# Clean and rebuild the cache (this can be skipped):
dnf clean all
dnf makecache
# Check the list of zabbix packages installed on the system
rpm -qa | grep zabbix
# My list looks like this:
zabbix-release-6.0-1.el8.noarch
zabbix-get-5.2.7-1.el8.x86_64
zabbix-nginx-conf-5.2.6-1.el8.noarch
zabbix-web-deps-5.2.6-1.el8.noarch
zabbix-web-pgsql-5.2.6-1.el8.noarch
zabbix-agent2-5.2.5-1.el7.x86_64
zabbix-web-5.2.6-1.el8.noarch
zabbix-server-pgsql-5.2.6-1.el8.x86_64
zabbix-java-gateway-5.2.6-1.el8.x86_64
# Upgrade the zabbix packages, picking the ones installed on your system:
dnf upgrade zabbix-web zabbix-web-pgsql zabbix-server-pgsql zabbix-agent2 zabbix-get zabbix-server zabbix-java-gateway
# Start zabbix-server and wait for the database schema upgrade to finish
service zabbix-server start

4. Update Primary keys

I will start with a link to the corresponding section of the Zabbix documentation; this step is described there in enough detail and quite clearly.

Basic things to know:

  • Don’t forget to backup your database

  • Zabbix must not be running during the upgrade

  • CSV data files take up a lot of space (a quick space check is sketched right after this list)

  • On Zabbix Proxy, to update, just execute the history_pk_prepare.sql script
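
Since the CSV exports in the snippets below are written to /tmp, it is worth comparing the free space there with the size of the history hypertables beforehand; a small sketch (hypertable_size() is a TimescaleDB 2.x helper):

# Free space where the CSV files will land
df -h /tmp
# Current on-disk size of one of the history hypertables (repeat for the others)
sudo -u postgres psql zabbix -tAc "select pg_size_pretty(hypertable_size('history_uint'));"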

# Install the zabbix-sql-scripts package:
dnf install -y zabbix-sql-scripts
# Stop Zabbix:
service zabbix-server stop
# Run the history_pk_prepare.sql script against the zabbix database as the zabbix user.
# This script renames the old history tables and creates new ones.
sudo -u zabbix psql zabbix < /usr/share/doc/zabbix-sql-scripts/postgresql/history_pk_prepare.sql
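
A quick way to confirm that the script renamed the old tables and created new, empty ones:

# Both history* and history*_old tables should be listed now
sudo -u zabbix psql zabbix -c '\dt history*'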

Further actions depend on whether you need to keep the historical data or not. If not, the update is essentially complete: delete the old tables and start Zabbix.

If the data has to be kept, then for each table you export the data from the old table, create a temporary table and load the data into it, and finally insert the data from the temporary table into the permanent one. And so five times, once for each table with historical data.

For TimescaleDB, the official Zabbix documentation provides an upgrade example for only one of the tables.

# Run psql as the postgres user
su - postgres
psql
# Then execute the code below
history.sql
-- Check how much disk space the uncompressed data will take after being exported to CSV.
select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_old');

-- Export the data to a file
\copy (select * from history_old) TO '/tmp/history.csv' DELIMITER ',' CSV

CREATE TEMP TABLE temp_history (
    itemid     bigint NOT NULL,
    clock      integer  DEFAULT '0' NOT NULL,
    value      DOUBLE PRECISION DEFAULT '0.0000'   NOT NULL,
    ns  integer  DEFAULT '0' NOT NULL
);
-- Load the data from the file
\copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV

-- Create the hypertable and populate it with data
select create_hypertable('history', 'clock', chunk_time_interval => 86400, migrate_data => true);
INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;

-- Enable compression
select set_integer_now_func('history', 'zbx_ts_unix_now', true);
alter table history set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

-- In hypertable_schema, replace public with your own schema if it differs
-- The returned job ID (<JOB_ID>) will need to be passed to run_job
select add_compression_policy('history', (
    select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_old'
    )::integer
);

select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history'), scheduled => true);

-- Run the compression job (substitute the job ID from the previous step for <JOB_ID>)
call run_job(<JOB_ID>);
-- A 'NOTICE:  no chunks for hypertable public.history that satisfy compress chunk policy' message may appear, this is fine.
history_uint.sql
 -- Verify that there is enough space to allow export of uncompressed data
select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_uint_old');

-- Export data
\copy (select * from history_uint_old) TO '/tmp/history_uint.csv' DELIMITER ',' CSV

CREATE TEMP TABLE temp_history_uint (
    itemid     bigint NOT NULL,
    clock      integer  DEFAULT '0' NOT NULL,
    value      numeric(20)     DEFAULT '0' NOT NULL,
    ns  integer  DEFAULT '0' NOT NULL
);
-- Import data
\copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV

-- Create hypertable and populate it
select create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

-- Enable compression
select set_integer_now_func('history_uint', 'zbx_ts_unix_now', true);
alter table history_uint set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

-- Substitute your own schema in hypertable_schema if it is not public
-- The job id will be returned; it should be passed to run_job
select add_compression_policy('history_uint', (
    select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_uint_old'
    )::integer
);

select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_uint'), scheduled => true);

-- Run compression job
call run_job(<JOB_ID>);
-- May show 'NOTICE:  no chunks for hypertable public.history_uint that satisfy compress chunk policy', it is fine.
history_str.sql
 -- Verify that there is enough space to allow export of uncompressed data
select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_str_old');

-- Export data
\copy (select * from history_str_old) TO '/tmp/history_str.csv' DELIMITER ',' CSV

CREATE TEMP TABLE temp_history_str (
    itemid     bigint NOT NULL,
    clock      integer  DEFAULT '0' NOT NULL,
    value      varchar(255)    DEFAULT ''  NOT NULL,
    ns  integer  DEFAULT '0' NOT NULL
);
-- Import data
\copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV

-- Create hypertable and populate it
select create_hypertable('history_str', 'clock', chunk_time_interval => 86400, migrate_data => true);
INSERT INTO history_str SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;

-- Enable compression
select set_integer_now_func('history_str', 'zbx_ts_unix_now', true);
alter table history_str set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

-- Substitute your own schema in hypertable_schema if it is not public
-- The job id will be returned; it should be passed to run_job
select add_compression_policy('history_str', (
    select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_str_old'
    )::integer
);

select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_str'), scheduled => true);

-- Run compression job
call run_job(<JOB_ID>);
-- May show 'NOTICE:  no chunks for hypertable public.history_str that satisfy compress chunk policy', it is fine.
history_log.sql
-- Verify that there is enough space to allow export of uncompressed data
select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_log_old');

-- Export data
\copy (select * from history_log_old) TO '/tmp/history_log.csv' DELIMITER ',' CSV

CREATE TEMP TABLE temp_history_log (
    itemid     bigint NOT NULL,
    clock      integer  DEFAULT '0' NOT NULL,
    timestamp  integer  DEFAULT '0' NOT NULL,
    source     varchar(64)     DEFAULT ''  NOT NULL,
    severity   integer  DEFAULT '0' NOT NULL,
    value      text     DEFAULT ''  NOT NULL,
    logeventid integer  DEFAULT '0' NOT NULL,
    ns  integer  DEFAULT '0' NOT NULL
);
-- Import data
\copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV

-- Create hypertable and populate it
select create_hypertable('history_log', 'clock', chunk_time_interval => 86400, migrate_data => true);
INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;

-- Enable compression
select set_integer_now_func('history_log', 'zbx_ts_unix_now', true);
alter table history_log set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

-- Substitute your own schema in hypertable_schema if it is not public
-- The job id will be returned; it should be passed to run_job
select add_compression_policy('history_log', (
    select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_log_old'
    )::integer
);

select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_log'), scheduled => true);

-- Run compression job
call run_job(<JOB_ID>);
-- May show 'NOTICE:  no chunks for hypertable public.history_log that satisfy compress chunk policy', it is fine.
history_text.sql
 -- Verify that there is enough space to allow export of uncompressed data
select sum(before_compression_total_bytes)/1024/1024 as before_compression_total_mbytes, sum(after_compression_total_bytes)/1024/1024 as after_compression_total_mbytes FROM chunk_compression_stats('history_text_old');

-- Export data
\copy (select * from history_text_old) TO '/tmp/history_text.csv' DELIMITER ',' CSV

CREATE TEMP TABLE temp_history_text (
    itemid     bigint NOT NULL,
    clock      integer  DEFAULT '0' NOT NULL,
    value      text     DEFAULT ''  NOT NULL,
    ns  integer  DEFAULT '0' NOT NULL
);
-- Import data
\copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV

-- Create hypertable and populate it
select create_hypertable('history_text', 'clock', chunk_time_interval => 86400, migrate_data => true);
INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;

-- Enable compression
select set_integer_now_func('history_text', 'zbx_ts_unix_now', true);
alter table history_text set (timescaledb.compress,timescaledb.compress_segmentby='itemid',timescaledb.compress_orderby='clock,ns');

-- Substitute your own schema in hypertable_schema if it is not public
-- The job id will be returned; it should be passed to run_job
select add_compression_policy('history_text', (
    select extract(epoch from (config::json->>'compress_after')::interval) from timescaledb_information.jobs where application_name like 'Compression%%' and hypertable_schema='public' and hypertable_name='history_text_old'
    )::integer
);

select alter_job((select job_id from timescaledb_information.jobs where hypertable_schema='public' and hypertable_name='history_text'), scheduled => true);

-- Run compression job
call run_job(<JOB_ID>);
-- May show 'NOTICE:  no chunks for hypertable public.history_text that satisfy compress chunk policy', it is fine.

Well, the last step is to delete the old tables:

DROP TABLE history_old;
DROP TABLE history_uint_old;
DROP TABLE history_str_old;
DROP TABLE history_log_old;
DROP TABLE history_text_old;
\q

It is more convenient to save the SQL above into files (without the final call run_job command) and run them one at a time, calling run_job() separately after each script has finished.

More or less like this:
touch /tmp/history_text.sql
vi /tmp/history_text.sql
sudo -u zabbix psql zabbix < /tmp/history_text.sql
# The script output follows
 before_compression_total_mbytes | after_compression_total_mbytes
---------------------------------+--------------------------------
            292.0625000000000000 |            70.7031250000000000
(1 row)

COPY 1359548
CREATE TABLE
COPY 1359548
     create_hypertable
----------------------------
 (15,public,history_text,t)
(1 row)

INSERT 0 1359548
 set_integer_now_func
----------------------

(1 row)

ALTER TABLE
 add_compression_policy
------------------------
                   1007
(1 row)

                                               alter_job
-------------------------------------------------------------------------------------------------------
 (1007,"1 day",00:00:00,-1,01:00:00,t,"{""hypertable_id"": 15, ""compress_after"": 612000}",-infinity)
(1 row)

# Call run_job with Job ID = 1007
echo "call run_job(1007);" | sudo -u zabbix psql zabbix
CALL

It remains to start Zabbix:

service zabbix-server start

That is basically it; this is where my combined hodgepodge of instructions ends. I hope it will be useful to somebody.
