Monitoring 95+ PostgreSQL metrics with Zabbix Agent 2 plugin

Last year, the popular monitoring system Zabbix introduced Agent 2, designed to reduce the number of TCP connections and to provide convenient extensibility through plugins written in Go.

My name is Dasha, and I am one of the developers of the PostgreSQL monitoring plugin for Zabbix Agent 2. In this article I will cover the main “tricks” of using Zabbix Agent 2 for monitoring PostgreSQL and the principle behind the plugin, give advice on configuring it, and walk through an example of customizing it.

How did the PostgreSQL monitoring plugin for Zabbix Agent 2 come about?

In 2019, Zabbix announced the release of the new Zabbix Agent 2, written from scratch in Go. A separate plugin is required to monitor each application. We at Postgres Professional decided that this was a great opportunity to apply our many years of experience using Zabbix to monitor PostgreSQL, and wrote a monitoring module for Agent 2.

How is DBMS monitoring arranged in Zabbix?

Let’s start with a small introduction to the Zabbix monitoring workflow for beginners.

The structure that is interesting to us now can be divided into two components:

  1. Zabbix Server that stores and collects data.

  2. Agents that are installed on monitored objects and collect data.

To monitor each application in Zabbix Server, a template is required – an XML file that contains metric keys (unique IDs) and parameters for processing them.

Zabbix Agent 2 aims to give the user an out-of-the-box monitoring tool that is quick and easy to configure, as well as good extensibility.

How does the PostgreSQL plugin for Zabbix Agent 2 work?

There is a main function in which the handler for each metric is called by its unique key. A handler collects the data: it is a file that defines and executes an SQL query to get one or more metrics. The result of the query is written to a variable of type int, float, or string. If the result has to contain the values of several metrics at once, it is converted to JSON when the query result is processed. The collected results are periodically sent by Zabbix Agent 2 to Zabbix Server.
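To make the handler idea more concrete, here is a minimal, purely illustrative sketch of a handler that returns several values under one key by packing them into JSON. It reuses the PostgresClient connection type and the QueryRow call that appear in the uptime example later in this article; the handler name, the query, and the JSON field names are hypothetical and not part of the actual plugin:

package postgres

import (
    "context"
    "encoding/json"
)

// exampleActivityHandler runs one SQL query that produces two values and
// returns them as a JSON string, so both metrics live under a single key.
func exampleActivityHandler(ctx context.Context, conn PostgresClient,
    _ string, _ map[string]string, _ ...string) (interface{}, error) {
    var result struct {
        Active int64 `json:"active"`
        Idle   int64 `json:"idle"`
    }

    // Count active and idle backends in one round trip.
    query := `SELECT count(*) FILTER (WHERE state = 'active'),
                     count(*) FILTER (WHERE state = 'idle')
                FROM pg_stat_activity;`

    row, err := conn.QueryRow(ctx, query)
    if err != nil {
        return nil, err
    }

    if err := row.Scan(&result.Active, &result.Idle); err != nil {
        return nil, err
    }

    // Several metric values are returned to the agent as one JSON document.
    jsonRes, err := json.Marshal(result)
    if err != nil {
        return nil, err
    }

    return string(jsonRes), nil
}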

The plugin and its handlers are located in the /plugins/postgres folder.

What are the capabilities of the PostgreSQL monitoring module for Zabbix Agent 2?

  • Support for persistent connection to PostgreSQL.

  • Monitor multiple PostgreSQL instances at the same time.

  • Options for monitoring and checking metrics in real time via the command line (see the example after this list).

  • Configuring the plugin through the general agent configuration file.

  • Persistence of state between checks.

  • Quite simple customization of the collection of existing metrics.

  • The ability to write new plugins to suit your requirements.
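For example, a single metric can be checked from the command line with the agent's test option; the key and connection parameters below are illustrative placeholders:

$ zabbix_agent2 -t 'pgsql.ping["tcp://localhost:5432","zbx_monitor","<PASSWORD>","postgres"]'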

The plugin has an official template that you can download via the link in the Additional materials section at the end of this article.

It has basic triggers and a screen that displays a combination of multiple graphs. In total, the plugin collects over 95 metrics. A complete list of all metrics can also be found at the link above.

In the Zabbix Server web interface, you can edit the template and its components for your needs. What exactly can be customized?

  1. Change the metric collection interval.

  2. Add a trigger for a metric.

  3. Add a macro or edit an existing one.

How to install and use the PostgreSQL plugin for Zabbix Agent 2?

1. Create a PostgreSQL user for monitoring:

CREATE USER zbx_monitor WITH PASSWORD '<PASSWORD>' INHERIT;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_ls_dir(text) TO zbx_monitor;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_stat_file(text) TO zbx_monitor;

2. Edit pg_hba.conf to allow connection from Zabbix Agent 2:

# TYPE  DATABASE  USER         ADDRESS         METHOD
  host  all       zbx_monitor  127.0.0.1/32    md5

More information about pg_hba.conf can be found in the PostgreSQL documentation.
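After reloading the PostgreSQL configuration, it is worth checking that the monitoring user can actually connect; a quick manual check could look like this (host and database here are placeholders):

$ psql "host=127.0.0.1 dbname=postgres user=zbx_monitor" -c "SELECT version();"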

Now it remains to set the parameters for connecting to PostgreSQL for Zabbix Agent 2. This can be done in two ways:

  • use macros for connection parameters,

  • create a session.

The first way is a little easier. It is sufficient if you only need to monitor a single PostgreSQL instance:

1. In the template, edit the {$PG.URI} macro, which specifies the URI for connecting to PostgreSQL (for example, tcp://localhost:5432).

2. Set up macros with the user name and password ({$PG.USER} and {$PG.PASSWORD}). You can also specify the {$PG.DBNAME} macro. This parameter is optional for most metrics – if it is not specified in the key, the database name from the agent configuration file will be used.

In the template, these macros are already specified in the parameters of all keys. Please note that the order of specifying parameters in the key is fixed.
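For illustration, with this approach an item key in the template looks roughly like this (here using the pgsql.uptime key that we add later in this article; the parameter order is URI, user, password, database, matching the order in which the key parameters are registered in the plugin):

pgsql.uptime["{$PG.URI}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DBNAME}"]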

The second method allows you to set connection parameters for multiple PostgreSQL instances:

  1. Set the connection parameters for the session in the zabbix_agent2.conf configuration file, in the Postgres plugin section: Plugins.Postgres.Sessions.<SessionName>.URI, Plugins.Postgres.Sessions.<SessionName>.User, Plugins.Postgres.Sessions.<SessionName>.Password. Here, instead of <SessionName>, you need to specify a unique name for the new session.

  2. Create a macro with the session name in the template (the macro name is up to you, for example {$PG.SESSION}).

  3. Specify this macro as the only parameter for the metrics in the template.
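Putting it together, a session could be configured in zabbix_agent2.conf roughly as follows; the session name mypg, the URI, and the credentials are placeholders, and the key below assumes the {$PG.SESSION} macro from step 2 holds the session name:

Plugins.Postgres.Sessions.mypg.URI=tcp://localhost:5432
Plugins.Postgres.Sessions.mypg.User=zbx_monitor
Plugins.Postgres.Sessions.mypg.Password=<PASSWORD>

A metric key in the template then takes only that macro as its parameter, for example with the uptime key added later in this article:

pgsql.uptime["{$PG.SESSION}"]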

Let’s look at how to use the plugin to collect additional metrics using the example of adding the uptime metric.

To do this, you need to create a new handler with its query and register the key of the new metric.

1. Create a file to get a new metric:

zabbix/src/go/plugins/postgres/handler_uptime.go

We include the postgres package and specify the metric key(s):

package postgres

const (
    keyPostgresUptime = "pgsql.uptime"
)

2. We declare a handler with a query, as well as an uptime variable where the result will be written:

func uptimeHandler(ctx context.Context, conn PostgresClient,
    _ string, _ map[string]string, _ ...string) (interface{}, error) {
    var uptime float64

    query := `SELECT date_part('epoch', now() - pg_postmaster_start_time());`

3. We execute the query and check whether an error has occurred. If everything is OK, we return the uptime variable with the result.

    row, err := conn.QueryRow(ctx, query)
    if err != nil {
        ...
    }

    err = row.Scan(&uptime)
    if err != nil {
        ...
    }

    return uptime, nil
}

4. Register the key of the new metric:

var metrics = metric.MetricSet{
    ...,
    keyPostgresUptime: metric.New("Returns uptime.",
        []*metric.Param{paramURI, paramUsername, paramPassword, paramDatabase}, false),
}

Now we build the agent!
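The exact build steps depend on how you obtained the sources, but building the agent together with its Go plugins from the Zabbix source tree looks roughly like this, after which the new key can be checked from the command line (connection parameters below are placeholders):

$ ./configure --enable-agent2
$ make

$ zabbix_agent2 -t 'pgsql.uptime["tcp://localhost:5432","zbx_monitor","<PASSWORD>","postgres"]'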

New functionality

In Zabbix 5.2, it became possible to compute metrics from the results of user queries stored in separate SQL files; you can even create dynamic queries. This version also updated the module architecture and fixed minor bugs. Let’s demonstrate how to add custom metrics via an SQL file, using a simple query with one parameter as an example:

  1. Let’s create a SQL file with a query.

    $ touch custom1.sql
    $ echo 'SELECT id FROM my_table WHERE id=$1;' > custom1.sql

    Here, the parameter will be passed in $1 when the query is executed.

  2. In zabbix_agent2.conf, fill in the Plugins.Postgres.CustomQueriesPath parameter, specifying the path to the directory with the SQL file.

    Plugins.Postgres.CustomQueriesPath=/path/to/the/file

  3. In the template, for the pgsql.query.custom key, specify the name of the SQL file and add additional parameters for the query, i.e. the value that replaces $1, as in the example below. It is worth noting that you can also create template macros for the SQL file name and for the parameters.
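As an illustration only, such a key could look roughly like the following, with the connection parameters first, then the SQL file name and the value that will be substituted for $1; the exact parameter order and naming should be checked against the template and the plugin documentation:

pgsql.query.custom["{$PG.URI}","{$PG.USER}","{$PG.PASSWORD}","{$PG.DBNAME}","custom1","42"]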

Additional materials

Presentations of the Zabbix Online Meetup, which took place on June 19

Article by Vadim Ipatov – one of the developers of Zabbix Agent 2

PostgreSQL Monitoring Plugin Template

Zabbix Git for those who want to see more real-life examples and look at all SQL queries to get metrics

Video report at PGConf.Online 2021 “Overview of new functionality and setting up Zabbix Agent 2 for PostgreSQL monitoring”

Still have questions?

All questions can be asked in the comments.
