Linux Group Policy with Puppet

In the wake of well-known events, Russian companies have increasingly been switching to domestic software. As it happens, I work for one of these companies.

As a former Windows administrator, when I switched to a domestic operating system I sorely missed Group Policy (GPO), the tool that lets you manage the settings and security of computers on a network.

To manage device configurations in my organization, I chose Puppet.

Puppet itself is a great tool for managing a small number of servers. With the Puppet DSL, we can describe and automate almost any configuration task. There is a good article about Puppet here – Introduction to Puppet.

The downside of pure Puppet is how you select the node (target) to configure. Each configuration manifest begins with a target selector that determines which nodes it applies to. A target can be specified by exact hostname, by regular expression, or by the keyword default, which matches everyone else. The problem is that only one, most specific manifest is selected for each node. We cannot, for example, describe general rules for all hosts in the default manifest and then add adjustments to those rules only for particular computers. This behavior is tolerable when you manage a small number of nodes, but once there are more of them, something has to be done.

There are also solutions for managing a large number of nodes – for example, Hiera. With Hiera, you can flexibly manage a large fleet of servers by assigning them functions and roles. When using Hiera, all the logic is moved into separate modules, and the system itself builds a configuration from the classes and parameters used. You can read more about using Hiera here – Puppet+Hiera. We squeeze the maximum.

But Hiera assumes that the roles and purposes of the servers are known in advance. With client machines, as a rule, their function cannot be determined ahead of time. I wanted a way to manage a computer's configuration dynamically, preferably based on information obtained from Active Directory.

For flexible configuration, Puppet provides a special mechanism – the external node classifier (ENC). Greatly simplified, an external classifier is a program or script that takes a node name as a parameter and writes that node's configuration, described in YAML format, to standard output.
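To make that contract concrete, here is a minimal ENC sketch in Python. The hard-coded class and group names are placeholders for illustration, not Combenc's actual logic; a real classifier would look the node up in LDAP or a database and would normally serialize the structure with a YAML library such as PyYAML.

```python
#!/usr/bin/env python3
# Minimal external node classifier (ENC) sketch: Puppet invokes it with
# the node name as argv[1] and reads the node's configuration as YAML
# from standard output.
import sys

def classify(node_name: str) -> str:
    # Placeholder rules: every node gets the sshd class with one group.
    groups = ["domain-admins"]
    lines = ["classes:", "  sshd:", "    groups:"]
    lines += [f"    - {g}" for g in groups]
    lines.append("environment: production")
    return "\n".join(lines)

if __name__ == "__main__" and len(sys.argv) > 1:
    print(classify(sys.argv[1]))
```

Pointing Puppet's `external_nodes` setting at such a script is the whole integration; everything else is the classifier's internal business.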

Combenc

Using this mechanism, I painfully gave birth, in Python, to an external classifier called Combenc. The Puppet + Combenc workflow can be described roughly as follows:

  1. A computer in a domain periodically queries the puppet server for configuration.

  2. The server starts Combenc, passing the computer name as a parameter.

  3. Combenc obtains from Active Directory the list of security groups that the computer and the user are members of. Usually groups contain users, but since Puppet only manages hosts, I needed a way to bind a user to a computer. For this linkage, I fill in the Controlled attribute on the computer object in Active Directory.

  4. The classifier then reads:

    1. a configuration file that sets the rules for all nodes;

    2. configuration files corresponding to the names of security groups;

    3. file corresponding to the computer name.

  5. Combenc then combines the configuration files. Configuration files are YAML files consisting of class names and the parameters applied to those classes. In addition to their own parameters, classes can carry the boolean parameters merge and sealed. With merge, Combenc will try to combine the configuration's lists and dictionaries. With sealed, the class is “sealed” and will not be redefined if it is encountered again in later configuration files.

  6. Finally, the resulting configuration is applied to the computer.
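The combination logic in step 5 can be sketched roughly like this. This is my simplified reading of the merge/sealed behavior, not Combenc's actual code; configs are assumed to arrive in order of increasing specificity (default, then groups, then host).

```python
def combine(configs):
    """Combine class -> params dicts, applied in order default -> groups -> host."""
    result, mergeable, sealed = {}, set(), set()
    for config in configs:
        for cls, params in config.items():
            if cls in sealed:
                continue                       # sealed classes are never redefined
            params = dict(params)              # copy so we don't mutate the input
            if params.pop("merge", False):
                mergeable.add(cls)             # later definitions will be merged in
            if params.pop("sealed", False):
                sealed.add(cls)                # later definitions will be ignored
            if cls in result and cls in mergeable:
                for key, value in params.items():
                    old = result[cls].get(key)
                    if isinstance(old, list) and isinstance(value, list):
                        result[cls][key] = old + value        # concatenate lists
                    elif isinstance(old, dict) and isinstance(value, dict):
                        result[cls][key] = {**old, **value}   # merge dictionaries
                    else:
                        result[cls][key] = value              # scalars: last wins
            else:
                result[cls] = params
    return result
```

Feeding it the three sshd rule files from the example later in the article would concatenate all three group lists, because default.yaml marks the class with merge: true.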

How to use it

I’m assuming that you already have a puppet server configured with a number of nodes connected to it.

  1. First, we will place the classifier on the server with Puppet. Let’s create a folder /opt/combenc/ and place the Combenc scripts in it:

    mkdir /opt/combenc
    cd /opt/combenc
    git clone https://github.com/nsuslov/combenc.git .
  2. After downloading, don’t forget to make the script executable:

    chmod +x /opt/combenc/combenc.py
  3. Next, you need to fill in the configuration file config.yaml:

    ldap:
      uri: 'ldap://ns01.example.com'
      user: 'CN=puppet,CN=Users,DC=example,DC=com'
      cred: 'Pa$$word'
      base_dn: 'OU=00-Unit,DC=example,DC=com'
    rules:
      folder: '/etc/puppetlabs/code/environments'
    environments:
      dev:
        - my-pc.example.com
      test:
        - 00-pc-01.example.com
        - 00-pc-02.example.com
    • ldap.uri – the address of one of our domain controllers;

    • ldap.user, ldap.cred – the distinguished name (DN) and password of the user we will use to query the LDAP server;

    • ldap.base_dn – the starting point for searching for computers. Allows you to limit the search to a specific organizational unit;

    • rules.folder – the directory where the environments are located. Inside each environment folder I prefer to keep both modules – descriptions of the logic of the applied configurations – and combenc – descriptions of the configurations with their parameters;

    • environments – dictionaries keyed by environment name, each holding an array of nodes that belong to it. Computers with no explicit environment run in production. I prefer a scheme with three environments: dev – my development environment, test – a small group of computers for testing configurations before rolling them out, and production – everything else.

  4. Next, configure the Puppet server to use Combenc. To do this, add the following lines to /etc/puppetlabs/puppet/puppet.conf and restart the puppetserver service:

    [master]
    node_terminus = exec
    external_nodes = /opt/combenc/combenc.py
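As an aside, the environments section of config.yaml determines which environment a node runs in. The lookup can be sketched like this; the function name and default are my assumptions, not Combenc's actual code:

```python
def pick_environment(node_name, environments, default="production"):
    """Return the environment whose node list contains node_name, else the default."""
    for env, nodes in environments.items():
        if node_name in nodes:
            return env
    return default
```

Any node not listed under dev or test therefore falls through to production, matching the behavior described above.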

With that, the setup is complete. Now we can write a module and apply it.

Puppet module example

Suppose we have the following task: we want to allow users of the group domain-admins to connect via SSH to all computers, and users of the group 01-unit-admins to connect to computers in the group 01-computers.

  1. Let’s create an sshd directory in /etc/puppetlabs/code/environments/production/modules.

    mkdir -p /etc/puppetlabs/code/environments/production/modules/sshd
  2. Let’s go into it and create the folder structure we need:

    cd /etc/puppetlabs/code/environments/production/modules/sshd
    mkdir {manifests,templates}
  3. Now let’s create a file manifests/init.pp with the following content:

    # Class: sshd
    #
    # Configures sshd
    #
    # @param groups Groups allowed to connect via ssh
    #
    # @param merge Merge the rule. Combenc will merge the list
    #        if the rule is encountered again later
    # @param sealed Seal the rule. Combenc will not redefine this rule
    #        if it is encountered again later
    #
    class sshd (
      Array[String] $groups,
    
      Boolean $merge = false,
      Boolean $sealed = false,
    ) {
      $groups_line = join($groups, ' ')
    
      file { 'sshd config':
        path    => '/etc/ssh/sshd_config.d/10-puppet.conf',
        content => template('sshd/puppet.erb'),
      }
    
      service { 'sshd service':
        ensure    => running,
        name      => 'sshd',
        enable    => true,
        subscribe => File['sshd config'],
      }
    }

    This class will create the file /etc/ssh/sshd_config.d/10-puppet.conf from the template and restart the sshd service if 10-puppet.conf has changed.

  4. Let’s add a template templates/puppet.erb:

    # Puppet sshd
    # DONT EDIT THIS FILE
    
    <% if (@groups_line) -%>
    AllowGroups <%= @groups_line %>
    <% end -%>

The module is ready. Now let’s write the rules according to which this class will be applied to computers.

Combenc rule example

  1. Create a directory with rules – /etc/puppetlabs/code/environments/production/combenc

    mkdir -p /etc/puppetlabs/code/environments/production/combenc
  2. Then go to it and create the necessary folders: groups – for security group configurations and hosts – for configurations that apply to specific hosts.

    cd /etc/puppetlabs/code/environments/production/combenc
    mkdir {groups,hosts}
  3. Let’s create a general rule for all hosts, default.yaml, with the following content:

    sshd:
      merge: true
      groups:
        - domain-admins
  4. Now let’s create a rule for a group of computers groups/01-computers.yaml:

    sshd:
      groups:
        - 01-unit-admins
  5. Suppose we want to allow the local user user to connect to the computer 01-pc-34. Let’s create the file hosts/01-pc-34.example.com.yaml with the following content:

    sshd:
      groups:
        - user

As a result, Combenc should generate a config for the computer by combining the rules. We can verify this by calling the script and passing it the computer’s DNS name as a parameter:

/opt/combenc/combenc.py 01-pc-34.example.com

The command will return the following output:

classes:
  sshd:
    groups:
    - domain-admins
    - 01-unit-admins
    - user
environment: production

The next time computers access the puppet server, this configuration will be applied.

In the end, we should have the following file structure:

├── combenc
│   ├── default.yaml
│   ├── groups
│   │   └── 01-computers.yaml
│   └── hosts
│       └── 01-pc-34.example.com.yaml
└── modules
    └── sshd
        ├── manifests
        │   └── init.pp
        └── templates
            └── puppet.erb

This is the solution we use at our company. If desired, in addition to the security groups from LDAP, you can also pull in the structure of organizational units. That would bring the user experience even closer to the familiar GPO, but I found it more convenient to use security groups.

Thank you for your attention! I look forward to suggestions, ideas, criticism and other feedback in the comments.

Combenc repository
