Elastic Locked: Enabling Elasticsearch Cluster Security for Access from Inside and Out


Elastic Stack is a well-known player in the SIEM market (and, in fact, not only there). It can collect a lot of diverse data, some of it quite sensitive, so it is hardly acceptable to leave access to the Elastic Stack components themselves unprotected. Yet by default, all of the out-of-the-box Elastic components (Elasticsearch, Logstash, Kibana, and the Beats collectors) communicate over open protocols, and authentication in Kibana is disabled. All of these interactions can be secured, and in this article we will show you how. For convenience, the narrative is divided into three blocks:

  • Data Access Role Model
  • Data Security Inside Elasticsearch Cluster
  • Data Security Outside Elasticsearch Cluster

Data Access Role Model

If you install Elasticsearch and do not tune it in any way, access to all indexes will be open to all comers (or at least to anyone who can use curl). To avoid this, Elasticsearch has a role model, available starting with the Basic subscription level (which is free). Schematically, it looks something like this:

What’s in the picture

  • Users are all who can log in using credentials.
  • A role is a set of rights.
  • Rights are a set of privileges.
  • Privileges are permissions to write, read, delete, etc. (Full list of privileges)
  • Resources are indexes, documents, fields, users, and other storage entities (the role model for some resources is available only in paid subscriptions).

Elasticsearch ships with built-in users, to which built-in roles are attached. Once the security settings are enabled, you can start using them right away.

To activate security in Elasticsearch, you need to add a new line to the configuration file (by default, elasticsearch/config/elasticsearch.yml):

xpack.security.enabled: true

After changing the configuration file, start or restart Elasticsearch for the changes to take effect. The next step is to assign passwords to the built-in users. We do this interactively using the command below:

[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

We check:

[elastic@node1 ~]$ curl -u elastic 'node1:9200/_cat/nodes?pretty'
Enter host password for user 'elastic':
23 46 14 0.28 0.32 0.18 dim * node1

You can pat yourself on the back: the settings on the Elasticsearch side are complete. Now it is time to set up Kibana. If you run it now, it will throw errors, so first you need to create a keystore. This is done with two commands (the user is kibana, and the password is the one you entered during the password setup step in Elasticsearch):

[elastic@node1 ~]$ ./kibana/bin/kibana-keystore add elasticsearch.username
[elastic@node1 ~]$ ./kibana/bin/kibana-keystore add elasticsearch.password

If everything is correct, Kibana will start asking for a username and password. The Basic subscription level offers a role model based on internal users. Starting with Gold, you can connect external authentication systems: LDAP, PKI, Active Directory, and single sign-on systems.
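As an illustration, an LDAP realm is connected through elasticsearch.yml. The sketch below uses the Elasticsearch 7.x realm syntax; the host and DN values are placeholders for your environment, and the bind password goes into the Elasticsearch keystore (under xpack.security.authc.realms.ldap.ldap1.secure_bind_password) rather than into the file:

```yaml
xpack.security.authc.realms.ldap.ldap1:
  order: 0
  url: "ldaps://ldap.example.com:636"
  bind_dn: "cn=elastic_bind,dc=example,dc=com"
  user_search:
    base_dn: "dc=example,dc=com"
    filter: "(cn={0})"
```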

Access rights to objects inside Elasticsearch can also be limited. However, to do the same for individual documents or fields, you will need a paid subscription (this luxury starts at the Platinum level). These settings are available in the Kibana interface or through the Security API. You can try them out in the already familiar Dev Tools menu:

Role creation

PUT /_security/role/ruslan_i_ludmila_role
{
  "cluster": [],
  "indices": [
    {
      "names": [ "ruslan_i_ludmila" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}

User Creation

POST /_security/user/pushkin
{
  "password" : "nataliaonelove",
  "roles" : [ "ruslan_i_ludmila_role", "kibana_user" ],
  "full_name" : "Alexander Pushkin",
  "email" : "pushkin@lyceum.edu",
  "metadata" : {
    "hometown" : "Saint-Petersburg"
  }
}
Data Security Inside Elasticsearch Cluster

When Elasticsearch runs as a cluster (which is common), the security settings within the cluster become important. For secure communication between nodes, Elasticsearch uses the TLS protocol, so the nodes need a certificate. We generate the CA certificate and private key in PEM format:

[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-certutil ca --pem

After executing the command above, an archive elastic-stack-ca.zip will appear in the /../elasticsearch directory. Inside it you will find a certificate and a private key with the extensions crt and key, respectively. It is advisable to place them on a shared resource accessible from all nodes of the cluster.

Each node now needs its own certificate and private key derived from those in the shared directory. When the command below is executed, you will be asked to set a password. You can add the optional --ip and --dns flags to enable full verification of the interacting nodes.

[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-certutil cert --ca-cert /shared_folder/ca/ca.crt --ca-key /shared_folder/ca/ca.key
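For example, to bind a node's certificate to a specific address and hostname, the same command can be extended with the flags mentioned above (the IP and DNS name here are placeholders for your own nodes):

```shell
./elasticsearch/bin/elasticsearch-certutil cert \
  --ca-cert /shared_folder/ca/ca.crt --ca-key /shared_folder/ca/ca.key \
  --ip 192.168.1.10 --dns node1.example.com
```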

As a result of the command, we receive a certificate and private key in PKCS#12 format, protected by a password. It remains to move the generated p12 file to the configuration directory:

[elastic@node1 ~]$ mv elasticsearch/elastic-certificates.p12 elasticsearch/config

Add the password for the p12 certificate to the keystore and truststore on each node:

[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

It remains to add the lines with the certificate information to the already familiar elasticsearch.yml:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

Launch all the Elasticsearch nodes and run curl. If everything was done correctly, the response will list several nodes:

[elastic@node1 ~]$ curl node1:9200/_cat/nodes -u elastic:password
43 75 4 0.00 0.05 0.05 dim * node2
21 75 3 0.00 0.05 0.05 dim - node3
39 75 4 0.00 0.05 0.05 dim - node1

There is another security option: IP address filtering (available starting with the Gold subscription). It allows you to create whitelists of IP addresses that are allowed to access the nodes.
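A minimal sketch of such a filter in elasticsearch.yml might look as follows (the subnet is a placeholder; everything not explicitly allowed is denied):

```yaml
xpack.security.transport.filter.allow: ["192.168.1.0/24"]
xpack.security.transport.filter.deny: _all
```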

Data Security Outside Elasticsearch Cluster

Outside the cluster means connecting external tools: Kibana, Logstash, Beats, or other external clients.

To enable HTTPS support (instead of HTTP), add new lines to elasticsearch.yml:

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elastic-certificates.p12
xpack.security.http.ssl.truststore.path: elastic-certificates.p12

Because the certificate is password protected, add its password to the keystore and truststore on each node:

[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password

After adding the keys, Elasticsearch nodes are ready to connect via https. Now they can be started.
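Plain http requests will now be refused, so the familiar check needs the https scheme and the CA certificate, for instance (assuming the shared CA path from earlier):

```shell
[elastic@node1 ~]$ curl --cacert /shared_folder/ca/ca.crt -u elastic 'https://node1:9200/_cat/nodes?pretty'
```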

The next step is to create a key for connecting Kibana and add it to its configuration. Based on the CA certificate already located in the shared directory, we generate a certificate in PEM format (Kibana, Logstash, and Beats do not yet support PKCS#12):

[elastic@node1 ~]$ ./elasticsearch/bin/elasticsearch-certutil cert --ca-cert /shared_folder/ca/ca.crt --ca-key /shared_folder/ca/ca.key --pem

It remains to unzip the created keys into the Kibana configuration folder:

[elastic@node1 ~]$ unzip elasticsearch/certificate-bundle.zip -d kibana/config

The keys are in place, so it remains to change the Kibana configuration so that it starts using them. In the kibana.yml configuration file, change http to https and add the lines with the SSL connection settings. The last three lines set up secure interaction between the user's browser and Kibana.

elasticsearch.hosts: ["https://${HOSTNAME}:9200"]
elasticsearch.ssl.certificateAuthorities: /shared_folder/ca/ca.crt
elasticsearch.ssl.verificationMode: certificate
server.ssl.enabled: true
server.ssl.key: /../kibana/config/instance/instance.key
server.ssl.certificate: /../kibana/config/instance/instance.crt

With that, the setup is complete, and access to the data in the Elasticsearch cluster is encrypted.

If you have questions about the capabilities of Elastic Stack on free or paid subscriptions, or about tasks for monitoring or building a SIEM system, leave a request via the feedback form on our website.

More of our articles about Elastic Stack on Habr:

Understanding Machine Learning in Elastic Stack (aka Elasticsearch, aka ELK)

Sizing Elasticsearch
