Part 1. A Christmas tree with a meerkat, or building your own mini cyber range

Cyber ranges (cyber polygons) – almost everyone has heard of them: competitions are built on them, attacks are simulated, information security systems are tested, and exercises are conducted.

Today we will “put up the tree” (ELK) and “attach a meerkat” (Suricata) to it.

ELK Components
  • Elasticsearch is a data search and analysis system based on the Apache Lucene engine. The main function of the system in the stack is storing, searching and analyzing data of almost any scale and in almost real time (NRT Search).

  • Logstash is a system for collecting data from different sources and sending it to a “storage” like Elastic. Before sending, Logstash can “parse”, filter and transform data into the required format for its further indexing and analysis.

  • Kibana is a data analytics and visualization tool used to view data, create interactive charts and graphs to organize events according to different criteria.

Suricata
  • In IDS mode, Suricata analyzes network traffic for malicious activity and anomalies using a set of rules.

  • In IPS mode, Suricata not only detects anomalies and malicious activity, but also blocks malicious traffic, preventing cyber attacks.
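For context, both modes are driven by signatures. A minimal illustrative rule (my own example, not part of any shipped ruleset) looks like this:

alert icmp any any -> any any (msg:"ICMP packet detected"; sid:1000001; rev:1;)

In IDS mode such a rule only generates an alert; replacing the action alert with drop turns it into a blocking rule for IPS mode.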

Let's move on to deployment:

As a server I will use a virtual machine running Ubuntu 20.04 (we will skip the VM installation itself; I trust you can handle that on your own).

Installing Suricata

Before installing the meerkat, run the command sudo apt install -y software-properties-common to install the required dependencies.

Then add the repository with meerkat and install the system

sudo add-apt-repository -y ppa:oisf/suricata-stable

sudo apt update

sudo apt install -y suricata
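You can verify the installation with suricata -V, which prints the installed version.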

Before manipulating configuration files, I recommend always copying the contents of standard files to enable quick “rollback”.
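For example, before touching the Suricata config:

sudo cp /etc/suricata/suricata.yaml /etc/suricata/suricata.yaml.bak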

To configure the output of events in JSON format, add the following part to the 'outputs' section:

- eve-log:
    enabled: yes
    filetype: regular
    filename: /var/log/suricata/eve.json
    types:
      - alert
      - http
      - dns
      - tls
      - filestore
      - ftp
      - smtp
      - ssh
      - flow
      - stats
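Before restarting, it is worth validating the edited file in test mode: sudo suricata -T -c /etc/suricata/suricata.yaml -v. Suricata will parse the configuration and report any errors without actually starting.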

Then restart Suricata by running the command sudo systemctl restart suricata

To start Suricata, run the command sudo suricata -c /etc/suricata/suricata.yaml -i <listening_interface> (a list of all interfaces can be viewed using the command ip a).
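For example, assuming the VM's interface is named eth0 (substitute your own from the ip a output):

sudo suricata -c /etc/suricata/suricata.yaml -i eth0

Startup progress and errors can be watched in /var/log/suricata/suricata.log, for example with sudo tail -f /var/log/suricata/suricata.log.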

Installing ELK Stack

Elasticsearch components are not available in Ubuntu's default repositories, so you should add the Elastic package sources list. All packages are signed with the Elasticsearch signing key, so you will additionally need to import the Elasticsearch public GPG key.

To do this, run the command:

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg

Then add the Elastic repository to the list of resources from which the apt package manager will “pick up” the necessary packages:

echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Then update your apt sources list and install Elasticsearch:

sudo apt update

sudo apt install elasticsearch

After installing the system, edit the Elasticsearch configuration file. Uncomment the network.host line and set it to localhost to limit access to the collected data.
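The relevant line in /etc/elasticsearch/elasticsearch.yml should end up looking like this:

network.host: localhost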

After that you can start the service using the command sudo systemctl start elasticsearch.

To make the service start automatically every time the server boots, run the command sudo systemctl enable elasticsearch

Test the service using the command curl -X GET "localhost:9200"

If the launch is successful, you will get information about the elasticsearch cluster in the output
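The response is a small JSON document that looks roughly like this (names and version numbers will differ on your machine):

{
  "name" : "your-hostname",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.17.0",
    ...
  },
  "tagline" : "You Know, for Search"
}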

Installing Kibana

According to the official documentation, Kibana should only be installed after Elasticsearch. Installing in this order ensures that the components each product depends on are in place.

Since we have already added the elastic package repository, we can install kibana using apt:

sudo apt install kibana

Then start the service and set it to “autostart”

sudo systemctl enable kibana

sudo systemctl start kibana
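You can check that it came up with sudo systemctl status kibana; note that Kibana can take a minute or so to become responsive after starting.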

Since Kibana is configured to listen only on localhost, accessing the platform “from the outside” requires a reverse proxy; for that we will set up an Nginx server.

Installing Nginx

You can install Nginx using apt:

sudo apt install nginx

After installing the web server, run the command echo "admin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users to create a user “admin” (you can use any username) for HTTP basic authentication and save it in the htpasswd.users file.

Next, enter and confirm the password for the user and remember the credentials (it is better to write them down)

Then add the following block of code to the file /etc/nginx/sites-available/your_domain, making sure to change the your_domain parameter so that it matches the fully qualified domain name or public IP address of your server:

server {
    listen 80;
    server_name your_domain;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

This code configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601.

After that, enable the new configuration by creating a symbolic link in the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain

To check your Nginx configuration for syntax errors, run the command sudo nginx -t

If there are no errors, reload nginx to use the new configuration: sudo systemctl reload nginx

Additionally, install and configure the firewall using the commands:

sudo apt install ufw

sudo ufw allow 'Nginx Full'
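Note that installing ufw does not activate it. If the firewall is not enabled yet, first allow SSH so you do not lock yourself out of the server, then enable it:

sudo ufw allow OpenSSH

sudo ufw enable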

After this, Kibana will be accessible remotely via your domain or public IP address (local access remains available as well).

Installing Logstash

To install the Logstash service, run the command sudo apt install logstash

After installing Logstash, go to the /etc/logstash/conf.d/ directory and create 2 configuration files – for “input” and “output” data.

Create a file for the “input” data using the command sudo nano /etc/logstash/conf.d/beats_input.conf and add the following part to it:

input {
  beats {
    port => 5044
  }
}

Then create the “output” file using the command sudo nano /etc/logstash/conf.d/elasticsearch-output.conf

And then enter these lines into it:

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
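A word about the conditional: Filebeat modules mark their events with the name of an Elasticsearch ingest pipeline in [@metadata][pipeline]. Events that carry this field are indexed through that pipeline (which does the actual parsing), while everything else goes to Elasticsearch directly.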

Then check the configuration using the command sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If there are no syntax errors, then at the very end you will see “Config Validation Result: OK.”

After checking the configuration, start the service and set it to “autostart”:

sudo systemctl start logstash

sudo systemctl enable logstash
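Once the service finishes initializing (it can take a little while), you can verify that Logstash is listening on the Beats port: sudo ss -tlnp | grep 5044.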

Installing Filebeat

As a service for collecting data from various sources and then transferring it to Logstash or Elasticsearch, we will deploy Filebeat.

To install the service, run the command sudo apt install filebeat

In our case, the collected data will be sent to Logstash, so in the Filebeat configuration file you need to comment out the output.elasticsearch section (one hash character per line is enough, two are not necessary).

Conversely, uncomment the output.logstash section.
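After the edit, the relevant part of /etc/filebeat/filebeat.yml should look roughly like this:

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]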

Next, we can enable the built-in Filebeat system module, which collects and parses logs from the Linux system logging services:

sudo filebeat modules enable system
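You can confirm this with sudo filebeat modules list – system should now appear under Enabled.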

Next, we will set up the ingest pipelines, which parse the log data before it is sent through Logstash into Elasticsearch.

To set up an event receiving pipeline for the system module, run the command:

sudo filebeat setup --pipelines --modules system

Then we load the index template into Elasticsearch:

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

Filebeat ships with sample Kibana dashboards for visualizing its data. Since our output goes through Logstash, loading the dashboards requires temporarily disabling the Logstash output and enabling the Elasticsearch output:

sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

Next, start the service and enable it to launch automatically when the server boots:

sudo systemctl start filebeat

sudo systemctl enable filebeat

If your Elastic Stack is configured correctly, Filebeat will start sending your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.

You can check this by running the command curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

If your output shows a total match count of 0, Elasticsearch is not loading logs for the index you searched for, and you will need to check your settings for errors.
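A quick way to see which indices exist at all is curl -XGET 'http://localhost:9200/_cat/indices?v' – a filebeat-* index should be in the list.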

Next, open the Dashboard section in the Kibana web interface, select the Filebeat dashboard for the System module, and make sure that events are arriving and being visualized.

Integrating Suricata with ELK

After successfully deploying all components, we can integrate Suricata into the stack.

First, make sure the outputs section of the Suricata configuration file contains the eve-log block we configured earlier:

outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: /var/log/suricata/eve.json
      types:
        - alert
        - http
        - dns
        - tls
        - filestore
        - ftp
        - smtp
        - ssh
        - flow
        - stats

After that, you need to create a Logstash configuration file (sudo nano /etc/logstash/conf.d/suricata.conf) and add the following configuration to it:

input {
  file {
    path => "/var/log/suricata/eve.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => "json"
  }
}

filter {
  if [event_type] == "alert" {
    mutate {
      add_field => { "[@metadata][index]" => "suricata-alert" }
    }
  } else if [event_type] == "dns" {
    mutate {
      add_field => { "[@metadata][index]" => "suricata-dns" }
    }
  } else {
    mutate {
      add_field => { "[@metadata][index]" => "suricata-other" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
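Two details of this config are worth noting: sincedb_path => "/dev/null" makes Logstash forget its read position, so the whole eve.json is re-read from the beginning on every restart (convenient in a lab, but it produces duplicate events in production), and stdout { codec => rubydebug } additionally prints each parsed event to the Logstash log, which helps during debugging.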

Then restart the Suricata and Logstash services:

sudo systemctl restart suricata.service logstash.service
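After some traffic has passed through the monitored interface, you can check that the new indices have appeared: curl -XGET 'http://localhost:9200/_cat/indices/suricata*?v'.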

After that you can go to Kibana and create an index pattern for Suricata.

To do this, in the Management section, select “Stack Management” -> “Kibana” -> “Index Patterns”, then click “Create index pattern”.

In the Name field enter suricata* and in the Timestamp field select @timestamp, then press “Create index pattern”.

After that, you will be able to see events from Suricata logs in the Discover section.
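For example, entering the KQL query event_type : "alert" in the Discover search bar will leave only the IDS alerts on the screen.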

As a result, we deployed the ELK Stack, configured the Filebeat service and integrated Suricata into our system.

In future material I will show how to use this range for training purposes. I hope this article was useful to you – see you in the next one!
