WAF Express, or How to Close RCE in Two Days

Cloud WAF (web application firewall) is a fast and effective way to protect web applications from cyber attacks. It has been proven in practice: in two days, you can reliably cover the vulnerabilities of a simple service. For more complex applications, during this time, you can actually set up a WAF, launch basic filtering, and begin developing custom request filtering policies.

Recently, a client who needed to quickly protect a fairly simple web application contacted us at K2 Cybersecurity, and together with application security engineer Daniil Zolotarev we carried out an express deployment of PT Cloud Application Firewall. Using this project as an example, we will walk through implementation options, possible limitations, important conditions, and useful recommendations for deploying a WAF.

When things went wrong

The information security department discovered a remote code execution (RCE) vulnerability on a pilot stand running the customer's web application. The ideal fix would have been updating the software to the current version, where the vulnerability had been patched. But there was a problem: the software came from a foreign vendor that had suspended technical support. Independently hunting down a distribution, updating the software and its dependencies, and then troubleshooting without vendor support would certainly have taken us far beyond the scope of the pilot project. We discussed the situation and concluded that patching the vulnerability ourselves was not an option. At the same time, with the vulnerability open, the information security department would not sign off on the pilot. The rollout was slipping, and to meet the deadline we urgently needed an alternative: not a crutch, but a reliable solution.

If the software is vulnerable to remote code execution, the logical solution is to analyze and filter incoming HTTP requests. The solution was obvious – use a WAF.

Which WAF we chose for protection and why

We chose PT Cloud Application Firewall from Positive Technologies over on-premise solutions, and here's why:

Lack of free resources. The customer did not have any free servers even in the test environment. The protected service did not require high performance, therefore, creating a whole new server for it was uneconomical.

Speed of deployment. It was necessary to quickly install an agent for processing traffic without wasting time on deploying and configuring an entire infrastructure.

Cost-effectiveness. Cloud WAFs are usually flexibly licensed and allow you to use a tariff plan with the minimum required amount of resources. This is optimal for a small application with a limited budget.

Ironically for a company called Positive, the product works on a negative security model: it passes only traffic that clearly qualifies as legitimate and strictly cuts off any suspicious requests.

The WAF ships with many built-in signatures that catch most common attacks, including the one our customer encountered.

PT Cloud Application Firewall is easy and quick to configure and offers the same features as the on-premise PT Application Firewall. It uses the same nginx-based external agent, but management happens in the vendor's cloud. The filtering node (nginx + PT Cloud AF agent) runs locally and exchanges data with the vendor's cloud as follows: logs and security events are sent up to the cloud, while security policies and signatures come down from it to the filtering node.

Thus, PT Cloud Application Firewall is a two-tier cloud solution:

  1. Management server in the vendor's cloud.

  2. Filtering node on the K2 Cloud capacities.

From a practical point of view, such an architecture is not inferior to on-premise solutions. The protected application's traffic does not go to the vendor's cloud; only requests that triggered a signature or filtering rule are sent there. Before sending, sensitive data (authorization headers and cookies, passwords, confidential data) is masked. And it does not matter whether the requests go to an IP address in the customer's infrastructure or to our cloud: TLS encryption is used everywhere.

“When securing web applications, it is important to consider cost effectiveness and flexibility in integration and deployment. Both can be achieved with cloud technologies. K2 Cloud users can connect the required solution in a few hours, and specialists will customize it to specific needs,” — Alexander Fix, Product Manager at K2 Cloud.

How to Quickly Deploy PT Cloud Application Firewall

To quickly deploy a cloud WAF, two key elements were required: a personal account in the vendor's cloud and the capacity on which the filtering agent would be deployed. Thanks to the cooperation between Positive Technologies and K2 Cloud, we resolved the issue of deploying the WAF in literally two days.

On the vendor's license portal, you can create a "pilot" yourself in a couple of minutes through the personal account, without involving the support service. Positive Technologies maintains a separate personal account (tenant) for each customer, isolated from the others. Only the customer and the partner performing the setup have access to a specific tenant. In effect, this is the main WAF management interface.

To get up and running faster, we used a temporary license. It is measured in RPS (requests per second) and does not limit the number of agents and filtering nodes as long as the request volume stays within the agreed limit. Most importantly, such a license is issued immediately, with no time lost on coordinating, signing, and paying for a contract.

The setup process itself is quite simple: first, a standard reverse proxy configuration on nginx; second, installing the agent and adding a few directives to the nginx configuration.

To connect the agent, you need a connection string. You can find it and copy it in the PT Cloud Application Firewall console at: System → Isolated Space → Your Tenant.

The next step was to provide capacity for deploying a filter node. Colleagues from K2 Cloud allocated a new VPC and a personal account for us in half a day, including all the necessary approvals. After that, our team independently deployed a server with the required parameters in 10 minutes and published it on an external IP address.

Installing the agent and configuring nginx

There are several ways to install the PT Cloud Application Firewall agent.

You can use an ISO image or a Debian package on a pre-prepared Astra Linux or Debian 10/11 OS. The ISO image is a ready-made operating system with a module already installed. It is enough to deploy a virtual machine from this image, and the system is ready to work. The Debian package method assumes that you have a machine with an installed OS, on which nginx is first installed, and then the PT Cloud Application Firewall module.

PT Cloud Application Firewall also supports a container version. If the customer's infrastructure and server are in the same subnet, the agent can be installed using the Ingress Controller for Kubernetes.

In our case, there were no containers, no container infrastructure, and no Kubernetes, so we chose to deploy a Debian 11 machine (the latest stable version supported by the agent) and then install nginx and the PT Cloud Application Firewall module. This was easier than getting an ISO image from the vendor, uploading it to our cloud, and then deploying the machine from it.

K2 Cloud offers a ready-made Debian image with the cloud's standard hardened security policies. So we installed the agent on Debian 11 from the deb package, which can be downloaded from the Positive Technologies repository.

The first step is to install a specific version of nginx. For the agent on Debian 11, the required nginx version is 1.22.0. You can follow the official nginx installation instructions, but with one important change: instead of running sudo apt install nginx at the end of the process, do the following:

● list all nginx versions available in the connected apt repository:
apt-cache policy nginx

● copy the line with the version we need. In our case:
1.22.0-1~bullseye

● install nginx:
sudo apt-get install nginx=1.22.0-1~bullseye
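The three manual steps above can also be scripted. A sketch, with the apt-cache output replaced by a canned sample so the snippet runs anywhere; in practice, substitute the real command's output:

```shell
#!/usr/bin/env bash
# Sample `apt-cache policy nginx` output (illustrative); in practice use:
#   policy_output=$(apt-cache policy nginx)
policy_output='nginx:
  Installed: (none)
  Candidate: 1.22.0-1~bullseye
  Version table:
     1.22.0-1~bullseye 500'

# Pull the candidate version and build the pinned install command.
version=$(printf '%s\n' "$policy_output" | awk '/Candidate:/ {print $2}')
echo "sudo apt-get install nginx=$version"
```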

Next, we move on to configuring nginx. Our test application did not even use HTTPS, so the nginx configuration was as minimal as it gets: a bare reverse-proxy server block.
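A minimal sketch of such a configuration, assuming a plain-HTTP backend; the server name and backend address are illustrative:

```nginx
# Minimal reverse proxy: nginx listens on the external IP and forwards
# plain HTTP to the application backend.
server {
    listen 80;
    server_name app.example.com;   # illustrative name

    location / {
        proxy_pass http://10.0.0.10:8080;   # illustrative backend address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```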

In the module configuration, we specify the connection string for communication with the API of our Positive Technologies personal account. This connection is configured with one key, which is provided with the license. After these simple steps, the agent appears in the personal account without any complications.

Testing and false positives

After installation and configuration, we enabled the node in monitoring mode, passed traffic through it, and checked for detected attacks. For basic testing, we used a simple and widely known vulnerability – reflected XSS.

We formed a request in Burp, inserted the URL-encoded payload '};prompt('1');{' into the bcKey parameter, and sent the request.
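For reference, the URL-encoded form of the payload can be produced with a small shell sketch (this simple encoder handles single-byte ASCII characters, which is enough for this payload):

```shell
#!/usr/bin/env bash
# Percent-encode every non-alphanumeric character of the XSS payload.
payload="'};prompt('1');{'"
encoded=""
for ((i = 0; i < ${#payload}; i++)); do
    c="${payload:$i:1}"
    case "$c" in
        [a-zA-Z0-9]) encoded+="$c" ;;                        # safe chars pass through
        *) printf -v hex '%%%02X' "'$c"; encoded+="$hex" ;;  # e.g. ' -> %27
    esac
done
echo "$encoded"   # value to place in the bcKey parameter
```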

In the browser, the injected prompt('1') fired, confirming the reflected XSS.

On the WAF side, we caught the attack: the console showed that XSS was detected in the bcKey parameter.

We then switched the WAF to blocking mode, carried out the attack again, and saw it blocked.

At this stage, you can encounter false positives, so we asked our colleagues from the customer side to test all the functionality of the protected web application. In parallel, we monitored the WAF behavior via the console. If we found that the triggering was not the result of a targeted attack, we took the following actions:

  1. Disabled the specific rule that triggered.

  2. Configured an exclusion so the rule would not trigger on a specific path.

  3. If necessary, lifted restrictions on the volume of transferred data, where that matched the application's logic.

For example, if an application had to transfer large files to a web server, we would exclude the rule that limited the transfer size, since in this case exceeding the limit would not be considered an attack.

Due to the specific nature of web applications, it is impossible to create universal protection rules the way antivirus databases do. False positives are therefore a common occurrence, tied to the logic of each specific application. As a rule, they are handled during the WAF "training" stage.

Alternative solutions

Thus, in two days we successfully protected the customer's small web application. Deploying and configuring the WAF itself took minimal time; fine-tuning of the security policies will continue going forward.

We used a model with a personal account located in the vendor's cloud and a filtering node in K2 Cloud. However, this example is only one of many ways to solve such problems. There are various options for dividing areas of responsibility between us and the customer.

One of our customers had a large traditional infrastructure, including several dozen applications on nginx. The architecture consisted of two nginx servers in a cluster, which distributed incoming Internet traffic between backend servers. This entire infrastructure was administered by our team from the cloud.

The task was to integrate WAF into this existing architecture. We chose the Positive Technologies solution, but instead of the cloud-based PT Cloud Application Firewall, we used the on-premise version — PT Application Firewall.

Traffic routing remains unchanged: application publishing still works through nginx, and our IT team continues to administer the system. The only change in the configuration is the addition of a connection string to the WAF management server.

This approach preserved the existing distribution of responsibility. The information security team does not interfere with the direct interaction between the application and its users; instead, it concentrates on analyzing events, grouping and separating them between applications, suppressing false triggers, writing custom rules, and so on. This option is usually called light on-prem: the personal account is in the cloud, while the node is on-site, in the customer's data center.

“The cloudiest” option involves using a WAF that is already deployed in our cloud infrastructure and serves multiple clients. It includes two key components:

  1. Personal account in the vendor's cloud (for example, in PT Cloud AF).

  2. A partner node: a server in K2 Cloud with an installed agent, through which client traffic passes.

Another option in our arsenal is a cloud WAF with the management console located in our cloud rather than the vendor's. Its features:

  1. Multi-tenant installation serving multiple clients.

  2. Distribution of filtering nodes between clients.

  3. Agents send data not to the vendor's cloud, but to a dedicated K2 Cloud segment.

The key difference of this approach is centralized management in our infrastructure, which provides additional control and flexibility in configuration.

It is important to note that fully cloud-based solutions successfully cope with high loads and a large number of requests. Their performance is not inferior to traditional solutions, since a separate node is allocated for each customer. However, assessing the load on the WAF in advance presents certain difficulties due to many influencing factors:

  1. Traffic profile.

  2. Traffic volume.

  3. Set of included checks.

  4. Specifics of protected web applications.

For example, if web applications often work with large files (Word documents, complex XML structures), the WAF node spends more computing resources on their analysis. Therefore, it is almost impossible to accurately predict the load and traffic. There are synthetic tests, but they do not always reflect real operating conditions and cannot serve as a reliable basis for evaluation.

In this case, we recommend a pilot implementation: deploying WAF on temporary licenses for a limited period of time. This allows you to evaluate the volume of traffic, the effectiveness of filtering, and the overall convenience of the solution.

If we have preliminary data about high loads or significant traffic volumes, we apply the following solution:

  1. We install a load balancer in front of the WAF.

  2. The balancer distributes traffic between several filtering nodes without performing deep analysis.

  3. If necessary, we deploy a whole group of filtering nodes.

  4. We ensure that sessions are bound to specific nodes to maintain processing integrity.
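The scheme above can be sketched as an nginx layer in front of the filtering nodes, assuming nginx is used as the balancer; ip_hash provides the session-to-node binding from step 4, and the node addresses are illustrative:

```nginx
# Balancer in front of the WAF filtering nodes: distributes traffic
# without deep analysis.
upstream waf_nodes {
    ip_hash;                 # bind each client to one filtering node
    server 10.0.1.11:80;     # filtering node 1 (illustrative)
    server 10.0.1.12:80;     # filtering node 2 (illustrative)
}

server {
    listen 80;

    location / {
        proxy_pass http://waf_nodes;
    }
}
```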

This approach allows for significant scaling of system performance, although it has certain limitations. Ensuring optimal performance may require significant computing resources.

The final option in our arsenal is the classic on-premise solution. In this case, we install the WAF management console in the customer's cloud or data center.

Key Takeaways for Express WAF Deployment

WAF is often recommended as a universal solution for protecting web applications. However, each project is unique and requires an individual approach. Our rapid PT Cloud Application Firewall deployment scenario, although effective in many cases, is not a panacea.

For example, banks face compliance restrictions that prevent them from disclosing traffic or sharing certificates and keys with external parties. In such cases, it is impossible to install a filtering node in the cloud.

In addition, when blocking client requests, a WAF may inadvertently transmit users' personal data to a third party. To avoid this, vendors develop masking systems: rules that determine which data must be masked when sent to the management server. This does not affect the effectiveness of attack detection, but it is essential for keeping personal data safe.

Some restrictions also relate to the type of traffic. For example, we usually route UDP media traffic around the WAF. The reason is that a WAF cannot process this type of traffic: it is focused solely on HTTP protection of web applications and APIs.

To sum up, the express deployment scheme for a cloud WAF is best suited to protecting lightweight web applications. Cases with large traffic volumes, architectural and compliance restrictions, a need for detailed policy development, custom reactions, or sending files to an antivirus for scanning are a different matter; they are easier to implement on-premise. However, you do not have to work out these details yourself. As a rule, it is enough to state the task, and we will respond with a ready commercial proposal and a detailed action plan 🙂
