How DDoS attacks on banks are organized. And not only banks. Explained simply

Over the past week, a number of Russian financial institutions have suffered massive DDoS attacks, including RSHB, Raiffeisenbank, Gazprombank and Rosbank. Earlier, the National Payment Card System (NSPK) and the Moscow Exchange were attacked. The attacks caused failures in payment systems, made banking applications unavailable to users and complicated certain transactions.

We also see attacks on fintech and are ready to share our observations: the specifics of the attackers' approach and ways to counter it. Surprisingly, many people do not really understand what is going on beyond the bare fact that a resource is under DDoS attack – how exactly such an attack is organized, even without the technical subtleties and details (which, as always, fall for many into the category of "I didn't understand a thing, but it's very interesting").

Although we increasingly talk about commercial attacks (DDoS-for-hire) returning to the forefront, in this case it is quite clear that the wave of malicious traffic against financial services is planned and politically motivated.

What exactly is happening to banks

From the outside it may look as if the attackers move from one bank to the next once they achieve the desired result. In reality it is not quite so: rather, the whole sector is attacked in parallel, sometimes "hooking" adjacent sectors as well.

This is done for greater damage, media coverage and visibility. Information about future victims is collected in advance; then, at hour X, all of them are attacked at once.

Websites and applications of several banks running unstably for 2-3 days at once is far more noticeable than bank "A" being knocked out completely yesterday, bank "B" today and bank "C" tomorrow.

There is no special attack technique reserved for fintech – the tools are the same for all types of victims, and that is exactly what makes them dangerous: media and logistics companies, for example, are just as much at risk. Attackers hit pre-scouted weak points with mixed attack types: in our experience, it is usually not a single type of attack but a combination of them.

Let's dive into the context

Where would we be without context? It shapes everything. Without it, the whole story boils down to something like "Want me to hit him? He'll turn purple, with speckles." That is roughly how the victim of a DDoS attack ends up looking – at first, perhaps, without even realizing what is happening.

How did we end up in this situation? Let's rewind a couple of years and recall who has been under attack over that period: government and municipal authorities, media, Internet providers, industrial and IT companies, financial institutions (microfinance organizations, banks), educational and entertainment web resources, logistics services and others. And also pizza and sushi delivery services, small local cinemas, local auto parts stores and so on.

The targets are both large, well-known organizations and small regional companies known only in their own area. So it is not only banks that are under attack. It is just that a client unable to make a payment or transfer is far more noticeable than car parts arriving a day late or news that goes unread over morning coffee.

For example, a small attack is enough to disable a small regional media outlet – simply because a site visited by only 20 people per second does not need a large attack. What happens? First, the attacker makes an initial assessment of how popular the web resource is and, based on that, an assumption about its maximum capacity. Over the last two years, with a high degree of probability, several dozen similar media outlets would be selected as targets and attacked at once. As a result, they become unavailable simultaneously. Each outlet may be small, but when it turns out that local media are down in ten neighboring regions, that already makes the news.

The same happens with local Internet providers, reports about which regularly pop up on social networks. The only difference is that the object of the attack is not a website but the infrastructure, because a small provider with a few tens of thousands of subscribers has no spare capacity to speak of. Even legitimate subscribers sometimes complain that such a provider does not always deliver the declared quality and speed (the author of this article has repeatedly seen packet loss of 3-4% persist on such networks for weeks as a sad but cheap version of normal). Pour an extra 200 gigabits of traffic on top of that, or simply take down its DNS servers (of which there are usually two), and everything grinds to a halt.

No frills here – it is enough to know that the target is small and will not withstand even a low-power attack. The organizers then estimate how many such targets their attack capacity will cover. For some types of attacks on targets like these, no highly organized groups were ever required in the first place.

To disable a large target, attackers act differently. First, the target is assessed in exactly the same way, based on the resource's prominence and popularity. This is where the "explained simply" part begins. After an attack, many people naturally ask: "How is this possible? You are big! How can service N have been down for so long?"

If it turns out that the future victim is an organization with a fairly developed infrastructure of its own, additional reconnaissance is carried out, because beyond the publicly visible website a lot of interesting things can be found – all of it based on open data (a rough sketch of such lookups follows after this list):

  • does the future victim have its own AS (autonomous system) and blocks of IP addresses – a couple of whois queries (in the terminal or on any lookup site, such as RIPE's), a few seconds of time, and all the networks and ASes belonging to the company are known;

  • in which data centers the infrastructure may be located – this is where the data from the previous point may come in handy;

  • are there branch offices, and do they interact with the head office;

  • are there any publications with technical details, such as "we have upgraded our computing cluster and moved to generation-X servers of model M" or "we have launched our new cloud in a data center in city N";

  • a study of DNS records is carried out – what exists now and what has been saved in the DNS history;

  • whether geo-blocking mechanisms are in place (checked, for example, through well-known checker sites that test a site's availability from different regions of the world, or with the attackers' own resources);

  • whether the target is protected by any security provider;

  • information on whether the victim has been attacked before, what the outcome was last time, and whether there have been any changes in the infrastructure since then.
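
To make the "open data" point concrete, here is a minimal sketch (in Python) of the kind of lookup described above: a raw WHOIS query (the WHOIS protocol is plain text over TCP port 43) plus a check of what a name currently resolves to. The domain and address used below are documentation placeholders, not real targets.

```python
# A minimal sketch of the open-data lookups described above: a raw WHOIS query
# (plain text over TCP port 43) plus a current A-record check. The domain and
# address below are documentation placeholders, not real targets.
import socket


def whois(query: str, server: str = "whois.ripe.net") -> str:
    """Send one WHOIS query and return the raw text answer."""
    with socket.create_connection((server, 43), timeout=10) as conn:
        conn.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")


def a_records(hostname: str) -> list[str]:
    """Return the IPv4 addresses the hostname currently resolves to."""
    return sorted({info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)})


if __name__ == "__main__":
    print(a_records("example.com"))   # what the public site resolves to right now
    print(whois("192.0.2.1")[:400])   # which network/AS announces this address space
```

The point is not the script itself, but how little effort this stage takes: a few seconds per target, all of it from open data.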

The result is a list of potentially vulnerable infrastructure elements (a website, weak or vulnerable network equipment, communication channels, upstream providers, data centers). This is a fairly valuable body of data at the preparation stage, and it is enough for the core of an organized group to hand those vulnerable elements out as targets to thousands of other participants.

Meanwhile, as the attack goes on, the "researchers" continue analyzing future targets – it has essentially become a continuous process, and only a few dozen people are needed to collect and analyze this data. Notably, attacks often stop the moment the targeted resource gets protection, and do not resume: the very presence of protection pushes the attackers to go looking for an easier target. Why batter a deliberately locked door when there are ten open ones nearby?

How is it organized?

And how does it work at scale?

For this to work, a large number of participants have to be drawn into the attack groups, which requires some coordination. In general, the lower the technical bar for an individual participant, the easier it is to recruit people into such groups. With complicated schemes, the prospective recruit reads something like:

go to that server, install pip there,

then back to the console, now install a snap here,

add more script packages,

go around the mountain, dance the lambada,

hit the drum, let this script go,

stop that script, ping it,

and upload the lists here,

and then go dance around the mountain again

Is this nonsense or not? Yes, it is nonsense. A hacktivist reading it will say: "It's all Greek to me. I didn't understand a thing, but it's very interesting! I'm with you in spirit!" The odds of recruiting someone are entirely different when the instructions read: "buy a VDS here for 5 bucks a month, copy and run these two commands, and you're in business with us!" Obviously, a person who wants to get involved will join the group of attackers far more readily in the second case.

It is worth mentioning separately that such groups also have their own attack tools and documentation on how to set them up. Previously, these were scattered groups of hacktivists posting in various channels and public groups: "Shall we attack companies A, B, C, D and E today?! — Let's do it, I'm in!" The approaches varied: some launched their own scripts, others made simple websites that hacktivists merely had to visit (as always: don't follow unfamiliar links, don't open suspicious content in emails), after which a JS script would run in the browser and launch the attack from the user's device.

Now it looks different. This is no longer dozens and hundreds of scattered groups, but organized activity. No more "Shall we…?! – Let's…!" Target lists are now loaded automatically into a pre-prepared toolkit, and group members are only required to provide attack capacity and configure the toolkit on it following simple instructions. The messages we now see in public channels are reports of successfully completed attacks (the results of which are, of course, sometimes embellished).

But what can a core of just a few people do? Quite a lot, in fact. One person is enough to see, in 10-20 minutes at most, what is interesting about a potential victim and how well it fits the role: to discard candidates that are clearly well fenced off, and to spot the ones that do not really keep their own infrastructure in order – an excellent candidate for the role of the next victim.

The bottom line: the toolkit runs 24/7. The core of the group finds victims on the fly and issues targeting on the fly. Targets are distributed automatically, without human involvement – there is no risk that, say, 2 thousand participants out of 5 thousand forgot to update something and the attack network's capacity dropped. The result is destructive attacks with an unpredictably shifting vector and abruptly changing targets. And all of it without a lunch break.

What to do?

Well, besides the obvious “get some protection.”

Prepare your defense in advance, before an attack happens. When trouble comes, it will greatly reduce the damage and save nerves, effort and money.

How can you try to protect yourself? Be independent, strong and stable. Pros – you are independent, strong and stable. Cons – it's expensive. Very expensive.

On a slightly more serious note, here are measures you can implement on your own.

  • Review your DNS records. The most obvious, simple and budget-friendly measure. DNS is, in essence, a public directory of your services and resources. Many are too lazy to separate records for internal and external services, so DNS turns into a map of what sits where: here is the task tracker, here are the NTP servers, next to them the corporate cloud in the form of Nextcloud. And here is our pride – office access points bought at a sale 10 years ago for next to nothing. It was a good deal! In practice, exposing a "map" of the internal infrastructure to the outside leads to nothing good. If that NTP server or DNS resolver really does need to sit on external addresses, protect it at the firewall level with a list of IPs from which requests will legitimately come from your own services – you hardly need to serve those requests from other people's networks. And do not forget about DNS history: extra services may not be visible right now, but perhaps they were in the past, which is one more reason to double-check them. If an attacker would otherwise have to spend time scanning your entire network to find out whether a service exists, exposing internal services simply saves him that time. (A rough check for this kind of exposure is sketched after this list.)

  • Access restrictions based on geography, a.k.a. geo-blocking. It works – to an extent. On the one hand, the attack volume your services have to process can be reduced significantly this way, and there is no need to accept requests from penguins in Antarctica if you have no client base there. On the other hand, it does not fully work: your clients may be spread across a large part of the world, and they cannot be blocked. There are also false positives, depending on the databases used. And attackers can spot geo-blocking at the reconnaissance stage, after which they will try to use capacity with the "correct" geography for that particular attack. As said earlier: it works, but it is not a panacea.

  • Separate internal and external infrastructure. This overlaps somewhat with the DNS point. If part of the infrastructure exists for employees (inside the office network, or for remote workers over a corporate VPN), there is no need to tie everything to the same DNS server, or to one web server that serves both external and internal services. In the worst case – going down under attack – the internal services go down too: it's down outside, it's down inside, work stops for both clients and employees, and the technical staff scramble to help internal and external users at the same time.

  • Divide resources by functionality to reduce the blast radius when something goes wrong. Split them into different domains by function, or along the "browser vs. mobile client" line. A classic example: an API living at the root of the main site. The result is an explosive mixture of browser traffic (live people), automated requests (mobile applications) and visits to that API by assorted robots, legitimate and otherwise – and merely trying to separate these categories becomes harder. Splitting resources that handle different traffic categories across different domains (and different servers) greatly simplifies countermeasures: everything arriving at a given server is of the same type, and we definitely do not expect traffic to this API that looks different from mobile application traffic.

  • Get adequate computing capacity – both servers and communication channels. For protection measures to work, you must first be able to hold up at all. If a server is sized to handle a thousand requests in normal operation and 60% more at peak, then talking about countering an attack on your own is, by and large, pointless. Just as it is useless to rely on fine-tuning and protection systems on the end hosts when you have a 3 Gbit/s channel and the attack is 5 Gbit/s: the channel will simply be flooded. Can you buy 5 Gbit/s so that it all fits? Yes. You can buy 10, or 20. But against the background of modern attack volumes this is negligible, and the attack will most likely always exceed your channel's capacity – while buying 100-400 Gbit/s is already far too expensive for an ordinary, even if large, company.

  • The communication link. One of the most significant factors, and the details matter here. If your normal traffic is 0.5 Gbit/s and your maximum capacity is 2-3 Gbit/s, that is no guarantee the reserve will be enough even with specialized protection tools. For your protection measures to kick in, the traffic has to reach your systems – all of it – and there must still be some channel capacity left so the service does not degrade. With a channel of 5, 10 or even 20 Gbit/s, it is physically impossible to receive a 700 Gbit/s flow. On the other hand, even with protection tools in place, something may leak through: out of those 700 Gbit/s, a couple of gigabits may get past the filters. In absolute terms that is not much, and in percentage terms it is a pittance – 0.2-0.4%. But on a channel of those same 2-3 Gbit/s, that fraction of a percent can be a death sentence: the quality of service starts to degrade simply because the channel is saturated, even though the end servers could digest such garbage without breaking a sweat, especially with fine-tuning done beforehand. As always, engineers face the very real task of finding a rough balance in the system – and, yes, some upfront costs may follow from that.

  • Make an action plan for when hour X comes – down to the list of commands and operations to run on each specific system. That is, whatever can actually help you resist from a technical standpoint: mass firewall bans of IPs seen in the attack, enabling blocking by geography or AS (sometimes an attack comes from a single AS and within one country). Where does that list of attacking IPs come from? This, too, needs to be thought through in advance: which logs to extract it from – for example, what passed through the proverbial fail2ban (which, of course, should also be in place well before the attack) – and how to do it quickly. (A rough sketch of pulling such a ban list from an access log follows after this list.) The plan can also include a set of actions for the technical support service, to help the clients suffering along with you as much as possible.

  • As a more specific part of the plan – research professional protection solutions in advance. Just in case. So that there is at least a rough understanding of what to do if hour X comes and a decision is made to bring in serious protection. Not knowing what to do to protect yourself even when you already have a protection service at hand – that happens too.
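
To illustrate the DNS point from the first item above, here is a minimal sketch of an exposure check: it resolves a list of names from your zone in public DNS and flags anything that answers with a private address. The hostname list is a hypothetical example – feed it your own zone's names.

```python
# A minimal sketch of the DNS exposure check mentioned in the first item above.
# The hostname list is a hypothetical example; substitute names from your own zone.
import socket
import ipaddress

HOSTNAMES = [
    "www.example.com",      # public site - expected to be exposed
    "tracker.example.com",  # internal task tracker - should not be in public DNS
    "ntp1.example.com",     # NTP - if public, restrict by firewall allow-list
    "cloud.example.com",    # corporate Nextcloud
]


def public_addresses(hostname: str) -> list[str]:
    """Return the IPv4 addresses a name resolves to in public DNS, or [] if none."""
    try:
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})


for name in HOSTNAMES:
    addrs = public_addresses(name)
    if not addrs:
        print(f"{name}: not in public DNS (good for internal-only services)")
        continue
    for addr in addrs:
        kind = "PRIVATE" if ipaddress.ip_address(addr).is_private else "public"
        # A private address in public DNS still leaks your internal layout.
        print(f"{name}: {addr} ({kind})")
```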
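And a rough sketch of the "hour X" ban-list extraction mentioned in the planning item above. It assumes a common/combined access-log format where the client IP is the first field; the threshold and the "ddos_block" ipset name are arbitrary placeholders to adapt to your own logs and firewall tooling. It prints commands rather than applying them, so the list can be reviewed first.

```python
# A rough sketch of pulling a ban list out of an access log for the "hour X" plan.
# Assumes a common/combined log format where the client IP is the first field;
# the threshold and the "ddos_block" ipset name are placeholders.
import sys
from collections import Counter

THRESHOLD = 1000        # requests per analyzed log slice that we treat as abusive
LOGFILE = "access.log"  # e.g. a freshly rotated nginx/apache access log

counts = Counter()
with open(LOGFILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split(maxsplit=1)
        if parts:
            counts[parts[0]] += 1  # first field = client IP in common/combined formats

offenders = [ip for ip, n in counts.most_common() if n >= THRESHOLD]

# Emit firewall commands instead of applying them, so the list can be reviewed first.
for ip in offenders:
    print(f"ipset add ddos_block {ip}")

print(f"# {len(offenders)} addresses above threshold", file=sys.stderr)
```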

So, can we now descend on the local system administrators in a crowd, arguments at the ready, waving an ultimatum of a plan to fix everything? Well, no – spare the people. Their job is somewhat different: they can provide the basic security mechanisms, but fighting a bulldozer with nothing but a fire bucket in hand is not very effective.

What if the attack has already begun?

  1. Do not panic.

  2. Do not panic.

  3. Do not panic.

  4. Follow a pre-planned counteraction plan.

  5. Watch where the defense fails and where vulnerable spots are exposed. If it does not help now, it will at least help later: if uninvited guests have come once, they will drop by again.

  6. If things still go wrong, the measures taken have little effect and protection was not deployed in advance, it is worth asking whether it is time to turn to those who specialize in protection. After all, in the fight against malicious code we have long relied on antivirus software; the new reality of DDoS attacks is such that there is no shame in considering specialized means of countering this threat either.
