Why Don’t We Block DDoS Attacks?

We are often asked if BotGuard protects against DDoS attacks. The short answer is no. Or more correctly, it depends. Here is a longer explanation that will help you understand why.

A Distributed Denial of Service (DDoS) attack is an attempt to take down a targeted website, or slow it to a crawl, by flooding the network, server, or application with fake traffic. Anyone with malicious intent, whether financial, ideological, or, sadly, just for fun, can damage an organization by launching a DDoS attack against it.

Malicious traffic is generated by bots running on dedicated servers, compromised home routers, vulnerable consumer electronics, and malware-infected computers. A DDoS attack uses an army of such zombie devices, called a botnet. When the attack is launched, the botnet floods the target and depletes the application's resources. A successful DDoS attack can prevent users from accessing a website, or slow it down enough to increase the bounce rate, resulting in financial losses and performance issues.

At BotGuard, we help our customers mitigate bad traffic generated by hackers, spammers, and content thieves using malicious bots, crawlers, and scrapers. DDoS attackers also use bots, so it is natural to wonder why our service does not always protect against DDoS.

The secret is in the details. There are several types of attacks, including volumetric, protocol-based, and connection-oriented ones, and whether we can help depends on the type. Our service effectively filters out various bots to prevent malicious HTTP requests from overloading the web application; however, it cannot stop an attack on the lower network layers, nor is it effective against a raw flood of thousands of malicious requests.

The main reasons we don’t work with DDoS attacks are as follows:

  1. Our system is not designed to protect against volumetric attacks. To process traffic, it must be received at some point. Traditional systems receive the traffic on their side and forward the filtered traffic to the client. We instead integrate our cloud protection at the web server level and mirror suspicious requests to the closest BotGuard node for analysis. With this architecture, traffic arrives directly on the client's web server, so at that point it is impossible to prevent an overload of the hosting provider's network. When a volumetric or low-level network attack hits, it is often simplest for the hosting provider to shut down your server or website to protect other customers. In any case, the hosting provider must have other means to defend against such attacks.
  2. Our system is not designed to protect against low-level network attacks. This category of attacks (UDP, ICMP, or TCP SYN floods, etc.) relies on simplicity and sheer volume: a mass of packets overloads the OS network stack. Because we operate at a higher network layer, such an attack chokes the web server before requests ever reach our integration module. Mitigating these floods calls for algorithmically simpler but high-throughput solutions; we specialize in more “subtle” and less noticeable hazards.
  3. Our automated service is not designed to protect against complex attacks. In difficult cases, the attack is carried out by highly trained criminals, and it comes down to a well-known game: the attacker exploits weaknesses in the system, while security specialists reinforce those weaknesses in real time. This usually requires a dedicated team engaged in fighting the specific attack; they watch what is happening live and tweak the various filtering parameters. The attacker, in turn, sees the strategy stop working and tweaks the attack parameters. Thus they continue to have fun until one of the parties gets bored. Clearly, this requires dedicated monitoring staff and is outside the scope of our automated service.
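The integration model described in point 1 can be sketched roughly as follows. This is a toy illustration only, not BotGuard's actual code or API: the `MirroringMiddleware` name and the idea of forwarding just request metadata to an analysis callback are assumptions made for the example.

```python
# Toy sketch of web-server-level integration: the application keeps
# receiving traffic directly, while a copy of each request's metadata
# is handed to an analysis callback (standing in for an analysis node).
# All names here are illustrative, not a real BotGuard API.

class MirroringMiddleware:
    """WSGI middleware that mirrors request metadata for analysis."""

    def __init__(self, app, mirror):
        self.app = app        # the protected web application
        self.mirror = mirror  # callback standing in for the analysis node

    def __call__(self, environ, start_response):
        # Forward only lightweight metadata; the client is served
        # directly by the application either way.
        self.mirror({
            "remote_addr": environ.get("REMOTE_ADDR", ""),
            "path": environ.get("PATH_INFO", "/"),
            "user_agent": environ.get("HTTP_USER_AGENT", ""),
        })
        return self.app(environ, start_response)
```

The key property this sketch shows is that traffic still terminates on the customer's own server; the analysis side only ever sees a copy, which is why a volumetric flood saturates the hosting network before any filtering verdict can help.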

It is worth adding that quite often website owners and administrators believe they are under attack when receiving tens to hundreds of requests per second. At that volume this is not a DDoS attack at all; the real issue is that some parts of web applications are much more sensitive to increased load. For many applications, even such a small number of requests aimed at a target such as a product search form is a big problem. In these cases, we are happy to help.
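As a rough illustration of how a sensitive endpoint such as a search form can be shielded from modest bursts, a common generic remedy is a per-client token bucket. This is a minimal sketch under our own assumptions, not part of BotGuard's service; the capacity and refill numbers are arbitrary.

```python
import time

class TokenBucket:
    """Per-client token bucket: allow short bursts, cap the sustained rate."""

    def __init__(self, capacity=10, refill_per_sec=2.0, now=time.monotonic):
        self.capacity = capacity            # maximum burst size
        self.refill_per_sec = refill_per_sec
        self.now = now                      # injectable clock, eases testing
        self.tokens = float(capacity)
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.refill_per_sec)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Placed in front of something like a product search form (one bucket per client IP), a limiter of this shape throttles a handful of abusive clients without touching the rest of the site.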

Do you have any suggestions or questions? Please add a comment to this post.