By Andy Still, CTO and Co-Founder, Netacea
Web Application Firewalls (WAFs) have been a stalwart of cybersecurity since their introduction in 1999.
They have been, until recently, a very effective way of keeping certain attacks at bay: SQL injection, session hijacking, cross-site scripting and buffer overflows. A WAF is, essentially, a bouncer: it examines each data packet sent to an application, and if the packet doesn't meet the rules set down, or is blacklisted, it's barred from entry.
It’s an effective tool, especially against zero-day exploits. Undiscovered or unpatched vulnerabilities give hackers an easy way in. Using a WAF prevents this type of exploit—again, like a bouncer, but one that’s watching the side door as well as the main entrance.
If the data packets are the type of traffic that could cause problems, they're sent away. The WAF doesn't need to know that a vulnerability exists to turn away an SQL injection attack; it knows that such requests are trouble.
The problem with WAFs
WAFs have, unfortunately, become vulnerable to the sophisticated attack techniques that have emerged in recent years. While they still send many troublemakers on their way, their bouncer instincts are often fooled by the equivalent of three children in a trench coat attempting to get into a nightclub.
They may look tall enough to get into an age-restricted establishment, but nobody should really be fooled. The WAF’s inflexible rules for entry can fall foul of such manipulation, letting the trio wobble in without a problem.
WAFs are designed to spot illegitimate requests that are aimed at exploiting security weaknesses in a web application. To get around this, many attacks now look legitimate, and seek to exploit the business logic of a website rather than expose a database or exploit a security weakness.
An SQL injection attack, for example, can use a search box or similar on a website to try to manipulate the database behind the application, either corrupting the information held or letting the hacker view information that should be hidden, such as usernames, dates of birth and even passwords. A WAF prevents this by inspecting what is entered into these input fields and blocking anything that could affect the database directly.
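To make the bouncer analogy concrete, here is a minimal sketch of the kind of signature-based screening a WAF applies to input fields. The patterns and function names are illustrative only, not a real rule set:

```python
import re

# Hypothetical, heavily simplified SQL injection signatures.
# Real WAF rule sets (e.g. the OWASP Core Rule Set) are far larger.
SQLI_SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # UNION-based data extraction
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),     # classic tautology ("OR 1=1")
    re.compile(r"(?i);\s*drop\s+table\b"),     # stacked-query destruction
]

def looks_like_sqli(field_value: str) -> bool:
    """Return True if the input matches a known injection signature."""
    return any(sig.search(field_value) for sig in SQLI_SIGNATURES)

print(looks_like_sqli("' OR 1=1 --"))           # flagged and barred
print(looks_like_sqli("cheap flights to NYC"))  # waved through
```

Note what this does and doesn't do: it turns away requests that look like known trouble, but a request containing only a normal search phrase sails straight past it.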
But what if the information being entered into these fields is legitimate, and instead of manipulating the database, it manipulates how the business works? A good example of this is an attack on airlines known as “seat spinning”.
The attacker uses bots to put seats for a flight in an online sales basket but doesn’t check out. This raises the price of seats for any legitimate user. Luckily, there are third-party sites where seats can be bought at a more reasonable price. Who runs those sites? It’s the attacker, making a tidy profit from those inflated prices.
In this instance, none of the WAF's rules are being broken, so as far as it's concerned, everything is fine. Another solution is needed to deal with the problem bots bring.
WAFs are just one part of the solution
The machines have taken over: automated bots account for more than half of the world’s web traffic, and while some bots are used for a legitimate purpose, such as search engine spiders, the majority are malicious.
The overwhelming majority of login attempts, an estimated 90%, are fraudulent. A WAF will successfully block most volumetric, brute force attacks, but it won't prevent the more sophisticated attempts to take over accounts or subvert business logic. WAFs are simply not designed to solve this problem.
Trying to create rules to prevent such attacks is also likely to block legitimate traffic. For example, rules that block specific IP addresses only stop bots for a short while and can create problems for legitimate users. Rate limiting, a way to prevent too many requests, is easily beaten by bots that probe for the limit and make sure they stay just under it.
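The rate-limiting weakness can be sketched in a few lines. The numbers below are hypothetical; the point is that once a bot has probed the threshold, pacing itself beneath it is trivial:

```python
# Hypothetical server-side limit discovered by the bot through probing.
RATE_LIMIT = 100            # requests per minute before the limiter fires
BOT_TARGET = RATE_LIMIT - 5  # the bot paces itself safely under the limit

def bot_delay_seconds() -> float:
    """Spacing between requests that keeps the bot below the threshold."""
    return 60.0 / BOT_TARGET

# 95 requests per minute, indefinitely: the limiter never triggers, yet
# the bot is still fast enough to hold seats or scrape prices at scale.
print(f"bot sends one request every {bot_delay_seconds():.2f} seconds")
```

A fixed threshold turns the defence into a published speed limit: anything cruising just below it looks, to the limiter, exactly like a careful human.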
There needs to be a more sophisticated analysis of traffic patterns on a site to identify bots. Businesses need to know, for every unique visitor to their website, if this is a person or a bot. But they need to know more than that—they need to know the bot’s intent.
There are patterns of bot behaviour that can be identified and used to stop those that have malicious intent. These patterns are not necessarily determined through basic rules but need more sophisticated analysis.
WAFs are not doomed. They are an integral part of a website's security, but they cannot act alone. The bouncer needs backup. Walk into any casino and, while there will be security around, there are also cameras looking for those who are trying to subvert the system by counting cards, or who are known to have tried such tricks in other casinos.
Websites need to be less like nightclubs and more like casinos—employing WAF bouncers, but also making sure they identify which of their users are out to play the system.
About the author
Andy Still is a pioneer of digital performance for online systems. As Chief Technology Officer, he leads the technical direction for Netacea’s products, as well as providing consultancy and thought leadership to clients. Andy has authored several books on computing and web performance, application development and non-human web traffic.
Netacea, a bot detection and mitigation platform, takes a smarter approach to bot management and is a recognised leader for its innovative use of threat intelligence and machine learning. Netacea’s Intent Analytics™ engine analyses web and API logs in near real-time to identify and mitigate bot threats. This unique approach provides businesses with transparent, actionable threat intelligence that empowers them to make informed decisions about their traffic.