The escalating intensity of global bot traffic and the increasing severity of its overall impact mean that dedicated bot management solutions are crucial to ensuring business continuity and success. This is particularly true since more sophisticated bad bots can now mimic human behaviour and easily deceive conventional cybersecurity solutions or bot management systems.
Addressing highly sophisticated and automated bot-based cyberthreats requires deep analysis of bots’ tactics and intentions. According to Forrester Research, “Attack detection, attack response and threat research are the biggest differentiators. Bot management tools differ greatly in their detection methods; many have very limited — if any — automated response capabilities. Bot management tools must determine the intent of automated traffic in real time to distinguish between good bots and bad bots.”
When selecting a bot mitigation solution, companies must evaluate the following criteria to determine which best fits their unique needs.
Basic Bot Management Features
Organizations should evaluate the range of possible response actions — such as blocking, rate limiting, serving fake data to outwit competitors, and taking custom actions based on bot signatures and types.
Any solution should have the flexibility to apply different mitigation approaches to various sections and subdomains of a website, as well as the ability to integrate with only a certain subset of pages of that website — for example, a “monitor mode” with no impact on web traffic, which gives users insight into the solution’s capabilities during a trial before real-time active blocking is enabled.
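A per-section policy with a monitor mode can be sketched roughly as follows. This is a minimal illustration, not any vendor’s API; the path patterns, mode names and actions are all invented for the example.

```python
# Hypothetical sketch: per-section mitigation policies with a "monitor" mode.
# Paths, modes and action names are illustrative only.

from dataclasses import dataclass


@dataclass
class Policy:
    mode: str  # "monitor" logs only; "block" actively denies bot traffic


POLICIES = {
    "/checkout": Policy(mode="block"),   # protect transactions immediately
    "/blog": Policy(mode="monitor"),     # observe traffic with zero impact
}


def handle(path: str, is_bot: bool) -> str:
    """Return the action taken for a request flagged (or not) as a bot."""
    policy = POLICIES.get(path, Policy(mode="monitor"))
    if not is_bot:
        return "allow"
    if policy.mode == "monitor":
        return "log-only"  # no effect on live traffic during a trial
    return "block"


print(handle("/blog", True))      # bot on a monitored section: log-only
print(handle("/checkout", True))  # bot on a protected section: block
```

The point of the dispatch-by-path structure is that a trial can run in monitor mode on low-risk sections while blocking stays active where it matters most.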
Additionally, any enterprise-grade solution should integrate with popular analytics dashboards such as Adobe Analytics or Google Analytics to provide reports on nonhuman traffic.
Capability to Detect Large-Scale Distributed Humanlike Bots
When selecting a bot mitigation solution, businesses should try to understand the underlying technique used to identify and manage sophisticated attacks such as large-scale distributed botnet attacks and “low and slow” attacks, which attempt to evade security countermeasures.
Traditional defences lack the detection capabilities needed to counter such attacks. Dynamic IP attacks render IP-based mitigation useless, and a rate-limiting system without behavioural learning ends up dropping real customers when attacks happen. WAFs and rate-limiting systems that are bundled or sold with content delivery networks (CDNs) are often incapable of detecting sophisticated bots that mimic human behaviour.
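The dynamic-IP problem can be seen in a toy example. Below is a deliberately naive fixed-window, per-IP rate limiter; the limit and the traffic are invented for illustration. A single-IP bot is mostly blocked, but the same volume spread across rotating IPs passes untouched.

```python
# Minimal sketch of why pure per-IP rate limiting fails against
# dynamic-IP botnets. Threshold and traffic are invented.

from collections import Counter

LIMIT = 5  # max requests per IP per window (illustrative)


def filter_requests(request_ips):
    """Drop requests from any IP exceeding LIMIT within this window."""
    seen = Counter()
    allowed = []
    for ip in request_ips:
        seen[ip] += 1
        if seen[ip] <= LIMIT:
            allowed.append(ip)
    return allowed


# One bot sending 100 requests from a single IP is mostly blocked...
single_ip = ["203.0.113.9"] * 100
print(len(filter_requests(single_ip)))  # only 5 get through

# ...but the same 100 requests spread over 100 rotating IPs all pass.
rotating = [f"203.0.113.{i}" for i in range(100)]
print(len(filter_requests(rotating)))  # all 100 get through
```

This is why IP-centric controls must be paired with signals that survive IP rotation, such as fingerprinting and behavioural analysis.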
The rise of highly sophisticated human-like bots in recent years requires more advanced techniques in detection and response. Selection and evaluation criteria should focus on the various methodologies that any vendor’s solution uses to detect bots, e.g., device and browser fingerprinting, intent and behavioural analyses, collective bot intelligence and threat research, as well as other foundational techniques.
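One simple behavioural signal, assuming access to per-session request timestamps, is timing regularity: scripted “low and slow” clients often request at metronomic intervals that stay under rate limits, while human inter-request gaps vary widely. The threshold and sample sessions below are invented for illustration; real engines combine many such signals.

```python
# Toy behavioural check: flag sessions whose inter-request intervals
# barely vary. The 0.5s jitter threshold is an invented example value.

from statistics import pstdev


def looks_scripted(timestamps, min_jitter=0.5):
    """Flag a session whose inter-request gaps are suspiciously regular."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False  # not enough evidence to judge
    return pstdev(gaps) < min_jitter


bot = [0.0, 30.0, 60.0, 90.1, 120.0]  # slow enough to evade rate limits
human = [0.0, 2.1, 40.5, 41.0, 95.3]  # bursty and irregular

print(looks_scripted(bot))    # True: near-constant 30s intervals
print(looks_scripted(human))  # False: high variance in gaps
```

A signal like this catches traffic that defeats rate limiting precisely because it is slow, which is why behavioural analysis is listed alongside fingerprinting as a core evaluation criterion.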
A Bot Detection Engine That Continuously Adapts to Beat Scammers and Outsmart Competitors
- How advanced is the solution’s bot detection technology?
- Does it use unique device and browser fingerprinting?
- Does it leverage intent analysis in addition to user behavioural analysis?
- How deep and effective is the fingerprinting and user behavioural modelling?
- Does it leverage collective threat intelligence?
Any bot management system should accomplish all of this in addition to collecting hundreds of parameters from users’ browsers and devices to uniquely identify them and analyze their behaviour. It should also match the deception capabilities of sophisticated bots. Ask for examples of sophisticated attacks that the solution was able to detect and block.
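The core idea of device and browser fingerprinting can be sketched as hashing a canonical form of the client-reported parameters into a stable identifier. The parameter names below are examples only; real engines combine hundreds of signals and far more robust matching than a plain hash.

```python
# Illustrative sketch of fingerprinting: hash a canonicalized set of
# client-reported parameters into a stable identifier. Parameter names
# are invented examples, not a real solution's signal set.

import hashlib
import json


def fingerprint(params):
    """Derive a stable short identifier from browser/device parameters."""
    canonical = json.dumps(params, sort_keys=True)  # order-independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]


client = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080",
    "timezone": "Asia/Kolkata",
    "fonts_hash": "a91f",
    "webgl_renderer": "ANGLE (...)",
}

# The same parameters always map to the same ID, regardless of key
# order, so repeat visits can be linked even when the IP changes.
assert fingerprint(client) == fingerprint(dict(reversed(list(client.items()))))
print(fingerprint(client))
```

Because the identifier is derived from the device rather than the network, it survives the IP rotation that defeats address-based blocking.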
Impact on User Experience — Latency, Accuracy and Scalability
Website and application latency creates a poor user experience. A bot mitigation solution shouldn’t add to that latency; ideally, it should help identify and resolve the issues that cause it.
Accuracy of bot detection is critical. Any solution must not only distinguish good bots from malicious ones but also enhance the user experience by allowing authorized bots from search engines and partners. Maintaining a consistent user experience on sites such as B2C e-commerce portals can be difficult during peak hours, so the solution should scale to handle spikes in traffic.
Keeping false positives to a minimum so that user experience is not affected is equally important. Real users should never have to solve a CAPTCHA or prove that they’re not a bot. An enterprise-grade bot detection engine should have deep-learning and self-optimizing capabilities to identify and block constantly evolving bots that alter their characteristics to evade detection by basic security systems.
The author is Managing Director, India, SAARC & Middle East at Radware.