The toughest test out there
Unlike other evaluations that fail to factor in every element of real-world attacks, the SE Labs methodology uses a comprehensive testing model. SE Labs sets up real networks and hacks them the same way threat actors do, while monitoring how successful those attacks are (or in this case, aren’t) against the security solutions they’re testing. The tests score solutions based on detection, prevention, response and remediation, along with their ability to neutralize threats, including living-off-the-land (LOTL) attacks. Just like real life.
In our case, both solutions were subjected to real-world attack scenarios under conditions that can only be described as harsh. SE Labs mounted attacks from 15 known ransomware families—not just the two or three well-known threats other testing regimens rely on. They assaulted both products with 556 ransomware payload files comprising both known and unknown variants. (In fact, two-thirds of the files were unknown malware variants.)
The attacks fell into two categories: Direct Attacks and Deep Attacks. Direct Attacks pit the product against a wide distribution of malware within a relatively short attack chain—a realistic representation of many attacks that target organizations of all sizes, and a strong test of a prevention tool's ability to stop known and unknown threats. Deep Attacks are modeled after more sophisticated approaches in which attackers attempt to infiltrate an environment and move laterally to deliver their ransomware payload, lurk unseen within the environment or inflict their damage in other ways. Defending against these scenarios requires a product to use detection and response capabilities to track an attacker's movement throughout the entire attack chain; successful products shut down the attack before it does damage.
SE Labs examines not only whether a product detected the ransomware, but also whether it had deep insight into the entire process of how the network was hacked. “This level of visibility,” Edwards observes, “would be a significant advantage for a security professional who is battling a persistent attacker in real time.” If that doesn’t sound like real life for a security analyst, I don’t know what does.
What’s more, SE Labs measures how accurately a product classifies legitimate applications and URLs, while factoring in the interactions the product has with users. A perfect score goes only to those solutions that properly classify legitimate objects as safe—or that properly choose not to classify them at all. “In neither case,” notes the SE Labs report, “should it bother the user.” For alert-weary SecOps teams, that kind of silence really is golden.