
Who “Wins” the MITRE ATT&CK Evaluations?

Last week, results were released for the fourth round of MITRE Engenuity ATT&CK Evaluations. There were 30 participating vendors this year. Cybereason achieved perfect scores in nearly every aspect of the MITRE evaluations.

“This latest round indicates significant product growth from our vendor participants. We are seeing greater emphasis in threat-informed defense capabilities, which in turn has developed the infosec community’s emphasis on prioritizing the ATT&CK framework,” said Ashwin Radhakrishnan, acting General Manager of ATT&CK Evals, MITRE Engenuity, in a blog post.

Unfortunately, once the results are published, it quickly becomes clear that the primary benefit of the MITRE evaluations can also be their Achilles heel.

MITRE Does Not Rank Vendors

There is tremendous value in the MITRE evaluations—both for the vendors that participate and for customers who want an objective way to assess and compare solutions.

The point of the ATT&CK framework and conducting these annual evaluation exercises is to understand how cybersecurity solutions perform against emulated real-world threats. It provides participating vendors with valuable, unbiased feedback they can use to adapt and improve, and it provides prospects and customers with helpful insight for selecting effective solutions to protect their environments.

The evaluations are open and fair. There is no “winner” in that MITRE does not rank vendors or make any determination about the results. MITRE remains completely transparent and objective, making the emulation plans and evaluation results freely accessible.

Mental Gymnastics

Of course, just because MITRE doesn’t declare a winner doesn’t mean that participating vendors or cybersecurity media won’t try.

The volume and detail of data shared by MITRE are impressive, but they also enable vendors to contort, spin, and perform mental gymnastics with the information, framing the results in some unique way that focuses only on the areas where they did well to make it appear they "won."

It reminds me of some of the statistics in sports. Baseball players strive to have a high batting average, and one player usually leads in that category. But it’s all in how the data is massaged. A player who is not even in the top 100 for batting average overall will step up to the plate, and the announcers will say he has the highest batting average… against left-handed pitchers while playing a home game on a Thursday.

It is technically a true statement. It’s just a true statement that is both irrelevant and misleading. A franchise looking to sign a player is not interested in a player with a poor overall batting average who happens to perform well in a very specific and uncommon scenario. They want to sign the player with the best overall batting average.

The same is true for organizations comparing cybersecurity solutions.

Results Speak for Themselves

It goes against the spirit of the MITRE ATT&CK Evaluations to cherry-pick or spin results in this way. MITRE makes all of the data available so the results can speak for themselves.

The fact is, you don't need an advanced degree in mathematics to determine which vendors performed better than others; you can simply look at the results shared by MITRE. Cybereason achieved the best results in the history of the MITRE ATT&CK Evaluations. No spin or creative manipulation, just the highest percentages and results compared to any other participating vendor.

See For Yourself

You should go check out the results on the MITRE website for yourself rather than listening to the marketing spin from individual vendors. If you do want to dig into the complexity of the MITRE ATT&CK Enterprise Evaluations, though, Cybereason is hosting a webinar on April 7th to walk through this year's results and share the value of mapping to the MITRE ATT&CK framework. You can register here.

