Breaking (Known) Bad


The last 20 years of security have been built on a single model that has changed very little. The model still has its place and serves an important purpose. On the other hand, the security market has grown too reliant on this model—almost like a drug addiction—to the exclusion of new thinking and approaches that are better suited to the challenges all organizations face from internal and external attackers.

The incumbent model of security is one of “known bad.” It is predicated on the idea that threats have identifiable signs that can be flagged to stop a particular exploit. Such signs include technical artifacts such as a signature, hash, Indicator of Compromise (IOC), URL or domain, or a defined behavior for a software routine or application.
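At its simplest, a known bad check is a lookup of an artifact against a blocklist. The sketch below is illustrative only, not any vendor's implementation; the blocklist entry is a placeholder hash, and `is_known_bad` is a hypothetical name.

```python
import hashlib

# Hypothetical blocklist of known-bad SHA-256 digests.
# The entry below is a placeholder, not a real malware hash.
KNOWN_BAD_HASHES = {
    "0" * 64,
}

def is_known_bad(path: str) -> bool:
    """Return True if the file's SHA-256 digest appears in the blocklist."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest() in KNOWN_BAD_HASHES
```

The key property, and the key limitation, is visible in the code: detection only works for artifacts that are already in the list.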

The known bad model is essentially reactive and requires an exploit to first be tried on potential victims. Once the exploit is discovered, security researchers find a technical artifact that identifies it and add that definition to their software tool to block it in the future. The first victims end up compromised, but, ideally, this process occurs quickly and definition updates can be rapidly created, distributed, and applied across the customer base for that product. The majority should then benefit from protection sometime after the initial discovery.

Technical advances have continued to improve the known bad model, particularly in shrinking the discovery-to-protection gap toward zero and limiting the window during which most users remain exposed. Sandboxing has also provided a jump forward by examining a piece of software in a safe, quarantined environment against a list of pre-determined behaviors or outcomes.

While there are obvious drawbacks and deficiencies to this type of known bad approach to security, it has served and continues to serve an important role. The biggest problem is that the known bad model is really only suited to malware and does very little to detect and stop a targeted attack run by a real threat actor. When organizations put all of their “eggs” in the “known bad security basket,” the result is what largely exists today: network attackers going unnoticed for an average of five months, with successful data breaches that put their victims on the front page of newspapers and cost them millions of dollars.

The known bad approach may perhaps find signs of an active attack by discovering malware that a bad actor has used, but it is extremely unlikely that this malware will be recognized as one piece of a much larger campaign being actively directed by a cybercriminal or hacker. In one proof-of-concept evaluation we performed for a prospective customer, we discovered that an attacker had been hiding in their network for well over a year. Earlier that year, the customer had actually found a piece of malware the attacker used but had no idea that it was connected to a targeted attack. They removed the malware, thinking the job was done.

The only effective way to find an active attacker on the network is through their operational activities—in particular, the “east-west” movements of reconnaissance and lateral movement, which an attacker must perform to understand an unfamiliar network and to position themselves to reach valuable assets. Such activities can be detected using a model of “known good” or “normal,” such as behavioral attack detection.

The known good model assumes nothing other than that an attacker will eventually find their way into your network. Beyond that, there are no preconceived or defined signs or conditions to check against. Good or normal is not defined in advance; it has to be learned from the real users and devices and their usual activities within a particular network. Every company and every network is different. The learning produces continuously updated profiles of users and devices. From these profiles, one can detect anomalies, and with advanced machine learning it is possible to refine the anomalies into those with a high likelihood of being malicious. Machine learning can also recognize that multiple events may be associated, providing greater confidence in quickly and accurately detecting an attack.
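As a rough illustration of this learn-then-detect idea (a minimal sketch under simplifying assumptions, not LightCyber's actual algorithm), one could baseline which internal hosts each account normally touches and flag sessions that fan out to many never-before-seen hosts, a crude stand-in for spotting reconnaissance-like behavior. All names and the threshold here are hypothetical.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Toy per-user profile of normal host access, learned from observed activity."""

    def __init__(self, new_host_threshold: int = 3):
        # user -> set of hosts seen during the learning period
        self.profiles = defaultdict(set)
        # How many never-before-seen hosts in one session counts as anomalous.
        self.new_host_threshold = new_host_threshold

    def learn(self, user: str, host: str) -> None:
        """Learning phase: record which hosts this user normally accesses."""
        self.profiles[user].add(host)

    def is_anomalous(self, user: str, session_hosts: list[str]) -> bool:
        """Detection phase: flag a session that touches many unfamiliar hosts."""
        unseen = {h for h in session_hosts if h not in self.profiles[user]}
        return len(unseen) >= self.new_host_threshold
```

A real system would profile many more dimensions (protocols, times, volumes, peer groups) and weigh correlated events together, but the shape is the same: no predefined signatures, only deviation from a learned normal.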

The same process of behavioral attack detection can be used to find insider attacks, as well as risky behavior that an employee engages in unwittingly or that results from a mistake made in deploying hardware or software.

The known good model complements the known bad model. Both provide essential value to security organizations. Insisting on the known bad model alone will continue to leave companies exposed to network attackers.


About the Author

David Thompson serves as the Senior Director of Product Management for LightCyber, responsible for assessing customer and market requirements, conducting sales and channel training and enablement, market education, and overall solution definition. He has been with LightCyber since late 2014. Mr. Thompson has over 15 years of experience focused on information security. Prior to joining LightCyber, he served in Product Management leadership positions for OpenDNS, iPass, Websense, and Voltage Security (now HP). Before running product management at Voltage Security, he was a Program Director at Meta Group (now Gartner) responsible for security research topics including encryption, PKI, remote access, and secure network design. He holds a Bachelor of Science in Physics from Yale University.
