Standard Reporting & Metrics:
Each year, cybersecurity companies release their equivalent of a “State of the Union.” These summaries, sometimes over a hundred pages long, contain statistics, diagrams, and case studies conducted by security researchers. Each report attempts to distill troves of evidence down to six or seven tangible issue areas, or “attack vectors,” that are on the rise in a given year. These conclusions then shape the company’s approach to cybersecurity the following year.
Despite the billions of dollars spent on cybersecurity globally, the trends do not indicate significant progress. Shockingly, the only consistent thing about these reports is the conclusion: “We currently suck at preventing attacks, and while we are improving in some areas, the overall trend is the same.”
A review of three reports:
The issue is not the data per se; it is collected by some of the best researchers on the planet. Oftentimes the conclusions these reports reach are redundant, such as announcing that scrapeware is the problem and then, on the next page, that malware is the problem, when the two terms mean essentially the same thing. By emphasizing malware, these conclusions feed on people’s previously held fears instead of addressing the root cause of the problem.
Some other examples include the following:
- One report states that the three largest incident action types are 1) DoS, 2) Loss, and 3) Phishing. The latter two are clearly related, yet Mandiant largely blames insecure privilege controls. The focus should instead be on preventing the intrusion in the first place.
- Another reports that 78% of individuals will not click on a single phishing campaign all year, and suggests that human error is a clear factor in breaches. Instead of filing this metric under human error, it should be read as an argument for augmenting anti-phishing software protections.
- Finally, a source claims that the most damaging attacks came through seemingly official channels. Should the root of the issue be addressed, such as the initial ransomware intrusion, or should 2FA and data segmentation merely be augmented to mitigate an intrusion’s effects?
These conclusions seem inevitable, yet none of these summaries actually provides a solution, and this continues to be a problem year over year. The point is that the current data shows that whatever we are doing is not really working.
The problem may lie in the type of metrics created to measure progress; perhaps the problem is being framed the wrong way. One possible solution is a “national incident reporting framework.” Requiring the reporting of cyber breaches would allow the government and private companies to truly understand why the problem is not getting better. With the authority of a government mandate, the data collected would be far more comprehensive and applicable across all domains. A similar approach helped reduce car crash fatalities by exposing the shortcomings of car designs, and it could be applied to the technology of the future.
For this reason, a reporting form should include five standardized metrics (a code sketch of such a schema follows the list):
- Who – who caused the breach: a state actor, a company, an internal employee?
- Why/What – was it for financial gain or espionage?
- Where – was it a country, a person, a company?
- Root cause – phishing, human error?
- Impact – fiscal, or national security/long-term?
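As a rough illustration, the form could be encoded as a small structured record so that every submitted report is machine-comparable. This is a minimal sketch, assuming a framework along the lines proposed above; the class names and category values are hypothetical stand-ins for the five metrics, and a real framework would define its taxonomy by statute or standard.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical controlled vocabularies mirroring the five metrics above.
class Actor(Enum):       # Who
    STATE = "state actor"
    COMPANY = "company"
    INSIDER = "internal employee"
    UNKNOWN = "unknown"

class Motive(Enum):      # Why/What
    FINANCIAL = "financial gain"
    ESPIONAGE = "espionage"
    UNKNOWN = "unknown"

class RootCause(Enum):   # Root cause
    PHISHING = "phishing"
    HUMAN_ERROR = "human error"
    UNKNOWN = "unknown"

class Impact(Enum):      # Impact
    FISCAL = "fiscal"
    NATIONAL_SECURITY = "national security / long-term"

@dataclass
class BreachReport:
    who: Actor
    why: Motive
    where: str           # Where: the country, person, or company involved
    root_cause: RootCause
    impact: Impact

# Example submission under this hypothetical schema:
report = BreachReport(
    who=Actor.STATE,
    why=Motive.ESPIONAGE,
    where="US defense contractor",
    root_cause=RootCause.PHISHING,
    impact=Impact.NATIONAL_SECURITY,
)
```

The value of fixed vocabularies is that two reports filed by different organizations answer the same five questions in the same terms, which is precisely what today’s vendor reports lack.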
Consolidating and assessing all breach data in the US may improve our understanding of why things are not getting better. This is especially necessary given the constantly evolving nature of cyber breaches.
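Building on the hypothetical BreachReport sketch above, the payoff of consolidation might look like this: once reports share one schema, even a trivial aggregation reveals which root causes dominate year over year. The counting logic below is illustrative, not a proposed analysis pipeline.

```python
from collections import Counter

def root_cause_trends(reports: list[BreachReport]) -> Counter:
    """Count incidents by root cause; trivial once all reports share one schema."""
    return Counter(r.root_cause for r in reports)

# With nationwide mandatory reporting, year-over-year comparison becomes a
# simple diff of these counters rather than a reconciliation of incompatible
# vendor taxonomies.
print(root_cause_trends([report]))
```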
The data is not the problem; what we do with it is. As the private sector continues to publish its findings each year, the government needs to synthesize this data into a coherent response platform. This will require considerable resources and collaboration between both domains, but it is necessary to resolve the fragmented nature of the cybersecurity world.