Two weeks ago, the White House held a Cyber Security Summit at Stanford University to address the growing wave of high-profile Cyber Attacks. For anyone who might have missed it, about three weeks ago Anthem was hacked and nearly 80 million records containing various types of patient information were stolen (although the true scope of what was taken still isn't entirely clear). At the end of 2014, the Sony Pictures hack had people worried that it might lead to some type of real-world conflict.
The White House Cyber Summit was meant to highlight the administration's commitment to Cyber Security as well as to showcase some of its efforts to date. The Summit was also meant to help launch a wider public-private partnership on Cyber Security, but I will address that in a separate post. I'd like to focus on one of the key results that has come out of the administration's efforts on Cyber Security thus far - the NIST Cyber Security Framework (CSF).
The immediate context or history of the CSF can be traced back to President Obama's Executive Order from February 2013. That wasn't the first action this administration undertook with regard to Cyber Security - there have been other executive orders and high-profile meetings going back at least five years. However, not a whole lot of concrete action came out of those previous efforts - and now we need to ask how much has resulted from the latest round of activity beginning in 2013. Of all the results of 2013's executive order, the most substantial by far is the NIST Cyber Security Framework.
High-level view of the CSF
I'll make a disclaimer here: I understand that the Framework is intended to be high-level. But even so, two years is a long time to come up with a high-level framework. Let's take a moment to characterize just what the CSF represents. At first glance, it serves as a very high-level methodology flow - not unlike 'design, build, test.' The rest of the CSF is comprised of a taxonomy breakout within each stage of the methodology and some mapping between the methodology steps or stages and various technical security standards. While I don't dismiss the need for these elements, I can't see how it could possibly have taken anyone two years to produce and review this. More importantly though, the process it represents is significantly flawed.
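To make the shape of the document concrete, here is a minimal sketch of how the CSF core is organized: functions at the top, categories beneath them, and informative references mapping each category out to existing technical standards. The category codes and standards named below follow the published CSF 1.0 core, but the data structure itself is only my illustration, not an official artifact.

```python
# A minimal sketch of the CSF core's shape: Function -> Categories -> Informative References.
# Category codes (e.g. ID.AM) and the referenced standards follow the published CSF 1.0
# core; the dictionary itself is only an illustration of the taxonomy.

CSF_CORE = {
    "Identify": {
        "ID.AM Asset Management": ["NIST SP 800-53 CM-8", "ISO/IEC 27001 A.8", "COBIT 5 BAI09"],
        "ID.RA Risk Assessment":  ["NIST SP 800-53 RA-3", "ISO/IEC 27001 A.12.6"],
    },
    "Protect": {
        "PR.AC Access Control": ["NIST SP 800-53 AC-1", "ISO/IEC 27001 A.9"],
    },
    "Detect": {
        "DE.CM Security Continuous Monitoring": ["NIST SP 800-53 SI-4"],
    },
    "Respond": {
        "RS.RP Response Planning": ["NIST SP 800-53 IR-8"],
    },
    "Recover": {
        "RC.RP Recovery Planning": ["NIST SP 800-53 CP-10"],
    },
}

def standards_for(function: str) -> set:
    """Collect every standard referenced under one CSF function."""
    return {ref for refs in CSF_CORE[function].values() for ref in refs}

if __name__ == "__main__":
    for function, categories in CSF_CORE.items():
        print(f"{function}: {len(categories)} categories, e.g. {next(iter(categories))}")
```

That really is the bulk of it: a taxonomy plus a cross-reference table to standards that already existed.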
Why is it flawed?
If we take a look at the steps illustrated within the CSF, they seem innocuous enough at first glance: Identify, Protect, Detect, Respond, Recover. There isn't a step missing per se; it's the expectation of what really needs to be done that's missing. If we accept "Identify" as the necessary first step, then we've consigned ourselves to the same thinking that dominated Information Assurance (IA) and Computer Security back before the year 2000. What we are in fact doing with this order and allocation of process events is placing ourselves in a permanent reactive mode. The reactive nature of this process flow is also reflected in the step titles "Respond" and "Recover."
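To see why the ordering matters, consider what the flow looks like when rendered literally as code. In a straight reading, nothing downstream of detection can fire until an incident has already been observed. This is a deliberately simplified sketch of my own - the function names and indicators are hypothetical, not from the CSF:

```python
# A deliberately literal rendering of the reactive flow as an event loop.
# respond() and recover() are unreachable until detect() has already fired -
# i.e., until the attacker has already acted. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    damage_done: bool = True  # by the time we see it, the harm has begun

def detect(events):
    """Yield only the incidents we already have signatures/indicators for."""
    known_indicators = {"known_malware_hash", "known_c2_domain"}
    for event in events:
        if event in known_indicators:
            yield Incident(event)

def respond(incident: Incident):
    print(f"responding to {incident.name} (damage_done={incident.damage_done})")

def recover(incident: Incident):
    print(f"recovering from {incident.name}")

# The novel attack sails through: it matches no known indicator.
observed = ["known_malware_hash", "novel_zero_day"]
for incident in detect(observed):
    respond(incident)
    recover(incident)
# "novel_zero_day" never enters the loop at all.
```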
Why is it Reactive and why does that matter?
Well, if we accept "Identify" as our first natural step/stage, then there is an implication that we have to wait until after an incident has occurred to do anything about it. Granted, "Identify" could reflect identification across a broader range of participants, meaning that incident threats can be classified at a group, national or global level, but even with those considerations the overall stance is reactionary in nature. So you might ask, how can one deal with things that haven't happened yet (before they have been identified somewhere)? Well, we do it all the time - it's called "Planning." Or we could call upon examples from other sciences, such as Chemistry and Physics, where certain discoveries were predicted long before they were verified in the laboratory.
You might be asking yourself why or how this matters in real-world Cyber Defense. Let's take a look at a couple of problematic scenarios:
- Malware that hasn't been identified and can live undetected for years (this has happened quite a bit - it represents the more subtle threat).
- Attacks so devastating that by the time they've been identified, they have already effectively destroyed the organization in question (we're getting close here and have probably already experienced this).
- Attacks that can't be countered within the window needed to accomplish a specific goal, such as the crippling of a command and control system, theft of funds on the wire or elsewhere, or the disabling of control systems managing various infrastructure capabilities.
Each of these scenarios is real, and each one is made possible primarily by our reactive philosophy of Cyber Security defense - and that is the flaw perpetuated by the CSF process. Essentially, each scenario above represents a window within which we cannot defend ourselves. The reason this philosophy is so dangerous is that it effectively cuts off any action on the part of the defenders until it is too late to do much about it (in the context of the attacks that are happening now). The action that is taken is based upon previous expectations, which practically assures that "Respond" and "Recover" will not go very well until after the initial attacks achieve their objectives. This is very bad news for the future of Cyber Security, as there is one thing we can be certain of - there will be no shortage of novel attacks and ever greater incentives for those pursuing them.
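A crude way to quantify that window is to compare the attacker's time-to-objective against the defender's detect-plus-respond time. The numbers below are made up for illustration; the inequality is the point - when the objective completes before the response, "Respond" and "Recover" are post-mortems:

```python
# A back-of-the-envelope model of the defensive window. All numbers are
# hypothetical; the structure of the comparison is what matters.

def defense_in_time(time_to_detect_h, time_to_respond_h, attacker_time_to_objective_h):
    """Return True only if the defender closes the attack before the objective completes."""
    return (time_to_detect_h + time_to_respond_h) < attacker_time_to_objective_h

scenarios = {
    "long-dwell malware":  (24 * 365, 48, 24 * 30),  # found after a year; objective took a month
    "destructive attack":  (2, 12, 4),               # destruction finishes in 4 hours
    "wire-transfer theft": (6, 24, 1),               # funds gone within the hour
}

for name, (detect_h, respond_h, objective_h) in scenarios.items():
    outcome = "defended" if defense_in_time(detect_h, respond_h, objective_h) else "failed"
    print(f"{name}: {outcome}")
# Every scenario fails: the reactive window opens only after the damage is done.
```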
Is there an Alternative?
An alternative model - one which I proposed five years ago within the context of a defense industry-related Cyber Consortium - is this: the first step in a forward-thinking Cyber Framework ought to pivot those traditionally on the defensive into a more proactive position, and the only way that can occur is if one "Predicts" where the problems will occur before they happen. This seemingly simple change actually serves to shift our entire paradigm with regard to Cyber Security. Obviously, there are quite a few solution implications to taking such a stance, but there isn't room in this post to adequately cover them, so I will write another one specifically dedicated to the architecture of Cyber Security prediction.
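What might "Predict" look like as a first-class step? As a hedged sketch of the idea (the scoring model and every name below are mine, for illustration only - not the CSF's and not a literal rendering of the consortium work), the flow reorders so that anticipated attack paths drive hardening before any incident exists:

```python
# A sketch of a predict-first flow. The risk scoring is a placeholder - in
# practice it might draw on threat modeling, attack-path analysis, or shared
# threat intelligence. All names, assets and weights are hypothetical.

ASSETS = {
    "patient_records_db": {"exposure": 0.9, "value": 1.0},
    "build_server":       {"exposure": 0.6, "value": 0.7},
    "marketing_site":     {"exposure": 0.8, "value": 0.2},
}

def predict(assets):
    """Rank assets by anticipated attack likelihood x impact, before any incident."""
    scored = {name: a["exposure"] * a["value"] for name, a in assets.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

def harden(asset):
    print(f"proactively hardening {asset} (segmentation, least privilege, instrumentation)")

# Predict -> Protect is now the opening move; Identify, Detect, Respond and
# Recover still exist, but they are no longer where the process begins.
for asset, risk in predict(ASSETS):
    if risk > 0.5:
        harden(asset)
```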
The basic premise I'm posing, though, is that organizations can no longer afford to "fail first" before being able to effectively respond to Cyber threats. The ultimate goal should not be Recovery per se, but the complete avoidance of key incidents in the first place. Obviously a Response and Recovery element will still be required, but in the new paradigm that is no longer the key metric for success (e.g. how quickly things can come back online after failing). The key measurement becomes: how long did we go without a meaningful incident, and were we able to avoid crippling or severe attacks altogether? Many of the incidents we obsess over today represent "white noise" that may serve only to distract our attention from more meaningful events. The difference is important in helping to determine a new metrics paradigm that can serve a proactive framework.
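In metric terms, the shift is from "mean time to recover" toward something like "time elapsed without a meaningful incident" and "severe attacks avoided." A small sketch of that reframing follows; the severity threshold and incident data are illustrative assumptions of mine:

```python
# Old vs. new success metrics, sketched. "Meaningful" is whatever severity
# threshold an organization sets; the threshold and data here are hypothetical.

from datetime import date

incidents = [
    {"day": date(2014, 3, 1),  "severity": 2},   # white noise
    {"day": date(2014, 7, 15), "severity": 8},   # meaningful
    {"day": date(2014, 11, 2), "severity": 3},   # white noise
]

MEANINGFUL = 5
today = date(2015, 2, 27)

meaningful = [i for i in incidents if i["severity"] >= MEANINGFUL]

# Old paradigm: how fast did we get back online? (MTTR - omitted, as it needs
# recovery timestamps.) New paradigm: how long have we gone without a
# meaningful incident, and how much of the noise did we correctly set aside?
days_clean = (today - max(i["day"] for i in meaningful)).days if meaningful else None
print(f"days since last meaningful incident: {days_clean}")
print(f"white-noise events filtered out: {len(incidents) - len(meaningful)}")
```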
Why does this need to become our new criterion for success? Well, think about it this way: how long does it take to recover from a hack like the one that just occurred at Anthem, or at Sony, or at Target? Months, years or longer? When the level and severity of a Cyber Attack becomes high enough to threaten the very existence of the organization being attacked, then a simple Identify, Respond and Recover approach becomes more or less meaningless (a Maginot Line casually bypassed on the invader's route to victory).
Another really critical consideration to keep in mind here is this: the methodology suggested by the CSF (which I'm sure someone considered best practice) has been around for a long time, and guess what - it's not working. So why would anyone seek to further perpetuate a model which is already failing in such spectacular fashion? Old thinking dies hard, perhaps more so in government than elsewhere.
And here is where my criticism should really sting - how is it that it took anyone two years to rephrase the old model rather than thinking through the actual challenge? When I went through my own process of developing a similar framework five or six years ago, it took me (by myself) about six months working on a part-time basis (with at least four other things happening during that period). The simple process flow I outlined above was a small part of what I had developed, and my framework extended into best practice architecture (not merely a mapping of relevant technical security standards). I expect more from NIST and the Administration, not because I'm picky, but because I understand the implications of not solving this challenge soon. And the clock has run down - we don't have two more years to lose going down the wrong path.
The time is now to redefine Cyber Security from the ground up.
Copyright 2015, Stephen Lahanas