
Sunday, December 11, 2016

Understanding Security Controls

Security Controls sound a little bit menacing upon first hearing the term; however, there’s nothing scary about them – that is, unless you have a large organization that doesn’t happen to be using them. Let’s start with a definition:
A Security Control is a specified behavior, process, configuration or capability – or combination thereof – designed to counter specific or non-specific technical threats to an information environment.
Now, there are controls surrounding physical security and mechanical systems; however, in this post we’ll limit our focus to IT Security Controls. Before we go too deep into what they are and how they tend to be operationalized, let’s ask the obvious question first – why do we need them?
The quick answer is that Security Controls (and yes, this does imply that they come as sets of controls) represent an excellent Framework around which a security architecture and program can be built. Notice I used the terms Framework & Architecture here, and that’s deliberate. Being an Architect, I tend to view any Framework – those used for Security Controls, but also things like ITIL – as more or less an adjunct to Enterprise Architecture. The reason I think that way is because of how similar they are – in many ways, one can actually employ a group of Security Controls as the de facto Security Architecture for an organization that might not otherwise have one (and there are a lot of those out there).
Security Controls are at once pragmatic (Tactical) and Strategic – the controls help to define not just our immediate approaches for dealing with current threats but usually provide excellent long-term targets as well. Another important consideration for why Security Controls are so important these days is that things have just gotten a lot more complicated in regards to Cyber Security. I’ve talked about this at some length in other posts here, but the bottom line is that things have become just as scary as those of us who’ve been working in Cyber Security said they would be. Granted, we haven’t had any zero day apocalypse yet, but most of the other predictions made since the late 90’s have already come to pass, and even some that many of us didn’t think about (e.g. Russian hacking of the presidential election). Having a framework in place – one developed by hundreds of experts from around the world – helps to demystify the landscape and level the playing field somewhat. No large enterprise in this day and age should attempt to handle security entirely on its own without the benefit of Security Control frameworks – it’s more than risky – it’s negligent.
So, what does a Security Control look like? Here’s one from the NIST 800-53 Security and Privacy Controls for Federal Information Systems and Organizations publication:
ACCESS RESTRICTIONS FOR CHANGE
Control: The organization defines, documents, approves, and enforces physical and logical access restrictions associated with changes to the information system.
There is much more information regarding this particular control, but the thing to keep in mind is that the control is defining something based on a classification of system issues or vulnerabilities – in other words, it is a taxonomy of practice used to organize several things:
  • Audits or Assessments (that may or may not be formal in nature, but the controls provide the framework for what is to be assessed).
  • Certification (to allow a system to be included on a network or perhaps a larger certification process like SOC 1 or 2 for data centers).
  • Security Hardening (e.g. the specific tools, configurations and processes put into place to counter a particular type of threat. For Access Control this could include a specific set of roles in regards to who might be allowed access at a given level of granularity).  
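To make the taxonomy idea concrete, here is a minimal sketch of how a single control record might be modeled so that audits, certifications and hardening steps can all hang off it. The field names and the hardening step are invented for illustration; the control text is the NIST 800-53 example quoted above (CM-5 in the 800-53 catalog).

```python
from dataclasses import dataclass, field

@dataclass
class SecurityControl:
    """One entry in a control catalog, used to anchor assessments and hardening."""
    control_id: str                 # catalog identifier, e.g. "CM-5"
    title: str
    description: str
    assessed: bool = False          # flipped to True once an audit covers it
    hardening_steps: list = field(default_factory=list)

cm5 = SecurityControl(
    control_id="CM-5",
    title="Access Restrictions for Change",
    description=("The organization defines, documents, approves, and enforces "
                 "physical and logical access restrictions associated with "
                 "changes to the information system."),
)
# A hypothetical hardening step tied back to the control:
cm5.hardening_steps.append("Restrict production change access to a release-manager role")
```

The point isn’t the code itself – it’s that once controls are structured data, everything from audit checklists to metrics can be generated from the same catalog.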
Another key consideration for a Security Control Framework, like NIST, CIS or OWASP, is that they can provide a foundation around which enterprise security metrics can be built. The metrics can potentially track all aspects of security-related activity in an organization, including things like:
  • Numbers of incidents in general
  • Users accessing or trying to access restricted resources
  • Risk levels against specifically identified threats
  • Patch management
  • Network Traffic and perimeter attacks
  • Instances of sensitive data leaving the enterprise in emails, etc.
  • Attempts to download organizational information onto thumb-drives
  • And much more…
The CIS control framework even comes with its own data model which can be used to build a reporting tool / data warehouse to track this type of information. Add a tool like Tableau on top and you’ve got a pretty slick solution that’s aligned with industry best practices without having to invent the whole thing from scratch.
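As a rough illustration of the reporting side, the sketch below rolls raw security events up into per-category counts – the kind of summary a controls-aligned dashboard would display. The event categories and records are invented for the example, not taken from the CIS data model.

```python
from collections import Counter
from datetime import date

# Invented sample events for illustration only.
events = [
    {"date": date(2016, 11, 2),  "category": "unauthorized_access_attempt"},
    {"date": date(2016, 11, 5),  "category": "sensitive_email_egress"},
    {"date": date(2016, 11, 9),  "category": "unauthorized_access_attempt"},
    {"date": date(2016, 11, 21), "category": "usb_download_attempt"},
]

def summarize(events):
    """Count incidents per category for metric reporting."""
    return Counter(e["category"] for e in events)

summary = summarize(events)
# summary["unauthorized_access_attempt"] -> 2
```

A real warehouse would of course persist and slice this far more richly, but the shape of the metric – events classified against the control taxonomy, then aggregated – stays the same.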
The key thing to keep in mind is that in most enterprises, the main threat isn’t a specific group of Russian or Chinese hackers, but rather the sheer confusion surrounding the ever-growing set of tools and security practices that must be managed as “mission-critical.” Adopting and integrating standard Security Controls still allows for a tremendous amount of flexibility in how to meet that confusion, but it also helps to reduce the complexity challenge almost immediately.
Copyright 2016, Stephen Lahanas

Tuesday, February 24, 2015

An Assessment of the NIST Cyber Security Framework

Two weeks ago, the White House held a Cyber Security Summit at Stanford University to address the growing wave of high-profile Cyber Attacks. For anyone who might have missed it, about three weeks ago Anthem was hacked and nearly 80 million records with various types of patient information were stolen (although the true scope of what was or wasn't taken still isn't entirely clear). At the end of 2014, the Sony Pictures hack had people worried that it might lead to some type of real-world conflict.
The White House Cyber Summit was meant to highlight the administration's commitment to Cyber Security as well as to showcase some of their efforts to date. The Summit was also meant to help launch a wider public-private partnership on Cyber Security, but I will address that in a separate post. I'd like to focus on one of the key results that has come out of the administration's efforts on Cyber Security thus far - The NIST Cyber Security Framework (CSF).
The immediate context or history of the CSF can be traced back to President Obama's Executive Order from February 2013. That wasn't the first action this administration undertook in regards to Cyber Security - there have been other executive orders and high-profile meetings going back at least five years. However, not a whole lot of concrete action came out of those previous efforts - and now we need to ask how much has resulted from the latest round of activity beginning in 2013. Of all the results of 2013's executive order, the most substantial by far is the NIST Cyber Security Framework.
High level view of the CSF
I'll make a disclaimer here: I understand that the Framework is intended to be high-level… But now that I've said that, two years is a long time to come up with a high-level framework. Let's take a moment to characterize just what the CSF represents. Upon first glance, it serves as a very high-level methodology flow - not unlike 'design, build, test,' etc. The rest of the CSF is comprised of a taxonomy breakout within each stage of the methodology and some mapping between the methodology steps or stages and various technical security standards. While I don't dismiss the need for having these elements, I can't see how it could have possibly taken anyone 2 years to produce and review this. More importantly though, the process it represents is significantly flawed.
Why is it flawed?
If we take a look at the steps illustrated within the CSF, they seem innocuous enough at first glance: Identify, Protect, Detect, Respond, Recover. There isn't a step missing per se; it's the expectation of what really needs to be done that's missing. If we accept "Identify" as the necessary first step, then we've consigned ourselves to the same thinking that dominated Information Assurance (IA) and Computer Security back before the year 2000. What we are in fact doing with this order and allocation of process events is placing ourselves in a permanent reactive mode. The reactive nature of this process flow is also reflected in the step titles "Respond" & "Recover."
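The ordering argument can be sketched directly. Below, the five CSF core functions are modeled as an ordered sequence (my own illustrative encoding, not anything NIST publishes): because "Identify" sits first, everything downstream is framed around what has already been observed, and the last two functions only ever run post-incident.

```python
from enum import IntEnum

class CsfFunction(IntEnum):
    """The five CSF core functions, in the order the framework presents them."""
    IDENTIFY = 1
    PROTECT = 2
    DETECT = 3
    RESPOND = 4
    RECOVER = 5

lifecycle = sorted(CsfFunction)
# Everything at or after RESPOND happens only once an incident has occurred.
reactive_tail = [f for f in CsfFunction if f >= CsfFunction.RESPOND]
```

Nothing in that sequence fires before an attacker acts - which is precisely the critique that follows.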
Why is it Reactive and why does that matter? 
Well, if we accept "Identify" as our natural first step/stage, then there is an implication that we have to wait until after an incident has occurred to do anything about it. Granted, "Identify" could reflect identification across a broader range of participants, meaning that incident threats can be classified at a group, national or global level, but even with those considerations the overall stance is reactionary in nature. So you might ask, how can one deal with things that haven't happened yet (before they have been identified somewhere)? Well, we do it all the time and it's called "Planning." Or we could call upon examples from other sciences, such as Chemistry and Physics, where certain discoveries were predicted long before they were verified in the laboratory.
You might be asking yourself why or how this matters in real-world Cyber Defense. Let's take a look at a couple of problematic scenarios:
  1. Malware that hasn't been identified that can live undetected for years. (this has happened quite a bit - it represents a more subtle threat)
  2. Attacks so devastating that by the time they've been identified have already effectively destroyed the organization in question. (we're getting close here and probably already experienced this)
  3. Attacks that can't be countered within a necessary window to accomplish a specific goal; such as crippling of a command and control system, theft of funds on the wire or elsewhere, disabling of control systems managing various infrastructure capabilities.
Each of these scenarios is real, and each one is made possible primarily due to our reactive philosophy for Cyber Security defense - and that is the flaw perpetuated by the CSF process. Essentially, each scenario above represents a window within which we cannot defend ourselves. The reason this philosophy is so dangerous is because it effectively cuts off any action on the part of the defenders until it is too late to do much about it (in the context of the attacks that are happening now). The action that is taken is based upon previous expectations, which then practically assures that "Respond" and "Recover" will not go very well until after the initial attacks achieve their objectives. This is very bad news for the future of Cyber Security, as there is one thing we can be certain of - there will be no shortage of novel attacks and ever greater incentives for those pursuing them.
Is there an Alternative?
An alternative model - one which I proposed five years ago within the context of a defense industry-related Cyber Consortium is this: the first step in a forward thinking Cyber Framework ought to pivot those traditionally on the defensive into a more proactive position - and the only way that can occur is if one "Predicts" where the problems will occur before they happen. This seemingly simple change actually serves to shift our entire paradigm in regards to Cyber Security. Obviously, there are quite a few solution implications to taking such a stance, but there isn't room in this post to adequately cover it so I will write another one specifically dedicated to the architecture of Cyber Security prediction.
The basic premise I'm posing, though, is that organizations can no longer afford to "fail first" before being able to effectively Respond to Cyber threats. The ultimate goal should not be Recovery per se, but the complete avoidance of key incidents in the first place. Obviously, a Recovery and Response element will still be required - but in the new paradigm that no longer becomes the key metric for success (e.g. how quickly things can come back online after failing). The key measurement will become: how long did we go without a meaningful incident, and were we able to avoid any or all crippling / severe attacks? Many of the incidents we obsess over today represent "white noise" that may serve only to distract our attention from more meaningful events. The difference is important in helping to determine a new metrics paradigm that can serve a proactive framework.
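The proposed success metric can be sketched in a few lines. The severity threshold and incident records below are invented for illustration; the idea is simply that anything below the threshold is "white noise," and the number that matters is the streak since the last meaningful incident rather than mean time to recover.

```python
from datetime import date

SEVERE = 4  # assumed threshold: severity at or above this counts as "meaningful"

# Invented sample incident log.
incidents = [
    {"date": date(2015, 1, 10), "severity": 5},
    {"date": date(2015, 2, 1),  "severity": 2},  # white noise
    {"date": date(2015, 2, 14), "severity": 1},  # white noise
]

def days_without_meaningful_incident(incidents, today):
    """Days since the last incident at or above the severity threshold."""
    severe_dates = [i["date"] for i in incidents if i["severity"] >= SEVERE]
    if not severe_dates:
        return None  # no meaningful incident on record
    return (today - max(severe_dates)).days

streak = days_without_meaningful_incident(incidents, date(2015, 2, 24))
# 45 days since the January 10 severe incident; the two low-severity
# events don't reset the streak.
```

A dashboard built on this measure rewards avoidance rather than recovery speed, which is the paradigm shift being argued for.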
Why does this need to become our new criteria for success? Well, think about it this way: how long does it take to recover from a hack like the one that just occurred at Anthem or Sony or Target? Months, years or longer? When the level and severity of Cyber Attack becomes high enough to threaten the very existence of the organization being attacked, then a simple Identify, Respond and Recover approach becomes more or less meaningless (a Maginot Line, casually bypassed on the invader's route to victory).
Another really critical consideration to keep in mind here is this - the methodology being suggested by the CSF (which I'm sure someone considered best practice) has been around for a long time and guess what - it's not working. So why would anyone seek to further perpetuate a model which is already failing in such a spectacular fashion? Old thinking dies hard, perhaps more so in government than elsewhere.
And here is where my criticism should really sting - how is it that it took anyone 2 years to rephrase the old model rather than thinking through the actual challenge? When I went through my process of developing a similar framework 5 or 6 years ago, it only took me (by myself) about 6 months doing it on a part-time basis (with at least 4 other things happening during that period). The simple diagram I listed above was a small part of what I had developed, and my framework extended into best practice architecture (not merely a mapping of relevant technical security standards). I expect more from NIST and the Administration not because I'm picky, but because I understand the implications of not solving this challenge soon. And the clock has run down - we don't have two more years to lose going down the wrong path.
The time is now to redefine Cyber Security from the ground up.

Copyright 2015, Stephen Lahanas