Wednesday, December 14, 2016

Securing The Enterprise - Identifying Sensitive Data

Several months ago, I wrote a post about the Top 3 Mistakes in Data Loss Prevention (DLP). In that post, I mentioned that the typical (and logical) first step for any DLP program is the ability to identify and classify sensitive data. That sounds easier than it often turns out to be. There are a number of reasons why sensitive data identification can be tricky; here are just a few:
  1. Many organizations don’t have well-documented data sources – this consideration applies mainly to structured data in databases, where one would hope that data dictionaries are available – but often they aren’t.
  2. There are differing opinions as to scope. For example, should any or all unstructured data be considered when doing such assessments? Also, data that passes between organizations can lead to tough ownership and liability questions.
  3. There are different types of sensitivity to consider. There is PCI (payment card industry) data, PII (personally identifiable information), PHI (protected health information), SOX-regulated financial data, military data and other sorts of data that might be considered sensitive based on how it is used or how it could be exploited by attackers. I will review some of these in greater depth in a moment.
  4. There are sometimes differences of opinion as to whether the focus should be directed only to IT systems or expanded to all devices within an enterprise that might contain data (whether attached to networks or not).
  5. The task of actually doing this (depending on the scope and the size of the enterprise) can itself be quite daunting. This part is typically underestimated, and in some cases this sort of assessment may be the first time the enterprise in question has ever tried to understand all of its data.
A big reason why organizations might be thinking about identifying potentially sensitive data is the massive and continuous breaches which have occurred over the past few years. The latest, announced just this week, was at Quest Diagnostics, which means PHI (Protected Health Information) was likely involved. Other breaches, such as the one at the Office of Personnel Management (OPM), involved many more records – in the case of OPM, some 22 million PII records of current and former federal employees and contractors were stolen. The bottom line is that industry has spent a lot of time, money and effort on security but relatively little effort determining what exactly might be at risk if or when that security fails. This is actually worse than it sounds, and it already sounds bad. Here are some of the reasons why not knowing your data is a bad thing:
  • Because if you don’t know you have something that needs to be protected, you’re not likely to protect it. Or some things may need to be protected much more than others, but they’ve all been lumped into a generic, one-size-fits-all protection scheme.
  • Just because your organization doesn’t know it has these assets doesn’t mean that someone inside the organization can’t find out – and of course people on the outside can as well. That means data may be getting stolen, through both internal and external channels, without anyone being aware, because the manner of attack isn’t obvious.
  • Depending on the data involved, there could even be situations where data loss or theft could literally destroy one’s business or organization. This has already happened and is likely to happen again.
The topic of how to actually go about doing this type of evaluation can be rather complex, but I’d like to highlight at least a few principles that could help guide such an effort; they are as follows:
Understand the scope implications – the level of effort associated with the project depends entirely on the scope chosen. If this is understood up front, the likelihood of completing the project on time increases dramatically. This might require some inventory work even before the project begins, to establish a rough idea of how many systems and attributes may exist. Scope here also refers to the granularity of the evaluation – for example, attribute-level versus table-level scoring.
Define your criteria up front – There are a number of guidelines and regulations regarding sensitive data to choose from; however, some of them overlap or conflict with one another. Common sense can usually resolve those conflicts quickly. More important is deciding how metrics will be defined and assessed. A metric in this case would be a risk rating or level assigned to various classes of sensitive data, which can then serve as the model for how other data is rated.
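To make the idea of such a metric concrete, here is a minimal sketch in Python; the classes, levels and scores are hypothetical placeholders, not drawn from any particular regulation, and a real program would derive them from the standards in scope (PCI DSS, HIPAA, SOX, etc.).

```python
# Hypothetical sensitivity classes and risk levels for illustration only.
RISK_LEVELS = {"low": 1, "moderate": 2, "high": 3, "critical": 4}

SENSITIVITY_CRITERIA = {
    "public":        "low",
    "internal_only": "moderate",
    "pii":           "high",       # names, SSNs, addresses
    "phi":           "critical",   # protected health information
    "payment_card":  "critical",   # PCI-governed card data
}

def rate_attribute(data_class: str) -> int:
    """Return a numeric risk rating for a classified attribute."""
    level = SENSITIVITY_CRITERIA.get(data_class, "moderate")
    return RISK_LEVELS[level]

def rate_table(attribute_classes: list[str]) -> int:
    """Score a table by its most sensitive attribute (one common convention)."""
    if not attribute_classes:
        return RISK_LEVELS["low"]
    return max(rate_attribute(c) for c in attribute_classes)

print(rate_table(["public", "pii", "payment_card"]))  # -> 4 (critical)
```

Encoding the criteria once, in whatever form, is what allows different teams to score different systems consistently.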
Automate the evaluation as much as possible – This is especially important in larger organizations with many systems and lots of unstructured data. The automation can take many forms, including exporting data dictionaries (where they exist) or data model metadata into a tracking database, or using a DLP tool to identify instances of sensitive data found in documents and other unstructured sources. The latter activity depends upon the rulesets created in the criteria definition stage – and those rulesets will be very much dependent on the tools used.
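As a rough illustration of the scanning side of that automation, here is a minimal sketch of a regex-based pass over a directory of text documents. The patterns and directory name are simplified placeholders; a real DLP tool would use far more robust rulesets and validation (e.g. Luhn checks on card numbers).

```python
import re
from pathlib import Path

# Simplified example patterns; production rulesets would be far stricter.
RULESET = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_directory(root: str) -> list[dict]:
    """Walk a directory tree and record files containing candidate sensitive data."""
    findings = []
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for rule_name, pattern in RULESET.items():
            hits = pattern.findall(text)
            if hits:
                findings.append({"file": str(path), "rule": rule_name, "count": len(hits)})
    return findings

if __name__ == "__main__":
    for finding in scan_directory("./shared_drive"):  # hypothetical scan root
        print(finding)
```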
Have a Mitigation Policy & Approach defined before you start – This is both a common-sense and a liability consideration. Depending on the regulations involved (for PHI, you’d have to be thinking about HIPAA and HITECH, for example), there can be penalties for knowingly failing to correct sensitive data issues. That means as soon as you find sensitive data, you must have a plan in motion to correct the issue.
Assign the proper resources to ensure it’s done right and gets finished – Don’t assume you’ll have all the necessary talent in-house to conduct an evaluation like this. Even if you do have the right people, chances are they’re already fully occupied with other mission-critical tasks. It’s important to get the right people involved, as the results will determine the organization’s overall vulnerability for years to come.
Define the next Steps as part of the initial evaluation – There will always be findings coming out of an evaluation like this that require some sort of remediation. Not all of those remediations can happen immediately, so it is important – especially given the potential liability – that all of the necessary next steps to address the findings are planned out before the evaluation concludes.
While this set of suggestions is not meant to be comprehensive, it does present a starting point for how most organizations can begin addressing issues relating to sensitive data. The reality is that almost every organization has some sensitive data, and the liability surrounding that data has grown tremendously in recent years. The costs of not getting control of this issue almost always outweigh the costs of implementing that control.

Copyright 2016, Stephen Lahanas

Monday, December 12, 2016

Why Every Enterprise Needs a Hybrid Cloud Governance Strategy

Over the past five years, I’ve had the opportunity to work with more than half a dozen Fortune 500 clients as well as several government agencies in various contexts. There is one thing every one of these organizations had in common – all of them needed a Hybrid Cloud Governance Strategy and none of them had one. I’m sure there are a few organizations out there that have tackled this already, but if so, they’ve certainly jumped well ahead of the curve. Adoption of a hybrid mix of Cloud and non-cloud capabilities has exploded in the past few years. There are few if any enterprises that haven’t already begun moving down that path. However, this is definitely a case where the technology trend is outpacing our ability to integrate it into business operations and processes (at least in a well-defined or unified manner).
The time has definitely come when this is no longer a nice-to-have or something for trailblazers only. Today, every enterprise needs to tackle the challenge of immediate or near-term Hybrid capability management. And "capability management" may be the better way to refer to it, because if we were only concerned with the Cloud portions of what the enterprise is managing, we’d still be dealing with only a partial landscape. Here are a few reasons why you need it…
1 – Because piecemeal governance is ineffective. In order to meet service level expectations or objectives with clients, there has to be a comprehensive ability to manage all of the resources that support a capability, regardless of how that capability is actually distributed. There may always be some dispute as to what exactly “management” represents, given that some Cloud providers grant only limited access or control rights to their infrastructure, which raises the question – can you manage something you don’t control? The answer is yes, you can – but let’s specify what we’re referring to as management:
  • You can manage access control from your user base to the capability
  • You can manage capacity planning and execution
  • You can manage upgrades, enhancements and expansion of the user base (in tandem with the provider/s)
  • You can manage network security expectations (to and from your various capability elements)
  • You can manage integration between capability elements or between separate capabilities
2 – Because Holistic Governance has always been an area for improvement, and now it is a necessity: Most organizations have had challenges implementing or maintaining Governance processes (I discussed this recently in another post on Data Governance). What tends to happen more often than not is that an organization settles into one or two subsets of Governance – for example, a focus on Data Governance or Portfolio Governance. But Governance cuts across everything, from SDLC to Security to Architecture to ITIL – basically, Governance could or should touch every core process in the enterprise. Dealing with new Hybrid Cloud capabilities presents an excellent opportunity to tackle an age-old problem.
3 – Because where there is more complexity, there needs to be more cooperation to coordinate effectively: One major advantage of managing all of one’s own capability on premise is at least the potential that a single point of contact or service provider may be available to help coordinate everything. With a Hybrid Enterprise, the number of diverse providers and teams may increase quite a bit. And while you won’t likely have a single POC anymore, you can mitigate that through implementation of a single process.
4 – Because this diversification of capability will likely continue for some time to come. The trend towards greater capability diversification is likely to accelerate in the coming years. While there will be exceptions where some organizations migrate all of their capability to one Cloud provider (like AWS), this will definitely not be the norm. We are going to retain some on-premise capability for a long time, especially where mainframe technology is involved, and a wide variety of other SaaS players will continue to pull the typical enterprise in multiple directions.
So how can your organization get started with Hybrid Cloud Governance? Here are a couple of suggestions…
1 – Create a Capability Inventory (and coordinate it with provider inventories). This is generally a worthwhile exercise anyway, particularly as a planning or strategy aid. It is typically done up front and can be relatively quick. Capability can be viewed in multiple contexts, the two most common being Business Capability and Technical Capability.
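A capability inventory does not need sophisticated tooling to get started. The sketch below is one illustrative way to capture both business and technical views alongside the hosting provider; the field names and sample entries are hypothetical, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    name: str                # e.g. "Customer Billing"
    kind: str                # "business" or "technical"
    provider: str            # "on-premise", "AWS", a SaaS vendor, etc.
    owner: str               # accountable team or role
    dependencies: list[str]  # other capabilities this one relies on

inventory = [
    Capability("Customer Billing", "business", "SaaS vendor", "Finance IT", ["Identity Management"]),
    Capability("Identity Management", "technical", "on-premise", "Security Ops", []),
    Capability("Data Warehouse", "technical", "AWS", "Analytics", ["Identity Management"]),
]

# Simple view: which providers does each business capability ultimately depend on?
by_name = {c.name: c for c in inventory}
for cap in inventory:
    if cap.kind == "business":
        providers = {cap.provider} | {by_name[d].provider for d in cap.dependencies if d in by_name}
        print(cap.name, "->", sorted(providers))
```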
2 – Determine your metrics approach and standardize it across capabilities. This is sometimes tricky in that Cloud providers often have different types of SLAs and SLOs, but rather than trying to map to everyone else’s definitions, set up your own and map everyone else’s to that.
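One lightweight way to do this is to define internal metric names once and translate each provider's SLA/SLO terminology onto them. The provider names and reported fields below are hypothetical examples, not actual contract language.

```python
# Internal, provider-neutral metric definitions (names are illustrative).
INTERNAL_METRICS = {
    "availability_pct":  "Monthly availability, measured at the service endpoint",
    "incident_response": "Minutes from incident detection to provider acknowledgement",
}

# Hypothetical mapping of each provider's reported fields to the internal metrics.
PROVIDER_MAPPINGS = {
    "cloud_vendor_a": {"uptime_percent": "availability_pct", "ack_minutes": "incident_response"},
    "saas_vendor_b":  {"monthly_uptime": "availability_pct", "response_sla_min": "incident_response"},
}

def normalize_report(provider: str, report: dict) -> dict:
    """Translate a provider's raw SLA report into the internal metric names."""
    mapping = PROVIDER_MAPPINGS[provider]
    return {mapping[k]: v for k, v in report.items() if k in mapping}

print(normalize_report("cloud_vendor_a", {"uptime_percent": 99.95, "ack_minutes": 14}))
print(normalize_report("saas_vendor_b", {"monthly_uptime": 99.7, "response_sla_min": 30}))
```

Once every provider's numbers land in the same internal vocabulary, cross-capability reporting becomes a straightforward exercise.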
3 – Dedicate a team to facilitate it. This doesn’t have to be a big team; in some organizations it might even be one person. The important thing to keep in mind is that if you don’t have someone dedicated to crossing the capability barriers, things will likely remain stove-piped.
Copyright 2016, Stephen Lahanas

Sunday, December 11, 2016

Understanding Security Controls

Security Controls sound a little bit menacing upon first hearing the term; however, there’s nothing scary about them – that is, unless you have a large organization that doesn’t happen to be using them. Let’s start with a definition:
A Security Control is a specified behavior, process, configuration or capability – or combination thereof – designed to counter specific or non-specific technical threats to an information environment.
Now, there are controls surrounding physical security and mechanical systems; however, in this post we’ll limit our focus to IT Security Controls. Before we go too deep into what they are and how they tend to be operationalized, let’s ask the obvious question first – why do we need them?
The quick answer is that Security Controls (and yes, this does imply that they come as sets of controls) represent an excellent Framework around which a security architecture and program can be built. Notice I used the terms Framework & Architecture here, and that’s deliberate. Being an Architect, I tend to view any framework – those used for Security Controls, but also things like ITIL – as more or less an adjunct to Enterprise Architecture. The reason I think that way is how similar they are – in many ways one can actually employ a group of Security Controls as the de facto Security Architecture for an organization that might not otherwise have one (and there are a lot of those out there).
Security Controls are at once pragmatic (Tactical) and Strategic – the controls help to define not just our immediate approaches for dealing with current threats but usually provide excellent long-term targets as well. Another important reason Security Controls matter so much these days is that things have gotten a lot more complicated in regards to Cyber Security. I’ve talked about this at some length in other posts, but the bottom line is that things have gotten just as scary as those of us who’ve been working in Cyber Security said they would. Granted, we haven’t had a zero-day apocalypse yet, but most of the other predictions made since the late 90’s have already come to pass, along with some that many of us didn’t think about (e.g. Russian hacking of the presidential election). Having a framework in place – one developed by hundreds of experts from around the world – helps to demystify the landscape and level the playing field somewhat. No large enterprise in this day and age should attempt to handle security entirely on its own without the benefit of Security Control frameworks – it’s more than risky – it’s negligent.
So, what does a Security Control look like? Here’s one from the NIST 800-53 Security and Privacy Controls for Federal Information Systems and Organizations publication:
ACCESS RESTRICTIONS FOR CHANGE Control: The organization defines, documents, approves, and enforces physical and logical access restrictions associated with changes to the information system.
There is much more information regarding this particular control, but the thing to keep in mind is that the control defines something based on a classification of system issues or vulnerabilities – in other words, it is part of a taxonomy of practice used to organize several things:
  • Audits or Assessments (that may or may not be formal in nature, but the controls provide the framework for what is to be assessed).
  • Certification (to allow a system to be included on a network or perhaps a larger certification process like SOC 1 or 2 for data centers).
  • Security Hardening (e.g. the specific tools, configurations and processes put into place to counter a particular type of threat. For Access Control this could include a specific set of roles governing who is allowed access at a given level of granularity – a small sketch of this idea follows this list).
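As a simple illustration of the role-based idea behind a control like Access Restrictions for Change, here is a minimal sketch; the roles and permissions are hypothetical, and any real implementation would be driven by the organization's approved control documentation rather than this code.

```python
# Hypothetical role-to-permission mapping supporting an "access restrictions
# for change" style control: only certain roles may alter the system.
ROLE_PERMISSIONS = {
    "change_approver":  {"approve_change"},
    "release_engineer": {"deploy_change", "rollback_change"},
    "developer":        {"propose_change"},
    "auditor":          {"view_change_history"},
}

def is_authorized(role: str, action: str) -> bool:
    """Enforce the documented restriction: deny anything not explicitly granted."""
    return action in ROLE_PERMISSIONS.get(role, set())

def apply_change(role: str, change_id: str) -> None:
    if not is_authorized(role, "deploy_change"):
        # Denials should be logged so the control can be audited later.
        print(f"DENIED: role '{role}' may not deploy change {change_id}")
        return
    print(f"Deploying change {change_id} as {role}")

apply_change("developer", "CHG-1042")         # denied
apply_change("release_engineer", "CHG-1042")  # allowed
```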
Another key consideration for Security Control frameworks such as NIST, CIS or OWASP is that they can provide a foundation around which enterprise security metrics can be built. The metrics can potentially track all aspects of security-related activity in an organization, including things like:
  • Numbers of incidents in general
  • Users accessing or trying to access restricted resources
  • Risk levels against specifically identified threats
  • Patch management
  • Network Traffic and perimeter attacks
  • Instances of sensitive data leaving the enterprise in emails, etc.
  • Attempts to download organizational information onto thumb-drives
  • And much more…
The CIS control framework even comes with its own data model, which can be used to build a reporting tool / data warehouse to track this type of information. Add a tool like Tableau on top and you’ve got a pretty slick solution that’s aligned with industry best practices without having to invent the whole thing from scratch.
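The sketch below shows one possible shape for such a reporting store – it is not the actual CIS data model, just an illustrative stand-in that records control-aligned metrics a dashboard tool could sit on top of. The control identifiers shown are NIST 800-53 style examples.

```python
import sqlite3

# Illustrative schema only; the real CIS data model is considerably richer.
conn = sqlite3.connect("security_metrics.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS control_metrics (
        control_id   TEXT,   -- e.g. a CIS or NIST 800-53 control identifier
        metric_name  TEXT,   -- e.g. 'incidents', 'unpatched_hosts'
        metric_value REAL,
        recorded_on  TEXT    -- ISO date
    )
""")

rows = [
    ("CM-5", "unauthorized_change_attempts", 3, "2016-12-01"),
    ("CM-5", "unauthorized_change_attempts", 1, "2016-12-08"),
    ("AC-2", "dormant_accounts_disabled", 12, "2016-12-08"),
]
conn.executemany("INSERT INTO control_metrics VALUES (?, ?, ?, ?)", rows)
conn.commit()

# A reporting layer can then trend each metric per control over time.
for row in conn.execute(
    "SELECT control_id, metric_name, SUM(metric_value) "
    "FROM control_metrics GROUP BY control_id, metric_name"
):
    print(row)
```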
The key thing to keep in mind is that in most enterprises, the main threat isn’t a specific group of Russian or Chinese hackers, but rather the sheer confusion surrounding the ever-growing set of tools and security practices that must be managed as “mission-critical.” Adopting and integrating standard Security Controls still allows for a tremendous amount of flexibility in how to address that confusion, and it helps to reduce the complexity challenge almost immediately.
Copyright 2016, Stephen Lahanas

Saturday, December 10, 2016

The IT Architect as Honest Broker

What exactly is an Honest Broker? Sometimes the term is heard in the context of political discussion; however, the phrase applies to just about any field of endeavor. The role of Honest Broker refers to someone who applies their expertise in a fair and unbiased manner and, more importantly, communicates that expertise forthrightly without fear of reprisal. Another way to think about the role is that the Honest Broker is the polar opposite of the “Yes Man” – a person who only communicates what others expect to hear. Why is this of value, and what does it have to do with IT Architecture? I’ll try to address both of those points…



Why having an Honest Broker Matters – The obvious reason starts with the realization that if one hires an expert for their expertise, one may actually want the expert to demonstrate that value. This isn’t always the case, of course; some people hire experts and then expect them to merely mimic precisely what they wish to hear. The flaw with that is that the value proposition is nearly always lost in those types of situations – any problem which required the expert in order to be corrected is that much less likely to be solved. It is worth emphasizing here that this applies to any field, any profession, any industry, not just IT. The difference with IT might be that there is at least the expectation that fewer of the ‘Yes Men’ type roles would be accepted or tolerated, because IT is the main focus of innovation these days. This isn’t necessarily the case though.

Another consideration in the overall value proposition behind having an honest broker is a related phenomenon called Group-Think. Group-Think isn’t quite the same as having people act as Yes Men per se; it is more of an organizational, self-imposed boundary on what is or isn’t acceptable to think. Thus it is a cultural phenomenon, but one that exists primarily within certain sub-cultures – these can be companies or any type of organization, really. The net effect of Group-Think in such an environment is an overall reduction in problem-solving effectiveness. In organizations where problem solving (either individual or collective) is not really needed, Group-Think may be considered an asset. This tends not to be the case in IT, however, as the field is dynamic enough that it requires problem-solving on an almost constant basis at all levels. Group-Think can be thought of as the ‘box’ in the phrase, “Think outside of the box.”

Architecture & The Honest Broker – In IT, we hold the advantage of having a fairly well-recognized role (at least in recent years) that turns out to be perfectly suited for combating Group-Think and solving problems at all levels. That role is the IT Architect and it represents a significant part of the overall value proposition behind IT Architecture. Here are a few reasons why IT Architects make excellent Honest Brokers:

1.      Because in IT, intelligence and open-mindedness are rewarded perhaps more often than in any other field (at least that I know of). Some might say Science in general might be the place where this really holds true, but I think not. There are aspects to traditional science that are still much more rigid than IT in regards to how non-traditional thinking is accepted. IT is results-oriented in a pragmatic way that Science sometimes isn’t.

2.      IT Architects tend to be experts in more than one area, have the ability to become experts in other areas quite quickly, and have to deal with many more areas where they are not expert. This imbues the Architect with a dynamic and relatively unbiased world-view; as architects we don’t see the world of problems in black and white, where there is an absolute set of right and wrong choices to be made. We also embrace change, because we see it every day and can easily gauge massive shifts in both technology and practice within the bounds of our own careers. The IT Architect is flexible, open-minded and, most importantly of all, quick to challenge their own and others’ assumptions in the performance of whatever task needs doing. This is because we haven’t built our thinking around a belief system; we’ve built it around problem-solving, with direct, tangible results as our primary measure of success.

3.      And perhaps most importantly, IT Architects tend to be leaders or close to leadership roles. We are in the right place at the right time and usually talking to the right people to make a difference. Matching the skills and thinking to the situation is crucial.

Being an honest broker does not imply that someone (whether they are an IT Architect or not) can say whatever pops into their head. No, being “honest” in this sense means that the Broker fairly assesses an issue, fairly determines courses of action, and communicates that information in the manner the organization expects it to be presented, without undue restriction or censorship. As I alluded to earlier, there are some organizations which don’t value this type of role, and that’s fine. Others may pretend to support it but in practice don’t, and it’s not uncommon at all for Architects to be pushed toward one type of conclusion or another in order to maintain their position. But ultimately, the most successful IT organizations do value this type of role, and once you know you’re in that type of organization fulfilling that type of role, your potential to add value as an IT Architect increases exponentially.



Copyright 2016, Stephen Lahanas

Friday, December 9, 2016

The Top 7 Reasons for Data Governance

In the Age of Big Data, many people might think that the practice of Data Governance is a thing of the past – nothing could be further from the truth. Data Governance has often been misunderstood or underappreciated and relatively few organizations have taken the time and made the investment to integrate it into their enterprise processes. So, there are actually several questions that need to be answered here:
  1. Does the de-normalization of data through exploitation of Big Data technologies discount the need for Data Governance?
  2. Why isn’t Data Governance more widespread if it indeed still has value?
  3. What is the value proposition behind Data Governance? (What are the 7 reasons why you need it?)
We’ll tackle these questions one at a time.
1 - Does Big Data Require Governance?
The immediate expectation in response to this question might be – well, no – as Governance seems to represent the formal and complex approach used for both RDBMS and OLAP data structures. But does this make sense? Classifying a data model as normalized, star schema or a de-normalized Big Table doesn’t change the nature of the data attributes themselves. In other words, we still need to understand that data regardless of where or how it is housed – we still need to know where it comes from, who owns it, where it goes, how it is transformed and so on. If we want the data to be valid, accurate and managed across a lifecycle, Governance is still needed. The technology itself does nothing to prevent us from experiencing a ‘garbage in / garbage out’ situation. The adoption of new technology doesn’t imply the need to discard common sense.
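To make the point concrete, the sketch below captures the kind of metadata Governance still demands for a Big Data table; the fields and sample values are illustrative, not a formal metadata standard.

```python
from dataclasses import dataclass

@dataclass
class GovernedDataset:
    name: str
    owner: str                       # accountable data steward
    source_systems: list[str]        # where the data comes from
    downstream_consumers: list[str]  # where it goes
    transformations: list[str]       # how it is changed along the way
    retention_days: int              # lifecycle rule

clickstream = GovernedDataset(
    name="web_clickstream_denormalized",
    owner="Digital Analytics",
    source_systems=["web_logs", "crm_export"],
    downstream_consumers=["marketing_dashboard"],
    transformations=["sessionization", "pii_masking"],
    retention_days=365,
)

# The same questions apply whether the store is an RDBMS or a Big Data platform.
print(f"{clickstream.name}: owned by {clickstream.owner}, "
      f"fed by {clickstream.source_systems}, kept {clickstream.retention_days} days")
```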
2 – Why isn’t Data Governance More Accepted?
This is a tougher question and in fact can’t easily be broken down into a single reason. Some of the most common reasons include:
  • To govern data you first have to understand it holistically, and that initial assessment / analysis is generally the hardest part – which is often why things don’t progress beyond that point (many of these assessments simply never get completed).
  • Oftentimes, all Governance within an organization may be lacking because of the perception that such processes can’t be Agile and just hold things back or slow them down too much. While there is some truth to that, there is also truth in the lesson, learned innumerable times, that bypassing Governance causes tremendous impacts later (to cost, efficiency and the ability to deliver and maintain capability).
  • Sometimes the tools get confused with the practice. While there are a number of great data governance tools available, sometimes they become an obstacle in themselves (e.g. some may be considered too expensive, others too complicated, or perhaps there might be too many in the mix). The reality is that a lot of Data Governance can occur before, or sometimes even without, making that investment. It is the practice, and not the software used to facilitate the practice, that really matters.
3 – Why do Most Enterprises Need Data Governance? Here are 7 good reasons that tend to represent the more or less universal value proposition:
  • Data Governance reduces enterprise complexity. At first, as I alluded to earlier, the impression might be the opposite. But one only needs to consider a highly typical data Use Case to see how Governance cuts right through complexity. Perhaps the number one integration issue I’ve seen over the past twenty years, pretty much everywhere, is the proliferation of similar or even identical data across multiple systems (this can include both multiple databases and reporting platforms). This quickly leads to all sorts of confusion and ultimately costs more to manage for as long as it stays, well, confused. Governance tackles this type of problem at its core, by first designating authoritative systems and then more strictly controlling the use or reuse of such data (a small sketch of this idea follows this list). This can translate into business rules across the stack and often results in the elimination of both redundant data elements and duplicate systems.
  • Data Governance Enhances Security – How, one might ask, does it do that? Precisely through some of what I’ve already mentioned, including an assessment and classification of what data assets your enterprise has, as well as determination of rules and architectural requirements for safeguarding both Data at Rest and Data in Motion. All of this starts with and becomes part of Data Governance. And if we think a little deeper about it, this is only logical when we consider that data assets are in fact the number one target of every major cyber-attack ever launched. To protect your enterprise, you must first know what’s in it, and secondly you must have the ability to control the flow of that information.
  • Data Governance is the best 1st Step for Integration – Almost every integration challenge is at its heart a data challenge. How we transport data, transform data and keep everything aligned depends in large part on how well we understand that data. Messaging / Middleware / API Frameworks / EDI / SOA / EAI – you name it – it’s all about the data. Once integration is in place, it must be governed – data interfaces (through messaging or other similar mechanisms) are actually one of the most pragmatic initial places where Data Governance can be instituted.
  • Data Governance Enables more Sophisticated Capabilities, such as MDM – Master Data Management is an example of a valuable enterprise capability that simply couldn’t exist if some level of Governance weren’t in place. To deploy MDM, an organization has to understand its core business entities and how they relate to attributes, and be able to control them in a consistent manner. Every MDM solution I’ve ever seen either has Data Governance built in or relies on some other existing Data Governance process. MDM is not the only capability dependent on Governance, though.
  • Data Governance is Critical to Achieving an Effective Analytics Solution – The last thing any organization wants is to get different answers to the same or similar questions. Data Governance not only helps to de-conflict issues at the data level – it can be used to de-conflict entire solutions. In other words, Data Governance helps drive consolidation of reporting and reporting architectures as well as the source systems underneath them.
  • Data Governance can Impact the Bottom Line – Data Governance can make your enterprise more effective, not just from an IT perspective but from the business perspective as well. I’ve seen many organizations reduce duplicate systems, eliminate conflicting data and experience immediate results. The amount of benefit depends on how many systems can be consolidated or turned off and how improving data accuracy impacts the business mission of the organization – but in almost every case, these types of benefits will be realized to some degree.
  • Data Governance is often the Keystone upon which more Effective Enterprise Governance is Built – It is a great place to start if no Governance is in place, or an even better place to expand if there are already some pockets of Governance deployed. Since data tends to be a cross-cutter, both organizationally and architecturally, it can become the foundation for a wider Governance framework.
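Returning to the first bullet above, here is a minimal sketch of what designating authoritative systems can look like in practice; the system and element names are made up for illustration.

```python
# Hypothetical registry: each shared data element has exactly one authoritative system.
AUTHORITATIVE_SOURCES = {
    "customer_address": "crm_system",
    "product_price":    "erp_system",
    "employee_id":      "hr_system",
}

# Systems observed to hold copies of each element (e.g. from an inventory scan).
OBSERVED_COPIES = {
    "customer_address": ["crm_system", "billing_db", "marketing_mart"],
    "product_price":    ["erp_system", "ecommerce_cache"],
}

def redundancy_report() -> None:
    """Flag non-authoritative copies so governance can decide: sync, justify or retire."""
    for element, systems in OBSERVED_COPIES.items():
        source = AUTHORITATIVE_SOURCES.get(element)
        extras = [s for s in systems if s != source]
        if extras:
            print(f"{element}: authoritative={source}, redundant copies={extras}")

redundancy_report()
```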
In my experience, even in the organizations that didn’t fully implement Data Governance, the elements which were deployed provided obvious and immediate value. The current technology trends tend to point to a heightened need for Governance rather than the other way around, especially with the massive levels of adoption of Hybrid Cloud capability. I’ll talk about that in an upcoming post.

Copyright 2016, Stephen Lahanas

Saturday, December 3, 2016

The 5 Principles of Performance Engineering

One of the tasks an IT Architect is typically assigned is reconciling infrastructure and application needs; this holds true for the Cloud just the same as for on-premise solutions (and all Hybrid variations of the two). This type of reconciliation is typically referred to as Performance Engineering. Performance Engineering is seldom a one-off activity: Architects tend to get involved during crucial junctures such as planning phases or in the midst of crisis scenarios, while operational staff monitor and make adjustments on a more regular basis. First, let’s take a look at what tends to make up a typical performance engineering activity or assessment:
  • Preliminary Requirements Estimation
  • Initial Capacity Assessment
  • Network Benchmarking
  • Application Benchmarking
  • Application Tuning
  • Capacity Adjustments
  • Application Testing
Not every Performance Engineering activity requires this full lifecycle approach; however, it should occur at least once and preferably be continuous, coordinated with major application releases or transformation initiatives. Elements from each of these lifecycle pillars can be built into a continuous performance monitoring framework as well – although what you can do is highly contingent on what types of performance monitoring or automation you have in place.
Regardless of whether you are doing a preliminary performance exercise or preparing for a major release or transformation, there are a number of principles that tend to apply which need to be considered in any assessment:
1 – Context is King: The idea of performance is a somewhat relative concept; what one audience or set of clients finds acceptable may not be acceptable to another. Also, even within the same audience, expectations are prone to change. The first step in any performance exercise is determining what is or isn’t acceptable in a given context. This will drive all sorts of decisions and tends to happen mainly in the requirements phase but can be recurring.
2 – Don’t depend solely on one information source: Too often, teams can make decisions on an incomplete picture, relying solely on utilization statistics or UAT results, etc. Performance is a dynamic science – it requires inputs from all available sources in order to obtain an accurate picture. In some situations, important sources of performance information are missing altogether – such as lack of performance automation tools or a performance environment. In these cases, those information deficits must be considered serious operational risks and treated as such.
3 – Always utilize a performance environment – one that mirrors Production: This principle of course is not just for performance management; it is required in order to get a sense of how Production will really operate. However, not all QA environments are precisely matched with Production, and when that happens there is always an increased risk that both performance and functionality cannot be accurately predicted or assessed until the application goes live – which is never a good thing.
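A lightweight way to keep an eye on this is to diff the performance environment's configuration against Production and treat any drift as a risk. The sketch below assumes the configurations have already been exported as flat key/value dictionaries; the keys and values are hypothetical.

```python
# Assumes environment configurations have been exported to flat key/value form.
prod_config = {"app_nodes": 8, "heap_gb": 16, "db_tier": "large", "cdn_enabled": True}
perf_config = {"app_nodes": 4, "heap_gb": 16, "db_tier": "medium", "cdn_enabled": True}

def parity_drift(prod: dict, perf: dict) -> list[str]:
    """Return human-readable differences between Production and the performance environment."""
    drift = []
    for key in sorted(set(prod) | set(perf)):
        if prod.get(key) != perf.get(key):
            drift.append(f"{key}: prod={prod.get(key)!r} vs perf={perf.get(key)!r}")
    return drift

for issue in parity_drift(prod_config, perf_config):
    print("DRIFT:", issue)  # each drift item is a candidate operational risk
```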
4 – Get to know your application, holistically: Every application is different; moreover, an application is more than likely a system-of-systems conglomerate of capabilities working in tandem. It is critical that all of these elements are assessed – from the Operating System to the data interfaces to network connectivity and load balancing. Division of labor often separates interests in terms of who is responsible for what, but ultimately someone is responsible for the whole thing, and that typically is the IT Architect.
5 – Err on the side of caution: In most cases, the cost of procuring additional computing or networking resources is relatively inexpensive compared to the good old days in IT. While it doesn’t make sense to massively over-provision an environment, by the same token it makes little sense to ‘see how long you can ride on the fumes’ either. Give yourself a healthy buffer, one that allows for application growth and can handle unforeseen events.
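This principle can be reduced to simple arithmetic: project peak load forward and add a buffer on top. The growth rate, horizon and buffer below are placeholder assumptions, not recommendations.

```python
def required_capacity(current_peak: float, annual_growth: float, years: float, buffer: float) -> float:
    """Project peak utilization forward and add a safety buffer on top."""
    projected = current_peak * (1 + annual_growth) ** years
    return projected * (1 + buffer)

# Hypothetical numbers: 600 req/s peak today, 25% annual growth, 2-year horizon, 30% buffer.
target = required_capacity(current_peak=600, annual_growth=0.25, years=2, buffer=0.30)
print(f"Provision for roughly {target:.0f} requests/second")  # ~1219 req/s
```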
I’ve found that these principles help provide a framework for just about any performance assessment scenario. If I were to add another principle, it might be this: whenever doing performance assessments or engineering, leverage expert knowledge across the stack rather than assuming that you can answer every question entirely on your own. In today’s complex environments, it’s difficult to find someone who is expert in all aspects of the solution – being able to find and take full advantage of subject matter expertise is an important skill for any IT Architect.
Copyright 2016 - Stephen Lahanas