Operationally Cumbersome?

The leading cause of death in the workplace is falls. 36.5% of all fatalities are due to falls, followed by 10.1% caused by being struck by an object. Recognizing the problem, OSHA created requirements to protect workers from falls, including:

  • guardrail systems
  • safety net systems
  • personal fall arrest systems
  • covers
  • positioning device systems
  • fences
  • barricades
  • controlled access zones

All these controls, when used properly, save lives.

Hypothetical Scenario

A successful construction company is working on a 30-story office building. Timelines were already tight, but a series of material delivery delays has put them way behind schedule. In a rush to complete the project, it’s easy to overlook certain things. In this case, a properly configured personal fall arrest system was overlooked. They bought the system, the system was onsite, but the system wasn’t installed correctly. Nobody noticed until one day a worker, twenty stories up, slipped and fell to his death.

As you can imagine, there was a serious investigation. In the end, the company admitted their oversight, received a fine, settled a lawsuit with the worker’s family, and continued operations.

A few weeks later, the same thing happens. Another investigation, another slap on the wrist, another settled lawsuit, and back to business as usual.

A few months go by, and there’s another incident! The investigation cited the same cause as the others: a poorly configured/installed personal fall arrest system. This time, OSHA wants a public hearing and invites company representatives to answer questions before their panel. At the hearing, company representatives were asked the following question:

If a properly deployed personal fall arrest system had been used, would these lives have been saved?

A company representative responds:

It depends. In theory, it’s a sound thing, but it’s academic. In practice it is operationally cumbersome.

Seems reasonable, right? We certainly don’t want to get in the way of company production!

Or, wait a second. This doesn’t seem right. Poor safety because good safety is “operationally cumbersome” doesn’t sit well with you. Good, it shouldn’t!

Sadly, a similar scenario plays out all over the information security industry every day.

Hearing on the Hack of U.S. Networks by a Foreign Adversary

The construction analogy hit home while watching recent testimony in front of the U.S. Select Committee on Intelligence.

On February 23rd, 2021, Kevin Mandia (FireEye CEO), Sudhakar Ramakrishna (SolarWinds CEO), Brad Smith (Microsoft President), and George Kurtz (CrowdStrike President and CEO) were invited to give their testimony about the attacks on SolarWinds Orion last year (and ongoing). These are four very powerful men in our industry, and I appreciate what they’ve accomplished. In general, I have a great amount of respect for these men, but I’m not comfortable with their representation of our industry without also considering (many) others. Some of the reasons I’m not comfortable include these facts:

  • They run billion and multi-billion dollar companies that sell products and services to protect things.
    • If people were already protected, they’d have nothing to sell. There is incentive to keep people insecure.
    • Companies must continue to produce new products (See: product life cycle diagram below). Without new products, sales decline. As long as people keep buying (regardless of need), companies will keep making new products.

  • They have significant personal financial interests in the performance (sales, profit, etc.) of their companies.
  • They represent shareholders who have significant financial interests in the performance of their companies.
  • They may lack clear perspective of what most Americans and American companies are struggling with due to where they sit.

A hearing such as this is a fantastic opportunity for people to tout their accomplishments (which they do), tout their companies’ accomplishments (which they do), and sell more stuff as a result. I DO NOT fault the witnesses for doing these things. It’s their job!

Let’s just hope our Senators take the hearing and witnesses in proper context and seek many more perspectives before attempting to draft new policy.

IMPORTANT NOTE: It may appear in this article that I’m critical of the people in this Senate hearing, but this is NOT the point. The people participating in the hearing have done tremendous things for our industry and our country. For all we know, if we were in one of their seats, we would respond in much the same way they did. If anything, I’m critical of us, our industry. We have tools sitting right under our noses that we don’t use correctly. Instead of learning to use our tools correctly, and actually using our tools correctly, we go looking for more tools. This is ILLOGICAL, and might even be considered negligent.

The point.

At one point during the hearing (1:22:08, if you’re watching the video), Senator Wyden (D-OR) begins a logical and enlightening line of questioning.

Senator Wyden:

The impression that the American people might get from this hearing is that the hackers are such formidable adversaries that there was nothing that the American government or our biggest tech companies could have done to protect themselves. My view is that message leads to privacy violating laws and billions of more taxpayer funds for cybersecurity. Now it might be embarrassing, but the first order of business has to be identifying where well-known cybersecurity measures could have mitigated the damage caused by the breach. For example, there are concrete ways for the government to improve its ability to identify hackers without resorting to warrantless monitoring of the domestic internet. So, my first question is about properly configured firewalls. Now the initial malware in SolarWinds Orion software was basically harmless. It was only after that malware called home that the hackers took control, and this is consistent with what the Internal Revenue Service told me. Which is while the IRS installed Orion, their server was not connected to the Internet, and so the malware couldn’t communicate with the hackers. So, this raises the question of why other agencies didn’t take steps to stop the malware from calling home. So, my question will be for Mr. Ramakrishna, and I indicated to your folks I was going to ask this. You stated that the back door only worked if Orion had access to the internet, which was not required for Orion to operate. In your view, shouldn’t government agencies using Orion have installed it on servers that were either completely disconnected from the internet, or were behind firewalls that blocked access to the outside world?

To which Mr. Ramakrishna (SolarWinds) responds:

Thanks for the question Senator Wyden. It is true that the Orion platform software does not need connectivity to the internet for it to perform its regular duties, which could be network monitoring, system monitoring, application monitoring on premises of our customers.

Key points:

  1. SolarWinds Orion did not require Internet connectivity to function.
  2. The IRS had Orion.
  3. The IRS did not permit Orion to communicate with the Internet.
  4. Attackers were not able to control the IRS Orion server (because it couldn’t communicate home).
  5. The attack against the IRS was mitigated.

Senator Wyden continues:

Yeah, it just seems to me what I’m asking about is network security 101, and any responsible organization wouldn’t allow software with this level of access to internal systems to connect to the outside world, and you basically said almost the same thing. My question then, for all of you is, the idea that organizations should use firewalls to control what parts of their networks are connected to the outside world is not exactly brand new. NSA recommends that organizations only allow traffic that is required for operational tasks, all other traffic ought to be denied. And NIST, the standards and technology group recommends that firewall policies should be based on blocking all inbound and outbound traffic with exceptions made for desired traffic. So, I would like to go down the row and ask each one of you for a “yes” or “no” answer whether you agree with the firewall advice that would really offer a measure of protection from the NSA and NIST. Just yes or no, and ah, if I don’t have my glasses on maybe I can’t see all the name tags, but let’s just go down the row.

Points made by Senator Wyden:

  1. Network security 101 includes blocking high-risk applications from connecting to the Internet when it’s not specifically required for functionality.
  2. Firewalls are designed to block unwanted and unnecessary network traffic.
  3. There is good authoritative guidance for using firewalls properly, including from the NSA and NIST.
  4. None of this is new.
  5. Organizations that don’t follow “network security 101” are irresponsible.

Kevin Mandia responds first:

And I’m gonna give you the “it depends”. The bottom line is this, we do over 600 red teams a year, firewalls have never stopped one of them. A firewall is like having a gate guard outside a New York City apartment building, and they can recognize if you live there or not, and some attackers are perfectly disguised as someone who lives in the building and walks right by the gate guard. It’s ah, in theory, it’s a sound thing, but it’s academic. In practice it is operationally cumbersome.

OK, here the logic falls apart. The answer “it depends”, followed by “firewalls never stopped” a FireEye red team exercise, did NOT answer Senator Wyden’s question. Logically, this (non) answer would only be valid if (at a minimum):

  • The FireEye red team exercises were run against a “network security 101” firewall configuration.
  • The FireEye red team exercises were a variant or emulation of the SolarWinds attack.

The question was whether a “network security 101” (or a properly configured) firewall would have mitigated the SolarWinds attack (meaning a firewall configured to only permit necessary traffic, as per NSA and NIST guidance). The non-answer justification continues by mentioning “in theory, it’s a sound thing, but it’s academic”. Since it’s been brought up, this IS NOT theoretical, it’s factual. If an attacker cannot communicate with a system (either directly or by proxy), the attacker cannot attack or control the system.
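To make the fact concrete, here’s a minimal sketch (my own, in Python; the addresses and services are hypothetical) of the default-deny egress policy the NSA and NIST guidance describes. Only explicitly allowed destinations get out, so a compromised server trying to “call home” to an unknown address is simply blocked.

```python
# Hypothetical sketch of a default-deny egress policy ("network security
# 101"): outbound traffic passes ONLY if the destination is explicitly
# allowed. Everything else, including malware "calling home", is denied.

ALLOW_OUTBOUND = {
    ("10.0.5.20", 443),  # internal update server (hypothetical)
    ("10.0.5.21", 514),  # internal syslog collector (hypothetical)
}

def egress_permitted(dest_ip: str, dest_port: int) -> bool:
    """Default deny: permit only destinations on the allow list."""
    return (dest_ip, dest_port) in ALLOW_OUTBOUND

# Malware on an Orion-like server tries to reach the open internet:
print(egress_permitted("203.0.113.7", 443))  # False: blocked, attack mitigated

# Legitimate, explicitly allowed traffic still works:
print(egress_permitted("10.0.5.20", 443))    # True: allowed
```

Real firewalls express this same idea as rule sets rather than Python, but the logic is identical: deny by default, allow by exception.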

The last part of this statement brings us (finally) to our original point. Using a firewall, the way it’s supposed to be used (“network security 101”) is “operationally cumbersome”.

Responses from the others:

  • Mr. Ramakrishna: “So my answer Senator is ‘yes’. Do standards such as NIST 800-53 and others that define specific guidelines and rules.” (THE BEST ANSWER)
  • Mr. Smith: “I’m squarely in the ‘it depends’ camp.” (Um, OK. So, a non-answer.)
  • Mr. Kurtz: “Yes, and I would say firewalls help, but are insufficient, and as Kevin said, and I would agree with him. There isn’t a breach that we’ve investigated that the company didn’t have a firewall or even legacy antivirus. So, when you look at the capabilities of a firewall, they’re needed, but certainly they’re not the be-all end-all, and generally they’re a speed bump on the information super highway for the bad guys.” (Basically the same statement as the first. DID NOT answer the question.)

So the score is 3 to 1, “it depends” (without answering the question) versus “yes” (the correct answer).

Operationally Cumbersome

If a firewall (or any tool) is effective in preventing harm when it’s used correctly, why aren’t we using it correctly? The reason “because it’s operationally cumbersome” is NOT a valid argument.

It’s like saying “I don’t do things correctly because it’s hard” or “I don’t have time to do things right, so I don’t” or (as in our construction example) “We don’t have time to use a personal fall arrest system correctly, so people die.” Truth is, our infrastructures are so interconnected today that a failure to configure a firewall properly could (and eventually will) result in someone’s death.

So what do we do today? We do the illogical:

  • Since we don’t have time (or skill or operational bandwidth or whatever) to use an effective tool effectively, we purchase another tool.
  • We won’t have the time (or skill or operational bandwidth or whatever) to use this new tool effectively either, so we purchase another tool.
  • We won’t have time (or skill or operational bandwidth or whatever) to use the new tool and this newer tool effectively, so we purchase yet another tool.
  • The insanity continues…

What we must do (sooner or later):

  • inventory the tools we already have
  • learn how to use the tools we already have properly (knowledge/skill)
  • use the tools we already have properly (in practice)
  • then (and ONLY then) seek additional (or different) tools to address the remaining gaps

As an industry, we must (sooner or later):

  • make this “network security 101” (it’s not new, so we can’t call it the “new network security 101”)
  • hold organizations responsible for “network security 101” (the opposite being, the “new irresponsible” or negligent)

Other facts

Firewalls are NOT the be-all end-all, but they are an important part of a security strategy. Here we are, many years down the road, and we’re still fighting the same fight: the basics.

  • Firewalls have been around for more than 35 years.
  • Firewalls block unwanted and unnecessary network traffic (inbound/ingress and outbound/egress).
  • A properly configured, “network security 101”, “responsible”, “best practice” implementation of a firewall would have mitigated the SolarWinds (or similar) attack.
  • Many (maybe most) U.S. organizations have a firewall that is capable of mitigating the SolarWinds (or similar) attack.
  • There are still ways to bypass a firewall, but if you don’t have your firewall configured properly, what are the chances you’d stop a bypass anyway?
    • application vulnerabilities
    • SQL injection
    • social engineering
    • physical access
    • man-in-the-middle

Operationally cumbersome is not a valid excuse for our failures to understand and follow the basics.

F is for Fundamentals

Despite how much I’d like to use “F” for something else:

  • What the ____ are you doing?!
  • ____ you!
  • Who the ____ told you to do that?!
  • Why the ____ do I bother?

I’ll fight the urge and use “F” in a more decent manner, even if it is a little less honest.

So why does “F” stand for Fundamentals? For starters, fundamentals are critical. Without understanding and implementing fundamentals, the information security program you’ve poured your heart, soul, and money into will fail. Fundamentals form the foundation, and a house with a crappy foundation looks like this…

You might think your information security program looks better than this house, but if you lack fundamentals, you’re wrong. Sadly, we’ve seen too many information security programs look exactly like this house; falling apart, unsafe, and in need of serious rebuilding (or starting over). So, why do so many information security programs look like this house?

The quick answer:

  1. People don’t understand the fundamentals of information security. (AND/OR)
  2. People don’t practice the fundamentals of information security.

Let’s start with #1

People Don’t Understand Information Security Fundamentals

Seems we’ve preached “fundamentals” so many times, I’m beginning to wonder if we’re using the word right. Let’s look at the definition, then use logic (our friend) to take us down the path of understanding.

Here’s the definition of “fundamental” from Merriam-Webster (along with my notes):

  1. serving as a basis supporting existence or determining essential structure or function – the “basis” or foundation of information security.
  2. of or relating to essential structure, function, or facts – the words “essential structure” reinforce the idea of foundation. We can’t build anything practical without a good foundation; therefore, we need to figure out what makes a good information security foundation (based upon its function).
  3. of central importance – what is the “central importance” of information security? We get this answer from understanding the purpose of information security.

OK, now let’s take “fundamental” and apply it to “information security”. My definition of information security is:

Managing risk to unauthorized disclosure, modification, and destruction of information using administrative, physical, and technical means (or controls).

Does the definition of information security meet the objectives set by the definition of “fundamental”? Think about it. Re-read if necessary.

Settled?

If the answer is “no”, then define information security for yourself. Write it down. (let’s hope ours are close to the same)

The definition of “information security” is the most fundamental aspect of information security. If we don’t have a solid fundamental understanding of information security, good luck with the rest.

OK, so what’s next?

Notice the words “managing risk” in the definition? Information security isn’t “eliminating risk” because that’s not possible. Managing risk, however, is quite possible. Seems our next fundamental is to define how to manage risk. Logic is still our friend, so let’s use it again:

  • You cannot manage risk unless you define risk. = risk definition
  • You cannot manage risk unless you understand it. = risk assessment
  • You cannot manage risk unless you measure it. = risk measurement (management 101 – “you can’t manage what you can’t measure”)
  • You cannot manage risk unless you know what to do with it. = risk decision-making

Risk Definition

If managing risk is fundamental to information security, it’s a good idea for us to define risk. The dictionary definitions of risk are not entirely helpful or practical. For instance:

  1. possibility of loss or injury – this only accounts for likelihood and says nothing of impact.
  2. someone or something that creates or suggests a hazard – this is more “threat” than risk.

In simple terms, risk is:

the likelihood of something bad happening and the impact if it did

OK, but how do we then determine likelihoods and impacts?

These are functions of threats and vulnerabilities. More logic, this time theoretical:

  • If you have no weakness (in a control), it doesn’t matter what the threat is. You have zero risk.
  • If you have infinite weakness (meaning no control), but have no threats, you also have zero risk.
  • If you have infinite weakness (meaning no control), and have many applicable threats, you (potentially) have infinite risk.
  • Zero risk and infinite risk are not practically feasible; therefore, risk is between zero and infinity.

Makes sense. The important things to remember about risk are likelihood, impact, threat, and vulnerability. Also, it helps to remember that risk is always relative.
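As a toy illustration (my own sketch, not a standard formula), the logic above can be expressed as risk being a product of threat and vulnerability, each on a relative 0-to-1 scale; zero on either axis means zero risk.

```python
# Toy model of the logic above (hypothetical scales, not a standard):
# threat and vulnerability each range from 0.0 (none) to 1.0 (maximum).

def risk(threat: float, vulnerability: float) -> float:
    """Relative risk: zero if either threat or vulnerability is zero."""
    return threat * vulnerability

print(risk(threat=0.9, vulnerability=0.0))        # no weakness -> 0.0
print(risk(threat=0.0, vulnerability=1.0))        # no threat -> 0.0
print(round(risk(threat=0.8, vulnerability=0.6), 2))  # some of each -> 0.48
```

The exact arithmetic doesn’t matter; what matters is the shape of the relationship: no vulnerability or no threat means no risk, and real-world scores always land somewhere in between.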

Risk Assessment

The next fundamental in “managing risk” is to assess risk. To some folks, assessing information security risk seems like a daunting and/or useless exercise. There are several reasons for this. One reason might be that it’s new to you. Risk assessments aren’t new (we do risk assessments all the time), but doing them in the context of information security is new.

Examples of everyday risk assessments:

  • You’re driving down the road and the traffic light turns yellow. The risk assessment is quick and mostly effective. What’s the likelihood of an accident or a police officer watching? What would the repercussions be (or impact)? You quickly look around, checking each direction. You assess your speed and distance. If you assess the risk to be acceptable, you go for it. If you assess the risk to be unacceptable, you hit the brakes.

NOTE: Risk decision-making for information security comes later in this post.

  • You just used the restroom. Do you wash your hands or not? You assess the risk of not washing your hands. Will I get sick, or worse, get someone else sick if I don’t wash? What are the chances? What could be the outcome if you don’t wash your hands? If you deem the risk to be acceptable without washing, you might just walk out the door. If you deem the risk to be unacceptable (hopefully), you’ll take a minute or two and wash your hands.

We all do risk assessments, and we do them throughout the day. We’re used to these risk assessments, and we don’t think much about them. Most of us aren’t used to information security risk assessments. There are so many controls and threats (known and unknown). It’s easy to become overwhelmed, confused, and paralyzed, leading to inaction.

Some truth about information security (risk) assessments:

  • There is no such thing as a perfect one.
  • Your first one is probably going to be your worst and most painful one.
  • You cannot manage information security without one.
  • They’re fundamental.

Just do an information security risk assessment. Worry about comparisons, good ones versus bad ones, later (you’re probably not ready to judge anyway).

Risk Measurement

People argue about measurements. Don’t. Fight the urge.

You can use an existing risk measurement (FAIR, S2Score, etc.) or create one yourself. If you’re going to create your own risk measurement, here are some simple tips:

  1. Make the measurement as objective as possible. Instead of open-ended inputs or subjective inputs, use binary ones. Binary inputs are things like true/false, yes/no, etc.
  2. Use the measurement consistently. An inch is an inch, no matter where you apply it. A meter is a meter, no matter where you use it. For example, if a “true” answer to some criteria results in a vulnerability score of 5 today, it should be a 5 tomorrow too. Applying threats may change things, but the algorithm is still the same.
  3. Make sure the criteria being measured are relevant. For instance, take the crime rate in a neighborhood. Is it relevant to information security risk? The answer is yes. Our definition of information security covers “administrative, physical, and technical” controls, and crime rates are relevant to physical security threats.

If you are new(er) to information security risk management, you may want to use a metric that’s already been defined by someone else. Again, caution against trying to find the perfect measurement. It’s like arguing whether an inch is a better measurement than a centimeter. Don’t get me started…
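A minimal sketch of tips #1 and #2 in Python (the criterion names and weights are hypothetical, mine alone): binary inputs, weighted the same way every time, summed into a repeatable vulnerability score.

```python
# Hypothetical binary-input measurement: each criterion is answered
# True/False, each carries a fixed weight, and unmet criteria (gaps)
# add to the vulnerability score. Same answers in, same score out.

CRITERIA = {  # criterion name -> weight (both hypothetical)
    "firewall_default_deny_egress": 5,
    "mfa_enabled": 4,
    "asset_inventory_current": 3,
}

def vulnerability_score(answers: dict) -> int:
    """Sum the weights of every criterion answered False (a gap)."""
    return sum(weight for name, weight in CRITERIA.items()
               if not answers.get(name, False))

answers = {
    "firewall_default_deny_egress": False,  # gap: +5
    "mfa_enabled": True,                    # met: +0
    "asset_inventory_current": False,       # gap: +3
}
print(vulnerability_score(answers))  # 5 + 3 = 8
```

Run it today or tomorrow with the same answers and you get the same 8; that consistency is the whole point of tip #2.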

Risk Decision-Making

Alright, so you did your information security risk assessment.

Done?

Nope, just getting going now. Before doing your risk assessment, you were risk ignorant. Now, you’re risk learned. Yay you!

What to do with all this risk?

Let’s say your organization scored a 409 on a scale of 300 (worst) – 850 (best), and you discovered several areas where the organization scored close to 300. There’s LOTS of room for improvement. Now you need to make decisions about what you’re going to do. To keep things simple, you only have four options:

  1. Accept the risk as-is. The risk is acceptable to the organization and no additional work is required.
  2. Transfer the risk. The risk is not acceptable, but it’s also not a risk your organization is going to mitigate or avoid. You can transfer the risk, often to a third-party through insurance or other means.
  3. Mitigate the risk. The risk is not acceptable, and your organization has decided to do something about it. Risks are mitigated by reducing vulnerability (or weakness) or by reducing threats.
  4. Avoid the risk. The risk is not acceptable, and your organization has decided to stop doing whatever activity led to the risk.

That’s it. No other choices. Risk ignorance is not a valid option.
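The four options can be captured in a few lines (illustrative only; the short action descriptions are my own paraphrase):

```python
# The four risk decisions, as a simple lookup. Anything outside these
# four (i.e., risk ignorance) is rejected outright.

DECISIONS = {
    "accept":   "document acceptance; no further work required",
    "transfer": "shift the risk to a third party, e.g. via insurance",
    "mitigate": "reduce the vulnerability or the threat",
    "avoid":    "stop the activity that creates the risk",
}

def decide(option: str) -> str:
    """Return the action for a valid risk decision; reject anything else."""
    if option not in DECISIONS:
        raise ValueError("risk ignorance is not a valid option")
    return DECISIONS[option]

print(decide("mitigate"))  # reduce the vulnerability or the threat
```

Calling `decide("ignore")` raises an error, which is exactly the point: every identified risk must end in one of the four decisions.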

There you go! Now you have a start to the fundamentals of information security! The foundation.

Did you notice that I didn’t mention anything about security standards, models, frameworks, identification, authentication, etc.?

These are all fundamentals too, but first things first.

People don’t practice the fundamentals of information security.

We live in an easy button, instant gratification, shortcut world today. Information security is simple, but it’s definitely NOT easy. Good information security takes work, a lot of dirty (NOT sexy) work. What happens when you cut corners in laying a foundation? Bad things. Meanwhile, we’re drawn to what’s sexy:

  • Hacking things. That’s a lot sexier than doing a risk assessment.
  • Blinky lights. These are a lot sexier than making formal risk decisions.
  • Cool buzzwords. So much sexier than the basics. The basics are boring!

Hacking, blinky lights and buzzwords all have their place, but not at the expense of fundamentals.

You have no excuse for not doing the fundamentals. Zero. The truth is, if you know the fundamentals and fail to do them, you’re negligent (or should be found as such). Reminds me, there are a few more fundamentals you should know about before we finish:

  • Roles & Responsibilities – Ultimately, the head of the organization (work and/or home) is the one responsible for information security; all of it. He/she may delegate certain things, but the buck always stops at the top of the food chain. Whatever’s delegated must be crystal clear, and documentation helps. We should always know who does what. (See: E is for Everyone).
  • Asset Management – You can’t secure what you don’t know you have. Assets are things of value; tangible (hardware) and intangible (software, data, people, etc.). Tangible asset management is the place to start, because it’s easier to understand. Once you’ve nailed down your tangible assets, go tackle your intangible ones.
  • Control (access, change, configuration, etc.) – You can’t secure what you can’t control. Administrative controls (the things we use to govern and influence people), physical controls, and technical controls.
    • Start with administrative controls; policies, standards, guidelines, and procedures. These are the rules for the game, and this is where standards like ISO 27002, COBIT, NIST SP 800-53, CIS Controls, etc. can help.
    • Access control; identity management and access management. Authentication plays here.
    • Configuration control; vulnerabilities love to live here (not just missing patches).
    • Change control; one crappy change can lead to complete vulnerability and compromise.

The last fundamental is the cycle. Cycle through risk assessment, risk decision-making, and action. The frequency of the cycle depends on you.

Summary

I’d rather over-simplify information security than over-complicate it. Simplification is always a friend, along with logic. Quick summary of the fundamentals of information security:

  • Fundamental #1 – Learn and work within the context of what information security is (risk management).
  • Fundamental #2 – Roles and responsibilities.
  • Fundamental #3 – Asset management.
  • Fundamental #4 – Administrative control.
  • Fundamental #5 – Other controls (several).

Honorable Mention for “F”

As was true in previous ABCs, I got some great suggestions. Here are some honorable mentions for “F”:

  • Facial Recognition
  • Failover
  • Failure
  • Faraday Cage
  • Fat Finger
  • Fear Uncertainty & Doubt (FUD)
  • Federal Information Processing Standards (FIPS)
  • Federal Information Security Management Act (FISMA)
  • Federal Risk and Authorization Program (FedRAMP)
  • Federated Identity Management (FIM)
  • Feistel Network
  • FERPA
  • Fibonacci Sequence
  • File Integrity Monitoring (FIM)
  • File
  • Fingerprint
  • Firewall
  • Foobar/Fubar
  • Fortran
  • Fraud over Internet Protocol
  • Fuzz Testing

Hope this helps you in your journey! Now on to “G”.

 

A is for Accountability

Information security ABCs – An exercise in the fundamentals and basics of information security for everyone.

Accountability

the state of being accountable, liable, or answerable.

This is where information security starts. If accountability were better understood, agreed upon, practiced, and enforced, we’d have much better information security.

Who’s ultimately responsible for information security in your organization?

This is a question I’ve asked 100s of organizations over the years. You’d be surprised by the answers:

  • “I don’t know.”
  • “That’s a good question.”
  • “Well, I am (the CIO, CISO, etc.).”
  • “We all are.”
  • “Nobody is.”

What’s the right answer? Simple, do this:

  • Grab an organization chart.
  • Find the person/people at the top of the chart.

This is the correct answer. Always.

Sample Org Chart

Three questions then:

  1. Does the person/people at the top know they’re ultimately responsible for information security?
  2. If so, do they act like it (demand periodic status updates, champion the cause, plot direction, delegate effectively, etc.)?
  3. If not, who’s responsible for telling them?

The sample organization chart above is semi-typical for a business. Let’s look at a city, county, and/or school district. Same thing applies, the person/people at the top is/are ultimately responsible.


If this ultimate accountability is missing or broken, then expect the information security program to be missing or broken. The lack of accountability at the top permeates through all other information security efforts.

Tip: Define ultimate responsibility for information security in your organization and document it in an information security charter.

Top-Down

There’s a saying, “information security is everyone’s responsibility.” This is sort of true, but sort of not true. It’s true that everyone has responsibilities in information security, but it’s not true that information security is everyone’s responsibility. Ultimately, information security is a responsibility that lies at the top. Only once this is realized can we effectively begin to define and communicate delegated and supporting responsibilities.

Don’t assume that people know what their responsibilities are. Once responsibilities are defined and agreed upon, we can start practicing/enforcing accountability.

The CISO

In simplest terms, a CISO only has two responsibilities.

  1. Consult on information security risk, enabling the business to make sound risk decisions.
  2. Implement the business’ risk decisions in the best manner possible.

Both of these responsibilities are delegated from the top. In some cases, the top may delegate risk decisions to the CISO as well. This can work if the parameters are well-defined (and documented) and the CISO is empowered to do so.

NOTE: This approach is a delegation only, and should/does not absolve the top from their responsibility.

Honorable Mention for “A”

  • Asset (and asset management) – something that has value to a person or organization. Assets can be tangible (hardware, facility, etc.) or intangible (software, data, intellectual property, etc.).
  • Authentication – proof of an identity (subject or object). Three factors: something you know (password, PIN code, etc.), something you have (token, mobile phone, etc.), and something you are (biometric).
  • Access (Control) – what a subject can do with a system, file, object, etc.

Next up, “B”.