Apologies for not posting something about last week’s show, episode 130. We were honored and pleased to welcome John Strand from Black Hills Information Security as our guest. John, Brad, and I talked openly about John’s path through information security, what Black Hills is working on, the different pockets of security people, why it’s important for information security vendors to work together to improve the community, and John’s latest Pay What You Can (PWYC) series.
My good friend, Security Shit Show co-host, hacker extraordinaire, and all around great guy Chris Roberts is stopping in for a chat.
Special Guest – Chris Roberts
Chris and I (Evan) were introduced to each other by our mutual friend Tony Cole maybe three years ago, but we didn’t get to know each other well until the last 13 or 14 months. We’re both REALLY busy guys, so our circles just didn’t cross much. In the past year, we’ve gotten to know each other quite well, which is no surprise seeing that we spend more than two hours together each week on the Security Shit Show with Ryan Cloutier (another great guy).
Episode 133 IF YOU WANT TO BE A GUEST OR HAVE A SUGGESTION, LET US KNOW!
Lots of GREAT conversations with lots of GREAT information security folks!
SHOW NOTES – Episode 131 – Tuesday May 11th, 2021
[Evan] Welcome listeners! Thanks for tuning into this episode of the UNSECURITY Podcast. This is episode 131, and the date is May 11th, 2021. Joining me is my good friend, infosec buddy, and partner in crime, Brad Nigh.
Also joining the UNSECURITY Podcast is our special guest, Mr. Chris Roberts! Welcome my friend. It’s an honor to have you on our show!
Introducing Chris Roberts
Let’s start with trying to figure out how Chris first got into the information security industry.
Next, we’ll see how far we can get down his career path before 1) we start chasing squirrels (we’re both ADD) or 2) we run out of time (because there’s A LOT there).
The Colonial Pipeline Attack and global security tensions/consequences.
We’ll see if we get to his plane hacking antics too, but I’m not sure we’ll have the time.
We’ll probably skip news in this show. Guessing that Brad, Chris, and I will have no problem filling the entire show with good discussion.
Wrapping Up – Shout Outs
Who’s getting shout outs this week?
Thank you to all our listeners! HUGE thank you to Chris for joining us. If you have something you’d like to tell us, feel free to email the show at firstname.lastname@example.org. If you’re the social type, socialize with us on Twitter, I’m @evanfrancen, and Brad’s @BradNigh.
The leading cause of death in the workplace is falls. 36.5% of all fatalities are due to falls, followed by 10.1% caused by being struck with an object. Recognizing the problem, OSHA created requirements to protect workers from falls, including:
safety net systems
personal fall arrest systems
positioning device systems
controlled access zones
All these controls, when used properly, save lives.
A successful construction company is working on a 30-story office building. Timelines were already tight, but a series of material delivery delays has put them way behind schedule. In a rush to complete the project, it’s easy to overlook certain things. In this case, a properly configured personal fall arrest system was overlooked. They bought the system, the system was onsite, but the system wasn’t installed correctly. Nobody noticed until one day a worker, twenty stories up, slipped and fell to his death.
As you can imagine, there was a serious investigation. In the end, the company admitted their oversight, received a fine, settled a lawsuit with the worker’s family, and continued operations.
A few weeks later, the same thing happens. Another investigation, another slap on the wrist, another settled lawsuit, and back to business as usual.
A few months go by, and there’s another incident! The investigation cited the same cause as the others, a poorly configured/installed personal fall arrest system. This time, OSHA wants a public hearing and invites company representatives to answer questions before their panel. At the hearing, company representatives were asked the following question:
“If a properly deployed personal fall arrest system had been used, would these lives have been saved?“
A company representative responds:
“It depends. In theory, it’s a sound thing, but it’s academic. In practice it is operationally cumbersome.“
Seems reasonable, right? We certainly don’t want to get in the way of company production!
Or wait a second. This doesn’t seem right. Accepting poor safety because good safety is “operationally cumbersome” shouldn’t sit well with you. Good, it shouldn’t!
Sadly, a similar scenario plays out all over the information security industry every day.
Hearing on the Hack of U.S. Networks by a Foreign Adversary
The construction analogy hit home while watching recent testimony in front of the U.S. Select Committee on Intelligence.
On February 23rd, 2021, Kevin Mandia (FireEye CEO), Sudhakar Ramakrishna (SolarWinds CEO), Brad Smith (Microsoft President), and George Kurtz (CrowdStrike President and CEO) were invited to give their testimony about the attacks on SolarWinds Orion last year (and ongoing). These are four very powerful men in our industry, and I appreciate what they’ve accomplished. In general, I have a great amount of respect for these men, but I’m not comfortable with them representing our industry without also considering (many) others. Some of the reasons I’m not comfortable include these facts:
They run billion and multi-billion dollar companies that sell products and services to protect things.
If people were already protected, they’d have nothing to sell. There is incentive to keep people insecure.
Companies must continue to produce new products (See: product life cycle diagram below). Without new products, sales decline. As long as people keep buying (regardless of need), they’ll keep making.
They have significant personal financial interests in the performance (sales, profit, etc.) of their companies.
They represent shareholders who have significant financial interests in the performance of their companies.
They may lack clear perspective of what most Americans and American companies are struggling with due to where they sit.
A hearing such as this is a fantastic opportunity for people to tout their accomplishments (which they do), tout their companies’ accomplishments (which they do), and sell more stuff as a result. I DO NOT fault the witnesses for doing these things. It’s their job!
Let’s just hope our Senators take the hearing and witnesses in proper context and seek many more perspectives before attempting to draft new policy.
IMPORTANT NOTE: It may appear in this article that I’m critical of the people in this Senate hearing, but this is NOT the point. The people participating in the hearing have done tremendous things for our industry and our country. For all we know, if we were in one of their seats, we would respond in much the same way they did. If anything, I’m critical of us, our industry. We have tools sitting right under our noses that we don’t use correctly. Instead of learning to use our tools correctly, and actually using our tools correctly, we go looking for more tools. This is ILLOGICAL, and might even be negligent.
At one point during the hearing (1:22:08, if you’re watching the video), Senator Wyden (D-OR) begins a logical and enlightening line of questioning.
“The impression that the American people might get from this hearing is that the hackers are such formidable adversaries that there was nothing that the American government or our biggest tech companies could have done to protect themselves. My view is that message leads to privacy violating laws and billions of more taxpayer funds for cybersecurity. Now it might be embarrassing, but the first order of business has to be identifying where well-known cybersecurity measures could have mitigated the damage caused by the breach. For example, there are concrete ways for the government to improve its ability to identify hackers without resorting to warrantless monitoring of the domestic internet. So, my first question is about properly configured firewalls. Now the initial malware in SolarWinds Orion software was basically harmless. It was only after that malware called home that the hackers took control, and this is consistent with what the Internal Revenue Service told me. Which is while the IRS installed Orion, their server was not connected to the Internet, and so the malware couldn’t communicate with the hackers. So, this raises the question of why other agencies didn’t take steps to stop the malware from calling home. So, my question will be for Mr. Ramakrishna, and I indicated to your folks I was going to ask this. You stated that the back door only worked if Orion had access to the internet, which was not required for Orion to operate. In your view, shouldn’t government agencies using Orion have installed it on servers that were either completely disconnected from the internet, or were behind firewalls that blocked access to the outside world?”
To which Mr. Ramakrishna (SolarWinds) responds:
“Thanks for the question Senator Wyden. It is true that the Orion platform software does not need connectivity to the internet for it to perform its regular duties, which could be network monitoring, system monitoring, application monitoring on premises of our customers.”
SolarWinds Orion did not require Internet connectivity to function.
The IRS had Orion.
The IRS did not permit Orion to communicate with the Internet.
Attackers were not able to control the IRS Orion server (because it couldn’t communicate home).
The attack against the IRS was mitigated.
Senator Wyden continues:
“Yeah, it just seems to me what I’m asking about is network security 101, and any responsible organization wouldn’t allow software with this level of access to internal systems to connect to the outside world, and you basically said almost the same thing. My question then, for all of you is, the idea that organizations should use firewalls to control what parts of their networks are connected to the outside world is not exactly brand new. NSA recommends that organizations only allow traffic that is required for operational tasks, all other traffic ought to be denied. And NIST, the standards and technology group recommends that firewall policies should be based on blocking all inbound and outbound traffic with exceptions made for desired traffic. So, I would like to go down the row and ask each one of you for a “yes” or “no” answer whether you agree with the firewall advice that would really offer a measure of protection from the NSA and NIST. Just yes or no, and ah, if I don’t have my glasses on maybe I can’t see all the name tags, but let’s just go down the row.”
Points made by Senator Wyden:
Network security 101 includes blocking high-risk applications from connecting to the Internet when it’s not specifically required for functionality.
Firewalls are designed to block unwanted and unnecessary network traffic.
There is good authoritative guidance for using firewalls properly, including from the NSA and NIST.
None of this is new.
Organizations that don’t follow “network security 101” are irresponsible.
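The NSA and NIST guidance Senator Wyden cites boils down to a default-deny policy: block everything, then permit only the traffic a system specifically needs. Here’s a minimal sketch of that logic in Python (the allow-list entries, hosts, and ports are hypothetical, for illustration only, not a real configuration):

```python
# Default-deny egress policy: only explicitly allowed (host, port) pairs pass.
# The allow-list below is a hypothetical example.
ALLOWED_EGRESS = {
    ("10.0.0.5", 443),   # internal update server
    ("10.0.0.9", 514),   # internal syslog collector
}

def egress_permitted(dest_host: str, dest_port: int) -> bool:
    """Return True only if traffic matches an explicit allow rule."""
    return (dest_host, dest_port) in ALLOWED_EGRESS

# A monitoring server with no business need for Internet access never
# matches an allow rule, so malware "calling home" is simply dropped.
assert egress_permitted("10.0.0.5", 443) is True
assert egress_permitted("203.0.113.7", 443) is False  # attacker C2, blocked
```

This is the whole point of “network security 101”: the malware in the IRS scenario above wasn’t defeated by a clever product, it was defeated by a rule that denied traffic nobody had a reason to allow.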
Kevin Mandia responds first:
“And I’m gonna give you the “it depends”. The bottom line is this, we do over 600 red teams a year, firewalls have never stopped one of them. A firewall is like having a gate guard outside a New York City apartment building, and they can recognize if you live there or not, and some attackers are perfectly disguised as someone who lives in the building and walks right by the gate guard. It’s ah, in theory, it’s a sound thing, but it’s academic. In practice it is operationally cumbersome.“
OK, here the logic falls apart. The answer “it depends”, followed by “firewalls never stopped” a FireEye red team exercise, did NOT answer Senator Wyden’s question. Logically, this (non) answer would only be valid if (at a minimum):
The FireEye red team exercises were run against a “network security 101” firewall configuration.
The FireEye red team exercises were a variant or emulation of the SolarWinds attack.
The question was whether a “network security 101” (properly configured) firewall would have mitigated the SolarWinds attack (meaning a firewall configured to only permit necessary traffic, per NSA and NIST guidance). The non-answer justification continues with “in theory, it’s a sound thing, but it’s academic”. Since it’s been brought up: this is NOT theoretical, it’s factual. If an attacker cannot communicate with a system (either directly or by proxy), the attacker cannot attack or control the system.
The last part of this statement brings us (finally) to our original point. Using a firewall the way it’s supposed to be used (“network security 101”) is “operationally cumbersome”.
Responses from the others:
Mr. Ramakrishna: So my answer Senator is “yes”, per standards such as NIST 800-53 and others that define specific guidelines and rules. (THE BEST ANSWER)
Mr. Smith: I’m squarely in the “it depends” camp. (Um, OK. So, a non-answer.)
Mr. Kurtz: Yes, and I would say firewalls help, but are insufficient, and as Kevin said, and I would agree with him. There isn’t a breach that we’ve investigated that the company didn’t have a firewall or even legacy antivirus. So, when you look at the capabilities of a firewall, they’re needed, but certainly they’re not the be-all end-all, and generally they’re a speed bump on the information super highway for the bad guys. (Basically the same statement as the first. DID NOT answer the question.)
So the score is 3 to 1, “it depends” (without answering the question) versus “yes” (the correct answer).
If a firewall (or any tool) is effective in preventing harm when it’s used correctly, why aren’t we using it correctly? The reason “because it’s operationally cumbersome” is NOT a valid argument.
It’s like saying “I don’t do things correctly because it’s hard,” or “I don’t have time to do things right, so I don’t,” or (as in our construction example) “We don’t have time to use a personal fall arrest system correctly, so people die.” Truth is, our infrastructures are so interconnected today that a failure to configure a firewall properly could (and eventually will) result in someone’s death.
So what do we do today? We do the illogical:
Since we don’t have time (or skill or operational bandwidth or whatever) to use an effective tool effectively, we purchase another tool.
We won’t have the time (or skill or operational bandwidth or whatever) to use this new tool effectively either, so we purchase another tool.
We won’t have time (or skill or operational bandwidth or whatever) to use the new tool and this newer tool effectively, so we purchase yet another tool.
The insanity continues…
What we must do (sooner or later):
inventory the tools we already have
learn how to use the tools we already have properly (knowledge/skill)
use the tools we already have properly (in practice)
then (and ONLY then) seek additional (or different) tools to address the remaining gaps
As an industry, we must (sooner or later):
make this “network security 101” (it’s not new, so we can’t call it the “new network security 101”)
hold organizations responsible for “network security 101” (the opposite being, the “new irresponsible” or negligent)
Firewalls are NOT the be-all and end-all, but they are an important part of a security strategy. Here we are, many years down the road, and we’re still fighting the same fight: the basics.
Firewalls have been around for more than 35 years.
Firewalls block unwanted and unnecessary network traffic (inbound/ingress and outbound/egress).
A properly configured, “network security 101”, “responsible”, “best practice” implementation of a firewall would have mitigated the SolarWinds (or similar) attack.
Many (maybe most) U.S. organizations have a firewall that is capable of mitigating the SolarWinds (or similar) attack.
There are still ways to bypass a firewall, but if you don’t have your firewall configured properly, what are the chances you’d stop a bypass anyway?
Operationally cumbersome is not a valid excuse for our failures to understand and follow the basics.
We have another great guest for episode 129, and we’re excited to get his take on things!
Special Guest – Ron Woerner
In this episode of the UNSECURITY Podcast, we’re joined by another good friend of ours, Ron Woerner.
Ron and I (Evan) first met at the RSA Conference last year (2020) after being introduced to each other by Ryan Cloutier, another good friend. Ron is a no-nonsense, plain-English-speaking information security expert with a heart for helping people from all walks of life protect themselves better. I love this guy and I’m excited to chat with him on the show!
Believe it or not, I have never met John in person. Despite running in some of the same circles for many years, this will be the first time I meet him.
John also has a laundry list of accomplishments. He’s the Founder and Owner of Black Hills Information Security, Senior Instructor with the SANS Institute, teaches SEC504: Hacker Techniques, Exploits, and Incident Handling; SEC560: Network Penetration Testing and Ethical Hacking; SEC580: Metasploit Kung Fu for Enterprise Pen Testing; and SEC464: Hacker Detection for System Administrators. John is the course author for SEC464: Hacker Detection for System Administrators and the co-author for SEC580: Metasploit Kung Fu for Enterprise Pen Testing. He’s also presented at the FBI, NASA, NSA, DefCon, and lots of other places.
Lots of GREAT conversations with lots of GREAT information security folks!
SHOW NOTES – Episode 129 – Tuesday April 27th, 2021
Recorded Monday April 26th, 2021
[Evan] Welcome listeners! Thanks for tuning into this episode of the UNSECURITY Podcast. This is episode 129, and the date is April 27th, 2021. Joining me is my good friend, solid partner, and top infosec expert Brad Nigh. Welcome Brad!
Also joining the UNSECURITY Podcast is our special guest, Mr. Ron Woerner! Welcome Ron. It’s an honor to have you on our show!
Introducing Ron Woerner
It’s great to have Ron on our show! He gets information security and he always has an interesting perspective on things.
Top of mind things.
Pretty sure we’ll get to talk about Ron’s talks at RSA, his work/lectures at Bellevue University, social engineering things, information security as a life skill, and other goodies!
We’ll probably skip news in this show. Guessing that Brad, Ron, and myself will have no problem filling the entire show with good discussion.
Wrapping Up – Shout Outs
Who’s getting shout outs this week?
Thank you to all our listeners! HUGE thank you to Ron for joining us. If you have something you’d like to tell us, feel free to email the show at email@example.com. If you’re the social type, socialize with us on Twitter, I’m @evanfrancen, and Brad’s @BradNigh.
Ron can be reached on LinkedIn, Twitter (@RonW123), and other places he’ll probably share during the show.
Despite how much I’d like to use “F” for something else:
What the ____ are you doing?!
Who the ____ told you to do that?!
Why the ____ do I bother?
I’ll fight the urge and use “F” in a more decent manner, even if it is a little less honest.
So why does “F” stand for Fundamentals? For starters, fundamentals are critical. Without understanding and implementing fundamentals, the information security program you’ve poured your heart, soul, and money into will fail. Fundamentals form the foundation, and a house with a crappy foundation looks like this…
You might think your information security program looks better than this house, but if you lack fundamentals, you’re wrong. Sadly, we’ve seen too many information security programs look exactly like this house; falling apart, unsafe, and in need of serious rebuilding (or starting over). So, why do so many information security programs look like this house?
The quick answer:
People don’t understand the fundamentals of information security. (AND/OR)
People don’t practice the fundamentals of information security.
Let’s start with #1
People Don’t Understand Information Security Fundamentals
Seems we’ve preached “fundamentals” so many times, I’m beginning to wonder if we’re using the word right. Let’s look at the definition, then use logic (our friend) to take us down the path of understanding.
Here’s the definition of “fundamental” from Merriam-Webster (along with my notes):
serving as a basis supporting existence or determining essential structure or function – the “basis” or foundation of information security.
of or relating to essential structure, function, or facts – the words “essential structure” reinforces the idea of foundation. We can’t build anything practical without a good foundation; therefore, we need to figure out what makes a good information security foundation (based upon its function).
of central importance – what is the “central importance” of information security? We get this answer from understanding the purpose of information security.
OK, now let’s take “fundamental” and apply it to “information security”. My definition of information security is:
Managing risk to unauthorized disclosure, modification, and destruction of information using administrative, physical, and technical means (or controls).
Does the definition of information security meet the objectives set by the definition of “fundamental”? Think about it. Re-read if necessary.
If the answer is “no”, then define information security for yourself. Write it down. (let’s hope ours are close to the same)
The definition of “information security” is the most fundamental aspect of information security. If we don’t have a solid fundamental understanding of information security, good luck with the rest.
OK, so what’s next?
Notice the words “managing risk” in the definition? Information security isn’t “eliminating risk” because that’s not possible. Managing risk, however, is quite possible. Seems our next fundamental is to define how to manage risk. Logic is still our friend, so let’s use it again:
You cannot manage risk unless you define risk. = risk definition
You cannot manage risk unless you understand it. = risk assessment
You cannot manage risk unless you measure it. = risk measurement (management 101 – “you can’t manage what you can’t measure“)
You cannot manage risk unless you know what to do with it. = risk decision-making
If managing risk is fundamental to information security, it’s a good idea for us to define risk. The dictionary definitions of risk are not entirely helpful or practical. For instance:
possibility of loss or injury – this only accounts for likelihood and says nothing of impact.
someone or something that creates or suggests a hazard – this is more “threat” than risk.
In simple terms, risk is:
the likelihood of something bad happening and the impact if it did
OK, but how do we then determine likelihoods and impacts?
These are functions of threats and vulnerabilities. More logic, this time theoretical:
If you have no weakness (in a control), it doesn’t matter what the threat is. You have zero risk.
If you have infinite weakness (meaning no control), but have no threats, you also have zero risk.
If you have infinite weakness (meaning no control), and have many applicable threats, you (potentially) have infinite risk.
Zero risk and infinite risk are not practically feasible; therefore, risk is between zero and infinity.
Makes sense. The important things to remember about risk are likelihood, impact, threat, and vulnerability. Also, it helps to remember that risk is always relative.
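One way to see how threat, vulnerability, likelihood, and impact fit together is a toy scoring function. This is an illustration of the logic above, not the author’s formula or a formal risk model; the scale and the multiplication are assumptions made for the sketch:

```python
def risk_score(threat: float, vulnerability: float, impact: float) -> float:
    """Toy relative risk score. Likelihood is a function of threat and
    vulnerability; risk is likelihood times impact. All inputs 0.0-1.0.
    Illustrative only -- not a formal or endorsed risk model."""
    likelihood = threat * vulnerability
    return likelihood * impact

# No weakness (vulnerability = 0): zero risk, no matter the threat.
assert risk_score(threat=1.0, vulnerability=0.0, impact=1.0) == 0.0
# No threat: zero risk, no matter how weak the control.
assert risk_score(threat=0.0, vulnerability=1.0, impact=1.0) == 0.0
```

Notice the sketch reproduces the theoretical logic exactly: zero threat or zero weakness yields zero risk, and everything practical lands somewhere in between.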
The next fundamental in “managing risk” is to assess risk. To some folks, assessing information security risk seems like a daunting and/or useless exercise. There are several reasons for this. One reason might be that it’s new to you. Risk assessments aren’t new (we do risk assessments all the time), but doing them in the context of information security is new.
Examples of everyday risk assessments:
You’re driving down the road and the traffic light turns yellow. The risk assessment is quick and mostly effective. What’s the likelihood of an accident or a police officer watching? What would the repercussions be (or impact)? You quickly look around, checking each direction. You assess your speed and distance. If you assess the risk to be acceptable, you go for it. If you assess the risk to be unacceptable, you hit the brakes.
NOTE: Risk decision-making for information security comes later in this post.
You just used the restroom. Do you wash your hands or not? You assess the risk of not washing your hands. Will I get sick, or worse, get someone else sick if I don’t wash? What are the chances? What could be the outcome if you don’t wash your hands? If you deem the risk to be acceptable without washing, you might just walk out the door. If you deem the risk to be unacceptable (hopefully), you’ll take a minute or two and wash your hands.
We all do risk assessments, and we do them throughout the day. We’re used to these risk assessments, and we don’t think much about them. Most of us aren’t used to information security risk assessments. There are so many controls and threats (known and unknown). It’s easy to become overwhelmed, confused, and paralyzed, leading to inaction.
Some truth about information security (risk) assessments:
There is no such thing as a perfect one.
Your first one is probably going to be your worst and most painful one.
You cannot manage information security without one.
Just do an information security risk assessment. Worry about comparisons, good ones versus bad ones, later (you’re probably not ready to judge anyway).
People argue about measurements. Don’t. Fight the urge.
You can use an existing risk measurement (FAIR, S2Score, etc.) or create one yourself. If you’re going to create your own risk measurement, here are some simple tips:
Make the measurement as objective as possible. Instead of open-ended inputs or subjective inputs, use binary ones. Binary inputs are things like true/false, yes/no, etc.
Use the measurement consistently. An inch is an inch, no matter where you apply it. A meter is a meter, no matter where you use it. For example, if a “true” answer to some criteria results in a vulnerability score of 5 today, it should be a 5 tomorrow too. Applying threats may change things, but the algorithm is still the same.
Make sure the criteria being measured are relevant. For instance, take the crime rate in a neighborhood. Is it relevant to information security risk? The answer is yes. Our definition of information security covers administrative, physical, and technical controls, and crime rates are relevant to physical security threats.
If you are new(er) to information security risk management, you may want to use a metric that’s already been defined by someone else. Again, caution against trying to find the perfect measurement. It’s like arguing whether an inch is a better measurement than a centimeter. Don’t get me started…
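The tips above (binary inputs, consistent scoring, relevant criteria) can be sketched in a few lines. The criteria names and weights here are hypothetical examples invented for illustration, not part of any real standard or the S2Score/FAIR methodologies:

```python
# Toy measurement built from binary (yes/no) criteria, per the tips above.
# Criteria and weights are hypothetical examples only.
CRITERIA = {
    "firewall_default_deny": 5,
    "mfa_enabled": 4,
    "asset_inventory_current": 3,
}

def vulnerability_score(answers: dict) -> int:
    """Sum the weights of criteria answered 'no' (i.e., a weakness).
    The same inputs always produce the same score -- consistency by design."""
    return sum(w for name, w in CRITERIA.items() if not answers.get(name, False))

score = vulnerability_score({"firewall_default_deny": True,
                             "mfa_enabled": False,
                             "asset_inventory_current": False})
assert score == 7  # mfa (4) + inventory (3); lower is better
```

Because every input is binary and every weight is fixed, two assessors answering the same questions get the same number, which is exactly the “an inch is an inch” property.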
Alright, so you did your information security risk assessment. Done?
Nope, you’re just getting going now. Before doing your risk assessment, you were risk ignorant. Now, you’re risk learned. Yay you!
What to do with all this risk?
Let’s say your organization scored a 409 on a scale of 300 (worst) – 850 (best), and you discovered several areas where the organization scored close to 300. There’s LOTS of room for improvement. Now you need to make decisions about what you’re going to do. To keep things simple, you only have four options:
Accept the risk as-is. The risk is acceptable to the organization and no additional work is required.
Transfer the risk. The risk is not acceptable, but it’s also not a risk your organization is going to mitigate or avoid. You can transfer the risk, often to a third-party through insurance or other means.
Mitigate the risk. The risk is not acceptable, and your organization has decided to do something about it. Risks are mitigated by reducing vulnerability (or weakness) or by reducing threats.
Avoid the risk. The risk is not acceptable, and your organization has decided to stop doing whatever activity led to the risk.
That’s it. No other choices. Risk ignorance is not a valid option.
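The four options can be captured directly in code; notice there is no fifth value for “ignore”. A small sketch (the names and the example risk are illustrative):

```python
from enum import Enum

class RiskDecision(Enum):
    """The only four valid decisions for an assessed risk."""
    ACCEPT = "accept"      # risk is acceptable as-is; no additional work
    TRANSFER = "transfer"  # shift to a third party (e.g., insurance)
    MITIGATE = "mitigate"  # reduce vulnerability (weakness) or threat
    AVOID = "avoid"        # stop the activity that created the risk

def record_decision(risk_name: str, decision: RiskDecision) -> str:
    """Every assessed risk gets exactly one of the four decisions."""
    return f"{risk_name}: {decision.value}"

print(record_decision("unpatched VPN appliance", RiskDecision.MITIGATE))
```

Forcing each assessed risk through a type like this makes the decision explicit and documentable, which is the whole point of risk decision-making.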
There you go! Now you have a start to the fundamentals of information security! The foundation.
Did you notice that I didn’t mention anything about security standards, models, frameworks, identification, authentication, etc.?
These are all fundamentals too, but first things first.
People don’t practice the fundamentals of information security.
We live in an easy button, instant gratification, shortcut world today. Information security is simple, but it’s definitely NOT easy. Good information security takes work, a lot of dirty (NOT sexy) work. What happens when you cut corners in laying a foundation? Bad things.
Hacking things. That’s a lot sexier than doing a risk assessment.
Blinky lights. These are a lot sexier than making formal risk decisions.
Cool buzzwords. So much sexier than the basics. The basics are boring!
Hacking, blinky lights and buzzwords all have their place, but not at the expense of fundamentals.
You have no excuse for not doing the fundamentals. Zero. The truth is, if you know the fundamentals and fail to do them, you’re negligent (or should be found as such). That reminds me, there are a few more fundamentals you should know about before we finish:
Roles & Responsibilities – Ultimately, the head of the organization (work and/or home) is the one responsible for information security; all of it. He/she may delegate certain things, but the buck always stops at the top of the food chain. Whatever’s delegated must be crystal clear, and documentation helps. We should always know who does what. (See: E is for Everyone).
Asset Management – You can’t secure what you don’t know you have. Assets are things of value; tangible (hardware) and intangible (software, data, people, etc.). Tangible asset management is the place to start, because it’s easier to understand. Once you’ve nailed down your tangible assets, go tackle your intangible ones.
Control (access, change, configuration, etc.) – You can’t secure what you can’t control. Administrative controls (the things we use to govern and influence people), physical controls, and technical controls.
Start with administrative controls; policies, standards, guidelines, and procedures. These are the rules for the game, and this is where standards like ISO 27002, COBIT, NIST SP 800-53, CIS Controls, etc. can help.
Access control; identity management and access management. Authentication plays here.
Configuration control; vulnerabilities love to live here (not just missing patches).
Change control; one crappy change can lead to complete vulnerability and compromise.
The last fundamental is the cycle. Cycle through risk assessment, risk decision-making, and action. The frequency of the cycle depends on you.
I’d rather over-simplify information security than over-complicate it. Simplification is always a friend, along with logic. Quick summary of the fundamentals of information security:
Fundamental #1 – Learn and work within the context of what information security is (risk management).
Fundamental #2 – Roles and responsibilities.
Fundamental #3 – Asset management.
Fundamental #4 – Administrative control.
Fundamental #5 – Other controls (several).
Honorable Mention for “F”
As was true in previous ABCs, I got some great suggestions. Here are some honorable mentions for “F”:
Fear Uncertainty & Doubt (FUD)
Federal Information Processing Standards (FIPS)
Federal Information Security Management Act (FISMA)
Federal Risk and Authorization Management Program (FedRAMP)
Federated Identity Management (FIM)
File Integrity Monitoring (FIM)
Fraud over Internet Protocol
Hope this helps you in your journey! Now on to “G”.
Information security ABCs – An exercise in the fundamentals and basics of information security for everyone.
Accountability – the state of being accountable, liable, or answerable.
This is where information security starts. If accountability were better understood, agreed upon, practiced, and enforced, we’d have much better information security.
Who’s ultimately responsible for information security in your organization?
This is a question I’ve asked hundreds of organizations over the years. You’d be surprised by the answers:
“I don’t know.”
“That’s a good question.”
“Well, I am (the CIO, CISO, etc.).”
“We all are.”
What’s the right answer? Simple, do this:
Grab an organization chart.
Find the person/people at the top of the chart.
This is the correct answer. Always.
Three questions then:
Does the person/people at the top know they’re ultimately responsible for information security?
If so, do they act like it (demand periodic status updates, champion the cause, plot direction, delegate effectively, etc.)?
If not, who’s responsible for telling them?
The sample organization chart above is semi-typical for a business. Let’s look at a city, county, and/or school district. Same thing applies, the person/people at the top is/are ultimately responsible.
If this ultimate accountability is missing or broken, then expect the information security program to be missing or broken. The lack of accountability at the top permeates through all other information security efforts.
Tip: Define ultimate responsibility for information security in your organization and document it in an information security charter.
There’s a saying, “information security is everyone’s responsibility.” This is sort of true, but sort of not true. It’s true that everyone has responsibilities in information security, but it’s not true that information security is everyone’s responsibility. Ultimately, information security is a responsibility that lies at the top. Only once this is realized can we effectively begin to define and communicate delegated and supporting responsibilities.
Don’t assume that people know what their responsibilities are. Once responsibilities are defined and agreed upon, we can start practicing/enforcing accountability.
In simplest terms, a CISO only has two responsibilities.
Consult on information security risk, enabling the business to make sound risk decisions.
Implement the business’ risk decisions in the best manner possible.
Both of these responsibilities are delegated from the top. In some cases, the top may delegate risk decisions to the CISO as well. This can work if the parameters are well-defined (and documented) and the CISO is empowered to do so.
NOTE: This approach is delegation only; it does not (and should not) absolve the top of their responsibility.
Honorable Mention for “A”
Asset (and asset management) – something that has value to a person or organization. Assets can be tangible (hardware, facility, etc.) or intangible (software, data, intellectual property, etc.).
Authentication – proof of an identity (subject or object). Three factors; something you know (password, PIN code, etc.), something you have (token, mobile phone, etc.), and something you are (biometric).
Access (Control) – what a subject can do with a system, file, object, etc.
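The three authentication factors above lend themselves to a quick sketch. The key idea: true multi-factor authentication requires proofs from at least two different categories, not two proofs of the same kind (a password plus a PIN is still single-factor). The proof names and mapping below are hypothetical examples.

```python
# Hypothetical sketch: MFA means at least two DIFFERENT factor categories.

FACTOR_CATEGORY = {
    "password": "something_you_know",
    "pin": "something_you_know",
    "hardware_token": "something_you_have",
    "phone_otp": "something_you_have",
    "fingerprint": "something_you_are",
}

def is_multi_factor(proofs):
    """True only if the proofs span two or more distinct factor categories."""
    categories = {FACTOR_CATEGORY[p] for p in proofs}
    return len(categories) >= 2

print(is_multi_factor(["password", "pin"]))        # False: both "know"
print(is_multi_factor(["password", "phone_otp"]))  # True: know + have
```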
Next up, “B”.
A is for Accountability – Evan Francen, October 5, 2020
We write our show notes either at the end of the week (Friday) or at the very beginning of the next (Sunday). It’s easier to remember the things that happened during the week on Friday than Sunday, that’s for sure! Only one day away (Saturday), and it’s easy to forget all that we did.
Are you feeling like things are slowly returning to normal? I am, and it’s great news! Personally, I don’t like the term “new normal”. I think I don’t like it because I feel like people have twisted it to serve their own desires and/or opinions without any factual basis. Normal is normal, and the greatest abnormality (in my opinion) has been our lack of in-person contact. We’ve been built, or wired, for analog personal interaction. Digital, online interaction will never substitute for it, and the longer we go without it, the more mentally unhealthy we become.
Last week was a great week! Four cool things stand out in particular:
Last week’s podcast was awesome! I love every opportunity to chat with Brad, and it’s a blessing to hang out every Monday morning. Recording episode 79 was a great way to kick things off last week. If you missed it, we talked about information security in K12, and you should go catch it.
We made great progress in helping state governments last week! Had a great conversation with Minnesota’s CISO, Rohit Tandon, on Wednesday as we discussed third-party information security risk management. This was followed by the scheduling of a similar meeting with the State of New Mexico and joining the National Association of State CIOs (NASCIO) Cybersecurity Committee on Thursday.
Chris Roberts, Ryan Cloutier, and I did Episode #1 of The Security Shit Show on Thursday night. It was a ton of fun hanging out with these guys! We’re planning to do our episodes/shows live every Thursday night at 10pm CDT, record them for future playback, and use the audio for our podcast. It’s definitely entertaining for our viewers/listeners and therapeutic for us. Be sure to tune in if you can!
The Daily inSANITY Check-ins are still going strong, and this past week was great! People supporting each other and helping where we can is what it’s all about. Come join us when you can.
There were many great things about last week, but these were the four that came to mind when I sat down to write these show notes.
Speaking of show notes, let’s get to it! Today we’re going to talk about Zero Trust; what it is, why it’s a hot topic today, and what you should be doing about it.
SHOW NOTES – Episode 80
Date: Monday, May 18th, 2020
Episode 80 Topics
Catching Up (as per usual)
Wrapping Up – Shout outs
[Evan] Hey everyone! Welcome to the UNSECURITY Podcast. This is episode 80, the date is May 18th, 2020, and I’m Evan Francen. With me today is my co-host, Brad Nigh. Good morning Brad!
[Brad] We’ll see what sort of mood Brad is in this morning…
[Evan] We’ve got a good show planned today! There’s this thing called “zero trust” that people are talking about, and I thought it’d be good for you and I to discuss it. Personally, I’ve received a lot of questions about it, and I’m sure you have too Brad. Like always, before we dig in, let’s catch up. What were some highlights for you from last week and how was your weekend?
Quick discussion about last week, last weekend, COVID-19, life, and other stuff.
[Evan] A simple Google search of Zero Trust turns up “About 691,000,000 results”. A Google search of “Zero Trust” (with quotes) turns up “About 1,940,000 results”. So, clearly there are a lot of people who know what it means, right? Here are some returns from the first page of search results:
Then there are a bunch of ads and “normal” search results with titles like “What is Zero Trust? A model for more effective security”, “What is Zero Trust?”, “Zero Trust Security | What’s a Zero Trust Network?”, etc.
The fact that there are so many “what is zero trust?” search returns might be a hint that people are confused. Let’s tackle this!
Zero Trust Discussion
Let’s try to clear some of the confusion:
What is Zero Trust?
Is it really new?
Is Zero Trust possible?
If I want Zero Trust, what do I need to do?
What common mistakes should I look out for?
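The core of Zero Trust is “never trust, always verify”: every request is authenticated and authorized on its own merits, and being “inside” the network grants nothing. A minimal sketch of that idea, with hypothetical policy entries and request fields (not any vendor’s actual API):

```python
# Illustrative Zero Trust check: identity, device posture, and explicit
# least-privilege policy are verified per request; network location is
# never consulted. All names and policies here are made up.

POLICY = {
    # (role, resource) pairs that are explicitly allowed; default is deny.
    ("engineer", "source-repo"): True,
    ("finance", "payroll-db"): True,
}

def authorize(request):
    # 1. Verify identity every time; no implicit trust from a prior session.
    if not request.get("token_valid"):
        return False
    # 2. Verify device posture; an unmanaged/non-compliant device is denied.
    if not request.get("device_compliant"):
        return False
    # 3. Least privilege: explicit allow, otherwise deny by default.
    return POLICY.get((request["role"], request["resource"]), False)

req = {"token_valid": True, "device_compliant": True,
       "role": "engineer", "resource": "payroll-db"}
print(authorize(req))  # False: an engineer has no explicit grant to payroll
```

Notice there’s no “is this request coming from the corporate LAN?” branch; dropping that assumption is the whole point, and it’s also why Zero Trust is less a product you buy than a set of controls (identity, device, least privilege) you already should have.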
[Evan] Alright. Good talk Brad. Thanks for sharing your insight! I think our listeners have a clearer picture of Zero Trust and what it means to them. If they have additional questions or comments, they can always contact us for more!
[Evan] News stuff! What the heck happened in the world last week? Let’s see…
I found four articles that caught my attention. Let’s talk about them!
[Evan] Never a shortage of things to talk about in this industry is there? Well, episode 80 of the UNSECURITY Podcast is just about a wrap. Brad, you have any shoutouts?
[Brad] Maybe he does, maybe he doesn’t…
[Evan] Here’s mine…
[Evan] Can’t say enough thanks to our listeners! Crazy how we run into you in all sorts of places. Stay safe and let us know how we can help you. Send things to us by email at firstname.lastname@example.org. If you’re the social type, socialize with us on Twitter, I’m @evanfrancen and Brad’s @BradNigh. Thinking about coming to hang out at the Daily inSANITY Check-in? You can follow this on Twitter too at @InSanityIn.
There you go, have a great week!
The UNSECURITY Podcast – Episode 80 Show Notes – Zero Trust – Evan Francen, May 17, 2020
That’s how many days have passed since we officially closed our (physical) offices at FRSecure and SecurityStudio. The date was March 16th, 2020, and it’s a common closure date for many organizations. It’s crazy, but I hardly remember the month of April or the first week and a half of May! I’ve either lost context, or I’m losing it in a big way. These are times like no other.
This thought about context got me thinking about how it applies to our work as information security professionals. I believe one of the biggest tells about good or bad information security leadership is the ability or inability to put risk into context. I think there’s a whole series of podcasts we could do on this topic focusing on how we can help people understand context better. The better we understand context, the better our information security decisions will be. Maybe we’ll start tackling this in a series of podcasts, starting with episode 80 next week.
This week, we’ve got a slightly different topic.
Today, in episode 79, we’re going to focus our attention on a recent report from the Consortium for School Networking (CoSN) titled “The State of Edtech Leadership in 2020“. There’s some really good information in this report, and kudos to CoSN for pulling it together!
Let’s just get to it, episode 79 show notes below…
SHOW NOTES – Episode 79
Date: Monday, May 11th, 2020
Episode 79 Topics
Catching Up (as per usual)
The State of Edtech Leadership in 2020
Wrapping Up – Shout outs
[Evan] Hey everyone! Welcome to the UNSECURITY Podcast. This is episode 79, the date is May 11th, 2020, and I’m Evan Francen. With me today is my co-host, Brad Nigh. Good morning Brad!
[Brad] Brad’ll say good morning I bet. He’s a super nice guy like that!
[Evan] We’ve got a good show planned today! You and I both love helping people, and I think we’re covering some things in this episode that should help all our listeners. Before we get too deep though, let’s catch up. It’s what we do! How you doing and what’s new Brad?
Quick discussion about COVID-19, life, and other stuff.
The State of Edtech Leadership in 2020
[Evan] Like you Brad, I get asked a lot for my opinion about this or that in information security. If the question I get is focused, it’s easier to provide a quick answer, but when a question is vague or open-ended, it takes much longer. This hit home for me this weekend when I was asked to chime in on this article; K-12 Tech Leaders Prioritize Cybersecurity, But Many Underestimate Risks, Survey Says. There’s a lot to unpack here, and a good opinion takes more time.
[Brad] He probably hasn’t read the article yet, but we’ll see…
[Evan] One thought that came to mind when I was asked for my opinion was the concept of context. Anything taken out of context can be made to look any way we want: good, bad, and/or anything in between. When I read the article, one statement stood out right away:
fewer than 20 percent marked any items on a list of cybersecurity threats as “high-risk” from their perspective
[Evan] What caught my attention were the words “from their perspective”. Questions popped into my head. How do Edtech leaders define “cybersecurity”? What’s on their list of “cybersecurity threats”? What’s “high-risk”? This is a can of worms.
The following are key quotes directly from the CoSN report.
Cybersecurity remains the number one technology priority for IT Leaders, yet the threat is generally underestimated.
For the third straight year, cybersecurity has ranked as the top priority. When it comes to maintaining network security, 69% of districts say they are proactive or very proactive – up significantly over last year’s 52%. Districts employ a variety of strategies to minimize risk, including the vast majority in which IT staff training is a top practice and a majority requiring teachers and principals to receive training as well. Despite concerns, the survey also found that less than a fifth of respondents (18%) have a dedicated full-time employee (FTE) whose sole job is cybersecurity. IT Leaders feel phishing scams pose the greatest risk to network security, with almost half (49%) rating them medium/high risk to high risk. Despite this, results also showed an overall trend to underestimate risk—less than a fifth of respondents considered any specific threat as high risk. This runs counter to the reality that school systems are being specifically targeted by cybercriminals with reported cyber incidents tripling in one year.
Artificial Intelligence (AI) holds both promise and peril for IT Leaders.
The majority (55%) of IT Leaders anticipate that of the emerging technologies, AI will play a significant or transformational role in teaching and learning over the next five years. However, AI also poses concerns, with privacy being the biggest. Before AI becomes adopted at scale and can deliver on its promise, privacy issues will need to be addressed.
The top three challenges persist: budget, professional development, and department silos.
These three areas have been vexing IT Leaders since 2017. While budget is often beyond district control and directly affects professional development, it is within districts’ abilities to address the existence of silos. As outlined in CoSN’s “Digital Leap Success Matrix,” cross-functional executive team leadership is integral to the development of a successful digital learning environment. Until the executive leadership breaks down the silos, IT Leaders will continue to face difficulty in achieving their district’s own technology goals.
Other items from the report
Districts without a dedicated person on staff use a variety of methods to monitor network security. The most common approach is sharing the responsibility across several jobs (46%) followed by incorporating network security monitoring as part of another job (30%). Outsourcing is used by 11% of respondents. A concerning 10% of respondents have an ad hoc approach and do not have anyone assigned to monitoring their district’s network security. A makeshift approach to addressing cybersecurity is one reason why “school districts are proving to be particularly enticing to hackers.”
When it comes to maintaining network security, 69% of districts say they are proactive or very proactive. This represents a significant increase over the prior year’s 52%. Only 13% describe their activity as reactive or very reactive, a decrease from 23% the prior year. These year-over-year results indicate that districts are highly aware of increased network attacks in K-12 environments and are increasing efforts to thwart them. It is likely that lack of resources, not lack of awareness, is responsible for the 13% described as reactive/very reactive. As one respondent lamented: How is our small district able to fend off a multitude of possible cyber threats with the staff we have?
When asked to rate their perception of various risks to network security, respondents did not make significant distinctions between threat types. The largest segment fell into the Medium risk range—low/medium, medium, high/medium. With 49% rating it medium/high risk or high risk, phishing was deemed the greatest risk. It is surprising more did not consider it a greater risk. Phishing attacks have reached the “highest level in three years” with more than two-thirds of all phishing sites using SSL protection. With SSL decreasing as a reliable indicator of security, risks increase for users unable to spot phishing sites. Less than a third (31%) of respondents perceive ransomware attacks as medium/high risk or high risk. This risk level assessment is also likely lower than it should be as the FBI is reporting ransomware schemes are being specifically designed to target public schools. With less than a fifth of respondents rating any threat as high risk (phishing received the most with 16%), threats overall appear underrated. Only 5% assessed student data to be at high risk, yet, according to the most recent data on reported K-12 cybersecurity incidents, “the most frequently experienced type of school-related cyber incident…were data breaches, primarily involving the unauthorized disclosure of student data.” With the number of reported K-12 cybersecurity incidents rising—nearly triple from 2018 to 2019—perceptions of risk should start to realign more closely with reality.
[Evan] No doubt, we have a lot of work to do in K-12. It’s our obligation to do everything we can to help. Check out SecurityStudio’s free resources and do a holistic information security risk assessment like S2School, which we developed earlier this year. Put information security risk into perspective and make much better choices.
[Evan] Alright. Good talk. Thanks Brad! Before we wrap this up, here are a couple of news stories that caught my attention:
[Evan] Sheesh! Lots of stuff. Well, that’s it for episode 79. Brad, you have any shoutouts?
[Brad] Maybe he does, maybe he doesn’t…
[Evan] Here’s mine…
[Evan] Seriously, a huge thank you to our listeners! We love your encouragement and we don’t take your advice lightly. You’re all great! Keep the questions and feedback coming. Send things to us by email at email@example.com. If you’re the social type, socialize with us on Twitter, I’m @evanfrancen and Brad’s @BradNigh.