I’ve spent more than 30 years in information security, and here’s a truth that still makes people uncomfortable:
Information security isn’t really about information or security. It’s about people.
But not just one kind of people.
Every security incident—every breach, outage, or “technical failure”—involves three distinct groups of people, whether we acknowledge it or not:
- The people who make decisions — those who design systems, write code, configure environments, approve risk, and adopt technology
- The people closest to the incident — the ones who clicked the link, reused the password, opened the file, or trusted the wrong thing
- The people who suffer — customers, patients, citizens, families, and communities who live with the consequences
Security failures don’t make sense unless you understand all three.
People Create Risk
Every incident we label as “technical” can be traced back to human decisions.
- Misconfigurations don’t configure themselves
- Vulnerabilities don’t magically appear in code
- Irresponsible technology adoption doesn’t come out of thin air
- Bad security decisions don’t make themselves
People design systems.
People build them.
People deploy them.
People maintain them.
People ignore the warnings.
Even “automated” failures are just deferred human decisions.
Risk isn’t created by technology.
Technology just expresses the choices we’ve already made.
People Suffer the Consequences
This is the part the industry is most uncomfortable talking about.
If security failures didn’t harm real people—financially, emotionally, physically—information security would be a niche engineering discipline. Interesting, maybe even elegant, but optional.
That’s not the world we live in.
- Someone drains your bank account
- Your identity gets stolen and follows you for years
- A small business shuts down
- A hospital system goes offline
- A patient doesn’t get care when they need it
- You pay higher taxes to cover the ransomware payment
- A family inherits a problem they never agreed to take on
The people who suffer usually:
- Didn’t design the system
- Didn’t approve the architecture
- Didn’t accept the risk
- Didn’t benefit from the tradeoffs
And yet, they pay the price.
That’s why security matters.
Not because of compliance.
Not because of frameworks.
Not because of tools.
Security matters because people matter.
This Isn’t Theoretical
Too many times, I’ve sat across the table from people after an information security incident.
I’m not just talking about executives.
Or architects.
Or cybersecurity “experts.”
I mean “regular” people.
I’ve watched someone realize—in real time—that their life just got harder because of a decision they had no part in making.
They didn’t care about root cause analysis.
They didn’t care what framework was followed.
They didn’t care how well-documented the policy was.
They didn’t care about your SOC 2 report or your ISO certification.
They cared that your problem just became their problem.
That’s when security stops being abstract.
The Industry’s Favorite Lie: “People Are the Weakest Link”
This phrase has been around for decades, and I hate it more every year.
When we say “people are the weakest link,” we’re usually pointing at the person closest to the incident—the one who clicked the link, chose a weak password, or opened the infected file.
Sometimes that person is the victim—like a home user who gets scammed.
But in organizations, they often aren’t.
In those cases, the person who made the mistake might face consequences:
- Embarrassment
- Discipline
- Mandatory training
- Even job loss
But they’re rarely the ones who suffer the most.
That burden falls on:
- Customers
- Patients
- Citizens
- Partners
- Anyone who trusted the organization to protect them
The person who made the mistake is visible.
The people who suffer are usually invisible.
The “weakest link” is rarely the one who pays the highest price.
When we reduce incidents to “human error,” what we’re really doing is:
- Fixating on the last action instead of the full chain of decisions
- Blaming the easiest target instead of examining the system
- Ignoring incentives, pressure, and context
- Avoiding accountability for decisions made higher up the food chain
Designing systems that assume people won’t make mistakes is unrealistic.
Designing systems that punish individuals who had no say in the risk decision(s) is unethical.
Security Fails When Leadership Fails
Most security incidents aren’t caused by a lack of tools. Some are caused by a lack of talent. But all security incidents are caused by decisions, often made far away from the people who live with the consequences.
- Leaders who didn’t understand the risk but approved it anyway (risk ignorance is risk acceptance by default, by the way)
- Organizations that adopted technology faster than they could use it responsibly
- Incentives that rewarded growth, speed, and profit over safety
These are not technical problems.
These are human and leadership problems.
And until we’re willing to confront this honestly, we’ll keep doing the same things and expecting different results.
This Is a Core Truth of UNSECURITY 2.0
This idea will be woven throughout UNSECURITY 2.0, because it sits at the heart of what’s broken in our industry.
We don’t fix security by:
- Buying more tools
- Writing longer policies
- Blaming end users
- Chasing the next framework
We fix security by:
- Being honest about who makes decisions
- Understanding who suffers when those decisions fail
- Taking responsibility for risk we accept on others’ behalf
- Designing systems that respect human limitations
- Remembering why this work matters in the first place
Information security is about people.
Just not all the same people—
and not all in the same way.
Everything else is just implementation detail.