In a previous blog post, I referenced some research on how people plan for, or rather how they fail to plan for, natural disasters like floods. At the end of the blog post I mentioned that people who have poor mental models about disasters fail to prepare fully. I keep coming back to the idea of mental models because it starts to explain why we have such a gap between security practitioners and senior executives.
I asked one CISO how he talks to other executives and the company's board about a recent breach in the news. He told me that the CEO doesn't have much time for security, so he uses a shorthand. He talks to the CEO in analogies. He explains that they've already put metaphorical locks on the front door, but to be sure that they don't make the same mistakes as the latest company in the news, they'll need to put locks on the back door.
This approach isn't uncommon, but it has a few flaws. First, it doesn't take much time to show that this analogy doesn't work well. The way attacks work today, the attackers will not be prevented from breaking into this metaphorical house. Instead, they'll get a ladder from the garage and climb in through the upstairs bedroom window. Of course, you can put more locks on those windows, but again, the attackers are going to find a way in if your security strategy is based solely on locks (prevention). In this analogy, where are the other defender activities like identification, detection, response, and recovery?
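To see how narrow the "locks" lens is, here's a minimal sketch that uses the five NIST Cybersecurity Framework functions as the yardstick for a program where all the investment went into prevention. The control names are hypothetical; the point is the empty rows:

```python
# A toy coverage map: prevention ("locks") lands in exactly one of the five
# NIST CSF functions, leaving the rest of the defender's job unaddressed.
# Control names are hypothetical examples, not recommendations.
program = {
    "Identify": [],                              # asset inventory? risk register?
    "Protect":  ["front door lock", "back door lock", "window locks"],
    "Detect":   [],                              # who notices the ladder?
    "Respond":  [],                              # what happens after they're in?
    "Recover":  [],                              # how do we restore operations?
}

for function, controls in program.items():
    print(f"{function:<9} {', '.join(controls) if controls else 'NO COVERAGE'}")
```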
The second reason the lock analogy fails is that it tends to create a problem/solution dynamic: if it's a bug, go fix it. But again, that's not how the attackers work. In other domains this approach can work. For example, if your website is experiencing performance problems, you can assign an appropriate engineer to fix the problem. After some analysis, she'll come back with recommendations. Maybe she'll propose buying more machines/instances, or maybe there's a bottleneck in the code that can be refactored given the new website load patterns. But in general, she'll be able to fix the problem, and it will stay fixed. That's not how security works. When the defenders make a change that improves security, the attackers get to decide whether the cost of the attack is worth continuing. Or perhaps they're already so far into the network that the improved security doesn't affect them. In many cases, they'll modify their approach or tools to get past these changes, and the security improvements will be little more than a short-lived setback.
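To put rough numbers on that dynamic, here's a toy sketch with entirely made-up figures. Unlike a performance fix, each security improvement only raises the attacker's cost, and the attacker then decides whether to adapt or walk away:

```python
# All numbers are invented for illustration. A rational attacker keeps going
# while the expected payoff exceeds the cost of the attack.
def attacker_continues(payoff: float, cost: float) -> bool:
    return payoff > cost

payoff = 500_000   # hypothetical value of your data to the attacker
cost = 20_000      # hypothetical cost of the current attack path

for improvement in ["patch exposed service", "require MFA", "segment network"]:
    cost *= 3      # assume each improvement triples the attacker's cost
    verdict = "adapts and continues" if attacker_continues(payoff, cost) else "walks away"
    print(f"after '{improvement}': attack cost ${cost:,} -> attacker {verdict}")
```

Until the last line of output, every one of those improvements is exactly the short-lived setback described above.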
If you are an executive who views security decisions through the “problem/solution” lens, you'll be tempted to offer the security team budget or headcount to “fix” the problem. Someone presented you with a problem, and you gave them a solution. Implicit in this transaction is a shift of the responsibility and accountability back to the security team. They asked for money for more locks, and you gave it to them. If there is a breach, the security team will be accountable, not you.
The metaphor of locks on doors isn't the only one you've heard. Others include outrunning the next guy rather than the bear, hard crunchy exterior/soft chewy interior, seat belts, guard rails, airbags, and so on. Bruce Schneier has also written about the problems with metaphors:
“It's an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber's failed attack in 2009. The problem is that connecting the dots is a bad metaphor, and focusing on it makes us more likely to implement useless reforms.”
Trying to communicate using the wrong mental models leads to real problems for security practitioners and the data they are trying to protect. So what are the right mental models?
The single biggest improvement in your mental models you can make is to understand that you are up against dedicated, human adversaries.
Until defenders, executives, and stakeholders in an organization internalize this fact, we will continue to see them miscommunicate, and then plan and execute poorly. The result will be security by tool rather than security by strategy. And that will lead to more breaches in the news (and many more that never make the news).
The key words to ponder are “dedicated” and “human”. In some cases, the attackers have a job, and they are being paid to attack you. Or maybe they feel a moral purpose in attacking you. Some work alone, some in teams with different specializations. But they are dedicated. And of course we know that they are human. But that has implications that most executives (and many security teams) haven't considered. It means they read about your latest acquisition and begin to probe the target company as a way into yours. They can correlate previous breach data with your employee roster to find a likely candidate for a spear phishing attack. They look for your technical debt. They find systems orphaned by a recent reorg or layoff. Humans can be creative, patient, and insightful.
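To see how little effort that correlation takes, here's a minimal sketch. The file names and formats are hypothetical; the technique is nothing more than a set intersection between a leaked credential dump and a scraped employee directory:

```python
# Hypothetical inputs: one email address per line in each file.
def load_emails(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if "@" in line}

breached  = load_emails("breach_dump.txt")         # hypothetical public breach data
employees = load_emails("employee_directory.txt")  # hypothetical scraped directory

# Employees whose credentials already appear in a breach: likely
# spear-phishing (or credential-stuffing) candidates.
for email in sorted(breached & employees):
    print(f"candidate: {email}")
```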
As an aside, all of this makes security unlike any other part of your organization. No other part of your organization has the sort of dedicated, human adversaries who seek to benefit from the value of your data the way security attackers do. What about the legal team, you may ask? Don't they have dedicated and human adversaries? Yup. But let's walk through the steps in a legal “attack”. First, the adversary notifies you that you are under attack. While there have been some high-profile announcements that a company's networks and systems were under attack, such notice is rare. As a reminder, the average time between intrusion and detection is measured in months and quarters. During that time, the attack takes place without anyone knowing. Next, both the attacker and defender play by roughly the same rules, and those rules are enforced by a neutral referee who decides whether both sides are abiding by them. You get the idea. The legal analogy isn't even close to what infosec defenders deal with.
There's a common saying in the CISO world that “security practitioners need to learn to speak the language of the business”. That's absolutely true. There's no doubt in my mind. We need to continue to learn how the business works, and we need to get better at saying “yes” while at the same time reducing risk. But that work is necessary, not sufficient, for closing the gap between security people and senior decision makers. The other major factor will be those senior decision makers breaking free of simplistic metaphors and faulty mental models. It's never really been a communication gap. It's been a mental model gap. Without shared mental models, communication will always be faulty.
Getting all levels of an organization aligned on the right mental models is clearly not an easy task. What works in one organization won't necessarily work in another. Not all stakeholders understand the importance of spending time to learn how attacks work. However, I would propose a few things. If you are a security practitioner, don't shy away from teaching others how attacks work. You should be looking at your security program through the lens of a kill-chain or attacker lifecycle model (see the sketch below). When you present, teach people how you think. Explain why this next budget request will address a specific concern, but that others remain. Explain what you think your adversaries will do next. Resist the temptation to reduce those complex dynamics down to locks on doors. Focus your conversations on models, not metaphors. That's true for all your communications, reports, quarterly plans, and elevator chats.
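As one concrete way to apply that lens, here's a minimal sketch using the classic Lockheed Martin kill-chain phases. The mapped controls are hypothetical; the exercise is to make the gaps, and therefore the next budget conversation, visible:

```python
# Walk the kill chain and show which phases the (hypothetical) program covers.
KILL_CHAIN = ["Reconnaissance", "Weaponization", "Delivery", "Exploitation",
              "Installation", "Command & Control", "Actions on Objectives"]

controls = {
    "Delivery":          ["email filtering"],
    "Exploitation":      ["patching", "endpoint detection"],
    "Command & Control": ["egress filtering"],
}

for phase in KILL_CHAIN:
    have = controls.get(phase, [])
    print(f"{phase:<22} {', '.join(have) if have else 'GAP -- a future budget ask?'}")
```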
If you are a senior decision maker and don't come from a security or intelligence background, you may find it challenging and time consuming to learn to think more like an attacker. Resist the urge to say “I don't need to be a subject matter expert in security; that's why I have a security team”. While that statement is true, saying it prevents you from learning just enough to make smart decisions. You have already become expert enough in numerous other domains. Security and privacy awareness will be critical skills for your success in the coming years. Think ahead to the inevitable (yes, inevitable!) breach, where outsiders will hold you accountable in potentially unexpected ways. Assess your organization's security culture as it actually is, rather than as you hope it to be. And make sure your actions match your words.
Have a story for me about mental models gone wrong? Drop me a line on Twitter at @boblord. If you want to tell me confidentially, send me a DM! My Twitter settings allow you to DM me even if I don't follow you.