A Thought About STRIDE and Reality
Mar 2026
Threat modeling is the practice of trying to understand how a system can be attacked before an attack happens. You map out how the system works, what it connects to, what data it handles, and where something could go wrong. STRIDE is a common framework used for this, grouping threats into categories like spoofing, tampering, and elevation of privilege, often mapped against data flows and system components to bring structure to the analysis.
STRIDE is a solid framework. It gives structure, a checklist, and a way to think through systems. That works well when systems behave in predictable ways and when you can clearly define boundaries, inputs, and outcomes.
The problem is that real attackers do not think in categories. They think in paths, goals, and outcomes. STRIDE pushes you to analyze threats one box at a time, one data flow at a time, while attackers chain small things together into something bigger. Modern attacks are not cleanly "tampering" or "information disclosure". They are sequences that move across systems, contexts, and time. STRIDE tends to break those into pieces, and in doing so, it can miss how they actually play out in reality.
It also assumes a level of determinism that no longer exists, or is quickly eroding. Traditional threat modeling expects that if you understand the system, you can reason about its behavior. That assumption starts to break with humans, and now even more with AI. Humans do unexpected things. They click, trust, ignore, and work around controls. AI systems do something similar. They respond to inputs in ways that are not always predictable, and small changes in input can lead to very different outcomes. That kind of behavior does not map cleanly to fixed categories or static diagrams.
The output is also a snapshot. You model a system at a point in time, based on how you think it works. But environments change fast, especially now. New integrations, new data flows, new behaviors. The model becomes outdated quickly, and when that happens, it creates a false sense of security because you think you have covered the system.
Where this really breaks down is with non-deterministic actors. STRIDE is good at modeling what a system can do when it is used or misused in expected ways. It struggles when behavior itself becomes the attack surface. Social engineering, prompt injection, influence, goal hijacking. These are not clean violations of a single category. They are all about steering behavior over time. In AI systems especially, attacks can look like normal usage, just slightly nudged in the wrong direction, and then compounded across steps.
So the issue is not that STRIDE is wrong. It is that it is rigid. It is built for a cleaner, more predictable world than the one we are operating in. It can guide thinking, but if you rely on it too heavily, you start modeling diagrams instead of reality. And reality includes messy behavior, chained attacks, and actors that do not follow your assumptions.
That is why a survivability mindset matters more. Instead of trying to perfectly model every possible threat category, you focus on what happens when things go wrong. How bad it gets and how long it lasts. Because with humans and AI in the loop, something will always go wrong. The question is whether your model prepared you for that or just made you feel like it did.
Just a thought...
A more effective way to approach threat modeling, I think, would be to stop treating it like a diagramming exercise and start treating it like a living risk conversation tied to how systems actually fail.
Start with outcomes, not categories. What actually hurts the business if it goes wrong. Data exposure, loss of control, operational disruption, irreversible actions. Anchor everything there. Then work backwards through the system and ask how those outcomes could happen, not which STRIDE box they fit into.
Shift from static models to continuous assessment. Systems change, behaviors change, integrations change. The model should move with them. Instead of building a perfect snapshot, keep a lightweight, evolving view of where risk is increasing or decreasing. This is closer to how attackers operate. They probe, adapt, and chain opportunities over time.

Focus on paths, not points. Real attacks are sequences. A small misconfiguration, a weak assumption, an over-permissioned component, all chained together. Model those paths. How does something low impact become high impact over a few steps. Where are the pivots. Where does context change. That is where risk actually lives.
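One way to make "paths, not points" concrete is to model the system as a small graph and enumerate the chains that connect an entry point to a damaging outcome. A minimal sketch, where every component name and edge is a hypothetical example, not a real system:

```python
# Hypothetical system graph: each edge is a single small weakness,
# low impact on its own, dangerous only when chained.
edges = {
    "internet": ["webhook_endpoint"],
    "webhook_endpoint": ["ci_runner"],       # weak input validation
    "ci_runner": ["artifact_store"],         # over-permissioned token
    "artifact_store": ["prod_deploy"],       # no artifact signing
    "prod_deploy": [],
}

def attack_paths(graph, start, target, path=None):
    """Enumerate every chain of steps from start to target."""
    path = (path or []) + [start]
    if start == target:
        yield path
        return
    for nxt in graph.get(start, []):
        if nxt not in path:                  # avoid cycles
            yield from attack_paths(graph, nxt, target, path)

for p in attack_paths(edges, "internet", "prod_deploy"):
    print(" -> ".join(p))
```

Even a toy version like this shifts the review question from "which category does this bug belong to" to "which chains reach the outcome we care about, and which single edge, if cut, breaks the most chains."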
Account for behavior as part of the system. Humans and AI agents are not edge cases, they are core components now. Model how they can be influenced, how they make decisions, and how they might deviate from expectations. Treat inputs as adversarial, even when they look normal. Assume that behavior can be steered over time.
Use the three levers to keep it grounded. Susceptibility, damage, and recovery time. How easy is it to influence or break something. If it breaks, how far does it spread. If it spreads, how long until you contain it. This keeps the model focused on outcomes instead of theory.
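The three levers stay grounded even with a crude scoring sketch. Nothing below is a standard formula; the 1-5 scales, the example scenarios, and the multiplication are illustrative assumptions, useful only for ranking scenarios against each other, not for producing an absolute risk number:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    susceptibility: int   # 1-5: how easy is it to influence or break
    damage: int           # 1-5: if it breaks, how far does it spread
    recovery_days: float  # if it spreads, how long until contained

    def priority(self) -> float:
        # Illustrative choice: multiply the levers so that any one
        # extreme lever dominates the ranking.
        return self.susceptibility * self.damage * self.recovery_days

# Hypothetical scenarios for comparison.
scenarios = [
    Scenario("prompt injection in support bot", 5, 3, 2.0),
    Scenario("leaked CI token", 3, 5, 0.5),
]

for s in sorted(scenarios, key=Scenario.priority, reverse=True):
    print(f"{s.name}: {s.priority():.1f}")
```

The value is not the number itself but the forced conversation: a team has to commit to an estimate for each lever, and disagreements about the estimates surface hidden assumptions about the system.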
Design controls around failure, not perfection. You are not going to predict every path or prevent every issue. Build controls that limit blast radius, detect deviation early, and allow fast interruption and recovery. Isolation, least privilege, runtime monitoring, and the ability to stop or roll back actions matter more than perfect prevention.
Keep it simple enough to use. If the model takes weeks to build and is outdated the moment it is done, it is not useful. The goal is fast, repeatable thinking that can be applied as systems evolve.
This approach is less about labeling threats and more about understanding how things actually break. It accepts that systems are messy, actors are unpredictable, and change is constant. That is closer to reality, and it leads to decisions that hold up when something goes wrong.