Most threat modelling frameworks were designed in a world where software shipped quarterly. Today's teams ship daily. The result: threat models that take two weeks to produce, become outdated before they're reviewed, and gather dust in a Confluence page nobody reads.
STRIDE is still the right mental model. But the way most organisations apply it is incompatible with modern engineering velocity. Here's how to make it work.
STRIDE is a mnemonic — Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege — designed to help you think systematically about threats to a system. It's not a checklist to complete. It's a lens to apply.
The value isn't in filling out a STRIDE table for every component. It's in using STRIDE categories to ask better questions during design: Who could impersonate this service? What happens if this queue message is tampered with? Can an attacker deny having made this request?
| Category | Threat | Microservices Example | Mitigation |
|---|---|---|---|
| Spoofing | Impersonating another service or user | Service A calls Service B without mTLS — attacker on the network spoofs Service A | mTLS between services, JWT validation, service identity |
| Tampering | Modifying data in transit or at rest | Event on message queue modified by attacker with queue write access | Message signing, integrity checks, queue access control |
| Repudiation | Denying an action occurred | Service performs privileged action with no audit log | Immutable audit logging, correlation IDs across services |
| Info Disclosure | Exposing data to unauthorised parties | Error response includes stack trace with internal service URLs | Sanitised error responses, secrets management, field-level encryption |
| DoS | Making a service unavailable | Downstream service called without timeout — one slow dependency brings down the chain | Timeouts, circuit breakers, rate limiting, bulkheads |
| EoP | Gaining higher privileges than granted | Service token with broad IAM permissions used beyond its intended scope | Principle of least privilege, scoped tokens, regular IAM review |
The practical failure of most threat modelling programmes is the assumption that a threat model is a document you produce once. In microservices, threat modelling needs to happen continuously — at the feature level, not the system level.
Here's what this looks like in practice: when a squad picks up a feature that touches a trust boundary, they draw the data flow for that feature, spend ten minutes walking the STRIDE categories over it, and capture what they find.
The output isn't a document. It's 3–6 security stories added to the sprint backlog. This keeps threat modelling proportionate to feature complexity and means findings actually get addressed.
The most common gap I find in microservices assessments: internal service calls that carry no authentication. Teams often reason that because the services are "inside the perimeter," they don't need auth. This is the assumption that makes lateral movement trivially easy.
Every service-to-service call should carry a verifiable identity. mTLS is the gold standard. Short-lived JWTs with service-scoped claims are a practical alternative. What's not acceptable: shared API keys that never rotate, or no authentication at all.
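To make "short-lived JWTs with service-scoped claims" concrete, here is a stdlib-only sketch of issuing and verifying a scoped service token. The shared secret, claim names, and helper functions are illustrative; a real deployment should use a standard JWT library with asymmetric keys and rotation rather than hand-rolled tokens:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret for illustration only; prefer asymmetric
# keys (or mTLS identity) with rotation in a real system.
SECRET = b"example-shared-secret"

def issue_token(service_name: str, scope: str, ttl: int = 60) -> str:
    """Issue a short-lived token bound to one service and one scope."""
    claims = {"sub": service_name, "scope": scope,
              "exp": int(time.time()) + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str, required_scope: str) -> dict:
    """Check signature, expiry, and scope; raise on any mismatch."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    if claims["scope"] != required_scope:
        raise PermissionError("scope mismatch")
    return claims
```

The point of the scope check is the EoP row of the table: even a valid, unexpired token from a legitimate service is rejected when it's used beyond its intended purpose.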
Message queues introduce a threat that synchronous APIs don't have: a persistent, potentially accessible store of messages between services. If an attacker can write to a queue — through a compromised service, misconfigured IAM, or direct queue access — they can inject malicious events that get processed as legitimate business logic.
For high-value event flows, sign messages at the producer and verify at the consumer. At minimum, implement strict queue access control and treat queue messages with the same scepticism as HTTP requests from external users.
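A producer-side signature can be as simple as an HMAC over the canonical form of the event, carried alongside it in an envelope. The key, envelope shape, and function names below are illustrative assumptions; per-producer keys with rotation would be stronger in practice:

```python
import hashlib
import hmac
import json

# Hypothetical key shared between producer and consumer (illustrative).
SIGNING_KEY = b"queue-signing-key"

def publish(event: dict) -> dict:
    """Producer side: wrap the event with an HMAC over its canonical
    JSON form (sorted keys, so both sides serialise identically)."""
    payload = json.dumps(event, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"event": event, "sig": sig}

def consume(envelope: dict) -> dict:
    """Consumer side: recompute the HMAC and reject tampered messages
    before any business logic runs."""
    payload = json.dumps(envelope["event"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(envelope["sig"], expected):
        raise ValueError("signature mismatch: possible tampering")
    return envelope["event"]
```

Note the verification happens before the event reaches business logic, which is exactly the "treat queue messages like external HTTP requests" posture described above.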
In a monolith, audit logging is straightforward. In a microservices architecture, a single business transaction spans multiple services, often with no shared correlation ID. When something goes wrong — or when a regulator asks what happened — reconstructing the event timeline is painful or impossible.
Build correlation IDs in from the start. Every request entering your system should carry a traceable ID that propagates through every service call and gets written to every audit log. This is both a security control and a debugging superpower.
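The propagation rule above fits in a few lines. This sketch assumes a header name and log format of my choosing; W3C Trace Context (`traceparent`) is the standardised equivalent if you want interoperability with tracing tools:

```python
import uuid

CORRELATION_HEADER = "X-Correlation-ID"  # assumed header name

def ensure_correlation_id(headers: dict) -> dict:
    """At the system edge: attach a correlation ID if the inbound
    request lacks one. Downstream services call this too, but since
    the header is already present they propagate the same value."""
    headers = dict(headers)  # don't mutate the caller's dict
    headers.setdefault(CORRELATION_HEADER, str(uuid.uuid4()))
    return headers

def audit_line(headers: dict, action: str) -> str:
    """Every audit log entry carries the correlation ID, so one
    business transaction can be reconstructed across services."""
    return f'correlation_id={headers[CORRELATION_HEADER]} action="{action}"'
```

The `setdefault` is the whole trick: generate once at the edge, never overwrite in transit, and the ID survives every hop.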
Not every feature needs a 30-minute threat model. Some need more. The usual triggers for a more thorough exercise: a new trust boundary, changes to authentication or authorisation, handling of sensitive or regulated data, and integrations with third parties.
Threat modelling programmes fail when they're owned by the security team rather than by engineering. The security team's job is to teach the process, facilitate the high-complexity sessions, and review the outputs — not to run every threat model themselves.
Run a one-hour STRIDE workshop with each squad using one of their own recent features as the example. The abstract becomes concrete instantly. Teams that have done this once apply STRIDE naturally in their design discussions without being asked.
STRIDE works for microservices. What doesn't work is applying a process designed for quarterly shipping cycles to a team that ships daily. Make threat modelling small, fast, and integrated into the sprint — and it becomes a habit rather than a ceremony.
Start with one feature next sprint. Draw the data flow, spend ten minutes with STRIDE, capture three security stories. That's a threat modelling programme.