Threat Modelling · 8 min read · 18 September 2024

STRIDE in Practice: Threat Modelling Microservices That Actually Ship

Tags: Threat Modelling, STRIDE, Microservices, AppSec

Most threat modelling frameworks were designed in a world where software shipped quarterly. Today's teams ship daily. The result: threat models that take two weeks to produce, become outdated before they're reviewed, and gather dust in a Confluence page nobody reads.

STRIDE is still the right mental model. But the way most organisations apply it is incompatible with modern engineering velocity. Here's how to make it work.

What STRIDE Is Actually For

STRIDE is a mnemonic — Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege — designed to help you think systematically about threats to a system. It's not a checklist to complete. It's a lens to apply.

The value isn't in filling out a STRIDE table for every component. It's in using STRIDE categories to ask better questions during design: who could impersonate this service? What happens if this queue message is tampered with? Can an attacker deny having made this request?

The STRIDE Reference Table

| Category | Threat | Microservices Example | Mitigation |
| --- | --- | --- | --- |
| Spoofing | Impersonating another service or user | Service A calls Service B without mTLS; an attacker on the network spoofs Service A | mTLS between services, JWT validation, service identity |
| Tampering | Modifying data in transit or at rest | Event on a message queue modified by an attacker with queue write access | Message signing, integrity checks, queue access control |
| Repudiation | Denying an action occurred | Service performs a privileged action with no audit log | Immutable audit logging, correlation IDs across services |
| Information Disclosure | Exposing data to unauthorised parties | Error response includes a stack trace with internal service URLs | Sanitised error responses, secrets management, field-level encryption |
| Denial of Service | Making a service unavailable | Downstream service called without a timeout; one slow dependency brings down the chain | Timeouts, circuit breakers, rate limiting, bulkheads |
| Elevation of Privilege | Gaining higher privileges than granted | Service token with broad IAM permissions used beyond its intended scope | Principle of least privilege, scoped tokens, regular IAM review |
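The Denial of Service row is the one most often left to chance in code review. As a rough sketch of the circuit-breaker half of that mitigation (illustrative only; the class and threshold values here are invented, and in production you would typically reach for a service-mesh policy or a resilience library rather than rolling your own):

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: opens after `threshold` consecutive
    failures, fails fast for `cooldown` seconds, then allows one
    trial call (half-open) before closing again."""

    def __init__(self, threshold=5, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                # Fail fast instead of queueing behind a dead dependency.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

Pair this with a hard per-call timeout on the underlying HTTP client; the breaker only helps once calls are actually failing or timing out.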

The Sprint-Compatible Threat Model

The practical failure of most threat modelling programmes is the assumption that a threat model is a document you produce once. In microservices, threat modelling needs to happen continuously — at the feature level, not the system level.

Here's what this looks like in practice:

The 30-Minute Feature Threat Model
  • 10 min — Draw the data flow: On a whiteboard or Miro, map who calls what, where data enters and exits, and what's trusted vs untrusted
  • 10 min — Apply STRIDE to trust boundaries: For each place data crosses a trust boundary, run through S/T/R/I/D/E with the team
  • 10 min — Capture findings as user stories: Each identified threat becomes a security story in the backlog, not a separate report

The output isn't a document. It's 3–6 security stories added to the sprint backlog. This keeps threat modelling proportionate to feature complexity and means findings actually get addressed.

Microservices-Specific STRIDE Patterns

Service-to-Service Authentication (Spoofing)

The most common gap I find in microservices assessments: internal service calls that carry no authentication. Teams often reason that because the services are "inside the perimeter," they don't need auth. This is the assumption that makes lateral movement trivially easy.

Every service-to-service call should carry a verifiable identity. mTLS is the gold standard. Short-lived JWTs with service-scoped claims are a practical alternative. What's not acceptable: shared API keys that never rotate, or no authentication at all.
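A minimal illustration of the "short-lived, service-scoped token" idea, using a shared HMAC secret in place of a real JWT library. Every name here is invented for the sketch; in production you would use a proper JWT implementation or platform-issued identities (e.g. SPIFFE), with keys held in a secrets manager:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"per-environment secret, rotated regularly"  # illustrative only


def mint_token(service: str, ttl: int = 60) -> str:
    """Issue a short-lived token scoped to one calling service."""
    claims = {"sub": service, "exp": int(time.time()) + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).digest()
    return (body + b"." + base64.urlsafe_b64encode(sig)).decode()


def verify_token(token: str, expected_service: str) -> bool:
    """Reject forged signatures, wrong callers, and expired tokens."""
    body, sig = token.encode().rsplit(b".", 1)
    good = hmac.new(SECRET, body, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig), good):
        return False  # signature mismatch: possible spoofing
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["sub"] == expected_service and claims["exp"] > time.time()
```

The point is the shape, not the crypto: a verifiable caller identity on every internal hop, with an expiry short enough that a stolen token has limited value.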

Event-Driven Tampering

Message queues introduce a threat that synchronous APIs don't have: a persistent, potentially accessible store of messages between services. If an attacker can write to a queue — through a compromised service, misconfigured IAM, or direct queue access — they can inject malicious events that get processed as legitimate business logic.

For high-value event flows, sign messages at the producer and verify at the consumer. At minimum, implement strict queue access control and treat queue messages with the same scepticism as HTTP requests from external users.
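A sketch of what producer-side signing and consumer-side verification can look like for a queue event. The envelope format and key handling are assumptions for illustration; in practice the signing key would come from a secrets manager and be rotated:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"producer signing key"  # illustrative; fetch from a KMS in practice


def sign_event(event: dict) -> dict:
    """Producer side: wrap the event with an HMAC over its canonical JSON."""
    payload = json.dumps(event, sort_keys=True, separators=(",", ":"))
    mac = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": mac}


def consume_event(envelope: dict) -> dict:
    """Consumer side: reject anything whose signature doesn't verify."""
    expected = hmac.new(SIGNING_KEY, envelope["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(envelope["sig"], expected):
        raise ValueError("event signature invalid: possible tampering")
    return json.loads(envelope["payload"])
```

Note the consumer verifies before parsing the payload into business logic, which is exactly the "treat queue messages like external HTTP requests" posture.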

Distributed Repudiation

In a monolith, audit logging is straightforward. In a microservices architecture, a single business transaction spans multiple services, often with no shared correlation ID. When something goes wrong — or when a regulator asks what happened — reconstructing the event timeline is painful or impossible.

Build correlation IDs in from the start. Every request entering your system should carry a traceable ID that propagates through every service call and gets written to every audit log. This is both a security control and a debugging superpower.
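One way to wire this up in Python is a `contextvars` variable set at the system edge and read by both audit logging and outbound calls. The function and header names below are illustrative, not a prescribed standard:

```python
import contextvars
import logging
import uuid

# One correlation ID per inbound request, visible to every log line
# and every outbound call made while handling that request.
correlation_id = contextvars.ContextVar("correlation_id", default="-")


def handle_request(incoming_id=None):
    """Accept a caller-supplied ID, or mint one at the system edge."""
    cid = incoming_id or str(uuid.uuid4())
    correlation_id.set(cid)
    audit("order.created")  # hypothetical business event
    return cid


def audit(event):
    # Every audit line carries the correlation ID, so a transaction
    # can be reconstructed across services after the fact.
    logging.info("event=%s correlation_id=%s", event, correlation_id.get())


def outbound_headers():
    # Propagate the ID on every downstream call.
    return {"X-Correlation-ID": correlation_id.get()}
```

Using a `ContextVar` rather than a global means the ID stays correct even when requests are handled concurrently on async workers.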

When to Do a Full STRIDE Model

Not every feature needs only the 30-minute version, and some need more. A fuller, system-level exercise is warranted when a feature introduces a new trust boundary, handles a new class of sensitive data, or adds a new external integration. For those, schedule a dedicated session with a security facilitator rather than squeezing it into sprint planning.

Making It Stick

Threat modelling programmes fail when they're owned by the security team rather than by engineering. The security team's job is to teach the process, facilitate the high-complexity sessions, and review the outputs — not to run every threat model themselves.

Run a one-hour STRIDE workshop with each squad using one of their own recent features as the example. The abstract becomes concrete instantly. Teams that have done this once apply STRIDE naturally in their design discussions without being asked.

The Bottom Line

STRIDE works for microservices. What doesn't work is applying a process designed for quarterly shipping cycles to a team that ships daily. Make threat modelling small, fast, and integrated into the sprint — and it becomes a habit rather than a ceremony.

Start with one feature next sprint. Draw the data flow, spend ten minutes with STRIDE, capture three security stories. That's a threat modelling programme.