STRIDE Threat Model: The Complete Guide to Microsoft’s Security Framework
Threat modeling is one of the most powerful and misunderstood practices in cybersecurity. You’ve probably seen it treated as a compliance checkbox, or as an abstract exercise disconnected from real engineering. But when done properly, threat modeling becomes a design superpower.
Among all threat modeling frameworks, STRIDE remains the most widely taught and globally adopted. Originally developed at Microsoft in 1999, STRIDE gives you a structured way to think about how systems can fail securely, and how attackers might exploit those failures. This guide covers STRIDE: what it is, how to apply it, where it excels, where it struggles in modern systems, and how it adapts to AI and agentic architectures.
What Is the STRIDE Threat Model?
The STRIDE threat model is a threat classification framework originally created by Loren Kohnfelder and Praerit Garg at Microsoft in 1999. It was later formalized within Microsoft’s Security Development Lifecycle (SDL) and popularized in Adam Shostack’s 2014 book Threat Modeling: Designing for Security.
STRIDE is an acronym representing six categories of threats, each mapped to a violated security property:
| STRIDE Category | Violated Property |
|---|---|
| Spoofing | Authentication |
| Tampering | Integrity |
| Repudiation | Non-repudiation |
| Information Disclosure | Confidentiality |
| Denial of Service | Availability |
| Elevation of Privilege | Authorization |
Rather than listing specific attacks, STRIDE provides a structured lens through which to analyze systems. It is typically applied by:
- Creating a Data Flow Diagram (DFD)
- Identifying system elements (processes, data stores, data flows, external entities)
- Applying STRIDE categories to each element
Although originally designed for traditional, human-written software with predictable control flows, STRIDE remains the foundation of threat modeling education worldwide.
The Six STRIDE Categories Explained
Here’s what each of the six STRIDE categories looks like in practice and in modern systems.
1. Spoofing (Violates Authentication)
Spoofing occurs when an attacker impersonates a legitimate user, service, or system component to gain unauthorized access. Classic examples are:
- Credential theft
- Session hijacking
- Forged authentication tokens
- IP spoofing
- OAuth token replay
If you’re running distributed systems, watch the service-to-service authentication; that’s where spoofing hits the hardest in microservices.
In agentic systems, spoofing looks different: one AI agent impersonating another, forged tool execution identities, or malicious plugins claiming legitimate trust levels.
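One common anti-spoofing control for service-to-service calls is a signed identity claim: without the shared key, a caller cannot forge another service's identity. The sketch below is illustrative only (the service names and the shared-secret scheme are assumptions, not a specific product's API); real deployments would use mutual TLS or a token service backed by a KMS.

```python
import hmac
import hashlib
import time

# Illustration only: in production, fetch this from a secrets manager, never hardcode it.
SECRET = b"demo-shared-secret"

def sign_identity(service_name: str, issued_at: int) -> str:
    # HMAC over the identity claim plus a timestamp.
    msg = f"{service_name}|{issued_at}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_identity(service_name: str, issued_at: int, signature: str,
                    max_age_s: int = 300) -> bool:
    # Reject stale tokens to limit replay, then compare MACs in constant time.
    if time.time() - issued_at > max_age_s:
        return False
    expected = sign_identity(service_name, issued_at)
    return hmac.compare_digest(expected, signature)

now = int(time.time())
sig = sign_identity("billing-service", now)
print(verify_identity("billing-service", now, sig))  # legitimate caller
print(verify_identity("admin-service", now, sig))    # spoofed identity fails
```

The same pattern applies to agent identities: an agent that cannot produce a valid signature for the name it claims is rejected at the boundary.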
2. Tampering (Violates Integrity)
Tampering involves unauthorized modification of data, in transit, at rest, or during processing. Examples include:
- Man-in-the-middle attacks
- SQL injection leading to data modification
- Unsigned code alteration
- File system manipulation
Tampering often overlaps with injection vulnerabilities. AI systems add new forms of tampering: prompt injection altering agent instructions mid-execution, model poisoning attacks modifying training data, and context window manipulation in LLM applications. Prompt injection maps to Tampering, but STRIDE’s original categories don’t capture the specifics of AI threats very well.
3. Repudiation (Violates Non-repudiation)
Repudiation is when someone does something, denies it, and you can’t prove otherwise. Repudiation is often overlooked because it does not directly “break” the system, but it destroys accountability. Common issues include:
- Missing logs
- Unsigned transactions
- Weak audit trails
- Untraceable API calls
The AI angle makes this worse. Autonomous agents produce untraceable reasoning chains, make decisions without clear provenance, and often have incomplete logging of tool calls. If an AI agent makes a financial decision or a security change, how do you prove why it happened?
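One mitigation for weak audit trails is a hash-chained log: each entry commits to the previous one, so editing or denying a past action breaks the chain. This is a minimal sketch (the entry fields and helper names are assumptions, not a standard API):

```python
import hashlib
import json

def _digest(body: dict) -> str:
    # Deterministic hash of an entry body.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, actor: str, action: str) -> None:
    # Each entry embeds the previous entry's hash, forming a chain.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"actor": actor, "action": action, "prev": prev_hash}
    log.append({**body, "hash": _digest(body)})

def verify_chain(log: list) -> bool:
    # Recompute every hash; any retroactive edit invalidates the chain.
    prev = "0" * 64
    for entry in log:
        body = {"actor": entry["actor"], "action": entry["action"], "prev": prev}
        if entry["prev"] != prev or _digest(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "agent-7", "transfer $500")
append_entry(log, "agent-7", "update firewall rule")
print(verify_chain(log))          # chain intact
log[0]["action"] = "transfer $5"  # attempted repudiation
print(verify_chain(log))          # chain broken
```

For AI agents, logging each tool call this way gives you provenance you can defend later.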
4. Information Disclosure (Violates Confidentiality)
Information Disclosure is what it sounds like: sensitive data gets exposed to people who shouldn’t see it. Examples include:
- Verbose error messages revealing stack traces
- Unencrypted API traffic
- Misconfigured access controls
- Memory disclosure vulnerabilities (e.g., Heartbleed-class bugs)
AI applications introduce new disclosure risks: context window leaks revealing prior user data, system prompt leakage, training data memorization exposure, and inference attacks. In LLM systems, the attack surface for confidential data expands beyond storage and transmission into model behavior.
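The simplest of the disclosure paths above, verbose error messages, has an equally simple mitigation: keep stack traces server-side and return an opaque error to the caller. A hedged sketch (handler and field names are illustrative):

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("api")

def handle_request(payload: dict) -> dict:
    try:
        return {"total": 100 / payload["quantity"]}
    except Exception:
        # Full detail goes to internal logs only.
        log.error("request failed:\n%s", traceback.format_exc())
        # The client sees a generic message, not the stack trace.
        return {"error": "internal error"}

print(handle_request({"quantity": 4}))
print(handle_request({"quantity": 0}))  # division error stays server-side
```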
5. Denial of Service (Violates Availability)
Denial of Service aims to make your system unavailable to legitimate users, most often through resource exhaustion. Examples include:
- Distributed flooding attacks
- API rate abuse
- Storage saturation
- Memory exhaustion
In AI and agentic systems, recursive agent loops consume unbounded compute, prompt amplification spikes token usage, and tool chaining triggers runaway execution paths. Availability threats now inflate your operational costs as well as your downtime.
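One defense against runaway agent loops is a hard execution budget that fails closed. This is a minimal sketch under assumed limits; the class and limit names are illustrative:

```python
class BudgetExceeded(Exception):
    pass

class ExecutionBudget:
    """Caps total steps and tokens for one agent run."""
    def __init__(self, max_steps: int, max_tokens: int):
        self.max_steps, self.max_tokens = max_steps, max_tokens
        self.steps = 0
        self.tokens = 0

    def charge(self, tokens: int) -> None:
        # Called before each tool call or model invocation.
        self.steps += 1
        self.tokens += tokens
        if self.steps > self.max_steps or self.tokens > self.max_tokens:
            raise BudgetExceeded(f"stopped at step {self.steps}, {self.tokens} tokens")

budget = ExecutionBudget(max_steps=10, max_tokens=5_000)
try:
    while True:  # simulates a runaway tool-chaining loop
        budget.charge(tokens=800)
except BudgetExceeded as e:
    print(e)
```

The budget turns an unbounded-cost failure into a bounded, observable one.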
6. Elevation of Privilege (Violates Authorization)
Elevation of Privilege (EoP) occurs when a user or process gains access rights beyond what was intended. Examples include:
- Buffer overflow exploits
- RBAC misconfigurations
- Token escalation
- Container escapes
Authorization boundaries are significantly more complex in autonomous architectures. In agentic systems:
- AI agents may gain tool access beyond intended scope
- Poorly constrained permissions enable “excessive agency”
- Cross-agent privilege inheritance becomes dangerous
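A deny-by-default tool allowlist is one way to constrain agent scope. The agent and tool names below are hypothetical; this is a sketch of the pattern, not a specific framework's API:

```python
# Each agent gets an explicit, minimal set of tools; everything else is denied.
ALLOWED_TOOLS = {
    "research-agent": {"web_search", "read_file"},
    "billing-agent": {"read_invoice", "create_invoice"},
}

def invoke_tool(agent: str, tool: str) -> str:
    # Deny by default: unknown agents and unlisted tools are both rejected.
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise PermissionError(f"{agent} is not permitted to call {tool}")
    return f"{tool} executed for {agent}"

print(invoke_tool("research-agent", "web_search"))
try:
    invoke_tool("research-agent", "create_invoice")  # escalation attempt
except PermissionError as e:
    print(e)
```

Enforcing the check at the tool-invocation boundary (not inside the prompt) means a successful prompt injection still cannot expand the agent's privileges.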
How to Apply STRIDE: Step-by-Step
STRIDE is most powerful when applied methodically. Here is the five-step process:
Step 1: Define the System Scope
Threat modeling an entire platform at once often becomes overwhelming. Start small with a feature, service, or workflow. Clarify:
- What system are you modeling?
- What features are included?
- What is out of scope?
- Where are trust boundaries?
Step 2: Create a Data Flow Diagram (DFD)
A DFD includes:
- External entities
- Processes
- Data stores
- Data flows
- Trust boundaries
A login flow DFD might include:
- User
- Web server
- Authentication service
- Session database
- Logging system
Step 3: Enumerate Threats Per Element
Not every STRIDE category applies to every element type. Here’s the standard mapping:
| DFD Element | Applicable STRIDE Categories |
|---|---|
| External Entities | Spoofing, Repudiation |
| Processes | All six (S, T, R, I, D, E) |
| Data Stores | Tampering, Information Disclosure, Denial of Service |
| Data Flows | Tampering, Information Disclosure, Denial of Service |
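The per-element mapping above is mechanical enough to script. A minimal sketch (element-type names are mine, chosen to match the table):

```python
STRIDE = {"S": "Spoofing", "T": "Tampering", "R": "Repudiation",
          "I": "Information Disclosure", "D": "Denial of Service",
          "E": "Elevation of Privilege"}

# Which category letters apply to each DFD element type (per the table above).
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TID",
    "data_flow": "TID",
}

def enumerate_threats(elements):
    # For each (name, element_type) pair, yield the applicable categories.
    return [(name, STRIDE[c])
            for name, etype in elements
            for c in APPLICABLE[etype]]

dfd = [("User", "external_entity"),
       ("Auth service", "process"),
       ("Session DB", "data_store")]
for name, threat in enumerate_threats(dfd):
    print(f"{name}: {threat}")
```

This produces 2 + 6 + 3 = 11 candidate threats for the three elements, which analysts then refine rather than invent from scratch.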
Step 4: Document and Prioritize
Documentation for each threat should include:
- Threat description
- Affected component
- STRIDE category
- Likelihood
- Impact
- Mitigation
Prioritization methods should include:
- DREAD scoring
- Risk matrices
- Business impact analysis
- CVSS alignment
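As one concrete prioritization option, DREAD averages five 1-10 factor ratings into a comparable score. The threats and ratings below are invented for illustration; recall that DREAD scoring is known to be subjective:

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    # Each factor is rated 1-10; the mean gives a single comparable risk score.
    return (damage + reproducibility + exploitability + affected, discoverability)[0] \
        if False else \
        (damage + reproducibility + exploitability + affected + discoverability) / 5

threats = {
    "Session token replay": dread_score(8, 7, 6, 8, 5),
    "Missing login attempt logs": dread_score(4, 9, 3, 5, 6),
}
# Highest-risk first.
for name, score in sorted(threats.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {name}")
```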
Step 5: Validate and Iterate
Threat models are living documents. Revisit them:
- After architectural changes
- Before major releases
- After security incidents
- During compliance audits
Worked Example: E-Commerce Login Flow
Consider a simple login system.
Potential threats:
- Spoofing: Session token replay
- Tampering: Login request manipulation
- Information Disclosure: Password logging
- Elevation of Privilege: Admin flag injection
- Denial of Service: Credential stuffing causing rate exhaustion
- Repudiation: Missing login attempt logs
STRIDE Mitigations by Category
Mitigations should align directly with violated properties.
| Category | Security Property | Mitigations |
|---|---|---|
| Spoofing | Authentication | MFA, OAuth 2.0/OIDC, mutual TLS |
| Tampering | Integrity | Input validation, digital signatures, HMAC |
| Repudiation | Non-repudiation | Centralized logging, tamper-evident logs |
| Info Disclosure | Confidentiality | TLS 1.3, encryption at rest, least privilege |
| DoS | Availability | Rate limiting, auto-scaling, CDNs |
| Elevation of Privilege | Authorization | RBAC/ABAC, sandboxing |
STRIDE and OWASP Alignment
STRIDE maps closely to OWASP Top 10 2021 categories:
- Spoofing → Identification & Authentication Failures
- Tampering → Injection & Data Integrity Failures
- Repudiation → Logging & Monitoring Failures
- Info Disclosure → Broken Access Control & Cryptographic Failures
- Elevation of Privilege → Broken Access Control
What Does STRIDE Mean for LLMs?
Large language models break assumptions that STRIDE was built on. Traditional software has predictable control flows; LLMs respond differently to the same input depending on context, temperature, and prior conversation. That makes threat identification harder because the attack surface isn’t static.
The OWASP Top 10 for LLM Applications 2025 provides the closest structured mapping between STRIDE categories and LLM-specific vulnerabilities:
- Tampering → Prompt Injection
- Info Disclosure → Sensitive Information Disclosure
- DoS → Unbounded Consumption
- Elevation of Privilege → Excessive Agency
If you’re building with LLMs, STRIDE still gives you the right questions. But you’ll need these LLM-specific categories to get useful answers.
STRIDE vs DREAD vs PASTA
STRIDE and DREAD are complementary. STRIDE identifies threats. DREAD scores them. If your team is new to threat modeling or works with DFD-based architectures, STRIDE is the place to start.
PASTA covers both identification and scoring, and adds business objectives, technical profiling, and attack simulation on top. If you need business-aligned threat analysis, PASTA is better suited. Microsoft deprecated DREAD internally due to subjective and inconsistent scoring, though it remains in widespread industry use.
Framework definitions:
- STRIDE: Threat identification/classification framework
- DREAD: Risk scoring model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability)
- PASTA: 7-stage risk-centric methodology (Process for Attack Simulation and Threat Analysis)
| Aspect | STRIDE | DREAD | PASTA |
|---|---|---|---|
| Purpose | Threat identification | Risk scoring | End-to-end risk analysis |
| Approach | Category-based classification | 5-factor scoring (1-10) | 7-stage methodology |
| Input | DFDs + system architecture | Identified threats | Business objectives + tech profile |
| Best for | Developers, early-stage TM | Quick prioritization | Mature security programs |
| Complexity | Low | Low | High |
| Status | Industry standard | Deprecated by Microsoft | Growing adoption |
STRIDE in Modern Architectures
Microservices & APIs
In microservices architectures, the number of trust boundaries and data flows increases dramatically compared to monoliths: more services means more trust boundaries, which means more attack surface. Focus areas include:
- Service mesh authentication
- API gateway tampering
- Distributed logging
Serverless & FaaS
Serverless functions introduce ephemeral execution contexts that require different observability patterns than traditional host-based systems. Standard host-level logging doesn’t apply, but cloud-native alternatives (CloudWatch, X-Ray, structured logging) address many tracing needs. The STRIDE-relevant challenge is that Repudiation and Denial of Service analysis requires deliberate instrumentation that doesn’t come by default.
Infrastructure as Code
Your IaC templates are threat model targets. A tampered IaC template can deploy insecure infrastructure at scale. Scan for misconfigurations, sign your infrastructure policies, and implement drift detection.
DevSecOps Integration
Modern teams embed STRIDE into CI/CD, automating threat identification at pull request time rather than treating it as a one-time workshop artifact. Threat modeling shifts from workshops to pipelines. Implementation approaches:
- Threat model as code
- Automated DFD generation from infrastructure
- PR-triggered STRIDE checks
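A PR-triggered check can be as small as a script that fails the build when a boundary-crossing component lacks a recorded STRIDE review. The file format and field names below are assumptions for illustration, not a standard:

```python
import json

# A hypothetical threat-model-as-code file committed alongside the source.
THREAT_MODEL = json.loads("""
{
  "components": [
    {"name": "api-gateway", "crosses_trust_boundary": true,  "stride_reviewed": true},
    {"name": "payment-svc", "crosses_trust_boundary": true,  "stride_reviewed": false},
    {"name": "metrics-agg", "crosses_trust_boundary": false, "stride_reviewed": false}
  ]
}
""")

def unreviewed_components(model: dict) -> list:
    # Only components that cross a trust boundary require a STRIDE review.
    return [c["name"] for c in model["components"]
            if c["crosses_trust_boundary"] and not c["stride_reviewed"]]

missing = unreviewed_components(THREAT_MODEL)
if missing:
    print("FAIL: STRIDE review missing for: " + ", ".join(missing))
else:
    print("PASS: all boundary-crossing components reviewed")
```

Wired into CI, a nonzero exit on the FAIL branch blocks the merge until the review exists.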
Agile & Sprint-Based Development
In Agile environments, threat modeling shifts from a waterfall gating activity to an incremental process. Each user story that changes data flows or trust boundaries triggers a STRIDE review of affected components.
Where STRIDE Falls Short
STRIDE provides six broad threat categories, but it was never intended as an exhaustive taxonomy. It emerged in the context of traditional, human-written software with predictable control flows, and its categories reflect that.
Gaps: What STRIDE Misses
- STRIDE has no native category for supply chain attacks, dependency confusion, or third-party component risk — threats that dominate modern software.
- STRIDE does not address business logic flaws, race conditions, or semantic vulnerabilities that don’t map cleanly to its six categories.
- Privacy threats (data minimization, consent, purpose limitation) fall outside STRIDE’s framework. The LINDDUN framework, developed at KU Leuven, was created to address this gap.
False Positives and Coverage Gaps
- You’ll generate false positives if you mechanically apply all six categories to every DFD element without using STRIDE-per-element mapping.
- STRIDE can also miss emerging threats — prompt injection, model poisoning, agent manipulation — because they map imperfectly to existing categories. The nuance of their attack mechanics gets lost. For example, prompt injection maps to Tampering, but that mapping doesn’t capture the semantic manipulation of an AI system’s decision logic.
Evolution Since 1999
- STRIDE-per-element limits which categories apply to specific DFD element types, cutting down false positives.
- STRIDE-per-interaction goes further, analyzing threats at the interaction level between elements rather than at the element level.
- Microsoft’s Threat Modeling Tool automates some STRIDE-per-element analysis but remains focused on traditional architecture. Powerful, but incomplete.
STRIDE for AI and Agentic Systems
Why Traditional STRIDE Falls Short for AI
AI systems introduce non-deterministic behavior: outputs vary with input context, training data, and model state, which breaks STRIDE’s assumption of predictable input-output relationships. STRIDE is necessary, but insufficient, for AI-native architectures.
Agentic AI systems add autonomous decision-making, multi-step execution, and dynamic tool access, creating threat vectors that extend beyond the original six categories. Many agentic threats can be mapped to STRIDE categories (as shown below), but the mapping loses critical nuance around intent misalignment, autonomous scope creep, and multi-agent trust dynamics.
STRIDE-to-Agentic Threat Mapping
| STRIDE Category | Traditional Threat | Agentic AI Equivalent |
|---|---|---|
| Spoofing | User impersonation | Agent identity spoofing — one agent impersonating another |
| Tampering | Data modification | Prompt injection — altering agent instructions mid-execution |
| Repudiation | Missing audit trails | Untraceable AI decision chains — no reasoning provenance |
| Info Disclosure | Data leaks | Context window leakage — model exposing training data or prior sessions |
| Denial of Service | Resource exhaustion | Recursive agent loops — unbounded compute consumption |
| Elevation of Privilege | Permission escalation | Excessive agency — agent accessing tools beyond its designated scope |
OWASP Top 10 for LLM Applications 2025 provides complementary coverage:
- Tampering → LLM01 (Prompt Injection) + LLM04 (Data and Model Poisoning)
- Elevation of Privilege → LLM06 (Excessive Agency)
- Information Disclosure → LLM02 (Sensitive Information Disclosure) + LLM07 (System Prompt Leakage)
- Denial of Service → LLM10 (Unbounded Consumption)
What’s Needed Beyond STRIDE
Teams building agentic AI security systems need threat modeling approaches that account for:
- Intent misalignment
- Prompt-driven control fragility
- Autonomous decision scope
- Model behavior drift
- Multi-agent trust boundaries
Trent AI’s Agentic Threat Assessor extends traditional STRIDE analysis to agentic systems, mapping threats across both classical and AI-specific categories using your actual architecture as context.
Tools for STRIDE Threat Modeling
Many tools support STRIDE, and modern ones extend it to AI-specific threats with architecture-aware automation. Here are some tools to check out:
Open-Source & Free
- Microsoft Threat Modeling Tool — Free, DFD-based, STRIDE-per-element automation. Windows only.
- OWASP Threat Dragon — Open-source, cross-platform (web + desktop), DFD and STRIDE support.
- Threagile — Threat modeling as code, YAML-based, automated risk identification.
Commercial & AI-Assisted
Trent AI’s Agentic Threat Assessor: Contextual, agentic STRIDE analysis grounded in your specific codebase, architecture, and CI/CD pipeline. Extends STRIDE to AI-native threats automatically.
STRIDE Training Resources
- Microsoft’s original STRIDE documentation and SDL resources
- OWASP Threat Modeling community and playbook
- SAFECode threat modeling practices guide
- Practical DevSecOps (CTMP certification) and SANS (SEC540) courses include STRIDE-focused training modules
For teams wanting to move beyond manual STRIDE workshops, Trent AI’s Threat Assessor automates the process using your actual system architecture, not generic templates.
Compliance Benefits
STRIDE outputs directly support compliance requirements:
| Framework | Relevant Control |
|---|---|
| SOC 2 | CC6.1 (Logical Access) |
| ISO 27001:2022 | A.8.25 (Secure Development Life Cycle), A.8.26 (Application Security Requirements), A.5.9 (Inventory of Information and Other Associated Assets) |
| NIST 800-53 | RA-3 (Risk Assessment) |
| PCI DSS | Requirement 6 (Secure Systems and Software) |
Threat model documentation generated through STRIDE analysis serves as evidence for audit trails in regulated industries.
Final Thoughts
STRIDE endures because it simplifies a complex problem: How can this system fail securely?
It does not predict every attack. It does not replace risk management. It does not eliminate human judgment. But it provides structure, and in a world of microservices, cloud-native deployments, CI/CD pipelines, and AI agents making autonomous decisions, structure is invaluable.
STRIDE remains the most accessible gateway into systematic security thinking, and when extended thoughtfully, it continues to serve modern engineering teams more than 25 years after its creation. The key is not to treat STRIDE as a checklist. Treat it as a lens, and use that lens early, when architecture decisions are still cheap to change. That’s where real security lives.
Reviewed by Eno Thereska, Co-founder & CEO at Trent AI