
What Is Threat Modeling?

Trent AI Team
Feb 2026 • 16 min read
What Is Threat Modeling?

Threat modeling is a structured process for identifying and prioritizing potential security threats to a system, then defining countermeasures to prevent or mitigate those threats. It answers four key questions originally framed by Adam Shostack: what are we building, what can go wrong, what are we going to do about it, and did we do a good enough job.

Those four questions are the backbone of every threat modeling exercise. The first, what are we building, forces your team to agree on system scope and architecture before anyone discusses attacks. The second, what can go wrong, systematically surfaces threats using structured methodologies rather than gut instinct. The third, what are we going to do about it, connects each identified threat to a specific mitigation. The fourth, did we do a good enough job, closes the loop with validation, the step most teams skip.

Shostack formalized this framework while serving as the threat modeling Program Manager for Microsoft’s Security Development Lifecycle (SDL) team from 2006 to 2009. But threat modeling at Microsoft started years earlier. In 1999, Loren Kohnfelder and Praerit Garg wrote the first structured approach to threat categorization, introducing the methodology that would become STRIDE. After Bill Gates issued his Trustworthy Computing Initiative memo in January 2002, Microsoft committed to embedding security across its development practices. By 2004, the SDL had formalized STRIDE-based threat modeling as a mandatory step for every product team.

The primary value of threat modeling is shifting security left to the design phase, where architectural flaws are cheapest to fix. Studies from IBM, NIST, and others have found that fixing design-level vulnerabilities in production costs significantly more than catching them during design; figures of 10 to 100 times are commonly cited. That gap makes threat modeling one of the best returns on time you'll get in security.

Threat modeling is proactive and design-time, not reactive and code-time. Vulnerability scanning finds known bugs in existing code. Penetration testing attacks running systems. Threat modeling identifies architectural weaknesses before you write a single line.

A threat model is not a static document. It’s a living representation of your system’s security posture that should evolve alongside the architecture. The most effective threat models are treated as versioned artifacts, updated when architecture changes, reviewed when new threat intelligence surfaces, and retired when systems are decommissioned.

If you are new to threat modeling, start with our practical threat modeling for developers guide for a simplified walkthrough.

The Threat Modeling Process

The threat modeling process follows five core steps: define the system scope and architecture using data flow diagrams, identify threats using a structured methodology like STRIDE, prioritize threats using a risk matrix based on impact and likelihood, define mitigations and security controls for prioritized threats, and validate the threat model through review and testing.

[Figure: The five-step threat modeling process]

Step 1: Define the system. Data flow diagrams (DFDs) are the most common technique for modeling system architecture in threat modeling. A DFD uses five core elements: processes (circles or rounded rectangles), data stores (parallel lines), data flows (arrows), external entities (rectangles), and trust boundaries (dashed lines separating zones of different privilege levels).

Trust boundaries are the most security-critical element in a DFD. Most threats cluster around trust boundary crossings, the points where data moves between zones of different privileges. A request crossing from a public-facing API gateway into an internal microservice is exactly the kind of boundary where authentication failures, injection attacks, and data leakage happen.
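
As a sketch of what "focus on the boundary crossings" means in practice, here is a minimal DFD expressed as plain data structures. All names here (the zones, services, and flow labels) are invented for illustration; this is not any real tool's API, just the idea that a flow whose source and sink sit in different trust zones is where threat identification should concentrate.

```python
# Minimal DFD-as-data sketch: elements live in named trust zones, and any
# flow whose endpoints sit in different zones is a trust boundary crossing.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    zone: str  # trust zone, e.g. "internet", "dmz", "internal"

@dataclass(frozen=True)
class DataFlow:
    source: Element
    sink: Element
    label: str

    def crosses_boundary(self) -> bool:
        return self.source.zone != self.sink.zone

# Hypothetical example system
browser = Element("Browser", "internet")
gateway = Element("API Gateway", "dmz")
orders = Element("Orders Service", "internal")
db = Element("Orders DB", "internal")

flows = [
    DataFlow(browser, gateway, "HTTPS request"),
    DataFlow(gateway, orders, "gRPC call"),
    DataFlow(orders, db, "SQL query"),
]

# These crossings are where threat identification starts.
crossings = [f for f in flows if f.crosses_boundary()]
for f in crossings:
    print(f"{f.label}: {f.source.zone} -> {f.sink.zone}")
```

In this toy system, two of the three flows cross a boundary (internet to DMZ, DMZ to internal); the internal SQL query does not, which is exactly why the gateway edges deserve the most scrutiny.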

Step 2: Identify threats. Apply a structured methodology at each trust boundary crossing and along each data flow. STRIDE is the most common choice, but PASTA, OCTAVE, and others each bring different strengths depending on your goals. The next section compares six major approaches.

Step 3: Prioritize by risk. Not every threat deserves the same attention. Risk prioritization uses a matrix of impact (how severe the damage) and likelihood (how probable the attack). Common approaches include simple High/Medium/Low ratings, numeric scoring on 1-5 scales, and formal models like DREAD or CVSS. The point is consistent evaluation, not false precision.
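
A 1-5 impact by 1-5 likelihood matrix can be reduced to a few lines of code. The threats, scores, and bucket thresholds below are arbitrary choices for illustration, not from any standard; the point, as above, is that the same inputs always produce the same rating.

```python
# Illustrative risk-matrix scoring: impact and likelihood each rated 1-5,
# multiplied into a 1-25 score, then bucketed for consistent triage.
def risk_rating(impact: int, likelihood: int) -> str:
    """Return a coarse rating from a 1-5 impact x 1-5 likelihood matrix."""
    score = impact * likelihood  # 1..25
    if score >= 15:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Hypothetical threats: (name, impact, likelihood)
threats = [
    ("SQL injection at API gateway", 5, 4),
    ("Log tampering by insider", 4, 2),
    ("Verbose error messages", 2, 2),
]

# Triage order: highest combined score first.
for name, impact, likelihood in sorted(threats, key=lambda t: t[1] * t[2], reverse=True):
    print(f"{risk_rating(impact, likelihood):6} {name}")
```

Whatever thresholds you pick, write them down once and apply them everywhere; consistency across reviews is what makes the ratings comparable.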

Step 4: Define mitigations. Each prioritized threat maps to a specific mitigation or control. Your mitigations should follow the principle of defense in depth: layered controls that don’t depend on any single mechanism working perfectly.

Step 5: Validate. Validation is the step most teams skip; don't. It asks two questions: does the threat model accurately represent the current system, and are the identified threats addressed by controls you've actually implemented? Validation is what separates a threat model from a wish list.

Threat modeling is iterative, not linear. When your architecture changes during development, update the DFD and re-run threat identification. A waterfall approach to threat modeling produces an artifact that's outdated the moment your first sprint changes the architecture.

Threat Modeling Methodologies

The six most widely used threat modeling methodologies are: STRIDE (threat classification by category), PASTA (risk-centric seven-stage process), OCTAVE (asset-centric organizational risk), LINDDUN (privacy-focused threat analysis), VAST (scalable visual agile methodology), and DREAD (risk quantification scoring). STRIDE is most common for application security, while PASTA is preferred when business-risk alignment is required.

STRIDE is the most widely adopted threat modeling methodology for application security. Developed at Microsoft by Loren Kohnfelder and Praerit Garg in 1999, it classifies threats into six categories: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Each category maps to a security property: Spoofing threatens Authentication, Tampering threatens Integrity, Repudiation threatens Non-repudiation, Information Disclosure threatens Confidentiality, DoS threatens Availability, and Elevation of Privilege threatens Authorization. That mapping gives you a systematic way to check whether your controls address each property.
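
The category-to-property mapping described above is mechanical enough to encode directly. Here it is as a lookup table driving a per-boundary review checklist; the `checklist` helper and its question wording are my own illustration, not part of STRIDE itself.

```python
# The STRIDE-to-security-property mapping from the text, as a lookup table.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def checklist(boundary: str) -> list[str]:
    """One review question per STRIDE category for a given trust boundary."""
    return [
        f"[{boundary}] {threat}: is {prop} enforced here?"
        for threat, prop in STRIDE.items()
    ]

for q in checklist("API gateway -> orders service"):
    print(q)
```

Running the checklist at every trust boundary crossing is the systematic property check the paragraph describes: six questions per boundary, none skipped.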

When STRIDE doesn’t fit, the alternatives each bring a different lens. PASTA (Process for Attack Simulation and Threat Analysis) runs seven stages from business objectives through attack modeling to risk analysis, making it the strongest choice when regulatory requirements demand documented risk-based decisions. OCTAVE, developed at Carnegie Mellon’s Software Engineering Institute, takes an asset-centric approach, evaluating threats to critical assets across the enterprise rather than focusing on a single application. For privacy-specific concerns, LINDDUN addresses seven threat categories (Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of information, Unawareness, and Non-compliance) and pairs well with STRIDE if you’re building under GDPR or similar regulations.

Two more specialized approaches round out the toolkit. VAST (Visual, Agile, and Simple Threat) modeling uses two model types, application and operational, to scale threat modeling in enterprise environments without slowing agile delivery. DREAD scores threats on five dimensions (Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability) using either numeric 0-10 scales (total 0-50) or simplified High/Medium/Low ratings. Microsoft deprecated DREAD internally due to scoring subjectivity, but it remains widely used as a practical prioritization tool.
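
DREAD's numeric variant is simple arithmetic, which is worth seeing because it also exposes the subjectivity problem: the formula is exact, but the inputs are judgment calls. The two example threats and their scores below are invented for illustration.

```python
# DREAD scoring as described above: five 0-10 dimensions summed to a 0-50 total.
def dread_score(damage: int, reproducibility: int, exploitability: int,
                affected_users: int, discoverability: int) -> int:
    dims = (damage, reproducibility, exploitability,
            affected_users, discoverability)
    if not all(0 <= d <= 10 for d in dims):
        raise ValueError("each DREAD dimension is scored 0-10")
    return sum(dims)

print(dread_score(8, 9, 7, 8, 6))  # hypothetical SQL injection -> 38
print(dread_score(6, 3, 4, 2, 5))  # hypothetical log tampering -> 20
```

Two reviewers can legitimately disagree on whether Discoverability is a 5 or an 8, which is exactly the subjectivity that led Microsoft to deprecate DREAD internally; calibrating scorers against shared examples mitigates it somewhat.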

| Methodology | Approach | Best For | Limitations |
| --- | --- | --- | --- |
| STRIDE | Threat categorization (6 types) | Application security, dev teams | Doesn't prioritize risks |
| PASTA | Risk-centric (7 stages) | Business-risk alignment, compliance | Resource-intensive |
| OCTAVE | Asset-centric | Organizational risk assessment | Less granular for applications |
| LINDDUN | Privacy-focused (7 types) | GDPR/privacy compliance | Narrow scope |
| VAST | Visual, agile, scalable | Enterprise scale, agile teams | Proprietary approach |
| DREAD | Risk scoring (5 dimensions) | Threat prioritization | Subjective scoring |

No single methodology is universally best. Selection depends on your maturity, team expertise, system complexity, and regulatory requirements. Use STRIDE when your team is new to threat modeling and working on a single application. Use PASTA when you need explicit business-risk alignment for enterprise systems. Use OCTAVE when assessing organizational risk beyond a single application. Use LINDDUN alongside STRIDE when privacy is a first-class requirement.

Threat Modeling for AI and Agentic Systems

Traditional threat modeling frameworks like STRIDE were designed for deterministic software systems. Agentic AI systems break this assumption through non-deterministic behavior, autonomous decision-making, multi-step reasoning, dynamic tool selection, and inter-agent communication. The OWASP Top 10 for Agentic Applications identifies attack vectors specific to AI agents including goal hijacking, excessive autonomy, and insecure tool use.

Each STRIDE category looks different when your system includes autonomous agents:

| STRIDE Category | Traditional Threat | Agentic AI Manifestation |
| --- | --- | --- |
| Spoofing | Identity forgery | Agent impersonation, prompt injection hijacking |
| Tampering | Data modification | Training data poisoning, context window manipulation |
| Repudiation | Action denial | Untraceable autonomous decisions |
| Information Disclosure | Data exposure | Model memorization, context leakage |
| Denial of Service | System overload | Unbounded token consumption, recursive loops |
| Elevation of Privilege | Access escalation | Excessive agency, unauthorized tool use |
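
To make one row concrete, here is a sketch of a mitigation for the agentic denial-of-service manifestation: a hard budget on tokens and iterations wrapped around an agent loop. Everything here is hypothetical; `run_step` stands in for whatever your agent framework invokes per reasoning step, and the limits are placeholders.

```python
# Sketch: bound an agent loop so unbounded token consumption and recursive
# loops fail fast instead of running away.
class BudgetExceeded(Exception):
    pass

def run_agent(run_step, max_steps: int = 10, max_tokens: int = 50_000):
    """Drive an agent loop, aborting before unbounded consumption."""
    tokens_used = 0
    for step in range(max_steps):
        result, tokens = run_step()   # each step reports its token cost
        tokens_used += tokens
        if tokens_used > max_tokens:
            raise BudgetExceeded(f"token budget exhausted at step {step}")
        if result is not None:        # agent produced a final answer
            return result
    raise BudgetExceeded(f"no answer within {max_steps} steps")

# Toy step function: returns a final answer on the third call.
calls = {"n": 0}
def fake_step():
    calls["n"] += 1
    return ("done" if calls["n"] == 3 else None), 1_000

print(run_agent(fake_step))  # prints "done"
```

The same pattern (an explicit ceiling enforced outside the agent's own reasoning) generalizes to tool-call counts, recursion depth, and wall-clock time.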

MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) is a threat modeling framework introduced by the Cloud Security Alliance in February 2025. It provides a structured seven-layer approach with threats mapped to the MITRE ATLAS framework, so you can reason systematically about how agents interact with each other, their tools, and their environments.

ASTRIDE extends STRIDE by adding an “A” category for AI Agent-Specific Attacks: instruction manipulation, unsafe reasoning-driven tool use, and misuse of agent memory or context windows. Published in December 2025, ASTRIDE offers an incremental path if you’re already familiar with STRIDE and want to incorporate AI-specific threats.

The OWASP Top 10 for Agentic Applications, released in December 2025, identifies ten critical risks specific to autonomous AI agents: Agent Goal Hijack, Identity and Privilege Abuse, Unexpected Code Execution, Insecure Inter-Agent Communication, Human-Agent Trust Exploitation, Tool Misuse and Exploitation, Agentic Supply Chain Vulnerabilities, Memory and Context Poisoning, Cascading Failures, and Rogue Agents.

Real-world incidents have already confirmed several of these categories. Agent-mediated data exfiltration demonstrated goal hijack vulnerabilities. Remote code execution through tool misuse showed that agents can be weaponized through their own capabilities. Memory poisoning attacks have reshaped agent behavior long after the initial injection. Supply chain compromises targeting MCP servers and plugins exploited the trust agents place in their runtime infrastructure. Unexpected Code Execution and Agentic Supply Chain Vulnerabilities are distinct from their traditional counterparts because they target agent runtime infrastructure rather than application code or model weights.

The NSA and CISA issued joint guidance in May 2025 requiring organizations to conduct data security threat modeling and privacy impact assessments at the outset of any AI initiative. If you’re building AI-powered products, your threat model needs to cover both traditional application threats and AI-specific threats in one place. Trent AI’s Threat Assessor does exactly this, mapping agentic AI security threats alongside conventional application risks in a single, continuously updated model.

Continuous Threat Modeling

The State of Threat Modeling 2024-2025 survey confirms what practitioners already know: keeping threat models current is the hardest part. You invest significant effort in the initial exercise, then watch the model grow stale as your architecture changes around it.

Static, point-in-time threat models become outdated as system architecture evolves. Continuous threat modeling integrates threat analysis into the development lifecycle using threat-modeling-as-code tools like pytm and Threagile, which generate threat models from code definitions and update automatically with architecture changes.

pytm is an OWASP project that defines threat models in Python. You describe your system components, data flows, and trust boundaries in code, and pytm automatically generates DFDs, sequence diagrams, and applicable threat lists. Your threat model lives in the same repository as the code it describes, reviewed in the same pull requests and versioned with the same commits.

Threagile takes a YAML-based approach, defining threat models declaratively with automated security analysis using both standard and custom risk rules. It supports optional AI integration for threat identification and works well if you prefer configuration over code.

CI/CD pipeline integration lets you automatically detect new attack surfaces when architecture changes are committed. When you add a new microservice or expose a new API endpoint, the pipeline flags that the threat model needs updating, or with tools like pytm, regenerates it automatically.
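
A minimal version of that pipeline gate can be written without any special tooling. The path conventions below (`services/`, `threatmodel/model.py`) are made up for illustration; the check simply fails a build when architecture files changed in a commit but the threat model did not.

```python
# Illustrative CI gate: flag commits that touch architecture without
# touching the threat model. Paths are hypothetical conventions.
ARCHITECTURE_PATHS = ("services/", "api/", "infra/")
THREAT_MODEL_PATH = "threatmodel/model.py"

def needs_threat_model_update(changed_files: list[str]) -> bool:
    """True when architecture changed but the threat model was left stale."""
    arch_changed = any(f.startswith(ARCHITECTURE_PATHS) for f in changed_files)
    model_changed = THREAT_MODEL_PATH in changed_files
    return arch_changed and not model_changed

print(needs_threat_model_update(["services/orders/app.py"]))  # True
print(needs_threat_model_update(["services/orders/app.py",
                                 THREAT_MODEL_PATH]))          # False
```

In a real pipeline you would feed this the diff file list (e.g. from `git diff --name-only`) and fail the job, or open a ticket, when it returns True; with pytm in the repository, the same trigger can instead regenerate the model automatically.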

The shift from document-based to code-based threat modeling mirrors the infrastructure-as-code movement. Just as Terraform and CloudFormation replaced manual infrastructure provisioning, pytm and Threagile replace static Word documents and Visio diagrams. Trent AI’s Threat Assessor takes this further, continuously analyzing your architecture changes and updating the threat assessment as you build.

Continuous threat modeling doesn’t eliminate the need for human judgment. Automated tools detect structural changes; humans assess business impact and design mitigations.

Threat Modeling Tools

Threat modeling tools fall into four categories: enterprise platforms, open-source tools, AI-native platforms, and cloud-native solutions. What you pick depends on your team size, integration requirements, methodology support, and budget.

| Category | Examples | Key Features |
| --- | --- | --- |
| AI-Native | Trent AI, emerging platforms | Automated analysis, continuous assessment |
| Enterprise | IriusRisk, ThreatModeler, SD Elements | Collaborative workflows, compliance mapping, issue tracker integration |
| Open Source | OWASP Threat Dragon, pytm, Threagile, Microsoft TMT | Free, extensible, community-supported |
| Cloud-Native | Cloud-specific solutions | AWS/Azure/GCP integration |

Enterprise platforms give you collaborative workflows: threat libraries, compliance mapping, and integration with issue trackers like Jira and Azure DevOps. You’ll want these if you’re coordinating across multiple teams or need audit trails.

Open-source tools lower the barrier to entry but require more manual effort and lack enterprise collaboration features. The Microsoft Threat Modeling Tool is free and widely adopted but limited to STRIDE methodology and Windows-only deployment. OWASP Threat Dragon offers a cross-platform alternative with a visual interface.

Common Challenges and Best Practices

The most common challenge with threat modeling is perceived time cost. If you’re already stretched for delivery deadlines, adding another process feels impossible. Lightweight approaches like “Threat Model Every Story” reduce this friction by analyzing each user story for security implications during sprint planning, not running a separate, lengthy workshop.

Expertise gaps are the second most common barrier. You might not have a security engineer available to facilitate sessions. Champions programs address this by training one developer per team to lead the process, spreading security knowledge across your teams without hiring dedicated security engineers. Trent AI’s Threat Assessor also bridges this gap, walking your team through the process step by step, even if you don’t have a dedicated security engineer.

Threat model staleness is systemic, not accidental. If you treat your threat model as a point-in-time document, you’ll always be behind your own architecture. Threat modeling as code and scheduled review cycles (quarterly for stable systems, per-sprint for rapidly evolving ones) address this structurally rather than through discipline alone.

For agile teams, the pattern that works is: threat modeling during sprint planning for new features that change architecture or trust boundaries, lightweight review during code review for changes that might affect existing threat models, and quarterly deep dives to reassess the entire system against new threat intelligence.

Getting executive buy-in means talking about what executives care about: compliance requirements that threat modeling satisfies, breach cost reduction from design-phase security, and development velocity gains from catching architectural issues early rather than reworking them late. Framing it as “security best practice” rarely gets budget.

Start with your highest-risk components first, not the entire system. A targeted threat model of your authentication flow, payment processing pipeline, or data ingestion layer delivers immediate value. Expand from there as the practice matures.

Threat Modeling and Compliance

Threat modeling satisfies requirements across multiple compliance frameworks: SOC 2 Trust Services Criteria CC3.2 for risk assessment, ISO 27001 Clause 6.1.2 for information security risk assessment, PCI DSS Requirements 6.3 and 6.5 for vulnerability identification and secure development, NIST 800-53 RA-3 for risk assessment, and the EU AI Act Article 9 for risk management of high-risk AI systems.

| Framework | Requirement | How Threat Modeling Satisfies |
| --- | --- | --- |
| SOC 2 | CC3.2 (Risk Assessment) | Structured risk identification; documented threat models serve as audit evidence |
| ISO 27001 | Clause 6.1.2 | Systematic identification, analysis, and evaluation of information security risks |
| PCI DSS v4.0 | Req 6.3, 6.5 | Proactive vulnerability identification during design; secure development practices |
| NIST 800-53 | RA-3 (Risk Assessment) | Accepted method for system-level risk assessments |
| EU AI Act | Article 9 | Risk identification mandate for high-risk AI systems (via MAESTRO/ASTRIDE) |

SOC 2 CC3.2 requires you to identify and assess risks. ISO 27001 Clause 6.1.2 mandates systematic identification, analysis, and evaluation of information security risks. A documented threat model satisfies both: repeatable risk identification with traceable mitigations that auditors can review.

PCI DSS v4.0 Requirement 6.3 focuses on identifying security vulnerabilities through processes including monitoring sources of vulnerability information. Threat modeling goes beyond known CVEs to catch architectural weaknesses during design. Requirement 6.5 is a more direct fit: it requires secure development practices, and threat modeling is one of them. NIST SP 800-53 Rev 5 RA-3 explicitly identifies threat modeling as an accepted method for system-level risk assessments, so your threat modeling outputs plug directly into the required documentation.

The EU AI Act Article 9 mandates risk management for high-risk AI systems, though “threat modeling” is not explicitly named. AI-specific frameworks like MAESTRO and ASTRIDE directly satisfy the risk identification requirements. The NSA and CISA’s May 2025 joint guidance further reinforces this, specifically calling for AI threat modeling as part of responsible AI development.

One threat modeling practice, consistently applied and documented, generates audit evidence that satisfies all five frameworks at once.

Start Here

If you’ve read this far and haven’t threat modeled your system yet, here are your first steps:

  1. Pick your riskiest component (authentication, payment processing, or wherever you handle sensitive data).
  2. Pull your team together.
  3. Draw the data flow on a whiteboard.
  4. Mark the trust boundaries.
  5. Ask STRIDE questions at each boundary:
     Who could spoof an identity here?
     What data could be tampered with?
     Where could information leak?
You’ll find more in fifteen minutes than most vulnerability scanners find in an hour. And when you outgrow a whiteboard, tools like pytm, Threagile, and Trent AI’s Threat Assessor can keep that model current as your architecture grows.

Reviewed by Eno Thereska, Co-founder & CEO at Trent AI

Frequently Asked Questions

How often should you update a threat model?


Update threat models when the system architecture changes, when new integrations are added, when new threat intelligence reveals relevant attack vectors, and at minimum, on a quarterly review cycle for stable systems. Rapidly evolving systems should review threat models each sprint.