
Threat Modeling

AegisShield’s core threat modeling engine leverages GPT-4o to generate comprehensive STRIDE-based threat models tailored to your application. The system combines AI-powered analysis with real-time threat intelligence from multiple sources.

Overview

The threat modeling module (threat_model.py) provides three primary capabilities:

  • Threat Generation - AI-powered STRIDE threat identification
  • Image Analysis - Architecture diagram analysis using vision models
  • Prompt Engineering - Sophisticated prompts that frame the model as a security expert with 20+ years of experience

STRIDE Methodology

AegisShield implements the STRIDE threat modeling framework:
  • Spoofing - Identity impersonation attacks
  • Tampering - Data or code modification
  • Repudiation - Denial of actions performed
  • Information Disclosure - Unauthorized data access
  • Denial of Service - Service disruption
  • Elevation of Privilege - Unauthorized access elevation
The system generates 3 threats per STRIDE category (18 total) by default, ensuring comprehensive coverage across all threat types.
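Per-category coverage is easy to verify programmatically. A minimal sketch (not part of the module), assuming each threat is a dict with a "Threat Type" key as shown in the Threat Model Structure section:

```python
from collections import Counter

STRIDE_CATEGORIES = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

def coverage_by_category(threat_model: list[dict]) -> dict[str, int]:
    """Count identified threats per STRIDE category."""
    counts = Counter(t["Threat Type"] for t in threat_model)
    return {category: counts.get(category, 0) for category in STRIDE_CATEGORIES}

# With the default of 3 threats per category, a full model has 18 entries.
example = [{"Threat Type": c} for c in STRIDE_CATEGORIES for _ in range(3)]
assert sum(coverage_by_category(example).values()) == 18
```

A category with fewer than three entries is a hint to enrich the application description and regenerate.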

Core Functions

get_threat_model()

Generates a complete threat model using OpenAI’s API.
All parameters are required:

| Parameter | Type | Description |
| --- | --- | --- |
| api_key | str | OpenAI API key for authentication |
| model_name | str | OpenAI model to use (typically “gpt-4o”) |
| prompt | str | Formatted threat modeling prompt |
Returns: dict[str, Any] - JSON object containing:
  • threat_model: Array of identified threats
  • improvement_suggestions: Recommendations for better descriptions
threat_model.py
from threat_model import get_threat_model

# Generate threat model
response = get_threat_model(
    api_key="your-openai-key",
    model_name="gpt-4o",
    prompt=threat_prompt
)

# Access threats
threats = response["threat_model"]
suggestions = response["improvement_suggestions"]

create_threat_model_prompt()

Creates a comprehensive prompt incorporating application details and threat intelligence.
All parameters are required:

| Parameter | Type | Description |
| --- | --- | --- |
| app_type | str | Application type (e.g., “Web application”) |
| authentication | str | Authentication methods used |
| internet_facing | str | Whether the application is internet-facing |
| industry_sector | str | Industry sector (e.g., “Finance”) |
| sensitive_data | str | Types of sensitive data handled |
| app_input | str | Detailed application description |
| nvd_vulnerabilities | str | NVD CVE data for the technology stack |
| otx_data | str | AlienVault OTX threat intelligence |
| technical_ability | str | User’s technical level (Low/Medium/High) |
Example usage
from threat_model import create_threat_model_prompt

prompt = create_threat_model_prompt(
    app_type="Web application",
    authentication="OAuth2, MFA",
    internet_facing="Yes",
    industry_sector="Finance",
    sensitive_data="PII, Financial records",
    app_input="Banking application with...",
    nvd_vulnerabilities=nvd_results,
    otx_data=otx_results,
    technical_ability="Medium"
)

get_image_analysis()

Analyzes architecture diagrams using OpenAI’s vision-capable models (e.g., GPT-4o).
All parameters are required:

| Parameter | Type | Description |
| --- | --- | --- |
| api_key | str | OpenAI API key |
| model_name | str | Vision-capable model (e.g., “gpt-4o”) |
| prompt | str | Analysis prompt |
| base64_image | str | Base64-encoded image data |
Image analysis example
from threat_model import get_image_analysis, create_image_analysis_prompt
import base64

# Load and encode image
with open("architecture.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode()

# Analyze
analysis = get_image_analysis(
    api_key=api_key,
    model_name="gpt-4o",
    prompt=create_image_analysis_prompt(),
    base64_image=image_data
)

description = analysis["choices"][0]["message"]["content"]

Threat Model Structure

Each threat in the model includes:
Threat structure
{
  "Threat Type": "Spoofing",
  "Scenario": "An attacker could...",
  "Assumptions": [
    {
      "Assumption": "API keys are stored in plaintext",
      "Role": "Developer",
      "Condition": "No key management system"
    }
  ],
  "Potential Impact": "Unauthorized access to...",
  "MITRE ATT&CK Keywords": [
    "credential access",
    "token theft",
    "api abuse"
  ]
}
Assumptions document the conditions that must be true for a threat to be realized:
  • Assumption: What must be true
  • Role: Who is responsible (Developer, Admin, User)
  • Condition: When it applies
This helps prioritize threats based on your actual environment.
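Assumptions can drive a simple triage pass. A sketch (a hypothetical helper, not part of the module), assuming threat dicts follow the structure shown above:

```python
def filter_by_assumptions(threat_model: list[dict], disproved: set[str]) -> list[dict]:
    """Drop threats that rely on an assumption you have verified is false.

    `disproved` holds assumption texts that do NOT hold in your environment
    (e.g. you do use a key management system, so plaintext keys are ruled out).
    """
    return [
        threat for threat in threat_model
        if not any(a["Assumption"] in disproved
                   for a in threat.get("Assumptions", []))
    ]
```

Threats that survive this check rest only on assumptions you have not ruled out, so they are the ones to address first.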

Error Handling

The module implements robust error handling:
Retry logic from threat_model.py
import logging
import time

from requests.exceptions import RequestException, Timeout

logger = logging.getLogger(__name__)

def retry_with_backoff(func, max_retries: int = 3, initial_delay: float = 1.0):
    """Retry with exponential backoff."""
    delay = initial_delay

    for attempt in range(max_retries):
        try:
            return func()
        except (RequestException, Timeout) as e:
            if attempt < max_retries - 1:
                logger.warning(f"Attempt {attempt + 1} failed. Retrying in {delay}s...")
                time.sleep(delay)
                delay *= 2  # Exponential backoff
            else:
                raise ThreatModelAPIError(f"Failed after {max_retries} attempts") from e
API calls may fail due to rate limits or network issues. The retry logic with exponential backoff (a 1s wait after the first failure, then 2s, doubling for any further attempts) helps ensure reliability.
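The sleep schedule between attempts can be computed directly (a standalone sketch, not part of the module):

```python
def backoff_delays(max_retries: int = 3, initial_delay: float = 1.0) -> list[float]:
    """Delays slept between attempts; the final failed attempt raises instead of sleeping."""
    return [initial_delay * 2 ** i for i in range(max_retries - 1)]

# Defaults: two sleeps between three attempts.
print(backoff_delays())   # [1.0, 2.0]
print(backoff_delays(4))  # [1.0, 2.0, 4.0]
```

Note that `max_retries` attempts imply at most `max_retries - 1` sleeps, since the last failure raises immediately.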

Output Format

The threat model is converted to Markdown for display:
json_to_markdown() usage
from threat_model import json_to_markdown

markdown = json_to_markdown(
    threat_model=threats,
    improvement_suggestions=suggestions
)

# Renders as Markdown table
print(markdown)
Output:
| Threat Type | Scenario | Potential Impact | Assumptions |
| --- | --- | --- | --- |
| Spoofing | An attacker could… | Unauthorized access… | - Assumption 1 (Role, Condition) |

Best Practices

Detailed Descriptions

Provide comprehensive application descriptions with architecture details, data flows, and authentication mechanisms for better threat identification.

Include Context

Upload architecture diagrams when available - visual analysis enhances threat detection accuracy.

Specify Tech Stack

Accurate technology selection enables CVE-specific threat identification from NVD.

Review Assumptions

Validate assumptions against your actual environment to prioritize relevant threats.

Integration

The threat model feeds into downstream processes:
See MITRE ATT&CK Integration for how threats are mapped to techniques and Risk Assessment for DREAD scoring.
