Violation Dashboard
After a scan completes, navigate to `/dashboard/{scanId}` to see:
Top Stats:
- Compliance Score Gauge — 0-100 score (higher = more compliant)
- Critical Violations — Highest severity issues requiring immediate action
- High Risk — Important violations that should be addressed
- Accounts Flagged — Unique accounts with at least one violation
Violations Tree:
- Collapsible tree grouped by Severity → Rule → Account
- Click any account to open the Evidence Drawer for detailed inspection
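The top stats above can be derived with a small aggregation over violation records. The sketch below is illustrative, not the actual dashboard code; the field names (`severity`, `account_id`, `status`) are assumptions.

```python
from collections import Counter

def summarize(violations: list[dict]) -> dict:
    """Aggregate the dashboard's top stats; field names are assumed."""
    # Violations marked false_positive are excluded from the summary.
    active = [v for v in violations if v.get("status") != "false_positive"]
    by_severity = Counter(v["severity"] for v in active)
    return {
        "critical": by_severity["CRITICAL"],
        "high": by_severity["HIGH"],
        "accounts_flagged": len({v["account_id"] for v in active}),
    }

stats = summarize([
    {"severity": "CRITICAL", "account_id": "A1", "status": "open"},
    {"severity": "HIGH", "account_id": "A1", "status": "open"},
    {"severity": "HIGH", "account_id": "A2", "status": "false_positive"},
])
print(stats)  # {'critical': 1, 'high': 1, 'accounts_flagged': 1}
```

Note that the dismissed `HIGH` violation on account `A2` contributes neither to the severity counts nor to Accounts Flagged.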
Violations with `status = 'false_positive'` are excluded from the summary to reduce noise. They still appear in the full case view.

Severity Levels
Yggdrasil uses three severity levels:

| Severity | Color | Meaning | Example |
|---|---|---|---|
| CRITICAL | Red | Immediate regulatory risk, potential fines or legal action | Unencrypted personal data, missing DPO for high-risk processing |
| HIGH | Amber | Significant compliance gap, requires prompt remediation | Missing consent for marketing, insufficient access controls |
| MEDIUM | Gray | Lower-priority issue, should be addressed in next review cycle | Incomplete audit logs, missing privacy policy link |
- `CRITICAL` violations have 1.0x weight (full impact)
- `HIGH` violations have 0.75x weight
- `MEDIUM` violations have 0.5x weight
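These weights feed the 0-100 Compliance Score Gauge. The exact scoring formula isn't specified here, so the sketch below assumes a simple weighted-penalty model purely to show how the weights interact.

```python
SEVERITY_WEIGHTS = {"CRITICAL": 1.0, "HIGH": 0.75, "MEDIUM": 0.5}

def compliance_score(violation_severities: list[str], records_scanned: int) -> float:
    """Illustrative 0-100 score: weighted violations as a share of records.

    The real scoring formula may differ; this only demonstrates the
    1.0x / 0.75x / 0.5x severity weighting.
    """
    penalty = sum(SEVERITY_WEIGHTS[s] for s in violation_severities)
    return max(0.0, 100.0 * (1 - penalty / max(records_scanned, 1)))

# One violation of each severity across 100 records costs 2.25 weighted points.
print(round(compliance_score(["CRITICAL", "HIGH", "MEDIUM"], 100), 2))  # 97.75
```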
Confidence Scores
Each violation receives a confidence score (0-100%) indicating how likely it is to be a true positive.

Confidence Components:

1. Rule Quality (0-0.2)
- Measures structural quality of the rule definition
- Higher for rules with specific conditions, lower for broad thresholds
2. Signal Specificity Boost (0-0.2)
- Rules with compound `AND` conditions get a boost
- Example: `amount > 10000 AND type = WIRE` is more specific than `amount > 10000` alone
3. Statistical Anomaly (0-0.3)
- How unusual is this value compared to the dataset distribution?
- Example: an amount far above the dataset's typical values (e.g., a large wire when most transactions are around $1K) gets a high anomaly score
4. Bayesian Precision (0-0.2)
- Historical feedback from your reviews
- Formula: `precision = (1 + approved_count) / (2 + approved_count + false_positive_count)`
- Rules with many false positives lose confidence
- Rules with consistent true positives gain confidence
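The precision formula above is a Laplace-smoothed estimate, transcribed directly here to show how feedback moves it:

```python
def bayesian_precision(approved_count: int, false_positive_count: int) -> float:
    """Laplace-smoothed precision from reviewer feedback (formula from the docs)."""
    return (1 + approved_count) / (2 + approved_count + false_positive_count)

# With no feedback the prior is 0.5; approvals push it up, dismissals pull it down.
print(bayesian_precision(0, 0))  # 0.5
print(bayesian_precision(8, 0))  # 0.9
print(bayesian_precision(1, 7))  # 0.2
```

The `+1`/`+2` smoothing means a rule is never scored 0% or 100% from a handful of reviews; confidence shifts gradually as evidence accumulates.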
5. Criticality Weight (0-0.1)
- `CRITICAL` violations get the highest boost
- `HIGH` violations get a 0.75x boost
- `MEDIUM` violations get a 0.5x boost
Bayesian Learning: The more you review violations (approve or dismiss), the more accurate the confidence scores become. Your feedback trains the system.
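The five components above can be combined into the final 0-100% score. How they are actually combined isn't stated here, so this sketch assumes a straightforward clamped sum of the bounded contributions:

```python
def confidence(rule_quality: float, specificity: float, anomaly: float,
               precision: float, criticality: float) -> int:
    """Illustrative combination of the five components into a 0-100% score.

    Assumes a simple sum; the real formula may differ. Inputs are expected
    in the documented ranges: rule_quality 0-0.2, specificity 0-0.2,
    anomaly 0-0.3, precision contribution 0-0.2, criticality 0-0.1.
    """
    total = rule_quality + specificity + anomaly + precision + criticality
    return round(min(max(total, 0.0), 1.0) * 100)

print(confidence(0.15, 0.2, 0.25, 0.18, 0.1))  # 88
```

Because the ranges sum to at most 1.0, a violation only approaches 100% when every signal (a tight rule, a compound condition, a strong anomaly, good review history, and critical severity) agrees.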
Evidence Drawer
Click any violation in the dashboard to open the Evidence Drawer — a detailed panel showing all context for the violation.

Policy Excerpt
- Shows the exact clause from the regulatory document that was violated
- Includes section reference (e.g., “GDPR Article 5(1)(f)”)
- Displayed as a blockquote with source citation
“Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.” — GDPR Article 5(1)(f)
Rule Logic
- Shows the rule ID and condition summary
- For threshold rules: displays threshold vs. actual value
- For condition-based rules: shows the matched condition logic
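Rendering the rule-logic line can be sketched as below. The rule/record shapes and the rule ID `AML-CTR-01` are hypothetical, chosen only to illustrate the threshold-vs-actual display.

```python
def rule_summary(rule: dict, record: dict) -> str:
    """Render the rule-logic line shown in the drawer; shapes are assumed."""
    if rule["kind"] == "threshold":
        field = rule["field"]
        # Threshold rules show the limit next to the actual value.
        return (f"{rule['id']}: {field} > {rule['threshold']} "
                f"(actual: {record[field]})")
    # Condition-based rules show the matched condition logic.
    return f"{rule['id']}: matched condition {rule['condition']}"

print(rule_summary(
    {"id": "AML-CTR-01", "kind": "threshold", "field": "amount", "threshold": 10000},
    {"amount": 15750},
))  # AML-CTR-01: amount > 10000 (actual: 15750)
```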
AI Explanation
- Deterministic template-generated explanation (not LLM-generated during enforcement)
- Explains why the violation occurred and what rule was violated
- Written in plain language for non-technical stakeholders
“This transaction of $15,750 exceeds the Currency Transaction Report (CTR) threshold of $10,000.”
Explainability by Default: Explanations are generated from string templates, not LLM calls. This makes them deterministic, reproducible, and audit-ready.
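Template-based generation can be sketched as a plain string-formatting step; the template text, keys, and dollar amounts below are illustrative, not the product's actual templates.

```python
TEMPLATES = {
    # Hypothetical template keyed by rule kind; real templates may differ.
    "threshold": ("This transaction of ${amount:,.0f} exceeds the "
                  "{rule_name} threshold of ${threshold:,.0f}."),
}

def explain(kind: str, **fields) -> str:
    """Deterministic: the same inputs always yield the same explanation."""
    return TEMPLATES[kind].format(**fields)

print(explain("threshold", amount=15750, threshold=10000,
              rule_name="Currency Transaction Report (CTR)"))
# This transaction of $15,750 exceeds the Currency Transaction Report (CTR) threshold of $10,000.
```

Because there is no model call, the same violation always produces byte-identical text, which is what makes the explanations reproducible in an audit.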
Historical Context (GDPR Only)
For GDPR violations, the drawer shows:

- Average Historical Fine — Mean fine amount for this article violation (from the Kaggle GDPR Violations dataset)
- Real-World Breach Example — Anonymized description of a past enforcement action
- Article Reference — Full article text with paragraph breakdown
Evidence Grid
- Shows the raw field values from the record that triggered the violation
- Displays as key-value pairs in a monospace font
- Includes transaction details: amount, type, account, recipient, timestamp, etc.
Rule Accuracy (If Available)
For rules validated against ground-truth labels, the drawer shows:

- Precision — % of flagged violations that are true positives
- Recall — % of true violations that were detected
- F1 Score — Harmonic mean of precision and recall
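These are the standard definitions; a minimal reference implementation:

```python
def f1_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall, and F1 from validated rule outcomes.

    tp: flagged and truly a violation; fp: flagged but not a violation;
    fn: a true violation the rule missed.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

m = f1_metrics(tp=90, fp=10, fn=30)
print(round(m["precision"], 2), round(m["recall"], 2), round(m["f1"], 3))
```

A rule with high precision but low recall is quiet and trustworthy yet misses violations; the reverse is noisy but thorough. F1 penalizes either imbalance.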
True Positives vs False Positives
True Positive (TP)
A violation that represents a real compliance issue. Examples:

- A $15K transaction flagged by CTR rule → TP if no CTR was filed
- A database without encryption flagged by GDPR rule → TP if it contains PII
- A user account without marketing consent flagged by GDPR → TP if marketing emails were sent
False Positive (FP)
A violation that does not represent a real compliance issue (the rule matched, but the context is wrong). Examples:

- A $15K transaction flagged by CTR rule → FP if a CTR was already filed (not in dataset)
- A test database without encryption flagged by GDPR rule → FP if it contains synthetic data, not real PII
- A user account without marketing consent flagged by GDPR → FP if no marketing emails were ever sent
Context Matters: A rule match is only a violation if it represents a real compliance gap. Use your domain expertise to evaluate context that isn’t in the dataset.
Review Workflow
Evaluate Context
Consider context outside the dataset:
- Was a required report already filed?
- Is this a test account or production account?
- Is there a valid business exception?
Generate Fix (GDPR Only)
For GDPR violations, click Generate Fix to get AI-powered remediation steps.

What You Get:
- Summary of the fix
- Step-by-step remediation code (SQL, TypeScript, Python, Terraform)
- Estimated effort (e.g., “2-4 hours”)
- Risk level (low, medium, high)
- Applicable frameworks (GDPR, SOC2, ISO 27001)
AML Rules Don’t Get Fixes: AML violations typically require manual investigation (not code changes), so the Generate Fix button is hidden for AML rules.
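The Generate Fix result above can be modeled as a small record. The field names and example values here are assumptions for illustration, not the API's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RemediationFix:
    """Illustrative shape of a Generate Fix result; field names are assumed."""
    summary: str
    steps: list[str]        # remediation code in SQL/TypeScript/Python/Terraform
    estimated_effort: str   # e.g. "2-4 hours"
    risk_level: str         # "low" | "medium" | "high"
    frameworks: list[str] = field(default_factory=list)  # GDPR, SOC2, ISO 27001

fix = RemediationFix(
    summary="Enable column-level encryption for personal data",
    steps=["-- hypothetical SQL remediation step goes here"],
    estimated_effort="2-4 hours",
    risk_level="medium",
    frameworks=["GDPR", "ISO 27001"],
)
print(fix.risk_level)  # medium
```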
Best Practices
1. Start with Critical Violations
   - Sort by severity: Critical → High → Medium
   - Address Critical violations within 24-48 hours
2. Trust High-Confidence Scores
   - Violations with confidence >80% are usually true positives
   - Violations with confidence <50% require extra scrutiny
3. Review Systematically
   - Work through one severity level at a time
   - Review all violations for a single account together (cases view)
   - Document your reasoning in review notes
4. Use Historical Context
   - For GDPR violations, check the average fine amount
   - Use real-world breach examples to understand enforcement trends
5. Feed Back to the System
   - Always mark true positives as approved
   - Always mark false positives as dismissed
   - Your feedback trains the Bayesian precision model
What’s Next?
- Exporting Reports — Export full compliance reports with evidence and reviews
- Rule Management — Adjust rules, disable noisy rules, add custom rules