Overview

UTMStack’s alert management system provides comprehensive capabilities for detecting, triaging, and responding to security events, helping SOC analysts identify threats quickly and take appropriate action. Alerts are managed through the alert management interface at /iframe, and detection rules are configured at /alerting-rules.

Real-time Alerts

Receive instant notifications when security rules are triggered

Alert Rules

Configure custom detection rules at /alerting-rules

Alert Triage

Prioritize and classify alerts by severity and category

Alert Workflows

Automate alert enrichment and response actions

Alert Components

Alert Attributes

Each alert in UTMStack contains:
  • Alert ID - Unique identifier for tracking and reference
  • Severity - Critical, High, Medium, Low, Informational
  • Status - New, Acknowledged, In Progress, Resolved, False Positive
  • Category - Malware, Intrusion, Data Exfiltration, Policy Violation, etc.
  • Source - Originating data source or integration
  • Timestamp - When the alert was triggered
  • Description - Detailed alert information and context
  • Related Events - Associated log events and indicators
  • Assigned To - User responsible for alert investigation
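
These attributes map naturally onto a structured record. Below is a minimal sketch in Python using dataclasses and enums; the field names follow the list above, but the exact schema is an illustration, not UTMStack's internal format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class Severity(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"
    INFORMATIONAL = "Informational"

class Status(Enum):
    NEW = "New"
    ACKNOWLEDGED = "Acknowledged"
    IN_PROGRESS = "In Progress"
    RESOLVED = "Resolved"
    FALSE_POSITIVE = "False Positive"

@dataclass
class Alert:
    alert_id: str                      # unique identifier for tracking
    severity: Severity
    status: Status
    category: str                      # e.g. "Malware", "Intrusion"
    source: str                        # originating data source or integration
    timestamp: datetime                # when the alert was triggered
    description: str                   # detailed alert information and context
    related_events: list[str] = field(default_factory=list)  # associated event IDs
    assigned_to: Optional[str] = None  # analyst responsible for investigation
```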

Alert Severities

Critical

Immediate threat requiring urgent response (e.g., active breach)

High

Serious security event requiring prompt investigation

Medium

Notable security event requiring timely review

Low

Minor security event or policy violation

Informational

Contextual information or baseline activity

Alert Management Workflow

For SOC Analysts

  1. Monitor Alert Queue - Navigate to /iframe to view the alert management dashboard and active alerts
  2. Review New Alerts - Sort alerts by severity and review new, unacknowledged alerts
  3. Acknowledge Alert - Acknowledge the alert to indicate you are investigating it
  4. Gather Context - Review alert details, related events, and threat intelligence from /threat-intelligence
  5. Investigate Events - Use /discover to query logs and gather additional evidence
  6. Determine Disposition - Classify the alert as True Positive, False Positive, or Benign Positive
  7. Take Action based on the disposition:
     • If True Positive: Create an incident at /incident or trigger a SOAR playbook at /soar
     • If False Positive: Update the alerting rule at /alerting-rules to reduce noise
     • If Benign Positive: Document and resolve with notes
  8. Close Alert - Update alert status to Resolved with appropriate disposition and notes (the status transitions in steps 3 and 8 are scripted in the sketch below)
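
The acknowledge and resolve transitions in steps 3 and 8 can also be scripted. The sketch below assumes a hypothetical REST endpoint and bearer token; the paths and field names are placeholders for illustration, not the documented UTMStack API.

```python
import requests  # third-party HTTP client

BASE_URL = "https://utmstack.example.com/api"  # hypothetical API base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

def acknowledge(alert_id: str) -> None:
    # Step 3: mark the alert as under investigation
    resp = requests.patch(f"{BASE_URL}/alerts/{alert_id}",
                          json={"status": "Acknowledged"}, headers=HEADERS)
    resp.raise_for_status()

def resolve(alert_id: str, disposition: str, notes: str) -> None:
    # Step 8: close the alert with a disposition and investigation notes
    resp = requests.patch(f"{BASE_URL}/alerts/{alert_id}",
                          json={"status": "Resolved",
                                "disposition": disposition,
                                "notes": notes},
                          headers=HEADERS)
    resp.raise_for_status()
```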

Alerting Rules

Rule Types

UTMStack supports multiple types of detection rules accessible at /alerting-rules:

Signature-based Rules

Match known attack patterns and indicators of compromise (IOCs)

Anomaly Detection

Detect statistical deviations from baseline behavior

Correlation Rules

Identify patterns across multiple events or data sources

Threshold Rules

Trigger alerts when metrics exceed defined thresholds

Behavioral Rules

Detect suspicious user or entity behavior patterns

Compliance Rules

Monitor compliance violations and policy breaches
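
To make one of these types concrete, the sketch below evaluates a threshold rule: count events matching a predicate inside a sliding time window and fire when the count exceeds a limit. The event shape is an assumption for illustration, not UTMStack's rule format.

```python
from datetime import datetime, timedelta

def threshold_rule(events, predicate, limit, window=timedelta(minutes=5)):
    """Fire when more than `limit` matching events fall inside any `window`."""
    times = sorted(e["timestamp"] for e in events if predicate(e))
    for i, start in enumerate(times):
        # count matching events inside the window that opens at `start`
        if sum(1 for t in times[i:] if t - start <= window) > limit:
            return True
    return False

# Example: more than 10 failed logins within 5 minutes triggers the rule
failed_logins = [{"timestamp": datetime(2024, 1, 1, 9, 0, s), "event": "auth_failure"}
                 for s in range(12)]
print(threshold_rule(failed_logins, lambda e: e["event"] == "auth_failure", limit=10))
# -> True
```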

Creating Alert Rules

  1. Navigate to Rule Management - Go to /alerting-rules to access the rule configuration interface
  2. Select Rule Type - Choose the appropriate rule type for your detection logic
  3. Define Trigger Conditions - Specify the conditions that will trigger the alert (query, threshold, correlation)
  4. Set Alert Properties - Configure severity, category, description, and recommended actions
  5. Configure Enrichment - Add threat intelligence lookups or asset context enrichment
  6. Define Actions - Set notification methods and automated response actions from /soar
  7. Test Rule - Validate the rule against historical data before enabling (see the backtest sketch after this list)
  8. Enable and Monitor - Activate the rule and monitor its effectiveness over time
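
Step 7's validation against historical data amounts to replaying stored events through the rule's trigger condition and reviewing what would have fired. A minimal backtest sketch, assuming events are available as dictionaries; the query interface is an assumption.

```python
def backtest(rule_condition, historical_events):
    """Replay historical events through a rule condition and report hits."""
    hits = [e for e in historical_events if rule_condition(e)]
    rate = len(hits) / max(len(historical_events), 1)
    print(f"{len(hits)} of {len(historical_events)} events "
          f"({rate:.1%}) would have triggered this rule")
    return hits

# Example condition: flag outbound transfers larger than 1 GB
backtest(
    lambda e: e.get("direction") == "outbound" and e.get("bytes", 0) > 1_000_000_000,
    historical_events=[{"direction": "outbound", "bytes": 2_500_000_000},
                       {"direction": "inbound", "bytes": 4_000_000_000}],
)
# -> 1 of 2 events (50.0%) would have triggered this rule
```

A high hit rate over historical data is an early warning that the rule will be noisy once enabled.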

Alert Triage and Prioritization

Triage Criteria

Effective alert triage considers multiple factors:
  1. Severity Level - Critical and High alerts take priority
  2. Asset Criticality - Alerts affecting critical assets (from /data-sources) rank higher
  3. Threat Context - Alerts with threat intelligence matches require attention
  4. Alert Frequency - Recurring alerts may indicate a persistent threat
  5. Business Impact - Potential impact on business operations
  6. Compliance Requirements - Regulatory obligations may mandate response times

Prioritization Matrix

| Alert Severity | Critical Asset | Non-Critical Asset |
| --- | --- | --- |
| Critical | P1 (Immediate) | P2 (Urgent) |
| High | P2 (Urgent) | P3 (High) |
| Medium | P3 (High) | P4 (Medium) |
| Low | P4 (Medium) | P5 (Low) |
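
The matrix translates directly into a lookup table in code. A minimal sketch; the P1-P5 labels come from the matrix above, while treating Informational alerts as P5 is an assumption.

```python
PRIORITY_MATRIX = {
    # (severity, affects_critical_asset): priority
    ("Critical", True): "P1 (Immediate)", ("Critical", False): "P2 (Urgent)",
    ("High", True):     "P2 (Urgent)",    ("High", False):     "P3 (High)",
    ("Medium", True):   "P3 (High)",      ("Medium", False):   "P4 (Medium)",
    ("Low", True):      "P4 (Medium)",    ("Low", False):      "P5 (Low)",
}

def prioritize(severity: str, affects_critical_asset: bool) -> str:
    # Informational alerts are absent from the matrix; defaulting them
    # to P5 is an assumption, not documented behavior
    return PRIORITY_MATRIX.get((severity, affects_critical_asset), "P5 (Low)")

print(prioritize("High", affects_critical_asset=True))  # -> P2 (Urgent)
```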

Alert Enrichment

Automatic Enrichment

UTMStack automatically enriches alerts with:

Asset Information

Asset details from /data-sources including owner, location, and criticality

Threat Intelligence

IOC matches and reputation data from /threat-intelligence

User Context

User details from Active Directory integration at /active-directory

Historical Data

Related historical alerts and incidents

Geolocation

Geographic information for IP addresses

Related Events

Correlated log events from /discover
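
Conceptually, automatic enrichment is a pipeline of lookups keyed on fields of the alert, each merging extra context into the record. The sketch below shows that shape; the stub lookup functions stand in for UTMStack's integrations and are assumptions.

```python
def enrich(alert: dict, lookups) -> dict:
    """Apply each enrichment lookup in turn, merging results into the alert."""
    for lookup in lookups:
        alert.update(lookup(alert))
    return alert

# Stub lookups; real ones would query asset inventory, threat intel, AD, and geo data
def asset_context(alert):
    return {"asset_owner": "it-ops", "asset_criticality": "high"}

def threat_intel(alert):
    return {"ioc_match": alert.get("src_ip") in {"203.0.113.50"}}

enriched = enrich({"alert_id": "A-1042", "src_ip": "203.0.113.50"},
                  [asset_context, threat_intel])
print(enriched["ioc_match"])  # -> True
```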

Manual Enrichment

Analysts can add enrichment through:
  • Investigation notes and analyst comments
  • External threat intelligence lookups
  • Contact with asset owners for context
  • Links to related incidents and tickets

Alert Response Actions

Manual Actions

Create Incident

Escalate alert to formal incident at /incident

Block IOC

Add indicators to blocklists via integrations

Isolate Asset

Quarantine affected systems through network controls

Reset Credentials

Force password reset via /active-directory

Automated Actions (SOAR)

Configure automated responses through /soar:
  • Automatic ticket creation in external systems
  • IOC enrichment via threat intelligence feeds
  • Email notifications to stakeholders
  • Webhook triggers to external security tools
  • Asset isolation through firewall rules
  • User account suspension for compromised credentials
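
The pattern behind these automations is a condition-action playbook: each entry pairs a match condition with a handler. A minimal sketch with stub handlers; real ones would call ticketing, firewall, and directory APIs.

```python
# Stub handlers; each stands in for a call to an external system
def create_ticket(alert):
    print(f"ticket created for {alert['alert_id']}")

def isolate_asset(alert):
    print(f"isolation rule pushed for {alert['host']}")

def suspend_user(alert):
    print(f"account {alert['user']} suspended")

PLAYBOOK = [
    (lambda a: a["severity"] in {"Critical", "High"}, create_ticket),
    (lambda a: a.get("category") == "Malware", isolate_asset),
    (lambda a: a.get("category") == "Credential Compromise", suspend_user),
]

def run_playbook(alert: dict) -> None:
    """Run every action whose condition matches the alert."""
    for condition, action in PLAYBOOK:
        if condition(alert):
            action(alert)

run_playbook({"alert_id": "A-1042", "severity": "Critical",
              "category": "Malware", "host": "ws-117"})
# -> ticket created for A-1042
# -> isolation rule pushed for ws-117
```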

Alert Metrics and Reporting

Key Performance Indicators

Track alert effectiveness with these metrics:
  • Alert Volume - Total alerts per day/week
  • Mean Time to Acknowledge (MTTA) - Average time to acknowledge alerts
  • Mean Time to Resolve (MTTR) - Average time to resolve alerts
  • False Positive Rate - Percentage of alerts marked as false positives
  • True Positive Rate - Percentage of alerts representing real threats
  • Alert Coverage - Percentage of MITRE ATT&CK techniques covered
  • Rule Effectiveness - Alerts triggered per rule and disposition
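
MTTA, MTTR, and the disposition rates fall out directly from alert timestamps and dispositions. A minimal computation, assuming each alert record carries created, acknowledged, and resolved times:

```python
from datetime import datetime

def mean_minutes(alerts, start_key, end_key):
    """Average elapsed minutes between two timestamps across alerts."""
    deltas = [(a[end_key] - a[start_key]).total_seconds() / 60
              for a in alerts if a.get(end_key)]
    return sum(deltas) / len(deltas) if deltas else 0.0

alerts = [
    {"created": datetime(2024, 1, 1, 9, 0),
     "acknowledged": datetime(2024, 1, 1, 9, 12),
     "resolved": datetime(2024, 1, 1, 11, 0),
     "disposition": "True Positive"},
    {"created": datetime(2024, 1, 1, 10, 0),
     "acknowledged": datetime(2024, 1, 1, 10, 4),
     "resolved": datetime(2024, 1, 1, 10, 30),
     "disposition": "False Positive"},
]

mtta = mean_minutes(alerts, "created", "acknowledged")
mttr = mean_minutes(alerts, "created", "resolved")
fp_rate = sum(a["disposition"] == "False Positive" for a in alerts) / len(alerts)
print(f"MTTA {mtta:.0f} min, MTTR {mttr:.0f} min, FP rate {fp_rate:.0%}")
# -> MTTA 8 min, MTTR 75 min, FP rate 50%
```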

Alert Reporting

  1. Access Dashboard - Navigate to /dashboard to view alert metrics
  2. Select Time Period - Choose the reporting period (daily, weekly, monthly)
  3. Analyze Trends - Review alert volume trends and identify spikes or anomalies
  4. Review Disposition - Analyze true positive vs. false positive rates
  5. Identify Tuning Needs - Determine which rules need adjustment at /alerting-rules
  6. Generate Report - Export metrics for management reporting

Alert Tuning and Optimization

Reducing False Positives

High false positive rates lead to alert fatigue. Regular rule tuning is essential for maintaining analyst effectiveness.
Strategies for reducing false positives:
  1. Whitelist Known Good - Exclude legitimate business activity
  2. Adjust Thresholds - Fine-tune numeric thresholds based on baseline
  3. Add Context Filters - Include additional conditions (time, user, asset)
  4. Correlation Over Single Events - Require multiple indicators
  5. Severity Adjustment - Lower the severity of noisy but still-useful alerts so they do not crowd the high-priority queue
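
Strategies 1 and 3 amount to wrapping a rule's condition with exclusion filters, as sketched below. The host whitelist and business-hours window are illustrative assumptions; the point is that known-good context is checked before the base condition ever fires.

```python
from datetime import datetime

KNOWN_GOOD_HOSTS = {"backup01.corp.local", "scanner.corp.local"}  # illustrative

def with_filters(base_condition, business_hours=range(8, 18)):
    """Wrap a rule condition with whitelist and time-of-day context filters."""
    def filtered(event):
        if event.get("host") in KNOWN_GOOD_HOSTS:
            return False  # strategy 1: known-good sources never fire
        if event["timestamp"].hour in business_hours:
            return False  # strategy 3: this rule targets off-hours activity only
        return base_condition(event)
    return filtered

# Example: admin logins are suspicious only off-hours and from unknown hosts
admin_login = with_filters(lambda e: e.get("event") == "admin_login")
print(admin_login({"event": "admin_login", "host": "laptop42",
                   "timestamp": datetime(2024, 1, 1, 2, 30)}))  # -> True
```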

Rule Maintenance Workflow

  1. Review Alert Statistics - Analyze false positive rates by rule at /alerting-rules
  2. Identify Problematic Rules - Find rules with high volume and a low true positive rate (see the ranking sketch after this list)
  3. Analyze False Positives - Review common characteristics of false positive alerts
  4. Update Rule Logic - Modify rule conditions to exclude false positive patterns
  5. Test Changes - Validate the updated rule against historical data
  6. Monitor Impact - Track changes in the false positive rate after tuning
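
Step 2's search for high-volume, low-true-positive rules is a straightforward ranking over per-rule statistics. A sketch, assuming dispositions have been tallied per rule:

```python
def problematic_rules(stats, min_volume=50, max_tp_rate=0.1):
    """Rank rules that fire often but rarely produce true positives."""
    flagged = []
    for rule, counts in stats.items():
        total = counts["true_positive"] + counts["false_positive"]
        tp_rate = counts["true_positive"] / total if total else 0.0
        if total >= min_volume and tp_rate <= max_tp_rate:
            flagged.append((rule, total, tp_rate))
    return sorted(flagged, key=lambda r: r[1], reverse=True)  # noisiest first

stats = {
    "failed-login-burst": {"true_positive": 4, "false_positive": 396},
    "malware-hash-match": {"true_positive": 12, "false_positive": 3},
}
for rule, volume, tp_rate in problematic_rules(stats):
    print(f"{rule}: {volume} alerts, {tp_rate:.0%} true positive")
# -> failed-login-burst: 400 alerts, 1% true positive
```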

Alert Integration

Data Sources

Alerts can be generated from various sources configured at /integrations:
  • Network security devices (firewalls, IDS/IPS)
  • Endpoint detection and response (EDR) solutions
  • Cloud security platforms (AWS, Azure, GCP)
  • Email security gateways
  • Web proxies and DNS logs
  • Authentication systems and Active Directory
  • Application logs and custom integrations

External Systems

Integrate alerts with external platforms:
  • Ticketing Systems - ServiceNow, Jira, Zendesk
  • Communication Platforms - Slack, Microsoft Teams, Email
  • Threat Intelligence - MISP, ThreatConnect, feeds at /threat-intelligence
  • SOAR Platforms - Trigger playbooks at /soar
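
As a concrete example of the communication-platform integration, posting an alert summary to a Slack incoming webhook is a single JSON POST; Slack's incoming webhooks accept a {"text": ...} payload. The webhook URL below is a placeholder you would generate in Slack.

```python
import json
import urllib.request

def notify_slack(alert: dict, webhook_url: str) -> None:
    """Post a one-line alert summary to a Slack incoming webhook."""
    text = (f"[{alert['severity']}] {alert['category']} "
            f"alert {alert['alert_id']}: {alert['description']}")
    req = urllib.request.Request(webhook_url,
                                 data=json.dumps({"text": text}).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# notify_slack({"alert_id": "A-1042", "severity": "High", "category": "Intrusion",
#               "description": "Multiple failed admin logins"},
#              "https://hooks.slack.com/services/T000/B000/XXXX")  # placeholder URL
```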

Best Practices

Alert Management

  1. Acknowledge Promptly - Acknowledge alerts within SLA timeframes
  2. Document Thoroughly - Add detailed notes during investigation
  3. Update Status Regularly - Keep alert status current for team visibility
  4. Communicate Escalations - Notify team when escalating to incidents
  5. Track Metrics - Monitor MTTA and MTTR for continuous improvement

Rule Development

  1. Start Conservative - Begin with higher thresholds and refine over time
  2. Test Before Deploying - Validate rules against historical data
  3. Version Control - Document rule changes and rationale
  4. Peer Review - Have rules reviewed by other analysts
  5. Regular Audits - Quarterly review of all active rules

Alerting Rules

Configure and manage detection rules

Incident Management

Escalate alerts to formal incidents

SOAR Automation

Automate alert response actions

Threat Intelligence

Enrich alerts with threat intel
