## Overview

Slack is one of the most popular alert channels for CronJob Guardian. Alerts appear as rich messages in your team's Slack channels with:

- Job failure details
- Error logs and Kubernetes events
- Suggested fixes
- Links to your monitoring dashboard
## Quick Start

Set up Slack alerts in four steps:

### 1. Create a Slack Incoming Webhook

1. Go to your Slack workspace settings
2. Navigate to Apps > Incoming Webhooks
3. Click Add to Slack
4. Select the channel (e.g., #cronjob-alerts)
5. Copy the webhook URL (starts with `https://hooks.slack.com/services/...`)
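Before moving on, it can help to confirm the webhook actually posts to your channel. The sketch below is optional and not part of Guardian; it uses only the Python standard library, and `build_test_request` and the placeholder `WEBHOOK_URL` are illustrative names:

```python
import json
from urllib.request import Request

# Slack incoming webhooks accept a JSON POST body with at least a "text" field.
# WEBHOOK_URL is a placeholder -- substitute the URL you just copied.
WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"

def build_test_request(url: str, text: str) -> Request:
    """Build a POST request in the incoming-webhook payload format."""
    body = json.dumps({"text": text}).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")

req = build_test_request(WEBHOOK_URL, "Webhook test from CronJob Guardian setup")
# urllib.request.urlopen(req) would deliver the message; Slack replies with
# the literal body "ok" when the webhook URL is valid.
```

If the message does not show up in the channel, fix the webhook before wiring it into Guardian — it saves a debugging round-trip later.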
### 2. Create a Kubernetes Secret

Store the webhook URL in a secret:

```bash
kubectl create secret generic slack-webhook \
  --namespace cronjob-guardian \
  --from-literal=url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
```
### 3. Create an AlertChannel

Apply the Slack AlertChannel configuration:

```bash
kubectl apply -f - << EOF
apiVersion: guardian.illenium.net/v1alpha1
kind: AlertChannel
metadata:
  name: slack-ops
spec:
  type: slack
  slack:
    webhookSecretRef:
      name: slack-webhook
      namespace: cronjob-guardian
      key: url
    defaultChannel: "#cronjob-alerts"
  rateLimiting:
    maxAlertsPerHour: 100
    burstLimit: 10
EOF
```
### 4. Reference in CronJobMonitor

Use the AlertChannel in your monitors:

```yaml
apiVersion: guardian.illenium.net/v1alpha1
kind: CronJobMonitor
metadata:
  name: critical-jobs
  namespace: production
spec:
  selector:
    matchLabels:
      tier: critical
  alerting:
    channelRefs:
      - name: slack-ops # References the AlertChannel
```
## Basic Slack AlertChannel

Here's the complete example from the repository:

```yaml
# Slack AlertChannel
# Sends alerts to a Slack channel via incoming webhook
apiVersion: guardian.illenium.net/v1alpha1
kind: AlertChannel
metadata:
  name: slack-alerts
spec:
  type: slack
  slack:
    webhookSecretRef:
      name: slack-webhook
      namespace: cronjob-guardian
      key: url
    defaultChannel: "#alerts"
  rateLimiting:
    maxAlertsPerHour: 100
    burstLimit: 10
```
## Configuration Options

| Option | Description |
| --- | --- |
| `slack.webhookSecretRef.name` | Reference to a Kubernetes Secret containing the webhook URL |
| `slack.webhookSecretRef.namespace` | Namespace where the Secret exists |
| `slack.webhookSecretRef.key` | Key within the Secret (usually `url`) |
| `slack.defaultChannel` | Default Slack channel for alerts (e.g., `#cronjob-alerts`); can be overridden per-alert if needed |
| `rateLimiting.maxAlertsPerHour` | Maximum alerts to send per hour, to prevent alert storms (default: unlimited) |
| `rateLimiting.burstLimit` | Maximum alerts in a short burst (default: 10) |
## Multiple Slack Channels

Create separate AlertChannels for different teams or alert types. For example, an operations channel (development and incidents channels follow the same pattern, each with its own secret and Slack channel name):

```yaml
apiVersion: guardian.illenium.net/v1alpha1
kind: AlertChannel
metadata:
  name: slack-ops
spec:
  type: slack
  slack:
    webhookSecretRef:
      name: slack-webhook-ops
      namespace: cronjob-guardian
      key: url
    defaultChannel: "#ops-alerts"
```
### Creating Multiple Secrets

Create a separate webhook URL for each channel:

```bash
# Ops team
kubectl create secret generic slack-webhook-ops \
  --namespace cronjob-guardian \
  --from-literal=url=https://hooks.slack.com/services/YOUR/OPS/WEBHOOK

# Dev team
kubectl create secret generic slack-webhook-dev \
  --namespace cronjob-guardian \
  --from-literal=url=https://hooks.slack.com/services/YOUR/DEV/WEBHOOK

# Incidents
kubectl create secret generic slack-webhook-incidents \
  --namespace cronjob-guardian \
  --from-literal=url=https://hooks.slack.com/services/YOUR/INCIDENT/WEBHOOK
```
## Routing Alerts by Severity

Send critical alerts to one channel and warnings to another:

```yaml
apiVersion: guardian.illenium.net/v1alpha1
kind: CronJobMonitor
metadata:
  name: tiered-alerts
  namespace: production
spec:
  selector:
    matchLabels:
      tier: critical
  deadManSwitch:
    enabled: true
    maxTimeSinceLastSuccess: 25h
  sla:
    enabled: true
    minSuccessRate: 95
  alerting:
    channelRefs:
      # Critical alerts go to incidents channel
      - name: slack-incidents
        severities: [critical]
      # All alerts go to ops channel for visibility
      - name: slack-ops
        severities: [critical, warning]
```

With this configuration:

- Critical job failures go to both #incidents and #ops-alerts
- Warning-level alerts (e.g., SLA breaches) only go to #ops-alerts
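The routing rule is simple: an alert is delivered to every referenced channel whose `severities` list contains the alert's severity. A small sketch of that semantics (illustrative only, not Guardian's actual implementation; `route_alert` is a made-up name):

```python
# Illustrative model of channelRefs severity routing: an alert goes to
# every channel whose "severities" list includes the alert's severity.
def route_alert(severity, channel_refs):
    """Return names of channels that should receive an alert of this severity."""
    return [ref["name"] for ref in channel_refs
            if severity in ref.get("severities", [])]

# The channelRefs from the manifest above:
channel_refs = [
    {"name": "slack-incidents", "severities": ["critical"]},
    {"name": "slack-ops", "severities": ["critical", "warning"]},
]

# route_alert("critical", channel_refs) -> ["slack-incidents", "slack-ops"]
# route_alert("warning", channel_refs)  -> ["slack-ops"]
```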
## Alert Format

Slack alerts from CronJob Guardian include rich formatting:

```text
🔴 CronJob Failed: production/daily-backup

Job: daily-backup
Namespace: production
Execution: job-daily-backup-28501234
Time: 2026-03-04 08:15:23 UTC
Duration: 5m 32s
Exit Code: 1

📋 Error Logs (last 50 lines):
Error: Failed to connect to database
Connection timeout after 30s
Host: postgres.production.svc.cluster.local:5432

💡 Suggested Fix:
Database connection failed. Check:
1. Database pod is running: kubectl get pods -n production -l app=postgres
2. Service is healthy: kubectl get svc postgres -n production
3. Network policies allow connection from this namespace

🔗 View Details: https://guardian.example.com/jobs/production/daily-backup
```
## Customizing Alert Content

Control what appears in alerts via the CronJobMonitor configuration:

```yaml
apiVersion: guardian.illenium.net/v1alpha1
kind: CronJobMonitor
metadata:
  name: detailed-alerts
  namespace: production
spec:
  selector:
    matchLabels:
      tier: critical
  alerting:
    channelRefs:
      - name: slack-ops
    # Customize alert content
    includeContext:
      logs: true           # Include pod logs
      logLines: 100        # Number of log lines
      events: true         # Include Kubernetes events
      podStatus: true      # Include pod status details
      suggestedFixes: true # Include fix suggestions
```

For production debugging, set `logLines: 200` to get more context. For high-volume environments, set `logLines: 50` to keep messages concise.
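The `logLines` trade-off comes down to how much of the pod log tail is attached to each message. A one-line sketch of that behavior (`tail_log` is a hypothetical helper for illustration, not Guardian's code):

```python
# Hypothetical illustration of the logLines setting: only the last N
# lines of the failing pod's log are attached to the alert.
def tail_log(log_text: str, log_lines: int = 100) -> str:
    return "\n".join(log_text.splitlines()[-log_lines:])

log = "\n".join(f"line {i}" for i in range(1, 201))
# tail_log(log, 2) -> "line 199\nline 200"
```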
## Rate Limiting

Prevent Slack from being overwhelmed during incidents:

```yaml
apiVersion: guardian.illenium.net/v1alpha1
kind: AlertChannel
metadata:
  name: slack-ops
spec:
  type: slack
  slack:
    webhookSecretRef:
      name: slack-webhook
      namespace: cronjob-guardian
      key: url
  rateLimiting:
    maxAlertsPerHour: 100 # Max 100 alerts per hour
    burstLimit: 10        # Max 10 alerts in quick succession
```
### How Rate Limiting Works

- **Burst Limit**: Allows up to 10 alerts immediately
- **Hourly Limit**: After the burst, limits to 100 alerts per hour
- **Dropped Alerts**: When limits are exceeded, alerts are dropped (not queued)
- **Per Channel**: Each AlertChannel has independent rate limits

Rate limiting drops alerts when limits are exceeded. If you hit limits frequently:

- Increase `maxAlertsPerHour`
- Use `alertDelay` in monitors to reduce transient alerts
- Tune `suppressDuplicatesFor` to avoid repeated alerts
- Consider splitting monitors across multiple channels
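The behavior described above matches a classic token bucket: capacity `burstLimit`, refilled at `maxAlertsPerHour` tokens per hour, with no queueing. The sketch below models that behavior for intuition; the field names mirror the AlertChannel spec, but the algorithm is an assumption, not Guardian's actual implementation:

```python
import time

# Token-bucket sketch of the burst + hourly limits described above.
class AlertRateLimiter:
    def __init__(self, max_alerts_per_hour=100, burst_limit=10, clock=time.monotonic):
        self.rate = max_alerts_per_hour / 3600.0  # tokens added per second
        self.capacity = float(burst_limit)        # bucket size = burst allowance
        self.tokens = float(burst_limit)          # start full: burst passes immediately
        self.clock = clock                        # injectable clock for testing
        self.last = clock()

    def allow(self) -> bool:
        """True if an alert may be sent now; over-limit alerts are dropped, not queued."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Because a full bucket drains instantly under an alert storm, the burst limit, not the hourly limit, is what protects the channel in the first seconds of an incident.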
## Testing Your Slack Integration

### 1. Verify the AlertChannel is ready

```bash
kubectl get alertchannel slack-ops
kubectl describe alertchannel slack-ops
```

Look for:

```text
Status:
  Conditions:
    - Type: Ready
      Status: True
      Message: Slack webhook configured successfully
```
### 2. Create a test CronJob that fails

```bash
kubectl apply -f - << EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: test-failure
  namespace: production
  labels:
    tier: critical
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: test
              image: busybox
              command: ["sh", "-c", "echo 'Testing alert' && exit 1"]
          restartPolicy: Never
EOF
```
### 3. Wait for the job to fail

```bash
# Watch for job execution
kubectl get jobs -n production -w

# Check job status
kubectl get jobs -n production --sort-by=.metadata.creationTimestamp
```
### 4. Check Slack for the alert

Within a few minutes, you should see an alert in your configured Slack channel. If no alert appears, check:

```bash
# Monitor logs
kubectl logs -n cronjob-guardian deployment/cronjob-guardian-controller-manager

# Check AlertChannel status
kubectl get alertchannel slack-ops -o yaml
```

### 5. Clean up the test

```bash
kubectl delete cronjob test-failure -n production
```
## Troubleshooting

### No alerts appearing in Slack

Check the webhook secret:

```bash
kubectl get secret slack-webhook -n cronjob-guardian -o jsonpath='{.data.url}' | base64 -d
```

Verify the URL is correct and starts with `https://hooks.slack.com/services/`.

Check the AlertChannel status and look for error messages:

```bash
kubectl describe alertchannel slack-ops
```

Check the controller logs:

```bash
kubectl logs -n cronjob-guardian deployment/cronjob-guardian-controller-manager | grep -i slack
```

### Webhook URL invalid or revoked

The webhook URL may have been revoked or is incorrect:

1. Go to Slack workspace settings
2. Check the Incoming Webhooks configuration
3. Generate a new webhook if needed
4. Update the secret:

```bash
kubectl create secret generic slack-webhook \
  --namespace cronjob-guardian \
  --from-literal=url=<new-webhook-url> \
  --dry-run=client -o yaml | kubectl apply -f -
```

### Too many alerts

Enable rate limiting:

```yaml
rateLimiting:
  maxAlertsPerHour: 50
  burstLimit: 5
```

Suppress duplicates in monitors:

```yaml
alerting:
  suppressDuplicatesFor: 1h
  alertDelay: 5m
```

Filter alerts by severity:

```yaml
channelRefs:
  - name: slack-ops
    severities: [critical] # Only critical, no warnings
```

### Alerts going to wrong channel

Check the `defaultChannel` in the AlertChannel:

```yaml
slack:
  defaultChannel: "#cronjob-alerts" # Must match your Slack channel name
```

The channel name must include the `#` prefix and match exactly.
## Best Practices

- **Separate channels by team**: Create one AlertChannel per team with its own Slack channel for better alert ownership.
- **Use severity filtering**: Send critical alerts to high-visibility channels and warnings to team channels.
- **Enable rate limiting**: Always set `maxAlertsPerHour` to prevent alert storms during incidents.
- **Include context**: Set `includeContext.logs: true` to provide debugging information in alerts.
## Next Steps

- **PagerDuty Alerts**: Escalate critical alerts to on-call engineers
- **Webhook Alerts**: Integrate with custom systems
- **Advanced Monitoring**: Configure SLA tracking and maintenance windows
- **Alert Channels Reference**: Complete API documentation