Debug the Buttercup CRS (Cyber Reasoning System) running on Kubernetes. This plugin provides systematic triage workflows, diagnostic commands, and service-specific debugging for all components of the Buttercup fuzzing platform.
When many pods restart around the same time, check for a shared-dependency failure before investigating individual pods.
The most common cascade: Redis goes down → every service gets ConnectionError/ConnectionRefusedError → mass restarts. Look for the same error across multiple `--previous` logs. If they all say `redis.exceptions.ConnectionError`, debug Redis, not the individual services.
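One quick way to spot a shared-dependency failure is to pull the last error line from each pod's pre-restart logs and count duplicates: if one error dominates, suspect the shared dependency. A minimal sketch (the `crs` namespace comes from this doc; the grep pattern and `--tail` depth are assumptions to adapt):

```shell
#!/bin/sh
# Summarize the final error line from each crashed pod's previous logs.
# If one error dominates the counts, debug the shared dependency first.

summarize_errors() {
  # stdin: one error line per pod; stdout: "count error", most frequent first
  sort | uniq -c | sort -rn
}

# Cluster-dependent part -- only runs if kubectl is available.
if command -v kubectl >/dev/null 2>&1; then
  for pod in $(kubectl get pods -n crs -o name); do
    # Last Error/Exception line from the pre-restart container log
    kubectl logs -n crs "$pod" --previous --tail=200 2>/dev/null |
      grep -E 'Error|Exception' | tail -n 1
  done | summarize_errors
fi
```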
```shell
# Connect to Redis
kubectl exec -n crs <redis-pod> -- redis-cli
```
Inside redis-cli:
```
INFO memory          # used_memory_human, maxmemory
INFO persistence     # aof_enabled, aof_last_bgrewrite_status
INFO clients         # connected_clients, blocked_clients
INFO stats           # total_connections_received, rejected_connections
CLIENT LIST          # see who's connected
DBSIZE               # total keys

# AOF configuration
CONFIG GET appendonly   # is AOF enabled?
CONFIG GET appendfsync  # fsync policy: everysec, always, or no
```
Buttercup uses Redis streams with consumer groups.
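To check whether consumers are keeping up, the streams and their groups can be inspected from `redis-cli`. A sketch with placeholder stream and group names (the real key names are not given here, substitute them):

```shell
# Inspect a Redis stream and its consumer groups (run inside the Redis pod).
# Arguments are placeholders -- pass the real stream key and group name.
inspect_stream() {
  STREAM_KEY="$1"; GROUP="$2"
  redis-cli XLEN "$STREAM_KEY"               # total entries in the stream
  redis-cli XINFO STREAM "$STREAM_KEY"       # length, first/last entry IDs
  redis-cli XINFO GROUPS "$STREAM_KEY"       # per-group pending count (and lag on Redis 7+)
  redis-cli XPENDING "$STREAM_KEY" "$GROUP"  # messages delivered but not yet acked
}

# Example (hypothetical names): inspect_stream tasks_stream build_group
```

A large, growing XPENDING count usually means a consumer crashed without acking, so its messages sit unclaimed.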
```shell
# What is /data mounted on? (disk vs tmpfs matters for AOF performance)
kubectl exec -n crs <redis-pod> -- mount | grep /data
kubectl exec -n crs <redis-pod> -- du -sh /data/
```
When behavior doesn’t match expectations, verify Helm values actually took effect:
```shell
# Check a pod's actual resource limits
kubectl get pod -n crs <pod-name> -o jsonpath='{.spec.containers[0].resources}'

# Check a pod's actual volume definitions
kubectl get pod -n crs <pod-name> -o jsonpath='{.spec.volumes}'
```
Helm values template typos (e.g. wrong key names) silently fall back to chart defaults. If deployed resources don’t match the values template, check for key name mismatches.
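To confirm which values Helm actually rendered with, compare the release's computed values against the live pod spec. A sketch assuming a release named `buttercup` and a pod named `redis-0` (both are guesses; `helm list -n crs` and `kubectl get pods -n crs` show the real names):

```shell
# Release and pod names below are assumptions -- substitute the real ones.
RELEASE=buttercup
POD=redis-0

# User-supplied values Helm rendered with (add --all to include chart defaults)
if command -v helm >/dev/null 2>&1; then
  helm get values "$RELEASE" -n crs
fi

# What actually landed in the pod spec
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pod -n crs "$POD" -o jsonpath='{.spec.containers[0].resources}'
fi
```

If a key appears in your values file but not in `helm get values --all` under the expected path, it was misspelled and the chart fell back to its default.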
All services export traces and metrics via OpenTelemetry. If Signoz is deployed (global.signoz.deployed: true), use its UI for distributed tracing.
```shell
# Check if OTEL is configured
kubectl exec -n crs <pod> -- env | grep OTEL

# Verify Signoz pods are running (if deployed)
kubectl get pods -n platform -l app.kubernetes.io/name=signoz
```
Traces are especially useful for diagnosing slow task processing, identifying which service in a pipeline is the bottleneck, and correlating events across the scheduler → build-bot → fuzzer-bot chain.
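Even without the Signoz UI, a single request can be followed by grepping one trace id across the services in the chain, assuming the services include trace ids in their log lines (the `app=` label names and log format here are assumptions):

```shell
# Follow one trace id across the scheduler -> build-bot -> fuzzer-bot chain.
# Service label names and log format are assumptions -- adapt to the deployment.
trace_grep() {
  TRACE_ID="$1"
  for svc in scheduler build-bot fuzzer-bot; do
    echo "== $svc =="
    kubectl logs -n crs -l app="$svc" --tail=1000 2>/dev/null | grep -F "$TRACE_ID"
  done
}

# Example: trace_grep 4bf92f3577b34da6a3ce929d0e0e4736
```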