Author: Ronald Eytchison (Trail of Bits)

Overview

Debug the Buttercup CRS (Cyber Reasoning System) running on Kubernetes. This plugin provides systematic triage workflows, diagnostic commands, and service-specific debugging for all components of the Buttercup fuzzing platform.

When to Use

Pod Failures

Pods in the crs namespace are in CrashLoopBackOff, OOMKilled, or restarting

Cascade Failures

Multiple services restart simultaneously, pointing to a shared-dependency failure

Redis Issues

Redis is unresponsive or showing AOF warnings

Queue Problems

Queues are growing but tasks are not progressing

Resource Pressure

Nodes show DiskPressure, MemoryPressure, or PIDPressure conditions

DinD Failures

Build-bot cannot reach the Docker daemon

Scheduler Issues

Scheduler is stuck and not advancing task state

Health Check Failures

Health check probes are failing unexpectedly

Service Architecture

All pods run in namespace crs. Key services by layer:
  • Infrastructure: redis, dind, litellm, registry-cache
  • Orchestration: scheduler, task-server, task-downloader, scratch-cleaner
  • Fuzzing: build-bot, fuzzer-bot, coverage-bot, tracer-bot, merger-bot
  • Analysis: patcher, seed-gen, program-model, pov-reproducer
  • Interface: competition-api, ui

Triage Workflow

1. Check Pod Status

Look for restarts, CrashLoopBackOff, OOMKilled:
kubectl get pods -n crs -o wide
2. Review Events

See the timeline of what went wrong:
kubectl get events -n crs --sort-by='.lastTimestamp'
3. Filter Warnings

Focus on critical issues:
kubectl get events -n crs --field-selector type=Warning --sort-by='.lastTimestamp'
4. Investigate Specific Pod

Check why a pod restarted:
# Check Last State Reason (OOMKilled, Error, Completed)
kubectl describe pod -n crs <pod-name> | grep -A8 'Last State:'

# Check actual resource limits
kubectl get pod -n crs <pod-name> -o jsonpath='{.spec.containers[0].resources}'

# Crashed container's logs
kubectl logs -n crs <pod-name> --previous --tail=200

# Current logs
kubectl logs -n crs <pod-name> --tail=200

Cascade Detection

When many pods restart around the same time, check for a shared-dependency failure before investigating individual pods.
The most common cascade: Redis goes down → every service gets ConnectionError/ConnectionRefusedError → mass restarts. Look for the same error across multiple --previous logs. If they all say redis.exceptions.ConnectionError, debug Redis, not the individual services.
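The check for a shared error signature across --previous logs can be scripted; a minimal sketch (the grep pattern is the Python Redis client error named above -- adjust it for other clients):

```shell
# Count Redis connection errors in each pod's previous (crashed) container.
# A nonzero count across many different pods points at Redis, not the services.
for pod in $(kubectl get pods -n crs -o name); do
  n=$(kubectl logs -n crs "$pod" --previous --tail=200 2>/dev/null \
        | grep -c 'redis.exceptions.ConnectionError')
  echo "$pod: $n"
done
```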

Historical vs Ongoing Issues

High restart counts don’t necessarily mean an issue is ongoing. Restarts accumulate over a pod’s lifetime. Always distinguish:
  • Use --since=300s to confirm issues are actively happening now
  • Use --timestamps to correlate events across services
  • Check Last State timestamps in describe pod to see when the most recent crash occurred

Redis Debugging

Redis is the backbone. When it goes down, everything cascades.
# Redis pod status
kubectl get pods -n crs -l app.kubernetes.io/name=redis

# Redis logs (AOF warnings, OOM, connection issues)
kubectl logs -n crs -l app.kubernetes.io/name=redis --tail=200
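Beyond pod status and logs, redis-cli inside the pod gives a direct health read. This sketch assumes redis-cli ships in the image (true for the standard Redis and Bitnami images) and no auth; add -a <password> if the deployment sets one:

```shell
# Liveness: should print PONG
kubectl exec -n crs <redis-pod> -- redis-cli ping

# Memory state: used_memory near maxmemory with a noeviction policy
# is a common cause of write errors and AOF trouble
kubectl exec -n crs <redis-pod> -- redis-cli info memory \
  | grep -E 'used_memory_human|maxmemory_human|maxmemory_policy'
```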

Resource Pressure

# Per-pod CPU/memory
kubectl top pods -n crs

# Node-level
kubectl top nodes

# Node conditions (disk pressure, memory pressure, PID pressure)
kubectl describe node <node> | grep -A5 Conditions

# Disk usage inside a pod
kubectl exec -n crs <pod> -- df -h

# What's eating disk
kubectl exec -n crs <pod> -- sh -c 'du -sh /corpus/* 2>/dev/null'
kubectl exec -n crs <pod> -- sh -c 'du -sh /scratch/* 2>/dev/null'

Health Checks

Pods write timestamps to /tmp/health_check_alive. The liveness probe checks file freshness.
# Check health file freshness
kubectl exec -n crs <pod> -- stat /tmp/health_check_alive
kubectl exec -n crs <pod> -- cat /tmp/health_check_alive
If a pod is restart-looping, the health check file is likely going stale because the main process is blocked (e.g. waiting on Redis, stuck on I/O).
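To turn the stat output into an age in seconds (uses stat -c %Y, available in GNU coreutils and busybox; a diagnostic sketch, not part of the probe itself):

```shell
# Age of the health file in seconds; compare against the probe's
# freshness window to confirm staleness
kubectl exec -n crs <pod> -- sh -c \
  'echo "age: $(( $(date +%s) - $(stat -c %Y /tmp/health_check_alive) ))s"'
```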

Service-Specific Quick Reference

DinD (Docker-in-Docker)

Check docker daemon crashes, storage driver errors:
kubectl logs -n crs -l app=dind --tail=100

Build-bot

Check build queue depth, DinD connectivity, OOM during compilation
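A quick DinD-connectivity check from inside a build-bot pod; this assumes the Docker client is pointed at DinD via DOCKER_HOST (pod names are placeholders):

```shell
# Where is the client pointed?
kubectl exec -n crs <build-bot-pod> -- env | grep DOCKER_HOST

# Can it actually reach the daemon? Prints the server version on success
kubectl exec -n crs <build-bot-pod> -- docker info --format '{{.ServerVersion}}'
```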

Fuzzer-bot

Check corpus disk usage, CPU throttling, crash queue backlog

Patcher

Check LiteLLM connectivity, LLM timeout, patch queue depth
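One way to test LiteLLM connectivity from the patcher, assuming curl is available in the patcher image and LiteLLM listens on its default port 4000 (/health/liveliness is the proxy's unauthenticated liveness endpoint):

```shell
# 200 means the LiteLLM proxy is reachable and up
kubectl exec -n crs <patcher-pod> -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://litellm:4000/health/liveliness
```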

Scheduler

The central brain:
kubectl logs -n crs -l app=scheduler --tail=-1 --prefix | grep "WAIT_PATCH_PASS\|ERROR\|SUBMIT"

Deployment Config Verification

When behavior doesn’t match expectations, verify Helm values actually took effect:
# Check a pod's actual resource limits
kubectl get pod -n crs <pod-name> -o jsonpath='{.spec.containers[0].resources}'

# Check a pod's actual volume definitions
kubectl get pod -n crs <pod-name> -o jsonpath='{.spec.volumes}'
Mistyped keys in a Helm values file are silently ignored, so the chart falls back to its defaults. If deployed resources don’t match the values file, check for key-name mismatches.
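helm get values shows what the release was actually deployed with; comparing the --all output (values merged with chart defaults) against your values file surfaces keys that were silently ignored (<release> is the Buttercup release name):

```shell
# User-supplied values only
helm get values <release> -n crs

# Merged with chart defaults -- if a key from your values file is absent
# or has its default here, it was likely mistyped and ignored
helm get values <release> -n crs --all | head -60
```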

Telemetry (OpenTelemetry / Signoz)

All services export traces and metrics via OpenTelemetry. If Signoz is deployed (global.signoz.deployed: true), use its UI for distributed tracing.
# Check if OTEL is configured
kubectl exec -n crs <pod> -- env | grep OTEL

# Verify Signoz pods are running (if deployed)
kubectl get pods -n platform -l app.kubernetes.io/name=signoz
Traces are especially useful for diagnosing slow task processing, identifying which service in a pipeline is the bottleneck, and correlating events across the scheduler → build-bot → fuzzer-bot chain.

Installation

/plugins install debug-buttercup

When NOT to Use

  • Deploying or upgrading Buttercup (use Helm and deployment guides)
  • Debugging issues outside the crs Kubernetes namespace
  • Performance tuning that doesn’t involve a failure symptom
