This guide will walk you through setting up a local Kubernetes cluster with the complete observability and networking stack.
Make sure you’ve completed the Prerequisites before starting this guide.

Choose Your Bootstrap Mode

The platform offers three bootstrap modes depending on your needs:
| Mode | Command | CNI | Nodes | Setup Time | Use Case |
|---|---|---|---|---|---|
| Dev-fast | bootstrap | kindnetd | 1 | ~120s | Daily development |
| Cilium | bootstrap --full | Cilium + Hubble | 2 | ~200s | CNI testing |
| Full | full-bootstrap | Cilium + Istio | 3 | ~250s | Full-stack validation |
For your first run, we recommend Dev-fast mode: it is the fastest to bootstrap and supports warm clusters, so after the initial setup, subsequent runs complete almost instantly if nothing has changed.
Step 1: Navigate to the project directory

cd /path/to/microservice-infra
Make sure direnv has loaded the environment (you should see the welcome message with available commands).
Step 2: Run the bootstrap command

bootstrap
This command will:
  • Create a Kind cluster with a single control-plane node
  • Generate Kubernetes manifests using nixidy
  • Pull and load required container images
  • Deploy the full observability stack (Prometheus, Grafana, Loki, Tempo)
  • Deploy Traefik ingress controller
  • Deploy PostgreSQL database
  • Deploy Garage object storage
The first run (cold start) takes approximately 120 seconds on modern hardware.
Step 3: Wait for bootstrap to complete

You’ll see output showing the bootstrap phases:
=== Phase 1: Parallel prep (kind + manifests + OTel + images) ===
Cluster 'microservice-infra' already exists.
Generating manifests...
Fetching OTel Collector image...
Pulling 28 images in parallel...

=== Phase 2: Load images into kind ===
Loading images into kind cluster...

=== Phase 3: Deploy core services ===
Deploying Garage...
Deploying observability stack...
Deploying Traefik...
Deploying Cloudflared...

=== Phase 4: Wait for pods ===
Waiting for PostgreSQL to be ready...
Waiting for Grafana to be ready...
Waiting for Prometheus to be ready...

✓ Bootstrap complete!
Step 4: Verify the installation

Check that all pods are running:
kubectl get pods -A
You should see pods in various namespaces (kube-system, monitoring, storage, etc.) with status Running.

For a quick health check, use:
debug-k8s
This will show pod status and recent events across all namespaces.

Accessing Services

Once bootstrap is complete, you can access the platform services via NodePort on your localhost:

Service Ports

| Port | Service | Credentials | Notes |
|---|---|---|---|
| 30081 | Traefik HTTP | - | Ingress controller |
| 30090 | Prometheus | - | Metrics database |
| 30093 | Alertmanager | - | Alert management |
| 30300 | Grafana | admin / admin | Dashboards & visualization |
Example: Access Grafana at http://localhost:30300
Default Grafana credentials are admin / admin. You’ll be prompted to change the password on first login.

Verify Grafana

1. Open Grafana in your browser

2. Log in with the default credentials:

  • Username: admin
  • Password: admin
3. Explore the pre-configured dashboards

Go to Dashboards → Browse to see pre-configured dashboards for Kubernetes, Prometheus, Loki, and Tempo.
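You can also confirm Grafana is reachable without opening a browser by polling its health endpoint (/api/health is a standard Grafana HTTP API route). The helper below is a generic sketch:

```shell
# Generic helper: poll a URL until it responds successfully (up to ~60s).
wait_for_http() {
  url="$1"
  for _ in $(seq 1 30); do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    sleep 2
  done
  return 1
}

# Grafana exposes a lightweight health endpoint on its NodePort:
# wait_for_http http://localhost:30300/api/health && echo "Grafana is up"
```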

Warm Cluster Support

One of the key features of dev-fast mode is warm cluster support. The bootstrap script uses hash-based detection to skip unnecessary work:
  • On subsequent runs, if the cluster and manifests haven’t changed, bootstrap completes instantly
  • If only manifests changed, only the changed resources are reapplied (~10-15s)
  • If the cluster configuration changed, a full rebuild occurs (~120s)
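Conceptually, the detection can be sketched like this (an illustrative sketch only, not the actual bootstrap script; the state-file name and hashed inputs are assumptions):

```shell
# Illustrative sketch of warm-cluster detection (not the real bootstrap
# script; the state-file name and hashed inputs are assumptions).
bootstrap_if_changed() {
  state_file=".bootstrap-hash"
  # Hash everything that feeds the cluster: generated manifests + kind config.
  current_hash=$(cat manifests-result/* kind-config.yaml 2>/dev/null | sha256sum | cut -d' ' -f1)
  if [ -f "$state_file" ] && [ "$(cat "$state_file")" = "$current_hash" ]; then
    echo "warm: nothing changed, skipping"
  else
    echo "cold: changes detected, redeploying"
    # ... real work would happen here, e.g. kubectl apply -f manifests-result/
    echo "$current_hash" > "$state_file"
  fi
}
```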
Force a clean rebuild:
bootstrap --clean
This deletes the existing cluster and starts fresh.

Alternative Bootstrap Modes

Cilium Mode

For testing with Cilium CNI and Hubble network observability:
bootstrap --full
# or
bootstrap-full
This creates a 2-node cluster (control-plane + 1 worker) with Cilium and Hubble. Additional ports in Cilium mode:
| Port | Service | Notes |
|---|---|---|
| 31235 | Hubble UI | Network flow visualization |
Access Hubble UI at http://localhost:31235

Full-Stack Mode

For testing the complete setup with Istio service mesh and ArgoCD:
full-bootstrap
This creates a 3-node cluster (control-plane + 2 workers) with Cilium, Istio ambient mode, and ArgoCD. Additional ports in full mode:
| Port | Service | Notes |
|---|---|---|
| 31235 | Hubble UI | Network flow visualization |
| 30080 | ArgoCD HTTP | GitOps controller |
| 30443 | ArgoCD HTTPS | GitOps controller (TLS) |
Full-stack mode also supports warm cluster. Use full-bootstrap --clean to force a clean rebuild.

Managing Your Cluster

Stop the Cluster (Preserve State)

To stop the cluster containers without deleting data:
cluster-stop
This stops the cluster's Docker containers without deleting them, preserving all data. Use this when you need to free up system resources but want to resume work later.

Restart a Stopped Cluster

To resume a stopped cluster:
cluster-start
The cluster will resume with all data and state preserved.
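Under the hood, stopping and restarting a kind cluster amounts to stopping and restarting its node containers. A rough equivalent with kind and docker directly (a sketch only; assumes the cluster is named microservice-infra, and the real scripts may differ):

```shell
# Sketch of what cluster-stop / cluster-start roughly do.
# Assumes the kind cluster is named "microservice-infra".
cluster_stop()  { docker stop  $(kind get nodes --name microservice-infra); }
cluster_start() { docker start $(kind get nodes --name microservice-infra); }
```

Because docker stop leaves the containers and their volumes in place, all cluster state survives the stop/start cycle.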

Delete the Cluster

To completely remove the cluster and all data:
cluster-down
After cluster-down, you’ll need to run bootstrap again to recreate the cluster.

Working with Manifests

The platform uses nixidy to generate Kubernetes manifests from Nix expressions.

Regenerate Manifests

If you modify any Nix files in nixidy/:
gen-manifests
This regenerates manifests in the manifests-result/ directory.

Apply Manifest Changes

After regenerating manifests, apply them to the cluster:
kubectl apply -f manifests-result/

Watch for Changes

To automatically regenerate and apply manifests when Nix files change:
watch-manifests
This runs watchexec to monitor the nixidy/ directory and automatically regenerate and apply the manifests whenever a .nix file changes.
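The underlying watchexec invocation might look roughly like this (an illustrative sketch; the real wrapper may use different flags):

```shell
# Roughly what watch-manifests runs (illustrative; the real wrapper may differ).
watch_manifests() {
  watchexec --watch nixidy --exts nix -- \
    sh -c 'gen-manifests && kubectl apply -f manifests-result/'
}
```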

Monitoring and Debugging

View Cluster Status

# Get all nodes
kubectl get nodes

# Get all pods across all namespaces
kubectl get pods -A

# Get events
kubectl get events -A --sort-by=.lastTimestamp

Quick Debug Command

The debug-k8s command shows a quick overview:
debug-k8s
Output:
=== Pod status ===
NAMESPACE     NAME                          READY   STATUS    RESTARTS
kube-system   coredns-7db6d8ff4d-abcde     1/1     Running   0
monitoring    prometheus-0                  1/1     Running   0
monitoring    grafana-5d7b9c8f6d-xyz12     1/1     Running   0
...

=== Recent events ===
(last 10 events sorted by timestamp)
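If debug-k8s is unavailable (for example, outside the direnv shell), a manual approximation based on its output above might look like this (a sketch; the real command may do more):

```shell
# Manual approximation of debug-k8s, based on its output shown above.
debug_k8s_manual() {
  echo "=== Pod status ==="
  kubectl get pods -A
  echo ""
  echo "=== Recent events ==="
  kubectl get events -A --sort-by=.lastTimestamp | tail -10
}
```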

Check Logs

# View logs for a specific pod
kubectl logs -n monitoring grafana-5d7b9c8f6d-xyz12

# Follow logs in real-time
kubectl logs -f -n monitoring prometheus-0

Cilium Network Observability (Cilium/Full mode only)

# Check Cilium status
cilium status

# Open Hubble UI in browser
cilium hubble ui
# Opens http://localhost:12000

# Observe network flows from CLI
hubble observe -n monitoring

Next Steps

Now that your platform is running, you can:
  • Deploy applications: Use the companion microservice-app repository to deploy sample applications
  • Explore dashboards: Check out the pre-configured Grafana dashboards at http://localhost:30300
  • Monitor metrics: View Prometheus targets and metrics at http://localhost:30090
  • Customize the stack: Modify the Nix configurations in nixidy/ to add or remove components

Troubleshooting

Bootstrap hangs or fails

If bootstrap gets stuck, check Docker is running and you have sufficient resources:
docker ps
docker stats
Try a clean restart:
cluster-down
bootstrap --clean

Pods stuck in Pending state

Check cluster resources and events:
kubectl describe pod <pod-name> -n <namespace>
kubectl get events -A --sort-by=.lastTimestamp | tail -20

Port conflicts

If you see port binding errors, make sure no other services are using the platform ports (30080-31235). You can check with:
# On Linux/macOS
lsof -i :30300

# On all platforms via netstat
netstat -an | grep LISTEN | grep 30300
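To sweep all the platform ports at once, a small helper like this can report conflicts in one pass (a sketch; assumes lsof is available, with the port list taken from the tables above):

```shell
# Report whether each given port already has a listener (requires lsof).
check_ports() {
  for port in "$@"; do
    if lsof -i ":$port" >/dev/null 2>&1; then
      echo "port $port: IN USE"
    else
      echo "port $port: free"
    fi
  done
}

# Sweep the platform's NodePorts:
# check_ports 30080 30081 30090 30093 30300 30443 31235
```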

Images fail to pull

Check your internet connection and Docker daemon status:
docker pull busybox
docker system info
If behind a proxy, ensure Docker is configured with proxy settings.
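On Linux with Docker Engine 23.0 or newer, the daemon's proxy can be configured in /etc/docker/daemon.json (the proxy URL below is a placeholder; restart the daemon after editing):

```json
{
  "proxies": {
    "http-proxy": "http://proxy.example.com:3128",
    "https-proxy": "http://proxy.example.com:3128",
    "no-proxy": "localhost,127.0.0.1"
  }
}
```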

Warm cluster detection fails

If you want to force a specific behavior:
# Force full rebuild
bootstrap --clean

# Manually check cluster health
kubectl get nodes
kind get clusters

For more detailed information about the different bootstrap modes and their internals, check the docs/bootstrap-modes.md file in the repository.
