
Overview

Full mode (full-bootstrap.sh) provides a complete production-like environment with Istio service mesh (ambient mode), ArgoCD for GitOps, and 2 worker nodes. It includes warm cluster support for faster subsequent starts.
Cold start: ~250s | Warm start: instant (hash match) or manifest reapply only

Key Features

  • Cilium + Hubble - Full CNI with network observability
  • Istio ambient mode - Service mesh without sidecars
  • ArgoCD - GitOps continuous deployment
  • 2 worker nodes - Production-like multi-node setup
  • Warm cluster support - Hash-based state detection
  • 4-phase parallel execution - Optimized deployment

Command Usage

```shell
# Standard bootstrap (warm cluster aware)
full-bootstrap

# Force clean rebuild
full-bootstrap --clean
```

Flags

| Flag | Description |
| --- | --- |
| `--clean` | Delete the existing cluster and force a cold start |

Warm Cluster (Hash Gate)

Full-bootstrap uses the same hash-based detection as dev-fast, but with a separate state directory:

Hash Storage

  • .bootstrap-state-full/cluster - Hash of kind-config.yaml + images.sh
  • .bootstrap-state-full/manifest - Hash of entire manifests-result/ directory
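
The exact hashing commands live in full-bootstrap.sh; as a minimal sketch, hashes over these inputs could be computed like this (the helper names are assumptions, not the script's actual identifiers):

```shell
# Sketch only: cluster_hash and manifest_hash are hypothetical names;
# full-bootstrap.sh's real implementation may hash differently.
cluster_hash() {
  # Combine the cluster-defining inputs into one digest.
  cat kind-config.yaml images.sh | sha256sum | awk '{print $1}'
}

manifest_hash() {
  # Hash every file under manifests-result/ in a stable order so the
  # digest does not depend on filesystem enumeration order.
  find manifests-result -type f -print0 | sort -z \
    | xargs -0 cat | sha256sum | awk '{print $1}'
}
```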

Decision Logic

| Case | Condition | Result |
| --- | --- | --- |
| 1 | Cluster running, cluster hash match, manifest hash match | Instant complete - health check only, no deployment |
| 2 | Cluster running, cluster hash match, manifest hash mismatch | Warm reapply - regenerate and reapply manifests only |
| 3 | Cluster hash mismatch or cluster stopped | Cold start (~250s) - full cluster rebuild |
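
The three outcomes can be sketched as a small gate function (function name and argument order are illustrative, not the script's actual code):

```shell
# Illustrative only: full-bootstrap.sh's real gate differs in detail.
decide_start_mode() {
  local running="$1"                     # "yes" if the kind cluster is up
  local cluster_old="$2" cluster_new="$3"
  local manifest_old="$4" manifest_new="$5"

  if [ "$running" = "yes" ] && [ "$cluster_old" = "$cluster_new" ]; then
    if [ "$manifest_old" = "$manifest_new" ]; then
      echo "instant"        # health check only
    else
      echo "warm-reapply"   # regenerate + reapply manifests
    fi
  else
    echo "cold"             # full rebuild, ~250s
  fi
}
```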

4-Phase Parallel Execution

Phase 1: Preparation (Parallel)

All tasks run concurrently:
```shell
timed_step "phase1-prep" parallel_run \
  "kind-cluster:_step_kind_cluster" \
  "gen-manifests:bash ${SCRIPT_DIR}/gen-manifests.sh" \
  "otel-build:bash ${SCRIPT_DIR}/load-otel-collector-image.sh build" \
  "image-preload:_step_image_preload"
```
  • kind-cluster - Create cluster with kind-config.yaml (2 workers)
  • gen-manifests - Generate all Kubernetes manifests
  • otel-build - Build OTel collector image
  • image-preload - Pull all images (PRELOAD_IMAGES_FULL)
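
parallel_run is a repo helper that takes name:command pairs; a minimal sketch of that pattern might look like the following (the real helper presumably adds per-task logging, timing, and output capture):

```shell
# Minimal sketch of a parallel_run-style helper; hypothetical, not the
# repo's actual implementation.
parallel_run() {
  local pids=() spec
  for spec in "$@"; do
    local name="${spec%%:*}" cmd="${spec#*:}"
    bash -c "$cmd" &
    pids+=("$name:$!")          # remember name alongside the PID
  done
  local entry rc=0
  for entry in "${pids[@]}"; do
    wait "${entry#*:}" || { echo "task ${entry%%:*} failed" >&2; rc=1; }
  done
  return "$rc"                  # nonzero if any task failed
}
```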

Phase 2: Network Setup (Sequential)

```shell
timed_step "phase2-network" _step_network_setup
```
1. Load Cilium image - Load Cilium into the kind cluster
2. Load OTel image - Load the custom OTel collector image
3. Background image load - Start loading remaining images in the background
4. Install Cilium - Deploy Cilium CNI (overlaps with image loading)
5. Install Istio - Deploy Istio in ambient mode
6. Apply Gateway API CRDs - Install Gateway API v1.5.0 CRDs
7. Wait for images - Ensure all images have loaded
8. PostgreSQL early start - Start PostgreSQL (~87s startup time)
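
Each phase is wrapped in timed_step; a plausible minimal shape for such a wrapper is shown below (hypothetical, and the repo's helper may also persist timings):

```shell
# Sketch of a timed_step-style wrapper: run a named step and report how
# long it took. Assumption, not the script's actual code.
timed_step() {
  local name="$1"; shift
  local start="$SECONDS"
  "$@"
  local rc="$?"
  echo "[${name}] done in $((SECONDS - start))s (rc=${rc})"
  return "$rc"
}
```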

Phase 3: Deploy Services (Parallel)

```shell
timed_step "phase3-deploy" parallel_run \
  "argocd-apply:_step_argocd_apply" \
  "garage:_step_garage_deploy" \
  "observability:_step_observability" \
  "cloudflared:_step_cloudflared"
```
All services deploy concurrently:
  • argocd - GitOps controller
  • garage - S3-compatible storage (wait for ready + setup)
  • observability - Prometheus stack, Loki, Tempo, OTel collector
  • cloudflared - Cloudflare tunnel (if credentials exist)
Traefik is omitted in full mode - Istio Gateway handles ingress.

Phase 4: Wait for Pods (Parallel)

Wait for critical pods in parallel:
```shell
kubectl wait --for=condition=available deployment/argocd-server -n argocd
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=postgresql -n database
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=grafana -n observability
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=prometheus -n observability
```

Cluster Configuration

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: microservice-infra
networking:
  disableDefaultCNI: true  # Cilium will be installed
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 30080   # ArgoCD HTTP
        hostPort: 30080
      - containerPort: 30443   # ArgoCD HTTPS
        hostPort: 30443
      - containerPort: 30300   # Grafana
        hostPort: 30300
      - containerPort: 30081   # Traefik HTTP
        hostPort: 30081
      - containerPort: 30444   # Traefik HTTPS
        hostPort: 30444
      - containerPort: 31235   # Hubble UI
        hostPort: 31235
      - containerPort: 30090   # Prometheus
        hostPort: 30090
      - containerPort: 30093   # Alertmanager
        hostPort: 30093
  - role: worker
  - role: worker
```

Service Mesh (Istio)

Ambient Mode

Full-bootstrap installs Istio in ambient mode, which provides:
  • No sidecars - Reduces resource overhead
  • Transparent encryption - mTLS without modifying pods
  • Layer 4 + Layer 7 - Network and application-level policies
  • Gateway API - Modern ingress configuration

Installation

Istio is installed via istio-install.sh during Phase 2:
```shell
bash "${SCRIPT_DIR}/istio-install.sh"
```

GitOps (ArgoCD)

Features

  • Automated sync - Deploys applications from Git
  • Multi-environment - Manages dev/staging/prod
  • Rollback - Easy revert to previous versions
  • Health monitoring - Application status tracking

Access

After bootstrap, retrieve the admin password:
```shell
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d
```
The UI is available at http://localhost:30080.

Exposed Services

| Service | URL | Credentials |
| --- | --- | --- |
| ArgoCD | http://localhost:30080 | admin / <secret> |
| Grafana | http://localhost:30300 | admin / admin |
| Prometheus | http://localhost:30090 | - |
| Alertmanager | http://localhost:30093 | - |
| Hubble UI | http://localhost:31235 | - |
| Traefik | http://localhost:30081 | - |
The ArgoCD password is displayed at the end of bootstrap and stored in the argocd-initial-admin-secret secret.

Performance

| Aspect | Notes |
| --- | --- |
| Cold start | ~250s full cluster build |
| Warm start | Instant if nothing changed |
| Manifest changes | Fast reapply only |
| Resource usage | Highest of all bootstrap modes |

Warm Reapply Logic

When manifests change but cluster config is unchanged:
```shell
# From full-bootstrap.sh:275-286
_warm_reapply() {
  echo "=== Warm reapply: manifests changed ==="
  bash "${SCRIPT_DIR}/gen-manifests.sh"

  _step_argocd_apply
  _step_observability
  _step_postgresql_apply
  _step_cloudflared
  _step_wait_all

  _save_hashes
  echo "=== Warm reapply complete ==="
}
```
This skips:
  • Cluster creation
  • Image pulling/loading
  • Cilium installation
  • Istio installation
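
The final _save_hashes step records the new state for the next run's gate. A standalone sketch of that idea (function name and file layout are assumptions based on the state directory described above):

```shell
# Hypothetical sketch: persist a named hash under the full-mode state
# directory so the next bootstrap run can compare against it.
save_hash() {
  local name="$1" value="$2"
  mkdir -p ".bootstrap-state-full"
  printf '%s\n' "$value" > ".bootstrap-state-full/${name}"
}
```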

Use Cases

  • Full-stack validation - Test the complete production-like environment
  • Service mesh testing - Validate Istio policies and mTLS
  • GitOps workflows - Test ArgoCD deployments and sync
  • Multi-node testing - Validate distributed workloads

Next Steps

After full-bootstrap:
1. Access ArgoCD - Log in to ArgoCD and sync applications (`open http://localhost:30080`)
2. Monitor with Grafana - View metrics and dashboards (`open http://localhost:30300`)
3. Check Hubble - Visualize network traffic (`open http://localhost:31235`)

Comparison

See the Bootstrap Mode Comparison to understand when to use full-bootstrap vs other modes.
