When you need to tear down your infrastructure, it’s crucial to destroy resources in the reverse order of deployment to avoid dependency issues and orphaned resources.
Data Loss Warning: Destroying infrastructure will permanently delete all resources, including:
  • Vault secrets and data
  • ArgoCD applications and configurations
  • All Kubernetes workloads
  • EKS cluster and nodes
  • VPC and networking
Ensure you have backups of any critical data before proceeding.

Destruction Order

Destroy layers in reverse order:
3-apps → 2-platform → 1-infrastructure → bootstrap
This order ensures:
  • Applications are removed before their underlying services
  • Platform components are removed before infrastructure
  • Infrastructure is removed before state backend
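The layer-by-layer teardown described below can be sketched as a loop. This is illustrative, not a script from the repository, and the actual destroy command is commented out so the loop runs as a harmless dry run:

```shell
# Destroy layers in reverse deployment order (bootstrap is handled separately).
# The terraform call is commented out: dry run by default.
layers="3-apps 2-platform 1-infrastructure"

for layer in $layers; do
  echo "destroying terraform/dev/$layer"
  # (cd "terraform/dev/$layer" && terraform destroy)  # uncomment for a real run
done
```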

Pre-Destruction Checklist

Before destroying resources:
1. Backup critical data

# List Vault secret paths (vault kv list records keys only; export each
# secret's values separately with vault kv get)
vault kv list -format=json secret/ > vault-secrets-backup.json

# Export ArgoCD applications
kubectl get applications -n argocd -o yaml > argocd-apps-backup.yaml

# Export important Kubernetes resources
kubectl get all --all-namespaces -o yaml > k8s-resources-backup.yaml
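The three exports above can be bundled into a single timestamped directory. A minimal sketch, assuming a backups/ directory of your choosing; the export commands are commented out so the snippet is safe to run as-is:

```shell
# Bundle the pre-destruction exports into one timestamped directory.
backup_dir="backups/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$backup_dir"

# Uncomment to capture the actual exports:
# vault kv list -format=json secret/ > "$backup_dir/vault-secrets-backup.json"
# kubectl get applications -n argocd -o yaml > "$backup_dir/argocd-apps-backup.yaml"
# kubectl get all --all-namespaces -o yaml > "$backup_dir/k8s-resources-backup.yaml"
echo "backups will be written to $backup_dir"
```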
2. Document current state

Take note of:
  • Terraform output values
  • Custom configurations
  • External integrations
  • DNS records to clean up manually
3. Notify team members

If working in a team, ensure no one else is using the infrastructure.
4. Set environment variables

# Source your environment
source .env
source ~/.vault-secrets/vault.env

Step 1: Destroy Apps Layer

Remove ArgoCD, Traefik, and application configurations.
1. Navigate to apps directory

cd terraform/dev/3-apps
2. Review destruction plan

terraform plan -destroy
Review what will be destroyed. Verify these are the correct resources.
3. Destroy apps layer

terraform destroy
Type yes when prompted. This process takes approximately 5-10 minutes.
4. Verify destruction

# Check that ArgoCD namespace is gone
kubectl get namespace argocd

# Should return: Error from server (NotFound)
If terraform destroy fails due to lingering resources, see Troubleshooting Destruction below.
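Namespace deletion is asynchronous, so the check above may briefly still find the namespace. A small polling helper (a sketch; wait_gone is not part of the repository) blocks until a probe command stops succeeding:

```shell
# Poll until the given probe command fails (resource gone) or timeout elapses.
# Usage: wait_gone <timeout_seconds> <probe command...>
wait_gone() {
  _timeout=$1; shift
  _elapsed=0
  while "$@" >/dev/null 2>&1; do
    [ "$_elapsed" -ge "$_timeout" ] && return 1
    sleep 1
    _elapsed=$((_elapsed + 1))
  done
  return 0
}

# Example: block up to 5 minutes for the argocd namespace to disappear.
# wait_gone 300 kubectl get namespace argocd && echo "argocd namespace gone"
```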

Step 2: Destroy Platform Layer

Remove Kubernetes platform components including Vault.
1. Navigate to platform directory

cd terraform/dev/2-platform
2. Review destruction plan

terraform plan -destroy
3. Destroy platform layer

terraform destroy
Type yes when prompted. This process takes approximately 5-10 minutes.
4. Verify destruction

# Check that platform namespaces are gone
kubectl get namespace vault
kubectl get namespace cert-manager
kubectl get namespace external-secrets

Step 3: Destroy Infrastructure Layer

Remove EKS cluster, VPC, Tailscale, and Vault infrastructure.
1. Navigate to infrastructure directory

cd terraform/dev/1-infrastructure
2. Review destruction plan

terraform plan -destroy
This will destroy:
  • EKS cluster and all workloads
  • VPC and networking
  • Tailscale subnet router
  • Vault KMS keys and DynamoDB tables
3. Set required environment variables

export TF_VAR_tailscale_auth_key="tskey-auth-xxxxx"
export TF_VAR_cloudflare_api_token="your-token"
Environment variables are required even for destruction.
4. Destroy infrastructure layer

terraform destroy
Type yes when prompted. This process takes approximately 15-20 minutes (EKS deletion is the slowest part).
5. Verify destruction

# Check that VPC is gone
aws ec2 describe-vpcs --filters Name=tag:Name,Values=dev-vpc

# Should return empty list

Step 4: Destroy Bootstrap (Optional)

Remove the Terraform state backend. Only do this if you want to completely remove all infrastructure, including state history.
Permanent Deletion: Destroying the bootstrap removes:
  • All Terraform state files
  • State lock table
  • Historical records of infrastructure
You will not be able to recover this data.
1. Migrate state to local

Before destroying the S3 backend, migrate state to local:
cd terraform/dev/bootstrap

# Comment out the S3 backend in backend.tf
# Then run:
terraform init -migrate-state
2. Empty S3 bucket

S3 buckets with versioning enabled must be emptied manually. Note that aws s3 rm --recursive deletes only the current object versions; if the bucket deletion fails, remove the remaining object versions and delete markers with aws s3api delete-objects first:
aws s3 rm s3://shipyard-terraform-state-dev --recursive
aws s3api delete-bucket --bucket shipyard-terraform-state-dev
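Because the bucket is versioned, old object versions and delete markers survive aws s3 rm and block delete-bucket. A sketch of a full purge (version_query is a hypothetical helper, and the aws calls are commented out so this runs as a dry run):

```shell
bucket="shipyard-terraform-state-dev"

# JMESPath query selecting {Key, VersionId} pairs from one response field
# ("Versions" or "DeleteMarkers") of list-object-versions.
version_query() {
  printf '{Objects: %s[].{Key: Key, VersionId: VersionId}}' "$1"
}

for field in Versions DeleteMarkers; do
  echo "would purge $field from s3://$bucket"
  # aws s3api list-object-versions --bucket "$bucket" \
  #   --query "$(version_query "$field")" --output json > "/tmp/$field.json"
  # aws s3api delete-objects --bucket "$bucket" --delete "file:///tmp/$field.json"
done
# aws s3api delete-bucket --bucket "$bucket"
```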
3. Destroy bootstrap

terraform destroy
Type yes when prompted.

Troubleshooting Destruction

Resources Fail to Destroy

If terraform destroy fails with dependency errors:
Elastic Network Interfaces may be attached to resources:
# List ENIs in VPC
aws ec2 describe-network-interfaces \
  --filters Name=vpc-id,Values=<vpc-id>

# Delete manually if needed
aws ec2 delete-network-interface --network-interface-id <eni-id>
Security groups may have dependencies:
# Find dependent resources
aws ec2 describe-security-groups --group-ids <sg-id>

# Remove rules first
aws ec2 revoke-security-group-ingress --group-id <sg-id> --ip-permissions ...
Subnets may still have resources:
# List resources in subnet
aws ec2 describe-network-interfaces \
  --filters Name=subnet-id,Values=<subnet-id>

# Delete manually
ALBs created by Kubernetes may not be tracked by Terraform:
# List ALBs
aws elbv2 describe-load-balancers

# Delete manually
aws elbv2 delete-load-balancer --load-balancer-arn <arn>

Orphaned Kubernetes Resources

If EKS destruction hangs due to Kubernetes resources:
# Delete all finalizers from stuck resources
kubectl patch <resource-type> <resource-name> -n <namespace> \
  -p '{"metadata":{"finalizers":[]}}' --type=merge

# Force delete namespace
kubectl delete namespace <namespace> --force --grace-period=0
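If many resources hold finalizers, the patch can be applied in a sweep. A hedged sketch: the namespace and resource kinds below are illustrative examples, not values from this repository:

```shell
# Strip finalizers from every resource of the listed kinds in a stuck
# namespace, then force-delete the namespace.
ns="argocd"                                          # example stuck namespace
kinds="applications.argoproj.io appprojects.argoproj.io"  # example kinds
patch='{"metadata":{"finalizers":[]}}'

if command -v kubectl >/dev/null 2>&1; then
  for kind in $kinds; do
    for name in $(kubectl get "$kind" -n "$ns" -o name 2>/dev/null); do
      kubectl patch "$name" -n "$ns" --type=merge -p "$patch"
    done
  done
  kubectl delete namespace "$ns" --force --grace-period=0 || true
fi
```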

Terraform State Issues

If state is corrupted or locked:
# Force unlock
terraform force-unlock <lock-id>

# Remove specific resource from state
terraform state rm <resource-address>

# Refresh state (terraform refresh is deprecated on modern Terraform;
# prefer: terraform apply -refresh-only)
terraform refresh

Manual Cleanup

After Terraform destruction, manually clean up:

AWS Resources

# List log groups
aws logs describe-log-groups | grep dev-eks

# Delete log groups
aws logs delete-log-group --log-group-name <name>
# List repositories
aws ecr describe-repositories

# Delete repository
aws ecr delete-repository --repository-name <name> --force
# List snapshots
aws ec2 describe-snapshots --owner-ids self

# Delete snapshot
aws ec2 delete-snapshot --snapshot-id <snap-id>

DNS Records

Remove DNS records from Cloudflare:
# List DNS records
curl -X GET "https://api.cloudflare.com/client/v4/zones/<zone-id>/dns_records" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"

# Delete record
curl -X DELETE "https://api.cloudflare.com/client/v4/zones/<zone-id>/dns_records/<record-id>" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"

Tailscale

Remove subnet router from Tailscale:
  1. Go to Tailscale Machines
  2. Find the subnet router device
  3. Click the three dots menu
  4. Select “Remove device”

Cost Verification

After destruction, verify no resources are still incurring costs:
# Check for running EC2 instances
aws ec2 describe-instances --filters Name=instance-state-name,Values=running

# Check for load balancers
aws elbv2 describe-load-balancers

# Check for NAT gateways
aws ec2 describe-nat-gateways --filter Name=state,Values=available

# Check for EBS volumes
aws ec2 describe-volumes --filters Name=status,Values=available
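The checks above can be folded into a short pass/fail report. This is a sketch: report is a hypothetical helper, and the commented aws calls show how to feed it live counts via the JMESPath length() function:

```shell
# Turn a leftover-resource count into a one-line pass/fail report.
report() {  # usage: report <label> <count>
  if [ "$2" -eq 0 ]; then
    echo "$1: OK"
  else
    echo "$1: $2 still present"
  fi
}

# With the AWS CLI configured, feed it real counts, e.g.:
# report "running EC2 instances" "$(aws ec2 describe-instances \
#   --filters Name=instance-state-name,Values=running \
#   --query 'length(Reservations[].Instances[])' --output text)"
# report "NAT gateways" "$(aws ec2 describe-nat-gateways \
#   --filter Name=state,Values=available \
#   --query 'length(NatGateways)' --output text)"
```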

Starting Fresh

To redeploy infrastructure from scratch:
1. Ensure complete cleanup

Verify all resources are destroyed using AWS Console or CLI.
2. Clear local Terraform state

rm -rf terraform/dev/*/.terraform
rm -rf terraform/dev/*/terraform.tfstate*
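A find-based sweep catches provider lock files too and lets you review the list before deleting anything (a sketch; paths assume the terraform/dev layout used in this guide):

```shell
# List local Terraform artifacts across all layer directories before deleting.
# Dry run: review the output, then append deletion (e.g. -exec rm -rf {} +).
if [ -d terraform/dev ]; then
  find terraform/dev -maxdepth 3 \
    \( -name '.terraform' -o -name 'terraform.tfstate*' -o -name '.terraform.lock.hcl' \) \
    -print
fi
```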
3. Clear cached credentials

rm -rf ~/.vault-secrets
rm -f ~/.kube/config
4. Begin new deployment

Start from the Bootstrap step, which initializes the Terraform state backend.

Getting Help

If you encounter issues during destruction:
  • Check the Troubleshooting guide
  • Review AWS CloudTrail for API errors
  • Check Terraform state: terraform show
  • Use terraform state list to see tracked resources
Never manually delete the S3 state bucket or DynamoDB table before running terraform destroy on all layers.
