When you need to tear down your infrastructure, it’s crucial to destroy resources in the reverse order of deployment to avoid dependency issues and orphaned resources.
**Data Loss Warning:** Destroying infrastructure will permanently delete all resources, including:

- Vault secrets and data
- ArgoCD applications and configurations
- All Kubernetes workloads
- The EKS cluster and its nodes
- The VPC and networking

Ensure you have backups of any critical data before proceeding.
## Destruction Order

Destroy layers in the reverse order of deployment:

```
3-apps → 2-platform → 1-infrastructure → bootstrap
```

This order ensures:

- Applications are removed before their underlying services
- Platform components are removed before infrastructure
- Infrastructure is removed before the state backend
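The layer ordering above can be sketched as a small wrapper script. Everything here is illustrative: the actual `terraform destroy` call is commented out so the sketch is safe to run, and the directory layout is assumed from this guide.

```bash
#!/bin/sh
# destroy-all.sh — illustrative wrapper that walks the layers in reverse
# deployment order. The real destroy call is commented out on purpose.
LAYERS="3-apps 2-platform 1-infrastructure"
for layer in $LAYERS; do
  echo "Destroying layer: $layer"
  # (cd "terraform/dev/$layer" && terraform destroy)  # uncomment for a real run
done
# Note: bootstrap is destroyed last, via its own procedure (Step 4).
```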
## Pre-Destruction Checklist

Before destroying resources:

### Backup critical data

```bash
# Export the list of Vault secret paths (values must still be read
# individually, e.g. with `vault kv get`)
vault kv list -format=json secret/ > vault-secrets-backup.json

# Export ArgoCD applications
kubectl get applications -n argocd -o yaml > argocd-apps-backup.yaml

# Export important Kubernetes resources
kubectl get all --all-namespaces -o yaml > k8s-resources-backup.yaml
```
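To keep the exports together, they can be collected into a single timestamped archive. This is a minimal sketch: the Vault export is stubbed with a placeholder so the script runs without a live Vault, and the file names follow the commands above.

```bash
#!/bin/sh
# backup-before-destroy.sh — sketch: gather pre-destruction exports into
# one timestamped tarball. The Vault export is stubbed for illustration.
set -e
BACKUP_DIR="backup-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"
# A real run would use the export commands from the checklist, e.g.:
# vault kv list -format=json secret/ > "$BACKUP_DIR/vault-secrets-backup.json"
echo '{"placeholder": true}' > "$BACKUP_DIR/vault-secrets-backup.json"
tar -czf "$BACKUP_DIR.tar.gz" "$BACKUP_DIR"
echo "Backup written to $BACKUP_DIR.tar.gz"
```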
### Document current state

Take note of:

- Terraform output values
- Custom configurations
- External integrations
- DNS records to clean up manually

### Notify team members

If working in a team, ensure no one else is using the infrastructure.

### Set environment variables

```bash
# Source your environment
source .env
source ~/.vault-secrets/vault.env
```
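A quick preflight can confirm that the sourced files actually set what the destroy steps need. The variable list is an assumption: the two `TF_VAR_*` names come from Step 3 of this guide, while `VAULT_ADDR` is a typical Vault variable that your `vault.env` may or may not define.

```bash
#!/bin/sh
# Preflight: verify that variables needed for `terraform destroy` are set.
# VAULT_ADDR is an assumed name; adjust the list to your environment.
check_destroy_env() {
  missing=0
  for var in TF_VAR_tailscale_auth_key TF_VAR_cloudflare_api_token VAULT_ADDR; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "missing: $var" >&2
      missing=1
    fi
  done
  return $missing
}
if check_destroy_env; then
  echo "environment OK"
else
  echo "fix missing variables before destroying"
fi
```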
## Step 1: Destroy Apps Layer

Remove ArgoCD, Traefik, and application configurations.

### Navigate to apps directory

```bash
cd terraform/dev/3-apps
```

### Review destruction plan

```bash
terraform plan -destroy
```

Review what will be destroyed and verify these are the correct resources.

### Destroy apps layer

```bash
terraform destroy
```

Type `yes` when prompted. This process takes approximately 5-10 minutes.

### Verify destruction

```bash
# Check that the ArgoCD namespace is gone
kubectl get namespace argocd
# Should return: Error from server (NotFound)
```
## Step 2: Destroy Platform Layer

Remove Kubernetes platform components, including Vault.

### Navigate to platform directory

```bash
cd terraform/dev/2-platform
```

### Destroy platform layer

```bash
terraform destroy
```

Type `yes` when prompted. This process takes approximately 5-10 minutes.

### Verify destruction

```bash
# Check that the platform namespaces are gone
kubectl get namespace vault
kubectl get namespace cert-manager
kubectl get namespace external-secrets
```
## Step 3: Destroy Infrastructure Layer

Remove the EKS cluster, VPC, Tailscale, and Vault infrastructure.

### Navigate to infrastructure directory

```bash
cd terraform/dev/1-infrastructure
```

### Review destruction plan

This will destroy:

- The EKS cluster and all workloads
- The VPC and networking
- The Tailscale subnet router
- Vault KMS keys and DynamoDB tables

### Set required environment variables

```bash
export TF_VAR_tailscale_auth_key="tskey-auth-xxxxx"
export TF_VAR_cloudflare_api_token="your-token"
```

These environment variables are required even for destruction, since Terraform must configure its providers before it can delete resources.

### Destroy infrastructure layer

```bash
terraform destroy
```

Type `yes` when prompted. This process takes approximately 15-20 minutes (EKS cluster deletion is the slowest part).

### Verify destruction

```bash
# Check that the VPC is gone
aws ec2 describe-vpcs --filters Name=tag:Name,Values=dev-vpc
# Should return an empty list
```
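Some deletions (the EKS cluster especially) finish minutes after `terraform destroy` returns, so verification checks may need to be polled rather than run once. A minimal poll helper, with the AWS check shown commented out as hypothetical usage:

```bash
#!/bin/sh
# retry — small poll helper for destruction checks (a sketch, not part of
# the repo). Usage: retry <attempts> <delay-seconds> <command...>
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
# Hypothetical usage once AWS credentials are available:
# retry 30 10 sh -c '[ -z "$(aws ec2 describe-vpcs \
#   --filters Name=tag:Name,Values=dev-vpc \
#   --query "Vpcs[].VpcId" --output text)" ]'
```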
## Step 4: Destroy Bootstrap (Optional)

Remove the Terraform state backend. Only do this if you want to completely remove all infrastructure, including state history.

**Permanent Deletion:** Destroying the bootstrap removes:

- All Terraform state files
- The state lock table
- Historical records of the infrastructure

You will not be able to recover this data.

### Migrate state to local

Before destroying the S3 backend, migrate the state to local storage:

```bash
cd terraform/dev/bootstrap
# Comment out the S3 backend in backend.tf
# Then run:
terraform init -migrate-state
```

### Empty S3 bucket

S3 buckets must be emptied before they can be deleted:

```bash
aws s3 rm s3://shipyard-terraform-state-dev --recursive
aws s3api delete-bucket --bucket shipyard-terraform-state-dev
```

Note that on a bucket with versioning enabled, `aws s3 rm` removes only the current object versions; old versions and delete markers must also be deleted before `delete-bucket` will succeed.
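Because state buckets are typically versioned, one way to empty them fully is to delete every recorded version explicitly. A sketch, with the credential-requiring AWS calls commented out; the `--query` expression builds the `{Objects: [...]}` payload that `delete-objects` expects:

```bash
#!/bin/sh
# purge-state-bucket.sh — sketch. On a versioned bucket, old versions and
# delete markers block `delete-bucket` until they are removed.
BUCKET="shipyard-terraform-state-dev"
# Requires credentials; shown commented out:
# aws s3api list-object-versions --bucket "$BUCKET" \
#   --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' \
#   --output json > /tmp/state-versions.json
# aws s3api delete-objects --bucket "$BUCKET" --delete file:///tmp/state-versions.json
# (repeat with DeleteMarkers[] in place of Versions[])
# aws s3api delete-bucket --bucket "$BUCKET"
echo "would purge every object version from $BUCKET"
```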
## Troubleshooting Destruction

### Resources Fail to Destroy

If `terraform destroy` fails with dependency errors:

**ENI deletion errors**

Elastic Network Interfaces may still be attached to resources:

```bash
# List ENIs in the VPC
aws ec2 describe-network-interfaces \
  --filters Name=vpc-id,Values=<vpc-id>

# Delete manually if needed
aws ec2 delete-network-interface --network-interface-id <eni-id>
```

**Security group deletion errors**

Security groups may have dependencies:

```bash
# Find dependent resources
aws ec2 describe-security-groups --group-ids <sg-id>

# Remove rules first
aws ec2 revoke-security-group-ingress --group-id <sg-id> --ip-permissions ...
```

**Subnet deletion errors**

Subnets may still contain resources:

```bash
# List resources in the subnet
aws ec2 describe-network-interfaces \
  --filters Name=subnet-id,Values=<subnet-id>
# Delete manually
```

**Load balancer not deleted**

ALBs created by Kubernetes may not be tracked by Terraform:

```bash
# List ALBs
aws elbv2 describe-load-balancers

# Delete manually
aws elbv2 delete-load-balancer --load-balancer-arn <arn>
```

### Orphaned Kubernetes Resources

If EKS destruction hangs due to stuck Kubernetes resources:

```bash
# Remove finalizers from stuck resources
kubectl patch <resource-type> <resource-name> -n <namespace> \
  -p '{"metadata":{"finalizers":[]}}' --type=merge

# Force delete a namespace
kubectl delete namespace <namespace> --force --grace-period=0
```
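Patching finalizers one object at a time gets tedious when many resources are stuck. A small loop can automate it; this is an illustrative helper (the function name is ours) that requires cluster access to do anything real:

```bash
#!/bin/sh
# strip_finalizers — clear finalizers from every object of a resource type
# in a namespace, so a hung deletion can complete. Illustrative sketch.
strip_finalizers() {
  kind=$1; ns=$2
  for name in $(kubectl get "$kind" -n "$ns" -o name); do
    kubectl patch "$name" -n "$ns" \
      -p '{"metadata":{"finalizers":[]}}' --type=merge
  done
}
# Example once connected to the cluster:
# strip_finalizers applications argocd
```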
### Terraform State Issues

If state is corrupted or locked:

```bash
# Force unlock
terraform force-unlock <lock-id>

# Remove a specific resource from state
terraform state rm <resource-address>

# Refresh state
terraform refresh
```
## Manual Cleanup

After Terraform destruction, manually clean up any leftovers:

### AWS Resources

```bash
# List CloudWatch log groups
aws logs describe-log-groups | grep dev-eks
# Delete a log group
aws logs delete-log-group --log-group-name <name>

# List ECR repositories
aws ecr describe-repositories
# Delete a repository
aws ecr delete-repository --repository-name <name> --force

# List EBS snapshots
aws ec2 describe-snapshots --owner-ids self
# Delete a snapshot
aws ec2 delete-snapshot --snapshot-id <snap-id>
```
### DNS Records

Remove DNS records from Cloudflare:

```bash
# List DNS records
curl -X GET "https://api.cloudflare.com/client/v4/zones/<zone-id>/dns_records" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"

# Delete a record
curl -X DELETE "https://api.cloudflare.com/client/v4/zones/<zone-id>/dns_records/<record-id>" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```
### Tailscale

Remove the subnet router from Tailscale:

1. Go to the Tailscale Machines page
2. Find the subnet router device
3. Click the three-dots menu
4. Select "Remove device"
## Cost Verification

After destruction, verify that no resources are still incurring costs:

```bash
# Check for running EC2 instances
aws ec2 describe-instances --filters Name=instance-state-name,Values=running

# Check for load balancers
aws elbv2 describe-load-balancers

# Check for NAT gateways
aws ec2 describe-nat-gateways --filter Name=state,Values=available

# Check for unattached EBS volumes
aws ec2 describe-volumes --filters Name=status,Values=available
```
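The four checks can be bundled into one script. This sketch defaults to a dry run that only prints the commands, so it is safe without credentials; set `DRY_RUN=0` to execute them (the script name and the `DRY_RUN` toggle are our invention, not part of the repo):

```bash
#!/bin/sh
# cost-check.sh — run the post-destroy cost checks in one pass.
# DRY_RUN=1 (the default here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}
run aws ec2 describe-instances --filters Name=instance-state-name,Values=running
run aws elbv2 describe-load-balancers
run aws ec2 describe-nat-gateways --filter Name=state,Values=available
run aws ec2 describe-volumes --filters Name=status,Values=available
```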
## Starting Fresh

To redeploy infrastructure from scratch:

### Ensure complete cleanup

Verify all resources are destroyed using the AWS Console or CLI.

### Clear local Terraform state

```bash
rm -rf terraform/dev/*/.terraform
rm -rf terraform/dev/*/terraform.tfstate*
```

### Clear cached credentials

```bash
rm -rf ~/.vault-secrets
rm -f ~/.kube/config
```

### Begin new deployment

Start again from the bootstrap step to initialize the Terraform state backend.
## Getting Help

If you encounter issues during destruction:

- Check the Troubleshooting guide
- Review AWS CloudTrail for API errors
- Inspect the Terraform state with `terraform show`
- Use `terraform state list` to see tracked resources

**Never** manually delete the S3 state bucket or DynamoDB lock table before running `terraform destroy` on all layers.