Private nodes architecture provides the highest level of isolation by allowing external nodes to join the virtual cluster directly with their own CNI (Container Network Interface), CSI (Container Storage Interface), and networking stack. This mode eliminates dependency on the host cluster’s infrastructure for workload execution.
Introduced in v0.27: Private nodes is a Pro feature that requires a vCluster Pro license.
How It Works
In private nodes mode, the vCluster control plane runs in a host cluster namespace (or standalone), while workload nodes join the virtual cluster directly via Konnectivity tunnels. These nodes run their own CNI and CSI and can be completely isolated from the host cluster:
┌────────────────── Host Cluster ──────────────────┐
│                                                  │
│  ┌────────────────────────────────────────────┐  │
│  │ Namespace: vcluster-my-vcluster            │  │
│  │                                            │  │
│  │  ┌──────────────────────────────────────┐  │  │
│  │  │ vCluster Control Plane               │  │  │
│  │  │ - API Server                         │  │  │
│  │  │ - Syncer                             │  │  │
│  │  │ - Konnectivity Server                │  │  │
│  │  │ - Cloud Controller Manager           │  │  │
│  │  └──────────────────────────────────────┘  │  │
│  │                                            │  │
│  │  ┌──────────────────────────────────────┐  │  │
│  │  │ Service: NodePort/LB                 │  │  │
│  │  │ (Exposed API Server)                 │  │  │
│  │  └──────────────────────────────────────┘  │  │
│  └────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────┘
                         │
                         │  Konnectivity Tunnel
                         │  (Reverse Proxy)
                         ▼
┌──────────── Private Nodes (External) ────────────┐
│                                                  │
│  ┌────────────────────────────────────────────┐  │
│  │ Node 1 (Bare Metal / VM / Cloud)           │  │
│  │ - Kubelet                                  │  │
│  │ - Konnectivity Agent                       │  │
│  │ - CNI: Flannel/Calico/Cilium/Custom        │  │
│  │ - CSI: Local/NFS/Custom                    │  │
│  │ - Kube-proxy                               │  │
│  │ - Container Runtime (containerd)           │  │
│  └────────────────────────────────────────────┘  │
│                                                  │
│  ┌────────────────────────────────────────────┐  │
│  │ Node 2 (Bare Metal / VM / Cloud)           │  │
│  │ (Same components as Node 1)                │  │
│  └────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────┘
  Can be anywhere: AWS, Azure, GCP, bare metal,
  on-prem, edge locations, different networks
Key Characteristics
Complete Isolation: Own CNI, CSI, and networking stack
No Host Dependencies: Workloads don’t touch host infrastructure
Konnectivity: Reverse tunnel for secure control plane communication
Cloud Controller Manager: Manages node IPs and provider metadata
Flexible Networking: Any CNI plugin (Flannel, Calico, Cilium, etc.)
Flexible Storage: Any CSI driver or local storage
Location Independence: Nodes can be anywhere with network connectivity
Configuration
Basic Configuration
# Enable private nodes mode
privateNodes:
  enabled: true

# Expose control plane for node joining
controlPlane:
  service:
    spec:
      type: NodePort # or LoadBalancer

# Configure pod CIDR for the virtual cluster
networking:
  podCIDR: "10.244.0.0/16"

# Disable host node syncing
sync:
  fromHost:
    nodes:
      enabled: false

# Deploy CNI and local storage
deploy:
  cni:
    flannel:
      enabled: true
  localPathProvisioner:
    enabled: true
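Before creating the vCluster, it is worth confirming that the pod CIDR does not overlap with the service CIDR or the network the nodes live on. A small stdlib sketch of that check, using the example pod CIDR above plus an assumed service CIDR (Kubernetes' conventional default) and a hypothetical node LAN:

```python
# Sanity-check that the CIDRs chosen for a private-nodes vCluster do not
# overlap. The serviceCIDR and nodeNetwork values here are illustrative
# assumptions, not values mandated by vCluster.
from ipaddress import ip_network
from itertools import combinations

def find_overlaps(cidrs):
    """Return pairs of named CIDRs that overlap."""
    nets = {name: ip_network(cidr) for name, cidr in cidrs.items()}
    return [
        (a, b)
        for (a, na), (b, nb) in combinations(nets.items(), 2)
        if na.overlaps(nb)
    ]

cidrs = {
    "podCIDR": "10.244.0.0/16",      # from the config above
    "serviceCIDR": "10.96.0.0/12",   # conventional Kubernetes default
    "nodeNetwork": "192.168.1.0/24", # example LAN the nodes live on
}
print(find_overlaps(cidrs))  # → [] — no overlaps, safe to use
```

An overlapping pair (say, a pod CIDR inside 10.0.0.0/8 alongside the service range) would be flagged here before it causes hard-to-debug routing problems in the cluster.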
Create Private Nodes vCluster
Create the vCluster
vcluster create my-vcluster \
--namespace team-x \
--values private-nodes.yaml
Get Join Token
# Get the join command
vcluster node join my-vcluster --namespace team-x --print
This outputs a kubeadm join style command with the necessary tokens and endpoint.
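Since the output follows the kubeadm join convention, the pieces can be sanity-checked before pasting them onto a node. This sketch assumes the standard kubeadm bootstrap-token format and a sha256 CA cert hash; adjust if vCluster's token format differs:

```python
# Validate the pieces of a kubeadm-style join command. Assumes the standard
# kubeadm bootstrap-token format ([a-z0-9]{6}.[a-z0-9]{16}) and a
# "sha256:<64 hex chars>" CA cert hash; these values are illustrative.
import re

TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")
CA_HASH_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def looks_like_join_args(token, ca_hash):
    return bool(TOKEN_RE.match(token) and CA_HASH_RE.match(ca_hash))

print(looks_like_join_args("abcdef.0123456789abcdef", "sha256:" + "0" * 64))  # → True
print(looks_like_join_args("bad-token", "sha256:xyz"))                        # → False
```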
Join Nodes
On each external machine you want to add as a node: # Install vcluster binary on the node
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/
# Join the node
sudo vcluster node join \
  --api-server https://my-vcluster.example.com:6443 \
  --token <join-token> \
  --ca-cert-hash sha256:<hash>
Verify Nodes
vcluster connect my-vcluster --namespace team-x
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# private-node-1 Ready <none> 2m v1.28.0
Konnectivity
Konnectivity provides a secure reverse tunnel from private nodes to the control plane:
How Konnectivity Works
Konnectivity Server: Runs in the control plane (StatefulSet)
Konnectivity Agent: Runs on each private node
Tunnel: Agent establishes a long-lived connection to the server
Reverse Proxy: Control plane communicates with nodes through the tunnel
Benefits
Firewall Friendly: Nodes initiate the connection (outbound only)
NAT Traversal: Works behind NAT and firewalls
Secure: Mutual TLS authentication
Resilient: Automatic reconnection
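The reverse-tunnel idea can be sketched with plain sockets: the agent side only ever dials out, yet the server can still push requests to it over that same connection. This is a toy illustration of the concept, not the real Konnectivity protocol (which runs gRPC over mutual TLS):

```python
# Toy reverse tunnel: the "agent" dials OUT to the "server" once, and the
# server then sends a request down that already-open connection, so the
# agent side never needs an inbound port open (firewall/NAT friendly).
import socket
import threading

def server(listener, result):
    conn, _ = listener.accept()          # wait for the agent to dial in
    with conn:
        conn.sendall(b"GET /healthz")    # control plane -> node, via tunnel
        result.append(conn.recv(1024))   # node's reply returns the same way

def agent(port):
    with socket.create_connection(("127.0.0.1", port)) as sock:  # outbound only
        request = sock.recv(1024)        # receive request over our own dial-out
        if request == b"GET /healthz":
            sock.sendall(b"200 OK")

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

result = []
t = threading.Thread(target=server, args=(listener, result))
t.start()
agent(port)
t.join()
listener.close()
print(result[0].decode())  # → 200 OK
```

The request flows "backwards" through a connection the node initiated, which is exactly why private nodes work behind NAT with outbound-only firewall rules.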
Configuration
Konnectivity is automatically configured when private nodes are enabled:
controlPlane:
  advanced:
    konnectivity:
      server:
        enabled: true # Auto-enabled with privateNodes
        extraArgs: []
      agent:
        enabled: true
        replicas: 1
The Konnectivity agent runs on each private node automatically during the join process.
Networking
Pod Network (CNI)
Private nodes use their own CNI plugin, independent of the host cluster:
Flannel (Default)

deploy:
  cni:
    flannel:
      enabled: true
networking:
  podCIDR: "10.244.0.0/16"

Characteristics:
Simple overlay network
Works across any infrastructure
UDP or VXLAN backend

Calico

deploy:
  cni:
    flannel:
      enabled: false
networking:
  podCIDR: "192.168.0.0/16"

Then install Calico manually in the virtual cluster:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Characteristics:
Advanced network policies
BGP routing
Better performance

Cilium

deploy:
  cni:
    flannel:
      enabled: false
networking:
  podCIDR: "10.0.0.0/16"

Install Cilium:

cilium install --cluster-name my-vcluster

Characteristics:
eBPF-based
Service mesh features
Advanced observability

Custom CNI

deploy:
  cni:
    flannel:
      enabled: false
networking:
  podCIDR: "<your-cidr>"

Install any CNI plugin of your choice in the virtual cluster.
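Whichever CNI you pick, it typically carves the cluster pod CIDR into per-node subnets (Flannel commonly hands each node a /24). A sketch of that math, purely to show how the pod CIDR bounds the node count — the actual allocation is done by the controller manager and CNI, not by you:

```python
# How a CNI typically carves the cluster podCIDR into per-node subnets.
# The /24-per-node split is a common default, assumed here for illustration.
from ipaddress import ip_network

def node_subnets(pod_cidr, per_node_prefix=24):
    """Return the per-node pod subnets carved out of the cluster pod CIDR."""
    return list(ip_network(pod_cidr).subnets(new_prefix=per_node_prefix))

subnets = node_subnets("10.244.0.0/16")
print(len(subnets))     # → 256 — a /16 split into /24s supports 256 nodes
print(subnets[0])       # → 10.244.0.0/24 (first node's pod range)
print(subnets[1])       # → 10.244.1.0/24 (second node's pod range)
```

If you plan to exceed 256 nodes with a /24 per node, size the pod CIDR larger (e.g. a /14) before creating the cluster — it is hard to change later.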
Service Network
Services get IPs from the virtual cluster’s service CIDR:
networking:
  services:
    cidr: "10.96.0.0/12" # Independent service network
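The /12 above gives the virtual cluster its own pool of roughly a million ClusterIPs, fully separate from the host's service network. A quick check of that size, plus a check that the conventional in-cluster DNS address (an assumption here, not something vCluster mandates) falls inside the range:

```python
# Size of the independent service network configured above.
from ipaddress import ip_address, ip_network

service_cidr = ip_network("10.96.0.0/12")
print(service_cidr.num_addresses)  # → 1048576

# The conventional in-cluster DNS ClusterIP sits inside this range:
print(ip_address("10.96.0.10") in service_cidr)  # → True
```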
Kube-proxy
Kube-proxy handles service load balancing on each node:
deploy:
  kubeProxy:
    enabled: true
    config:
      mode: "iptables" # or ipvs
Load Balancer Services
For bare metal, use MetalLB:
deploy:
  metallb:
    enabled: true
    ipAddressPool:
      addresses:
        - 192.168.1.100-192.168.1.200
      l2Advertisement: true
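MetalLB's "start-end" range syntax is inclusive on both ends, so it is easy to miscount how many LoadBalancer services a pool can serve. A small sketch of that arithmetic for the pool above:

```python
# Count how many LoadBalancer IPs a MetalLB "start-end" address pool
# provides. The range form is inclusive on both ends.
from ipaddress import ip_address

def pool_size(addr_range):
    start, end = (ip_address(p.strip()) for p in addr_range.split("-"))
    return int(end) - int(start) + 1  # inclusive on both ends

print(pool_size("192.168.1.100-192.168.1.200"))  # → 101
```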
Storage
Private nodes have complete storage independence:
Local Path Provisioner (Default)
deploy:
  localPathProvisioner:
    enabled: true
Provides local storage on each node:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Custom CSI Drivers
Install any CSI driver directly in the virtual cluster:
NFS CSI

helm install csi-driver-nfs \
  csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system

Longhorn

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

Cloud Provider CSI

# Example: AWS EBS CSI
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
Use Cases
Compliance and Regulatory
Perfect for: PCI-DSS, HIPAA, SOC 2, FedRAMP requirements
privateNodes:
  enabled: true
controlPlane:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
networking:
  podCIDR: "10.0.0.0/16" # Isolated network
policies:
  networkPolicy:
    enabled: true
    workload:
      publicEgress:
        enabled: false # Block internet egress
Benefits:
Complete network isolation
Data never touches shared infrastructure
Clear security boundaries
Audit-friendly architecture
GPU Cloud Providers
Perfect for: Managed Kubernetes for GPU workloads, AI/ML platforms
privateNodes:
  enabled: true
controlPlane:
  service:
    spec:
      type: LoadBalancer # Customer accessible
networking:
  podCIDR: "10.244.0.0/16"
deploy:
  cni:
    flannel:
      enabled: true
Node Setup:
# On GPU server
sudo vcluster node join \
  --api-server https://customer-123.gpucloud.example.com:6443 \
  --token <token> \
  --ca-cert-hash sha256:<hash>
# Install NVIDIA GPU operator in virtual cluster
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/gpu-operator/master/deployments/gpu-operator.yaml
Real-World Impact:
CoreWeave: CoreWeave uses vCluster with private nodes to provide managed Kubernetes for GPU workloads. Each customer gets a complete isolated cluster on dedicated GPU nodes.
Hybrid and Multi-Cloud
Perfect for: Bursting to cloud, disaster recovery, multi-region deployments
privateNodes:
  enabled: true
  vpn:
    enabled: true # Connect nodes across networks
    nodeToNode:
      enabled: true
controlPlane:
  service:
    spec:
      type: LoadBalancer
Scenario: Control plane in AWS, private nodes in on-prem + Azure:
# On-prem nodes
sudo vcluster node join --api-server https://control-plane.aws.example.com:6443 ...
# Azure nodes
sudo vcluster node join --api-server https://control-plane.aws.example.com:6443 ...
Edge and On-Premises

Perfect for: Edge computing, on-premises, high-performance computing
privateNodes:
  enabled: true
controlPlane:
  service:
    spec:
      type: NodePort
deploy:
  cni:
    flannel:
      enabled: true
  metallb:
    enabled: true
    ipAddressPool:
      addresses:
        - 10.1.0.100-10.1.0.200
Advanced Configuration
Auto-Upgrade Nodes
Automatically upgrade private nodes:
privateNodes:
  enabled: true
  autoUpgrade:
    enabled: true
    concurrency: 1 # Upgrade one node at a time
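What `concurrency: 1` means for the rollout: nodes are upgraded in batches of at most that size, so at any moment only one node is draining while the rest serve traffic. A sketch of that batching logic (illustrative scheduling only, not vCluster's implementation):

```python
# Batch nodes for a rolling upgrade: at most `concurrency` nodes are
# upgraded at a time. Node names here are hypothetical.
def upgrade_batches(nodes, concurrency):
    return [nodes[i:i + concurrency] for i in range(0, len(nodes), concurrency)]

nodes = ["node-1", "node-2", "node-3"]
print(upgrade_batches(nodes, concurrency=1))  # → [['node-1'], ['node-2'], ['node-3']]
print(upgrade_batches(nodes, concurrency=2))  # → [['node-1', 'node-2'], ['node-3']]
```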
Kubelet Configuration
Customize kubelet settings:
privateNodes:
  enabled: true
  kubelet:
    config:
      maxPods: 200
      imageGCHighThresholdPercent: 85
      imageGCLowThresholdPercent: 80
      evictionHard:
        memory.available: "100Mi"
        nodefs.available: "10%"
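How an `evictionHard` threshold like `memory.available: "100Mi"` behaves: the kubelet evicts pods once the observed signal drops below the threshold. A simplified sketch of that comparison (the real kubelet handles percentage signals, grace periods, and many more signals):

```python
# Evaluate a kubelet evictionHard absolute-quantity threshold: evict when
# the observed value drops below it. Simplified sketch, Mi/Gi units only.
def parse_quantity(q):
    """Parse a subset of Kubernetes quantities (Mi/Gi suffixes) into bytes."""
    units = {"Mi": 1024**2, "Gi": 1024**3}
    for suffix, mult in units.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * mult
    return int(q)

def should_evict(available_bytes, threshold):
    return available_bytes < parse_quantity(threshold)

print(should_evict(50 * 1024**2, "100Mi"))   # → True  (only 50Mi free: evict)
print(should_evict(200 * 1024**2, "100Mi"))  # → False (200Mi free: healthy)
```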
Container Runtime
Configure containerd:
privateNodes:
  enabled: true
  joinNode:
    containerd:
      enabled: true
VPN for Node-to-Node Communication
Enable Tailscale-powered VPN:
privateNodes:
  enabled: true
  vpn:
    enabled: true
    nodeToNode:
      enabled: true
Introduced in v0.30: VPN integration requires a vCluster Platform connection.
Resource Overhead
Control Plane:
CPU: 200-500m
Memory: 512MB-2GB
Konnectivity: +50m CPU, +128MB memory
Per Node:
Kubelet: ~100m CPU, ~100MB memory
Konnectivity Agent: ~50m CPU, ~64MB memory
CNI: 20-50m CPU, 50-200MB memory (varies by plugin)
Kube-proxy: ~50m CPU, ~64MB memory
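A back-of-envelope total for these figures, using midpoints where the list above gives a range (the per-component numbers are the approximations from this section, not measurements):

```python
# Rough total overhead for a private-nodes vCluster, from the per-component
# figures above (midpoints of ranges: CP 200-500m -> 350m, CP mem -> 1280MB,
# CNI 20-50m -> 35m, CNI mem 50-200MB -> 125MB).
def total_overhead(num_nodes):
    per_node_cpu_m = 100 + 50 + 35 + 50    # kubelet + agent + CNI + kube-proxy
    per_node_mem_mb = 100 + 64 + 125 + 64  # same components, memory
    control_plane_cpu_m = 350 + 50         # control plane + Konnectivity
    control_plane_mem_mb = 1280 + 128
    return {
        "cpu_m": control_plane_cpu_m + num_nodes * per_node_cpu_m,
        "mem_mb": control_plane_mem_mb + num_nodes * per_node_mem_mb,
    }

print(total_overhead(10))  # → {'cpu_m': 2750, 'mem_mb': 4938}
```

So a 10-node private-nodes cluster costs very roughly 2.75 CPU cores and 5 GB of memory in platform overhead before any workloads run.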
Metric                        Performance
Pod-to-Pod (same node)        Near-native (CNI dependent)
Pod-to-Pod (different nodes)  CNI dependent (overlay ~5-10% overhead)
Control plane latency         +5-20ms (Konnectivity overhead)
Node-to-control plane         Variable (depends on location)
Scaling Limits
Metric                   Private Nodes
Nodes per vCluster       1-5000+
Pods per node            Up to 200 (configurable)
Pods per vCluster        1000s (CNI dependent)
Geographic distribution  Global
Troubleshooting
Node Not Joining
Symptom: Node fails to join the cluster
# Check Konnectivity agent logs on the node
journalctl -u vcluster-node -f
# Common issues:
# 1. Cannot reach API server (firewall/network)
# 2. Invalid token
# 3. CA cert mismatch
Solution:
# Verify API server is reachable
curl -k https://<api-server>:6443
# Regenerate join command
vcluster node join my-vcluster --namespace team-x --print
# Check control plane service
kubectl get svc -n team-x
Konnectivity Issues
Symptom: Nodes join but the control plane can’t communicate with them
# Check Konnectivity server logs
kubectl logs -n team-x <vcluster-pod> -c konnectivity-server
# Check agent status on node
sudo systemctl status vcluster-node
Solution:
# Restart Konnectivity on node
sudo systemctl restart vcluster-node
# Check firewall rules
sudo iptables -L -n | grep 8132
Networking Issues
Symptom: Pods can’t communicate
# Check CNI is running
kubectl get pods -n kube-system -l k8s-app=kube-flannel
# Check pod CIDR configuration
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
# Test pod networking
kubectl run test-1 --image=nginx
kubectl run test-2 --image=nginx
kubectl exec test-1 -- ping <test-2-ip>
Storage Issues
Symptom: PVCs not binding
# Check storage provisioner
kubectl get pods -n kube-system -l app.kubernetes.io/name=local-path-provisioner
# Check storage class
kubectl get storageclass
# Check PVC events
kubectl describe pvc <pvc-name>
Security Considerations
Control Plane Exposure
The control plane must be network-accessible to private nodes:
Internal load balancer (recommended). Only accessible within your VPC/network:

controlPlane:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    spec:
      type: LoadBalancer

NodePort. Restrict access via firewall rules to known IPs:

controlPlane:
  service:
    spec:
      type: NodePort

Public load balancer. Konnectivity uses mutual TLS for authentication:

controlPlane:
  service:
    spec:
      type: LoadBalancer
Node Authentication
Join tokens are time-limited and single-use:
# Generate new token (valid for 24 hours)
vcluster node join my-vcluster --namespace team-x --print
# Tokens are automatically rotated after use
Network Policies
Enforce strict network policies:
policies:
  networkPolicy:
    enabled: true
    workload:
      publicEgress:
        enabled: false
      egress:
        - to:
            - podSelector: {} # Only same vCluster
Pros and Cons
Advantages
Complete Isolation: Full CNI, CSI, and network independence
Any CNI/CSI: Use any networking or storage solution
Compliance: Meets the strictest regulatory requirements
Location Flexibility: Nodes anywhere with connectivity
No Host Dependencies: Zero reliance on host infrastructure
Hybrid/Multi-Cloud: Seamlessly span cloud and on-prem
Better Security: Clear isolation boundaries
Limitations
Higher Complexity: More components to manage
Latency: Konnectivity adds a network hop
Node Management: Must provision and maintain nodes
Initial Setup: More complex than shared or dedicated modes
Pro License: Requires vCluster Pro
Comparison
Aspect                Dedicated Nodes  Private Nodes
Network Isolation     ❌               ✅
Storage Isolation     ❌               ✅
Custom CNI            ❌               ✅
Custom CSI            ❌               ✅
Location Flexibility  ❌               ✅
Setup Complexity      Medium           High
License               OSS              Pro
Best Practices
Secure Control Plane Access
Use internal load balancers and restrict access:

controlPlane:
  service:
    spec:
      type: LoadBalancer
      loadBalancerSourceRanges:
        - 10.0.0.0/8 # Internal network only
        - 172.16.0.0/12
Set up monitoring for private nodes:

# Deploy metrics-server
deploy:
  metricsServer:
    enabled: true

# Monitor node resources
kubectl top nodes
Automate Node Provisioning
Use infrastructure as code:

# Example: Terraform + cloud-init
resource "aws_instance" "vcluster_node" {
  user_data = <<-EOF
    #!/bin/bash
    curl -L -o vcluster https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64
    chmod +x vcluster && sudo mv vcluster /usr/local/bin/
    sudo vcluster node join --api-server ${var.api_server} ...
  EOF
}
Enable auto-upgrade and monitoring:

privateNodes:
  autoUpgrade:
    enabled: true
    concurrency: 1
Use multiple nodes for critical workloads:

replicas: 3 # Spread across nodes
When nodes span multiple networks:

privateNodes:
  vpn:
    enabled: true
    nodeToNode:
      enabled: true
Migration
From Dedicated to Private Nodes
Create New vCluster with Private Nodes
vcluster create my-vcluster-v2 \
--namespace team-x-v2 \
--values private-nodes.yaml
Join Private Nodes
Provision new nodes and join them to the new vCluster.
Migrate Workloads
Use Velero or kubectl to migrate resources:

# Export from old
kubectl get all -A -o yaml > resources.yaml

# Import to new
kubectl apply -f resources.yaml
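Note that a raw export like the one above carries runtime metadata owned by the old cluster (UIDs, resource versions, status), which can make a re-apply fail or misbehave. A sketch of stripping the usual offenders from an already-parsed manifest dict; tools like Velero handle this (and much more) for you:

```python
# Strip cluster-owned runtime fields from an exported manifest before
# re-applying it in a new cluster. The sample ConfigMap is illustrative.
RUNTIME_METADATA = {"uid", "resourceVersion", "creationTimestamp", "managedFields"}

def strip_runtime_fields(manifest):
    cleaned = dict(manifest)
    cleaned["metadata"] = {
        k: v for k, v in manifest.get("metadata", {}).items()
        if k not in RUNTIME_METADATA
    }
    cleaned.pop("status", None)  # status is owned by the cluster
    return cleaned

exported = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "app-config", "uid": "1234", "resourceVersion": "99"},
    "status": {},
    "data": {"key": "value"},
}
print(strip_runtime_fields(exported))
```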
Update DNS/Ingress
Point traffic to new vCluster.
Decommission Old vCluster
vcluster delete my-vcluster --namespace team-x
Next Steps
Auto Nodes: Add Karpenter-powered autoscaling to private nodes.
Standalone Mode: Run vCluster without any host cluster.
VPN Integration: Configure VPN for node-to-node communication.
Node Management: Advanced node joining and configuration options.