The private nodes architecture provides the highest level of isolation by letting external nodes join the virtual cluster directly with their own CNI (Container Network Interface), CSI (Container Storage Interface), and networking stack. This mode eliminates any dependency on the host cluster’s infrastructure for workload execution.
Introduced in v0.27: Private nodes are a Pro feature and require a vCluster Pro license.

How It Works

In private nodes mode, the vCluster control plane runs in a host cluster namespace (or standalone), but workload nodes join the virtual cluster directly via Konnectivity tunnels. These nodes have their own CNI, CSI, and can be completely isolated from the host cluster:
┌──────────────── Host Cluster ────────────────┐
│                                              │
│  ┌────────────────────────────────────────┐  │
│  │ Namespace: vcluster-my-vcluster        │  │
│  │                                        │  │
│  │  ┌──────────────────────────────────┐  │  │
│  │  │ vCluster Control Plane           │  │  │
│  │  │  - API Server                    │  │  │
│  │  │  - Syncer                        │  │  │
│  │  │  - Konnectivity Server           │  │  │
│  │  │  - Cloud Controller Manager      │  │  │
│  │  └──────────────────────────────────┘  │  │
│  │                                        │  │
│  │  ┌──────────────────────────────────┐  │  │
│  │  │ Service: NodePort/LB             │  │  │
│  │  │  (Exposed API Server)            │  │  │
│  │  └──────────────────────────────────┘  │  │
│  └────────────────────────────────────────┘  │
│                                              │
└──────────────────────────────────────────────┘
                       │
                       │  Konnectivity Tunnel
                       │  (Reverse Proxy)
                       ▼
┌──────────────────────────────────────────────┐
│           Private Nodes (External)           │
│                                              │
│  ┌────────────────────────────────────────┐  │
│  │ Node 1 (Bare Metal / VM / Cloud)       │  │
│  │  - Kubelet                             │  │
│  │  - Konnectivity Agent                  │  │
│  │  - CNI: Flannel/Calico/Cilium/Custom   │  │
│  │  - CSI: Local/NFS/Custom               │  │
│  │  - Kube-proxy                          │  │
│  │  - Container Runtime (containerd)      │  │
│  └────────────────────────────────────────┘  │
│                                              │
│  ┌────────────────────────────────────────┐  │
│  │ Node 2 (Bare Metal / VM / Cloud)       │  │
│  │  (Same components as Node 1)           │  │
│  └────────────────────────────────────────┘  │
│                                              │
└──────────────────────────────────────────────┘
    Can be anywhere: AWS, Azure, GCP, bare metal,
    on-prem, edge locations, different networks

Key Characteristics

Complete Isolation: Own CNI, CSI, and networking stack
No Host Dependencies: Workloads don’t touch host infrastructure
Konnectivity: Reverse tunnel for secure control plane communication
Cloud Controller Manager: Manages node IPs and provider metadata
Flexible Networking: Any CNI plugin (Flannel, Calico, Cilium, etc.)
Flexible Storage: Any CSI driver or local storage
Location Independence: Nodes can be anywhere with network connectivity

Configuration

Basic Configuration

private-nodes.yaml
# Enable private nodes mode
privateNodes:
  enabled: true

# Expose control plane for node joining
controlPlane:
  service:
    spec:
      type: NodePort  # or LoadBalancer

# Configure pod CIDR for the virtual cluster
networking:
  podCIDR: "10.244.0.0/16"

# Disable host node syncing
sync:
  fromHost:
    nodes:
      enabled: false

# Deploy CNI and local storage
deploy:
  cni:
    flannel:
      enabled: true
  localPathProvisioner:
    enabled: true

Create Private Nodes vCluster

1. Create the vCluster

vcluster create my-vcluster \
  --namespace team-x \
  --values private-nodes.yaml
2. Get Join Token

# Get the join command
vcluster node join my-vcluster --namespace team-x --print
This outputs a kubeadm-join-style command with the necessary token and endpoint.
3. Join Nodes

On each external machine you want to add as a node:
# Install vcluster binary on the node
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/

# Join the node
sudo vcluster node join \
  --api-server https://my-vcluster.example.com:6443 \
  --token <join-token> \
  --ca-cert-hash sha256:<hash>
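The `--ca-cert-hash` value pins the cluster CA, following the kubeadm discovery convention. If you ever need to compute it yourself from a CA certificate, the standard formula hashes the DER-encoded public key. The sketch below generates a throwaway CA so the snippet is self-contained; on a real control plane you would point `openssl x509 -pubkey -in` at the cluster's actual CA certificate:

```shell
# Generate a throwaway CA for demonstration only (stand-in for the cluster ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Standard kubeadm-style hash: sha256 over the DER-encoded public key
hash=$(openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
```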
4. Verify Nodes

vcluster connect my-vcluster --namespace team-x
kubectl get nodes
# NAME             STATUS   ROLES    AGE   VERSION
# private-node-1   Ready    <none>   2m    v1.28.0

Konnectivity

Konnectivity provides a secure reverse tunnel from private nodes to the control plane:

How Konnectivity Works

  1. Konnectivity Server: Runs in the control plane (StatefulSet)
  2. Konnectivity Agent: Runs on each private node
  3. Tunnel: Agent establishes long-lived connection to server
  4. Reverse Proxy: Control plane communicates with nodes through tunnel

Benefits

Firewall Friendly: Nodes initiate connection (outbound only)
NAT Traversal: Works behind NAT and firewalls
Secure: Mutual TLS authentication
Resilient: Automatic reconnection

Configuration

Konnectivity is automatically configured when private nodes are enabled:
controlPlane:
  advanced:
    konnectivity:
      server:
        enabled: true  # Auto-enabled with privateNodes
        extraArgs: []
      agent:
        enabled: true
        replicas: 1
The Konnectivity agent runs on each private node automatically during the join process.

Networking

Pod Network (CNI)

Private nodes use their own CNI plugin, independent of the host cluster:
deploy:
  cni:
    flannel:
      enabled: true

networking:
  podCIDR: "10.244.0.0/16"
Characteristics:
  • Simple overlay network
  • Works across any infrastructure
  • VXLAN backend by default

Service Network

Services get IPs from the virtual cluster’s service CIDR:
networking:
  services:
    cidr: "10.96.0.0/12"  # Independent service network
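Every ClusterIP Service then receives an address from this range. For example (the `backend` names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 8080
# The assigned clusterIP will fall within 10.96.0.0/12
```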

Kube-proxy

Kube-proxy handles service load balancing on each node:
deploy:
  kubeProxy:
    enabled: true
    config:
      mode: "iptables"  # or ipvs

Load Balancer Services

For bare metal, use MetalLB:
deploy:
  metallb:
    enabled: true
    ipAddressPool:
      addresses:
        - 192.168.1.100-192.168.1.200
      l2Advertisement: true
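With the pool above, a LoadBalancer Service receives an address from that range. A minimal sketch (the `app: web` selector is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # MetalLB assigns an IP from 192.168.1.100-200
  selector:
    app: web
  ports:
    - port: 80
```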

Storage

Private nodes have complete storage independence:

Local Path Provisioner (Default)

deploy:
  localPathProvisioner:
    enabled: true
Provides local storage on each node:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
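The claim can then be mounted like any other volume (a minimal sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-consumer
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data   # the PVC defined above
```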

Custom CSI Drivers

Install any CSI driver directly in the virtual cluster:
helm install csi-driver-nfs \
  csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system
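After installing the driver, define a StorageClass that points at your NFS server. A sketch using the csi-driver-nfs provisioner name (the server and share values are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs.example.com   # placeholder NFS server
  share: /exports/data      # placeholder export path
reclaimPolicy: Delete
```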

Use Cases

Compliance and Regulatory

Perfect for: PCI-DSS, HIPAA, SOC 2, FedRAMP requirements
privateNodes:
  enabled: true

controlPlane:
  service:
    spec:
      type: LoadBalancer
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"

networking:
  podCIDR: "10.0.0.0/16"  # Isolated network

policies:
  networkPolicy:
    enabled: true
    workload:
      publicEgress:
        enabled: false  # Block internet egress
Benefits:
  • Complete network isolation
  • Data never touches shared infrastructure
  • Clear security boundaries
  • Audit-friendly architecture

GPU Cloud Providers

Perfect for: Managed Kubernetes for GPU workloads, AI/ML platforms
privateNodes:
  enabled: true

controlPlane:
  service:
    spec:
      type: LoadBalancer  # Customer accessible

networking:
  podCIDR: "10.244.0.0/16"

deploy:
  cni:
    flannel:
      enabled: true
Node Setup:
# On GPU server
sudo vcluster node join \
  --api-server https://customer-123.gpucloud.example.com:6443 \
  --token <token> \
  --ca-cert-hash sha256:<hash>

# Install NVIDIA GPU operator in virtual cluster
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/gpu-operator/master/deployments/gpu-operator.yaml
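Once the operator is running, workloads request GPUs through the `nvidia.com/gpu` extended resource. A sketch (the image tag is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04
      command: ["nvidia-smi"]   # prints visible GPUs if scheduling worked
      resources:
        limits:
          nvidia.com/gpu: 1
```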
Real-World Impact:

CoreWeave

CoreWeave uses vCluster with private nodes to provide managed Kubernetes for GPU workloads. Each customer gets a complete isolated cluster on dedicated GPU nodes.

Hybrid and Multi-Cloud

Perfect for: Bursting to cloud, disaster recovery, multi-region deployments
privateNodes:
  enabled: true
  vpn:
    enabled: true  # Connect nodes across networks
    nodeToNode:
      enabled: true

controlPlane:
  service:
    spec:
      type: LoadBalancer
Scenario: Control plane in AWS, private nodes in on-prem + Azure:
# On-prem nodes
sudo vcluster node join --api-server https://control-plane.aws.example.com:6443 ...

# Azure nodes
sudo vcluster node join --api-server https://control-plane.aws.example.com:6443 ...

Bare Metal Kubernetes

Perfect for: Edge computing, on-premises, high-performance computing
privateNodes:
  enabled: true

controlPlane:
  service:
    spec:
      type: NodePort

deploy:
  cni:
    flannel:
      enabled: true
  metallb:
    enabled: true
    ipAddressPool:
      addresses:
        - 10.1.0.100-10.1.0.200

Advanced Configuration

Auto-Upgrade Nodes

Automatically upgrade private nodes:
privateNodes:
  enabled: true
  autoUpgrade:
    enabled: true
    concurrency: 1  # Upgrade one node at a time

Kubelet Configuration

Customize kubelet settings:
privateNodes:
  enabled: true
  kubelet:
    config:
      maxPods: 200
      imageGCHighThresholdPercent: 85
      imageGCLowThresholdPercent: 80
      evictionHard:
        memory.available: "100Mi"
        nodefs.available: "10%"

Container Runtime

Configure containerd:
privateNodes:
  enabled: true
  joinNode:
    containerd:
      enabled: true

VPN for Node-to-Node Communication

Enable Tailscale-powered VPN:
privateNodes:
  enabled: true
  vpn:
    enabled: true
    nodeToNode:
      enabled: true
Introduced in v0.30: VPN integration requires vCluster Platform connection.

Performance Characteristics

Resource Overhead

Control Plane:
  • CPU: 200-500m
  • Memory: 512MB-2GB
  • Konnectivity: +50m CPU, +128MB memory
Per Node:
  • Kubelet: ~100m CPU, ~100MB memory
  • Konnectivity Agent: ~50m CPU, ~64MB memory
  • CNI: 20-50m CPU, 50-200MB memory (varies by plugin)
  • Kube-proxy: ~50m CPU, ~64MB memory

Network Performance

| Metric | Performance |
|---|---|
| Pod-to-Pod (same node) | Near-native (CNI dependent) |
| Pod-to-Pod (different nodes) | CNI dependent (overlay adds ~5-10% overhead) |
| Control plane latency | +5-20 ms (Konnectivity overhead) |
| Node-to-control-plane | Variable (depends on location) |

Scaling Limits

| Metric | Private Nodes |
|---|---|
| Nodes per vCluster | 1-5000+ |
| Pods per node | Up to 200 (configurable) |
| Pods per vCluster | 1000s (CNI dependent) |
| Geographic distribution | Global |

Troubleshooting

Node Not Joining

Symptom: Node fails to join the cluster
# Check Konnectivity agent logs on the node
journalctl -u vcluster-node -f

# Common issues:
# 1. Cannot reach API server (firewall/network)
# 2. Invalid token
# 3. CA cert mismatch
Solution:
# Verify API server is reachable
curl -k https://<api-server>:6443

# Regenerate join command
vcluster node join my-vcluster --namespace team-x --print

# Check control plane service
kubectl get svc -n team-x

Konnectivity Issues

Symptom: Nodes join but control plane can’t communicate with them
# Check Konnectivity server logs
kubectl logs -n team-x <vcluster-pod> -c konnectivity-server

# Check agent status on node
sudo systemctl status vcluster-node
Solution:
# Restart Konnectivity on node
sudo systemctl restart vcluster-node

# Check firewall rules
sudo iptables -L -n | grep 8132

Networking Issues

Symptom: Pods can’t communicate
# Check CNI is running
kubectl get pods -n kube-system -l k8s-app=kube-flannel

# Check pod CIDR configuration
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

# Test pod networking (use busybox: it includes ping, which nginx does not)
kubectl run test-1 --image=busybox --restart=Never -- sleep 3600
kubectl run test-2 --image=busybox --restart=Never -- sleep 3600
kubectl get pod test-2 -o wide   # note its pod IP
kubectl exec test-1 -- ping -c 3 <test-2-ip>

Storage Issues

Symptom: PVCs not binding
# Check storage provisioner
kubectl get pods -n kube-system -l app.kubernetes.io/name=local-path-provisioner

# Check storage class
kubectl get storageclass

# Check PVC events
kubectl describe pvc <pvc-name>

Security Considerations

Control Plane Exposure

The control plane must be network-accessible to private nodes. Expose it through a LoadBalancer or NodePort service, and restrict access to the networks your nodes run in (see Best Practices below).

Node Authentication

Join tokens are time-limited and single-use:
# Generate new token (valid for 24 hours)
vcluster node join my-vcluster --namespace team-x --print

# Tokens are automatically rotated after use

Network Policies

Enforce strict network policies:
policies:
  networkPolicy:
    enabled: true
    workload:
      publicEgress:
        enabled: false
      egress:
        - to:
            - podSelector: {}  # Only same vCluster

Pros and Cons

Advantages

Complete Isolation: Full CNI, CSI, and network independence
Any CNI/CSI: Use any networking or storage solution
Compliance: Meets strictest regulatory requirements
Location Flexibility: Nodes anywhere with connectivity
No Host Dependencies: Zero reliance on host infrastructure
Hybrid/Multi-Cloud: Seamlessly span cloud and on-prem
Better Security: Clear isolation boundaries

Limitations

Higher Complexity: More components to manage
Latency: Konnectivity adds network hop
Node Management: Must provision and maintain nodes
Initial Setup: More complex than shared/dedicated
Pro License: Requires vCluster Pro

Comparison

| Aspect | Dedicated Nodes | Private Nodes |
|---|---|---|
| Network Isolation | Shared with host | Complete |
| Storage Isolation | Shared with host | Complete |
| Custom CNI | No | Yes |
| Custom CSI | No | Yes |
| Location Flexibility | Host cluster only | Anywhere with connectivity |
| Setup Complexity | Medium | High |
| License | OSS | Pro |

Best Practices

Use internal load balancers and restrict access:
controlPlane:
  service:
    spec:
      type: LoadBalancer
      loadBalancerSourceRanges:
        - 10.0.0.0/8    # Internal network only
        - 172.16.0.0/12
Set up monitoring for private nodes:
# Deploy metrics-server
deploy:
  metricsServer:
    enabled: true
# Monitor node resources
kubectl top nodes
Use infrastructure as code:
# Example: Terraform + cloud-init
resource "aws_instance" "vcluster_node" {
  user_data = <<-EOF
    #!/bin/bash
    curl -L -o vcluster https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64
    chmod +x vcluster && sudo mv vcluster /usr/local/bin/
    sudo vcluster node join --api-server ${var.api_server} ...
  EOF
}
Enable auto-upgrade and monitoring:
privateNodes:
  autoUpgrade:
    enabled: true
    concurrency: 1
Use multiple nodes for critical workloads:
replicas: 3  # Spread across nodes
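Spelled out, that might look like the following sketch (the `critical-app` names are placeholders); the topology spread constraint asks the scheduler to place the replicas on different nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: critical-app
  template:
    metadata:
      labels:
        app: critical-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # spread by node
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: critical-app
      containers:
        - name: app
          image: nginx
```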
When nodes span multiple networks:
privateNodes:
  vpn:
    enabled: true
    nodeToNode:
      enabled: true

Migration

From Dedicated to Private Nodes

1. Create New vCluster with Private Nodes

vcluster create my-vcluster-v2 \
  --namespace team-x-v2 \
  --values private-nodes.yaml
2. Join Private Nodes

Provision new nodes and join them to the new vCluster.
3. Migrate Workloads

Use Velero or kubectl to migrate resources. Note that `kubectl get all` does not cover every resource type (ConfigMaps, Secrets, and CRDs are excluded), so prefer Velero for a complete migration:
# Export from old
kubectl get all -A -o yaml > resources.yaml

# Import to new
kubectl apply -f resources.yaml
4. Update DNS/Ingress

Point traffic to new vCluster.
5. Decommission Old vCluster

vcluster delete my-vcluster --namespace team-x

Next Steps

Auto Nodes

Add Karpenter-powered autoscaling to private nodes.

Standalone Mode

Run vCluster without any host cluster.

VPN Integration

Configure VPN for node-to-node communication.

Node Management

Advanced node joining and configuration options.
