NATS Server can be deployed on Kubernetes using StatefulSets for clustered deployments or standard Deployments for single instances. The official Helm chart simplifies complex deployments.

Quick Start with kubectl

Deploy a simple single-node NATS Server:
kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-server/simple-nats.yml
This creates a basic NATS deployment with a ClusterIP service.

StatefulSet Deployment

For production deployments with clustering and persistence, use a StatefulSet:
nats-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: nats
  labels:
    app: nats
spec:
  selector:
    app: nats
  clusterIP: None
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222
  - name: metrics
    port: 7777
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats
  labels:
    app: nats
spec:
  selector:
    matchLabels:
      app: nats
  replicas: 3
  serviceName: nats
  template:
    metadata:
      labels:
        app: nats
    spec:
      containers:
      - name: nats
        image: nats:latest
        ports:
        - containerPort: 4222
          name: client
        - containerPort: 6222
          name: cluster
        - containerPort: 8222
          name: monitor
        command:
        - "nats-server"
        - "--config=/etc/nats-config/nats-server.conf"
        - "--cluster_name=nats-cluster"
        - "--cluster=nats://0.0.0.0:6222"
        - "--routes=nats://nats-0.nats.default.svc.cluster.local:6222,nats://nats-1.nats.default.svc.cluster.local:6222,nats://nats-2.nats.default.svc.cluster.local:6222"
        - "--http_port=8222"
        - "--port=4222"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nats-config
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8222
          initialDelaySeconds: 10
          periodSeconds: 10
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      volumes:
      - name: config-volume
        configMap:
          name: nats-config
Apply the StatefulSet:
kubectl apply -f nats-statefulset.yaml
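
Hand-maintaining the --routes list becomes error-prone as the replica count changes. A small shell sketch (a hypothetical helper, assuming the service name nats and the default namespace) can generate the value for any replica count:

```shell
# Build the --routes value for a given replica count.
# Adjust SVC/NS to match your StatefulSet, service, and namespace.
REPLICAS=3
SVC=nats
NS=default
ROUTES=$(for i in $(seq 0 $((REPLICAS - 1))); do
  printf 'nats://%s-%d.%s.%s.svc.cluster.local:6222,' "$SVC" "$i" "$SVC" "$NS"
done)
ROUTES=${ROUTES%,}   # drop the trailing comma
echo "$ROUTES"
```

Paste the output into the --routes flag (or the routes block of the ConfigMap below).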

ConfigMap for Configuration

Store NATS configuration in a ConfigMap:
nats-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-config
data:
  nats-server.conf: |
    port: 4222
    http_port: 8222
    
    cluster {
      port: 6222
      
      routes = [
        nats://nats-0.nats.default.svc.cluster.local:6222
        nats://nats-1.nats.default.svc.cluster.local:6222
        nats://nats-2.nats.default.svc.cluster.local:6222
      ]
      
      cluster_advertise: $CLUSTER_ADVERTISE
      connect_retries: 120
    }
    
    # JetStream configuration
    jetstream {
      store_dir: /data
      max_memory_store: 1GB
      max_file_store: 10GB
    }
Create the ConfigMap:
kubectl apply -f nats-config.yaml
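
The configuration above references $CLUSTER_ADVERTISE, which the StatefulSet never sets; NATS resolves such $VARIABLE references from the process environment. One way to supply it (a sketch assuming the headless service nats in the default namespace) is Kubernetes dependent environment expansion:

```yaml
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: CLUSTER_ADVERTISE
  value: "$(POD_NAME).nats.default.svc.cluster.local"
```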

Service Definitions

Headless Service for StatefulSet

apiVersion: v1
kind: Service
metadata:
  name: nats
  labels:
    app: nats
spec:
  selector:
    app: nats
  clusterIP: None
  ports:
  - name: client
    port: 4222
  - name: cluster
    port: 6222
  - name: monitor
    port: 8222

External Service (LoadBalancer)

apiVersion: v1
kind: Service
metadata:
  name: nats-external
  labels:
    app: nats
spec:
  type: LoadBalancer
  selector:
    app: nats
  ports:
  - name: client
    port: 4222
    targetPort: 4222

Monitoring Service

apiVersion: v1
kind: Service
metadata:
  name: nats-monitor
  labels:
    app: nats
spec:
  type: ClusterIP
  selector:
    app: nats
  ports:
  - name: monitor
    port: 8222
    targetPort: 8222

Helm Chart Deployment

The official NATS Helm chart is the recommended way to deploy NATS on Kubernetes.

Installation

Add the NATS Helm repository:
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
Install NATS with default settings:
helm install my-nats nats/nats

Custom Values

Create a values.yaml file for customization. Key names vary between chart versions, so treat the layout below as illustrative and confirm the exact schema with helm show values nats/nats:
values.yaml
nats:
  image: nats:latest
  
  # Enable JetStream
  jetstream:
    enabled: true
    memStorage:
      enabled: true
      size: 1Gi
    fileStorage:
      enabled: true
      size: 10Gi
      storageClassName: standard
  
  # Cluster configuration
  cluster:
    enabled: true
    replicas: 3
  
  # Logging
  logging:
    debug: false
    trace: false
  
  # Resource limits
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi

# Monitoring
monitoring:
  enabled: true
  port: 8222

# Authentication
auth:
  enabled: false
  # basic:
  #   username: admin
  #   password: changeme
Install with custom values:
helm install my-nats nats/nats -f values.yaml

Helm Chart Repository

The NATS Helm chart is available on ArtifactHub. View all configuration options:
helm show values nats/nats

JetStream with Persistent Storage

For JetStream deployments, use PersistentVolumeClaims:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nats-jetstream
spec:
  serviceName: nats
  replicas: 3
  selector:
    matchLabels:
      app: nats
  volumeClaimTemplates:
  - metadata:
      name: nats-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: standard
  template:
    metadata:
      labels:
        app: nats
    spec:
      containers:
      - name: nats
        image: nats:latest
        command:
        - nats-server
        - -js
        - -sd
        - /data
        - -c
        - /etc/nats-config/nats-server.conf
        volumeMounts:
        - name: nats-data
          mountPath: /data
        - name: config-volume
          mountPath: /etc/nats-config
      volumes:
      - name: config-volume
        configMap:
          name: nats-config

Scaling and High Availability

Horizontal Scaling

Scale the StatefulSet:
kubectl scale statefulset nats --replicas=5
New pods join the existing mesh through the seed routes; NATS servers gossip routes to each other, so the static --routes list does not need to enumerate every replica.

Pod Disruption Budget

Ensure availability during maintenance:
nats-pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nats-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nats
Apply the budget:
kubectl apply -f nats-pdb.yaml

Anti-Affinity Rules

Distribute pods across nodes:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - nats
      topologyKey: kubernetes.io/hostname
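
The required rule above refuses to schedule a pod when no separate node is available. On small clusters where nodes may be fewer than replicas, a preferred rule (same selector, softer semantics) may be the better trade-off:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: nats
        topologyKey: kubernetes.io/hostname
```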

Monitoring and Health Checks

Liveness Probe

livenessProbe:
  httpGet:
    path: /healthz
    port: 8222
  initialDelaySeconds: 10
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3

Readiness Probe

readinessProbe:
  httpGet:
    path: /healthz
    port: 8222
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3

Prometheus Metrics

Expose metrics for Prometheus:
apiVersion: v1
kind: Service
metadata:
  name: nats-metrics
  labels:
    app: nats
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "7777"
spec:
  selector:
    app: nats
  ports:
  - name: metrics
    port: 7777
    targetPort: 7777
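
Note that nats-server itself serves JSON on port 8222, not Prometheus text format; port 7777 is conventionally exposed by the prometheus-nats-exporter sidecar. A sketch of such a sidecar container (image and flags as commonly documented; verify against your exporter version):

```yaml
- name: metrics
  image: natsio/prometheus-nats-exporter:latest
  args:
  - "-varz"
  - "http://localhost:8222"
  ports:
  - containerPort: 7777
    name: metrics
```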

RBAC Configuration

Create ServiceAccount and permissions:
nats-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-server
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nats-server
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nats-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nats-server
subjects:
- kind: ServiceAccount
  name: nats-server
  namespace: default

Upgrading

Rolling Update

Update the image version:
kubectl set image statefulset/nats nats=nats:2.10.0

Helm Upgrade

helm upgrade my-nats nats/nats -f values.yaml

Namespace Isolation

Deploy NATS in a dedicated namespace:
kubectl create namespace nats-system
helm install my-nats nats/nats -n nats-system

Best Practices

Use StatefulSets

Always use StatefulSets for clustered deployments to maintain stable network identities.

Configure Resource Limits

Set appropriate CPU and memory requests/limits based on your workload.

Enable Persistence

Use PersistentVolumes for JetStream to ensure data durability.

Implement Anti-Affinity

Spread pods across nodes for better fault tolerance.

Pin Image Versions

Use a specific image tag (for example nats:2.10.0) instead of latest so upgrades are deliberate and reproducible.

Troubleshooting

Check Pod Status

kubectl get pods -l app=nats

View Logs

kubectl logs nats-0

Check Cluster Formation

The default nats image contains only the nats-server binary (no shell or nats CLI), so inspect the cluster through the monitoring port instead:
kubectl port-forward nats-0 8222:8222
curl http://localhost:8222/routez
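
Once you have the /routez JSON from the monitoring port, a quick sanity check is comparing a node's route count to the replica count. A minimal sketch (num_routes is the field reported by /routez; NATS 2.10+ may open several route connections per peer, hence the >=):

```python
import json

def fully_meshed(routez: dict, replicas: int) -> bool:
    """True if this node holds routes to every other replica.

    NATS 2.10+ can open multiple route connections per peer,
    so we check for at least replicas - 1 routes.
    """
    return routez.get("num_routes", 0) >= replicas - 1

# Example with a captured /routez payload:
sample = json.loads('{"num_routes": 2}')
print(fully_meshed(sample, 3))  # → True for a 3-node cluster
```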

Verify Configuration

kubectl get configmap nats-config -o yaml

Next Steps

Configuration

Explore advanced configuration options

Docker Deployment

Learn about Docker-based deployments
