Anubis can be deployed on Kubernetes to protect applications using the nginx Ingress Controller’s external auth feature or Traefik’s ForwardAuth middleware.

Architecture

Anubis runs as a standalone service inside the cluster. The ingress controller issues a subrequest to Anubis's check endpoint for every incoming request: requests carrying a valid signed cookie are forwarded to the backend, while everything else is redirected to the Anubis challenge page. Once the client solves the challenge, Anubis sets a JWT cookie signed with its Ed25519 key and the client retries the original URL.
Deployment Manifest

Here’s a complete Anubis deployment for Kubernetes: deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anubis
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: anubis
  template:
    metadata:
      labels:
        app: anubis
    spec:
      containers:
        - name: anubis
          image: ghcr.io/techarohq/anubis:main
          ports:
            - name: http
              containerPort: 8923
            - name: metrics
              containerPort: 9090
          env:
            - name: BIND
              value: ":8923"
            - name: TARGET
              value: ""  # Empty: running in external auth mode, no upstream to proxy
            - name: REDIRECT_DOMAINS
              value: "example.com,www.example.com"
            - name: PUBLIC_URL
              value: "https://anubis.example.com"
            - name: COOKIE_DOMAIN
              value: "example.com"
            - name: ED25519_PRIVATE_KEY_HEX
              valueFrom:
                secretKeyRef:
                  name: anubis-signing-key
                  key: private-key
          resources:
            limits:
              cpu: 500m
              memory: 256Mi
            requests:
              cpu: 250m
              memory: 128Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 9090
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 9090
            initialDelaySeconds: 5
            periodSeconds: 5

Service

service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: anubis
  namespace: default
  labels:
    app: anubis  # matched by the ServiceMonitor's selector
spec:
  selector:
    app: anubis
  ports:
    - name: http
      protocol: TCP
      port: 8923
      targetPort: 8923
    - name: metrics
      protocol: TCP
      port: 9090
      targetPort: 9090
  type: ClusterIP

Signing Key Secret

Generate and store the signing key:
# Generate a random 32-byte seed (64 hex characters) for the Ed25519 signing key
openssl rand -hex 32 > anubis-key.txt

# Create Kubernetes secret
kubectl create secret generic anubis-signing-key \
  --from-file=private-key=anubis-key.txt

# Clean up local file
rm anubis-key.txt
All Anubis replicas will use the same key for JWT signing.
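When rotating the key later, a client-side dry run lets you review the new Secret before applying it, and a rollout restart makes pods pick it up (the key is read from the environment at startup). A sketch, using the `anubis-signing-key` Secret and Deployment names from the manifests above; note that rotating the key invalidates existing visitor cookies, so clients will be re-challenged:

```shell
# 32 random bytes = 64 hex characters, the length ED25519_PRIVATE_KEY_HEX expects
key=$(openssl rand -hex 32)

# Replace the Secret in place (create renders the manifest, apply updates it)
kubectl create secret generic anubis-signing-key \
  --from-literal=private-key="$key" \
  --dry-run=client -o yaml | kubectl apply -f -

# Pods only read the key at startup, so roll the Deployment
kubectl rollout restart deployment/anubis
```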

nginx Ingress Controller

Use the nginx Ingress Controller's external auth annotations (which configure nginx's auth_request module under the hood) to protect your application: ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-app
  namespace: default
  annotations:
    # Enable external auth
    nginx.ingress.kubernetes.io/auth-url: "http://anubis.default.svc.cluster.local:8923/.within.website/x/cmd/anubis/api/check"
    nginx.ingress.kubernetes.io/auth-signin: "https://anubis.example.com/.within.website/?redir=$scheme://$host$request_uri"
    
    # TLS certificate (using cert-manager)
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-app
                port:
                  number: 80
Anubis Ingress (for challenge page):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: anubis
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - anubis.example.com
      secretName: anubis-example-com-tls
  rules:
    - host: anubis.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: anubis
                port:
                  number: 8923

Traefik Ingress

For Traefik, use the ForwardAuth middleware (Traefik v2.10+ uses the traefik.io API group; older releases use traefik.containo.us): middleware.yaml:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: anubis
  namespace: default
spec:
  forwardAuth:
    address: http://anubis.default.svc.cluster.local:8923/.within.website/x/cmd/anubis/api/check
    trustForwardHeader: true
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-app
  namespace: default
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-anubis@kubernetescrd
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-app
                port:
                  number: 80

ConfigMap for Policy

Store custom Anubis policy in a ConfigMap: policy-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: anubis-policy
  namespace: default
data:
  policy.yaml: |
    difficulty: 15
    
    allow:
      - name: allow-health-checks
        action: ALLOW
        path:
          prefix: /healthz
    
    deny:
      - name: block-bad-bots
        action: DENY
        user_agent:
          regex: "(scrapy|curl|wget)"
Mount in deployment:
spec:
  template:
    spec:
      containers:
        - name: anubis
          env:
            - name: POLICY_FNAME
              value: /etc/anubis/policy.yaml
          volumeMounts:
            - name: policy
              mountPath: /etc/anubis
      volumes:
        - name: policy
          configMap:
            name: anubis-policy
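Pods read the policy file at startup, so editing the ConfigMap alone does not take effect until they restart. One common pattern (a sketch; the annotation name here is arbitrary) is to stamp a checksum of the policy into the pod template so that any policy change rolls the Deployment:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Bump whenever policy.yaml changes, e.g. from CI via: sha256sum policy.yaml
        checksum/policy: "<sha256 of policy.yaml>"  # placeholder value
```

Alternatively, simply run kubectl rollout restart deployment/anubis after updating the ConfigMap.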

Multiple Replicas

Anubis supports horizontal scaling:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anubis
spec:
  replicas: 3  # Run 3 instances
  # ...
Critical requirements:
  1. All replicas must use the same signing key (via Secret)
  2. Use a shared store for rate limiting (Redis/Valkey)
With Redis store:
env:
  - name: STORE_BACKEND
    value: redis
  - name: STORE_REDIS_ADDR
    value: redis.default.svc.cluster.local:6379

Redis Deployment

For shared state across Anubis replicas:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379

Namespace Isolation

Deploy Anubis in a dedicated namespace:
kubectl create namespace anubis-system
Update service references in Ingress:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "http://anubis.anubis-system.svc.cluster.local:8923/.within.website/x/cmd/anubis/api/check"
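The manifests above all use namespace: default; when moving to a dedicated namespace, update each one (the Secret and ConfigMap are namespaced too). For example:

```yaml
metadata:
  name: anubis
  namespace: anubis-system  # was: default
```

If you use Traefik, the middleware reference also changes, since it encodes the namespace: anubis-system-anubis@kubernetescrd.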

Monitoring

Prometheus ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: anubis
  namespace: default
spec:
  selector:
    matchLabels:
      app: anubis
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s

Grafana Dashboard

Anubis exports Prometheus metrics on port 9090:
  • anubis_challenges_total - Total challenges issued
  • anubis_challenges_passed - Challenges successfully solved
  • anubis_challenges_failed - Failed challenge attempts
  • anubis_requests_total - Total requests processed
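A convenient way to chart a challenge pass rate from these counters is a recording rule. A sketch, assuming the metric names listed above and the Prometheus Operator's PrometheusRule CRD:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: anubis
  namespace: default
spec:
  groups:
    - name: anubis
      rules:
        # Fraction of issued challenges that were solved, over 5 minutes
        - record: anubis:challenge_pass_ratio:5m
          expr: |
            rate(anubis_challenges_passed[5m])
              / rate(anubis_challenges_total[5m])
```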

Resource Limits

Recommended resource limits:
resources:
  limits:
    cpu: 1000m      # 1 CPU core max
    memory: 512Mi   # 512 MB RAM max
  requests:
    cpu: 250m       # 0.25 CPU cores minimum
    memory: 128Mi   # 128 MB RAM minimum
Adjust based on traffic volume.

Autoscaling

Horizontal Pod Autoscaler based on CPU:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: anubis
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: anubis
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

Network Policies

Restrict traffic to Anubis:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: anubis
spec:
  podSelector:
    matchLabels:
      app: anubis
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow from Ingress Controller
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - port: 8923
    # Allow from Prometheus
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - port: 9090
  egress:
    # Allow to Redis
    - to:
        - podSelector:
            matchLabels:
              app: redis
      ports:
        - port: 6379
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP

Troubleshooting

Check Logs

# View Anubis logs
kubectl logs -l app=anubis -f

# Check specific pod
kubectl logs anubis-7d9f8b5c6-abcde

Test Auth Request

# Port-forward to Anubis
kubectl port-forward svc/anubis 8923:8923

# Test auth endpoint
curl http://localhost:8923/.within.website/x/cmd/anubis/api/check

Verify Ingress Annotations

# Check Ingress configuration
kubectl describe ingress protected-app

# View nginx Ingress logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx

Common Issues

503 Service Unavailable:
  • Check Anubis pods are running: kubectl get pods -l app=anubis
  • Verify service endpoints: kubectl get endpoints anubis
  • Check health probe status: kubectl describe pod <anubis-pod>
Redirect loops:
  • Verify REDIRECT_DOMAINS includes your domain
  • Check PUBLIC_URL matches Anubis ingress hostname
  • Ensure COOKIE_DOMAIN is correct
Different challenge on each request:
  • All replicas need the same signing key
  • Use Redis for shared state

Production Checklist

  • Use dedicated namespace
  • Set resource limits
  • Configure autoscaling
  • Use Redis/Valkey for multi-replica deployments
  • Store signing key in Secret
  • Enable TLS with cert-manager
  • Configure network policies
  • Set up Prometheus monitoring
  • Configure log aggregation
  • Test failover scenarios
  • Document REDIRECT_DOMAINS and PUBLIC_URL

Complete Example

See the test configuration at test/nginx-external-auth/ in the Anubis repository for a working Kubernetes deployment example with:
  • Deployment with sidecar pattern
  • Service configuration
  • Ingress with external auth
  • ConfigMap for nginx config
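To apply the pieces in this guide as one unit, a Kustomization works well. A sketch, assuming the filenames used in the snippets above:

```yaml
# kustomization.yaml (hypothetical; filenames match the snippets in this guide)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - policy-configmap.yaml
```

Then deploy everything with kubectl apply -k .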
