Deploy S2 Lite to Kubernetes using the official Helm chart. The chart supports deployment scenarios ranging from development to production.

Prerequisites

  • Kubernetes 1.19+
  • Helm 3.0+
  • (Optional) S3-compatible object storage for persistent data

Quick Start

1. Add the Helm repository

helm repo add s2 https://s2-streamstore.github.io/s2
helm repo update
2. Install with default settings

The default installation runs in-memory mode (great for testing):
helm install my-s2-lite s2/s2-lite-helm
3. Verify the installation

# Check the pod status
kubectl get pods -l app.kubernetes.io/name=s2-lite

# Check the service
kubectl get svc -l app.kubernetes.io/name=s2-lite

# Port forward to access locally
kubectl port-forward svc/s2-lite 8080:80

# Test the health endpoint
curl http://localhost:8080/health

Installation from OCI Registry

You can also install directly from GitHub Container Registry:
helm install my-s2-lite oci://ghcr.io/s2-streamstore/charts/s2-lite-helm

Storage Options

In-Memory (Default)

Perfect for development and testing:
helm install my-s2-lite s2/s2-lite-helm
Data is lost when the pod restarts. Not suitable for production.

S3-Compatible Object Storage

For production deployments with persistent data:
helm install my-s2-lite s2/s2-lite-helm \
  --set objectStorage.enabled=true \
  --set objectStorage.bucket=my-s2-bucket \
  --set objectStorage.path=s2lite
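The same configuration can be kept in a values file instead of repeated `--set` flags (these are the same keys used throughout this page):

```yaml
# values.yaml — equivalent to the --set flags above
objectStorage:
  enabled: true
  bucket: my-s2-bucket
  path: s2lite
```

Then install with `helm install my-s2-lite s2/s2-lite-helm -f values.yaml`.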

Configuration Examples

AWS S3 with IAM Role (IRSA)

Recommended for EKS deployments:
1. Create IAM policy and role

Create an IAM policy with S3 access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-s2-bucket",
        "arn:aws:s3:::my-s2-bucket/*"
      ]
    }
  ]
}
Create an IAM role with OIDC trust relationship for your EKS cluster.
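As a sketch, an IRSA trust policy generally takes the following shape. The OIDC provider ID, region, account ID, namespace, and service account name below are placeholders for illustration; substitute your cluster's values and the service account name the chart actually creates:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:default:s2-lite"
        }
      }
    }
  ]
}
```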
2. Create values file

values.yaml
objectStorage:
  enabled: true
  bucket: my-s2-bucket
  path: s2lite

serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s2-lite-role
3. Install with values

helm install my-s2-lite s2/s2-lite-helm -f values.yaml

With Static Credentials

For non-AWS S3-compatible storage (MinIO, Tigris, R2, etc.):
1. Create a secret

kubectl create secret generic s2-lite-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=your-access-key \
  --from-literal=AWS_SECRET_ACCESS_KEY=your-secret-key
2. Create values file

values.yaml
objectStorage:
  enabled: true
  bucket: my-bucket
  endpoint: https://fly.storage.tigris.dev

env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: s2-lite-credentials
        key: AWS_ACCESS_KEY_ID
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: s2-lite-credentials
        key: AWS_SECRET_ACCESS_KEY
3. Install

helm install my-s2-lite s2/s2-lite-helm -f values.yaml

With TLS

Self-Signed Certificate

For development/testing:
values.yaml
tls:
  enabled: true
  selfSigned: true

service:
  type: LoadBalancer
helm install my-s2-lite s2/s2-lite-helm -f values.yaml
Clients will need to pass --insecure or configure their TLS settings to trust the self-signed certificate.

Provided Certificate

For production with valid certificates:
1. Create TLS secret

kubectl create secret tls s2-lite-tls \
  --cert=tls.crt \
  --key=tls.key
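If cert-manager is installed in the cluster, the same secret can be issued automatically instead of created by hand. A minimal sketch, assuming a ClusterIssuer named letsencrypt-prod and the hostname s2-lite.example.com (both are assumptions; adjust to your setup):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: s2-lite-tls
spec:
  secretName: s2-lite-tls   # secret consumed by the values file below
  dnsNames:
    - s2-lite.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
```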
2. Create values file

values.yaml
tls:
  enabled: true
  cert: /etc/tls/tls.crt
  key: /etc/tls/tls.key

volumeMounts:
  - name: tls-certs
    mountPath: /etc/tls
    readOnly: true

volumes:
  - name: tls-certs
    secret:
      secretName: s2-lite-tls
3. Install

helm install my-s2-lite s2/s2-lite-helm -f values.yaml

With Ingress

Expose S2 Lite via an Ingress controller:
values.yaml
service:
  type: ClusterIP

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: s2-lite.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: s2-lite-tls
      hosts:
        - s2-lite.example.com
helm install my-s2-lite s2/s2-lite-helm -f values.yaml

Behind AWS Network Load Balancer

values.yaml
service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    external-dns.alpha.kubernetes.io/hostname: "s2.example.com"

objectStorage:
  enabled: true
  bucket: my-s2-bucket

serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s2-lite-role

Monitoring with Prometheus

Enable Prometheus monitoring:
values.yaml
metrics:
  serviceMonitor:
    enabled: true
    interval: 30s
    scrapeTimeout: 10s
    labels:
      release: prometheus  # Match your Prometheus operator label
helm install my-s2-lite s2/s2-lite-helm -f values.yaml
Requires Prometheus Operator to be installed in your cluster.
For TLS-enabled deployments, configure TLS scraping:
values.yaml
metrics:
  serviceMonitor:
    enabled: true
    tlsConfig:
      insecureSkipVerify: true  # For self-signed certs
      # Or for CA-signed certs:
      # ca:
      #   secret:
      #     name: s2-lite-tls
      #     key: tls.crt
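With the ServiceMonitor in place, a basic availability alert can be added via a PrometheusRule. A minimal sketch; the `job` label value depends on how the chart names its service, so verify it against your scraped targets, and match the `release` label to your Prometheus operator's rule selector:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: s2-lite-alerts
  labels:
    release: prometheus  # must match the operator's ruleSelector
spec:
  groups:
    - name: s2-lite
      rules:
        - alert: S2LiteDown
          expr: up{job="s2-lite"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: S2 Lite scrape target has been down for 5 minutes
```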

Resource Configuration

Set resource requests and limits:
values.yaml
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 2000m
    memory: 2Gi

Pod Disruption Budget

Protect against voluntary disruptions:
values.yaml
podDisruptionBudget:
  enabled: true
  maxUnavailable: 1
S2 Lite runs as a single instance. With replicaCount: 1, the maxUnavailable: 1 setting above still permits the single pod to be evicted; a PDB with minAvailable: 1 would instead block voluntary eviction entirely, causing node drains to hang. Use carefully.
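If you would rather guarantee the pod is never voluntarily evicted, at the cost of blocking node drains, the standard Kubernetes alternative is minAvailable. This fragment assumes the chart passes the field through to the PDB spec verbatim; check the chart's values.yaml before relying on it:

```yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 1  # voluntary eviction of the single pod is refused; drains hang until the PDB is removed
```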

Advanced Configuration

SlateDB Settings

Configure SlateDB parameters via environment variables:
values.yaml
env:
  # Flush interval (default: 50ms for S3, 5ms for in-memory)
  - name: SL8_FLUSH_INTERVAL
    value: "50ms"
  # Manifest poll interval
  - name: SL8_MANIFEST_POLL_INTERVAL
    value: "1s"
See the SlateDB settings reference for all options.

Enable Pipelining

For better performance (currently disabled by default for safety):
values.yaml
env:
  - name: S2LITE_PIPELINE
    value: "true"

Init File for Resources

Pre-create basins and streams at startup:
1. Create ConfigMap

kubectl create configmap s2-lite-init \
  --from-file=resources.json=resources.json
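The schema of resources.json is defined by S2 Lite itself, not by the chart. As a purely hypothetical illustration of the idea (the field names below are invented; consult the S2 Lite init-file reference for the actual schema), a file pre-creating one basin with one stream might look like:

```json
{
  "basins": [
    {
      "name": "my-basin",
      "streams": ["my-stream"]
    }
  ]
}
```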
2. Configure values

values.yaml
env:
  - name: S2LITE_INIT_FILE
    value: /config/resources.json

volumeMounts:
  - name: init-config
    mountPath: /config
    readOnly: true

volumes:
  - name: init-config
    configMap:
      name: s2-lite-init

Configuration Reference

Common Helm values:

| Parameter | Description | Default |
|-----------|-------------|---------|
| replicaCount | Number of replicas (must be 1) | 1 |
| image.repository | Image repository | ghcr.io/s2-streamstore/s2 |
| image.tag | Image tag | Chart appVersion |
| service.type | Service type | ClusterIP |
| service.port | Service port | 80 |
| service.targetPort | Container port | 8080 |
| tls.enabled | Enable TLS | false |
| tls.selfSigned | Use self-signed cert | false |
| objectStorage.enabled | Enable object storage | false |
| objectStorage.bucket | S3 bucket name | "" |
| metrics.serviceMonitor.enabled | Enable ServiceMonitor | false |
| resources | Resource requests/limits | {} |

See the values.yaml for all options.

Upgrading

# Update the repository
helm repo update

# Upgrade to the latest version
helm upgrade my-s2-lite s2/s2-lite-helm

# Or with custom values
helm upgrade my-s2-lite s2/s2-lite-helm -f values.yaml
S2 Lite uses a Recreate deployment strategy. Upgrades will cause brief downtime while the old pod terminates and the new one starts.

Uninstalling

helm uninstall my-s2-lite
Uninstalling will not delete data in object storage. Your S3 bucket remains intact.

Troubleshooting

Pod Not Starting

Check pod events:
kubectl describe pod -l app.kubernetes.io/name=s2-lite

Check Logs

kubectl logs -l app.kubernetes.io/name=s2-lite --follow

Health Check Failures

Test the health endpoint:
kubectl port-forward svc/s2-lite 8080:80
curl http://localhost:8080/health

Object Storage Connection Issues

Verify credentials and permissions:
# Check environment variables
kubectl exec -it <pod-name> -- env | grep AWS

# Test S3 access (requires aws-cli in debug image)
kubectl exec -it <pod-name> -- aws s3 ls s3://my-s2-bucket/
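If the S2 Lite image does not ship the AWS CLI, a throwaway debug pod can run the same check. A sketch using the public amazon/aws-cli image and the static-credentials secret from earlier (the bucket and secret names are assumptions; for IRSA deployments, set the pod's serviceAccountName to the chart's service account instead of mounting the secret):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s3-debug
spec:
  restartPolicy: Never
  containers:
    - name: aws-cli
      image: amazon/aws-cli        # entrypoint is `aws`, so args are subcommands
      args: ["s3", "ls", "s3://my-s2-bucket/"]
      envFrom:
        - secretRef:
            name: s2-lite-credentials  # provides AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
```

Apply it and read the result with `kubectl apply -f s3-debug.yaml` followed by `kubectl logs s3-debug`, then delete the pod.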

Next Steps

Monitoring

Set up monitoring and alerts

Configuration

Detailed configuration reference
