This guide covers deploying Flyte on an on-premises Kubernetes cluster (bare metal, VMware, OpenShift, or local k3s/k3d) without relying on cloud-managed services. MinIO provides the S3-compatible object store, and PostgreSQL runs either as a Kubernetes deployment or an external service.
A community-maintained tutorial for setting up the required dependencies and deploying the flyte-binary chart to a local Kubernetes cluster is available at the flyte-the-hard-way repository.

Architecture

┌─────────────────────────────────────────────────┐
│  Kubernetes Cluster                             │
│  ┌───────────┐  ┌──────────────┐  ┌──────────┐  │
│  │ flyte-    │  │ minio        │  │ postgres │  │
│  │ binary    │  │ (S3-compat)  │  │          │  │
│  └───────────┘  └──────────────┘  └──────────┘  │
└─────────────────────────────────────────────────┘

Prerequisites

  • A running Kubernetes cluster (v1.25+)
  • Helm 3 installed
  • kubectl configured with cluster access
  • Sufficient node resources: at minimum 4 CPU and 8 GB RAM for the Flyte services plus task workloads
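Before proceeding, a quick sanity check of the tooling and cluster access can save debugging later (a sketch; exact output formats vary by kubectl and Helm version):

```shell
# Verify client tools and cluster connectivity
kubectl version           # client and server versions (server line requires cluster access)
helm version              # expect Helm v3.x
kubectl get nodes -o wide # confirm nodes are Ready and note allocatable CPU/RAM
```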

Deploy MinIO

MinIO provides an S3-compatible object store. Use the MinIO Operator or a standalone StatefulSet.

Standalone MinIO (for development/testing)

# minio.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: minio-ns
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
  namespace: minio-ns
spec:
  selector:
    matchLabels:
      app: minio
  serviceName: minio
  replicas: 1
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio:RELEASE.2024-01-01T00-00-00Z
          args: ["server", "/data"]
          env:
            - name: MINIO_ROOT_USER
              value: minio
            - name: MINIO_ROOT_PASSWORD
              value: miniostorage
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio-ns
spec:
  selector:
    app: minio
  ports:
    - port: 9000
      targetPort: 9000

kubectl apply -f minio.yaml

# Create the bucket
kubectl exec -n minio-ns minio-0 -- \
  mc alias set local http://localhost:9000 minio miniostorage
kubectl exec -n minio-ns minio-0 -- \
  mc mb local/my-flyte-bucket
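To confirm the bucket was created, list it through the same alias. The `mc alias set` call above writes its configuration inside the running container's filesystem, so separate `kubectl exec` invocations against the same pod share it:

```shell
# List buckets through the "local" alias created above;
# expect an entry for my-flyte-bucket
kubectl exec -n minio-ns minio-0 -- mc ls local
```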

Deploy PostgreSQL

Standalone PostgreSQL (for development/testing)

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql \
  --namespace flyte \
  --create-namespace \
  --set auth.postgresPassword=postgres \
  --set auth.database=flyteadmin

For production on-premises deployments, use an externally managed PostgreSQL instance with regular backups, replication, and connection pooling via PgBouncer.
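Whichever option you choose, it is worth verifying that the database is reachable from inside the cluster before installing Flyte. A throwaway psql client pod works for the in-cluster chart above (hostname and credentials match the Helm values; adjust them for an external instance):

```shell
# One-off client pod; removed automatically on exit
kubectl run pg-check --rm -it --restart=Never \
  --namespace flyte \
  --image=bitnami/postgresql \
  --env="PGPASSWORD=postgres" \
  -- psql -h postgres-postgresql.flyte.svc.cluster.local \
       -U postgres -d flyteadmin -c '\conninfo'
```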

Install Flyte

values-onprem.yaml

configuration:
  database:
    username: postgres
    password: postgres
    host: postgres-postgresql.flyte.svc.cluster.local
    port: 5432
    dbname: flyteadmin
    options: sslmode=disable

  storage:
    metadataContainer: my-flyte-bucket
    userDataContainer: my-flyte-bucket
    provider: s3
    providerConfig:
      s3:
        disableSSL: true
        v2Signing: true
        endpoint: http://minio.minio-ns.svc.cluster.local:9000
        authType: accesskey
        accessKey: minio
        secretKey: miniostorage
        region: us-east-1    # Can be any value for MinIO

  logging:
    level: 5
    plugins:
      kubernetes:
        enabled: true
        templateUri: "http://localhost:30080/kubernetes-dashboard/#/log/{{.namespace}}/{{ .podName }}/pod?namespace={{.namespace}}"

  inline:
    storage:
      signedURL:
        stowConfigOverride:
          endpoint: http://localhost:30002  # For pre-signed URL generation
    plugins:
      k8s:
        default-env-vars:
          - FLYTE_AWS_ENDPOINT: http://minio.minio-ns.svc.cluster.local:9000
          - FLYTE_AWS_ACCESS_KEY_ID: minio
          - FLYTE_AWS_SECRET_ACCESS_KEY: miniostorage
    task_resources:
      defaults:
        cpu: 500m
        memory: 500Mi
      limits:
        cpu: 4
        memory: 4Gi
    tasks:
      task-plugins:
        enabled-plugins:
          - container
          - sidecar
          - K8S-ARRAY
          - echo
        default-for-task-types:
          - container: container
          - container_array: K8S-ARRAY

clusterResourceTemplates:
  inline:
    001_namespace.yaml: |
      apiVersion: v1
      kind: Namespace
      metadata:
        name: '{{ namespace }}'

ingress:
  create: false   # Set to true if you have an Ingress controller

helm repo add flyteorg https://flyteorg.github.io/flyte
helm install flyte-backend flyteorg/flyte-binary \
  --namespace flyte \
  --create-namespace \
  --values values-onprem.yaml
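After the install completes, confirm the chart's resources came up before moving on. The label selector below assumes the chart's standard Helm labels, and resource names derive from the release name, so they may be prefixed with flyte-backend- in your cluster:

```shell
# Wait for the Flyte pod to become Ready, then inspect its services
kubectl -n flyte get pods
kubectl -n flyte wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=flyte-binary --timeout=300s
kubectl -n flyte get svc
```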

Access FlyteConsole

Without Ingress, use a NodePort service or port-forward:
kubectl -n flyte port-forward service/flyte-binary 8088:8088 8089:8089
Then open http://localhost:8088/console.
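With the port-forward active, you can also point flytekit at the forwarded gRPC port and run workflows remotely. Below is a minimal client config sketch; ~/.flyte/config.yaml is the default location flytekit and flytectl read, and example.py / my_workflow are placeholders for your own code:

```shell
# Write a minimal Flyte client configuration
mkdir -p ~/.flyte
cat > ~/.flyte/config.yaml <<'EOF'
admin:
  endpoint: dns:///localhost:8089
  insecure: true
EOF
# With flytekit installed (pip install flytekit), run a workflow remotely:
# pyflyte run --remote example.py my_workflow
```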

Persistent volume considerations

  • Flyte itself is stateless; all state is in PostgreSQL and MinIO
  • PostgreSQL and MinIO need ReadWriteOnce PVCs backed by your storage class
  • On bare metal, consider Longhorn or OpenEBS for dynamic provisioning
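To see what dynamic provisioning is available, and which class the PVCs above will bind to by default:

```shell
# The default storage class is marked "(default)"
kubectl get storageclass
kubectl get pvc -A   # confirm the MinIO and PostgreSQL claims are Bound
```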

Security considerations for on-premises

  • MinIO should not be exposed outside the cluster without TLS and access control
  • Use network policies to restrict traffic to minio-ns and flyte namespaces
  • Set a strong MINIO_ROOT_PASSWORD and rotate credentials via Kubernetes Secrets
  • Enable authentication before exposing FlyteConsole externally
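As one concrete hardening step, the plaintext MinIO credentials in the StatefulSet above can be replaced with a generated Secret referenced via envFrom. This is a sketch; the Secret name is illustrative:

```shell
# Generate a random root password and store both credentials in a Secret
kubectl -n minio-ns create secret generic minio-root-credentials \
  --from-literal=MINIO_ROOT_USER=minio \
  --from-literal=MINIO_ROOT_PASSWORD="$(openssl rand -base64 24)"

# In the StatefulSet, replace the literal env entries with:
#   envFrom:
#     - secretRef:
#         name: minio-root-credentials
```

Remember to update the accessKey/secretKey values in values-onprem.yaml to match whatever credentials you generate.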
