ChartDB can be configured through environment variables at both build time and runtime. This guide covers all available configuration options.

Environment Variables

ChartDB uses Vite for building, which means environment variables are handled differently at build time versus runtime.

Build-Time vs Runtime

Build-time variables are embedded into the compiled JavaScript during the build process. They are prefixed with VITE_:
# NPM build
VITE_OPENAI_API_KEY=sk-your-key npm run build

# Docker build
docker build --build-arg VITE_OPENAI_API_KEY=sk-your-key -t chartdb .
Build-time configuration is permanent and cannot be changed without rebuilding the application. Runtime variables (the same names without the VITE_ prefix) are injected when the container starts and can be changed on every run.

Configuration Options

AI Configuration

Configure AI features for DDL script generation and intelligent operations.

OpenAI API Key

Use OpenAI’s GPT models for AI features:
docker run -e OPENAI_API_KEY=sk-proj-your-key-here -p 8080:80 chartdb
Variable             | Type   | Description
OPENAI_API_KEY       | string | Your OpenAI API key (starts with sk-)
VITE_OPENAI_API_KEY  | string | Build-time version

Custom Inference Server

Use a custom LLM inference server compatible with OpenAI’s API format:
docker run \
  -e OPENAI_API_ENDPOINT=http://localhost:8000/v1 \
  -e LLM_MODEL_NAME=Qwen/Qwen2.5-32B-Instruct-AWQ \
  -p 8080:80 chartdb
Variable                  | Type   | Description
OPENAI_API_ENDPOINT       | string | Custom inference server URL (must end with /v1)
VITE_OPENAI_API_ENDPOINT  | string | Build-time version
LLM_MODEL_NAME            | string | Model identifier for the custom server
VITE_LLM_MODEL_NAME       | string | Build-time version
You must configure either OPENAI_API_KEY OR the combination of OPENAI_API_ENDPOINT + LLM_MODEL_NAME. Do not mix both configurations.
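This either/or rule can be checked before starting the container. A minimal pre-flight sketch (the check_ai_config function is hypothetical, not part of ChartDB):

```shell
# Hypothetical pre-flight check for the rule above: configure exactly one of
# OPENAI_API_KEY or the pair OPENAI_API_ENDPOINT + LLM_MODEL_NAME.
check_ai_config() {
    if [ -n "$OPENAI_API_KEY" ] && [ -n "$OPENAI_API_ENDPOINT" ]; then
        echo "invalid: do not mix both configurations"
    elif [ -n "$OPENAI_API_KEY" ]; then
        echo "ok: OpenAI"
    elif [ -n "$OPENAI_API_ENDPOINT" ] && [ -n "$LLM_MODEL_NAME" ]; then
        echo "ok: custom inference server"
    else
        echo "invalid: incomplete AI configuration"
    fi
}

OPENAI_API_KEY=sk-demo
check_ai_config    # → ok: OpenAI
```

The same logic applies to the VITE_-prefixed build-time variants.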

Supported LLM Servers

ChartDB works with any OpenAI-compatible API. Popular options include:

vLLM

High-performance inference server for large language models
http://localhost:8000/v1

LocalAI

Drop-in replacement API for running LLMs locally
http://localhost:8080/v1

Ollama

Easy-to-use local LLM server with OpenAI-compatible endpoints
http://localhost:11434/v1

LM Studio

Desktop application for running LLMs with API server
http://localhost:1234/v1

UI Configuration

Hide ChartDB Cloud

Remove references to ChartDB Cloud for a fully self-hosted experience:
docker run -e HIDE_CHARTDB_CLOUD=true -p 8080:80 chartdb
Variable                 | Type    | Default | Description
HIDE_CHARTDB_CLOUD       | boolean | false   | Hides ChartDB Cloud branding and links
VITE_HIDE_CHARTDB_CLOUD  | boolean | false   | Build-time version
When enabled, this removes:
  • ChartDB Cloud promotional banners
  • Links to the hosted version
  • Cloud-specific UI elements

Analytics Configuration

Disable Analytics

ChartDB includes privacy-focused analytics via Fathom Analytics. You can disable this:
docker run -e DISABLE_ANALYTICS=true -p 8080:80 chartdb
Variable                | Type    | Default | Description
DISABLE_ANALYTICS       | boolean | false   | Completely disables Fathom Analytics
VITE_DISABLE_ANALYTICS  | boolean | false   | Build-time version
ChartDB uses Fathom Analytics, which is privacy-focused and GDPR compliant. No personal data is collected. However, you can disable it entirely for internal deployments.

Internal Configuration

These variables are used internally and generally don’t need to be changed:
Variable            | Type    | Description
VITE_IS_CHARTDB_IO  | boolean | Identifies the official ChartDB.io deployment
VITE_APP_URL        | string  | Application URL for canonical links
VITE_HOST_URL       | string  | Host URL for API requests

Configuration Examples

Minimal Setup

Basic ChartDB deployment without AI features:
docker run -p 8080:80 ghcr.io/chartdb/chartdb:latest

OpenAI Integration

Full-featured setup with OpenAI:
docker run \
  -e OPENAI_API_KEY=sk-proj-your-key-here \
  -e DISABLE_ANALYTICS=true \
  -p 8080:80 \
  ghcr.io/chartdb/chartdb:latest

Self-Hosted with Local LLM

Complete self-hosted setup with local vLLM. Note that the runtime endpoint uses host.docker.internal so the container can reach a vLLM server running on the Docker host:
docker build \
  --build-arg VITE_OPENAI_API_ENDPOINT=http://localhost:8000/v1 \
  --build-arg VITE_LLM_MODEL_NAME=Qwen/Qwen2.5-32B-Instruct-AWQ \
  --build-arg VITE_HIDE_CHARTDB_CLOUD=true \
  --build-arg VITE_DISABLE_ANALYTICS=true \
  -t chartdb-selfhosted .

docker run \
  -e OPENAI_API_ENDPOINT=http://host.docker.internal:8000/v1 \
  -e LLM_MODEL_NAME=Qwen/Qwen2.5-32B-Instruct-AWQ \
  -e HIDE_CHARTDB_CLOUD=true \
  -e DISABLE_ANALYTICS=true \
  -p 8080:80 \
  chartdb-selfhosted

Enterprise Setup

Production-ready configuration:
docker-compose.yml
version: '3.8'

services:
  chartdb:
    image: ghcr.io/chartdb/chartdb:latest
    container_name: chartdb
    ports:
      - "8080:80"
    environment:
      # AI Configuration (choose one)
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      # OR use custom inference server:
      # - OPENAI_API_ENDPOINT=${OPENAI_API_ENDPOINT}
      # - LLM_MODEL_NAME=${LLM_MODEL_NAME}
      
      # UI Configuration
      - HIDE_CHARTDB_CLOUD=true
      
      # Privacy Configuration
      - DISABLE_ANALYTICS=true
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:80"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

Configuration File Reference

package.json Scripts

ChartDB’s package.json includes several build and development scripts:
{
  "scripts": {
    "dev": "vite",
    "build": "npm run lint && tsc -b && vite build",
    "lint": "eslint . --report-unused-disable-directives --max-warnings 0",
    "lint:fix": "npm run lint -- --fix",
    "preview": "vite preview",
    "test": "vitest",
    "test:ci": "vitest run --reporter=verbose --bail=1",
    "test:coverage": "vitest --coverage"
  }
}
dev starts the development server, build runs linting and type-checking before the production Vite build, and preview serves the built output locally.

Dockerfile Build Arguments

The Dockerfile accepts these build arguments:
ARG VITE_OPENAI_API_KEY           # OpenAI API key
ARG VITE_OPENAI_API_ENDPOINT      # Custom inference endpoint
ARG VITE_LLM_MODEL_NAME           # Custom model name
ARG VITE_HIDE_CHARTDB_CLOUD       # Hide cloud references
ARG VITE_DISABLE_ANALYTICS        # Disable analytics

Nginx Configuration

ChartDB uses a custom Nginx configuration template that dynamically injects environment variables:
location /config.js {
    default_type application/javascript;
    return 200 "window.env = {
        OPENAI_API_KEY: \"$OPENAI_API_KEY\",
        OPENAI_API_ENDPOINT: \"$OPENAI_API_ENDPOINT\",
        LLM_MODEL_NAME: \"$LLM_MODEL_NAME\",
        HIDE_CHARTDB_CLOUD: \"$HIDE_CHARTDB_CLOUD\",
        DISABLE_ANALYTICS: \"$DISABLE_ANALYTICS\"
    };";
}
This allows runtime configuration in Docker without rebuilding.
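The effect of that template can be sketched in plain shell. This is illustrative only: render_config_js is a made-up helper, and the real image may use a different templating mechanism at startup:

```shell
# Render a config.js body from the current environment, mimicking the
# Nginx template above. Unset variables render as empty strings.
render_config_js() {
    cat <<EOF
window.env = {
    OPENAI_API_KEY: "${OPENAI_API_KEY}",
    OPENAI_API_ENDPOINT: "${OPENAI_API_ENDPOINT}",
    LLM_MODEL_NAME: "${LLM_MODEL_NAME}",
    HIDE_CHARTDB_CLOUD: "${HIDE_CHARTDB_CLOUD}",
    DISABLE_ANALYTICS: "${DISABLE_ANALYTICS}"
};
EOF
}

HIDE_CHARTDB_CLOUD=true
render_config_js
```

Because the values are substituted when the response is generated, changing a container environment variable and restarting is enough to change what the browser sees.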

Environment Variable Loading

ChartDB loads environment variables in the following order (later sources override earlier ones):
1. Build-time variables: variables prefixed with VITE_ are embedded during npm run build.
2. Docker build arguments: --build-arg values are passed to the build stage.
3. Docker runtime environment: -e flags set environment variables in the container.
4. Nginx runtime injection: the entrypoint script creates /config.js with current values.
5. Application runtime: ChartDB reads from window.env (runtime) or import.meta.env (build-time).
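The "later sources override earlier ones" rule reduces to a runtime-wins fallback. A tiny shell sketch of that resolution (resolve is an illustrative name, roughly mirroring the window.env-then-import.meta.env lookup):

```shell
# A non-empty runtime value shadows the build-time default.
resolve() {
    runtime="$1"
    buildtime="$2"
    if [ -n "$runtime" ]; then echo "$runtime"; else echo "$buildtime"; fi
}

resolve "" "sk-build-time"              # → sk-build-time
resolve "sk-runtime" "sk-build-time"    # → sk-runtime
```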

Code Reference

Environment variables are loaded in src/lib/env.ts:
export const OPENAI_API_KEY: string =
    window?.env?.OPENAI_API_KEY ?? import.meta.env.VITE_OPENAI_API_KEY;
export const OPENAI_API_ENDPOINT: string =
    window?.env?.OPENAI_API_ENDPOINT ?? import.meta.env.VITE_OPENAI_API_ENDPOINT;
export const LLM_MODEL_NAME: string =
    window?.env?.LLM_MODEL_NAME ?? import.meta.env.VITE_LLM_MODEL_NAME;
export const HIDE_CHARTDB_CLOUD: boolean =
    (window?.env?.HIDE_CHARTDB_CLOUD ?? import.meta.env.VITE_HIDE_CHARTDB_CLOUD) === 'true';
export const DISABLE_ANALYTICS: boolean =
    (window?.env?.DISABLE_ANALYTICS ?? import.meta.env.VITE_DISABLE_ANALYTICS) === 'true';
Each export checks window.env first (runtime) before falling back to import.meta.env (build-time). This runtime-first lookup is what lets Docker environment variables take effect without rebuilding.
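Note also the === 'true' comparison: the boolean flags are string-compared, so only the exact lowercase value true enables them. A shell stand-in for that coercion rule:

```shell
# Only the exact string 'true' counts as enabled; 'TRUE', '1', or an
# unset variable all fall through to disabled.
is_enabled() {
    if [ "$1" = "true" ]; then echo enabled; else echo disabled; fi
}

is_enabled true    # → enabled
is_enabled TRUE    # → disabled
is_enabled ""      # → disabled
```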

Validation

Check Current Configuration

Verify your configuration is loaded correctly:
# In Docker container
docker exec <container-id> cat /etc/nginx/conf.d/default.conf

# Check environment variables
docker exec <container-id> env | grep -E 'OPENAI|LLM|HIDE|DISABLE'

# Test config.js endpoint
curl http://localhost:8080/config.js

Debug Configuration Issues

Ensure you’re using the correct variable names:
  • Build time: Use VITE_ prefix with --build-arg
  • Runtime: No prefix with -e flag
Check the application code at src/lib/env.ts:1-15 to see how variables are loaded.
Verify your AI configuration:
# Check if API key is set
docker exec <container-id> env | grep OPENAI_API_KEY

# Or check custom endpoint
docker exec <container-id> env | grep -E 'OPENAI_API_ENDPOINT|LLM_MODEL_NAME'
Remember: Use either OPENAI_API_KEY or OPENAI_API_ENDPOINT + LLM_MODEL_NAME, not both.
Ensure you’re using --build-arg (not -e) and the VITE_ prefix:
# Correct
docker build --build-arg VITE_OPENAI_API_KEY=sk-key -t chartdb .

# Incorrect
docker build -e OPENAI_API_KEY=sk-key -t chartdb .

Security Considerations

Never commit sensitive values to version control
  • Don’t hardcode API keys in configuration files
  • Don’t commit .env files with secrets
  • Use environment variables or secret management systems
  • Rotate API keys regularly
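One way to follow these rules is to keep the key in a file and read it at launch time, so it never appears in shell history or committed compose files. A sketch (the file path and key value are placeholders):

```shell
# Load the API key from a file instead of hardcoding it.
key_file=$(mktemp)
printf 'sk-demo-not-a-real-key' > "$key_file"

OPENAI_API_KEY=$(cat "$key_file")
echo "key loaded: ${#OPENAI_API_KEY} characters"   # → key loaded: 22 characters
rm -f "$key_file"
```

The loaded value can then be passed to the container with -e OPENAI_API_KEY="$OPENAI_API_KEY", or supplied through an env_file as in the Enterprise Setup example.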

Best Practices

1. Use runtime environment variables

Prefer runtime configuration over build-time for sensitive values:
# Good: Runtime configuration
docker run -e OPENAI_API_KEY=sk-key chartdb

# Less secure: Build-time configuration
docker build --build-arg VITE_OPENAI_API_KEY=sk-key -t chartdb .

2. Use secret management

For production, use Docker secrets or environment variable files:
services:
  chartdb:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    env_file:
      - .env.production

3. Restrict API key permissions

Create API keys with minimal required permissions for ChartDB's AI features.

4. Monitor API usage

Set up billing alerts and usage monitoring for external API services.

Next Steps

Docker Deployment

Learn how to deploy ChartDB with Docker

AI Setup

Detailed guide to configuring AI features
