
Overview

TikTok Miner provides a complete Docker setup with multi-service orchestration via Docker Compose. The application includes PostgreSQL, Redis, the main web application, a background worker, and optional services for development and production environments.

Architecture

The Docker setup includes the following services:
  • postgres: PostgreSQL 14 database with TimescaleDB support
  • redis: Redis 7 for job queues and caching
  • app: Main Next.js application
  • worker: Background job processor for discovery pipelines
  • nginx: Reverse proxy (production profile)
  • pgadmin: Database management UI (development profile)

Quick Start

Prerequisites

  • Docker 20.10 or higher
  • Docker Compose 1.29 or higher
  • At least 4GB of available RAM

Basic Setup

  1. Clone the repository and navigate to the project directory:
git clone <repository-url>
cd tiktok-miner
  2. Create the environment file:
cp app/.env.example .env
  3. Set the required environment variables in .env:
# Database
DB_PASSWORD=your_secure_password
REDIS_PASSWORD=your_redis_password

# Required API Keys
OPENAI_API_KEY=your_openai_key
GITHUB_TOKEN=your_github_token
  4. Start all services:
docker-compose up -d
  5. Check service health:
docker-compose ps
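Before running docker-compose up, it can help to confirm that the variables from step 3 are actually present in .env. The sketch below is ours, not part of the project (the helper name check_env_file is illustrative); it prints each required variable that is missing from a given file, demonstrated here on a throwaway file:

```shell
#!/bin/sh
# check_env_file: print the name of every required variable missing from
# an env file. The variable list mirrors step 3 of the Basic Setup above.
check_env_file() {
  for var in DB_PASSWORD REDIS_PASSWORD OPENAI_API_KEY GITHUB_TOKEN; do
    grep -q "^${var}=" "$1" || echo "$var"
  done
}

# Demo on a temporary file that deliberately omits GITHUB_TOKEN
tmp=$(mktemp)
printf 'DB_PASSWORD=x\nREDIS_PASSWORD=y\nOPENAI_API_KEY=z\n' > "$tmp"
missing=$(check_env_file "$tmp")
rm -f "$tmp"
echo "missing: $missing"
```

Against a real setup you would call check_env_file .env and abort the deploy if it prints anything.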

Docker Compose Configuration

PostgreSQL Service

The PostgreSQL service uses the official Alpine image with health checks:
postgres:
  image: postgres:14-alpine
  container_name: tiktok-miner-postgres
  restart: unless-stopped
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: tiktok_miner
    POSTGRES_USER: tiktok_miner_user
    POSTGRES_PASSWORD: ${DB_PASSWORD:-tiktok_miner_password}
  volumes:
    - postgres_data:/var/lib/postgresql/data
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U tiktok_miner_user -d tiktok_miner"]
    interval: 10s
    timeout: 5s
    retries: 5
Key Features:
  • Automatic health monitoring
  • Persistent data storage
  • Configurable password via environment variable
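Note that the ports mapping above publishes PostgreSQL on the host, which is convenient in development. For production (see the security notes later in this guide) you can drop ports in favor of expose, so the database is reachable only from other containers on the same network. This is a hardening tweak of ours, not part of the compose file shown above:

```yaml
postgres:
  image: postgres:14-alpine
  # No "ports" mapping: the database is not published on the host.
  # Other containers on the same network still reach it at postgres:5432.
  expose:
    - "5432"
```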

Redis Service

Redis is used for job queues and caching:
redis:
  image: redis:7-alpine
  container_name: tiktok-miner-redis
  restart: unless-stopped
  ports:
    - "6379:6379"
  command: redis-server --requirepass ${REDIS_PASSWORD:-redis_password}
  volumes:
    - redis_data:/data
  healthcheck:
    test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5
Key Features:
  • Password-protected access
  • Data persistence
  • Health monitoring
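With the command shown above, Redis persists to the redis_data volume using its default RDB snapshotting, so a crash can lose the last few seconds of queued jobs. If that matters for your workload, append-only persistence can be enabled on the same command line (an optional tweak, not in the compose file above):

```yaml
command: redis-server --requirepass ${REDIS_PASSWORD:-redis_password} --appendonly yes
```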

Application Service

The main Next.js application:
app:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      DATABASE_URL: postgresql://tiktok_miner_user:${DB_PASSWORD}@postgres:5432/tiktok_miner
      DIRECT_URL: postgresql://tiktok_miner_user:${DB_PASSWORD}@postgres:5432/tiktok_miner
      NODE_ENV: production
      PORT: 3000
  container_name: tiktok-miner-app
  restart: unless-stopped
  ports:
    - "3000:3000"
  depends_on:
    postgres:
      condition: service_healthy
    redis:
      condition: service_healthy
  volumes:
    - app_uploads:/app/uploads
Key Features:
  • Waits for database and Redis to be healthy before starting
  • Automatic Prisma migrations on startup
  • Persistent upload storage

Worker Service

Background job processor for creator discovery and data scraping:
worker:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: tiktok-miner-worker
  restart: unless-stopped
  command: ["bun", "run", "worker"]
  environment:
    DATABASE_URL: postgresql://tiktok_miner_user:${DB_PASSWORD}@postgres:5432/tiktok_miner
    REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379
    NODE_ENV: production
  depends_on:
    postgres:
      condition: service_healthy
    redis:
      condition: service_healthy
Key Features:
  • Shares database with main app
  • Processes queued jobs from Redis
  • Automatic restart on failure

Dockerfile Details

The application is built from a Dockerfile based on the official Bun image:
FROM oven/bun:1.0.25 AS base

WORKDIR /app

# Install system dependencies
RUN apt-get update && \
    apt-get install -y gnupg2 debian-archive-keyring && \
    apt-get clean && \
    apt-get install -y python3 build-essential bash

# Install Node.js 20
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
    apt-get install -y nodejs

# Copy and install dependencies
COPY . .
WORKDIR /app/app
RUN bun install --verbose

# Build application
RUN bun build cli/index.ts --outdir dist --target node
RUN bunx prisma generate
RUN bun run build

EXPOSE 8080

# Startup script runs migrations then starts app
CMD ["/bin/bash", "/app/app/start.sh"]
Build Features:
  • Bun runtime for fast installation and execution
  • Node.js 20 for compatibility
  • Prisma client generation during build
  • Automatic migrations on container start
  • CLI tool compilation

Production Deployment

Using Nginx Profile

For production deployments with SSL/TLS support:
  1. Create the nginx configuration directory:
mkdir -p nginx/ssl
  2. Add your SSL certificates to nginx/ssl/
  3. Create nginx/nginx.conf:
upstream app {
  server app:3000;
}

server {
  listen 80;
  server_name your-domain.com;
  return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl http2;
  server_name your-domain.com;

  ssl_certificate /etc/nginx/ssl/cert.pem;
  ssl_certificate_key /etc/nginx/ssl/key.pem;

  location / {
    proxy_pass http://app;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
  4. Start with the production profile:
docker-compose --profile production up -d

Environment Variables for Production

Ensure these are set in your production .env:
# Database
DB_PASSWORD=<strong-password>
REDIS_PASSWORD=<strong-password>

# Application
NODE_ENV=production
NEXT_PUBLIC_APP_URL=https://your-domain.com

# API Keys (all required services)
OPENAI_API_KEY=...
GITHUB_TOKEN=...
TIKTOK_CLIENT_KEY=...
TIKTOK_CLIENT_SECRET=...

# Email
SMTP_HOST=...
SMTP_PORT=465
SMTP_USER=...
SMTP_PASSWORD=...

Resource Limits

Add resource constraints for production:
app:
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 2G
      reservations:
        cpus: '1'
        memory: 1G
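The same pattern applies to the other services. For example, a sketch of more modest limits for the worker (the exact numbers are assumptions to tune against your own job volume):

```yaml
worker:
  deploy:
    resources:
      limits:
        cpus: '1'
        memory: 1G
      reservations:
        cpus: '0.5'
        memory: 512M
```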

Development Setup

Using pgAdmin Profile

For database management during development:
docker-compose --profile development up -d
Access pgAdmin at http://localhost:5050:
  • Email: [email protected] (or set PGADMIN_EMAIL)
  • Password: admin (or set PGADMIN_PASSWORD)
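The compose entry behind this profile is not reproduced in this guide, but a typical pgAdmin service gated on a development profile looks roughly like this (the image tag, port mapping, and defaults are assumptions, chosen to match the URL and credentials above):

```yaml
pgadmin:
  image: dpage/pgadmin4:latest
  container_name: tiktok-miner-pgadmin
  restart: unless-stopped
  profiles: ["development"]
  ports:
    - "5050:80"   # pgAdmin serves on port 80 inside the container
  environment:
    PGADMIN_DEFAULT_EMAIL: ${PGADMIN_EMAIL:[email protected]}
    PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_PASSWORD:-admin}
  volumes:
    - pgadmin_data:/var/lib/pgadmin
  depends_on:
    postgres:
      condition: service_healthy
```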

Common Operations

View Logs

# All services
docker-compose logs -f

# Specific service
docker-compose logs -f app
docker-compose logs -f worker

Restart Services

# All services
docker-compose restart

# Specific service
docker-compose restart app

Run Database Migrations

docker-compose exec app bunx prisma migrate deploy

Access Database Shell

docker-compose exec postgres psql -U tiktok_miner_user -d tiktok_miner

Access Redis CLI

docker-compose exec redis redis-cli -a "$REDIS_PASSWORD"

Execute Scripts

# Run any app script
docker-compose exec app bun run scripts/your-script.ts

# Run database seed
docker-compose exec app bunx ts-node --transpile-only prisma/seed-creators.ts

Backup Database

docker-compose exec postgres pg_dump -U tiktok_miner_user tiktok_miner > backup.sql
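The command above overwrites backup.sql on every run. A small wrapper that stamps each dump with the date makes old backups easy to keep around. In this sketch the filename logic runs anywhere; the pg_dump line (service and user names taken from above) is commented out so the script is safe to dry-run:

```shell
#!/bin/sh
# Build a timestamped backup filename, then dump the database into it.
stamp=$(date +%Y%m%d-%H%M%S)
backup="backup-${stamp}.sql"
echo "writing ${backup}"

# Uncomment to perform the actual dump:
# docker-compose exec -T postgres pg_dump -U tiktok_miner_user tiktok_miner > "$backup"
```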

Restore Database

docker-compose exec -T postgres psql -U tiktok_miner_user tiktok_miner < backup.sql

Volumes

Persistent data is stored in Docker volumes:
  • postgres_data: Database files
  • redis_data: Redis persistence
  • app_uploads: User-uploaded files
  • nginx_cache: Nginx cache (production)
  • pgadmin_data: pgAdmin configuration (development)

Backup Volumes

# Create backup
docker run --rm -v tiktok-miner_postgres_data:/data -v $(pwd):/backup \
  alpine tar czf /backup/postgres-backup.tar.gz /data

Restore Volumes

# Restore backup
docker run --rm -v tiktok-miner_postgres_data:/data -v $(pwd):/backup \
  alpine sh -c "cd /data && tar xzf /backup/postgres-backup.tar.gz --strip 1"

Networking

All services run on a custom network, tiktok-miner-network. Services can communicate with each other using their service names:
  • postgres:5432
  • redis:6379
  • app:3000
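To attach an additional container (say, a one-off debugging shell) to the same network so it can reach postgres and redis by name, reference the network in its compose definition. A sketch, assuming the network is declared as a bridge network under the name used above:

```yaml
services:
  debug:
    image: alpine
    command: sleep infinity
    networks:
      - tiktok-miner-network

networks:
  tiktok-miner-network:
    driver: bridge
```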

Troubleshooting

Container Won’t Start

# Check logs
docker-compose logs app

# Check service health
docker-compose ps

# Rebuild without cache
docker-compose build --no-cache

Database Connection Issues

# Verify postgres is healthy
docker-compose ps postgres

# Check database logs
docker-compose logs postgres

# Test connection
docker-compose exec app bunx prisma db pull

Redis Connection Issues

# Check Redis is running
docker-compose ps redis

# Test connection
docker-compose exec redis redis-cli -a "$REDIS_PASSWORD" ping

Migration Failures

# Reset database (WARNING: deletes all data)
docker-compose down -v
docker-compose up -d postgres redis
docker-compose exec app bunx prisma migrate deploy

Performance Issues

# Check resource usage
docker stats

# Increase resources in docker-compose.yml
# See Resource Limits section above

Security Best Practices

  1. Change default passwords: Always set strong passwords for DB_PASSWORD and REDIS_PASSWORD
  2. Use secrets: For production, use Docker secrets instead of environment variables
  3. Limit network exposure: Don’t expose database ports in production
  4. Regular updates: Keep Docker images up to date
  5. HTTPS only: Always use nginx with SSL in production
  6. API key rotation: Regularly rotate API keys and tokens
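Point 2 can be sketched with Compose file secrets. The paths and secret names here are illustrative, but the official postgres image does read *_FILE variants of its environment variables natively:

```yaml
services:
  postgres:
    image: postgres:14-alpine
    environment:
      # The postgres image reads the password from this file instead of
      # taking it from a plain environment variable.
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
```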

Next Steps

Environment Variables

Configure all required environment variables

Database Setup

Learn about database schema and migrations
