
Quickstart

This guide walks you through the fastest path to generating your first AI-powered video with MoneyPrinter.

Prerequisites

Ensure you have the following installed:
  • Python 3
  • uv (used to install dependencies and run the backend)
  • Ollama (local LLM inference)
  • ImageMagick

For detailed installation instructions, see the Installation Guide.

Setup Steps

1. Clone the repository

git clone https://github.com/FujiwaraChoki/MoneyPrinter.git
cd MoneyPrinter
2. Run the interactive setup script (recommended)

The setup script checks dependencies, creates .env, installs packages, and optionally pulls an Ollama model:
./setup.sh
Alternatively, set up manually:
# Install dependencies
uv sync

# Create environment file
cp .env.example .env
Windows PowerShell:
Copy-Item .env.example .env
3. Configure required environment variables

Edit .env and set:
# Required
TIKTOK_SESSION_ID="your_tiktok_session_id"
PEXELS_API_KEY="your_pexels_api_key"

# Optional (defaults shown)
OLLAMA_BASE_URL="http://localhost:11434"
OLLAMA_MODEL="llama3.1:8b"
To find your TikTok session ID:
  1. Log into TikTok in your browser
  2. Open Developer Tools (F12)
  3. Go to Application → Cookies
  4. Copy the value of the sessionid cookie

To get a Pexels API key:
  1. Create a free account at pexels.com/api
  2. Generate an API key from your dashboard
4. Start Ollama and pull a model

In a new terminal:
ollama serve
Pull a model (if not already installed):
ollama pull llama3.1:8b
Verify the model is available:
ollama list
If Ollama runs on another machine or port, set OLLAMA_BASE_URL in .env.
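
You can also confirm the server is reachable programmatically: Ollama exposes an HTTP API, and GET /api/tags lists the installed models. A minimal sketch (the tags_url and list_models helpers are illustrative, not part of MoneyPrinter):

```python
import json
import os
import urllib.error
import urllib.request


def tags_url(base=None) -> str:
    """Build the /api/tags URL, falling back to the documented default."""
    base = base or os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
    return base.rstrip("/") + "/api/tags"


def list_models():
    """Return installed model names, or an empty list if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(tags_url(), timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except urllib.error.URLError:
        return []  # Ollama not reachable; start it with: ollama serve


if __name__ == "__main__":
    print(list_models())
```

An empty list here usually means Ollama is not running or OLLAMA_BASE_URL points at the wrong host.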
5. Start the backend API

uv run python Backend/main.py
The API will start on http://localhost:8080.
6. Start the worker (new terminal)

The worker claims jobs from the queue and runs the generation pipeline:
uv run python Backend/worker.py
7. Start the frontend (new terminal)

cd Frontend
python3 -m http.server 3000
Open your browser to http://localhost:3000.
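
At this point three services should be listening locally: the backend (8080), the frontend (3000), and Ollama (11434). A quick sketch to confirm each port is open, assuming the default ports from the steps above (port_open is an illustrative helper):

```python
import socket


def port_open(port, host="127.0.0.1", timeout=1.0):
    """Return True if something is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0


if __name__ == "__main__":
    for name, port in [("backend", 8080), ("frontend", 3000), ("ollama", 11434)]:
        status = "up" if port_open(port) else "down"
        print(f"{name:8s} :{port} {status}")
```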
8. Generate your first video

In the web interface:
  1. Enter a video subject (e.g., “Top 3 AI business ideas”)
  2. Expand Advanced Options
  3. Select an Ollama model from the dropdown
  4. Choose a voice (default: en_us_001)
  5. Click Generate
Progress events stream in real time. The final video is saved as output.mp4 in the project root.

Verify Installation

Test the API endpoints:
# List available Ollama models
curl http://localhost:8080/api/models

# Queue a test generation job
curl -X POST http://localhost:8080/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "videoSubject": "AI business ideas",
    "aiModel": "llama3.1:8b",
    "voice": "en_us_001",
    "paragraphNumber": 1,
    "customPrompt": ""
  }'
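
The same job can be queued from Python using only the fields shown in the curl example above. A standard-library sketch (build_job and queue_job are illustrative helpers, not part of the project):

```python
import json
import urllib.error
import urllib.request

API_BASE = "http://localhost:8080"


def build_job(subject, model="llama3.1:8b"):
    """Build the JSON body the /api/generate endpoint expects."""
    return {
        "videoSubject": subject,
        "aiModel": model,
        "voice": "en_us_001",
        "paragraphNumber": 1,
        "customPrompt": "",
    }


def queue_job(subject):
    """POST a generation job; print the response or a connection error."""
    req = urllib.request.Request(
        API_BASE + "/api/generate",
        data=json.dumps(build_job(subject)).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(resp.read().decode())
    except urllib.error.URLError as exc:
        print("API not reachable:", exc)


if __name__ == "__main__":
    queue_job("AI business ideas")
```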

Optional: Run Tests

Install dev dependencies and run the test suite:
uv sync --group dev
uv run pytest
See Testing for more details.

What’s Next?

Configuration

Learn about all environment variables and options

Docker Deployment

Run the full stack with Docker Compose

Architecture

Understand the queue-based system design

Troubleshooting

Common issues and solutions

Troubleshooting

Ollama connection errors:
  • Ensure Ollama is running: ollama serve
  • Check models are installed: ollama list
  • Verify OLLAMA_BASE_URL in .env

ImageMagick not found: set the explicit path in .env.

Linux/macOS:
IMAGEMAGICK_BINARY="/usr/local/bin/magick"

Windows:
IMAGEMAGICK_BINARY="C:\\Program Files\\ImageMagick-7.1.1-Q16-HDRI\\magick.exe"

Note the double backslashes (\\) for Windows paths.

Worker not processing jobs:
  • Ensure the backend API is running first
  • Check DATABASE_URL is correctly set (defaults to SQLite)
  • Look for errors in the worker terminal output
