This guide covers common problems you might encounter and how to fix them.

Setup & Installation Issues

Symptom: Application crashes or shows syntax errors on startup
Cause: MoneyPrinter V2 requires Python 3.12 for compatibility with its dependencies
Solution:
  1. Check your Python version:
    python3 --version
    
  2. Install Python 3.12 if needed:
    • macOS (Homebrew):
      brew install [email protected]
      
    • Ubuntu/Debian:
      sudo add-apt-repository ppa:deadsnakes/ppa
      sudo apt update
      sudo apt install python3.12 python3.12-venv
      
    • Windows: Download from python.org
  3. Recreate virtual environment:
    rm -rf venv
    python3.12 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt
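
You can also confirm the interpreter programmatically. A minimal sketch (this helper is illustrative, not part of MoneyPrinter V2):

```python
import sys

def python_version_ok(required=(3, 12)):
    """Return True when the running interpreter matches the required major.minor version."""
    return sys.version_info[:2] == required

if not python_version_ok():
    print(f"Expected Python 3.12, got {sys.version.split()[0]} - recreate the venv with python3.12")
```

Run this inside the activated venv; a warning here explains the startup syntax errors before you dig any deeper.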
    
Symptom: Import errors when running the application
Cause: Dependencies not installed or virtual environment not activated
Solution:
  1. Activate virtual environment:
    source venv/bin/activate  # Unix/macOS
    .\venv\Scripts\activate   # Windows
    
  2. Reinstall dependencies:
    pip install -r requirements.txt
    
  3. If using a specific module like faster-whisper:
    pip install faster-whisper
    
Symptom: bash: ./script.sh: Permission denied
Solution: Make scripts executable:
chmod +x scripts/*.sh
chmod +x scripts/*.py
Then run from project root:
bash scripts/setup_local.sh

Firefox Profile Issues

Symptom: ValueError: Firefox profile path does not exist or is not a directory
Cause: Incorrect path to the Firefox profile in config.json
Solution: Find your Firefox profile path:
  1. macOS:
    ls ~/Library/Application\ Support/Firefox/Profiles/
    
    Look for a directory ending in .default-release. Example: ~/Library/Application Support/Firefox/Profiles/abc123xyz.default-release
  2. Linux:
    ls ~/.mozilla/firefox/
    
    Look for a directory ending in .default-release. Example: ~/.mozilla/firefox/abc123xyz.default-release
  3. Windows:
    dir %APPDATA%\Mozilla\Firefox\Profiles\
    
    Example: C:\Users\YourName\AppData\Roaming\Mozilla\Firefox\Profiles\abc123xyz.default-release
Update config.json:
{
  "firefox_profile": "/full/path/to/profile.default-release"
}
Verify it works:
python3 scripts/preflight_local.py
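
The check behind this ValueError can be reproduced standalone to debug the path. A sketch (the firefox_profile key matches the config example above; the helper itself is illustrative):

```python
import json
from pathlib import Path

def validate_firefox_profile(config_path="config.json"):
    """Verify that the firefox_profile entry in config.json points at a real directory."""
    cfg = json.loads(Path(config_path).read_text())
    profile = Path(cfg.get("firefox_profile", "")).expanduser()
    if not profile.is_dir():
        raise ValueError("Firefox profile path does not exist or is not a directory")
    return profile
```

A raised ValueError here usually means a typo, a stale profile directory, or an unexpanded ~ in the path.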
Symptom: Browser asks to log in every time
Cause: Using the wrong Firefox profile, or cookies not saved
Solution:
  1. Create a dedicated Firefox profile:
    • Open Firefox
    • Navigate to about:profiles
    • Click “Create a New Profile”
    • Name it (e.g., “MoneyPrinterV2”)
    • Note the “Root Directory” path
  2. Log into your accounts:
    • Launch Firefox with this profile
    • Log into YouTube, Twitter/X manually
    • Save cookies/stay logged in
    • Close Firefox
  3. Update config.json with the new profile path
  4. Test automation:
    python3 src/main.py
    
Symptom: WebDriverException: Message: Expected browser binary location, but unable to find binary in default location
Cause: GeckoDriver or Firefox not found
Solution:
  1. Ensure Firefox is installed:
    firefox --version
    
  2. Reinstall webdriver-manager:
    pip install --upgrade webdriver-manager
    
  3. Clear GeckoDriver cache:
    rm -rf ~/.wdm
    
  4. Run again - it will auto-download GeckoDriver

Ollama & LLM Issues

Symptom: [FAIL] Ollama is not reachable at http://127.0.0.1:11434
Cause: Ollama service not running
Solution:
  1. Check if Ollama is running:
    curl http://127.0.0.1:11434/api/tags
    
  2. Start Ollama:
    • macOS: Ollama runs automatically if installed via Homebrew or app
    • Linux:
      ollama serve
      
    • Windows: Launch Ollama desktop app
  3. Verify installation:
    ollama --version
    ollama list
    
  4. Pull a model if none installed:
    ollama pull llama3.2:3b
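
The reachability check can also be scripted with the standard library against the same /api/tags endpoint the curl command above uses (an illustrative sketch):

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def installed_ollama_models(base_url="http://127.0.0.1:11434", timeout=3):
    """Return the installed model names from /api/tags, or None when Ollama is unreachable."""
    try:
        with urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            payload = json.load(resp)
    except (URLError, OSError):
        return None
    return [m["name"] for m in payload.get("models", [])]
```

Returning None rather than an empty list separates "service down" from "no models pulled", which are the two symptoms covered in this section.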
    
Symptom: Warning about no available models
Solution:
  1. Pull recommended models:
    ollama pull phi4:latest
    ollama pull llama3.2:3b
    ollama pull qwen3:14b
    
  2. Verify models are installed:
    ollama list
    
  3. Re-run setup:
    bash scripts/setup_local.sh
    
    This will auto-detect and configure the best available model
Symptom: LLM produces gibberish or off-topic content
Cause: Model too small or poorly suited to the task
Solution:
  1. Use a larger, more capable model:
    ollama pull qwen3:14b
    # or
    ollama pull deepseek-r1:32b
    
  2. Update config.json:
    {
      "ollama_model": "qwen3:14b"
    }
    
  3. Test generation:
    ollama run qwen3:14b "Write a short video script about space exploration"
    
Symptom: Long wait times or timeout errors
Cause: Model too large for your hardware, or heavy system load
Solution:
  1. Use a smaller model:
    ollama pull llama3.2:3b
    
  2. Check system resources:
    htop  # or Activity Monitor on macOS
    
    Close unnecessary applications
  3. Adjust Ollama settings (if running ollama serve):
    OLLAMA_MAX_LOADED_MODELS=1 ollama serve
    

Image Generation Issues

Symptom: [FAIL] nanobanana2_api_key is empty
Cause: Missing or invalid Gemini API key
Solution:
  1. Get a Gemini API key from Google AI Studio
  2. Set in config.json:
    {
      "nanobanana2_api_key": "AIzaSy..."
    }
    
    Or set as environment variable:
    export GEMINI_API_KEY="AIzaSy..."
    
  3. Verify connectivity:
    python3 scripts/preflight_local.py
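
Both ways of supplying the key can be mirrored in a small resolver (the nanobanana2_api_key and GEMINI_API_KEY names come from this section; the helper itself is illustrative):

```python
import json
import os
from pathlib import Path

def resolve_gemini_key(config_path="config.json"):
    """Prefer nanobanana2_api_key from config.json, then fall back to GEMINI_API_KEY."""
    cfg = {}
    path = Path(config_path)
    if path.exists():
        cfg = json.loads(path.read_text())
    return cfg.get("nanobanana2_api_key") or os.environ.get("GEMINI_API_KEY", "")
```

If this returns an empty string, expect the [FAIL] message above.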
    
Symptom: [WARN] Nano Banana 2 did not return an image payload
Cause: API quota exceeded, model unavailable, or malformed prompt
Solution:
  1. Check your API quota
  2. Verify model availability: Update config.json to use stable model:
    {
      "nanobanana2_model": "gemini-3.1-flash-image-preview"
    }
    
  3. Simplify prompts: Avoid overly complex or NSFW content that might be filtered

Video Generation Issues

Symptom: [WARN] imagemagick_path is not set to a valid executable path
Cause: ImageMagick not installed or path not configured
Solution:
  1. Install ImageMagick: macOS:
    brew install imagemagick
    which magick  # Note this path
    
    Ubuntu/Debian:
    sudo apt install imagemagick
    which convert  # Note this path
    
    Windows:
    • Download from imagemagick.org
    • Install and note installation path (e.g., C:\Program Files\ImageMagick-7.1.0-Q16\magick.exe)
  2. Update config.json:
    {
      "imagemagick_path": "/usr/local/bin/magick"
    }
    
  3. Verify:
    python3 scripts/preflight_local.py
    
Symptom: OSError: MoviePy Error: creation of None failed because of the following error
Cause: Missing codecs, ffmpeg issues, or insufficient disk space
Solution:
  1. Ensure ffmpeg is installed:
    ffmpeg -version
    
    If not installed:
    # macOS
    brew install ffmpeg
    
    # Ubuntu/Debian
    sudo apt install ffmpeg
    
  2. Check disk space:
    df -h .
    
    Videos are written to the .mp/ directory
  3. Lower thread count in config.json:
    {
      "threads": 1
    }
    
  4. Try regenerating: Delete partial files in .mp/ and run again
Symptom: Subtitles or images don’t align with narration
Cause: Timing calculation errors or TTS duration mismatch
Solution:
  1. Regenerate the video: Audio duration is calculated dynamically; retry generation
  2. Check TTS configuration: Ensure tts_voice in config.json is valid:
    {
      "tts_voice": "Jasper"
    }
    
  3. Manually adjust timing: Edit src/classes/YouTube.py:552 (combine method) to tweak duration calculations
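
When subtitles drift, comparing the real narration length with the timings being rendered narrows the problem down. The TTS output is WAV (see the .mp/ contents under Debugging Tips), so the standard library is enough (an illustrative sketch):

```python
import wave

def wav_duration(path):
    """Return the duration of a WAV file in seconds."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / float(w.getframerate())
```

Compare the result for the generated audio against the last timestamp in the matching .srt file; a large gap points at the duration calculation rather than the TTS itself.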

Speech & Subtitle Issues

Symptom: ModuleNotFoundError: No module named 'faster_whisper'
Cause: faster-whisper not installed (required for local STT)
Solution:
pip install faster-whisper
If you encounter installation issues:
  1. Use AssemblyAI instead:
    {
      "stt_provider": "third_party_assemblyai",
      "assembly_ai_api_key": "your-key-here"
    }
    
  2. Get an AssemblyAI API key from assemblyai.com
Symptom: Generated video has no subtitles
Cause: ImageMagick not configured or STT failure
Solution:
  1. Check ImageMagick:
    python3 scripts/preflight_local.py
    
  2. Enable verbose logging:
    {
      "verbose": true
    }
    
    Re-run and check for error messages
  3. Verify STT provider: Ensure stt_provider is set correctly:
    {
      "stt_provider": "local_whisper"
    }
    
  4. Check .mp/ directory: Look for generated .srt files - if missing, STT failed
Symptom: Long wait times during subtitle generation
Cause: Whisper model downloading or too large for the hardware
Solution:
  1. Use a smaller model:
    {
      "whisper_model": "tiny",
      "whisper_device": "cpu",
      "whisper_compute_type": "int8"
    }
    
    Available models: tiny, base, small, medium, large
  2. Pre-download models: Models are auto-downloaded on first use, but you can pre-cache them:
    from faster_whisper import WhisperModel
    model = WhisperModel("base", device="cpu")
    

Upload & Automation Issues

Symptom: Video doesn’t upload, browser stuck at upload screen
Cause: Element selectors changed or incorrect video path
Solution:
  1. Check video file exists:
    ls -lh .mp/*.mp4
    
  2. Disable headless mode for debugging:
    {
      "headless": false
    }
    
    Watch browser interactions to see where it fails
  3. Update Selenium selectors: YouTube’s HTML changes frequently. Check src/classes/YouTube.py:703 (upload_video method) and update element selectors if needed
  4. Ensure logged in: Open Firefox with your profile manually and verify YouTube login persists
Symptom: RuntimeError: Could not find the Post button on X compose screen
Cause: X/Twitter UI changed or session expired
Solution:
  1. Verify login: Open Firefox with profile and check if logged into X
  2. Update selectors: Check src/classes/Twitter.py:73 (post method) for current selectors
  3. Increase wait time: Edit src/classes/Twitter.py:71:
    self.wait = WebDriverWait(self.browser, 60)  # Increase from 30
    
Symptom: Scheduled automation doesn’t execute
Cause: Incorrect cron configuration or script path issues
Solution:
  1. Test manual execution:
    python3 src/cron.py youtube <account-id>
    
  2. Check cron logs:
    # macOS/Linux
    tail -f /var/log/cron.log
    # or
    grep CRON /var/log/syslog
    
  3. Use absolute paths in crontab:
    0 */6 * * * cd /full/path/to/MoneyPrinterV2 && /full/path/to/venv/bin/python src/cron.py youtube account-id
    
  4. Verify crontab syntax:
    crontab -l
    

Performance Issues

Symptom: Takes 10+ minutes to generate a short video
Cause: Slow LLM, image generation, or video encoding
Solution:
  1. Use faster models:
    • LLM: llama3.2:3b or phi4:latest
    • Whisper: tiny or base
  2. Reduce thread count (counterintuitively, this can help on some systems):
    {
      "threads": 1
    }
    
  3. Limit script length:
    {
      "script_sentence_length": 3
    }
    
  4. Profile bottlenecks: Enable verbose mode and time each step:
    time python3 src/main.py
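
Timing the whole run only gives you the total; to find the slow stage, wrap each step in a stopwatch (a sketch; the stage names you pass are up to you, this is not a MoneyPrinter V2 API):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(step):
    """Print the wall-clock time a step takes, so the slowest stage stands out."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"[timing] {step}: {time.perf_counter() - start:.1f}s")
```

For example, wrap the script-generation, image, TTS, and render calls separately and compare the printed times.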
    
Symptom: System freezes or out-of-memory errors
Cause: Large models loaded simultaneously
Solution:
  1. Close other applications
  2. Use smaller models:
    • Ollama: 3B parameter models
    • Whisper: tiny or base
  3. Limit Ollama concurrent models:
    OLLAMA_MAX_LOADED_MODELS=1 ollama serve
    
  4. Process videos one at a time: Don’t run multiple automation jobs in parallel

Debugging Tips

Enable Verbose Logging

{
  "verbose": true
}
This prints detailed information about each step.

Check Log Files

MoneyPrinter V2 doesn’t create log files by default, but you can redirect output:
python3 src/main.py 2>&1 | tee moneyprinter.log

Inspect Generated Files

All temporary files live in the .mp/ directory:
ls -lh .mp/
Check for:
  • .png - Generated images
  • .wav - TTS audio
  • .srt - Subtitle files
  • .mp4 - Final videos
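
This inspection can be scripted so a missing artifact type stands out immediately (a sketch assuming the default .mp/ layout listed above):

```python
from collections import Counter
from pathlib import Path

def summarize_workdir(workdir=".mp"):
    """Count generated files by extension; a zero shows which pipeline stage produced nothing."""
    counts = Counter(p.suffix for p in Path(workdir).iterdir() if p.is_file())
    return {ext: counts.get(ext, 0) for ext in (".png", ".wav", ".srt", ".mp4")}
```

A zero .srt count alongside nonzero .wav files points at STT; zero .png points at image generation, and so on.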

Test Individual Components

# Test LLM
from src.llm_provider import generate_text
print(generate_text("Write a short poem about AI"))

# Test image generation
from src.classes.YouTube import YouTube
yt = YouTube(...)
yt.generate_image("A futuristic cityscape at sunset")

Use Python Debugger

# Add to any file
import pdb; pdb.set_trace()
Then step through code interactively.

Getting Help

If you’re still stuck:
  1. Check existing issues: GitHub Issues
  2. Search documentation: This site covers most scenarios
  3. Ask on Discord: Fuji Community Discord
  4. Open a new issue: Provide:
    • Operating system and Python version
    • Full error message and stack trace
    • Relevant config.json (redact API keys)
    • Steps to reproduce
Useful diagnostic info:
python3 --version
ollama --version
ffmpeg -version
magick --version
pip list | grep -E "moviepy|selenium|faster-whisper"
Include this output when asking for help.
