Troubleshooting
This guide covers common issues you might encounter when using Tinbox and how to resolve them.

Quick Diagnostics

Run the built-in diagnostic tool to check your setup:
tinbox doctor
This command checks:
  • System tools (poppler for PDF processing)
  • Python packages (pdf2image, python-docx, Pillow)
  • API key configuration (OpenAI, Anthropic, Google)
  • Local model availability (Ollama)
The doctor command is defined in src/tinbox/core/doctor.py and performs comprehensive environment validation.
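The doctor.py snippets quoted throughout this guide share a small result model. A minimal sketch of what DoctorCheck might look like, inferred only from the fields used in those snippets (the real definition lives in src/tinbox/core/doctor.py and may differ):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DoctorCheck:
    """One diagnostic result, as returned by the check_* functions."""
    name: str                      # e.g. "poppler-utils"
    category: str                  # e.g. "System Tools"
    ok: bool                       # True renders as a pass, False as a failure
    details: Optional[str] = None  # shown for passing checks
    hint: Optional[str] = None     # fix suggestion for failing checks

# A passing check, shaped like the poppler example below
check = DoctorCheck(
    name="poppler-utils",
    category="System Tools",
    ok=True,
    details="Found at /usr/bin/pdfinfo",
)
```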

Common Issues

PDF Translation Issues

Problem: PDF processing requires poppler-utils to be installed.
Solution: Install poppler for your operating system:
# macOS
brew install poppler

# Linux (Ubuntu/Debian)
sudo apt-get install poppler-utils

# Windows (Chocolatey)
choco install poppler
Verify installation:
tinbox doctor  # Should show poppler-utils as ✅
which pdfinfo  # Should show path to pdfinfo binary
From doctor.py:40-56:
def check_poppler() -> DoctorCheck:
    pdfinfo_path = shutil.which("pdfinfo")
    if pdfinfo_path:
        return DoctorCheck(
            name="poppler-utils",
            category="System Tools",
            ok=True,
            details=f"Found at {pdfinfo_path}",
        )
    return DoctorCheck(
        name="poppler-utils",
        category="System Tools",
        ok=False,
        hint="Install poppler: brew install poppler (macOS) or apt-get install poppler-utils (Linux)",
    )
Problem: Missing Python package for PDF image conversion.
Solution: Install the PDF dependencies:
pip install tinbox[pdf]
# Or install all extras
pip install tinbox[all]
Verify installation:
tinbox doctor  # Should show pdf2image as ✅
python -c "import pdf2image; print(pdf2image.__version__)"
Problem: Trying to translate a PDF with a text-only model.
Solution: Use a vision-capable model:
# ✅ Vision-capable models
tinbox translate --to es --model openai:gpt-4o document.pdf
tinbox translate --to es --model anthropic:claude-3-sonnet document.pdf
tinbox translate --to es --model gemini:gemini-2.5-pro document.pdf

# ❌ Text-only models (will fail)
tinbox translate --to es --model ollama:llama3.1:8b document.pdf  # No vision
Ollama models typically do not support vision. Convert PDFs to text first or use cloud models.
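One way to do that conversion is with poppler's pdftotext, which is already required for PDF support. A hedged sketch that builds the command and runs it only when the binary is present (this is a standalone pre-processing step, not part of Tinbox itself):

```python
import shutil
import subprocess

def pdftotext_cmd(pdf_path: str, txt_path: str) -> list[str]:
    """Build the poppler pdftotext invocation (-layout preserves columns)."""
    return ["pdftotext", "-layout", pdf_path, txt_path]

def convert_pdf_to_text(pdf_path: str, txt_path: str) -> bool:
    """Run the conversion if pdftotext is installed; return False otherwise."""
    if shutil.which("pdftotext") is None:
        return False
    subprocess.run(pdftotext_cmd(pdf_path, txt_path), check=True)
    return True
```

The resulting .txt file can then be translated with a text-only model such as ollama:llama3.1:8b.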
Problem: PDF rendered at too high a DPI, causing excessive token usage.
Solution: Adjust DPI based on your needs:
# Default: 200 DPI (balanced)
tinbox translate --to es --pdf-dpi 200 --model openai:gpt-4o document.pdf

# Lower quality, lower cost: 150 DPI
tinbox translate --to es --pdf-dpi 150 --model openai:gpt-4o document.pdf

# High quality, higher cost: 300 DPI
tinbox translate --to es --pdf-dpi 300 --model openai:gpt-4o document.pdf
Use 150 DPI for simple text documents to save ~25% on costs.
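To see why DPI matters, note that the rendered pixel count grows with the square of the DPI. A quick sketch for a US Letter page (8.5 × 11 in); actual token savings depend on the provider's image tiling, which is why the guide cites a rougher ~25% figure:

```python
def rendered_pixels(dpi: int, width_in: float = 8.5, height_in: float = 11.0) -> int:
    """Pixels produced when one page is rasterized at the given DPI."""
    return round(width_in * dpi) * round(height_in * dpi)

print(rendered_pixels(200))  # 3740000 (1700 x 2200)
print(rendered_pixels(150))  # 2103750 (1275 x 1650)
# Dropping from 200 to 150 DPI renders (150/200)^2 = ~56% of the pixels
print(rendered_pixels(150) / rendered_pixels(200))  # 0.5625
```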

API and Authentication Issues

Problem: API key environment variable not set.
Solution: Set the appropriate API key for your model provider:
# OpenAI
export OPENAI_API_KEY="sk-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google
export GOOGLE_API_KEY="AIza..."
Make it permanent (add to ~/.bashrc or ~/.zshrc):
echo 'export OPENAI_API_KEY="your-key-here"' >> ~/.bashrc
source ~/.bashrc
Verify:
tinbox doctor  # Check API Keys section
echo $OPENAI_API_KEY  # Should show your key
From doctor.py:126-144:
def check_openai_key() -> DoctorCheck:
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        masked = f"{key[:8]}...{key[-4:]}" if len(key) > 12 else "***"
        return DoctorCheck(
            name="OPENAI_API_KEY",
            category="API Keys",
            ok=True,
            details=f"Set ({masked})",
        )
    return DoctorCheck(
        name="OPENAI_API_KEY",
        category="API Keys",
        ok=False,
        hint="Set with: export OPENAI_API_KEY='your-key-here'",
    )
Problem: Hitting API rate limits from the model provider.
Solutions:
  1. Use checkpoints to resume:
tinbox translate --to es \
  --checkpoint-dir ./checkpoints \
  --model openai:gpt-4o \
  document.pdf
# If rate limited, wait and re-run - it will resume
  2. Reduce checkpoint frequency to save less often:
tinbox translate --to es \
  --checkpoint-frequency 10 \
  --model openai:gpt-4o \
  document.pdf
  3. Switch to a local model (no rate limits):
ollama serve  # In another terminal
tinbox translate --to es --model ollama:llama3.1:8b document.txt
Rate limits vary by provider and API tier. Check your provider’s documentation.
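If you drive the provider's API yourself (outside Tinbox), the usual mitigation is retry with exponential backoff. A generic sketch of the pattern, not Tinbox's actual retry logic:

```python
import time

def backoff_delays(retries: int, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Exponential backoff schedule: base * 2**attempt, capped at `cap` seconds."""
    return [min(base * (2 ** attempt), cap) for attempt in range(retries)]

def call_with_retry(fn, retries: int = 5):
    """Retry fn on any exception, sleeping per the schedule; re-raise when exhausted."""
    delays = backoff_delays(retries)
    for attempt, delay in enumerate(delays):
        try:
            return fn()
        except Exception:
            if attempt == len(delays) - 1:
                raise
            time.sleep(delay)

print(backoff_delays(4))  # [1.0, 2.0, 4.0, 8.0]
```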
Problem: API request taking too long, especially with large pages or high reasoning effort.
Solutions:
  1. Use checkpoints to save progress:
tinbox translate --to es \
  --checkpoint-dir ./checkpoints \
  --model openai:gpt-4o \
  document.pdf
  2. Reduce reasoning effort:
# Minimal is fastest
tinbox translate --to es --reasoning-effort minimal --model openai:gpt-4o doc.pdf
  3. For PDFs, reduce DPI:
tinbox translate --to es --pdf-dpi 150 --model openai:gpt-4o document.pdf
  4. For text, reduce context size:
tinbox translate --to es --context-size 1500 --model openai:gpt-4o document.txt

Package and Dependency Issues

Problem: Missing python-docx package for Word document support.
Solution:
pip install tinbox[docx]
# Or install all extras
pip install tinbox[all]
Verify:
tinbox doctor  # Should show python-docx as ✅
python -c "import docx; print('OK')"
From doctor.py:81-100:
def check_python_docx() -> DoctorCheck:
    try:
        import docx
        return DoctorCheck(
            name="python-docx",
            category="Python Packages",
            ok=True,
            details="Installed",
        )
    except ImportError:
        return DoctorCheck(
            name="python-docx",
            category="Python Packages",
            ok=False,
            hint="Install with: pip install tinbox[docx]",
        )
Problem: Missing Pillow package for image processing.
Solution:
pip install pillow
# Or install all extras
pip install tinbox[all]
Verify:
tinbox doctor  # Should show Pillow as ✅
python -c "import PIL; print(PIL.__version__)"
Problem: Tinbox requires Python 3.12 or higher.
Check your version:
python --version  # Should be 3.12+
Solution: Install Python 3.12 or higher:
# macOS (using Homebrew)
brew install [email protected]

# Ubuntu/Debian
sudo apt install python3.12

# Or use pyenv
pyenv install 3.12.0
pyenv global 3.12.0

Translation Quality Issues

Problem: The page-by-page algorithm doesn’t share context between pages.
Solution: Enable a glossary to maintain consistency:
# Auto-build glossary during translation
tinbox translate --to es \
  --glossary \
  --save-glossary terms.json \
  --model openai:gpt-4o \
  document.pdf

# Use existing glossary
tinbox translate --to es \
  --glossary-file terms.json \
  --model openai:gpt-4o \
  document.pdf
Glossary adds ~20% input token overhead but ensures consistent terminology (from cost.py:183-189).
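That overhead is easy to budget for. A back-of-the-envelope sketch using the ~20% figure cited above:

```python
def input_tokens_with_glossary(input_tokens: int, overhead: float = 0.20) -> int:
    """Apply the ~20% glossary input-token overhead to a job's input tokens."""
    return round(input_tokens * (1 + overhead))

# A job with 100k input tokens grows to roughly 120k with --glossary enabled
print(input_tokens_with_glossary(100_000))  # 120000
```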
Problem: Translation lacks nuance or makes errors.
Solutions:
  1. Increase reasoning effort:
tinbox translate --to es \
  --reasoning-effort high \
  --max-cost 20.00 \
  --model openai:gpt-4o \
  document.pdf
  2. Use a higher-quality model:
# Claude Sonnet often produces higher quality
tinbox translate --to es --model anthropic:claude-3-sonnet document.pdf
  3. For text files, use the context-aware algorithm:
tinbox translate --to es \
  --algorithm context-aware \
  --context-size 2000 \
  --model openai:gpt-4o \
  document.txt
  4. Provide a glossary with domain-specific terms:
{
  "entries": {
    "API": "Interfaz de Programación de Aplicaciones",
    "webhook": "webhook",
    "payload": "carga útil"
  }
}
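A hedged sketch of loading and sanity-checking a glossary file with the entries schema shown above (the key name is taken from that example; Tinbox's own loader may differ):

```python
import json
from pathlib import Path

def load_glossary(path: str) -> dict[str, str]:
    """Read a glossary JSON file and return its source -> target term map."""
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    entries = data.get("entries", {})
    if not isinstance(entries, dict):
        raise ValueError(f"{path}: 'entries' must be a JSON object")
    return {str(k): str(v) for k, v in entries.items()}
```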
Problem: Using page-by-page for continuous text documents.
Solution: Use the context-aware algorithm for text files:
# Maintains context between chunks
tinbox translate --to fr \
  --algorithm context-aware \
  --context-size 2000 \
  --model openai:gpt-4o \
  novel.txt
Context-aware is not supported for PDFs. For PDF narratives, use glossary instead:
tinbox translate --to fr \
  --algorithm page \
  --glossary \
  --model openai:gpt-4o \
  novel.pdf

Cost and Performance Issues

Problem: Cost higher than expected.
Solutions:
  1. Set a cost limit:
tinbox translate --to es --max-cost 5.00 --model openai:gpt-4o document.pdf
  2. Preview costs first:
tinbox translate --to es --dry-run --model openai:gpt-4o document.pdf
  3. Use a cheaper algorithm:
# Page-by-page has no context overhead
tinbox translate --to es --algorithm page --model openai:gpt-4o document.pdf
  4. Use local models:
ollama serve
tinbox translate --to es --model ollama:llama3.1:8b document.txt  # Free
  5. Reduce PDF DPI:
tinbox translate --to es --pdf-dpi 150 --model openai:gpt-4o document.pdf
See the Cost Optimization guide for more strategies.
Problem: Translation taking longer than expected.
Solutions:
  1. Reduce reasoning effort:
tinbox translate --to es --reasoning-effort minimal --model openai:gpt-4o doc.pdf
  2. For text, increase context size (fewer chunks to process):
tinbox translate --to es --context-size 3000 --model openai:gpt-4o document.txt
  3. Use a faster model:
# GPT-4o is typically faster than Claude
tinbox translate --to es --model openai:gpt-4o document.pdf
  4. For PDFs, reduce DPI:
tinbox translate --to es --pdf-dpi 150 --model openai:gpt-4o document.pdf
From cost.py:198-200, estimated speed:
  • Cloud models: ~30 tokens/second
  • Local models (Ollama): ~20 tokens/second
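Those throughput figures make rough wall-clock arithmetic possible. A naive sketch using the estimated speeds above (real throughput varies with model, load, and reasoning effort):

```python
def estimated_seconds(total_tokens: int, tokens_per_second: float) -> float:
    """Naive wall-clock estimate: tokens to generate divided by throughput."""
    return total_tokens / tokens_per_second

# 60,000 output tokens on a cloud model (~30 tok/s) vs Ollama (~20 tok/s)
print(estimated_seconds(60_000, 30))  # 2000.0 seconds (~33 minutes)
print(estimated_seconds(60_000, 20))  # 3000.0 seconds (50 minutes)
```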
Problem: Document has >50,000 tokens; costs may be high.
From cost.py:206-210:
if estimated_total_tokens > 50000:
    warnings.append(
        f"Large document detected ({estimated_total_tokens:,} tokens). "
        "Consider using Ollama for no cost."
    )
Solutions:
  1. Use local models (completely free):
ollama serve
tinbox translate --to es --model ollama:llama3.1:8b large_document.txt
  2. Set a cost limit:
tinbox translate --to es --max-cost 10.00 --model openai:gpt-4o large_document.txt
  3. Use checkpoints to process in segments:
tinbox translate --to es \
  --checkpoint-dir ./checkpoints \
  --checkpoint-frequency 10 \
  --max-cost 5.00 \
  --model openai:gpt-4o \
  large_document.pdf

# Resume after hitting limit
tinbox translate --to es \
  --checkpoint-dir ./checkpoints \
  --max-cost 5.00 \
  --model openai:gpt-4o \
  large_document.pdf

Ollama and Local Model Issues

Problem: Ollama not installed or not in PATH.
Solution: Install Ollama:
# Visit https://ollama.ai and download installer
# Or use Homebrew (macOS)
brew install ollama
Verify:
tinbox doctor  # Should show Ollama as ✅
which ollama   # Should show path
ollama --version
From doctor.py:187-203:
def check_ollama() -> DoctorCheck:
    ollama_path = shutil.which("ollama")
    if ollama_path:
        return DoctorCheck(
            name="Ollama",
            category="Local Models",
            ok=True,
            details=f"Found at {ollama_path}",
        )
    return DoctorCheck(
        name="Ollama",
        category="Local Models",
        ok=False,
        hint="Install from: https://ollama.ai (optional, for local models)",
    )
Problem: Ollama server not running.
Solution: Start the Ollama server:
# In a separate terminal
ollama serve
Then translate:
tinbox translate --to es --model ollama:llama3.1:8b document.txt
Keep ollama serve running in the background for all local model operations.
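To confirm the server is actually up, you can probe Ollama's default HTTP endpoint (port 11434). A sketch using only the standard library:

```python
from urllib.request import urlopen
from urllib.error import URLError

def ollama_running(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at the Ollama base URL."""
    try:
        with urlopen(base_url, timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError):
        return False
```

If this returns False, start the server with `ollama serve` and retry.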
Problem: Model not downloaded.
Solution: Pull the model first:
ollama pull llama3.1:8b
ollama list  # Verify model is downloaded
Then translate:
tinbox translate --to es --model ollama:llama3.1:8b document.txt

Checkpoint and Resume Issues

Problem: Re-running a translation starts from the beginning.
Causes and solutions:
  1. Different checkpoint directory:
# Make sure to use the SAME checkpoint directory
tinbox translate --to es --checkpoint-dir ./checkpoints --model openai:gpt-4o doc.pdf
# Resume
tinbox translate --to es --checkpoint-dir ./checkpoints --model openai:gpt-4o doc.pdf
  2. Different source or target language:
# These are different jobs, won't resume
tinbox translate --to es --checkpoint-dir ./chk --model openai:gpt-4o doc.pdf  # es
tinbox translate --to fr --checkpoint-dir ./chk --model openai:gpt-4o doc.pdf  # fr - different!
  3. Different algorithm:
# These are different jobs
tinbox translate --to es --algorithm page --checkpoint-dir ./chk --model openai:gpt-4o doc.pdf
tinbox translate --to es --algorithm context-aware --checkpoint-dir ./chk --model openai:gpt-4o doc.pdf  # Won't resume
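In other words, a checkpoint is only reused when the whole job signature matches. A hypothetical sketch of such a signature; the field set is an assumption for illustration, and Tinbox's actual key derivation may differ:

```python
import hashlib

def job_key(source_path: str, target_lang: str, algorithm: str, model: str) -> str:
    """Hash the parameters that define a translation job's identity."""
    raw = "|".join([source_path, target_lang, algorithm, model])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

es_job = job_key("doc.pdf", "es", "page", "openai:gpt-4o")
fr_job = job_key("doc.pdf", "fr", "page", "openai:gpt-4o")
print(es_job != fr_job)  # True: a different target language is a different job
```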
Problem: Want to start fresh, ignoring existing checkpoints.
Solution: Delete the checkpoint directory:
rm -rf ./checkpoints
tinbox translate --to es --checkpoint-dir ./checkpoints --model openai:gpt-4o doc.pdf
Or use a different checkpoint directory:
tinbox translate --to es --checkpoint-dir ./checkpoints-new --model openai:gpt-4o doc.pdf

Getting Help

1. Run tinbox doctor
tinbox doctor
Check for environment and dependency issues.

2. Check the README
Review the troubleshooting section in the README.

3. Enable verbose logging
# Set log level to DEBUG for detailed output
export TINBOX_LOG_LEVEL=DEBUG
tinbox translate --to es --model openai:gpt-4o document.pdf

4. Report issues
If you’ve found a bug, report it on GitHub:
  • Include tinbox doctor output
  • Include error messages and logs
  • Provide sample command that reproduces the issue
  • Mention Python version and OS

Doctor Command Reference

The tinbox doctor command performs these checks:

System Tools

| Check | Category | Required For |
| --- | --- | --- |
| poppler-utils | System Tools | PDF processing |

Python Packages

| Package | Category | Required For |
| --- | --- | --- |
| pdf2image | Python Packages | PDF to image conversion |
| python-docx | Python Packages | Word document support |
| Pillow | Python Packages | Image processing |

API Keys (Optional)

| Key | Category | Required For |
| --- | --- | --- |
| OPENAI_API_KEY | API Keys | OpenAI models (GPT-4o, GPT-5, etc.) |
| ANTHROPIC_API_KEY | API Keys | Anthropic models (Claude) |
| GOOGLE_API_KEY | API Keys | Google models (Gemini) |

Local Models (Optional)

| Tool | Category | Required For |
| --- | --- | --- |
| Ollama | Local Models | Free local model inference |
From doctor.py:206-227, the doctor command runs all checks and reports:
  • ✅ for passing checks
  • ❌ for failing checks
  • Helpful hints for fixing issues

Related guides:
  • Cost Optimization: strategies to reduce costs and prevent budget overruns
  • Algorithm Comparison: choose the right algorithm for your use case
