Your First Translation
Let’s translate a document in three simple steps:
Set Your API Key
Choose your preferred LLM provider and set the API key: export OPENAI_API_KEY="your-api-key-here"
Run Your First Translation
Translate a text document to German: tinbox translate --to de --model openai:gpt-5-2025-08-07 document.txt
The translation will be printed to your console.
Save to a File
Add the --output flag to save the result: tinbox translate --to de --model openai:gpt-5-2025-08-07 --output document_de.txt document.txt
Basic Examples
Text Documents
Translating text files is the simplest use case:
Simple Translation
tinbox translate --to es --model openai:gpt-5-2025-08-07 story.txt
Specify Source Language
tinbox translate --from en --to es --model openai:gpt-5-2025-08-07 story.txt
With Output File
tinbox translate --to es --output story_es.txt --model openai:gpt-5-2025-08-07 story.txt
PDF Documents
PDF translation requires:
Poppler installed on your system (see Installation)
A vision-capable model like GPT-4o, Claude Sonnet, or Gemini Pro Vision
Basic PDF Translation
tinbox translate --to de --algorithm page --model openai:gpt-4o document.pdf
With Custom DPI
tinbox translate --to de --algorithm page --pdf-dpi 300 --model openai:gpt-4o document.pdf
Save Output
tinbox translate --to de --algorithm page --output document_de.txt --model openai:gpt-4o document.pdf
Use --algorithm page for PDFs to process them page-by-page, which works best for maintaining formatting and context.
Word Documents (DOCX)
# Translate a Word document
tinbox translate --to de --model openai:gpt-5-2025-08-07 report.docx
# Save to file
tinbox translate --to es --output report_es.txt --model openai:gpt-5-2025-08-07 report.docx
Model Providers
Tinbox supports multiple LLM providers:
OpenAI
Anthropic (Claude)
Google (Gemini)
Ollama (Local)
# GPT-5 (latest)
tinbox translate --to de --model openai:gpt-5-2025-08-07 document.txt
# GPT-4o (vision-capable for PDFs)
tinbox translate --to es --model openai:gpt-4o document.pdf
GPT-4o and GPT-5 both support vision for PDF translation.
# Claude Sonnet (vision-capable)
tinbox translate --to de --model anthropic:claude-3-sonnet document.pdf
# For text documents
tinbox translate --to fr --model anthropic:claude-3-sonnet document.txt
Remember to set: export ANTHROPIC_API_KEY="your-key"
# Gemini Pro Vision
tinbox translate --to es --model google:gemini-pro-vision document.pdf
# For text
tinbox translate --to de --model google:gemini-pro document.txt
Remember to set: export GOOGLE_API_KEY="your-key"
# Start Ollama server first (in another terminal)
ollama serve
# Then use local models (free!)
tinbox translate --to de --model ollama:llama3.1:8b document.txt
tinbox translate --to fr --model ollama:mistral-small document.txt
Ollama runs models locally, so there are no API costs! It doesn't support vision models for PDFs yet, though.
Real-World Example
Let’s translate a story from English to German with checkpointing:
Preview the Cost
Use --dry-run to estimate costs before translating: tinbox translate --to de --model openai:gpt-5-2025-08-07 --dry-run story.txt
Output: 📊 Translation Estimate:
- Input tokens: ~1,200
- Estimated output tokens: ~1,400
- Estimated cost: $0.03
- Estimated time: 15 seconds
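The estimate above is straightforward token arithmetic: token counts multiplied by per-token prices. A minimal sketch of the idea, using illustrative placeholder prices rather than any provider's actual rates:

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_1k=0.005, output_price_per_1k=0.015):
    """Estimate translation cost from token counts.

    The per-1k-token prices here are placeholders for illustration;
    a real estimate uses the current pricing of the chosen model.
    """
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# Roughly the example above: ~1,200 input and ~1,400 output tokens
print(round(estimate_cost(1200, 1400), 2))  # 0.03
```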
Run with Checkpointing
For large documents, enable checkpointing to resume if interrupted: tinbox translate --to de \
--model openai:gpt-5-2025-08-07 \
--checkpoint-dir ./checkpoints \
--output story_de.txt \
story.txt
Monitor Progress
Tinbox shows real-time progress: 🔄 Translating: story.txt
📄 Processing chunk 1/8...
💰 Cost so far: $0.01
⏱️ Elapsed: 5s
✅ Translation complete!
📊 Final stats:
- Total tokens: 2,600
- Total cost: $0.03
- Total time: 18s
Advanced Features
Glossary for Consistent Terminology
Maintain consistent translations of technical terms:
Auto-Generate Glossary
Use Existing Glossary
Extend Glossary
# Tinbox will detect and save important terms
tinbox translate --to es \
--glossary \
--save-glossary terms.json \
--model openai:gpt-5-2025-08-07 \
technical_doc.txt
Create a JSON file with term mappings:
{
  "entries": {
    "API": "Interface de programmation",
    "CPU": "Processeur",
    "GPU": "Carte graphique",
    "Machine Learning": "Apprentissage automatique"
  }
}
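Because the glossary is plain JSON, it is easy to sanity-check before a long run. A small sketch, assuming only the `entries` mapping shown above (no other tinbox-specific fields):

```python
import json

glossary_json = """
{
  "entries": {
    "API": "Interface de programmation",
    "CPU": "Processeur",
    "GPU": "Carte graphique",
    "Machine Learning": "Apprentissage automatique"
  }
}
"""

glossary = json.loads(glossary_json)
entries = glossary["entries"]

# Every source term and its translation should be a non-empty string
for term, translation in entries.items():
    assert isinstance(term, str) and term
    assert isinstance(translation, str) and translation

print(entries["CPU"])  # Processeur
```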
Cost Control
Prevent unexpected expenses:
# Set a maximum cost limit
tinbox translate --to de \
--max-cost 5.00 \
--model openai:gpt-5-2025-08-07 \
document.txt
# Translation will stop if cost exceeds $5.00
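Conceptually, a cost cap tracks cumulative spend per chunk and stops before the limit would be crossed. This is an illustrative sketch of that pattern, not tinbox's actual implementation:

```python
def translate_with_cap(chunk_costs, max_cost):
    """Process chunks until the next one would push spend past max_cost.

    chunk_costs stands in for the per-chunk API cost of a real run;
    returns (chunks_completed, total_spent).
    """
    spent = 0.0
    done = 0
    for cost in chunk_costs:
        if spent + cost > max_cost:
            break  # stop before exceeding the budget
        spent += cost
        done += 1
    return done, round(spent, 2)

# Eight chunks at ~$0.75 each against a $5.00 cap: only six complete
print(translate_with_cap([0.75] * 8, 5.00))  # (6, 4.5)
```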
Reasoning Effort
Control translation quality vs. cost:
Minimal (Fast & Cheap)
tinbox translate --to es \
--reasoning-effort minimal \
--model openai:gpt-5-2025-08-07 \
document.txt
Low (Balanced)
tinbox translate --to es \
--reasoning-effort low \
--model openai:gpt-5-2025-08-07 \
document.txt
High (Best Quality)
tinbox translate --to es \
--reasoning-effort high \
--model openai:gpt-5-2025-08-07 \
document.txt
Start with minimal reasoning effort. Only increase to low or high if you need better quality for complex documents.
Output Formats
Choose how you want your translation output:
Text (Default)
JSON
Markdown
tinbox translate --to es --model openai:gpt-5-2025-08-07 document.txt
Simple text output, perfect for most use cases.
tinbox translate --to es \
--format json \
--model openai:gpt-5-2025-08-07 \
document.txt
Output: {
  "translation": "...",
  "metadata": {
    "source_language": "en",
    "target_language": "es",
    "model": "openai:gpt-5-2025-08-07",
    "tokens_used": 2500,
    "cost": 0.03,
    "duration_seconds": 18
  }
}
tinbox translate --to de \
--format markdown \
--model openai:gpt-5-2025-08-07 \
document.txt
Formatted markdown report with metadata.
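JSON output is the easiest format to post-process in scripts, e.g. to pull out the translation and track spend. A small sketch, assuming the field names shown in the JSON example above:

```python
import json

# Stands in for the output of `tinbox translate --format json ...`
raw = """
{
  "translation": "Hola mundo",
  "metadata": {
    "source_language": "en",
    "target_language": "es",
    "model": "openai:gpt-5-2025-08-07",
    "tokens_used": 2500,
    "cost": 0.03,
    "duration_seconds": 18
  }
}
"""

result = json.loads(raw)
meta = result["metadata"]
print(result["translation"])                                 # Hola mundo
print(f"{meta['tokens_used']} tokens, ${meta['cost']:.2f}")  # 2500 tokens, $0.03
```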
Best Practices by Document Type
PDFs
tinbox translate --to es \
--algorithm page \
--model openai:gpt-4o \
document.pdf
Use --algorithm page
Requires vision-capable model
Consider --pdf-dpi 300 for higher quality
Large Text Files
tinbox translate --to de \
--context-size 2000 \
--checkpoint-dir ./checkpoints \
--model openai:gpt-5-2025-08-07 \
large_file.txt
Use context-aware algorithm (default)
Enable checkpointing
Adjust --context-size as needed
Technical Documents
tinbox translate --to fr \
--glossary \
--save-glossary terms.json \
--model openai:gpt-5-2025-08-07 \
tech_doc.pdf
Enable glossary support
Save terms for future use
Consider higher reasoning effort
Cost-Sensitive Projects
tinbox translate --to es \
--dry-run \
--max-cost 5.00 \
--reasoning-effort minimal \
--model openai:gpt-5-2025-08-07 \
document.txt
Always use --dry-run first
Set --max-cost limits
Use minimal reasoning effort
Consider Ollama for local models
Common Workflows
Batch Translation
Translate multiple documents efficiently:
# Loop through all text files
for file in *.txt ; do
tinbox translate --to de \
--model openai:gpt-5-2025-08-07 \
--output "${file%.txt}_de.txt" \
"$file"
done
Resume Interrupted Translation
If a translation is interrupted, simply rerun with the same checkpoint directory:
# First run (interrupted)
tinbox translate --to de \
--checkpoint-dir ./checkpoints \
--model openai:gpt-5-2025-08-07 \
large_document.txt
# Resume from checkpoint
tinbox translate --to de \
--checkpoint-dir ./checkpoints \
--model openai:gpt-5-2025-08-07 \
large_document.txt
Tinbox automatically detects and resumes from the last checkpoint.
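The resume behavior can be pictured as a simple skip list: record which chunks are finished, and on rerun skip them. An illustrative sketch of that pattern (not tinbox's internal checkpoint format):

```python
import json
import os
import tempfile
from pathlib import Path

def translate_chunks(chunks, checkpoint_file, translate):
    """Translate chunks, persisting completed indices so a rerun resumes."""
    path = Path(checkpoint_file)
    done = set(json.loads(path.read_text())) if path.exists() else set()
    results = {}
    for i, chunk in enumerate(chunks):
        if i in done:
            continue  # already translated in a previous run
        results[i] = translate(chunk)
        done.add(i)
        path.write_text(json.dumps(sorted(done)))  # checkpoint after each chunk
    return results

# First run translates everything; a rerun with the same file does nothing new
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
first = translate_chunks(["a", "b"], ckpt, str.upper)
second = translate_chunks(["a", "b"], ckpt, str.upper)
print(first, second)  # {0: 'A', 1: 'B'} {}
```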
Troubleshooting
Translation quality is poor
Try these improvements:
Increase reasoning effort: --reasoning-effort high
Use a better model: openai:gpt-5-2025-08-07 or anthropic:claude-3-sonnet
Enable glossary: --glossary for consistent terminology
Specify source language: --from en (don’t rely on auto-detect)
PDF translation not working
Check these requirements:
Poppler installed: Run tinbox doctor to verify
Using vision-capable model: GPT-4o, Claude Sonnet, or Gemini Pro Vision
PDF extras installed: pip install tinbox[pdf]
Costs are too high
Reduce costs by:
Use local Ollama models (free)
Set --reasoning-effort minimal
Set --max-cost limits
Use --dry-run to preview costs
Smaller context size: --context-size 1500
Translation is slow
For large documents:
Enable checkpointing: --checkpoint-dir ./checkpoints
Reduce chunk size: --context-size 1500
For PDFs, use: --algorithm page
Next Steps
Command Reference Complete guide to all CLI options and flags
Translation Algorithms Learn about different translation strategies
Model Providers Detailed comparison of LLM providers
Advanced Usage Custom splitting, glossaries, and optimization
Need help? Run tinbox doctor to diagnose issues or check our troubleshooting guide.