AI categorization is optional. If not configured, users must manually categorize transactions.
## AI Provider Configuration
All four environment variables must be set to enable AI features:

- **Provider name** - Friendly name for the AI provider, used in logs and debugging. Examples: `ollama`, `openai`, `groq`, `together`, `anthropic`. Required: Yes (if using AI features).
- **`OPENAI_COMPATIBLE_BASE_URL`** - Base URL of the provider’s OpenAI-compatible API endpoint. Examples: `http://localhost:11434/v1` (Ollama), `https://api.openai.com/v1` (OpenAI), `https://api.groq.com/openai/v1` (Groq), `https://api.together.xyz/v1` (Together AI). Required: Yes (if using AI features).
- **`OPENAI_COMPATIBLE_API_KEY`** - API key for authenticating with the provider. Examples: `sk-proj-abc123def456...` (OpenAI), `gsk_abc123...` (Groq). Note: some local providers, such as Ollama, don’t require an API key; use any string (e.g. `none` or `local`) as a placeholder. Required: Yes (if using AI features).
- **`OPENAI_COMPATIBLE_MODEL`** - Name of the model to use for transaction categorization. Examples: `llama3.2` (Ollama), `gpt-4o-mini` (OpenAI), `llama-3.1-70b-versatile` (Groq), `meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo` (Together AI). Required: Yes (if using AI features).
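Putting the values together, a hosted-OpenAI setup might look like the following `.env` fragment (the key shown is a placeholder, not a real credential):

```shell
# Example .env values for a hosted OpenAI setup.
OPENAI_COMPATIBLE_BASE_URL="https://api.openai.com/v1"
OPENAI_COMPATIBLE_API_KEY="sk-proj-abc123def456..."   # placeholder key
OPENAI_COMPATIBLE_MODEL="gpt-4o-mini"
```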
## Supported Providers
Budgetron works with any provider that implements the OpenAI API format.

### Local Providers
#### Ollama (Recommended for Self-Hosting)
Run LLMs locally with Ollama.

**Recommended Models:**
- `llama3.2` - Fast and efficient (3B)
- `llama3.1` - More capable (8B)
- `mistral` - Excellent for structured output
- `mixtral` - High-quality categorization (47B)
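Setup and configuration can be sketched as follows, assuming a default local Ollama install (OpenAI-compatible API on port 11434):

```shell
# Download a model and make sure the Ollama server is running.
ollama pull llama3.2
ollama serve

# Matching .env values (Ollama ignores the API key, so any
# placeholder string works):
# OPENAI_COMPATIBLE_BASE_URL="http://localhost:11434/v1"
# OPENAI_COMPATIBLE_API_KEY="none"
# OPENAI_COMPATIBLE_MODEL="llama3.2"
```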
#### LocalAI
#### LM Studio
Desktop app for running LLMs locally with a GUI.

**Setup:**
- Download LM Studio
- Load a model
- Start the local server (default port: 1234)
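Assuming the default port, the matching configuration would look like this (the model name is illustrative and must match whatever model LM Studio has loaded):

```shell
# Example .env values for LM Studio's local server (default port 1234).
# LM Studio does not require a real key, so a placeholder is fine.
OPENAI_COMPATIBLE_BASE_URL="http://localhost:1234/v1"
OPENAI_COMPATIBLE_API_KEY="none"
OPENAI_COMPATIBLE_MODEL="your-loaded-model-name"   # hypothetical name
```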
### Cloud Providers
#### OpenAI
Official OpenAI API with GPT models.

**Setup:**

- Create an account at platform.openai.com
- Generate an API key in the dashboard

**Recommended Models:**

- `gpt-4o-mini` - Cost-effective and fast
- `gpt-4o` - Most capable
- `gpt-3.5-turbo` - Budget option
#### Groq
Ultra-fast inference with Llama models.

**Setup:**

- Sign up at groq.com
- Generate an API key

**Recommended Models:**

- `llama-3.1-70b-versatile` - Best quality
- `llama-3.1-8b-instant` - Fastest
- `mixtral-8x7b-32768` - Long context
#### Together AI
Access to 100+ open-source models.

**Setup:**
- Sign up at together.ai
- Get API key from dashboard
#### Azure OpenAI
## AI Service Implementation
The AI service is implemented using the Vercel AI SDK with structured output support.

### Service Detection
The application checks whether AI is enabled at runtime using `isAIServiceEnabled()`, defined in `src/server/ai/utils.ts`. All four environment variables must be set for the service to be active.
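The check can be sketched as follows. This is illustrative only: the real implementation is TypeScript, and the provider-name variable is omitted here because only the three variables below are named elsewhere in this document.

```shell
# Illustrative sketch: AI features are active only when every
# required variable is set to a non-empty value.
ai_enabled() {
  [ -n "$OPENAI_COMPATIBLE_BASE_URL" ] &&
  [ -n "$OPENAI_COMPATIBLE_API_KEY" ] &&
  [ -n "$OPENAI_COMPATIBLE_MODEL" ]
}

if ai_enabled; then
  echo "AI categorization enabled"
else
  echo "AI categorization disabled"   # users categorize manually
fi
```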
### Model Requirements
Most modern models support structured output:

- ✅ GPT-4, GPT-4o, GPT-3.5-turbo
- ✅ Llama 3.x models
- ✅ Mistral, Mixtral
- ✅ Qwen models
- ❌ Older models may not support structured output
### AI Categorization Process
- User creates a transaction
- Transaction details sent to AI model
- Model analyzes merchant, amount, and description
- Returns structured categorization with confidence score
- Category applied to transaction
The categorization logic lives in `src/server/ai/service/categorize-transactions/`.
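In the OpenAI-compatible wire format, the request in steps 2-4 looks roughly like the sketch below. This is hypothetical: the real call goes through the Vercel AI SDK, and the prompt, field names, and merchant shown are illustrative only.

```shell
# Hypothetical categorization request; with JSON mode the response
# comes back as structured data, e.g. {"category": "...", "confidence": 0.92}.
curl -s "$OPENAI_COMPATIBLE_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_COMPATIBLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$OPENAI_COMPATIBLE_MODEL"'",
    "response_format": {"type": "json_object"},
    "messages": [{
      "role": "user",
      "content": "Return JSON with fields category and confidence for this transaction: STARBUCKS #1234, $6.45, card purchase"
    }]
  }'
```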
## Performance Considerations
### Local vs Cloud
| Factor | Local (Ollama) | Cloud (OpenAI) |
|---|---|---|
| Cost | Free | Pay per request |
| Speed | Hardware dependent | Very fast |
| Privacy | Complete control | Data sent to provider |
| Setup | Requires installation | Just API key |
| Reliability | Depends on hardware | Highly reliable |
### Model Selection
For transaction categorization:

- Fast categorization: Use smaller models (3B-8B parameters)
- High accuracy: Use larger models (70B+ parameters)
- Balanced: 8B-13B parameter models work well
### Rate Limiting
Budgetron does not implement rate limiting. Consider:

- API provider rate limits
- Local hardware capabilities
- Transaction volume
## Health Check
Budgetron includes an AI health check endpoint, implemented in `src/server/ai/service/health/index.ts`.
## Troubleshooting
### AI Features Not Working
#### Service Not Enabled
**Error:** “AI service is not enabled”

**Solution:** Verify that all four environment variables are set; all must return non-empty values.
#### Connection Failed
**Error:** “Failed to connect to AI provider”

**Solutions:**
- Verify `OPENAI_COMPATIBLE_BASE_URL` is correct
- For local providers, ensure the service is running
- Check firewall/network settings
- Test the connection: `curl $OPENAI_COMPATIBLE_BASE_URL/models`
#### Authentication Failed
**Error:** “Invalid API key” or “Unauthorized”

**Solutions:**
- Verify `OPENAI_COMPATIBLE_API_KEY` is correct
- Check that the API key hasn’t expired
- For OpenAI, ensure billing is set up
- For local providers, try `OPENAI_COMPATIBLE_API_KEY="none"`
#### Model Not Found
**Error:** “Model not found” or “Invalid model”

**Solutions:**
- Verify the `OPENAI_COMPATIBLE_MODEL` name is correct
- For Ollama: run `ollama list` to see available models
- For cloud providers: check model availability in the dashboard
- Pull the model: `ollama pull llama3.2`
#### Structured Output Not Supported
**Error:** “Model does not support structured outputs”

**Solutions:**
- Use a newer model that supports JSON mode
- For Ollama, update to latest version
- Try a different model from the recommended list
### Testing AI Configuration
Test your AI configuration by sending a request directly to the provider. A successful response contains a `choices` array with the model response.
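A minimal smoke test, assuming the environment variables are exported in your shell:

```shell
# Send a one-line prompt to the configured endpoint; a healthy setup
# returns JSON whose "choices" array holds the model response.
curl -s "$OPENAI_COMPATIBLE_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_COMPATIBLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$OPENAI_COMPATIBLE_MODEL"'",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```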
## Cost Optimization
### Local Deployment
For free AI categorization:

- Install Ollama locally
- Pull a smaller model: `ollama pull llama3.2`
- Configure Budgetron to use the local endpoint
### Cloud Deployment
To minimize costs:

- Use `gpt-4o-mini` instead of `gpt-4o` (10x cheaper)
- Use Groq’s free tier (100K tokens/day)
- Use Together AI for competitive pricing
- Batch categorization requests when possible
## Privacy and Security
### Data Handling
- Local providers: Data never leaves your infrastructure
- Cloud providers: Data sent via HTTPS to provider’s API
- Data retention: Varies by provider (check their policies)
- Compliance: Ensure provider meets your compliance requirements
### API Key Security
- Never commit API keys to version control
- Rotate API keys periodically
- Use environment-specific keys
- Monitor API usage for anomalies
## Related Configuration
- Environment Variables - Complete environment reference
- Email Configuration - Configure email notifications