Overview
Polaris IDE uses Trigger.dev for background job processing, enabling non-blocking AI operations and long-running tasks. Background jobs run on Trigger.dev’s infrastructure, keeping the UI responsive while processing complex operations.
Why Background Jobs?
AI operations can take several seconds to complete.

Without Background Jobs
- UI freezes during AI processing
- Request timeouts on long operations
- Poor user experience
- No retry on failure
With Background Jobs
- Instant UI response
- No timeout limits
- Real-time progress updates
- Automatic retry and error handling
Architecture
Job Definitions
Polaris includes two main background jobs.

Process Message Task
Location: `trigger/tasks/process-message.ts`
Handles AI chat responses with tool execution:
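The task definition itself isn't reproduced here; as a rough sketch of its shape, using the Trigger.dev v3 SDK (the payload fields and step comments are illustrative, not the actual implementation):

```typescript
import { task } from "@trigger.dev/sdk/v3";

// Sketch only: the real trigger/tasks/process-message.ts wires in the full
// tool set listed below and the project's Convex helpers.
export const processMessage = task({
  id: "process-message",
  run: async (payload: { projectId: string; messageId: string; prompt: string }) => {
    // 1. Load project context (files, conversation history) for the model
    // 2. Stream the AI response, executing tools as the model requests them
    // 3. Push partial output to Convex so the UI updates live
    // 4. Mark the message complete (or failed) when the stream ends
  },
});
```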
Available Tools
File Management:
- `readFile` - Read file contents
- `writeFile` - Create/update files
- `deleteFile` - Delete files/folders
- `listFiles` - List directory contents
- `getProjectStructure` - Get file tree

Code Navigation:

- `findSymbol` - Search for functions, classes, variables
- `getReferences` - Find symbol references
- `getDiagnostics` - Get TypeScript errors
- `goToDefinition` - Navigate to definitions

Search:

- `searchFiles` - Regex content search
- `searchCodebase` - AST-aware search
- `findFilesByPattern` - Glob pattern matching

Context:

- `getRelevantFiles` - Find contextually relevant files

Command Execution:

- `executeCommand` - Run safe commands (npm, git, tsc, etc.)
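Each tool pairs a description and a parameter schema with an executor. A hypothetical definition for `readFile`, using the Vercel AI SDK's `tool` helper with Zod (the return shape is an assumption, and AI SDK v5 renames `parameters` to `inputSchema`):

```typescript
import { tool } from "ai";
import { z } from "zod";

// Hypothetical shape of one tool; the real definitions live alongside the task.
const readFile = tool({
  description: "Read the contents of a file in the project",
  parameters: z.object({
    path: z.string().describe("Project-relative file path"),
  }),
  execute: async ({ path }) => {
    // The real implementation reads from the project's file storage.
    return { path, contents: "..." };
  },
});
```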
Streaming Updates
Updates are sent to Convex every 100ms (throttled):
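The throttling can be sketched with a small clock-injectable helper (illustrative, not the actual code; `send` stands in for the Convex mutation):

```typescript
// Coalesces rapid updates so at most one write per `intervalMs` reaches Convex.
class ThrottledSender<T> {
  private last = -Infinity;
  private pending: T | null = null;

  constructor(
    private send: (value: T) => void,
    private intervalMs = 100,
    private now: () => number = Date.now,
  ) {}

  push(value: T): void {
    const t = this.now();
    if (t - this.last >= this.intervalMs) {
      this.last = t;
      this.send(value);
      this.pending = null;
    } else {
      this.pending = value; // keep only the latest partial text
    }
  }

  flush(): void {
    if (this.pending !== null) {
      this.send(this.pending);
      this.pending = null;
    }
  }
}
```

The task would call `push` on every streamed chunk and `flush` once at the end so the final text is never dropped.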
Error Handling
Failures are gracefully handled:
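One way to express the failure path as a pure, testable helper (the field names are assumptions about the Convex schema, not the actual one):

```typescript
// Maps a thrown value to the status update written back to Convex.
interface FailureUpdate {
  status: "failed";
  error: string;
  failedAt: number;
}

function buildFailureUpdate(err: unknown, now: () => number = Date.now): FailureUpdate {
  // Thrown values are not always Error instances, so normalize to a string.
  const message = err instanceof Error ? err.message : String(err);
  return { status: "failed", error: message, failedAt: now() };
}
```

The task wraps its work in try/catch, writes this update so the UI shows the failure, then rethrows so Trigger.dev's retry logic still runs.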
Generate Project Task
Location: `trigger/tasks/generate-project.ts`
Creates complete projects from descriptions:
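As with the chat task, only the shape is sketched here (Trigger.dev v3 SDK; payload fields are illustrative):

```typescript
import { task } from "@trigger.dev/sdk/v3";

// Sketch only: the real trigger/tasks/generate-project.ts implements the
// step-by-step generation described below.
export const generateProject = task({
  id: "generate-project",
  run: async (payload: { projectId: string; description: string }) => {
    // Walk through the generation steps, asking the model for one group
    // of files at a time and logging progress events to Convex.
  },
});
```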
Step-by-Step Generation
Each step focuses on specific files:
- Config files - package.json, tsconfig.json
- Entry points - main.tsx, App.tsx
- Components - UI building blocks
- Pages - Route components
- Hooks - Custom React hooks
- Types - TypeScript definitions
- Utilities - Helper functions
- Documentation - README.md
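The step list above can be expressed as data the task iterates over (the file patterns beyond the named config and entry files are illustrative):

```typescript
// One entry per generation step; the task prompts the model once per entry.
interface GenerationStep {
  name: string;
  focus: string[]; // files or directories this step should produce
}

const GENERATION_STEPS: GenerationStep[] = [
  { name: "Config files", focus: ["package.json", "tsconfig.json"] },
  { name: "Entry points", focus: ["main.tsx", "App.tsx"] },
  { name: "Components", focus: ["src/components/"] },
  { name: "Pages", focus: ["src/pages/"] },
  { name: "Hooks", focus: ["src/hooks/"] },
  { name: "Types", focus: ["src/types/"] },
  { name: "Utilities", focus: ["src/lib/"] },
  { name: "Documentation", focus: ["README.md"] },
];
```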
Progress Tracking
Events logged to Convex for real-time UI updates:
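A minimal sketch of an event builder for these log entries (field names are assumptions about the schema):

```typescript
// Shape of a progress event written to Convex for the UI to render.
interface ProgressEvent {
  step: number;
  totalSteps: number;
  label: string;
  percent: number;
}

function progressEvent(step: number, totalSteps: number, label: string): ProgressEvent {
  return {
    step,
    totalSteps,
    label,
    percent: Math.round((step / totalSteps) * 100),
  };
}
```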
Tool Choice
Force specific tools for each step:
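With the Vercel AI SDK this is done via the `toolChoice` option; a sketch of mapping steps to a forced tool (shape per AI SDK v4, and the `producesFiles` flag is illustrative):

```typescript
// "auto" lets the model pick; the object form forces one named tool.
type ToolChoice = "auto" | { type: "tool"; toolName: string };

function toolChoiceForStep(step: { producesFiles: boolean }): ToolChoice {
  // Generation steps must emit files, so force the writeFile tool for them.
  return step.producesFiles ? { type: "tool", toolName: "writeFile" } : "auto";
}
```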
Invoking Jobs
From Frontend (via API Route)
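A sketch of a Next.js route handler that triggers the task server-side, so `TRIGGER_SECRET_KEY` never reaches the browser (the route path and payload fields are assumptions):

```typescript
// app/api/chat/route.ts (illustrative path)
import { tasks } from "@trigger.dev/sdk/v3";
// Type-only import keeps the route's payload in sync with the task definition.
import type { processMessage } from "@/trigger/tasks/process-message";

export async function POST(request: Request) {
  const { projectId, messageId, prompt } = await request.json();
  const handle = await tasks.trigger<typeof processMessage>("process-message", {
    projectId,
    messageId,
    prompt,
  });
  // Return the run id so the frontend can poll or subscribe to its status.
  return Response.json({ runId: handle.id });
}
```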
From Convex (via HTTP Action)
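A sketch using Convex's `httpAction` and Trigger.dev's REST trigger endpoint (URL and body shape follow the v3 API; verify against the current Trigger.dev docs, and remember to register the action in `convex/http.ts`):

```typescript
import { httpAction } from "./_generated/server";

// Starts the task via Trigger.dev's REST API; requires TRIGGER_SECRET_KEY
// to be set in the Convex deployment's environment variables.
export const startProcessing = httpAction(async (_ctx, request) => {
  const payload = await request.json();
  const res = await fetch(
    "https://api.trigger.dev/api/v1/tasks/process-message/trigger",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.TRIGGER_SECRET_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ payload }),
    },
  );
  // Relay Trigger.dev's response (including the run id) to the caller.
  return new Response(await res.text(), { status: res.status });
});
```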
Monitoring Jobs
Trigger.dev Dashboard
Access the dashboard at cloud.trigger.dev.

Run History
View all job executions with status and duration
Live Runs
Monitor currently executing jobs in real-time
Logs
Detailed execution logs and error traces
Metrics
Job success rate, duration, and retry statistics
Frontend Monitoring
Track job status in the UI.

Creating New Tasks
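A minimal new task, assuming the Trigger.dev v3 SDK (file path and names are illustrative):

```typescript
// trigger/tasks/my-task.ts (illustrative path)
import { task } from "@trigger.dev/sdk/v3";

export const myTask = task({
  id: "my-task", // must match the id used when triggering
  run: async (payload: { input: string }) => {
    // Do the work; anything returned is stored as the run's output.
    return { processed: payload.input.toUpperCase() };
  },
});
```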
Advanced Features
Retry Logic
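Retries are configured per task; a sketch with exponential backoff (option names per the v3 SDK):

```typescript
import { task } from "@trigger.dev/sdk/v3";

export const resilientTask = task({
  id: "resilient-task",
  retry: {
    maxAttempts: 5,
    minTimeoutInMs: 1_000,
    maxTimeoutInMs: 30_000,
    factor: 2,       // double the wait between attempts
    randomize: true, // add jitter to avoid thundering herds
  },
  run: async () => {
    // Throwing here causes Trigger.dev to retry per the policy above.
  },
});
```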
Scheduled Jobs
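Cron-style schedules use `schedules.task` (v3 SDK; the task shown is hypothetical):

```typescript
import { schedules } from "@trigger.dev/sdk/v3";

// Runs every day at midnight UTC.
export const nightlyCleanup = schedules.task({
  id: "nightly-cleanup",
  cron: "0 0 * * *",
  run: async (payload) => {
    // payload.timestamp is the scheduled time for this run
  },
});
```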
Delayed Execution
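Pass a `delay` option when triggering (v3 SDK; the payload placeholders are illustrative):

```typescript
import { tasks } from "@trigger.dev/sdk/v3";

// Start the run roughly one hour from now; `delay` also accepts a Date.
const handle = await tasks.trigger(
  "process-message",
  { projectId: "…", messageId: "…", prompt: "…" },
  { delay: "1h" },
);
```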
Batching
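`tasks.batchTrigger` starts many runs of the same task in one call (v3 SDK; payloads are illustrative):

```typescript
import { tasks } from "@trigger.dev/sdk/v3";

// One item per run; each gets its own retries, logs, and status.
await tasks.batchTrigger("process-message", [
  { payload: { projectId: "p1", messageId: "m1", prompt: "…" } },
  { payload: { projectId: "p1", messageId: "m2", prompt: "…" } },
]);
```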
Best Practices
Keep tasks focused
- Each task should do one thing well
- Break complex operations into multiple tasks
- Use task chaining for multi-step workflows
Handle errors gracefully
- Always catch and log errors
- Return meaningful error messages
- Use try-catch for external API calls
- Let Trigger.dev handle retries
Provide progress updates
- Stream updates for long operations
- Update Convex records frequently
- Show progress in UI
- Log milestones for debugging
Optimize performance
- Use throttling for real-time updates
- Batch database operations
- Cache expensive computations
- Set appropriate timeouts
Test thoroughly
- Test in the development environment
- Verify retry behavior
- Check error handling
- Monitor production metrics
Troubleshooting
Task not triggering
Symptoms: Job doesn’t start

Solutions:
- Verify `TRIGGER_SECRET_KEY` is set
- Check the task is deployed: `npx trigger.dev deploy`
- Ensure the task ID matches the trigger call
- Check the Trigger.dev dashboard for errors
Job timing out
Symptoms: Job fails with a timeout error

Solutions:
- Increase timeout in task config
- Break into smaller tasks
- Optimize expensive operations
- Use streaming for long-running AI calls
High failure rate
Symptoms: Jobs frequently fail

Solutions:
- Check Trigger.dev logs for error details
- Verify external API credentials
- Add better error handling
- Increase retry attempts
- Monitor Sentry for exceptions
Slow execution
Symptoms: Jobs take too long

Solutions:
- Profile code for bottlenecks
- Reduce AI token limits
- Optimize database queries
- Use parallel processing where possible
- Consider caching results
Next Steps
Architecture
Understand the system design
Error Tracking
Monitor job failures with Sentry
Custom Extensions
Build AI tools for your tasks
Contributing
Add new background jobs