Overview
Jean supports three AI CLI backends (Claude CLI, Codex CLI, and OpenCode) with flexible model selection, thinking levels, and customizable system prompts. All AI interactions run locally through your installed CLI tools.
Key Capabilities
Backend Selection
Jean supports three CLI backends:
Claude CLI (Anthropic):
- Claude Opus 4.6, Opus 4.5
- Claude Sonnet 4.6, Sonnet 4.5
- Claude Haiku
- Extended thinking (Think, Megathink, Ultrathink)
- Adaptive thinking with effort levels (Opus 4.6)
Codex CLI (OpenAI):
- GPT 5.3 Codex
- GPT 5.2 Codex
- GPT 5.1 Codex Max
- GPT 5.2
- GPT 5.1 Codex Mini
- Reasoning effort levels (low, medium, high, xhigh)
- Multi-agent collaboration (experimental)
OpenCode:
- Model routing through opencode/ prefix
- Community-driven development
- Compatible with OpenCode CLI
Model Selection
Claude Models: Opus 4.6, Opus 4.5, Sonnet 4.6, Sonnet 4.5, and Haiku (see Backend Selection above).
Thinking Levels
Claude extended thinking:
- Off: No extended thinking
- Think: 4,000 thinking tokens
- Megathink: 10,000 thinking tokens
- Ultrathink: 32,000 thinking tokens
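The token budgets above can be captured in a small lookup, handy when estimating how much thinking-token budget a request may consume. This is an illustrative sketch, not part of Jean's API; the function and constant names are assumptions:

```python
# Thinking-token budgets per Claude extended-thinking level, as listed above.
THINKING_BUDGETS = {
    "off": 0,
    "think": 4_000,
    "megathink": 10_000,
    "ultrathink": 32_000,
}

def thinking_budget(level: str) -> int:
    """Return the maximum thinking-token budget for a level (case-insensitive)."""
    try:
        return THINKING_BUDGETS[level.lower()]
    except KeyError:
        raise ValueError(f"unknown thinking level: {level!r}")

print(thinking_budget("Megathink"))  # 10000
```

Note that Ultrathink allows 8x the thinking tokens of Think, which is why the cost guidance below recommends starting lower and increasing only if needed.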
Provider Profiles
Route requests through alternative API providers. Predefined profiles are available for common providers.
Custom System Prompts
A global system prompt applies to all sessions, and each project can append its own.
Parallel Execution
Optional system prompt to encourage sub-agent parallelism.
How to Use
Selecting Backend & Model
Global defaults:
- Open Settings (Cmd/Ctrl + ,)
- Navigate to AI section
- Choose default backend (Claude/Codex/OpenCode)
- Select default model
- Set thinking/effort levels
Per-session:
- Open chat session
- Use toolbar dropdowns
- Change model, thinking level, backend
- Settings persist for session
Per-project:
- Right-click project → Settings
- AI pane
- Set default backend and provider
- New sessions inherit these settings
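The three scopes above resolve in a simple fallback chain: a session-level change from the toolbar wins, then the project default, then the global default. A minimal sketch of that resolution order (the function and dictionary names are illustrative assumptions, not Jean's internals):

```python
def resolve_setting(key, session=None, project=None, global_=None):
    """Resolve a setting: session override > project default > global default."""
    for scope in (session or {}, project or {}, global_ or {}):
        if key in scope:
            return scope[key]
    return None

# Example: the project pins the backend, the session overrides the model.
global_defaults = {"backend": "claude", "model": "sonnet-4.5"}
project_settings = {"backend": "codex"}
session_overrides = {"model": "gpt-5.2-codex"}

print(resolve_setting("backend", session_overrides, project_settings, global_defaults))  # codex
print(resolve_setting("model", session_overrides, project_settings, global_defaults))    # gpt-5.2-codex
```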
Configuring Thinking Levels
When to use each level:
Off:
- Simple refactors
- Straightforward implementations
- Following clear patterns
- Quick fixes
Think:
- Standard development tasks
- Code review
- Testing strategies
- Documentation
Megathink:
- Complex algorithms
- Architecture decisions
- Performance optimization
- Edge case analysis
Ultrathink:
- Novel problem solving
- Research and exploration
- Security analysis
- Deep debugging
Using Adaptive Thinking
Opus 4.6 effort levels:
Low:
- Quick questions
- Obvious solutions
- Pattern following
Medium:
- Normal development
- Code generation
- Light problem solving
High:
- Complex logic
- Multiple constraints
- Performance critical
Max:
- Unlimited reasoning
- Novel approaches
- Research problems
Setting Up Providers
Adding a custom provider:
- Settings → Providers
- Click “Add Profile”
- Enter name and settings JSON
- Configure environment variables:
- Save profile
Using a provider profile:
- Session toolbar → Provider dropdown
- Select custom profile
- Or set as default in Settings
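Jean's exact profile JSON schema isn't reproduced here, so the shape below is an assumption for illustration: a profile that routes the Claude CLI through an alternative endpoint by setting environment variables. The variable names shown (ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN) are the ones the Claude CLI reads for custom endpoints; the URL and key are placeholders:

```python
# Illustrative provider profile (the schema is an assumption, not Jean's exact format).
provider_profile = {
    "name": "my-proxy",
    "env": {
        # Point the Claude CLI at an alternative API endpoint.
        "ANTHROPIC_BASE_URL": "https://proxy.example.com",
        # Placeholder only; substitute your provider's API key.
        "ANTHROPIC_AUTH_TOKEN": "<your-api-key>",
    },
}
```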
Customizing System Prompts
Global prompt:
- Settings → AI → Magic Prompts
- Find “Global System Prompt”
- Edit in text editor
- Applies to all future messages
Project prompt:
- Project Settings → AI pane
- Enter project-specific prompt
- Appended after global prompt
- Inherited by all sessions in project
Testing your prompts:
- Create test session
- Ask AI to explain its instructions
- Verify prompts are working
- Adjust as needed
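Since the project prompt is appended after the global prompt, the final system prompt is a simple concatenation. A minimal sketch of that composition (the function name is an assumption, not Jean's API):

```python
def compose_system_prompt(global_prompt: str, project_prompt: str = "") -> str:
    """Join global and project prompts; the project prompt is appended after
    the global one, matching the order described above."""
    parts = [p.strip() for p in (global_prompt, project_prompt) if p and p.strip()]
    return "\n\n".join(parts)

print(compose_system_prompt("Prefer small, focused diffs.", "This project targets Python 3.12."))
```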
Configuration Options
Settings → AI
Claude Settings:
Per-Session Settings
Configurable in chat toolbar:
- Model selection
- Thinking level
- Effort level (if supported)
- Backend
- Provider profile
Mode-Specific Overrides
Build mode:
Best Practices
Model Selection Strategy
By task complexity:
Thinking Level Guidelines
Match to problem type:
- Deterministic tasks → Off
- Creative tasks → Think+
- Research → Megathink/Ultrathink
- Debugging → Megathink
Cost considerations:
- Thinking tokens count against limits
- Ultrathink = expensive
- Start lower, increase if needed
System Prompt Design
Keep prompts actionable. To test a prompt:
- Ask AI to implement something
- Verify it follows guidelines
- Adjust prompt if needed
- Iterate until consistent
Provider Configuration
When to use providers:
- Lower costs (OpenRouter)
- Regional models (MiniMax, Z.ai)
- Custom deployments
- Rate limit management
Tradeoffs:
- Performance: Anthropic direct > OpenRouter > Others
- Cost: Regional providers < OpenRouter < Anthropic
- Reliability: Anthropic > OpenRouter > Others
Performance Optimization
Reduce latency:
- Use appropriate thinking levels
- Choose closest provider
- Batch related questions
- Clear unused context
Reduce costs:
- Use Haiku for simple tasks
- Disable thinking when not needed
- Archive finished sessions
- Monitor token usage
Multi-Backend Workflows
Leverage each model's strengths:
- Claude Opus: Architecture & planning
- Codex: Code generation
- Sonnet: Code review & testing
- Haiku: Quick questions
Advanced Configuration
Per-magic-prompt overrides:
- Expensive models for investigation
- Fast models for commit messages
- Specific backends for specific tasks
To configure:
- Settings → AI → Magic Prompts
- Expand advanced options
- Set model/backend per prompt type
- Falls back to session defaults
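The override-with-fallback behavior described above can be sketched as a small merge: a per-prompt-type override wins where present, and anything unset falls back to the session defaults. Names here are illustrative assumptions, not Jean's internals:

```python
def settings_for_prompt(prompt_type, overrides, session_defaults):
    """Merge a per-magic-prompt override over session defaults; any key not
    overridden falls back to the session's model/backend settings."""
    merged = dict(session_defaults)
    merged.update(overrides.get(prompt_type, {}))
    return merged

session_defaults = {"backend": "claude", "model": "sonnet-4.5"}
overrides = {
    "investigation": {"model": "opus-4.6"},    # expensive model for investigation
    "commit_message": {"model": "haiku"},      # fast model for commit messages
}
print(settings_for_prompt("commit_message", overrides, session_defaults))
# An unconfigured prompt type simply inherits the session defaults.
print(settings_for_prompt("other", overrides, session_defaults))
```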