Overview
Adist integrates with OpenAI’s API to provide AI-driven code analysis using GPT models. OpenAI offers powerful language models with excellent code understanding and generation capabilities.
Available Models
You can choose from three GPT models:
- GPT-4o: latest and most capable model (default)
- GPT-4 Turbo: fast and powerful GPT-4 variant
- GPT-3.5 Turbo: fastest and most cost-effective option
Setup
Get an API Key
Sign up for an OpenAI API key at platform.openai.com
Set Environment Variable
Add your API key to your environment:
- Linux/macOS
- Windows (PowerShell)
To make it permanent, add the line to your ~/.bashrc, ~/.zshrc, or ~/.profile.
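For example, on Linux/macOS the variable can be exported for the current shell session (the key value below is a placeholder, not a real key):

```shell
# Make the key available to adist in this shell session.
# Replace the placeholder with your actual key from platform.openai.com.
export OPENAI_API_KEY="sk-your-key-here"

# Confirm it is set
echo "$OPENAI_API_KEY"
```

Adding the same `export` line to ~/.bashrc, ~/.zshrc, or ~/.profile makes it permanent.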
Configure Adist
Run the LLM configuration command, then select:
- OpenAI as your provider
- Your preferred GPT model (GPT-4o, GPT-4 Turbo, or GPT-3.5 Turbo)
Features
Context Caching
The OpenAI service implementation includes intelligent context caching:
- Topic Identification: Automatically identifies query topics using AI
- Cache Duration: Contexts are cached for 30 minutes
- Related Context Merging: Similar topics are merged for better responses
- Cache Cleanup: Old entries are automatically removed
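As an illustration of the 30-minute expiry, a file-based cache could be cleaned the same way with standard tools (the cache directory below is a hypothetical demo path, not adist's actual cache location):

```shell
# Hypothetical demo cache directory, not adist's real cache path
CACHE_DIR="${TMPDIR:-/tmp}/context-cache-demo"
mkdir -p "$CACHE_DIR"

# Remove cached context entries whose files are older than 30 minutes
find "$CACHE_DIR" -type f -mmin +30 -delete
```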
Query Complexity Estimation
Queries are analyzed and categorized as:
- Low Complexity: Simple questions (< 8 words, no technical terms)
- Medium Complexity: Standard questions (8-15 words or basic technical terms)
- High Complexity: Complex questions (> 15 words, code snippets, comparisons)
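The word-count part of this heuristic can be sketched in shell (the function name and thresholds are illustrative; technical-term and code-snippet detection is omitted):

```shell
# Classify a query as low/medium/high complexity by word count alone.
# The real estimator also weighs technical terms, code snippets, etc.
classify_query() {
  words=$(printf '%s\n' "$1" | wc -w)
  if [ "$words" -lt 8 ]; then
    echo low
  elif [ "$words" -le 15 ]; then
    echo medium
  else
    echo high
  fi
}

classify_query "What does this function do"   # → low (5 words)
```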
Document Relevance Scoring
The service scores documents based on:
- Code blocks and syntax
- Comments and documentation
- Function definitions (function, =>)
- Class definitions (class, interface)
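A minimal sketch of marker-based scoring with standard tools (the helper name and equal weighting are illustrative assumptions, not the actual implementation):

```shell
# Crude relevance score: count lines containing code-definition markers
# and sum the counts (illustrative only).
score_doc() {
  file="$1"
  score=0
  for marker in 'function' '=>' 'class' 'interface'; do
    hits=$(grep -cF -- "$marker" "$file" || true)
    score=$((score + hits))
  done
  echo "$score"
}
```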
Conversation Analysis
In chat mode, the service analyzes conversation patterns to detect:
- Follow-up Questions: Short queries or questions building on previous context
- Deep Dives: Extended conversations on related topics
Code Reference
The OpenAI service is implemented in src/utils/openai.ts.
Key Methods
summarizeFile
Generates comprehensive summaries of individual files.
generateOverallSummary
Creates a high-level project overview from file summaries.
queryProject
Answers questions about your project with context optimization.
chatWithProject
Enables conversational interactions with full history support.
Pricing
GPT-4o pricing:
- Input: $10 per million tokens
- Output: $30 per million tokens
Token usage is optimized through context caching and intelligent document selection.
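At the rates listed above, the cost of a single request can be estimated as follows (the token counts are hypothetical):

```shell
# Estimate request cost: input at $10/1M tokens, output at $30/1M tokens
awk -v in_tokens=20000 -v out_tokens=1500 'BEGIN {
  cost = in_tokens * 10 / 1e6 + out_tokens * 30 / 1e6
  printf "$%.4f\n", cost   # 0.2 + 0.045 = $0.2450
}'
```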
Configuration Options
Context Limits
- Maximum Context Length: 50,000 characters
- Cache Timeout: 30 minutes
- Dynamic Adjustment: Context size varies based on query complexity
Optimization Strategies
The service employs several strategies to optimize API usage:
- Context Reuse: Related queries share cached context
- Relevance Filtering: Only the most relevant documents are included
- Smart Truncation: Documents are truncated based on relevance scores
- Project Summaries: High-level overviews supplement missing context
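Smart truncation can be sketched as scaling a character budget by the relevance score (the function name, score range, and budget numbers are illustrative assumptions):

```shell
# Keep more characters of higher-scoring documents (illustrative budget)
truncate_doc() {
  file="$1"; score="$2"            # score assumed to be 0-100
  budget=$((500 + score * 100))    # 500..10500 characters
  head -c "$budget" "$file"
}
```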
Streaming Support
Both query and chat operations support streaming responses:
- Real-time response generation
- Lower perceived latency
- Token usage estimation (exact counts unavailable during streaming)
Best Practices
- Ask specific, focused questions
- Use streaming mode for long responses
- Leverage chat mode for related follow-up questions
Troubleshooting
API Key Not Found
If you see “OPENAI_API_KEY environment variable is required”:
- Verify the environment variable is set: echo $OPENAI_API_KEY
- Restart your terminal after setting the variable
- Check for typos in the variable name
Rate Limits
If you encounter rate limiting:
- Wait a few moments before retrying
- Consider reducing query frequency
- Check your API usage at platform.openai.com
- Upgrade your OpenAI plan if needed
Poor Response Quality
- Ensure your project is fully indexed: adist reindex
- Generate file summaries: adist reindex --summarize
- Try asking more specific questions
- Use chat mode for context-aware follow-ups
Streaming Issues
If streaming responses are incomplete or malformed:
- Try non-streaming mode (remove the --stream flag)
- Check your network connection
- Verify API key has proper permissions
Comparison with Other Providers
OpenAI Advantages:
- Larger ecosystem and community
- More established API
- Multiple model tiers for cost optimization
- Larger context windows
- Better code understanding in some cases
- More transparent pricing
Next Steps
- Start Querying: ask questions about your codebase
- Start Chatting: have conversations about your project