Overview
Adist provides powerful search capabilities that go beyond simple text matching. Using block-based semantic search, you can find relevant code and documentation with natural language queries.

Search Methods
Adist offers two main search commands:

Document Search
Find files matching your query.

AI-Powered Query
Get AI-generated answers with code context.

How Block Search Works
The block search engine:

- Normalizes the query: Converts to lowercase and extracts key terms
- Scores each block: Matches against content, titles, and metadata
- Includes context: Adds parent and child blocks for better understanding
- Ranks results: Sorts by relevance score
- Returns top matches: Shows the most relevant blocks from up to 5 documents
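The pipeline above can be sketched roughly as follows. This is a minimal illustration, not Adist's actual implementation: the block fields (`doc`, `title`, `content`), the stopword list, and the simplified scoring are all assumptions for the sake of the example.

```python
from collections import defaultdict

# Assumed stopword list; Adist's real list of filtered common words may differ.
STOPWORDS = {"and", "or", "the", "a", "an", "of", "to", "in"}

def normalize(query):
    # Lowercase the query and keep only meaningful terms.
    return [t for t in query.lower().split() if t not in STOPWORDS]

def search(blocks, query, max_docs=5):
    """Score every block, rank by relevance, and keep the top documents.

    `blocks` is a list of dicts with 'doc', 'title', and 'content' keys,
    a simplified stand-in for Adist's parsed block structure.
    """
    terms = normalize(query)
    scored = []
    for block in blocks:
        # Simplified scoring: title matches outweigh content matches.
        score = sum(
            (5 if t in block["title"].lower() else 0)
            + (1 if t in block["content"].lower() else 0)
            for t in terms
        )
        if score > 0:
            scored.append((score, block))
    # Rank results by relevance score, highest first.
    scored.sort(key=lambda s: s[0], reverse=True)
    # Return top matches from at most `max_docs` distinct documents.
    docs = defaultdict(list)
    for score, block in scored:
        if block["doc"] in docs or len(docs) < max_docs:
            docs[block["doc"]].append((score, block))
    return dict(docs)
```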
Scoring Algorithm
Blocks are scored based on:

Title Matches (highest priority)
- Title contains term: +5 points
- Exact title match: +10 points

Content matches
- Term appears in content: +1 point
- Frequency bonus: up to +1 point

Code matches
- Function/class name: +3 points
- Function signature: +2 points

Block type multipliers
- Code blocks (function, method, class): 1.2x
- Headings: 1.1x
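The point values above can be combined into a single scoring function like the sketch below. The block fields (`title`, `content`, `kind`, `symbol`, `signature`) and the shape of the frequency bonus (capped at +1) are assumptions; Adist's real schema and formula may differ.

```python
def score_block(block, terms):
    """Score one block against normalized query terms.

    Point values mirror the table above; field names are illustrative.
    """
    title = block.get("title", "").lower()
    content = block.get("content", "").lower()
    score = 0.0
    for term in terms:
        if term in title:
            score += 5                        # title contains term
        if title == term:
            score += 10                       # exact title match
        count = content.count(term)
        if count:
            score += 1                        # term appears in content
            score += min(count / 10, 1.0)     # frequency bonus, capped at +1 (assumed formula)
        if term == block.get("symbol", "").lower():
            score += 3                        # function/class name match
        if term in block.get("signature", "").lower():
            score += 2                        # appears in function signature
    # Block-type multipliers favor code blocks and headings.
    if block.get("kind") in ("function", "method", "class"):
        score *= 1.2
    elif block.get("kind") == "heading":
        score *= 1.1
    return score
```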
The search engine automatically filters out common words like “and”, “or”, “the” to focus on meaningful terms.
Query Command Features
Basic Usage
Streaming Responses
Get real-time AI responses as they’re generated.

Default Mode
Without streaming, the query command:

- Shows a loading spinner while generating
- Applies full syntax highlighting to code blocks
- Formats markdown properly
Search Results Display
When you run a query, you’ll see:

Debug Information
- File paths in a hierarchical structure
- Number of relevant blocks per file
- Block types and line numbers
- First 3 blocks per file (with “… and N more” if applicable)
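The per-file display described above can be sketched as a small formatter. This is an illustration of the truncation behavior only; the function name and block fields are hypothetical.

```python
def format_file_result(path, blocks, show=3):
    """Render one file's matching blocks: type and line numbers for the
    first `show` blocks, then a '... and N more' line if any remain.
    """
    lines = [path]
    for block in blocks[:show]:
        lines.append(f"  [{block['kind']}] lines {block['start']}-{block['end']}")
    if len(blocks) > show:
        lines.append(f"  ... and {len(blocks) - show} more")
    return "\n".join(lines)
```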
AI Answer
The AI provides:

- Direct answer to your question
- Code snippets with syntax highlighting
- References to specific files and line numbers
- Contextual explanations
Cost and Metadata
- API cost for the query
- Whether cached context was used (reduces cost)
- Query complexity level (low, medium, high)
Summary Queries
Adist recognizes summary-related questions and answers them using:

- Overall project summary (if available)
- Document-level summaries
- High-level structural information
Context Caching
For improved performance and reduced costs, Adist uses context caching:

- First query: Creates a cache of project context
- Subsequent queries: Reuses cached context
- Cache key: Based on project ID
- Benefits: Faster responses, lower API costs
When you see “(using cached context)”, the AI is reusing previously indexed project information.
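The caching behavior described above can be sketched as follows. This is a minimal illustration of a cache keyed by project ID, not Adist's actual cache implementation; the class and method names are hypothetical.

```python
class ContextCache:
    """Minimal sketch: cache project context keyed by project ID."""

    def __init__(self):
        self._cache = {}

    def get_context(self, project_id, build_context):
        # First query for a project: build the context and cache it.
        if project_id not in self._cache:
            self._cache[project_id] = build_context(project_id)
            return self._cache[project_id], False
        # Subsequent queries reuse the cached context ("using cached context"),
        # which is faster and avoids re-sending project context to the LLM.
        return self._cache[project_id], True
```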
Code Highlighting
The query command automatically detects and highlights code in multiple languages.

Search Tips
Be Specific
Good vs. Better Queries
Good: “authentication”
Better: “how does user authentication work?”
Best: “explain the JWT token validation in the authentication middleware”
Use Natural Language
Ask Follow-up Questions in Chat
For interactive conversations, use chat mode.

Legacy Search
The previous full-document search is still available.

Troubleshooting
No Results Found
If your search returns no results:

- Ensure the project is indexed
- Check the current project: the current project shows with a green arrow (→)
- Try broader terms: Start with general terms and narrow down
LLM Errors
If you get LLM-related errors:

Performance
Search Speed
Block-based search is typically very fast:

- Small projects (< 100 files): Instant
- Medium projects (100-1000 files): < 1 second
- Large projects (1000+ files): 1-3 seconds
Query Response Time
AI query response time depends on:

- LLM provider (Ollama is slower but free)
- Query complexity
- Number of relevant blocks
- Whether context is cached
Next Steps
Interactive Chat
Have conversations about your code
Project Management
Manage multiple projects