The query command searches your knowledge graph index to answer questions using different retrieval strategies.
Usage
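A minimal invocation sketch; the project path and question are placeholders, and the exact command shape (the question as a positional argument, per the Arguments section below) may vary by version:

```shell
graphrag query --root ./my_project --method global \
  "What are the main themes in this dataset?"
```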
Arguments
The question or query to execute against the knowledge graph.
Options
- --root, -r: The project root directory containing the configuration and index.
- --method, -m: The query algorithm to use. Available methods:
  - global: Best for questions about the entire dataset or high-level themes
  - local: Best for specific entity-focused questions
  - drift: Advanced multi-hop reasoning across communities
  - basic: Simple similarity-based search over text chunks
- --data, -d: Index output directory containing the parquet files. If not specified, uses output_storage.base_dir from the configuration.
- --community-level: Leiden hierarchy level from which to load community reports. Higher values represent smaller, more granular communities. Used by the global, local, and drift methods.
- --dynamic-community-selection: Use global search with dynamic community selection, which allows the search to adaptively select relevant communities. Only used by the global method.
- --response-type: Free-form description of the desired response format. Examples: "Single Sentence", "Multiple Paragraphs", "List of 3-7 Points", "Detailed Report", "Executive Summary".
- --streaming / --no-streaming: Print the response in a streaming manner as it is generated, rather than waiting for the complete response.
- --verbose, -v: Run the query with verbose logging to see detailed processing information.
Search methods
Global search
Best for questions about overall themes, trends, or dataset-wide patterns.
- Uses community reports at the specified hierarchy level
- Employs map-reduce approach to aggregate insights
- Best for broad, thematic questions
Local search
Best for questions about specific entities, events, or detailed information.
- Focuses on specific entities and their neighborhoods
- Uses entity embeddings to find relevant context
- Best for targeted, specific questions
DRIFT search
Advanced search using multi-hop reasoning across communities.
- Performs multi-hop reasoning across the graph
- Explores multiple community levels
- Best for complex analytical questions requiring deep reasoning
Basic search
Simple similarity-based search over text chunks.
- Uses text embeddings for similarity matching
- Searches over raw text units
- Best for simple retrieval or when you want direct text excerpts
Examples
Global search with custom response format
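For instance (the question text is illustrative):

```shell
graphrag query --method global --response-type "List of 3-7 Points" \
  "What are the top themes across the dataset?"
```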
Local search with streaming
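A sketch using the --streaming flag with an entity-focused question (entity names are placeholders):

```shell
graphrag query --method local --streaming \
  "What role did Entity X play in Project Y?"
```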
Query specific community level
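Assuming the hierarchy-level option is spelled --community-level, a sketch that loads reports from a more granular level:

```shell
graphrag query --method global --community-level 2 \
  "Summarize the main topics at a detailed level"
```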
Dynamic community selection
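For example, enabling adaptive community selection for a global query:

```shell
graphrag query --method global --dynamic-community-selection \
  "What trends emerge across the dataset?"
```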
Query with custom data directory
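A sketch pointing the query at an explicit index output directory via -d (long form --data is an assumption):

```shell
graphrag query --method basic --data ./output \
  "Find passages mentioning quarterly revenue"
```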
Verbose output for debugging
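For instance, running with verbose logging (long form --verbose assumed from the -v alias) to inspect retrieval details:

```shell
graphrag query --method local --verbose \
  "Why was the project delayed?"
```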
Response types
The --response-type parameter is a free-form instruction that guides the LLM’s response format:
- “Single Sentence” - Concise one-sentence answer
- “Multiple Paragraphs” - Detailed multi-paragraph response (default)
- “List of 3-7 Points” - Bullet-point summary
- “Single Paragraph” - Brief paragraph
- “Detailed Report” - Comprehensive analysis
- “Executive Summary” - High-level overview
- “Comparison Table” - Structured comparison
Performance considerations
- Global search: Processes community reports, faster than local for broad questions
- Local search: Retrieves entity neighborhoods, can be slower for large graphs
- DRIFT search: Most computationally intensive due to multi-hop reasoning
- Basic search: Fastest method, simple vector similarity
- Streaming: Recommended for long responses to see results incrementally
Output format
By default, the query response is printed to stdout. When using --verbose, you’ll also see:
- Retrieved context information
- Token usage statistics
- Processing time
- Search parameters used