{token_name}, and the descriptions for the available tokens can be found below.
Indexing prompts
These prompts are used during the indexing pipeline to extract and process knowledge from your input data.
Entity/relationship extraction
This prompt is used to extract entities and relationships from text units. View source to see the default prompt implementation.
Available tokens
{input_text} - The input text to be processed for entity and relationship extraction.
{entity_types} - A list of entity types to extract from the text.
{tuple_delimiter} - A delimiter for separating values within a tuple. A single tuple is used to represent an individual entity or relationship.
{record_delimiter} - A delimiter for separating tuple instances.
{completion_delimiter} - An indicator for when generation is complete.
Example usage
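A custom extraction prompt is wired in through settings.yaml. The sketch below is illustrative: the exact section name (shown here as extract_graph) and the file path are assumptions that may differ across GraphRAG versions, so check your version's configuration reference.

```yaml
extract_graph:
  # Path to your custom prompt file, relative to the project root (illustrative path)
  prompt: "prompts/extract_graph.txt"
  # Entity types injected into the {entity_types} token (example values)
  entity_types: [organization, person, geo, event]
```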
Summarize entity/relationship descriptions
This prompt is used to summarize multiple descriptions of the same entity or relationship into a single coherent description. View source to see the default prompt implementation.
Available tokens
{entity_name} - The name of the entity or the source/target pair of the relationship.
{description_list} - A list of descriptions for the entity or relationship that need to be summarized.
Example usage
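As a sketch, the summarization prompt can be overridden in settings.yaml. The section name and path below are illustrative and may vary by GraphRAG version:

```yaml
summarize_descriptions:
  # Path to your custom summarization prompt (illustrative path)
  prompt: "prompts/summarize_descriptions.txt"
```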
Claim extraction
This prompt is used to extract claims or facts from text that could be relevant for information discovery. View source to see the default prompt implementation.
Available tokens
{input_text} - The input text to be processed for claim extraction.
{tuple_delimiter} - A delimiter for separating values within a tuple. A single tuple is used to represent an individual claim.
{record_delimiter} - A delimiter for separating tuple instances.
{completion_delimiter} - An indicator for when generation is complete.
{entity_specs} - A list of entity types relevant to the claims.
{claim_description} - Description of what claims should look like. Default is: “Any claims or facts that could be relevant to information discovery.”
See the configuration documentation for details on how to change the claim description.
Example usage
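A settings.yaml sketch for claim extraction, including a custom claim description. The section name (shown as extract_claims) and the path are assumptions; older versions may use a different key such as claim_extraction:

```yaml
extract_claims:
  # Claim extraction is typically disabled by default; enable it explicitly
  enabled: true
  # Path to your custom claim extraction prompt (illustrative path)
  prompt: "prompts/extract_claims.txt"
  # Overrides the default value of the {claim_description} token
  description: "Any claims or facts that could be relevant to information discovery."
```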
Generate community reports
This prompt is used to generate comprehensive reports for communities detected in the knowledge graph. View source to see the default prompt implementation.
Available tokens
{input_text} - The input text to generate the report with. This will contain tables of entities and relationships that belong to the community.
Example usage
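A settings.yaml sketch for community report generation; the section name and path are illustrative and may differ by version:

```yaml
community_reports:
  # Path to your custom community report prompt (illustrative path)
  prompt: "prompts/community_report.txt"
```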
Query prompts
These prompts are used during query operations to generate responses based on the knowledge graph.
Local search
The local search prompt is used for detailed queries that focus on specific entities and their relationships. View source to see the default prompt implementation.
Available tokens
{response_type} - Describe how the response should look. Default is “multiple paragraphs”.
{context_data} - The data tables from GraphRAG’s index containing relevant entities, relationships, and claims.
Example usage
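A settings.yaml sketch for the local search prompt; the key and path below are assumptions that may vary by GraphRAG version:

```yaml
local_search:
  # Path to your custom local search system prompt (illustrative path)
  prompt: "prompts/local_search_system_prompt.txt"
```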
Global search
Global search uses a map/reduce approach to summarization. You can tune these prompts independently. This search also includes the ability to adjust the use of general knowledge from the model’s training.
Mapper prompt - View source
Reducer prompt - View source
Knowledge prompt - View source
Available tokens
{response_type} - Describe how the response should look (reducer only). Default is “multiple paragraphs”.
{context_data} - The data tables from GraphRAG’s index containing community reports and summaries.
Example usage
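Because global search uses separate mapper, reducer, and knowledge prompts, each can be overridden independently. This is a sketch: the key names and paths are assumptions that may differ across GraphRAG versions:

```yaml
global_search:
  # Mapper stage prompt: summarizes each batch of community reports (illustrative path)
  map_prompt: "prompts/global_search_map_system_prompt.txt"
  # Reducer stage prompt: combines mapped summaries into the final answer (illustrative path)
  reduce_prompt: "prompts/global_search_reduce_system_prompt.txt"
  # Knowledge prompt: controls use of general knowledge from model training (illustrative path)
  knowledge_prompt: "prompts/global_search_knowledge_system_prompt.txt"
```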
Drift search
Drift search is a hybrid approach that combines aspects of both local and global search. View source to see the default prompt implementation.
Available tokens
{response_type} - Describe how the response should look. Default is “multiple paragraphs”.
{context_data} - The data tables from GraphRAG’s index.
{community_reports} - The most relevant community reports to include in the summarization.
{query} - The query text as injected into the context.
Example usage
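A settings.yaml sketch for the drift search prompt; the key and path are illustrative and may vary by GraphRAG version:

```yaml
drift_search:
  # Path to your custom drift search system prompt (illustrative path)
  prompt: "prompts/drift_search_system_prompt.txt"
```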
Best practices
Start with auto tuning
Before manually tuning prompts, run auto tuning first to generate domain-adapted prompts. You can then manually refine these auto-generated prompts for even better results.
Test incrementally
When manually tuning prompts, make small changes and test the results. Run indexing on a small subset of your data to quickly evaluate the impact of your changes.
Preserve token placeholders
Ensure that all required token placeholders (e.g., {input_text}, {entity_types}) are included in your custom prompts. Missing tokens will cause errors during execution.
Version control
Keep your custom prompts in version control and document why specific changes were made. This helps with debugging and collaboration.
Review source prompts
Click the “View source” links above to see the default prompt implementations. This will help you understand the expected structure and best practices.
Workflow
Choose a prompt to customize
Identify which prompt you want to modify based on your use case (entity extraction, summarization, etc.).
Review the default prompt
Click the “View source” link to see the default implementation and understand the available tokens.
Create a custom prompt file
Write your custom prompt in a text file, ensuring all required tokens are included.
Update settings.yaml
Add the path to your custom prompt file in the appropriate section of your configuration.
Test on sample data
Run indexing on a small subset of your data to verify the prompt works correctly.
Next steps
Configuration
Learn more about GraphRAG configuration options
Indexing
Run the indexing pipeline with your custom prompts
Auto tuning
Generate domain-adapted prompts automatically
Query operations
Use custom query prompts for better search results