The GraphRAG indexer, by default, will run with a handful of prompts that are designed to work well in the broad context of knowledge discovery. However, it is quite common to want to tune the prompts to better suit your specific use case.
Manual tuning is an advanced feature. Most users should use auto tuning instead, which automatically generates domain-adapted prompts.
Each of the default prompts may be overridden by writing a custom prompt file in plaintext and pointing to it in your configuration. Prompts use token replacements in the form {token_name}; the tokens available to each prompt are described below.
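Conceptually, this token replacement works like Python's `str.format`: each {token_name} placeholder in the prompt file is substituted at runtime. The sketch below is illustrative only (it is not GraphRAG's internal implementation, and the delimiter values are hypothetical):

```python
# Illustrative sketch of {token_name} replacement -- not GraphRAG's actual code.
template = (
    "Extract entities of the following types: {entity_types}\n"
    "Separate tuple values with {tuple_delimiter} and records with {record_delimiter}.\n"
    "Text: {input_text}"
)

# Substitute the tokens the indexer would supply at runtime.
prompt = template.format(
    entity_types="person, organization, location",
    tuple_delimiter="<|>",
    record_delimiter="##",
    input_text="Marie Curie worked at the University of Paris.",
)
print(prompt)
```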

Indexing prompts

These prompts are used during the indexing pipeline to extract and process knowledge from your input data.

Entity/relationship extraction

This prompt is used to extract entities and relationships from text units.

View source

Available tokens

{input_text}
string
The input text to be processed for entity and relationship extraction.
{entity_types}
string
A list of entity types to extract from the text.
{tuple_delimiter}
string
A delimiter for separating values within a tuple. A single tuple is used to represent an individual entity or relationship.
{record_delimiter}
string
A delimiter for separating tuple instances.
{completion_delimiter}
string
An indicator for when generation is complete.

Example usage

settings.yaml
entity_extraction:
  prompt: "prompts/custom_extract_graph.txt"
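A custom extract-graph prompt file might follow a skeleton like the one below. The wording and structure are hypothetical; only the {token} placeholders are required, and the default prompt is the best reference for the expected output format:

```text
-Goal-
Given a text document, identify all entities of the types [{entity_types}]
and all relationships among them.

-Steps-
1. For each entity, output:
   ("entity"{tuple_delimiter}<name>{tuple_delimiter}<type>{tuple_delimiter}<description>)
2. For each relationship, output:
   ("relationship"{tuple_delimiter}<source>{tuple_delimiter}<target>{tuple_delimiter}<description>)
3. Separate records with {record_delimiter} and end with {completion_delimiter}.

-Input-
{input_text}
```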

Summarize entity/relationship descriptions

This prompt is used to summarize multiple descriptions of the same entity or relationship into a single coherent description.

View source

Available tokens

{entity_name}
string
The name of the entity or the source/target pair of the relationship.
{description_list}
string
A list of descriptions for the entity or relationship that need to be summarized.

Example usage

settings.yaml
summarize_descriptions:
  prompt: "prompts/custom_summarize_descriptions.txt"

Claim extraction

This prompt is used to extract claims or facts from text that could be relevant for information discovery.

View source

Available tokens

{input_text}
string
The input text to be processed for claim extraction.
{tuple_delimiter}
string
A delimiter for separating values within a tuple. A single tuple is used to represent an individual claim.
{record_delimiter}
string
A delimiter for separating tuple instances.
{completion_delimiter}
string
An indicator for when generation is complete.
{entity_specs}
string
A list of entity types relevant to the claims.
{claim_description}
string
Description of what claims should look like. Default is: “Any claims or facts that could be relevant to information discovery.”
See the configuration documentation for details on how to change the claim description.

Example usage

settings.yaml
claim_extraction:
  enabled: true
  prompt: "prompts/custom_extract_claims.txt"
  description: "Claims related to scientific discoveries and findings"
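A custom claim-extraction prompt can follow the same pattern, using the claim-specific tokens. The skeleton below is hypothetical; consult the default prompt for the exact tuple layout:

```text
-Goal-
Given a document and the entity specification [{entity_specs}], extract all
claims matching this description: {claim_description}

Output each claim as a tuple of values separated by {tuple_delimiter},
separate claims with {record_delimiter}, and end with {completion_delimiter}.

-Input-
{input_text}
```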

Generate community reports

This prompt is used to generate comprehensive reports for communities detected in the knowledge graph.

View source

Available tokens

{input_text}
string
The input text to generate the report with. This will contain tables of entities and relationships that belong to the community.

Example usage

settings.yaml
community_reports:
  prompt: "prompts/custom_community_report.txt"

Query prompts

These prompts are used during query operations to generate responses based on the knowledge graph.

Local search

The local search prompt is used for detailed queries that focus on specific entities and their relationships.

View source

Available tokens

{response_type}
string
Describes how the response should look. The default is “multiple paragraphs”.
{context_data}
string
The data tables from GraphRAG’s index containing relevant entities, relationships, and claims.

Example usage

settings.yaml
local_search:
  prompt: "prompts/custom_local_search.txt"
  text_unit_prop: 0.5
  community_prop: 0.1
  conversation_history_max_turns: 5
Global search

Global search uses a map/reduce approach to summarization: a mapper prompt processes individual community reports, and a reducer prompt combines those partial results into a final answer. You can tune these prompts independently. This search also includes a knowledge prompt for adjusting how much general knowledge from the model’s training is used.

Mapper prompt

View source

Reducer prompt

View source

Knowledge prompt

View source

Available tokens

{response_type}
string
Describes how the response should look (reducer only). The default is “multiple paragraphs”.
{context_data}
string
The data tables from GraphRAG’s index containing community reports and summaries.

Example usage

settings.yaml
global_search:
  map_prompt: "prompts/custom_global_map.txt"
  reduce_prompt: "prompts/custom_global_reduce.txt"
  knowledge_prompt: "prompts/custom_global_knowledge.txt"
  max_data_tokens: 12000
Drift search

Drift search is a hybrid approach that combines aspects of both local and global search.

View source

Available tokens

{response_type}
string
Describes how the response should look. The default is “multiple paragraphs”.
{context_data}
string
The data tables from GraphRAG’s index.
{community_reports}
string
The most relevant community reports to include in the summarization.
{query}
string
The query text as injected into the context.

Example usage

settings.yaml
drift_search:
  prompt: "prompts/custom_drift_search.txt"

Best practices

Start with auto tuning

Before manually tuning prompts, run auto tuning first to generate domain-adapted prompts. You can then manually refine these auto-generated prompts for even better results.
Iterate in small steps

When manually tuning prompts, make small changes and test the results. Run indexing on a small subset of your data to quickly evaluate the impact of your changes.

Include all required tokens

Ensure that all required token placeholders (e.g., {input_text}, {entity_types}) are included in your custom prompts. Missing tokens will cause errors during execution.

Version your prompts

Keep your custom prompts in version control and document why specific changes were made. This helps with debugging and collaboration.

Study the defaults

Click the “View source” links above to see the default prompt implementations. This will help you understand the expected structure and best practices.
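A quick pre-flight check can catch missing placeholders before a full indexing run. The helper below is hypothetical (it is not part of GraphRAG, and the required-token mapping shown covers only two prompts as an example):

```python
import string

# Required {token_name} placeholders per prompt -- a hypothetical mapping;
# extend it with the token lists documented above for each prompt you customize.
REQUIRED_TOKENS = {
    "extract_graph": {"input_text", "entity_types", "tuple_delimiter",
                      "record_delimiter", "completion_delimiter"},
    "summarize_descriptions": {"entity_name", "description_list"},
}

def missing_tokens(prompt_text: str, prompt_name: str) -> set[str]:
    """Return the required placeholders absent from prompt_text."""
    found = {field for _, field, _, _ in string.Formatter().parse(prompt_text)
             if field}
    return REQUIRED_TOKENS[prompt_name] - found

# Example: a custom prompt that forgot {description_list}.
custom = "Summarize these descriptions of {entity_name}:"
print(missing_tokens(custom, "summarize_descriptions"))  # -> {'description_list'}
```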

Workflow

1. Choose a prompt to customize: Identify which prompt you want to modify based on your use case (entity extraction, summarization, etc.).
2. Review the default prompt: Click the “View source” link to see the default implementation and understand the available tokens.
3. Create a custom prompt file: Write your custom prompt in a text file, ensuring all required tokens are included.
4. Update settings.yaml: Add the path to your custom prompt file in the appropriate section of your configuration.
5. Test on sample data: Run indexing on a small subset of your data to verify the prompt works correctly.
6. Iterate and refine: Review the results and adjust your prompt as needed. Repeat until you achieve the desired output quality.

Next steps

Configuration

Learn more about GraphRAG configuration options

Indexing

Run the indexing pipeline with your custom prompts

Auto tuning

Generate domain-adapted prompts automatically

Query operations

Use custom query prompts for better search results
