Comprehensive guide to configuring GEPA optimization runs
GEPA provides extensive configuration options to control optimization behavior, resource usage, and experiment tracking. This guide covers all configuration settings and how to use them effectively.
```python
from gepa.optimize_anything import EngineConfig

engine = EngineConfig(
    # Stopping conditions
    max_metric_calls=300,           # Stop after 300 evaluations
    max_candidate_proposals=50,     # Stop after 50 proposal attempts

    # Execution control
    seed=42,                        # Random seed for reproducibility
    run_dir="./runs/experiment_1",  # Save checkpoints here
    raise_on_exception=True,        # Raise on errors (vs. log and continue)

    # UI
    display_progress_bar=False,     # Show tqdm progress bar
)
```
```python
from gepa.optimize_anything import ReflectionConfig

reflection = ReflectionConfig(
    reflection_lm="openai/gpt-4o",  # Model for reflection
    reflection_minibatch_size=3,    # Examples per reflection
)
```
1. reflection_lm

Type: `LanguageModel | str | None` · Default: `"openai/gpt-5.1"`

The LLM used to propose improved candidates. Can be:
- **String**: a LiteLLM model name (e.g., `"openai/gpt-4o"`)
- **Callable**: a custom LM function
```python
# Using LiteLLM
reflection = ReflectionConfig(
    reflection_lm="openai/gpt-4o",
)

# Using a custom function
from gepa.optimize_anything import make_litellm_lm

custom_lm = make_litellm_lm("anthropic/claude-3-5-sonnet-20241022")
reflection = ReflectionConfig(
    reflection_lm=custom_lm,
)
```
2. reflection_minibatch_size

Type: `int | None` · Default: `None` (auto: 1 for single-task, 3 for multi-task)

Number of examples shown to the LLM per reflection step.
```python
reflection = ReflectionConfig(
    reflection_minibatch_size=5,  # Show 5 examples at a time
)
```
- Smaller batches → more focused improvements
- Larger batches → more context, potentially better generalization
3. reflection_prompt_template

Type: `str | dict[str, str] | None` · Default: built-in template

Custom prompt template for reflection. Use the `<curr_param>` and `<side_info>` placeholders.
````python
custom_template = """You are optimizing a system parameter.

Current parameter:
<curr_param>

Evaluation results:
<side_info>

Propose an improved parameter based on the results.
Provide ONLY the improved parameter within ``` blocks."""

reflection = ReflectionConfig(
    reflection_prompt_template=custom_template,
)
````
4. module_selector

Type: `ReflectionComponentSelector | Literal["round_robin", "all"]` · Default: `"round_robin"`

Strategy for selecting which components to update each iteration:
"round_robin": Cycle through components
"all": Update all components together
```python
reflection = ReflectionConfig(
    module_selector="all",  # Update all params together
)
```
Merging enables cross-pollination between candidates on the Pareto frontier.
```python
from gepa.optimize_anything import GEPAConfig, MergeConfig

merge = MergeConfig(
    max_merge_invocations=5,    # Try up to 5 merges
    merge_val_overlap_floor=5,  # Min overlap required to merge
)

config = GEPAConfig(
    merge=merge,  # Enable merging
)
```
```python
from gepa.optimize_anything import GEPAConfig, RefinerConfig

refiner = RefinerConfig(
    refiner_lm="openai/gpt-4o-mini",  # Use a cheaper model
    max_refinements=2,                # Refine up to 2 times
)

config = GEPAConfig(
    refiner=refiner,  # Enable refinement
)
```
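Putting the pieces together, a full run configuration can combine these sub-configs into a single `GEPAConfig`. The sketch below is illustrative: the `engine` and `reflection` keyword names are assumptions inferred from the `merge=` and `refiner=` usage shown above, not confirmed API.

```python
from gepa.optimize_anything import (
    EngineConfig,
    GEPAConfig,
    MergeConfig,
    RefinerConfig,
    ReflectionConfig,
)

# Illustrative sketch only: `engine=` and `reflection=` keyword names are
# assumed by analogy with the merge= and refiner= examples above.
config = GEPAConfig(
    engine=EngineConfig(
        max_metric_calls=300,
        seed=42,
        run_dir="./runs/experiment_1",
    ),
    reflection=ReflectionConfig(
        reflection_lm="openai/gpt-4o",
        reflection_minibatch_size=3,
    ),
    merge=MergeConfig(max_merge_invocations=5),
    refiner=RefinerConfig(refiner_lm="openai/gpt-4o-mini"),
)
```

Keeping stopping conditions, reflection behavior, merging, and refinement in separate config objects lets you tune each concern independently and reuse fragments across experiments.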