
Overview

GEPAConfig is the main configuration class for optimize_anything(). It groups all settings into nested component configs, providing a clean interface for controlling optimization behavior. Most users only need to set engine.max_metric_calls and optionally reflection.reflection_lm.

Class Definition

from gepa.optimize_anything import (
    GEPAConfig,
    EngineConfig,
    ReflectionConfig,
    TrackingConfig,
    MergeConfig,
    RefinerConfig,
)

config = GEPAConfig(
    engine=EngineConfig(...),
    reflection=ReflectionConfig(...),
    tracking=TrackingConfig(...),
    merge=MergeConfig(...),
    refiner=RefinerConfig(...),
    stop_callbacks=...,
)

Parameters

engine
EngineConfig
default:"EngineConfig()"
Controls the optimization run loop: budget, parallelism, caching, and stopping conditions. See EngineConfig for details.
reflection
ReflectionConfig
default:"ReflectionConfig()"
Controls how the LLM proposes improved candidates each iteration. Key parameters:
  • reflection_lm: The model used for reflection (default: "openai/gpt-5.1")
  • reflection_minibatch_size: Number of examples shown per reflection step
  • reflection_prompt_template: Custom prompt template for reflection
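As a minimal sketch, reflection can be tuned independently of the rest of the config; the model string below is illustrative (any model identifier your setup supports should work):

```python
from gepa.optimize_anything import GEPAConfig, ReflectionConfig

# Sketch: use a cheaper reflection model and show more examples per step.
config = GEPAConfig(
    reflection=ReflectionConfig(
        reflection_lm="openai/gpt-4.1-mini",  # illustrative model name
        reflection_minibatch_size=10,
    )
)
```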
tracking
TrackingConfig
default:"TrackingConfig()"
Experiment tracking and logging configuration (W&B, MLflow, or a custom logger). Key parameters:
  • use_wandb: Enable Weights & Biases tracking
  • wandb_api_key: W&B API key
  • use_mlflow: Enable MLflow tracking
  • logger: Custom logger instance
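For MLflow-based tracking (the advanced example below shows the W&B variant), a minimal sketch looks like this; it assumes an MLflow tracking server is reachable, e.g. via the standard MLFLOW_TRACKING_URI environment variable:

```python
from gepa.optimize_anything import GEPAConfig, TrackingConfig

# Sketch: log runs to MLflow instead of W&B.
config = GEPAConfig(
    tracking=TrackingConfig(use_mlflow=True)
)
```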
merge
MergeConfig | None
default:"None"
Enables cross-pollination between candidates on the Pareto frontier. When set, GEPA periodically attempts to merge the strengths of two candidates that each excel on different subsets of the validation set. Key parameters:
  • max_merge_invocations: Maximum number of merge operations
  • merge_val_overlap_floor: Minimum validation overlap required for merging
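A minimal sketch of enabling merging with a small budget (the specific values are illustrative, not recommendations):

```python
from gepa.optimize_anything import GEPAConfig, MergeConfig

# Sketch: allow up to 5 merges; parents must agree on at least
# 3 validation examples before a merge is attempted.
config = GEPAConfig(
    merge=MergeConfig(
        max_merge_invocations=5,
        merge_val_overlap_floor=3,
    )
)
```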
refiner
RefinerConfig | None
default:"None"
Automatic per-evaluation candidate refinement via an LLM. When enabled, after each evaluation GEPA calls an LLM to propose a refined version based on feedback. The better of (original, refined) is kept. Key parameters:
  • refiner_lm: Language model for refinement (defaults to reflection_lm)
  • max_refinements: Maximum refinement iterations per evaluation
Leave as None (the default) to disable refinement.
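A minimal sketch of enabling the refiner; per the parameter list above, refiner_lm falls back to reflection_lm when omitted, and the model string here is illustrative:

```python
from gepa.optimize_anything import GEPAConfig, RefinerConfig

# Sketch: one refinement pass per evaluation, using an explicit model.
config = GEPAConfig(
    refiner=RefinerConfig(
        refiner_lm="openai/gpt-5.1",  # omit to reuse reflection_lm
        max_refinements=1,
    )
)
```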
stop_callbacks
StopperProtocol | Sequence[StopperProtocol] | None
default:"None"
Custom stopping conditions beyond the basic ones in EngineConfig. Can be a single stopper or a list of stoppers. See Stop Conditions for available options.

Methods

to_dict()

Convert config to dictionary representation.
config = GEPAConfig(engine=EngineConfig(max_metric_calls=100))
config_dict = config.to_dict()
Returns: dict[str, Any] - Dictionary representation of the config

from_dict()

Create config from dictionary representation.
config_dict = {"engine": {"max_metric_calls": 100}}
config = GEPAConfig.from_dict(config_dict)
Parameters:
  • d (dict[str, Any]): Dictionary containing config parameters
Returns: GEPAConfig - New config instance

Usage Examples

Basic Configuration

from gepa.optimize_anything import optimize_anything, GEPAConfig, EngineConfig

# Minimal config - just set the evaluation budget
config = GEPAConfig(
    engine=EngineConfig(max_metric_calls=200)
)

result = optimize_anything(
    seed_candidate="def solve(x): return x",
    evaluator=my_evaluator,
    objective="Optimize the algorithm for speed",
    config=config,
)

Advanced Configuration

from gepa.optimize_anything import (
    optimize_anything,
    GEPAConfig,
    EngineConfig,
    ReflectionConfig,
    RefinerConfig,
    MergeConfig,
    TrackingConfig,
)

config = GEPAConfig(
    engine=EngineConfig(
        max_metric_calls=500,
        parallel=True,
        max_workers=16,
        cache_evaluation=True,
        capture_stdio=True,
        run_dir="./experiments/run_001",
    ),
    reflection=ReflectionConfig(
        reflection_lm="openai/gpt-5.1",
        reflection_minibatch_size=5,
        module_selector="round_robin",
    ),
    refiner=RefinerConfig(
        max_refinements=2,
    ),
    merge=MergeConfig(
        max_merge_invocations=10,
        merge_val_overlap_floor=3,
    ),
    tracking=TrackingConfig(
        use_wandb=True,
        wandb_api_key="your-api-key",
        wandb_init_kwargs={"project": "my-optimization"},
    ),
)

result = optimize_anything(
    seed_candidate={"prompt": "Solve this problem:", "context": "Math problem"},
    evaluator=my_evaluator,
    dataset=train_data,
    valset=val_data,
    objective="Generate prompts that solve math problems",
    background="Use step-by-step reasoning",
    config=config,
)

With Custom Stop Conditions

from gepa.optimize_anything import GEPAConfig, EngineConfig
from gepa.utils import NoImprovementStopper, TimeoutStopCondition

config = GEPAConfig(
    engine=EngineConfig(
        max_metric_calls=1000,
        parallel=True,
    ),
    stop_callbacks=[
        NoImprovementStopper(max_iterations_without_improvement=50),
        TimeoutStopCondition(timeout_seconds=3600),  # 1 hour
    ],
)

result = optimize_anything(
    seed_candidate=initial_code,
    evaluator=my_evaluator,
    objective="Optimize for performance",
    config=config,
)

Configuration from Dictionary

import json
from gepa.optimize_anything import GEPAConfig

# Load from JSON file
with open("config.json", "r") as f:
    config_dict = json.load(f)

config = GEPAConfig.from_dict(config_dict)

# Save to JSON file
with open("config_output.json", "w") as f:
    json.dump(config.to_dict(), f, indent=2)
