
Understanding generators in Infrahub

Generators are plugins that query infrastructure data and automatically create or update objects based on the results. They transform abstract service definitions into concrete infrastructure components, enabling service-oriented workflows where high-level intent creates detailed implementations automatically.

Why generators matter

Infrastructure teams often think in terms of services—“deploy a web application” or “create a BGP peering”—but implementation requires creating dozens of objects: devices, interfaces, IP addresses, routing configs. Manually creating these objects is error-prone and tedious. Generators automate this translation from service intent to infrastructure implementation:
  • Service abstraction: Define high-level services (web application, database cluster) without specifying every implementation detail
  • Automated object creation: Generators create required infrastructure objects based on service definitions and policies
  • Consistency: Generated objects follow organizational standards encoded in generator logic
  • Lifecycle management: Generators update or remove objects when service definitions change
  • Pattern reuse: Encode infrastructure patterns once, apply them to multiple services

Core concepts

Generator definition

A generator definition (CoreGeneratorDefinition) specifies four components.
Targets group: Objects that trigger generator execution. When a target object is created or modified, the generator runs:
generators:
  - name: web_service_generator
    targets: web_service_group  # Group of WebService objects
GraphQL query: Retrieves data needed to generate objects. The query collects information about target objects and related context:
query GetWebService($service_id: String!) {
  WebService(ids: [$service_id]) {
    edges {
      node {
        name { value }
        instance_count { value }
        location { 
          node { name { value } }
        }
      }
    }
  }
}
Generator class: Python code that creates/updates/deletes objects based on query results:
from infrahub_sdk.generator import InfrahubGenerator

class WebServiceGenerator(InfrahubGenerator):
    async def generate(self, data: dict) -> None:
        service = data["WebService"]["edges"][0]["node"]
        # Create load balancer, instances, networks, etc.
Repository: Git repository containing the generator class and query definition. The generator definition connects these components:
generators:
  - name: web_service_generator
    query: web_service_query
    file_path: generators/web_service.py
    class_name: WebServiceGenerator
    targets: web_service_group

Generator instance

Each time a generator runs for a specific target object, Infrahub creates a generator instance (CoreGeneratorInstance). The instance:
  • Tracks which objects the generator created
  • Manages lifecycle of generated objects
  • Enables cleanup when target is deleted
  • Records execution status and errors
Instances are branch-local—they exist only in the branch where they execute and are not merged. This allows generators to run in feature branches without affecting main.
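Instances can be inspected like any other node, for example through GraphQL. A sketch of such a query, assuming the default schema exposes name and status attributes on CoreGeneratorInstance (the exact attribute names are assumptions):

```graphql
query GetGeneratorInstances {
  CoreGeneratorInstance {
    edges {
      node {
        name { value }
        status { value }
      }
    }
  }
}
```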

Object tracking

Generators use the SDK’s tracking feature to manage object lifecycle. When a generator creates objects, the SDK tracks them:
class WebServiceGenerator(InfrahubGenerator):
    async def generate(self, data: dict) -> None:
        service = data["WebService"]["edges"][0]["node"]
        
        # Create load balancer (automatically tracked)
        lb = await self.client.create(
            kind="InfraLoadBalancer",
            name=f"{service['name']['value']}-lb"
        )
        await lb.save()
        
        # Create instances
        for i in range(service["instance_count"]["value"]):
            instance = await self.client.create(
                kind="InfraServer",
                name=f"{service['name']['value']}-{i}",
                load_balancer=lb
            )
            await instance.save()
When the service is deleted or the generator re-runs, the SDK automatically removes tracked objects that are no longer needed.
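The cleanup behavior can be illustrated with a toy reconcile step (plain Python, not the SDK's actual tracking code): compare the previously tracked set against what the current run produced, and delete the leftovers.

```python
# Toy model of SDK object tracking: objects tracked by an earlier run but
# not recreated by the current run are stale and should be deleted.

def reconcile(tracked, produced):
    stale = tracked - produced        # tracked before, not recreated now
    return produced, sorted(stale)    # new tracked set, objects to delete

tracked = {"web-0", "web-1", "web-2"}
# Re-run with instance_count reduced from 3 to 2:
tracked, to_delete = reconcile(tracked, {"web-0", "web-1"})
print(to_delete)  # ['web-2']
```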

Execution contexts

Generators run in several contexts.
Development: Use infrahubctl generator to test locally:
infrahubctl generator run web_service_generator --target-id <service_uuid>
Manual UI execution: From the Generator Definition detail page, click Run to trigger on demand.
Proposed Changes: When you create a Proposed Change affecting generator targets, generators run automatically as CI checks. Review results in the Checks and Data tabs.
Event Actions: Configure events and actions to trigger generators automatically based on data changes. This enables fully automated workflows.

Architecture and implementation

High-level design

Generators follow this flow:
  1. Target identification: Determine which objects (group members) need generator execution
  2. Query execution: Run the GraphQL query for each target, collecting required data
  3. Generation: Execute generator class logic, creating/updating/deleting objects
  4. Tracking: Record which objects were created for lifecycle management
  5. Result reporting: Report success/failure and created objects
Generator overview diagram
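The five steps above can be sketched as a plain-Python orchestration loop (illustrative only, not Infrahub's actual internals):

```python
# Illustrative sketch of the generator pipeline: resolve targets, query
# data, generate objects, track them, and report results.

def run_generator(targets, query, generate, tracker):
    results = {}
    for target in targets:                                   # 1. target identification
        data = query(target)                                 # 2. query execution
        created = generate(data)                             # 3. generation
        tracker.setdefault(target, set()).update(created)    # 4. tracking
        results[target] = {"status": "success",              # 5. result reporting
                           "created": sorted(created)}
    return results

# Toy usage: one target, a query returning a count, a generator creating names.
tracker = {}
report = run_generator(
    targets=["svc-1"],
    query=lambda t: {"instance_count": 2},
    generate=lambda d: {f"server-{i}" for i in range(d["instance_count"])},
    tracker=tracker,
)
print(report["svc-1"]["created"])  # ['server-0', 'server-1']
```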

Groups as targets

Generator targets are defined as groups (CoreGeneratorGroup). Groups can contain any schema objects—services, devices, contracts, or custom types. When a generator runs:
  1. Query the target group for current members
  2. For each member, execute the generator
  3. Track results per member
This design enables:
  • Dynamic targeting: Add objects to the group and they automatically get generator execution
  • Flexible grouping: Use any group criteria (location, type, tags) to determine targets
  • Bulk operations: Run generators for all group members
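Dynamic targeting can be illustrated with a toy model (plain Python, not the SDK): group membership alone determines which objects the next run covers.

```python
# Toy model of group-based targeting: adding a member to the group is all
# it takes for the next generator run to include it.

class TargetGroup:
    def __init__(self, members=None):
        self.members = set(members or [])

    def add(self, obj_id):
        self.members.add(obj_id)

def run_for_group(group):
    # One generator execution per current member, tracked per member.
    return {member: f"generated-for-{member}" for member in sorted(group.members)}

group = TargetGroup({"web-svc-1"})
first = run_for_group(group)
group.add("web-svc-2")           # dynamic targeting: no generator change needed
second = run_for_group(group)
print(sorted(second))            # ['web-svc-1', 'web-svc-2']
```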

Query groups

Generators also create GraphQL query groups (CoreGraphQLQueryGroup). These groups contain objects identified by the generator’s query—objects whose changes affect generator output. During Proposed Change pipeline runs, Infrahub:
  1. Determines which objects changed in the branch
  2. Finds query groups containing those objects
  3. Identifies generators using those queries
  4. Executes affected generators
This ensures generators run when their input data changes, maintaining consistency.
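The impact calculation above can be sketched with plain data structures (the shapes are hypothetical, not Infrahub's internal model):

```python
# Sketch of the Proposed Change impact calculation: changed objects ->
# query groups containing them -> generators using those queries.

def affected_generators(changed_ids, query_groups, generators_by_query):
    affected = set()
    for query_name, member_ids in query_groups.items():   # 2. find query groups
        if member_ids & changed_ids:                      # containing changed objects
            affected.update(generators_by_query.get(query_name, ()))  # 3. + 4.
    return affected

query_groups = {"web_service_query": {"svc-1", "loc-ams"}}
generators_by_query = {"web_service_query": {"web_service_generator"}}
# Changing a location the query touched re-triggers the generator:
print(affected_generators({"loc-ams"}, query_groups, generators_by_query))
```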

Branch awareness

Generators are branch-aware objects. This means:
  • Different generators per branch: Feature branches can have modified generator logic
  • Branch-local instances: Generator instances exist only in their branch
  • Branch-specific objects: Generated objects follow normal branch semantics—they’re local to the branch until merged
Testing a new generator in a branch doesn’t affect production. Only when you merge the branch do the generator and its results reach main.

Implementation examples

Service catalog pattern

A common pattern is a service catalog where abstract service objects generate concrete infrastructure:
from infrahub_sdk.generator import InfrahubGenerator

class DatabaseServiceGenerator(InfrahubGenerator):
    async def generate(self, data: dict) -> None:
        service = data["DatabaseService"]["edges"][0]["node"]
        
        # Extract service parameters
        name = service["name"]["value"]
        size = service["size"]["value"]  # small/medium/large
        location = service["location"]["node"]
        
        # Determine implementation based on size
        instance_map = {
            "small": {"count": 2, "memory": 16, "storage": 500},
            "medium": {"count": 3, "memory": 32, "storage": 1000},
            "large": {"count": 5, "memory": 64, "storage": 2000}
        }
        
        config = instance_map[size]
        
        # Create instances
        instances = []
        for i in range(config["count"]):
            instance = await self.client.create(
                kind="InfraServer",
                name=f"{name}-db-{i}",
                memory_gb=config["memory"],
                location=location["id"]
            )
            await instance.save()
            instances.append(instance)
        
        # Create storage volumes
        for instance in instances:
            volume = await self.client.create(
                kind="InfraStorageVolume",
                name=f"{instance.name.value}-data",
                size_gb=config["storage"],
                server=instance
            )
            await volume.save()
        
        # Create load balancer
        lb = await self.client.create(
            kind="InfraLoadBalancer",
            name=f"{name}-lb",
            backend_servers=[inst.id for inst in instances]
        )
        await lb.save()
This generator creates a complete database cluster from a single service object.

IP allocation pattern

Generators can interact with Resource Manager to allocate resources:
class InterfaceIPGenerator(InfrahubGenerator):
    async def generate(self, data: dict) -> None:
        interface = data["InfraInterface"]["edges"][0]["node"]
        
        # Get the network this interface connects to
        network = interface["network"]["node"]
        prefix = network["prefix"]["node"]
        
        # Allocate an IP from the prefix
        ip_resource = await self.client.allocate_next_ip_address(
            resource_pool=prefix["id"],
            identifier=interface["id"],
            data={
                "description": f"IP for {interface['name']['value']}"
            }
        )
        
        # Create IP address object
        ip = await self.client.create(
            kind="InfraIPAddress",
            address=ip_resource.value,
            interface=interface["id"]
        )
        await ip.save()
This pattern ensures IP addresses are allocated consistently from the correct pools.

Configuration generation pattern

Generators can create artifact definitions dynamically:
class ServiceArtifactGenerator(InfrahubGenerator):
    async def generate(self, data: dict) -> None:
        service = data["WebService"]["edges"][0]["node"]
        
        # Get all servers created for this service
        servers = await self.client.filters(
            kind="InfraServer",
            service_id__value=service["id"]
        )
        
        # Create artifact definition for each server
        for server in servers:
            artifact = await self.client.create(
                kind="CoreArtifactDefinition",
                name=f"{server.name.value}-config",
                transformation="server_config_transform",
                parameters={"server_id": server.id}
            )
            await artifact.save()
This creates artifact definitions that generate server configurations.

Use cases and workflows

Service factory

Turn Infrahub into a service factory where teams request services through simple objects:
  1. Create service schema (WebService, DatabaseService, etc.)
  2. Build generators that implement each service
  3. Teams create service objects with required parameters
  4. Generators automatically create all infrastructure
  5. Generated artifacts configure actual systems
See the blog post How to Turn Your Source of Truth into a Service Factory for detailed examples.
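The service objects in step 1 are ordinary schema nodes. A minimal sketch of what such a schema might look like, assuming Infrahub's standard schema-file format; the attribute kinds and the LocationSite peer are illustrative assumptions, not taken from the text above:

```yaml
# Hypothetical schema sketch; in practice the GraphQL kind combines
# namespace and name, so the names here are illustrative only.
nodes:
  - name: WebService
    namespace: Service
    attributes:
      - name: name
        kind: Text
        unique: true
      - name: instance_count
        kind: Number
        default_value: 2
    relationships:
      - name: location
        peer: LocationSite
        cardinality: one
```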

Contract-based workflows

Use generators to implement contracts between teams:
  1. Network team creates ContractNetwork schema
  2. Application team creates contract specifying requirements
  3. Generator creates network resources (VLANs, subnets, firewall rules)
  4. Application team uses generated resources
  5. Changes to contract trigger generator updates

Multi-cloud provisioning

Generators can create objects for multiple cloud providers:
class MultiCloudServiceGenerator(InfrahubGenerator):
    async def generate(self, data: dict) -> None:
        service = data["CloudService"]["edges"][0]["node"]
        providers = service["providers"]  # ["aws", "azure", "gcp"]
        
        for provider in providers:
            if provider == "aws":
                await self.create_aws_resources(service)
            elif provider == "azure":
                await self.create_azure_resources(service)
            elif provider == "gcp":
                await self.create_gcp_resources(service)
This enables multi-cloud strategies from unified service definitions.

Design trade-offs

Imperative vs. declarative

Generators are imperative—they contain procedural logic that creates objects. An alternative would be purely declarative templates. Trade-offs:
Imperative (current):
  • Full programming flexibility
  • Complex logic (conditionals, loops, calculations)
  • Harder to validate statically
  • May be harder to understand
Declarative:
  • Easier to validate
  • Clearer intent
  • Limited expressiveness
  • May require complex template language
Infrahub chose imperative generators because infrastructure patterns often require complex logic that’s difficult to express declaratively.

Tracking granularity

Generators track objects at the generator instance level. An alternative would be tracking at the target level (all objects created for a target). Trade-offs:
Instance-level (current):
  • Fine-grained lifecycle control
  • Clear ownership
  • More metadata overhead
Target-level:
  • Simpler model
  • Less metadata
  • Less precise cleanup
Instance-level tracking provides better lifecycle management at the cost of additional generator instance objects.

Execution timing

Generators can execute automatically (Proposed Changes, events) or manually (UI, CLI). Automatic execution provides convenience but may surprise users when objects appear unexpectedly. Manual execution provides control but requires remembering to run generators. Infrahub defaults to automatic execution in Proposed Changes but allows disabling per generator. This balances automation with control.

Known limitations

  • Target deletion: Deleting a generator target object does not automatically delete created objects (Issue 3289). Workaround: Manually delete generated objects or re-run the generator.
  • Generator errors: If a generator fails partway through, some objects may be created while others are not. Generators should be idempotent to handle re-execution.
  • Cross-branch dependencies: Generators in one branch cannot create objects in another branch. Generated objects always live in the branch where the generator executes.
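One common way to get the idempotency mentioned above is an upsert pattern: look the object up by a unique attribute before creating it. A toy sketch with an in-memory store (with the SDK you would query first, for example via client.filters, before calling create):

```python
# Idempotency sketch: look up before creating, so re-running the generator
# after a partial failure converges instead of duplicating objects.

store = {}  # name -> object (stand-in for querying by a unique attribute)

def upsert(name, **attrs):
    obj = store.get(name)
    if obj is None:
        obj = {"name": name, **attrs}
        store[name] = obj          # create only when missing
    else:
        obj.update(attrs)          # otherwise refresh attributes in place
    return obj

upsert("web-1-lb", backend_count=2)
upsert("web-1-lb", backend_count=3)   # second run: update, no duplicate
print(len(store), store["web-1-lb"]["backend_count"])  # 1 3
```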
