Component Types
ZenML defines several stack component types, each serving a specific purpose in the ML workflow.
Required Components
Every stack must include these two components:
Orchestrator
Controls pipeline execution, scheduling, and step coordination. Determines where and how your pipeline runs.
Artifact Store
Stores all artifacts (data, models, etc.) produced by pipeline steps. Provides versioned storage.
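As a sketch, the two required components can be registered and combined with the ZenML CLI like this (the component and stack names are illustrative):

```shell
# Register a local orchestrator and a local artifact store
zenml orchestrator register local_orchestrator --flavor=local
zenml artifact-store register local_store --flavor=local

# Combine them into a stack and activate it
zenml stack register local_stack -o local_orchestrator -a local_store
zenml stack set local_stack
```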
Optional Components
These components add specialized functionality:
Container Registry
Stores Docker images for containerized step execution.
Step Operator
Runs specific steps on specialized infrastructure (GPUs, distributed compute).
Experiment Tracker
Logs metrics, parameters, and artifacts for experiment tracking.
Model Deployer
Deploys trained models as prediction services.
Model Registry
Manages model versions, metadata, and lifecycle stages.
Feature Store
Serves features for training and real-time inference.
Data Validator
Validates data quality and schema compliance.
Alerter
Sends notifications via Slack, email, or other channels.
Annotator
Manages data annotation workflows and labeling.
Image Builder
Builds Docker images for containerized execution.
Orchestrators
Orchestrators are the brains of pipeline execution: they determine how and where steps are executed and scheduled.
Available Orchestrators
Using Orchestrators
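For example, a remote orchestrator can be registered and swapped into the active stack. This sketch assumes the Kubernetes orchestrator flavor is available via the `kubernetes` integration (names are illustrative):

```shell
zenml integration install kubernetes

# Register a remote orchestrator and point the active stack at it
zenml orchestrator register k8s_orchestrator --flavor=kubernetes
zenml stack update -o k8s_orchestrator
```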
Artifact Stores
Artifact stores provide persistent storage for all pipeline artifacts.
Available Artifact Stores
Accessing Artifact Store
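A minimal sketch of reading the active artifact store's root path via the Python client (assumes an initialized ZenML repository with an active stack):

```python
from zenml.client import Client

# The active stack exposes its components as attributes
artifact_store = Client().active_stack.artifact_store

# Every artifact store exposes a root path
# (a local directory or a remote URI such as s3://...)
print(artifact_store.path)
```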
Container Registries
Container registries store Docker images for containerized step execution.
Available Container Registries
When Are Containers Used?
Containers are built when:
- Running on remote orchestrators (Kubernetes, Vertex AI, etc.)
- Using step operators
- Explicitly configured via Docker settings
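Explicit Docker configuration happens through `DockerSettings` on the pipeline. A minimal sketch, where the listed requirements and integrations are illustrative:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Build the step image with extra pip requirements
# and the dependencies of a ZenML integration
docker_settings = DockerSettings(
    requirements=["scikit-learn"],      # illustrative
    required_integrations=["sklearn"],  # illustrative
)

@pipeline(settings={"docker": docker_settings})
def training_pipeline():
    ...
```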
Step Operators
Step operators run individual steps on specialized infrastructure.
Use Cases
- GPU-intensive training steps
- Distributed computation
- Steps requiring different resources than the orchestrator
Available Step Operators
Using Step Operators
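A step is routed to a step operator by referencing it in the step decorator. A sketch assuming a step operator named `gpu_operator` has already been registered:

```python
from zenml import step

@step(step_operator="gpu_operator")  # registered step operator name (assumed)
def train_model():
    # This step runs on the specialized infrastructure,
    # while the rest of the pipeline runs on the orchestrator.
    ...
```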
Experiment Trackers
Experiment trackers log metrics, parameters, and artifacts during training.
Available Experiment Trackers
Using Experiment Trackers
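Steps opt into tracking by referencing the experiment tracker by name. A sketch assuming an MLflow tracker registered as `mlflow_tracker`:

```python
from zenml import step

@step(experiment_tracker="mlflow_tracker")  # registered tracker name (assumed)
def train_model():
    import mlflow  # provided by the mlflow integration

    # Anything logged here lands in the tracker's run for this step
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.95)
```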
Model Deployers
Model deployers expose trained models as prediction services.
Available Model Deployers
Deploying Models
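As a sketch, a model deployer is registered like any other component and added to the stack; here the MLflow deployer flavor is assumed:

```shell
zenml integration install mlflow

# Register the deployer and add it to the active stack
zenml model-deployer register mlflow_deployer --flavor=mlflow
zenml stack update -d mlflow_deployer
```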
Model Registries
Model registries manage model versions and metadata.
Available Model Registries
Using Model Registry
Feature Stores
Feature stores serve features for training and inference.
Available Feature Stores
Using Feature Stores
Data Validators
Data validators check data quality and schema.
Available Data Validators
Using Data Validators
Alerters
Alerters send notifications when pipelines succeed or fail.
Available Alerters
Using Alerters
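Alerters can be invoked directly from step code through the active stack. A sketch assuming an alerter (e.g. a Slack alerter) is part of the stack:

```python
from zenml.client import Client

alerter = Client().active_stack.alerter
if alerter:
    # post() sends a plain message to the configured channel
    alerter.post("Training pipeline finished successfully.")
```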
Component Configuration
View Component Details
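Component details can be inspected from the CLI, for example (the component name is illustrative):

```shell
# List and inspect components of a given type
zenml orchestrator list
zenml orchestrator describe my_orchestrator

# Show the composition of the active stack
zenml stack describe
```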
Programmatic Access
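The same information is available through the Python client; a minimal sketch:

```python
from zenml.client import Client

stack = Client().active_stack

# Required components are always present
print(stack.orchestrator.name)
print(stack.artifact_store.name)

# Optional components may be None if not part of the stack
print(stack.experiment_tracker)
```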
Component Flavors
Each component type has multiple flavors (implementations).
Best Practices
Start Simple
Begin with local components (local orchestrator, local artifact store) and graduate to cloud components as needed.
Match Your Workflow
Choose components that fit your actual needs. Don’t over-engineer with components you don’t use.
Test Incrementally
Add one new component at a time and validate it works before adding more.
Use Integrations
Install ZenML integrations to get pre-built components:
zenml integration install <integration>
Related Concepts
- Stacks - Learn how components combine into stacks
- Pipelines - Understand how pipelines use components
- Steps - Configure steps for specific components
Code Reference
- StackComponent base class: src/zenml/stack/stack_component.py:67
- StackComponentType enum: src/zenml/enums.py:158
- Component flavors: src/zenml/stack/flavor.py
