
Production-Ready MLOps Framework

Build, deploy, and scale ML pipelines and AI agents from development to production with ZenML’s powerful orchestration framework.

What is ZenML?

ZenML is an extensible, open-source MLOps framework for creating production-ready ML pipelines and orchestrating AI agents. It enables ML engineers and data scientists to:
  • Orchestrate pipelines with simple @pipeline and @step decorators
  • Abstract infrastructure through composable stacks that run anywhere
  • Track everything - artifacts, metadata, models, and experiments automatically
  • Deploy as services - turn batch pipelines into HTTP endpoints with one command
  • Scale to production - from local development to Kubernetes and cloud platforms

Install in Seconds

Get ZenML running locally with pip install

Quick Start Guide

Build your first pipeline in 5 minutes

Core Concepts

Learn about pipelines, steps, and stacks

69+ Integrations

Connect to AWS, GCP, Azure, MLflow, and more

Key Features

Pipeline Orchestration

Define ML workflows as Python functions with @pipeline and @step decorators. ZenML handles execution, containerization, and tracking automatically.

Stack-Based Architecture

Abstract infrastructure with composable stacks. Switch between local, Kubernetes, or cloud orchestrators without changing code.
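As a sketch of that workflow (component and stack names below are arbitrary placeholders), a local stack can be assembled and activated from the CLI:

```shell
# Register individual components (names are arbitrary)
zenml orchestrator register local_orch --flavor=local
zenml artifact-store register local_store --flavor=local

# Compose them into a stack and make it the active one
zenml stack register dev_stack -o local_orch -a local_store
zenml stack set dev_stack
```

Swapping `dev_stack` for a stack backed by a Kubernetes orchestrator changes where pipelines run, with no edits to pipeline code.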

AI Agent Support

Orchestrate AI agents and LLM workflows with built-in support for LangGraph, LangChain, CrewAI, and other frameworks.

Artifact Tracking

Automatic versioning and lineage tracking for all pipeline artifacts, models, and datasets.

Model Registry

Centralized model management with versioning, metadata, and deployment tracking.

HTTP Deployments

Deploy pipelines as persistent HTTP services for real-time inference and predictions.

Quick Example

Here’s a complete ZenML pipeline in just a few lines:
from zenml import pipeline, step

@step
def load_data() -> dict:
    """Load training data."""
    return {"features": [[1.0, 2.0], [3.0, 4.0]], "labels": [0, 1]}  # sample values

@step
def train_model(data: dict) -> str:
    """Train a model on the data."""
    # Your training logic here
    return "model_v1"

@pipeline
def training_pipeline():
    """Complete training pipeline."""
    data = load_data()
    model = train_model(data)

# Run the pipeline
if __name__ == "__main__":
    training_pipeline()
ZenML automatically:
  • Tracks all artifacts and metadata
  • Containerizes your code for reproducibility
  • Enables remote execution on any infrastructure
  • Logs metrics and visualizations to your dashboard

Production-Ready Architecture

ZenML uses a client-server architecture designed for production deployments:
  • Local Development: Run everything locally with pip install "zenml[server]"
  • Production Setup: Deploy the ZenML server separately, clients connect via zenml login
  • Web Dashboard: Built-in UI for monitoring pipelines, artifacts, and models
  • REST API: Full programmatic access to all ZenML functionality
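Switching a client between these modes is a single command. A sketch (the server URL below is a placeholder):

```shell
# Connect this client to a remote ZenML server (placeholder URL)
zenml login https://zenml.example.com

# Or launch and connect to a purely local server instead
zenml login --local
```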

Deployment Guide

Learn how to deploy ZenML in production

Kubernetes Setup

Deploy with Helm charts on Kubernetes

Trusted by Leading Companies

ZenML is used by thousands of companies worldwide, including:
  • Airbus - Aircraft manufacturer
  • AXA - Insurance and financial services
  • JetBrains - Software development tools
  • Rivian - Electric vehicle manufacturer
  • WiseTech Global - Logistics software

Get Started

1. Install ZenML

pip install "zenml[server]"
zenml init
2. Start the Server

zenml login
This launches a local ZenML server and opens the dashboard.
3. Run Your First Pipeline

Check out the quickstart guide to create and run your first pipeline in minutes.

Community and Support

Documentation

Complete guides and API reference

GitHub

5,200+ stars on GitHub

Slack Community

Join thousands of ML practitioners

License

ZenML is distributed under the Apache License 2.0. See LICENSE for details.
