What is ZenML?
ZenML is an extensible, open-source MLOps framework for creating production-ready machine learning pipelines and AI workflows. Whether you’re building classical ML models, LLM applications, or agentic systems, ZenML provides the orchestration layer that lets you focus on your logic while it handles the infrastructure complexity.
For ML Engineers
Write pipelines in pure Python using familiar tools like scikit-learn, PyTorch, or TensorFlow
For LLM Developers
Orchestrate agents and LLM workflows with LangGraph, LangChain, or custom implementations
For Data Scientists
Track experiments, compare models, and deploy to production with reproducibility built-in
For Platform Teams
Deploy on Kubernetes, AWS, GCP, Azure, or any infrastructure with pluggable stack components
Why ZenML?
At its core, ZenML allows you to write workflows (pipelines) that run on any infrastructure backend (stacks). You can embed any Pythonic logic within these pipelines, like training a model or running an agentic loop. ZenML then operationalizes your application through:
Automatic Containerization
Your code is automatically containerized and versioned, ensuring reproducibility across environments without manual Docker configuration.
Comprehensive Tracking
Every pipeline run is tracked with metrics, logs, artifacts, and metadata. Compare experiments, debug failures, and audit model lineage effortlessly.
Infrastructure Abstraction
Switch between local execution, Kubernetes, cloud platforms (AWS SageMaker, GCP Vertex AI, Azure ML), or custom orchestrators without rewriting code.
Tool Integration
Integrate your existing stack: MLflow for experiment tracking, Weights & Biases for visualization, Langfuse for LLM observability, and 60+ other integrations.
How ZenML Works
ZenML uses a client-server architecture where your code (the client) communicates with a ZenML server that manages metadata, artifacts, and orchestration. ZenML tracks every step execution, including inputs, outputs, parameters, and artifacts. This creates a complete lineage graph from raw data to deployed models.
Core Concepts
Pipelines
Pipelines are directed acyclic graphs (DAGs) of steps that define your ML workflow. They’re decorated Python functions that compose multiple steps.
Steps
Steps are individual Python functions that represent discrete operations in your pipeline. They can take inputs, produce outputs, and access run context.
Artifacts
Artifacts are the data objects passed between steps. ZenML automatically serializes, stores, and versions them.
Stacks
Stacks are combinations of infrastructure components (orchestrator, artifact store, container registry, etc.) that define where and how your pipelines run.
Use Cases
Classical ML Pipelines
Train, evaluate, and deploy traditional ML models using scikit-learn, XGBoost, or LightGBM, with full experiment tracking.
Deep Learning Workflows
Build PyTorch or TensorFlow training pipelines with distributed training, hyperparameter tuning, and GPU orchestration.
LLM Applications
Orchestrate RAG pipelines, prompt engineering workflows, and LLM evaluation with Langfuse or MLflow integration.
AI Agents
Manage agentic workflows built on LangGraph, CrewAI, or custom implementations, with full observability and deployment capabilities.
Computer Vision
Build image-processing, model-training, and inference pipelines for object detection, segmentation, or classification.
NLP & Text Processing
Create text classification, named entity recognition, or document analysis pipelines with model versioning and A/B testing.
Production-Ready Features
Reproducibility
Every pipeline run captures the complete environment: code version, dependencies, data snapshots, and configuration. Reproduce any experiment or debug production issues with confidence.
Caching
Intelligent caching prevents redundant computation: if a step’s inputs and code haven’t changed, ZenML reuses its previous outputs.
Scheduling
Schedule pipelines to run on cron expressions or trigger them via webhooks.
Model Registry
Version and manage models across their lifecycle with the built-in model registry.
Who Uses ZenML?
ZenML is used by thousands of companies worldwide, from startups to Fortune 500 enterprises:
- Airbus: Building ML models for aerospace applications
- AXA: Insurance risk modeling and fraud detection
- JetBrains: ML-powered developer tools
- Rivian: Autonomous vehicle AI systems
- WiseTech Global: Supply chain optimization
Next Steps
Installation
Install ZenML and set up your first environment
Quickstart
Build and run your first pipeline in 5 minutes
Examples
Explore production-ready example projects
Documentation
Dive deep into ZenML’s capabilities
Community & Support
- Slack: Join 4,000+ ML engineers at zenml.io/slack
- GitHub: Star and contribute at github.com/zenml-io/zenml
- Docs: Complete documentation at docs.zenml.io
- Blog: Best practices and case studies at zenml.io/blog
