Orchestrate your data pipelines with confidence
Dagster is a cloud-native orchestrator for building, testing, and monitoring data assets throughout the entire development lifecycle.
Quick start
Get up and running with Dagster in minutes
Define your first asset
Create a Python file with a simple data asset. Assets are the core building blocks of Dagster: each asset represents a data object that you want to build and maintain.
my_data_pipeline.py
Create a Definitions object
Define your code location by wrapping your assets in a Definitions object:
my_data_pipeline.py
Explore by topic
Learn about Dagster’s core capabilities
Assets
Define data assets with Python functions and dependencies
Jobs & Ops
Build task-based workflows and graphs
Resources
Configure external services and shared dependencies
Schedules
Automate pipeline execution on a schedule
Partitions
Process data in time windows or logical partitions
Testing
Test your data pipelines with confidence
Integrations
Connect with your existing data stack
dbt
Orchestrate dbt models as Dagster assets
Airflow
Migrate from Airflow or run both side-by-side
AWS
Deploy on AWS with S3, ECS, and more
Databricks
Run Spark jobs on Databricks clusters
Snowflake
Manage Snowflake tables and queries
View all
Explore 100+ integrations
Ready to build your data platform?
Start building production-ready data pipelines with integrated lineage, observability, and testing.
Get started with Dagster