
All Destinations

This page lists all destinations supported by dlt, organized by category to help you find the right one for your use case.

Cloud Data Warehouses

Fully managed, cloud-native data warehouses optimized for analytics.

BigQuery

Google’s serverless data warehouse with excellent scalability and BigQuery ML integration.

Snowflake

Cloud-native data platform with secure data sharing and zero-copy cloning.

Redshift

Amazon’s fast, fully managed cloud data warehouse optimized for large-scale queries.

Databricks

Lakehouse platform combining data lake and warehouse capabilities with Delta Lake.

Databases

Traditional relational and analytical databases.

PostgreSQL

Powerful open-source relational database with JSONB support and excellent reliability.

DuckDB

In-process analytical database perfect for local development and data analysis.

Additional SQL Databases

  • ClickHouse: Fast column-oriented database for real-time analytics
  • Microsoft SQL Server: Enterprise relational database from Microsoft
  • Azure Synapse: Cloud analytics service with SQL and Spark capabilities
  • Dremio: Data lakehouse platform with SQL query engine
  • Athena: Serverless query service for S3 data using SQL
  • SQLAlchemy: Generic destination supporting any SQLAlchemy-compatible database

File Systems

Store data as files in local or cloud storage.

Filesystem

Load data to local or cloud file systems (S3, GCS, Azure) with support for Delta Lake and Iceberg.

Supported Protocols

  • Local: file:///path/to/directory
  • AWS S3: s3://bucket-name
  • Google Cloud Storage: gs://bucket-name
  • Azure Blob Storage: az://container-name
  • Hugging Face: hf://datasets/username/dataset
  • Memory: memory://m (for testing)

Vector Databases

Specialized databases for storing and querying vector embeddings.

Weaviate

Vector database for AI-powered applications with built-in vectorization.

Qdrant

High-performance vector similarity search engine with rich filtering.

LanceDB

Embedded vector database built on Lance data format.

Lakehouse Platforms

Modern data platforms combining data lake and warehouse capabilities.
  • Databricks: See Databricks destination
  • DuckLake: DuckDB-based lakehouse format with data lake capabilities
  • MotherDuck: Serverless DuckDB in the cloud

Custom Destinations

Build your own destination adapter to load data anywhere.

Destination Comparison

By Use Case

| Use Case | Recommended Destinations |
| --- | --- |
| Large-scale Analytics | BigQuery, Snowflake, Databricks |
| Cost-Effective Warehouse | Redshift, ClickHouse |
| Local Development | DuckDB, PostgreSQL |
| Data Lake | Filesystem (S3/GCS/Azure), Databricks |
| Real-time Analytics | ClickHouse, DuckDB |
| Machine Learning | Databricks, BigQuery ML |
| Vector Search | Weaviate, Qdrant, LanceDB |

By Capability

| Feature | Destinations |
| --- | --- |
| Nested Types | BigQuery (limited), DuckDB, Filesystem |
| Merge Support | All except Filesystem (regular tables) |
| Staging Support | BigQuery, Snowflake, Redshift, Databricks |
| ACID Transactions | PostgreSQL, Snowflake, Databricks (Delta) |
| Schema Evolution | All destinations |
| Case Sensitive IDs | BigQuery, Snowflake, PostgreSQL, DuckDB |

Installation

Install dlt with specific destination dependencies:
# Cloud warehouses
pip install "dlt[bigquery]"
pip install "dlt[snowflake]"
pip install "dlt[redshift]"
pip install "dlt[databricks]"

# Databases
pip install "dlt[postgres]"
pip install "dlt[duckdb]"

# Filesystem
pip install "dlt[filesystem]"

# Multiple destinations
pip install "dlt[bigquery,postgres,duckdb]"

Quick Start Example

Here’s how to use any destination:
import dlt

@dlt.resource
def my_data():
    yield {"id": 1, "name": "Alice"}
    yield {"id": 2, "name": "Bob"}

# Choose your destination
pipeline = dlt.pipeline(
    pipeline_name="my_pipeline",
    destination="bigquery",  # or snowflake, postgres, duckdb, etc.
    dataset_name="my_dataset"
)

info = pipeline.run(my_data())
print(info)

Configuration

All destinations can be configured via:
  1. Code: Pass configuration directly
    from dlt.destinations import bigquery
    
    pipeline = dlt.pipeline(
        destination=bigquery(location="EU"),
        dataset_name="my_dataset"
    )
    
  2. Config Files: Use .dlt/secrets.toml
    [destination.bigquery]
    location = "US"
    
    [destination.bigquery.credentials]
    project_id = "my-project"
    private_key = "..."
    
  3. Environment Variables:
    export DESTINATION__BIGQUERY__LOCATION="US"
    export DESTINATION__BIGQUERY__CREDENTIALS__PROJECT_ID="my-project"
    

Choosing a Destination

Consider these factors when selecting a destination:

1. Data Volume

  • Small to Medium: PostgreSQL, DuckDB
  • Large: BigQuery, Snowflake, Databricks

2. Query Patterns

  • OLTP: PostgreSQL
  • OLAP: BigQuery, Snowflake, DuckDB, ClickHouse
  • Mixed: Databricks

3. Budget

  • Free/Low Cost: DuckDB, PostgreSQL, Filesystem
  • Pay-as-you-go: BigQuery, Snowflake
  • Reserved Capacity: Redshift, Databricks

4. Infrastructure

  • Serverless: BigQuery, DuckDB, MotherDuck
  • Managed: Snowflake, Redshift, Databricks
  • Self-hosted: PostgreSQL, ClickHouse

5. Ecosystem

  • AWS: Redshift, Athena
  • GCP: BigQuery
  • Azure: Synapse, Azure SQL
  • Multi-cloud: Snowflake, Databricks

Next Steps

Destination Guides

Detailed guides for each destination

Custom Destinations

Build your own destination adapter

Configuration

Learn about destination configuration

Performance Tips

Optimize your data loading
