Destinations Overview

dlt supports loading data into a wide variety of destinations, from cloud data warehouses to local databases and file systems. Each destination has its own capabilities and configuration options.

What is a Destination?

A destination in dlt is a target location where your data will be loaded. This could be:
  • Data Warehouses: BigQuery, Snowflake, Redshift, Databricks
  • Databases: PostgreSQL, DuckDB
  • File Systems: Local files, S3, GCS, Azure Blob Storage
  • Custom Destinations: Build your own destination adapter

How to Use Destinations

Every dlt pipeline requires a destination. You specify it when creating your pipeline:
import dlt

pipeline = dlt.pipeline(
    pipeline_name="my_pipeline",
    destination="bigquery",
    dataset_name="my_dataset"
)

Configuring Destinations

Destinations can be configured in three ways:

1. In Code

Pass configuration directly when creating the pipeline:
import dlt
from dlt.destinations import bigquery

pipeline = dlt.pipeline(
    destination=bigquery(location="EU"),
    dataset_name="my_dataset"
)
2. Using Config Files

Store credentials in .dlt/secrets.toml:
[destination.bigquery]
location = "US"

[destination.bigquery.credentials]
project_id = "my-project"
private_key = "..."
client_email = "..."
3. Environment Variables

Use environment variables for sensitive data:
export DESTINATION__BIGQUERY__CREDENTIALS__PROJECT_ID="my-project"
export DESTINATION__BIGQUERY__CREDENTIALS__PRIVATE_KEY="..."
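The double underscores in these variable names mirror the nested keys in secrets.toml: each `__` descends one section level. The following is a minimal pure-Python sketch of that mapping (illustrative only, not dlt's actual config resolver):

```python
def env_to_nested(env: dict) -> dict:
    """Fold '__'-delimited variable names into a nested dict,
    mirroring how env var sections map onto secrets.toml keys."""
    config: dict = {}
    for name, value in env.items():
        parts = [p.lower() for p in name.split("__")]
        node = config
        for key in parts[:-1]:
            node = node.setdefault(key, {})
        node[parts[-1]] = value
    return config

env = {
    "DESTINATION__BIGQUERY__CREDENTIALS__PROJECT_ID": "my-project",
    "DESTINATION__BIGQUERY__LOCATION": "US",
}
print(env_to_nested(env))
# {'destination': {'bigquery': {'credentials': {'project_id': 'my-project'}, 'location': 'US'}}
```

So `DESTINATION__BIGQUERY__CREDENTIALS__PROJECT_ID` resolves to the same value as `project_id` under `[destination.bigquery.credentials]` in secrets.toml.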

Available Destinations

dlt supports the following destinations out of the box:

Cloud Data Warehouses

  • BigQuery - Google’s serverless data warehouse
  • Snowflake - Cloud-native data platform
  • Redshift - Amazon’s cloud data warehouse
  • Databricks - Lakehouse platform with Delta Lake

Databases

  • PostgreSQL - Popular open-source relational database
  • DuckDB - In-process analytical database

File Systems

  • Filesystem - Load to local or cloud file systems (S3, GCS, Azure)

Custom Destinations

In addition to the built-in destinations, you can implement your own (for example, to push data to a REST API or message queue) using dlt's custom destination decorator.

For a complete list of all available destinations, see the All Destinations page.

Destination Capabilities

Different destinations have different capabilities:
Feature              BigQuery   Snowflake   Redshift   Postgres   DuckDB
DDL Transactions     No         Yes         Yes        Yes        Yes
Merge Operations     Yes        Yes         Yes        Yes        Yes
Staging Support      Yes        Yes         Yes        Yes        Yes
Case Sensitive IDs   Yes        Yes         No         Yes        Yes
Nested Types         Limited    No          No         No         Yes

Write Dispositions

All destinations support the following write dispositions:
  • append: Add new records to existing tables
  • replace: Replace all data in the table
  • merge: Update existing records and insert new ones
import dlt

# Declare the write disposition on the resource; merge needs a primary key
@dlt.resource(write_disposition="merge", primary_key="id")
def my_resource():
    yield {"id": 1, "value": "data"}
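Conceptually, merge upserts rows by primary key, append always adds rows, and replace rewrites the table. A plain-Python sketch of the three dispositions applied to an in-memory "table" (illustrative only, not dlt internals):

```python
def load(table: list, batch: list, disposition: str, key: str = "id") -> list:
    """Apply a write disposition to an in-memory 'table' of dict rows."""
    if disposition == "replace":
        return list(batch)                # drop old rows, keep only the new batch
    if disposition == "append":
        return table + list(batch)        # keep everything, duplicates allowed
    if disposition == "merge":
        merged = {row[key]: row for row in table}
        merged.update({row[key]: row for row in batch})  # new rows win on key collision
        return list(merged.values())
    raise ValueError(f"unknown disposition: {disposition}")

table = [{"id": 1, "value": "old"}]
batch = [{"id": 1, "value": "new"}, {"id": 2, "value": "data"}]
print(load(table, batch, "merge"))
# [{'id': 1, 'value': 'new'}, {'id': 2, 'value': 'data'}]
```

With "append" the result would hold three rows (id 1 twice), and with "replace" only the two rows of the new batch would survive.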

File Formats

Destinations support different file formats for loading:
  • JSONL: JSON Lines format (widely supported)
  • Parquet: Columnar format (better compression and performance)
  • CSV: Comma-separated values (limited support)
The preferred format depends on the destination and your use case.
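To make the JSONL format concrete: it serializes one compact JSON object per line, which is why it is so widely supported for row-oriented loading. A standard-library sketch of the format itself (not dlt's actual writer):

```python
import io
import json

def write_jsonl(records: list) -> str:
    """Serialize dict records as JSON Lines: one compact JSON object per line."""
    buf = io.StringIO()
    for record in records:
        buf.write(json.dumps(record, separators=(",", ":")) + "\n")
    return buf.getvalue()

records = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
print(write_jsonl(records), end="")
# {"id":1,"name":"alice"}
# {"id":2,"name":"bob"}
```

Parquet, by contrast, is a binary columnar format: rows are decomposed into compressed per-column chunks, which typically loads faster and stores smaller for analytical destinations.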

Staging Support

Many destinations support staging data through cloud storage before loading:
import dlt

pipeline = dlt.pipeline(
    destination="snowflake",
    staging="filesystem",  # Stage files in cloud storage first
    dataset_name="my_dataset"
)
Staging can improve performance for large datasets and is required for some destinations.

Next Steps

Choose a Destination

Browse all available destinations

Configuration Guide

Learn how to configure destinations
