Supported Data Sources
Evidence supports these data sources out of the box:

- Databases: PostgreSQL, MySQL, SQL Server, SQLite
- Warehouses: Snowflake, BigQuery, Databricks, Redshift
- Files: CSV, Parquet, DuckDB
- Query Engines: Trino, Athena
Connection Configuration
Create a Connection File
Data source connections are defined in YAML files in the sources/ directory. Each subdirectory represents a separate data source.

Create a directory and connection file: sources/my_warehouse/connection.yaml. Use environment variables (like ${EVIDENCE_PASSWORD}) to keep credentials secure.

Configure Environment Variables

Create a .env file in your project root. Important: Add .env to your .gitignore to keep credentials out of version control.

Connection Examples
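As an illustration, a connection file referencing environment variables might look like the following sketch. The source name, connector type, and option keys here are assumptions; exact keys depend on the connector you use:

```yaml
# sources/my_warehouse/connection.yaml -- illustrative sketch
name: my_warehouse
type: postgres            # assumed connector type
options:
  host: ${EVIDENCE_HOST}
  user: ${EVIDENCE_USER}
  password: ${EVIDENCE_PASSWORD}
```

The referenced variables would then live in a .env file at the project root (e.g. a line like EVIDENCE_PASSWORD=...), which stays out of version control.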
DuckDB (Local Database)
DuckDB is perfect for local development and small to medium datasets.

sources/local/connection.yaml:
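A minimal sketch of what this connection file might contain (the database filename is an assumption):

```yaml
# sources/local/connection.yaml -- sketch; filename is illustrative
name: local
type: duckdb
options:
  filename: local.duckdb
```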
CSV Files
Evidence can read CSV files as a data source.

sources/csv/connection.yaml:

Place your CSV files in sources/csv/ and query them:

sources/csv/sales_data.sql:
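A sketch of the two files; the type name, file name, and query syntax are assumptions, since Evidence's CSV source typically reads files placed directly in the source directory:

```yaml
# sources/csv/connection.yaml -- sketch
name: csv
type: csv
```

```sql
-- sources/csv/sales_data.sql -- sketch; sales_data.csv is illustrative
select * from sales_data.csv
```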
PostgreSQL
sources/postgres/connection.yaml
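The original file contents are not reproduced above; a minimal sketch, assuming the postgres connector accepts standard host/port/database credentials as options (variable names illustrative):

```yaml
# sources/postgres/connection.yaml -- sketch; option keys may vary by connector
name: postgres
type: postgres
options:
  host: ${EVIDENCE_PG_HOST}
  port: 5432
  database: analytics
  user: ${EVIDENCE_PG_USER}
  password: ${EVIDENCE_PG_PASSWORD}
```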
Snowflake
sources/snowflake/connection.yaml
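A sketch of what this file might contain, assuming the snowflake connector takes account, user, database, and warehouse options (all names illustrative):

```yaml
# sources/snowflake/connection.yaml -- sketch; option keys may vary
name: snowflake
type: snowflake
options:
  account: ${EVIDENCE_SF_ACCOUNT}
  username: ${EVIDENCE_SF_USER}
  password: ${EVIDENCE_SF_PASSWORD}
  database: ANALYTICS
  warehouse: REPORTING_WH
```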
BigQuery
sources/bigquery/connection.yaml
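A sketch assuming service-account keyfile authentication; BigQuery connectors also commonly support gcloud CLI or OAuth flows, so treat the option keys below as assumptions:

```yaml
# sources/bigquery/connection.yaml -- sketch; auth options vary
name: bigquery
type: bigquery
options:
  project_id: my-gcp-project
  keyfile: ${EVIDENCE_BQ_KEYFILE}   # path to a service-account JSON key
```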
Databricks
sources/databricks/connection.yaml
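A sketch assuming personal-access-token authentication against a SQL warehouse; the option keys are assumptions:

```yaml
# sources/databricks/connection.yaml -- sketch; option keys may vary
name: databricks
type: databricks
options:
  host: ${EVIDENCE_DBX_HOST}
  http_path: ${EVIDENCE_DBX_HTTP_PATH}
  token: ${EVIDENCE_DBX_TOKEN}
```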
Working with Multiple Sources
Querying Multiple Databases
You can query different data sources on the same page by writing queries against each source.

Joining Across Sources

Use inline SQL to join data from different sources, for example in a combined_metrics query:
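As a sketch of what such a page might contain: Evidence pages define named SQL blocks in markdown, and later blocks can reference earlier ones. The source names, query names, and the ${...} chaining syntax below are assumptions based on common Evidence usage and may differ across versions:

```sql
-- named query block "pg_orders" on a page, run against the postgres source (sketch)
select order_month, count(*) as order_count
from orders
group by 1
```

```sql
-- named query block "combined_metrics", joining the results of two earlier
-- blocks via query chaining (sketch; assumes sf_revenue is defined similarly
-- against the snowflake source)
select a.order_month, a.order_count, b.revenue
from ${pg_orders} a
join ${sf_revenue} b using (order_month)
```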
Query Organization
Source-Specific Queries
Store queries alongside their connection configuration.

Shared Queries

Place reusable queries in the queries/ directory:
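One way to lay this out (directory names are from this guide; the file names are illustrative):

```
sources/
  postgres/
    connection.yaml
    orders.sql          # source-specific query
queries/
  shared_metrics.sql    # reusable across pages
```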
Real Example: E-commerce Database
Here’s a complete example from the Evidence project:

sources/ecommerce/connection.yaml:
sources/ecommerce/orders.sql
sources/ecommerce/orders_by_month.sql
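The SQL files referenced above are not reproduced here; a plausible sketch, with table and column names assumed for illustration:

```sql
-- sources/ecommerce/orders.sql -- sketch; table/column names assumed
select order_id, customer_id, order_date, amount
from orders
```

```sql
-- sources/ecommerce/orders_by_month.sql -- sketch
select
  date_trunc('month', order_date) as month,
  count(*) as order_count,
  sum(amount) as revenue
from orders
group by 1
order by 1
```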
Best Practices
Security
- Never commit credentials - Use environment variables
- Use read-only accounts - Evidence only needs SELECT permissions
- Limit access scope - Grant access only to necessary schemas/tables
- Rotate credentials - Update passwords regularly
Performance
- Pre-aggregate data - Create summary tables in your warehouse
- Use materialized views - For complex, frequently-used queries
- Limit result sets - Use LIMIT or date filters for large tables
- Index appropriately - Ensure your database has proper indexes
Organization
- One source per directory - Keep connection files organized
- Descriptive names - Use clear names like snowflake_prod or postgres_staging
- Document connections - Add comments to YAML files explaining purpose
- Version control - Commit connection files (without credentials)
Troubleshooting
Connection Fails
- Check network access - Ensure your database allows connections from your IP
- Verify credentials - Test credentials using a database client
- Review SSL settings - Some databases require ssl: true or specific SSL modes
Queries Run Slowly
- Add filters - Limit date ranges or row counts

Environment Variables Not Working

- Check .env file location - Must be in project root
- Restart dev server - Changes to .env require a restart
- Use correct syntax - ${VARIABLE_NAME} in YAML files
Next Steps
- Build your first app - Create a complete dashboard
- Create charts - Visualize your connected data
- Use templating - Build dynamic pages from your data