Overview

The SQL schema connector (sql-schema-connector) provides a unified implementation for all SQL databases, with database-specific “flavours” for PostgreSQL, MySQL, SQLite, SQL Server (MSSQL), and CockroachDB.
Location: schema-engine/connectors/sql-schema-connector/

Architecture

SqlSchemaConnector

The top-level SQL connector wraps database-specific flavour implementations:
pub struct SqlSchemaConnector {
    inner: Box<dyn SqlConnector + Send + Sync + 'static>,
    host: Arc<dyn ConnectorHost>,
}
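The wrapper delegates to the boxed trait object, so callers never depend on a concrete database implementation. A minimal, self-contained sketch of that pattern (toy types, not the real Prisma definitions):

```rust
// Toy version of the wrapper pattern: the outer connector owns a boxed
// trait object and forwards calls to it.
trait SqlConnector: Send + Sync {
    fn provider_name(&self) -> &'static str;
}

struct PostgresConnector;

impl SqlConnector for PostgresConnector {
    fn provider_name(&self) -> &'static str {
        "postgresql"
    }
}

struct SqlSchemaConnector {
    inner: Box<dyn SqlConnector + Send + Sync + 'static>,
}

impl SqlSchemaConnector {
    fn new(inner: Box<dyn SqlConnector + Send + Sync>) -> Self {
        Self { inner }
    }

    // Delegation: the wrapper exposes one API regardless of the database.
    fn provider_name(&self) -> &'static str {
        self.inner.provider_name()
    }
}

fn main() {
    let connector = SqlSchemaConnector::new(Box::new(PostgresConnector));
    assert_eq!(connector.provider_name(), "postgresql");
}
```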

SQL Dialect

Each database has a dialect implementing database-specific behavior:
pub trait SqlDialect: Send + Sync + 'static {
    fn renderer(&self) -> Box<dyn SqlRenderer>;
    fn schema_differ(&self) -> Box<dyn SqlSchemaDifferFlavour>;
    fn schema_calculator(&self) -> Box<dyn SqlSchemaCalculatorFlavour>;
    fn destructive_change_checker(&self) -> Box<dyn DestructiveChangeCheckerFlavour>;
    fn datamodel_connector(&self) -> &'static dyn Connector;
    fn migrations_table(&self) -> Table<'static>;
    // ...
}

SQL Connector Trait

Internal trait for SQL-specific operations:
pub trait SqlConnector: Send + Sync + Debug {
    fn dialect(&self) -> Box<dyn SqlDialect>;
    fn describe_schema(&mut self, namespaces: Option<Namespaces>) -> BoxFuture<'_, ConnectorResult<SqlSchema>>;
    fn introspect(&mut self, namespaces: Option<Namespaces>, ctx: &IntrospectionContext) -> BoxFuture<'_, ConnectorResult<SqlSchema>>;
    fn create_database(&mut self) -> BoxFuture<'_, ConnectorResult<String>>;
    fn drop_database(&mut self) -> BoxFuture<'_, ConnectorResult<()>>;
    fn reset(&mut self, namespaces: Option<Namespaces>) -> BoxFuture<'_, ConnectorResult<()>>;
    // ...
}

Supported Databases

PostgreSQL Connector

Provider: postgresql
Supported versions: 9.6+, recommended 12+
Features:
  • Full ACID transactions with all isolation levels
  • Native enums
  • JSON/JSONB with filtering support
  • Array types (scalar lists)
  • Full-text search
  • Multi-schema support
  • Extensions (PostGIS, pg_trgm, etc.)
  • Lateral joins
  • Partial indexes
  • Row-level security (RLS) aware
Capabilities:
CAPABILITIES = AdvancedJsonNullability | AnyId | AutoIncrement |
    AutoIncrementAllowedOnNonId | AutoIncrementMultipleAllowed |
    CompoundIds | CreateMany | CreateSkipDuplicates | Enums |
    NativeFullTextSearch | InsensitiveFilters | Json |
    JsonFiltering | ScalarLists | MultiSchema | NamedForeignKeys |
    NamedPrimaryKeys | UpdateableId | DecimalType |
    OrderByNullsFirstLast | NativeUpsert | InsertReturning |
    UpdateReturning | DeleteReturning | LateralJoin | PartialIndex
Connection String Format:
postgresql://USER:PASSWORD@HOST:PORT/DATABASE?schema=SCHEMA
postgresql://USER:PASSWORD@HOST:PORT/DATABASE?schema=SCHEMA&connection_limit=10
postgresql://USER:PASSWORD@HOST/DATABASE?host=/var/run/postgresql/
Parameters:
  • schema: Default schema (default: public)
  • connection_limit: Max connections (default: unlimited)
  • connect_timeout: Connection timeout in seconds
  • pool_timeout: Pool acquisition timeout
  • sslmode: SSL mode (prefer, disable, require)
  • sslcert, sslidentity, sslpassword: SSL certificate options
  • host: Unix socket path (alternative to hostname)
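As a worked example of the formats above, a hypothetical helper (not part of the connector's API) that assembles a connection URL from these parameters:

```rust
// Hypothetical helper: build a PostgreSQL connection URL from its parts.
// Query parameters are appended in order with `?` then `&` separators.
fn postgres_url(
    user: &str,
    password: &str,
    host: &str,
    port: u16,
    database: &str,
    params: &[(&str, &str)],
) -> String {
    let mut url = format!("postgresql://{user}:{password}@{host}:{port}/{database}");
    for (i, (key, value)) in params.iter().enumerate() {
        url.push(if i == 0 { '?' } else { '&' });
        url.push_str(&format!("{key}={value}"));
    }
    url
}

fn main() {
    let url = postgres_url(
        "prisma",
        "secret",
        "localhost",
        5432,
        "app",
        &[("schema", "public"), ("connection_limit", "10")],
    );
    assert_eq!(
        url,
        "postgresql://prisma:secret@localhost:5432/app?schema=public&connection_limit=10"
    );
}
```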
Native Types:
  • SmallInt, Integer, BigInt
  • Decimal(p,s), Money
  • Real, DoublePrecision
  • VarChar(n), Char(n), Text
  • ByteA
  • Timestamp(p), Timestamptz(p), Date, Time(p), Timetz(p)
  • Boolean
  • Uuid
  • Json, JsonB
  • Xml, Inet, Citext, Bit(n), VarBit(n)
Initialization:
#[cfg(feature = "postgresql-native")]
SqlSchemaConnector::new_postgres(params)?;

// Or auto-detect PostgreSQL vs CockroachDB
SqlSchemaConnector::new_postgres_like(params)?;

Migration Workflow

1. Schema Diffing

let migration = dialect.diff(from_schema, to_schema, &filter);
The differ generates migration steps:
  • CreateTable, DropTable, AlterTable
  • CreateIndex, DropIndex
  • CreateEnum, DropEnum, AlterEnum
  • AddColumn, DropColumn, AlterColumn
  • AddForeignKey, DropForeignKey
  • Database-specific steps (e.g., CreateExtension for PostgreSQL)
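The idea can be illustrated with a toy differ that compares two lists of table names and emits steps. The real differ walks full SqlSchema graphs (columns, indexes, foreign keys); this sketch only covers tables:

```rust
// Toy schema differ: anything in `to` but not `from` is created,
// anything in `from` but not `to` is dropped.
#[derive(Debug, PartialEq)]
enum MigrationStep {
    CreateTable(String),
    DropTable(String),
}

fn diff(from: &[&str], to: &[&str]) -> Vec<MigrationStep> {
    let mut steps = Vec::new();
    for table in to {
        if !from.contains(table) {
            steps.push(MigrationStep::CreateTable(table.to_string()));
        }
    }
    for table in from {
        if !to.contains(table) {
            steps.push(MigrationStep::DropTable(table.to_string()));
        }
    }
    steps
}

fn main() {
    let steps = diff(&["User"], &["User", "Post"]);
    assert_eq!(steps, vec![MigrationStep::CreateTable("Post".into())]);
}
```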

2. Script Rendering

let script = dialect.render_script(&migration, &diagnostics)?;
Renderers generate database-specific SQL:
  • PostgreSQL: DDL with IF EXISTS, CASCADE support
  • MySQL: DDL with backtick identifiers
  • SQLite: DDL with table recreations (no ALTER COLUMN)
  • SQL Server: DDL with GO batch separators
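One concrete consequence of per-flavour rendering is identifier quoting, which differs across databases. A simplified sketch (toy types, not the real SqlRenderer trait):

```rust
// Toy renderer: each flavour quotes identifiers with its own delimiters,
// so the renderer must be chosen per dialect.
enum Flavour {
    Postgres,
    Mysql,
    Mssql,
}

fn quote_identifier(flavour: &Flavour, name: &str) -> String {
    match flavour {
        Flavour::Postgres => format!("\"{name}\""),
        Flavour::Mysql => format!("`{name}`"),
        Flavour::Mssql => format!("[{name}]"),
    }
}

fn render_drop_table(flavour: &Flavour, table: &str) -> String {
    format!("DROP TABLE {}", quote_identifier(flavour, table))
}

fn main() {
    assert_eq!(render_drop_table(&Flavour::Mysql, "User"), "DROP TABLE `User`");
    assert_eq!(render_drop_table(&Flavour::Postgres, "User"), "DROP TABLE \"User\"");
}
```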

3. Destructive Change Checking

let diagnostics = connector.destructive_change_checker()
    .check(&migration)
    .await?;
Warns about:
  • Data loss (dropping tables, columns)
  • Type changes that may truncate data
  • Constraint changes that may fail on existing data
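A toy version of that check scans the migration steps and collects warnings for data-losing operations. The real checker also inspects row counts and column types; this sketch only pattern-matches on the step kind:

```rust
// Toy destructive-change checker: dropping a table or column always
// warns; creating a table never does.
enum Step<'a> {
    CreateTable(&'a str),
    DropTable(&'a str),
    DropColumn { table: &'a str, column: &'a str },
}

fn check(steps: &[Step]) -> Vec<String> {
    steps
        .iter()
        .filter_map(|step| match step {
            Step::DropTable(t) => Some(format!("Dropping table `{t}` will lose its data")),
            Step::DropColumn { table, column } => Some(format!(
                "Dropping column `{column}` on `{table}` will lose its data"
            )),
            Step::CreateTable(_) => None,
        })
        .collect()
}

fn main() {
    let warnings = check(&[Step::CreateTable("Post"), Step::DropTable("User")]);
    assert_eq!(warnings, vec!["Dropping table `User` will lose its data".to_string()]);
}
```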

4. Migration Application

let steps_applied = connector.apply_migration(&migration).await?;
Steps:
  1. Acquire advisory lock (if supported)
  2. Start transaction (if supported for DDL)
  3. Execute migration steps
  4. Record in _prisma_migrations table
  5. Commit and release lock
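The sequence above can be sketched with plain strings standing in for real database calls; the SQL shown is illustrative, not the exact statements the connector runs:

```rust
// Sketch of the application order: lock, transaction, steps, record, commit.
fn apply_migration(steps: &[&str]) -> Vec<String> {
    let mut executed = Vec::new();
    executed.push("SELECT pg_advisory_lock(...)".to_string()); // 1. acquire advisory lock
    executed.push("BEGIN".to_string()); // 2. start transaction
    for step in steps {
        executed.push((*step).to_string()); // 3. execute migration steps
    }
    executed.push("INSERT INTO _prisma_migrations ...".to_string()); // 4. record the migration
    executed.push("COMMIT".to_string()); // 5. commit, then release the lock
    executed
}

fn main() {
    let log = apply_migration(&["CREATE TABLE \"Post\" (id SERIAL PRIMARY KEY)"]);
    assert_eq!(log.first().map(String::as_str), Some("SELECT pg_advisory_lock(...)"));
    assert_eq!(log.last().map(String::as_str), Some("COMMIT"));
}
```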

Introspection

Introspection scans the database and generates a Prisma schema:
let result = connector.introspect(&ctx, extension_types).await?;
Process:
  1. Describe schema: Query system tables/catalogs
  2. Build internal schema: Create SqlSchema representation
  3. Calculate datamodel: Generate PSL from schema
  4. Add warnings: Flag unsupported features
Generated elements:
  • Models from tables
  • Fields from columns with native types
  • Relations from foreign keys
  • Enums from database enums
  • Indexes and unique constraints
  • Default values
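A toy version of the "calculate datamodel" step turns a described column into a PSL field line. Real introspection also handles native types, relations, defaults, and warnings; this sketch only maps a few scalar types:

```rust
// Toy column-to-PSL mapping: SQL type -> Prisma scalar type, with `?`
// appended for nullable columns.
struct Column<'a> {
    name: &'a str,
    sql_type: &'a str,
    nullable: bool,
}

fn psl_field(col: &Column) -> String {
    let base = match col.sql_type {
        "integer" => "Int",
        "text" => "String",
        "boolean" => "Boolean",
        other => return format!("// unsupported: {} {other}", col.name),
    };
    let optional = if col.nullable { "?" } else { "" };
    format!("{} {base}{optional}", col.name)
}

fn main() {
    let col = Column { name: "email", sql_type: "text", nullable: true };
    assert_eq!(psl_field(&col), "email String?");
}
```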

Shadow Database

For migration validation, SQL connectors can use a shadow database:
let schema = dialect.schema_from_migrations_with_target(
    migrations,
    namespaces,
    filter,
    ExternalShadowDatabase::ConnectionString {
        connection_string: shadow_url,
        preview_features,
    },
).await?;
The shadow database:
  • Is a temporary database for applying migrations
  • Validates migration scripts work correctly
  • Is automatically created and dropped
  • Can be user-provided or auto-generated

Multi-Schema Support

PostgreSQL and SQL Server support multiple schemas/namespaces:
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  schemas  = ["public", "auth", "billing"]
}
Connector handles:
  • Schema-qualified table names
  • Cross-schema relations
  • Search path configuration
  • Migration targeting specific schemas
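Schema-qualified naming, the first point above, can be shown in miniature: with multiple schemas in play, every table reference carries its namespace (simplified, PostgreSQL-style quoting):

```rust
// Toy schema-qualified name: "<schema>"."<table>".
fn qualified_name(schema: &str, table: &str) -> String {
    format!("\"{schema}\".\"{table}\"")
}

fn main() {
    assert_eq!(qualified_name("auth", "User"), "\"auth\".\"User\"");
}
```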

Advisory Locks

PostgreSQL and MySQL use advisory locks to prevent concurrent migrations:
connector.acquire_lock().await?;
  • PostgreSQL: pg_advisory_lock()
  • MySQL: GET_LOCK()
  • Other databases: No-op or file-based locking
Disable with:
PRISMA_SCHEMA_DISABLE_ADVISORY_LOCK=1
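Since the lock statement differs by database, lock acquisition is another flavour-specific operation. A sketch of the dispatch (the lock key and lock name shown are placeholders, not Prisma's actual constants):

```rust
// Toy per-flavour lock acquisition: PostgreSQL and MySQL return a lock
// statement, SQLite is a no-op (it relies on file locking).
enum Flavour {
    Postgres,
    Mysql,
    Sqlite,
}

fn lock_statement(flavour: &Flavour) -> Option<String> {
    match flavour {
        // Placeholder lock key, not the connector's real constant.
        Flavour::Postgres => Some("SELECT pg_advisory_lock(12345)".to_string()),
        // Placeholder lock name and timeout.
        Flavour::Mysql => Some("SELECT GET_LOCK('prisma_migrate', 10)".to_string()),
        Flavour::Sqlite => None,
    }
}

fn main() {
    assert!(lock_statement(&Flavour::Postgres).unwrap().contains("pg_advisory_lock"));
    assert!(lock_statement(&Flavour::Mysql).unwrap().contains("GET_LOCK"));
    assert!(lock_statement(&Flavour::Sqlite).is_none());
}
```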

Performance Considerations

  • Connection pooling: Configure connection_limit appropriately
  • Batch operations: Use create_records instead of multiple create_record
  • Relation loading: JoinStrategy for N+1 query prevention (database version dependent)
  • Indexes: Introspection captures all indexes for query optimization

Next Steps

MongoDB Connector

Learn about the MongoDB connector implementation