This guide walks you through migrating a MySQL database to CockroachDB using the MOLT toolkit. Although CockroachDB speaks the PostgreSQL wire protocol, MOLT fully supports MySQL as a migration source.

Before you begin

Ensure you have the following:
  • A MySQL database (version 5.7, or 8.0 and later)
  • A CockroachDB cluster (Cloud or Self-Hosted)
  • Network connectivity between source and target databases
  • Appropriate user permissions on both databases

Installation

Install the MOLT CLI tools:
brew install cockroachdb/tap/molt

Migration workflow

Step 1: Create migration users

Create MySQL migration user

The SQL user running MOLT requires specific privileges on the source database.
-- Connect to MySQL as root
CREATE USER 'molt_user'@'%' IDENTIFIED BY 'secure_password';

-- Grant table privileges
GRANT SELECT ON source_db.* TO 'molt_user'@'%';

-- For replication, grant additional privileges
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'molt_user'@'%';

FLUSH PRIVILEGES;

Create CockroachDB user

-- Connect to CockroachDB
CREATE USER molt_user WITH PASSWORD 'secure_password';

-- Grant database privileges (ALL includes CREATE)
GRANT ALL ON DATABASE target_db TO molt_user;
Step 2: Convert the schema

Use the MOLT Schema Conversion Tool to convert your MySQL schema to CockroachDB-compatible DDL.
The Schema Conversion Tool is available in the CockroachDB Cloud Console under the Migrations page.

Key schema differences

MySQL and CockroachDB have important differences:
  • AUTO_INCREMENT → DEFAULT unique_rowid() or gen_random_uuid(); use UUID for better distribution
  • INT (32-bit) → INT4 or INT8; CockroachDB INT defaults to 64-bit
  • DATETIME → TIMESTAMPTZ; store timestamps with a time zone
  • TINYINT(1) → BOOL for boolean values
  • ENUM → STRING with a CHECK constraint, or a custom ENUM type; allowed values must be explicitly defined

Apply the schema

-- Example converted schema
CREATE TABLE employees (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    first_name STRING NOT NULL,
    last_name STRING NOT NULL,
    email STRING UNIQUE,
    hire_date TIMESTAMPTZ DEFAULT now(),
    is_active BOOL DEFAULT true
);
Drop constraints and secondary indexes before data load to improve performance. You’ll recreate them after the data load completes.
Step 3: Configure MySQL for replication (optional)

If you’re planning a minimal-downtime migration with continuous replication, configure MySQL for GTID-based replication.

Enable GTID and binary logging

Edit my.cnf or my.ini:
[mysqld]
# Enable GTID
gtid_mode = ON
enforce_gtid_consistency = ON

# Enable binary logging
log_bin = mysql-bin
binlog_format = ROW
binlog_row_metadata = FULL
# binlog_row_metadata requires MySQL 8.0.1+; on older versions use:
# binlog_row_image = FULL

# Set retention (in seconds, 7 days example)
binlog_expire_logs_seconds = 604800
Restart MySQL for changes to take effect.

Verify configuration

SHOW VARIABLES LIKE 'gtid_mode';
-- Should return: ON

SHOW VARIABLES LIKE 'binlog_format';
-- Should return: ROW
Step 4: Load data with MOLT Fetch

Set connection strings

export SOURCE="mysql://molt_user:password@tcp(source-host:3306)/source_db"
export TARGET="postgresql://molt_user:password@target-host:26257/target_db?sslmode=require"
MySQL connection strings use the mysql:// scheme and wrap the host and port in tcp().
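If a password contains characters such as @ or :, it must be percent-encoded before being embedded in the connection string, or the URL will not parse. A minimal sketch in Python; the helper names, hosts, and credentials here are illustrative placeholders, not part of MOLT:

```python
from urllib.parse import quote

def mysql_dsn(user: str, password: str, host: str, port: int, db: str) -> str:
    # Percent-encode the password so characters like '@' or ':' don't
    # break the URL structure of the DSN.
    return f"mysql://{user}:{quote(password, safe='')}@tcp({host}:{port})/{db}"

def crdb_dsn(user: str, password: str, host: str, port: int, db: str) -> str:
    return f"postgresql://{user}:{quote(password, safe='')}@{host}:{port}/{db}?sslmode=require"

print(mysql_dsn("molt_user", "p@ss:word", "source-host", 3306, "source_db"))
```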

For bulk load (with downtime)

Perform a one-time bulk load:
molt fetch \
  --source $SOURCE \
  --target $TARGET \
  --table-filter 'employees|payments|orders' \
  --bucket-path 's3://migration-bucket/data' \
  --table-handling truncate-if-exists \
  --ignore-replication-check
MySQL doesn’t use schema namespaces like PostgreSQL. Tables are migrated to CockroachDB’s default public schema.

For minimal downtime migration

Load data and capture GTID checkpoint:
molt fetch \
  --source $SOURCE \
  --target $TARGET \
  --table-filter 'employees|payments|orders' \
  --bucket-path 's3://migration-bucket/data' \
  --table-handling truncate-if-exists
The output will include the GTID checkpoint:
cdc_cursor: 4c658ae6-e8ad-11ef-8449-0242ac140006:1-29
Save this GTID for starting replication.
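The cdc_cursor value follows MySQL's GTID set notation, server_uuid:start-end. If you store checkpoints programmatically, a minimal parser might look like the sketch below; it is a simplification that assumes a single UUID and a single interval (real GTID sets can contain several of each, separated by commas):

```python
def parse_gtid_set(gtid_set: str):
    """Parse 'uuid:start-end' (or 'uuid:txn') into (uuid, start, end)."""
    server_uuid, _, interval = gtid_set.partition(":")
    start, _, end = interval.partition("-")
    return server_uuid, int(start), int(end or start)

uuid, start, end = parse_gtid_set("4c658ae6-e8ad-11ef-8449-0242ac140006:1-29")
print(uuid, start, end)
```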
Step 5: Verify data consistency

Use MOLT Verify to confirm data consistency:
molt verify \
  --source $SOURCE \
  --target $TARGET \
  --table-filter 'employees|payments|orders'
MOLT Verify checks:
  • Table structure and column definitions
  • Row counts
  • Row-level data consistency
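Conceptually, the row-count portion of this check reduces to comparing per-table counts from each side. MOLT Verify does this for you; the sketch below only illustrates the idea, with made-up table names and counts:

```python
def compare_row_counts(source: dict, target: dict) -> list:
    """Return (table, source_count, target_count) tuples where counts differ."""
    mismatches = []
    for table in sorted(set(source) | set(target)):
        s, t = source.get(table, 0), target.get(table, 0)
        if s != t:
            mismatches.append((table, s, t))
    return mismatches

print(compare_row_counts(
    {"employees": 1000, "payments": 500, "orders": 2000},
    {"employees": 1000, "payments": 499, "orders": 2000},
))
```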
Step 6: Start continuous replication (optional)

For minimal downtime migrations, start MOLT Replicator to replicate ongoing changes.

Create table filter userscript (if needed)

If you filtered tables during the initial load, create a userscript to filter tables on replication:
table_filter.ts
import { Userscript } from "@cockroachdb/replicator";

export const script: Userscript = {
  filterTable: (table) => {
    const allowedTables = ['employees', 'payments', 'orders'];
    return allowedTables.includes(table.name);
  }
};

Start replication

Use the GTID from the fetch checkpoint:
replicator mylogical \
  --sourceConn $SOURCE \
  --targetConn $TARGET \
  --targetSchema defaultdb.public \
  --defaultGTIDSet '4c658ae6-e8ad-11ef-8449-0242ac140006:1-29' \
  --stagingSchema defaultdb._replicator \
  --stagingCreateSchema \
  --metricsAddr :30005 \
  --userscript table_filter.ts \
  -v

Monitor replication

Access replication metrics at:
http://localhost:30005/_/varz
Key metrics to monitor:
  • replication_lag_seconds: Time lag between source and target
  • mutations_applied: Number of changes applied
  • binlog_position: Current position in MySQL binary log
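If you want to alert on lag from a script, you can fetch the metrics page and pull out a single gauge. The sketch below assumes the endpoint serves Prometheus-style name/value lines (the parsing helper is hypothetical, not part of Replicator):

```python
def read_metric(metrics_text: str, name: str):
    """Return the value of the first metric line whose name matches `name`."""
    for line in metrics_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment/HELP lines
        parts = line.split()
        # Strip any {label="..."} suffix before comparing the metric name.
        if len(parts) >= 2 and parts[0].split("{")[0] == name:
            return float(parts[1])
    return None

sample = "# HELP replication_lag_seconds lag\nreplication_lag_seconds 2.5\nmutations_applied 12873\n"
print(read_metric(sample, "replication_lag_seconds"))
```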
Step 7: Recreate constraints and indexes

After data load completes, recreate the constraints and indexes you dropped:
-- Add foreign key constraints
ALTER TABLE orders
  ADD CONSTRAINT fk_employee
  FOREIGN KEY (employee_id)
  REFERENCES employees(id);

-- Create secondary indexes
CREATE INDEX idx_employees_email
  ON employees(email);

CREATE INDEX idx_orders_date
  ON orders(order_date);
Step 8: Cutover to CockroachDB

Pre-cutover checklist

  • Verify data consistency with MOLT Verify
  • Confirm replication lag is minimal (if using replication)
  • Test application queries on CockroachDB
  • Update application code for any MySQL-specific syntax
  • Prepare rollback plan

Cutover procedure

  1. Stop application writes to the source database
  2. Wait for replication to drain (if using continuous replication)
  3. Run final verification with MOLT Verify
  4. Update connection strings in your application to point to CockroachDB
  5. Start application traffic on CockroachDB
Keep the replication stream running initially so you can failback to MySQL if needed.
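The drain step of the procedure above amounts to polling replication lag until it falls below an acceptable threshold, with a timeout as a safety valve. A sketch under those assumptions (the lag source is injected so the logic stands alone; in practice it would read the Replicator metrics endpoint):

```python
import time

def wait_for_drain(get_lag_seconds, threshold=1.0, poll_interval=0.0, timeout=60.0):
    """Poll replication lag until it drops to `threshold` or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_lag_seconds() <= threshold:
            return True  # safe to proceed with cutover
        time.sleep(poll_interval)
    return False  # lag never drained; investigate before cutting over

# Simulated lag readings draining toward zero:
readings = iter([5.0, 2.0, 0.4])
print(wait_for_drain(lambda: next(readings), threshold=1.0))
```

In a real cutover you would pass a poll_interval of a few seconds and abort the cutover if the function returns False.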

Application compatibility

Connection drivers

CockroachDB uses PostgreSQL drivers, not MySQL drivers. Update your application:
// Before (MySQL)
const mysql = require('mysql2');
const connection = mysql.createConnection({
  host: 'mysql-host',
  user: 'user',
  password: 'password',
  database: 'mydb'
});

// After (CockroachDB)
const { Pool } = require('pg');
const pool = new Pool({
  host: 'cockroach-host',
  port: 26257,
  user: 'user',
  password: 'password',
  database: 'mydb',
  ssl: { rejectUnauthorized: false } // for testing only; verify the server certificate in production
});

SQL syntax differences

Update your SQL queries for CockroachDB compatibility:
  • LIMIT with OFFSET: identical syntax in both
    • MySQL: LIMIT 10 OFFSET 20
    • CockroachDB: LIMIT 10 OFFSET 20
  • String concatenation: Use || operator
    • MySQL: CONCAT(first_name, ' ', last_name)
    • CockroachDB: first_name || ' ' || last_name (or use CONCAT)
  • Date functions:
    • MySQL: NOW(), CURDATE(), DATE_ADD()
    • CockroachDB: now(), current_date(), date + INTERVAL
  • Case sensitivity: CockroachDB folds unquoted identifiers to lowercase (PostgreSQL behavior) and treats quoted identifiers as case-sensitive; MySQL table-name case sensitivity depends on the underlying filesystem
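One way to find application queries that still use MySQL-only functions is a simple scan over your SQL. A rough sketch; the function list is illustrative and far from exhaustive, and the helper is hypothetical:

```python
import re

# MySQL-specific functions with different CockroachDB equivalents
# (e.g. CURDATE() -> current_date, GROUP_CONCAT() -> string_agg()).
MYSQL_ONLY = ["CURDATE", "DATE_ADD", "DATE_SUB", "DATE_FORMAT", "GROUP_CONCAT"]

def flag_mysql_functions(query: str) -> list:
    """Return the MySQL-specific function names used in `query`."""
    found = []
    for fn in MYSQL_ONLY:
        if re.search(rf"\b{fn}\s*\(", query, re.IGNORECASE):
            found.append(fn)
    return found

print(flag_mysql_functions("SELECT GROUP_CONCAT(name), CURDATE() FROM employees"))
```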

Troubleshooting

Connection issues

Ensure your MySQL connection string uses the correct format:
mysql://user:password@tcp(host:3306)/database
For SSL connections:
mysql://user:password@tcp(host:3306)/database?tls=true
If GTID is not enabled, you’ll see an error during replication setup:
-- Check GTID status
SHOW VARIABLES LIKE 'gtid_mode';

-- If OFF, enable it online by stepping through the intermediate modes
-- (or set it in my.cnf and restart, as shown earlier)
SET GLOBAL enforce_gtid_consistency = ON;
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
SET GLOBAL gtid_mode = ON_PERMISSIVE;
SET GLOBAL gtid_mode = ON;

Performance issues

  • Increase concurrency: Use --table-concurrency and --export-concurrency flags
  • Adjust batch sizes: Use --row-batch-size to control how many rows are exported per batch
  • Monitor binlog retention: Ensure binlog retention is sufficient for migration duration
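The retention bullet above is simple arithmetic: the binlog must outlive your estimated migration window, with headroom for retries. A sketch of the calculation (the 2x safety factor is an assumption, not a MOLT requirement):

```python
def retention_seconds(migration_days: float, safety_factor: float = 2.0) -> int:
    """Binlog retention covering the migration window with headroom."""
    return int(migration_days * safety_factor * 24 * 3600)

# A 3.5-day estimated migration with 2x headroom:
print(retention_seconds(3.5))  # 604800, the 7-day value used in the config earlier
```

Set the result as binlog_expire_logs_seconds in my.cnf.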

Data type incompatibilities

MySQL INT is 32-bit, but CockroachDB INT defaults to 64-bit:
-- Use INT4 for 32-bit integers
CREATE TABLE example (
    id INT4 PRIMARY KEY,
    count INT8  -- explicitly 64-bit
);
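If you keep INT4 columns on the target, it is worth confirming during validation that no migrated value exceeds the 32-bit range (for example, data that lived in a MySQL BIGINT). A quick range-check sketch; the helper is illustrative:

```python
# Signed 32-bit bounds, matching CockroachDB INT4 and MySQL INT.
INT4_MIN, INT4_MAX = -2**31, 2**31 - 1

def fits_int4(value: int) -> bool:
    """True if `value` fits in a 32-bit signed integer (INT4)."""
    return INT4_MIN <= value <= INT4_MAX

print(fits_int4(2_147_483_647), fits_int4(2_147_483_648))
```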
MySQL ENUM types must be converted:
-- MySQL
CREATE TABLE users (
    status ENUM('active', 'inactive', 'pending')
);

-- CockroachDB option 1: Use CHECK constraint
CREATE TABLE users (
    status STRING CHECK (status IN ('active', 'inactive', 'pending'))
);

-- CockroachDB option 2: Create ENUM type
CREATE TYPE user_status AS ENUM ('active', 'inactive', 'pending');
CREATE TABLE users (
    status user_status
);

Next steps

  • Schema Design: Optimize your schema for CockroachDB
  • Migration Strategy: Plan your migration approach
