Migration Overview
- PostgreSQL: Highly compatible; minimal application changes required
- MySQL: Requires SQL dialect translation and schema adjustments
- Oracle: Moderate compatibility with some feature mapping
- Other SQL Databases: Case-by-case assessment required
Migration Strategies
Offline Migration
Complete the migration during a maintenance window:

Prepare CockroachDB Cluster
Set up and configure your target CockroachDB cluster with appropriate sizing and replication.
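As a sketch of cluster preparation, the commands below start a local single-node cluster for migration testing and create a target database. The database name `appdb`, the store path, and the insecure mode are assumptions for illustration; a production cluster would use `cockroach start` with `--join` across multiple nodes and TLS certificates.

```shell
# Start a local single-node cluster for testing the migration
# (production deployments use `cockroach start` with --join and certs).
cockroach start-single-node \
  --insecure \
  --listen-addr=localhost:26257 \
  --http-addr=localhost:8080 \
  --store=/tmp/crdb-migration-test

# Create the target database for the migration
cockroach sql --insecure --host=localhost:26257 \
  --execute="CREATE DATABASE IF NOT EXISTS appdb;"
```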
Online Migration
Migrate with minimal downtime using change data capture.

Migrating from PostgreSQL
CockroachDB is highly compatible with PostgreSQL, making migration straightforward.

Schema Export
Export PostgreSQL Schema
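A minimal export sketch using the standard PostgreSQL client tools; the database name `mydb` and the `users` table are placeholders.

```shell
# Export the schema only (no data); --no-owner and --no-privileges
# drop role-specific statements that won't apply in CockroachDB
pg_dump --schema-only --no-owner --no-privileges mydb > schema.sql

# Export data separately as CSV, one table at a time
psql -d mydb -c "\copy users TO 'users.csv' WITH CSV HEADER"
```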
Schema Compatibility Review
Review and adjust PostgreSQL-specific features:

Common PostgreSQL Compatibility Issues
- SERIAL vs SEQUENCE: CockroachDB uses unique_rowid() or gen_random_uuid() instead of SERIAL
- Stored Procedures: Limited support; may need conversion to application logic
- Extensions: Review required extensions (most are unsupported)
- Triggers: Evaluate trigger logic for conversion
- Custom Types: Map custom types to CockroachDB equivalents
- Partitioning Syntax: Different partitioning implementation
Schema Conversion Example
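A conversion sketch for a hypothetical `users` table, showing the two most common adjustments: replacing `SERIAL` with a UUID default and replacing a MySQL/Postgres `ENUM` with a `CHECK` constraint.

```sql
-- PostgreSQL original:
-- CREATE TABLE users (
--   id SERIAL PRIMARY KEY,
--   email VARCHAR(255) UNIQUE NOT NULL,
--   status user_status_enum DEFAULT 'active',
--   created_at TIMESTAMP DEFAULT now()
-- );

-- CockroachDB equivalent:
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),       -- replaces SERIAL
  email STRING(255) UNIQUE NOT NULL,
  status STRING DEFAULT 'active'
    CHECK (status IN ('active', 'suspended', 'deleted')),  -- replaces ENUM
  created_at TIMESTAMP DEFAULT now()
);
```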
Data Export and Import
The IMPORT statement is much faster than INSERT for bulk data loading.

Migrating from MySQL
MySQL migrations require more extensive schema and SQL translation.

Schema Differences
MySQL to CockroachDB Type Mapping
| MySQL Type | CockroachDB Type | Notes |
|---|---|---|
| INT AUTO_INCREMENT | UUID or INT8 with unique_rowid() | Use UUID for distributed systems |
| VARCHAR(n) | STRING(n) | Direct mapping |
| DATETIME | TIMESTAMP | Similar functionality |
| ENUM | STRING with CHECK constraint | Convert to a check constraint |
| TINYINT(1) | BOOL | Direct mapping for boolean |
| BLOB | BYTES | Direct mapping |
| TEXT | STRING | Direct mapping |
Data Migration from MySQL
Export MySQL Data
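An export sketch using stock `mysqldump`; the database name `mydb` and the output directory are placeholders. Exporting data as delimited files makes the later CSV import straightforward.

```shell
# Schema only, without data
mysqldump --no-data --skip-add-drop-table mydb > schema.sql

# Data as CSV-like files: --tab writes one .txt file per table
# to the given directory (the MySQL server needs write access to it)
mysqldump --no-create-info --tab=/tmp/export \
  --fields-terminated-by=',' --fields-enclosed-by='"' mydb
```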
Using IMPORT for Bulk Data
The IMPORT statement is the fastest way to load large datasets:
Import CSV Data
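A sketch of a CSV import into an existing table; the table, columns, and S3 bucket path are placeholders, and the `{key}`/`{secret}` credentials are stand-ins for real values.

```sql
-- The target table must already exist before IMPORT INTO
IMPORT INTO users (id, email, created_at)
  CSV DATA (
    's3://my-bucket/users.csv?AWS_ACCESS_KEY_ID={key}&AWS_SECRET_ACCESS_KEY={secret}'
  )
  WITH skip = '1';  -- skip the CSV header row
```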
Import from Multiple Files
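Listing several files in one statement lets CockroachDB parallelize the load; the sharded file names below are illustrative.

```sql
IMPORT INTO users (id, email, created_at)
  CSV DATA (
    's3://my-bucket/users.00.csv',
    's3://my-bucket/users.01.csv',
    's3://my-bucket/users.02.csv'
  );
```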
Import PostgreSQL Dump
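A sketch of importing a pg_dump file directly; the file path is a placeholder. Note that `IMPORT PGDUMP` is deprecated in recent CockroachDB releases in favor of the MOLT migration tools, so check your version's documentation before relying on it.

```sql
IMPORT PGDUMP 's3://my-bucket/pgdump.sql'
  WITH ignore_unsupported_statements;
```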
Import MySQL Dump
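A sketch of importing a mysqldump file directly; the file path is a placeholder. Like `IMPORT PGDUMP`, `IMPORT MYSQLDUMP` is deprecated in recent CockroachDB releases, so verify support in your version or fall back to CSV import.

```sql
IMPORT MYSQLDUMP 's3://my-bucket/mysqldump.sql';
```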
Change Data Capture (CDC)
For online migrations, use CDC to keep databases synchronized.

Using External CDC Tools
- Debezium: Open-source CDC platform supporting PostgreSQL and MySQL
- Striim: Real-time data integration with CockroachDB support
- AWS DMS: Database Migration Service for AWS environments
- Custom Scripts: Application-level replication logic
Application Migration
Connection String Changes
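Because CockroachDB speaks the PostgreSQL wire protocol, often only the host, port, and TLS settings change. The hosts, credentials, and database name below are placeholders.

```
# Before (PostgreSQL, default port 5432)
postgresql://appuser:secret@pg-host:5432/appdb

# After (CockroachDB, default port 26257)
postgresql://appuser:secret@crdb-host:26257/appdb?sslmode=verify-full
```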
ORM Configuration Updates
Query Optimization
Some queries may need optimization for CockroachDB:

Common Query Adjustments
- LIMIT without ORDER BY: Add ORDER BY for consistent results in a distributed system
- Transactions: Consider using SELECT FOR UPDATE for row-level locking
- Auto-increment IDs: Replace with UUIDs or use unique_rowid()
- Timestamp precision: CockroachDB uses microsecond precision
- Implicit type conversions: May behave differently; make conversions explicit
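The first two adjustments above can be sketched as follows; the `users` and `accounts` tables are hypothetical.

```sql
-- Deterministic pagination: always pair LIMIT with ORDER BY
SELECT id, email FROM users ORDER BY created_at, id LIMIT 50;

-- Explicit row-level locking inside a transaction
BEGIN;
SELECT balance FROM accounts WHERE id = $1 FOR UPDATE;
-- ... compute the new balance in the application ...
UPDATE accounts SET balance = $2 WHERE id = $1;
COMMIT;
```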
Validation and Testing
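A minimal validation sketch: run the same queries against source and target and compare results. The `users` table and its columns are placeholders; extend this per table.

```sql
-- Row counts should match the source database, table by table
SELECT count(*) FROM users;

-- Spot-check aggregates against the source
SELECT min(created_at), max(created_at), count(DISTINCT email)
  FROM users;
```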
Migration Best Practices
- Start Small: Begin with non-critical databases or tables to gain experience with the migration process.
- Test Thoroughly: Perform multiple test migrations in non-production environments before the actual migration.
- Monitor Closely: Set up comprehensive monitoring before migration and watch closely during and after cutover.
- Migrate During Low Traffic: Schedule cutover during periods of low application usage to minimize impact.
Common Migration Challenges
- Sequence Gaps: Use UUIDs instead of auto-incrementing integers for distributed systems
- Foreign Key Constraints: May need to be disabled during import and re-enabled afterward
- Large Object Storage: Consider external blob storage for very large files
- Timezone Handling: Ensure consistent timezone handling (CockroachDB stores timestamps in UTC)
Monitoring Migration Progress
Check Import Job Status
View Job Details
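Import progress can be watched from SQL with `SHOW JOBS`; the job ID in the last statement is a placeholder for a real ID from your cluster.

```sql
SHOW JOBS;

-- Filter to import jobs only
SELECT job_id, status, fraction_completed
  FROM [SHOW JOBS]
 WHERE job_type = 'IMPORT';

-- Details for a single job (replace with an actual job ID)
SHOW JOB 874550899423854593;
```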
Post-Migration Tasks
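A sketch of common post-migration statements, using the hypothetical `users` table: refresh optimizer statistics and confirm that constraints and indexes survived the import.

```sql
-- Refresh table statistics so the optimizer has accurate row counts
CREATE STATISTICS users_stats FROM users;

-- Confirm constraints and indexes survived the import
SHOW CONSTRAINTS FROM users;
SHOW INDEXES FROM users;
```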
Migration Checklist
Complete Migration Checklist
- Source database schema exported and reviewed
- Schema converted to CockroachDB-compatible format
- Test migration completed successfully
- Data validation queries prepared
- Application connection strings updated
- ORM configurations updated
- Performance baselines established
- Monitoring and alerting configured
- Rollback plan documented and tested
- Stakeholders notified of migration schedule
- Production data exported
- Data imported into CockroachDB
- Data validation completed
- Indexes and constraints verified
- Application testing passed
- Performance testing passed
- Cutover completed
- Post-migration monitoring active
- Source database archived
Next Steps
- Scaling: Scale your new cluster
- Backup & Restore: Set up backup strategies
- Security: Secure your deployment