CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. Each transaction guarantees ACID semantics spanning arbitrary tables and rows, even when data is distributed across multiple nodes.

Overview

Transactions in CockroachDB provide:
  • Atomicity: All statements succeed together or fail together
  • Consistency: Database constraints are maintained
  • Isolation: Concurrent transactions don’t interfere with each other
  • Durability: Committed data is permanently stored
By default, CockroachDB guarantees SERIALIZABLE isolation, the highest isolation level, ensuring that concurrent transactions appear to execute in a serial order.

Transaction Syntax

Transactions are initiated with BEGIN and completed with COMMIT or ROLLBACK.
BEGIN;

UPDATE products SET inventory = inventory - 1 WHERE sku = '8675309';
INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new');

COMMIT;

Transaction Statements

BEGIN: Initiate a transaction with optional priority, access mode, timestamp, or isolation level.
BEGIN;
BEGIN PRIORITY HIGH;
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION READ ONLY;

COMMIT: Commit the current transaction, making all changes permanent.
COMMIT;

ROLLBACK: Abort the current transaction and discard all of its changes.
ROLLBACK;

SAVEPOINT: Create a savepoint for a nested transaction.
SAVEPOINT sp1;

RELEASE SAVEPOINT: Commit a nested transaction.
RELEASE SAVEPOINT sp1;

ROLLBACK TO SAVEPOINT: Roll back to a savepoint, discarding changes made after it.
ROLLBACK TO SAVEPOINT sp1;

SET TRANSACTION: Set transaction properties after BEGIN.
SET TRANSACTION PRIORITY HIGH;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

Isolation Levels

CockroachDB supports two isolation levels:

SERIALIZABLE (Default)

SERIALIZABLE isolation provides the strongest consistency guarantees. Transactions appear to execute in a serial order, preventing concurrency anomalies such as dirty reads, lost updates, and write skew.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Your statements here
COMMIT;
Under SERIALIZABLE isolation, transactions may encounter retry errors during contention. Implement retry logic in your application.

READ COMMITTED

READ COMMITTED isolation provides weaker consistency but avoids transaction retry errors. Each statement sees a snapshot of data committed before the statement began.
BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Your statements here
COMMIT;
READ COMMITTED must be explicitly enabled via the sql.txn.read_committed_isolation.enabled cluster setting:
SET CLUSTER SETTING sql.txn.read_committed_isolation.enabled = true;

Transaction Priorities

You can set transaction priority to influence retry behavior during contention.
  • NORMAL (default): Default priority for most transactions.
  • LOW: Use for background or less critical transactions.
  • HIGH: Use for critical transactions that should be prioritized.
BEGIN PRIORITY HIGH;
UPDATE critical_table SET value = 1;
COMMIT;
Use the default NORMAL priority unless you have specific requirements. Overusing HIGH priority can reduce its effectiveness.

Transaction Retries

Transactions may need to retry due to contention with concurrent transactions.

Automatic Retries

CockroachDB automatically retries:
  • Single statements (implicit transactions)
  • Batched statements where results fit in the 16 KiB buffer
-- Automatically retried if needed
DELETE FROM customers WHERE id = 1;
You can increase the buffer size to enable more automatic retries:
SET CLUSTER SETTING sql.conn.max_read_buffer_message_size = '65536';

Client-Side Retries

For SERIALIZABLE transactions that span multiple statements, implement retry logic in your application:
Python Example
import psycopg2
from psycopg2 import errorcodes

# Assumes `conn` is an open psycopg2 connection with autocommit disabled.
def run_transaction(conn):
    with conn.cursor() as cur:
        cur.execute("SAVEPOINT cockroach_restart")
        try:
            # Your transaction logic here
            cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
            cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
            cur.execute("RELEASE SAVEPOINT cockroach_restart")
        except psycopg2.Error as e:
            if e.pgcode == errorcodes.SERIALIZATION_FAILURE:
                cur.execute("ROLLBACK TO SAVEPOINT cockroach_restart")
                raise  # Retry the transaction
            else:
                raise

while True:
    try:
        run_transaction(conn)
        conn.commit()
        break
    except psycopg2.Error as e:
        if e.pgcode == errorcodes.SERIALIZATION_FAILURE:
            continue  # Retry
        else:
            raise
READ COMMITTED transactions do not require client-side retry handling.

Error Handling

Handle these error types in your application:
  • Transaction Retry (SQLSTATE 40001): Transaction failed due to contention. Retry the entire transaction.
  • Ambiguous Error (SQLSTATE 40003): Transaction state is unknown. Verify whether it committed.
  • SQL Error (various codes): Statement failed due to a constraint violation, syntax error, etc.
ERROR: restart transaction: TransactionRetryWithProtoRefreshError
SQLSTATE: 40001
Action: Retry the entire transaction from BEGIN.
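The dispatch described above can be sketched as a small helper that maps a SQLSTATE code to a client action. The function and action names below are illustrative, not part of psycopg2 or any CockroachDB driver.

```python
# Map a SQLSTATE error code to the recommended client action.
# The codes come from the list above; the action names are illustrative.

RETRY = "retry"      # 40001: rerun the whole transaction from BEGIN
VERIFY = "verify"    # 40003: check whether the commit actually happened
FAIL = "fail"        # anything else: surface the error to the caller

def classify_txn_error(sqlstate: str) -> str:
    if sqlstate == "40001":   # serialization failure / transaction retry
        return RETRY
    if sqlstate == "40003":   # ambiguous commit
        return VERIFY
    return FAIL

print(classify_txn_error("40001"))  # retry
print(classify_txn_error("40003"))  # verify
print(classify_txn_error("23505"))  # fail (unique constraint violation)
```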

Nested Transactions

Use savepoints to create nested transactions that can be rolled back independently.
BEGIN;

INSERT INTO users (id, name) VALUES (1, 'Alice');

SAVEPOINT sp1;
INSERT INTO orders (user_id, total) VALUES (1, 100.00);

SAVEPOINT sp2;
INSERT INTO logs (message) VALUES ('Order created');

-- Rollback only the log insert
ROLLBACK TO SAVEPOINT sp2;

-- Commit the order insert
RELEASE SAVEPOINT sp1;

COMMIT;
The relationship between transaction and savepoint statements:
  Transaction Statement   Effect                       Savepoint Statement     Effect
  COMMIT                  Commit entire transaction    RELEASE SAVEPOINT       Commit nested transaction
  ROLLBACK                Discard entire transaction   ROLLBACK TO SAVEPOINT   Discard nested transaction
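One way to reason about these semantics is a minimal in-memory model of a savepoint stack (a toy sketch, not CockroachDB internals): each savepoint records how many changes had been applied, ROLLBACK TO discards everything after that mark, and RELEASE folds the nested work into the outer scope.

```python
# Toy model of savepoint semantics; illustrative only, not CockroachDB code.

class Txn:
    def __init__(self):
        self.changes = []      # statements applied so far
        self.savepoints = {}   # name -> index into self.changes

    def execute(self, stmt):
        self.changes.append(stmt)

    def savepoint(self, name):
        self.savepoints[name] = len(self.changes)

    def rollback_to(self, name):
        # Discard changes made after the savepoint; the savepoint survives.
        self.changes = self.changes[:self.savepoints[name]]

    def release(self, name):
        # "Commit" the nested transaction: keep its changes, drop the mark.
        del self.savepoints[name]

txn = Txn()
txn.execute("INSERT users")
txn.savepoint("sp1")
txn.execute("INSERT orders")
txn.savepoint("sp2")
txn.execute("INSERT logs")
txn.rollback_to("sp2")   # discards only the log insert
txn.release("sp1")       # keeps the order insert
print(txn.changes)       # ['INSERT users', 'INSERT orders']
```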

Batched Statements

Send multiple statements as a single batch for automatic retry:
Go Example
// Send the whole transaction as one batch so it can be retried automatically.
if _, err := db.Exec(`
    BEGIN;
    DELETE FROM customers WHERE id = 1;
    DELETE FROM orders WHERE customer = 1;
    COMMIT;
`); err != nil {
    log.Fatal(err)
}
Batching tells CockroachDB that statements aren’t conditional on each other’s results, enabling automatic retries.

Transaction Limits

You can limit the number of rows read or written in a transaction to prevent large transactions from impacting cluster performance.
SET CLUSTER SETTING sql.defaults.transaction_rows_read_err = 100000;
SET CLUSTER SETTING sql.defaults.transaction_rows_written_err = 50000;
Enabling transaction_rows_read_err disables a performance optimization for mutation statements in implicit transactions.

Best Practices

  1. Keep transactions short: Long transactions increase contention and retry likelihood
  2. Use appropriate isolation: Choose READ COMMITTED if you don’t need full serializability
  3. Implement retry logic: Always handle retry errors in SERIALIZABLE transactions
  4. Use savepoints wisely: Nested transactions enable fine-grained rollback
  5. Batch when possible: Send related statements as a batch for automatic retries
  6. Monitor contention: Use the DB Console to identify and resolve transaction contention
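Points 1 and 3 above are often combined into a bounded retry loop with exponential backoff. The sketch below simulates this with a callable that raises a hypothetical RetryableError; in a real application you would catch the driver error carrying SQLSTATE 40001 instead.

```python
import time

class RetryableError(Exception):
    """Stand-in for a driver error with SQLSTATE 40001 (hypothetical)."""

def with_retries(txn_fn, max_attempts=5, base_delay=0.001):
    # Run txn_fn, retrying on RetryableError with exponential backoff.
    for attempt in range(max_attempts):
        try:
            return txn_fn()
        except RetryableError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated transaction that hits contention twice, then succeeds.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RetryableError()
    return "committed"

print(with_retries(flaky_txn))  # committed
print(attempts["n"])            # 3
```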

AS OF SYSTEM TIME

Read historical data using the AS OF SYSTEM TIME clause:
BEGIN TRANSACTION READ ONLY AS OF SYSTEM TIME '-5m';
SELECT * FROM orders;
COMMIT;
This is useful for:
  • Consistent backups
  • Time-travel queries
  • Reducing contention on read-only transactions
