CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. Each transaction guarantees ACID semantics spanning arbitrary tables and rows, even when data is distributed. If a transaction succeeds, all mutations are applied together with virtual simultaneity. If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. By default, CockroachDB guarantees that while a transaction is pending, it is isolated from other concurrent transactions with SERIALIZABLE isolation.
For a detailed discussion of CockroachDB transaction semantics, see How CockroachDB Does Distributed Atomic Transactions and Serializable, Lockless, Distributed: Isolation in CockroachDB. Note that the transaction model described in these blog posts is slightly out of date. See the Transaction Retries section for more details.
SQL statements
The following SQL statements control transactions.
Statement | Description
---|---
BEGIN | Initiate a transaction and optionally set its priority, access mode, "as of" timestamp, or isolation level.
COMMIT | Commit a regular transaction, or clear the connection after committing a transaction using the advanced retry protocol.
RELEASE SAVEPOINT | Commit a nested transaction; also used for retryable transactions.
ROLLBACK | Abort a transaction and roll the database back to its state before the transaction began.
ROLLBACK TO SAVEPOINT | Roll back a nested transaction; also used to handle retryable transaction errors.
SAVEPOINT | Used for nested transactions; also used to implement advanced client-side transaction retries.
SET TRANSACTION | Set a transaction's priority, access mode, "as of" timestamp, or isolation level.
SHOW | Display the current transaction settings.
If you are using a framework or library that does not have advanced retry logic built in, you should implement an application-level retry loop with exponential backoff. See Client-side retry handling.
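As an illustration, here is a minimal sketch of such a retry loop with exponential backoff, written in Python. The `RetryableError` class and the `txn_body` callable are hypothetical stand-ins for whatever your driver raises on SQLSTATE 40001 and for the function that executes your transaction; this is not a specific driver's API.

```python
import random
import time

# SQLSTATE code CockroachDB uses for transaction retry errors.
RETRY_ERROR_CODE = "40001"

class RetryableError(Exception):
    """Hypothetical stand-in for a driver error carrying SQLSTATE 40001."""
    pgcode = RETRY_ERROR_CODE

def run_transaction(txn_body, max_retries=5, base_delay=0.1, sleep=time.sleep):
    """Run txn_body, retrying with exponential backoff on retry errors.

    txn_body is a callable that executes the whole transaction and either
    returns a result or raises RetryableError when the server reports 40001.
    """
    for attempt in range(max_retries):
        try:
            return txn_body()
        except RetryableError:
            # Back off exponentially, with jitter so concurrent clients
            # do not retry in lockstep against the same contended rows.
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            sleep(delay)
    raise RuntimeError("transaction failed after %d retries" % max_retries)
```

Because the server may have aborted the transaction partway through, `txn_body` must re-execute the transaction from the very beginning on each attempt, not resume mid-way.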
Syntax
In CockroachDB, a transaction is set up by surrounding SQL statements with the BEGIN and COMMIT statements.
To use advanced client-side transaction retries, you should also include the SAVEPOINT, ROLLBACK TO SAVEPOINT, and RELEASE SAVEPOINT statements.
> BEGIN;
> SAVEPOINT cockroach_restart;
<transaction statements>
> RELEASE SAVEPOINT cockroach_restart;
> COMMIT;
At any time before it's committed, you can abort the transaction by executing the ROLLBACK statement.
Clients using transactions must also include logic to handle retries.
Error handling
To handle errors in transactions, you should check for the following types of server-side errors:
Type | Description
---|---
Transaction Retry Errors | Errors with the code 40001 and string restart transaction, which indicate that a transaction failed because it could not be placed in a serializable ordering of transactions by CockroachDB. This occurs under SERIALIZABLE isolation and only rarely under READ COMMITTED isolation. For details on transaction retry errors and how to resolve them, see the Transaction Retry Error Reference.
Ambiguous Errors | Errors with the code 40003, which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. For information about how to handle ambiguous errors, see here.
SQL Errors | All other errors, which indicate that a statement in the transaction failed. For example, violating the UNIQUE constraint generates a 23505 error. After encountering these errors, you can either issue a COMMIT or ROLLBACK to abort the transaction and revert the database to its state before the transaction began. If you want to attempt the same set of statements again, you must begin a completely new transaction.
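The three error classes above can be dispatched on by SQLSTATE code. The sketch below shows one way an application might route them; the function name and return labels are illustrative, not part of any driver API.

```python
def classify_error(sqlstate):
    """Map a SQLSTATE code to the handling strategy described above."""
    if sqlstate == "40001":
        return "retry"      # retry the transaction from the beginning
    if sqlstate == "40003":
        return "ambiguous"  # commit status unknown; resolve before acting
    return "sql"            # statement failed; roll back, then start a new txn

# e.g., a UNIQUE constraint violation (23505) is an ordinary SQL error:
classify_error("23505")
```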
Transaction retries
Transactions may require retries due to contention with another concurrent or recent transaction attempting to write to the same data.
There are two cases in which transaction retries can occur:
- Automatic retries, which CockroachDB silently processes for you.
- Client-side retries, which your application must handle after receiving a transaction retry error under SERIALIZABLE isolation. Client-side retry handling is not necessary for READ COMMITTED transactions.
To reduce the need for transaction retries, see Reduce transaction contention.
Automatic retries
CockroachDB automatically retries individual statements (implicit transactions) and transactions sent from the client as a single batch, as long as the size of the results being produced for the client, including protocol overhead, is less than 16KiB by default. Once that buffer overflows, CockroachDB starts streaming results back to the client, at which point automatic retries cannot be performed any more. As long as the results of a single statement or batch of statements are known to stay clear of this limit, the client does not need to worry about transaction retries.
You can increase the occurrence of automatic retries as a way to minimize transaction retry errors:
- Send statements in transactions as a single batch. Batching allows CockroachDB to automatically retry a transaction when previous reads are invalidated at a pushed timestamp. When a multi-statement transaction is not batched, and takes more than a single round trip, CockroachDB cannot automatically retry the transaction. For an example showing how to break up large transactions in an application, see Break up large transactions into smaller units of work.
- Limit the size of the result sets of your transactions to under 16KiB, so that CockroachDB is more likely to automatically retry when previous reads are invalidated at a pushed timestamp. When a transaction returns a result set over 16KiB, even if that transaction has been sent as a single batch, CockroachDB cannot automatically retry the transaction. You can change the results buffer size for all new sessions using the sql.defaults.results_buffer.size cluster setting, or for a specific session using the results_buffer_size session variable.
Use ALTER ROLE ALL SET {sessionvar} = {val} instead of the sql.defaults.* cluster settings. This allows you to set a default value for all users for any session variable that applies during login, making the sql.defaults.* cluster settings redundant.
Individual statements
Individual statements are treated as implicit transactions, and so they fall under the rules described above. If the results are small enough, they will be automatically retried. In particular, INSERT/UPDATE/DELETE statements without a RETURNING clause are guaranteed to have minuscule result sizes.
For example, the following statement would be automatically retried by CockroachDB:
> DELETE FROM customers WHERE id = 1;
Batched statements
Transactions can be sent from the client as a single batch. Batching implies that CockroachDB receives multiple statements without being asked to return results in between them; instead, CockroachDB returns results after executing all of the statements, except when the accumulated results overflow the buffer mentioned above, in which case they are returned sooner and automatic retries can no longer be performed.
Batching is generally controlled by your driver or client's behavior. Technically, it can be achieved in two ways, both supporting automatic retries:
- When the client/driver is using the PostgreSQL Extended Query protocol, a batch is made up of all queries sent in between two Sync messages. Many drivers support such batches through explicit batching constructs.
- When the client/driver is using the PostgreSQL Simple Query protocol, a batch is made up of semicolon-separated strings sent as a unit to CockroachDB. For example, in Go, this code would send a single batch (which would be automatically retried):
db.Exec("BEGIN; DELETE FROM customers WHERE id = 1; DELETE FROM orders WHERE customer = 1; COMMIT;")
Within a batch of statements, CockroachDB infers that the statements are not conditional on the results of previous statements, so it can retry all of them. Of course, if the transaction relies on conditional logic (e.g., statement 2 is executed only for some results of statement 1), then the transaction cannot all be sent to CockroachDB as a single batch. In these common cases, CockroachDB cannot retry, say, statement 2 in isolation. Since results for statement 1 have already been delivered to the client by the time statement 2 forces the transaction to retry, the client needs to be involved in retrying the whole transaction, so you should write your transactions to use client-side retry handling.
The enable_implicit_transaction_for_batch_statements session variable defaults to true. This means that any batch of statements is treated as an implicit transaction, so the BEGIN/COMMIT commands are not needed to group all the statements in one transaction.
Bounded staleness reads
If bounded staleness reads are used with either the with_min_timestamp function or the with_max_staleness function, and the nearest_only parameter is set to true, the query will return an error if it cannot be served by a nearby replica.
Nested transactions
CockroachDB supports the nesting of transactions using savepoints. These nested transactions are also known as sub-transactions. Nested transactions can be rolled back without discarding the state of the entire surrounding transaction.
This can be useful in applications that abstract database access using an application development framework or ORM. Different components of the application can operate on different sub-transactions without having to know about each other's internal operations, while trusting that the database will maintain isolation between sub-transactions and preserve data integrity.
Just as COMMIT and ROLLBACK are used to commit and discard entire transactions, respectively, RELEASE SAVEPOINT and ROLLBACK TO SAVEPOINT are used to commit and discard nested transactions. This relationship is shown in the following table:
Statement | Effect
---|---
COMMIT | Commit an entire transaction.
ROLLBACK | Discard an entire transaction.
RELEASE SAVEPOINT | Commit (really, forget) the named nested transaction.
ROLLBACK TO SAVEPOINT | Discard the changes in the named nested transaction.
For more information, including examples showing how to use savepoints to create nested transactions, see the savepoints documentation.
Transaction priorities
Every transaction in CockroachDB is assigned an initial priority. By default, the transaction priority is NORMAL.
Set transaction priority
Cockroach Labs recommends leaving the transaction priority at the default setting in almost all cases. Changing the transaction priority to HIGH in particular can lead to difficult-to-debug interactions with other transactions executing on the system.
If you are setting a transaction priority to avoid contention or hot spots, or to get better query performance, it is usually a sign that you need to update your schema design and/or review the data access patterns of your workload.
For transactions that you are absolutely sure should be given higher or lower priority, you can set the priority in the BEGIN statement:
> BEGIN PRIORITY <LOW | NORMAL | HIGH>;
You can also set the priority immediately after a transaction is started:
> SET TRANSACTION PRIORITY <LOW | NORMAL | HIGH>;
To set the default transaction priority for all transactions in a session, use the default_transaction_priority session variable. For example:
> SET default_transaction_priority = 'low';
View transaction priority
transaction_priority is a read-only session variable.
To view the current priority of a transaction, use SHOW transaction_priority or SHOW TRANSACTION PRIORITY:
> SHOW transaction_priority;
transaction_priority
------------------------
low
> SHOW TRANSACTION PRIORITY;
transaction_priority
------------------------
low
Isolation levels
Isolation is an element of ACID transactions that determines how concurrency is controlled, and ultimately guarantees consistency. CockroachDB offers two transaction isolation levels: SERIALIZABLE and READ COMMITTED.
By default, CockroachDB executes all transactions at the strongest ANSI transaction isolation level: SERIALIZABLE, which permits no concurrency anomalies. To place all transactions in a serializable ordering, SERIALIZABLE isolation may require transaction restarts and client-side retry handling. For a demonstration of how SERIALIZABLE prevents anomalies such as write skew, see Serializable Transactions.
CockroachDB can be configured to execute transactions at READ COMMITTED instead of SERIALIZABLE isolation. If enabled, READ COMMITTED is no longer an alias for SERIALIZABLE. READ COMMITTED permits some concurrency anomalies in exchange for minimizing transaction aborts and removing the need for client-side retries. Depending on your workload requirements, this may be desirable. For more information, see Read Committed Transactions.
Comparison to ANSI SQL isolation levels
CockroachDB uses slightly different isolation levels than ANSI SQL isolation levels.
Aliases
SNAPSHOT, READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ are aliases for SERIALIZABLE.
If READ COMMITTED isolation is enabled using the sql.txn.read_committed_isolation.enabled cluster setting, READ COMMITTED is no longer an alias for SERIALIZABLE, and READ UNCOMMITTED becomes an alias for READ COMMITTED.
Comparison
The CockroachDB SERIALIZABLE isolation level is stronger than the ANSI SQL READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ levels and equivalent to the ANSI SQL SERIALIZABLE level.
The CockroachDB READ COMMITTED isolation level is stronger than the PostgreSQL READ COMMITTED isolation level, and is the strongest isolation level that does not experience serialization errors that require client-side handling.
For more information about the relationship between these levels, see A Critique of ANSI SQL Isolation Levels.
Limit the number of rows written or read in a transaction
You can limit the number of rows written or read in a transaction at the cluster or session level. This allows you to configure CockroachDB to log or reject statements that could destabilize a cluster or violate application best practices.
- When the transaction_rows_read_err session setting is enabled, transactions that read more than the specified number of rows will fail. In addition, the optimizer will not create query plans with scans that exceed the specified row limit. For example, to set a default value for all users at the cluster level: ALTER ROLE ALL SET transaction_rows_read_err = 1000;
- When the transaction_rows_written_err session setting is enabled, transactions that write more than the specified number of rows will fail. For example, to set a default value for all users at the cluster level: ALTER ROLE ALL SET transaction_rows_written_err = 1000;
To assess the impact of configuring these session settings, use the corresponding session settings transaction_rows_read_log and transaction_rows_written_log to log transactions that read or write the specified number of rows. Transactions are logged to the SQL_PERF channel.
The limits are enforced after each statement of a transaction has been fully executed. The "write" limits apply to INSERT, INSERT INTO SELECT FROM, INSERT ON CONFLICT, UPSERT, UPDATE, and DELETE SQL statements. The "read" limits apply to the SELECT statement in addition to the statements subject to the "write" limits. The limits do not apply to CREATE TABLE AS, IMPORT, TRUNCATE, DROP, ALTER TABLE, BACKUP, RESTORE, or CREATE STATISTICS statements.
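The enforcement semantics can be illustrated with a toy model (this is not CockroachDB's implementation): each statement contributes its row count to a running per-transaction total, and the limit is checked only after a statement has fully executed, so a single statement can overshoot the limit before the transaction fails.

```python
class TransactionRowLimitExceeded(Exception):
    """Models the error raised when transaction_rows_read_err is exceeded."""

def run_statements(row_counts, rows_read_err=1000):
    """Toy model: row_counts lists how many rows each statement reads.

    The cumulative total is checked after each statement completes,
    mirroring the after-statement enforcement described above.
    """
    total_read = 0
    for stmt_rows in row_counts:
        total_read += stmt_rows  # the statement runs to completion first
        if total_read > rows_read_err:
            raise TransactionRowLimitExceeded(
                f"transaction read {total_read} rows, limit is {rows_read_err}")
    return total_read
```

For example, a transaction whose statements read 400 and then 500 rows stays under a limit of 1000, while one reading 600 and then 600 rows fails only after the second statement completes.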
Enabling transaction_rows_read_err disables a performance optimization for mutation statements in implicit transactions where CockroachDB can auto-commit without additional network round trips.