Migrate from PostgreSQL

Warning:
GA releases for CockroachDB v23.1 are no longer supported. Cockroach Labs will stop providing LTS Assistance Support for v23.1 LTS releases on November 13, 2025. Prior to that date, upgrade to a more recent version to continue receiving support. For more details, refer to the Release Support Policy.

This page describes basic considerations and provides a basic example of migrating data from PostgreSQL to CockroachDB. The information on this page assumes that you have read Migration Overview, which describes the broad phases and considerations of migrating a database to CockroachDB.

The PostgreSQL migration example on this page demonstrates how to use MOLT tooling to convert the PostgreSQL schema, perform an initial load of data, and validate the data. These steps are essential when preparing for a full migration.

Tip:

If you need help migrating to CockroachDB, contact our sales team.

Syntax differences

CockroachDB supports the PostgreSQL wire protocol and is largely compatible with PostgreSQL syntax.

For syntax differences, refer to Features that differ from PostgreSQL.
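
Because CockroachDB speaks the PostgreSQL wire protocol, most existing PostgreSQL client tools and drivers can connect to it without modification. For example, a psql session can be opened against a CockroachDB cluster with a standard connection URL (the values in braces are placeholders for your cluster's connection details):

    psql "postgresql://{username}@{hostname}:{port}/defaultdb?sslmode=verify-full"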

Unsupported features

Some PostgreSQL features do not yet exist in CockroachDB; review your schema and application for dependencies on unsupported features before migrating.

Load PostgreSQL data

You can use one of several methods to migrate PostgreSQL data to CockroachDB.

The following example uses IMPORT INTO to perform the initial data load.
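
At a high level, IMPORT INTO takes a target table and one or more URLs of source files to load into it. The general form used throughout this example is sketched below, with a placeholder table name and URL:

    IMPORT INTO {table_name}
      CSV DATA (
        '{url-of-csv-file}'
      );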

Example: Migrate frenchtowns to CockroachDB

The following steps demonstrate converting a schema, performing an initial load of data, and validating data consistency during a migration.

In the context of a full migration, these steps ensure that PostgreSQL data can be properly migrated to CockroachDB and that your application queries can be tested against the cluster. For details, see the Migration Overview.

Before you begin

The example uses a modified version of the PostgreSQL french-towns-communes-francais data set and demonstrates how to migrate the schema and data to a Serverless cluster. To follow along with these steps:

  1. Download the frenchtowns data set:

       curl -O https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/frenchtowns.sql
    
  2. Create a frenchtowns database on your PostgreSQL instance:

       createdb frenchtowns
    
  3. Load the frenchtowns data into PostgreSQL, specifying the path of the downloaded file (an optional row-count check after these steps confirms that the load succeeded):

       psql frenchtowns -a -f frenchtowns.sql
    
  4. Create a free Cloud account, which is used to access the Schema Conversion Tool and create the Serverless cluster.
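
Optionally, confirm that the frenchtowns data loaded correctly by checking row counts on the PostgreSQL side. With the unmodified data set, you should see 26 regions, 100 departments, and 36684 towns, matching the row counts reported by the imports later in this example:

    psql frenchtowns -c "SELECT count(*) FROM regions;"
    psql frenchtowns -c "SELECT count(*) FROM departments;"
    psql frenchtowns -c "SELECT count(*) FROM towns;"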

Tip:

If you are migrating to a CockroachDB Self-Hosted database, you can export the converted schema from the Schema Conversion Tool and execute the statements in cockroach sql, or use a third-party schema migration tool such as Alembic, Flyway, or Liquibase.
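
For example, applying an exported schema file with cockroach sql might look like the following sketch, where the connection string and the file name frenchtowns_schema_converted.sql are placeholders for your own values:

    cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full" -f frenchtowns_schema_converted.sql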

Step 1. Convert the PostgreSQL schema

Use the Schema Conversion Tool to convert the frenchtowns schema for compatibility with CockroachDB. The schema has three tables: regions, departments, and towns.

  1. Dump the PostgreSQL frenchtowns schema with the following pg_dump command:

       pg_dump --schema-only frenchtowns > frenchtowns_schema.sql
    
  2. Open the Schema Conversion Tool in the Cloud Console and add a new PostgreSQL schema.

    After conversion is complete, review the results. The Summary Report shows that there are errors under Required Fixes. You must resolve these in order to migrate the schema to CockroachDB.

    Tip:
    You can also add your PostgreSQL database credentials to have the Schema Conversion Tool obtain the schema directly from the PostgreSQL database.

  3. Missing user: postgres errors indicate that the SQL user postgres is missing from CockroachDB. Click Add User to create the user.

  4. Miscellaneous Errors includes a SELECT pg_catalog.set_config('search_path', '', false) statement that can safely be removed. Click Delete to remove the statement from the schema.

  5. Review the CREATE SEQUENCE statements listed under Suggestions. Cockroach Labs does not recommend using a sequence to define a primary key column. For more information, see Unique ID best practices.

    For this example, Acknowledge the suggestion without making further changes. In practice, after conducting the full migration to CockroachDB, you would modify your CockroachDB schema to use unique and non-sequential primary keys (a sketch of such a change appears after these steps).

  6. Click Retry Migration. The Summary Report now shows that there are no errors. This means that the schema is ready to migrate to CockroachDB.

    This example migrates directly to a Serverless cluster. If you are migrating to a CockroachDB Self-Hosted database, you can export the converted schema from the Schema Conversion Tool and execute the statements in cockroach sql, or use a third-party schema migration tool such as Alembic, Flyway, or Liquibase.

  7. Click Migrate Schema to create a new Serverless cluster with the converted schema. Name the database frenchtowns.

    You can view this database on the Databases page of the Cloud Console.
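
Step 5 noted that, in practice, you would eventually replace sequence-backed primary keys with unique, non-sequential keys. The following is a minimal, illustrative sketch of such a change; it is not part of this example's migration, and the uuid_id column name is hypothetical:

    -- Illustrative only: add a non-sequential UUID column and promote it to the primary key.
    ALTER TABLE towns ADD COLUMN uuid_id UUID NOT NULL DEFAULT gen_random_uuid();
    ALTER TABLE towns ALTER PRIMARY KEY USING COLUMNS (uuid_id);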

Step 2. Load the PostgreSQL data

Load the frenchtowns data into CockroachDB using IMPORT INTO with CSV-formatted data. IMPORT INTO requires that you export one file per table with the following attributes:

  • Files must be in valid CSV (comma-separated values) or TSV (tab-separated values) format.
  • The delimiter must be a single character. Use the delimiter option to set a character other than a comma (such as a tab, for TSV format); a sketch of this appears after the note below.
  • Files must be UTF-8 encoded.
  • If one of the following characters appears in a field, the field must be enclosed by double quotes:
    • Delimiter (, by default).
    • Double quote ("). Because the field will be enclosed by double quotes, escape a double quote inside a field by preceding it with another double quote. For example: "aaa","b""bb","ccc".
    • Newline (\n).
    • Carriage return (\r).
  • If a column is of type BYTES, it can either be a valid UTF-8 string or a hex-encoded byte literal beginning with \x. For example, a field whose value should be the bytes 1, 2 would be written as \x0102.
Note:

By default, IMPORT INTO invalidates all foreign key constraints on the target table.
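
For example, if you exported tab-separated files instead of CSV (not done in this example), the corresponding import would pass the delimiter option. In this sketch, the URL is a placeholder:

    IMPORT INTO regions
      CSV DATA (
        '{url-of-tsv-file}'
      )
      WITH delimiter = e'\t';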

  1. Dump each table in the PostgreSQL frenchtowns database to a CSV-formatted file:

       psql frenchtowns -c "COPY regions TO stdout DELIMITER ',' CSV;" > regions.csv
    

       psql frenchtowns -c "COPY departments TO stdout DELIMITER ',' CSV;" > departments.csv
    

       psql frenchtowns -c "COPY towns TO stdout DELIMITER ',' CSV;" > towns.csv
    
  2. Host the files where the CockroachDB cluster can access them.

    Each node in the CockroachDB cluster needs to have access to the files being imported. There are several ways for the cluster to access the data; for more information on the types of storage IMPORT INTO can pull from, refer to the IMPORT INTO documentation.

    Cloud storage such as Amazon S3 or Google Cloud Storage is highly recommended for hosting the data files you want to import.

    The dump files generated in the preceding step are already hosted on a public S3 bucket created for this example.

  3. Open a SQL shell to the CockroachDB frenchtowns cluster. To find the command, open the Connect dialog in the Cloud Console and select the frenchtowns database and CockroachDB Client option. It will look like:

       cockroach sql --url "postgresql://{username}@{hostname}:{port}/frenchtowns?sslmode=verify-full" 
    
  4. Use IMPORT INTO to import each PostgreSQL dump file into the corresponding table in the frenchtowns database.

    The following commands point to a public S3 bucket where the frenchtowns data dump files are hosted for this example.

    Tip:
    You can add the row_limit option to specify the number of rows to import. For example, row_limit = '10' will import the first 10 rows of the table. This option is useful for finding errors quickly before executing a more time- and resource-consuming import.

       IMPORT INTO regions
         CSV DATA (
           'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/regions.csv'
         );
    
               job_id       |  status   | fraction_completed | rows | index_entries | bytes
       ---------------------+-----------+--------------------+------+---------------+--------
         893753132185026561 | succeeded |                  1 |   26 |            52 |  2338
    

       IMPORT INTO departments
         CSV DATA (
           'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/departments.csv'
         );
    
               job_id       |  status   | fraction_completed | rows | index_entries | bytes
       ---------------------+-----------+--------------------+------+---------------+--------
         893753147892465665 | succeeded |                  1 |  100 |           300 | 11166
    

       IMPORT INTO towns
         CSV DATA (
           'https://cockroachdb-migration-examples.s3.us-east-1.amazonaws.com/postgresql/frenchtowns/towns.csv'
         );
    
               job_id       |  status   | fraction_completed | rows  | index_entries |  bytes
       ---------------------+-----------+--------------------+-------+---------------+----------
         893753162225680385 | succeeded |                  1 | 36684 |         36684 | 2485007
    
  5. Recall that IMPORT INTO invalidates all foreign key constraints on the target table. View the constraints that are defined on departments and towns:

       SHOW CONSTRAINTS FROM departments;
    
         table_name  |     constraint_name     | constraint_type |                         details                         | validated
       --------------+-------------------------+-----------------+---------------------------------------------------------+------------
         departments | departments_capital_key | UNIQUE          | UNIQUE (capital ASC)                                    |     t
         departments | departments_code_key    | UNIQUE          | UNIQUE (code ASC)                                       |     t
         departments | departments_name_key    | UNIQUE          | UNIQUE (name ASC)                                       |     t
         departments | departments_pkey        | PRIMARY KEY     | PRIMARY KEY (id ASC)                                    |     t
         departments | departments_region_fkey | FOREIGN KEY     | FOREIGN KEY (region) REFERENCES regions(code) NOT VALID |     f
    

       SHOW CONSTRAINTS FROM towns;
    
         table_name |      constraint_name      | constraint_type |                             details                             | validated
       -------------+---------------------------+-----------------+-----------------------------------------------------------------+------------
         towns      | towns_code_department_key | UNIQUE          | UNIQUE (code ASC, department ASC)                               |     t
         towns      | towns_department_fkey     | FOREIGN KEY     | FOREIGN KEY (department) REFERENCES departments(code) NOT VALID |     f
         towns      | towns_pkey                | PRIMARY KEY     | PRIMARY KEY (id ASC)                                            |     t
    
  6. To validate the foreign keys, issue an ALTER TABLE ... VALIDATE CONSTRAINT statement for each table:

       ALTER TABLE departments VALIDATE CONSTRAINT departments_region_fkey;
    

       ALTER TABLE towns VALIDATE CONSTRAINT towns_department_fkey;
    
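
After both statements succeed, you can re-run SHOW CONSTRAINTS to confirm the result; the two foreign keys should now show validated as t, indicating that the imported data satisfies them:

    SHOW CONSTRAINTS FROM departments;
    SHOW CONSTRAINTS FROM towns;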

Step 3. Validate the migrated data

Use MOLT Verify to check that the data on PostgreSQL and CockroachDB are consistent.

  1. Install MOLT Verify.

  2. In the directory where you installed MOLT Verify, use the following command to compare the two databases, specifying the PostgreSQL connection string with --source and the CockroachDB connection string with --target:

    Tip:
    To find the CockroachDB connection string, open the Connect dialog in the Cloud Console and select the frenchtowns database and the General connection string option.

       ./molt verify --source 'postgresql://{username}:{password}@{host}:{port}/frenchtowns' --target 'postgresql://{username}:{password}@{host}:{port}/frenchtowns?sslmode=verify-full'
    

    You will see the initial output:

       <nil> INF verification in progress
    

    The following output indicates that MOLT Verify has completed verification:

       <nil> INF finished row verification on public.regions (shard 1/1): truth rows seen: 26, success: 26, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
       <nil> INF finished row verification on public.departments (shard 1/1): truth rows seen: 100, success: 100, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
       <nil> INF progress on public.towns (shard 1/1): truth rows seen: 10000, success: 10000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
       <nil> INF progress on public.towns (shard 1/1): truth rows seen: 20000, success: 20000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
       <nil> INF progress on public.towns (shard 1/1): truth rows seen: 30000, success: 30000, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
       <nil> INF finished row verification on public.towns (shard 1/1): truth rows seen: 36684, success: 36684, missing: 0, mismatch: 0, extraneous: 0, live_retry: 0
       <nil> INF verification complete
    

With the schema migrated and the initial data load verified, the next steps in a real-world migration are to ensure that you have made any necessary application changes, validate application queries, and perform a dry run before conducting the full migration.

To learn more, see the Migration Overview.
