MOLT Fetch


MOLT Fetch moves data from a source database into CockroachDB as part of a database migration.

MOLT Fetch exports the source data to cloud storage (Google Cloud Storage, Amazon S3, or Azure Blob Storage), a local file server, or local memory, then uses IMPORT INTO or COPY FROM to load the data into a target CockroachDB database.

How it works

MOLT Fetch operates in two distinct phases to move data from the source database to CockroachDB. The data export phase moves data to intermediate storage (either cloud storage or a local file server). The data import phase moves data from that intermediate storage to the CockroachDB cluster. For details on available modes, refer to Define fetch mode.

[Figure: MOLT Fetch flow diagram]

Data export phase

In this first phase, MOLT Fetch connects to the source database and exports table data to intermediate storage.

Data import phase

MOLT Fetch loads the exported data from intermediate storage to the target CockroachDB database.

Refer to the MOLT Fetch flags to learn how to use any flag for the molt fetch command.

Run MOLT Fetch

The following section describes how to use the molt fetch command and how to set its main flags.

Specify source and target databases

Tip:

Follow the recommendations in Connection security.

--source specifies the connection string of the source database.

PostgreSQL or CockroachDB connection string:

--source 'postgresql://{username}:{password}@{host}:{port}/{database}'

MySQL connection string:

--source 'mysql://{username}:{password}@{protocol}({host}:{port})/{database}'

Oracle connection string:

--source 'oracle://{username}:{password}@{host}:{port}/{service_name}'

For Oracle Multitenant databases, --source-cdb specifies the container database (CDB) connection. --source specifies the pluggable database (PDB):

--source 'oracle://{username}:{password}@{host}:{port}/{pdb_service_name}'
--source-cdb 'oracle://{username}:{password}@{host}:{port}/{cdb_service_name}'

--target specifies the CockroachDB connection string:

--target 'postgresql://{username}:{password}@{host}:{port}/{database}'
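For example, a minimal invocation that supplies both connections along with an intermediate data path might look as follows (a sketch; the connection details and bucket path are placeholders, and the data path flags are described in Define intermediate storage):

# Placeholder connection strings and bucket path; substitute your own values.
molt fetch \
--source 'postgresql://migration_user:password@source-host:5432/source_db' \
--target 'postgresql://root@localhost:26257/defaultdb?sslmode=verify-full' \
--bucket-path 's3://bucket/path'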

Define fetch mode

--mode specifies the MOLT Fetch behavior.

data-load (default) instructs MOLT Fetch to load the source data into CockroachDB:

--mode data-load

export-only instructs MOLT Fetch to export the source data to the specified cloud storage or local file server. It does not load the data into CockroachDB:

--mode export-only

import-only instructs MOLT Fetch to load the source data from the specified cloud storage or local file server into the target CockroachDB database:

--mode import-only
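For example, you can split a migration into separate export and import runs that share the same intermediate storage (a sketch combining flags shown on this page; the bucket path is a placeholder):

# Phase 1: export the source data to intermediate storage without loading it.
molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--mode export-only

# Phase 2: later, load the previously exported files into CockroachDB.
molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--mode import-only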

Select data to migrate

By default, MOLT Fetch moves all data from the --source database to CockroachDB. Use the following flags to move a subset of data.

Schema and table selection

--schema-filter specifies which schemas to move to CockroachDB, formatted as a POSIX regex string. For example, to move every table in the source database's migration_schema schema:

--schema-filter 'migration_schema'
Note:

--schema-filter does not apply to MySQL sources because MySQL tables belong directly to the database specified in the connection string, not to a separate schema.

--table-filter and --table-exclusion-filter specify tables to include and exclude from the migration, respectively, formatted as POSIX regex strings. For example, to move every source table that has "user" in the table name and exclude every source table that has "temp" in the table name:

--table-filter '.*user.*' --table-exclusion-filter '.*temp.*'

Row-level filtering

Use --filter-path to specify the path to a JSON file that defines row-level filtering for data load. This enables you to move a subset of data in a table, rather than all data in the table. To apply row-level filters during replication, use MOLT Replicator with userscripts.

--filter-path 'data-filter.json'

The JSON file should contain one or more entries in filters, each with a resource_specifier (schema and table) and a SQL expression expr. For example, the following exports only rows from migration_schema.t1 where v > 100:

{
  "filters": [
    {
      "resource_specifier": {
        "schema": "migration_schema",
        "table": "t1"
      },
      "expr": "v > 100"
    }
  ]
}

expr is case-sensitive and must be valid in your source dialect. For example, when using Oracle as the source, quote all identifiers and escape embedded quotes:

{
  "filters": [
    {
      "resource_specifier": {
        "schema": "C##FETCHORACLEFILTERTEST",
        "table": "FILTERTBL"
      },
      "expr": "ABS(\"X\") > 10 AND CEIL(\"X\") < 100 AND FLOOR(\"X\") > 0 AND ROUND(\"X\", 2) < 100.00 AND TRUNC(\"X\", 0) > 0 AND MOD(\"X\", 2) = 0 AND FLOOR(\"X\" / 3) > 1"
    }
  ]
}
Note:

If the expression references columns that are not indexed, MOLT Fetch will emit a warning like: filter expression 'v > 100' contains column 'v' which is not indexed. This may lead to performance issues.
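For example, to apply the filter file during a data load (a sketch combining flags from this page; the bucket path is a placeholder):

# Load only the rows that satisfy the filters defined in data-filter.json.
molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--filter-path 'data-filter.json'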

Shard tables for concurrent export

During the data export phase, MOLT Fetch can divide large tables into multiple shards for concurrent export.

To control the number of shards created per table, use the --export-concurrency flag. For example:

--export-concurrency=4
Tip:

For performance considerations with concurrency settings, refer to Best practices.

Two sharding mechanisms are available:

  • Range-based sharding (default): Tables are divided based on numerical ranges found in primary key values. Only tables with INT, FLOAT, or UUID primary keys can use range-based sharding. Tables with other primary key data types export as a single shard.

  • Stats-based sharding (PostgreSQL only): Enable with --use-stats-based-sharding for PostgreSQL 11+ sources. Tables are divided by analyzing the pg_stats view to create more evenly distributed shards, up to a maximum of 200 shards. Primary keys of any data type are supported.

Stats-based sharding requires that the user has SELECT permissions on source tables and on each table's pg_stats view. The latter permission is automatically granted to users that can read the table.
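For example, to request eight statistics-based shards per table on a PostgreSQL source (a sketch; the concurrency value is illustrative):

# Shards are computed from pg_stats rather than primary key ranges.
--export-concurrency=8
--use-stats-based-sharding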

To optimize stats-based sharding, run ANALYZE on source tables before migration to ensure that table statistics are up-to-date and shards are evenly distributed. This requires MAINTAIN or OWNER permissions on the table. You can analyze specific primary key columns or the entire table. For example:

ANALYZE table_name(PK1, PK2, PK3);
ANALYZE table_name;

Large tables may take time to analyze, but ANALYZE can run in the background. You can run ANALYZE with MAINTAIN or OWNER privileges during migration preparation, then perform the actual migration with standard SELECT privileges.

Note:

Migration without running ANALYZE will still work, but shard distribution may be less even.

When using --use-stats-based-sharding, monitor the log output for each table you want to migrate.

If stats-based sharding is successful on a table, MOLT logs the following INFO message:

Stats based sharding enabled for table {table_name}

If stats-based sharding fails on a table, MOLT logs the following WARNING message and defaults to range-based sharding:

Warning: failed to shard table {table_name} using stats based sharding: {reason_for_failure}, falling back to non stats based sharding

The number of shards depends on the number of distinct values in the first primary key column of the table being migrated. If this differs from the number of shards requested with --export-concurrency, MOLT logs the following WARNING and continues with the migration:

number of shards formed: {num_shards_formed} is not equal to number of shards requested: {num_shards_requested} for table {table_name}

Because stats-based sharding analyzes the entire table, running --use-stats-based-sharding with --filter-path (refer to Row-level filtering) will cause imbalanced shards to form.

Define intermediate storage

MOLT Fetch can move the source data to CockroachDB via cloud storage, a local file server, or directly without an intermediate store.

Bucket path

Tip:

Only the path specified in --bucket-path is used. Query parameters, such as credentials, are ignored. To authenticate cloud storage, follow the steps in Secure cloud storage.

--bucket-path instructs MOLT Fetch to write intermediate files to a path within Google Cloud Storage, Amazon S3, or Azure Blob Storage to which you have the necessary permissions. Use additional flags, shown in the following examples, to specify authentication or region parameters as required for bucket access.

Connect to a Google Cloud Storage bucket with implicit authentication and an assumed role:

--bucket-path 'gs://migration/data/cockroach'
--assume-role 'user-test@cluster-ephemeral.iam.gserviceaccount.com'
--use-implicit-auth

Connect to an Amazon S3 bucket and explicitly specify the ap-south-1 region:

--bucket-path 's3://migration/data/cockroach'
--import-region 'ap-south-1'
Note:

When --import-region is set, IMPORT INTO must be used for data movement.

Connect to an Azure Blob Storage container with implicit authentication:

--bucket-path 'azure-blob://migration/data/cockroach'
--use-implicit-auth

Local path

--local-path instructs MOLT Fetch to write intermediate files to a path within a local file server. --local-path-listen-addr specifies the address of the local file server. For example:

--local-path /migration/data/cockroach
--local-path-listen-addr 'localhost:3000'

In some cases, CockroachDB will not be able to use the local address specified by --local-path-listen-addr. This will depend on where CockroachDB is deployed, the runtime OS, and the source dialect.

For example, if you are migrating to CockroachDB Cloud, such that the Cloud cluster is in a different physical location than the machine running molt fetch, then CockroachDB cannot reach an address such as localhost:3000. In these situations, use --local-path-crdb-access-addr to specify an address for the local file server that is publicly accessible. For example:

--local-path /migration/data/cockroach
--local-path-listen-addr 'localhost:3000'
--local-path-crdb-access-addr '44.55.66.77:3000'
Tip:

Cloud storage is often preferable to a local file server, which can require considerable disk space.

Direct copy

--direct-copy specifies that MOLT Fetch should use COPY FROM to move the source data directly to CockroachDB without an intermediate store:

  • Because the data is held in memory, the machine must have sufficient RAM for the data currently in flight:

    average size of each row * --row-batch-size * --export-concurrency * --table-concurrency
    
  • Direct copy does not support compression or continuation.

  • The --use-copy flag is redundant with --direct-copy.
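For example, a direct-copy run needs only the connection flags, because no intermediate storage is configured (a sketch using flags from this page):

# Move data in memory via COPY FROM; no cloud or local storage is used.
molt fetch \
--source $SOURCE \
--target $TARGET \
--direct-copy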

IMPORT INTO vs. COPY FROM

MOLT Fetch can use either IMPORT INTO or COPY FROM to load data into CockroachDB.

By default, MOLT Fetch uses IMPORT INTO, which offers higher throughput but requires the target tables to be taken offline during the import.

--use-copy configures MOLT Fetch to use COPY FROM:

  • COPY FROM enables your tables to remain online and accessible. However, it is slower than using IMPORT INTO.
  • COPY FROM does not support compression.
Note:

COPY FROM is also used for direct copy.
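For example, to keep target tables online by loading through COPY FROM from cloud storage (a sketch; the bucket path is a placeholder):

# Load with COPY FROM instead of the default IMPORT INTO.
molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--use-copy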

Handle target tables

--table-handling defines how MOLT Fetch loads data into the target CockroachDB tables that match the selection.

To load the data without changing the existing data in the tables, use none:

--table-handling none

To truncate tables before loading the data, use truncate-if-exists:

--table-handling truncate-if-exists

To drop existing tables and create new tables before loading the data, use drop-on-target-and-recreate:

--table-handling drop-on-target-and-recreate

When using the drop-on-target-and-recreate option, MOLT Fetch creates a new CockroachDB table to load the source data if one does not already exist. To guide the automatic schema creation, you can explicitly map source types to CockroachDB types. drop-on-target-and-recreate does not create indexes or constraints other than PRIMARY KEY and NOT NULL.

Mismatch handling

If either none or truncate-if-exists is set, molt fetch loads data into the existing tables on the target CockroachDB database. If the target schema does not match the source schema, molt fetch exits early in certain cases and must be re-run from the beginning. For details, refer to Fetch exits early due to mismatches.

Note:

This does not apply when drop-on-target-and-recreate is specified, since this option automatically creates a compatible CockroachDB schema.

Skip primary key matching

--skip-pk-check removes the requirement that source and target tables share matching primary keys for data load. When this flag is set:

  • The data load proceeds even if the source or target table lacks a primary key, or if their primary key columns do not match.
  • Table sharding is disabled. Each table is exported in a single batch within one shard, bypassing --export-concurrency and --row-batch-size. As a result, memory usage and execution time may increase due to full table scans.
  • If the source table contains duplicate rows but the target has PRIMARY KEY or UNIQUE constraints, duplicate rows are deduplicated during import.

When --skip-pk-check is set, all tables are treated as if they lack a primary key, and are thus exported in a single unsharded batch. To avoid performance issues, use this flag with --table-filter to target only tables without a primary key.

For example:

molt fetch \
  --mode data-load \
  --table-filter 'nopktbl' \
  --skip-pk-check

Example log output when --skip-pk-check is enabled:

{"level":"info","message":"sharding is skipped for table public.nopktbl - flag skip-pk-check is specified and thus no PK for source table is specified"}

Type mapping

If drop-on-target-and-recreate is set, MOLT Fetch automatically creates a CockroachDB schema that is compatible with the source data. The column types are determined by MOLT's default type mappings.

--type-map-file specifies the path to the JSON file containing the explicit type mappings. For example:

--type-map-file 'type-mappings.json'

The following JSON example defines two type mappings:

[
  {
    "table": "public.t1",
    "column_type_map": [
      {
        "column": "*",
        "source_type": "int",
        "crdb_type": "INT2"
      },
      {
        "column": "name",
        "source_type": "varbit",
        "crdb_type": "string"
      }
    ]
  }
]
  • table specifies the table that will use the custom type mappings in column_type_map. The value is written as {schema}.{table}.
  • column specifies the column that will use the custom type mapping. If * is specified, then all columns in the table with the matching source_type are converted.
  • source_type specifies the source type to be mapped.
  • crdb_type specifies the target CockroachDB type to be mapped.
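Because explicit type mappings only take effect when MOLT Fetch recreates the target tables, pass --type-map-file together with drop-on-target-and-recreate (a sketch combining flags from this page; the bucket path is a placeholder):

# Recreate target tables using the custom mappings in type-mappings.json.
molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--table-handling drop-on-target-and-recreate \
--type-map-file 'type-mappings.json'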

Define transformations

You can define transformation rules to be performed on the target database during the fetch task. These can be used to map computed columns, rename tables, and rename database schemas on the target.

Transformation rules are defined in the JSON file indicated by the --transformations-file flag. For example:

--transformations-file 'transformation-rules.json'
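For example, to apply the transformation rules during a data load (a sketch combining flags from this page; the bucket path is a placeholder):

# Apply the rules in transformation-rules.json while loading data.
molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--transformations-file 'transformation-rules.json'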

Transformation rules example

The following JSON example defines three transformation rules: rule 1 maps computed columns, rule 2 renames tables, and rule 3 renames schemas.

{
  "transforms": [
    {
      "id": 1,
      "resource_specifier": {
        "schema": ".*",
        "table": ".*"
      },
      "column_exclusion_opts": {
        "add_computed_def": true,
        "column": "^age$"
      }
    },
    {
      "id": 2,
      "resource_specifier": {
        "schema": "public",
        "table": "charges_part.*"
      },
      "table_rename_opts": {
        "value": "charges"
      }
    },
    {
      "id": 3,
      "resource_specifier": {
        "schema": "previous_schema"
      },
      "schema_rename_opts": {
        "value": "new_schema"
      }
    }
  ]
}

Column exclusions and computed columns

  • resource_specifier: Identifies which schemas and tables to transform.
    • schema: POSIX regex matching source schemas.
    • table: POSIX regex matching source tables.
  • column_exclusion_opts: Exclude columns or map them as computed columns.
    • column: POSIX regex matching source columns to exclude.
    • add_computed_def: When true, map matching columns as computed columns on target tables using ALTER TABLE ... ADD COLUMN and the source column definition. All matching columns must be computed columns on the source.
      Warning:
      Columns matching column are not moved to CockroachDB if add_computed_def is false (default) or if matching columns are not computed columns.

Example rule 1 maps all source age columns to computed columns on CockroachDB. This assumes that all matching age columns are defined as computed columns on the source:

{
  "id": 1,
  "resource_specifier": {
    "schema": ".*",
    "table": ".*"
  },
  "column_exclusion_opts": {
    "add_computed_def": true,
    "column": "^age$"
  }
},

Table renaming

  • resource_specifier: Identifies which schemas and tables to transform.
    • schema: POSIX regex matching source schemas.
    • table: POSIX regex matching source tables.
  • table_rename_opts: Rename tables on the target.

Example rule 2 maps all table names with prefix charges_part to a single charges table on CockroachDB (an n-to-1 mapping). This assumes that all matching charges_part.* tables have the same table definition:

{
  "id": 2,
  "resource_specifier": {
    "schema": "public",
    "table": "charges_part.*"
  },
  "table_rename_opts": {
    "value": "charges"
  }
},

Schema renaming

  • resource_specifier: Identifies which schemas and tables to transform.
    • schema: POSIX regex matching source schemas.
    • table: POSIX regex matching source tables.
  • schema_rename_opts: Rename database schemas on the target.
    • value: Target schema name. For example, previous_schema.table1 becomes new_schema.table1.

Example rule 3 renames the database schema previous_schema to new_schema on CockroachDB:

{
  "id": 3,
  "resource_specifier": {
    "schema": "previous_schema"
  },
  "schema_rename_opts": {
    "value": "new_schema"
  }
}

General notes

Each rule is applied in the order it is defined. If two rules overlap, the later rule will override the earlier rule.

To verify that the computed columns are being created:

When running molt fetch, set --logging debug and look for ALTER TABLE ... ADD COLUMN statements with the STORED or VIRTUAL keywords in the log output:

{"level":"debug","time":"2024-07-22T12:01:51-04:00","message":"running: ALTER TABLE IF EXISTS public.computed ADD COLUMN computed_col INT8 NOT NULL AS ((col1 + col2)) STORED"}

After running molt fetch, issue a SHOW CREATE TABLE statement on CockroachDB:

SHOW CREATE TABLE computed;
  table_name |                         create_statement
-------------+-------------------------------------------------------------------
  computed   | CREATE TABLE public.computed (
  ...
             |     computed_col INT8 NOT NULL AS (col1 + col2) STORED
             | )

Continue MOLT Fetch after interruption

If MOLT Fetch fails while loading data into CockroachDB from intermediate files, it exits with an error message, fetch ID, and continuation token for each table that failed to load on the target database. You can use this information to continue the task from the continuation point where it was interrupted.

Continuation is only possible under the following conditions:

  • All data has been exported from the source database into intermediate files on cloud or local storage.
  • The initial load of source data into the target CockroachDB database is incomplete.
Note:

Only one fetch ID and set of continuation tokens, each token corresponding to a table, are active at any time. See List active continuation tokens.

To retry all data starting from the continuation point, reissue the molt fetch command and include the --fetch-id.

--fetch-id d44762e5-6f70-43f8-8e15-58b4de10a007
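For example, appended to the original molt fetch invocation (a sketch; reuse the same connection and data path flags as the interrupted run):

# Resume the interrupted load using the fetch ID from the error output.
molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--fetch-id d44762e5-6f70-43f8-8e15-58b4de10a007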

To retry a specific table that failed, include both --fetch-id and --continuation-token. The latter flag specifies a token string that corresponds to a specific table on the source database. A continuation token is written in the molt fetch output for each failed table. If the fetch task encounters a subsequent error, it generates a new token for each failed table. See List active continuation tokens.

Note:

This will retry only the table that corresponds to the continuation token. If the fetch task succeeds, there may still be source data that is not yet loaded into CockroachDB.

--fetch-id d44762e5-6f70-43f8-8e15-58b4de10a007
--continuation-token 011762e5-6f70-43f8-8e15-58b4de10a007

To retry all data starting from a specific file, include both --fetch-id and --continuation-file-name. The latter flag specifies the filename of an intermediate file in cloud or local storage. All filenames are prefixed with part_ and have the .csv.gz or .csv extension, depending on compression type (gzip by default). For example:

--fetch-id d44762e5-6f70-43f8-8e15-58b4de10a007
--continuation-file-name part_00000003.csv.gz
Note:

Continuation is not possible when using direct copy.

List active continuation tokens

To view all active continuation tokens, issue a molt fetch tokens list command along with --conn-string, which specifies the connection string for the target CockroachDB database. For example:

molt fetch tokens list \
--conn-string 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full'
+--------------------------------------+--------------------------------------+------------------+----------------------+
|                  ID                  |               FETCH ID               |    TABLE NAME    |      FILE NAME       |
+--------------------------------------+--------------------------------------+------------------+----------------------+
| f6f0284c-d9c1-43c9-8fde-af609d0dbd82 | 66443597-5689-4df3-a7b9-9fc5e27180eb | public.employees | part_00000001.csv.gz |
+--------------------------------------+--------------------------------------+------------------+----------------------+
Continuation Tokens.

Enable replication

A change data capture (CDC) cursor is written to the MOLT Fetch output as cdc_cursor at the beginning and end of the fetch task.

For MySQL:

{"level":"info","type":"summary","fetch_id":"735a4fe0-c478-4de7-a342-cfa9738783dc","num_tables":1,"tables":["public.employees"],"cdc_cursor":"b7f9e0fa-2753-1e1f-5d9b-2402ac810003:3-21","net_duration_ms":4879.890041,"net_duration":"000h 00m 04s","time":"2024-03-18T12:37:02-04:00","message":"fetch complete"}

For Oracle:

{"level":"info","type":"summary","fetch_id":"735a4fe0-c478-4de7-a342-cfa9738783dc","num_tables":3,"tables":["migration_schema.employees"],"cdc_cursor":"backfillFromSCN=26685444,scn=26685786","net_duration_ms":6752.847625,"net_duration":"000h 00m 06s","time":"2024-03-18T12:37:02-04:00","message":"fetch complete"}

Use the cdc_cursor value as the checkpoint for MySQL or Oracle replication with MOLT Replicator.

You can also use the cdc_cursor value with an external change data capture (CDC) tool to continuously replicate subsequent changes from the source database to CockroachDB.

Common uses

Bulk data load

When migrating data to CockroachDB in a bulk load (without utilizing continuous replication to minimize system downtime), run the molt fetch command with the required flags, as shown below:

Specify the source and target database connections. For connection string formats, refer to Specify source and target databases.

--source $SOURCE
--target $TARGET

For Oracle Multitenant (CDB/PDB) sources, also include --source-cdb to specify the container database (CDB) connection string.

--source $SOURCE
--source-cdb $SOURCE_CDB
--target $TARGET

Specify how to move data to CockroachDB. Use cloud storage for intermediate file storage:

--bucket-path 's3://bucket/path'

Alternatively, use a local file server for intermediate storage:

--local-path /migration/data/cockroach
--local-path-listen-addr 'localhost:3000'

Alternatively, use direct copy to move data directly without intermediate storage:

--direct-copy

Optionally, filter the source data to migrate. By default, all schemas and tables are migrated. For details, refer to Schema and table selection.

--schema-filter 'migration_schema'
--table-filter '.*user.*'

For Oracle sources, --schema-filter is case-insensitive. You can use either lowercase or uppercase:

--schema-filter 'migration_schema'
--table-filter '.*user.*'

For MySQL sources, omit --schema-filter because MySQL tables belong directly to the database specified in the connection string, not to a separate schema. If needed, use --table-filter to select specific tables:

--table-filter '.*user.*'

Specify how to handle target tables. By default, --table-handling is set to none, which loads data without changing existing data in the tables. For details, refer to Handle target tables:

--table-handling truncate-if-exists

When performing a bulk load without subsequent replication, use --ignore-replication-check to skip querying for replication checkpoints (such as pg_current_wal_insert_lsn() on PostgreSQL, gtid_executed on MySQL, and CURRENT_SCN on Oracle). This is appropriate when:

  • Performing a one-time data migration with no plan to replicate ongoing changes.
  • Exporting data from a read replica where replication checkpoints are unavailable.
--ignore-replication-check

At minimum, the molt fetch command should include the source, target, data path, and --ignore-replication-check flags:

molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--ignore-replication-check

For detailed walkthroughs of migrations that use molt fetch in this way, refer to the Classic Bulk Load and Phased Bulk Load migration approaches.


Initial bulk load (before replication)

In a migration that utilizes continuous replication, perform an initial data load before setting up ongoing replication with MOLT Replicator. Run the molt fetch command without --ignore-replication-check, as shown below:

The workflow is the same as Bulk data load, except:

  • Exclude --ignore-replication-check. MOLT Fetch will query and record replication checkpoints.
  • For PostgreSQL sources, include --pglogical-replication-slot-name and --pglogical-publication-and-slot-drop-and-recreate to automatically create the publication and replication slot during the data load.
  • After the data load completes, check the CDC cursor in the output for the checkpoint value to use with MOLT Replicator.

At minimum, the molt fetch command should include the source, target, and data path flags:

For PostgreSQL sources:

molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path' \
--pglogical-replication-slot-name molt_slot \
--pglogical-publication-and-slot-drop-and-recreate

For MySQL sources:

molt fetch \
--source $SOURCE \
--target $TARGET \
--bucket-path 's3://bucket/path'

For Oracle Multitenant sources:

molt fetch \
--source $SOURCE \
--source-cdb $SOURCE_CDB \
--target $TARGET \
--bucket-path 's3://bucket/path'

The output will include a cdc_cursor value at the end of the fetch task:

{"level":"info","type":"summary","fetch_id":"735a4fe0-c478-4de7-a342-cfa9738783dc","num_tables":1,"tables":["public.employees"],"cdc_cursor":"b7f9e0fa-2753-1e1f-5d9b-2402ac810003:3-21","net_duration_ms":4879.890041,"net_duration":"000h 00m 04s","time":"2024-03-18T12:37:02-04:00","message":"fetch complete"}

Use this cdc_cursor value when starting MOLT Replicator to ensure replication begins from the correct position. For detailed steps, refer to Load and replicate.


See also
