

Incremental sync is in beta. It has not been fully tested in production across all source types. Use full sync for any pipeline where correctness is critical. Do not rely on incremental sync for production workloads until this notice is removed.

Incremental sync reads only records that are newer than the last successful run. In MantrixFlow it is the natural fit for application tables and SaaS resources that grow continuously and expose a dependable update field.
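The core loop can be sketched in a few lines: read only rows whose cursor column is newer than the last saved checkpoint, then advance the checkpoint. This is a minimal illustration using SQLite; the `orders` table and `updated_at` column are example names, not MantrixFlow internals.

```python
import sqlite3

def incremental_read(conn, checkpoint):
    """Return rows changed since `checkpoint` and the new checkpoint value."""
    rows = conn.execute(
        "SELECT id, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (checkpoint,),
    ).fetchall()
    # If nothing changed, keep the old checkpoint; otherwise advance it
    # to the newest cursor value seen in this run.
    new_checkpoint = rows[-1][1] if rows else checkpoint
    return rows, new_checkpoint

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-03")],
)

# Only rows newer than the checkpoint are read.
rows, ckpt = incremental_read(conn, "2024-01-01")
print(len(rows), ckpt)  # → 2 2024-01-03
```

A second run with the new checkpoint reads nothing until the source changes again, which is what makes incremental sync cheap on large tables.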

Best fit use cases

  • orders tables with a trustworthy updated_at
  • CRM objects such as deals or contacts that update throughout the day
  • subscription or invoice resources with predictable modification timestamps
  • larger source tables where full sync would be too expensive to run repeatedly

Sources that use incremental sync in MantrixFlow

| Source | Incremental strategy | Delete handling |
| --- | --- | --- |
| PostgreSQL fallback | xmin soft CDC or user-selected column | deletes are not captured |
| MySQL | user-selected timestamp or monotonic ID | deletes are not captured |
| MariaDB | user-selected timestamp or monotonic ID | deletes are not captured |
| SQLite | user-selected column or rowid for append-only tables | deletes are not captured |
| CockroachDB fallback | user-selected column on older clusters | deletes are not captured |
| SaaS connectors | source-native API cursor | depends on each API |

Step by step in MantrixFlow

  1. Create and test the source connection. Run at least one full-sync pipeline successfully first.
  2. Go to Data Pipelines → + New Pipeline, name it, pick the source connection, and click Create & open canvas.
  3. Click ⚙️ on the Source node — discover schema, tick Include on the table, preview raw rows.
  4. Add a Transform node if needed.
  5. Click ⚙️ on the Destination node — open the Config tab:
    • Set Sync mode to INCREMENTAL
    • Set Replication key to the cursor column (e.g. updated_at)
    • Set Write mode to Upsert
    • Click Validate config
  6. Open the Destination Preview tab to confirm the target table.
  7. Click ▷ to run manually. Verify the destination row count and the sync state checkpoint.
  8. Open the Scheduling tab and set the schedule only after the first run is confirmed correct.
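The Destination config in step 5 can be pictured as a small settings object. This is a hypothetical sketch: the key names mirror the UI labels above, not a documented MantrixFlow API, and the validation mirrors the check that Validate config implies (incremental mode needs a replication key).

```python
# Illustrative config mirroring the Destination node's Config tab.
config = {
    "sync_mode": "INCREMENTAL",
    "replication_key": "updated_at",  # the cursor column
    "write_mode": "upsert",
}

def validate_config(cfg):
    """Incremental sync is meaningless without a cursor column to track."""
    if cfg["sync_mode"] == "INCREMENTAL" and not cfg.get("replication_key"):
        raise ValueError("INCREMENTAL sync requires a replication key")
    return True

print(validate_config(config))  # → True
```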

What makes a good cursor column

  • it updates whenever the record changes
  • it is always populated
  • it moves forward consistently
  • it is indexed or otherwise efficient to query

Good examples:
  • updated_at
  • modified_at
  • created_at for append-only tables
  • monotonic numeric IDs for insert-only workloads
  • rowid for append-only SQLite tables
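Two of the criteria above, always populated and moving forward consistently, can be spot-checked before committing to a cursor column. A minimal sketch, with made-up sample values:

```python
def is_good_cursor(values):
    """Reject a candidate cursor column if it has NULLs or moves backward.

    `values` are the column's values in commit order; repeats are fine
    (e.g. two updates in the same second), regressions are not.
    """
    if any(v is None for v in values):
        return False  # not always populated
    return all(a <= b for a, b in zip(values, values[1:]))  # moves forward

print(is_good_cursor([1, 2, 2, 5]))   # monotonic IDs with a repeat → True
print(is_good_cursor([1, None, 3]))   # NULL cursor value → False
```

In practice you would run the equivalent checks as SQL aggregates (`COUNT(*) WHERE col IS NULL`, min/max per batch) rather than pulling the column into memory.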

Source-specific reminders

  • MySQL, MariaDB, and SQLite incremental syncs do not capture hard deletes.
  • If the chosen column is not indexed, each sync may scan the full source table.
  • For CockroachDB, prefer MVCC timestamp mode when the cluster version supports it.
  • For PostgreSQL, if hard-delete tracking is required, model deletes using a soft-delete flag (is_deleted = true) and filter in the transform layer.
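The soft-delete pattern in the last bullet works because flipping the flag is an ordinary UPDATE: it bumps the cursor column, so the incremental sync carries the "delete" to the destination, where the transform layer filters it out. A minimal sketch using SQLite; table and column names follow the bullet above and are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE contacts (id INTEGER, is_deleted INTEGER, updated_at TEXT)"
)
conn.executemany(
    "INSERT INTO contacts VALUES (?, ?, ?)",
    [(1, 0, "2024-01-01"), (2, 0, "2024-01-01")],
)
# "Delete" row 2 by flagging it; updated_at advances, so incremental
# sync sees the change like any other update.
conn.execute(
    "UPDATE contacts SET is_deleted = 1, updated_at = '2024-01-02' WHERE id = 2"
)
# The transform layer filters flagged rows out of downstream views.
live = conn.execute("SELECT id FROM contacts WHERE is_deleted = 0").fetchall()
print(live)  # → [(1,)]
```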

Real-world example

Summit RevOps syncs public.orders from PostgreSQL into PostgreSQL every 15 minutes. They use updated_at as the incremental key because new orders, refund changes, and fulfillment updates all touch that column. This keeps the destination current without rereading years of order history on every run.
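The Upsert write mode in this example is what keeps refund and fulfillment changes from duplicating rows: each 15-minute run inserts new orders and overwrites changed ones keyed on the primary key. A sketch of that destination-side behavior, using SQLite's upsert syntax in place of the PostgreSQL equivalent:

```python
import sqlite3

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

def upsert(rows):
    """Insert new orders; overwrite the status of orders that changed."""
    dest.executemany(
        "INSERT INTO orders VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET status = excluded.status",
        rows,
    )
    dest.commit()

upsert([(1, "placed"), (2, "placed")])     # first run: inserts only
upsert([(1, "refunded"), (3, "placed")])   # later run: one update, one insert
print(dest.execute("SELECT id, status FROM orders ORDER BY id").fetchall())
# → [(1, 'refunded'), (2, 'placed'), (3, 'placed')]
```

PostgreSQL destinations express the same idea with `INSERT ... ON CONFLICT (id) DO UPDATE`.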

Important limitations

Incremental sync is excellent for inserts and updates, but it is not the best fit for hard deletes. Model deletes explicitly with a soft-delete flag if downstream consumers need that signal.