Incremental sync reads only records that are newer than the last successful run. In MantrixFlow, this is the most common production choice for application tables and SaaS resources that grow continuously and expose a dependable update field.
## Best fit use cases
- `orders`-style tables with a trustworthy `updated_at`
- CRM objects such as deals or contacts that update throughout the day
- subscription or invoice resources with predictable modification timestamps
- larger source tables where full sync would be too expensive to run repeatedly
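The core of incremental sync is a filtered read against the cursor column: only rows newer than the last checkpoint are fetched. A minimal sketch using Python's built-in `sqlite3` (the `orders` table, its contents, and the checkpoint value are hypothetical illustrations, not MantrixFlow internals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2024-01-01T00:00:00"), (2, "2024-02-01T00:00:00"), (3, "2024-03-01T00:00:00")],
)

# Checkpoint persisted by the previous successful run.
last_checkpoint = "2024-01-15T00:00:00"

# Incremental read: only rows whose cursor value is newer than the checkpoint.
rows = conn.execute(
    "SELECT id, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
    (last_checkpoint,),
).fetchall()

print(rows)  # only orders 2 and 3 are newer than the checkpoint
```

ISO-8601 timestamps sort correctly as strings, which is why a plain `>` comparison works here; a numeric ID cursor behaves the same way.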
## Sources that use incremental sync in MantrixFlow
| Source | Incremental strategy | Delete handling |
|---|---|---|
| PostgreSQL fallback | `xmin` soft CDC or user-selected column | deletes are not captured |
| MySQL | user-selected timestamp or monotonic ID | deletes are not captured |
| MariaDB | user-selected timestamp or monotonic ID | deletes are not captured |
| SQLite | user-selected column or rowid for append-only tables | deletes are not captured |
| CockroachDB fallback | user-selected column on older clusters | deletes are not captured |
| SaaS connectors | source-native API cursor | depends on each API |
## Step by step in MantrixFlow
- Create and test the source connection. Run at least one full-sync pipeline successfully first.
- Go to Data Pipelines → + New Pipeline, name it, pick the source connection, and click Create & open canvas.
- Click ⚙️ on the Source node — discover schema, tick Include on the table, preview raw rows.
- Add a Transform node if needed.
- Click ⚙️ on the Destination node — open the Config tab:
  - Set Sync mode to `INCREMENTAL`
  - Set Replication key to the cursor column (e.g. `updated_at`)
  - Set Write mode to `Upsert`
  - Click Validate config
- Open the Destination Preview tab to confirm the target table.
- Click ▷ to run manually. Verify the destination row count and the sync state checkpoint.
- Open the Scheduling tab and set the schedule only after the first run is confirmed correct.
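Conceptually, each scheduled run does three things: read rows past the checkpoint, upsert them into the destination, and advance the checkpoint. A sketch of that loop with `sqlite3` standing in for both source and destination (table schema, the `run_incremental` helper, and all data are hypothetical, not MantrixFlow's actual engine):

```python
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, updated_at TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "shipped", "2024-03-01"),
    (2, "pending", "2024-03-05"),
])
dst.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, updated_at TEXT)")

def run_incremental(checkpoint):
    """One pipeline run: read past the checkpoint, upsert, return the new checkpoint."""
    rows = src.execute(
        "SELECT id, status, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (checkpoint,),
    ).fetchall()
    # Upsert keyed on the primary key, so re-synced rows overwrite, not duplicate.
    dst.executemany(
        "INSERT INTO orders VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET status = excluded.status, "
        "updated_at = excluded.updated_at",
        rows,
    )
    return rows[-1][2] if rows else checkpoint

cp = run_incremental("1970-01-01")  # first run picks up both rows
src.execute("UPDATE orders SET status = 'refunded', updated_at = '2024-03-09' WHERE id = 1")
cp = run_incremental(cp)            # second run picks up only the changed row
```

This is why verifying the checkpoint after the first manual run matters: if the checkpoint never advances, every run re-reads the same rows; if it advances past unsynced rows, data is silently skipped.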
## What makes a good cursor column
- it updates whenever the record changes
- it is always populated
- it moves forward consistently
- it is indexed or otherwise efficient to query
Common choices:
- `updated_at` or `modified_at`
- `created_at` for append-only tables
- monotonic numeric IDs for insert-only workloads
- `rowid` for append-only SQLite tables
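Two of the criteria above — always populated and efficient to query — can be checked directly against the source before committing to a cursor column. A sketch for SQLite (the `deals` table and helper functions are hypothetical; other databases expose the same information through their own catalogs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deals (id INTEGER PRIMARY KEY, updated_at TEXT)")
conn.execute("CREATE INDEX idx_deals_updated_at ON deals (updated_at)")
conn.executemany("INSERT INTO deals VALUES (?, ?)", [(1, "2024-01-01"), (2, "2024-01-02")])

def null_count(table, column):
    # A good cursor column is always populated; NULLs silently fall out of the sync.
    return conn.execute(f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL").fetchone()[0]

def is_indexed(table, column):
    # A good cursor column is cheap to filter on; otherwise each run scans the table.
    for (name,) in conn.execute(f"SELECT name FROM pragma_index_list('{table}')"):
        cols = [c for (c,) in conn.execute(f"SELECT name FROM pragma_index_info('{name}')")]
        if column in cols:
            return True
    return False

print(null_count("deals", "updated_at"), is_indexed("deals", "updated_at"))
```

Rows with a NULL cursor value never satisfy `cursor > checkpoint`, so they are skipped forever — worth checking before the first run rather than after.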
## Source-specific reminders
- MySQL, MariaDB, and SQLite incremental syncs do not capture hard deletes.
- If the chosen column is not indexed, each sync may scan the full source table.
- For CockroachDB, prefer MVCC timestamp mode when the cluster version supports it.
- For PostgreSQL, if hard-delete tracking is required, model deletes using a soft-delete flag (`is_deleted = true`) and filter in the transform layer.
## Real-world example
Summit RevOps syncs `public.orders` from PostgreSQL into PostgreSQL every 15 minutes. They use `updated_at` as the incremental key because new orders, refund changes, and fulfillment updates all touch that column. This keeps the destination current without rereading years of order history on every run.