A MantrixFlow pipeline is the operational definition of how data moves from one source connection into one or more destinations.

Documentation Index
Fetch the complete documentation index at: https://docs.mantrixflow.com/llms.txt
Use this file to discover all available pages before exploring further.
The current lifecycle
- Create a pipeline shell from Data Pipelines.
- Open the builder canvas (Source and Destination nodes appear automatically).
- Click ⚙️ on the Source node — tick tables to include, use Discover schema and Preview.
- Option A — simple: Add a SQL Transform node on the canvas and write your SELECT using `{{ source }}`.
- Option B — Normalisation + dbt Layer: Click ⚙️ on the Destination node:
- Normalisation tab: Rename or exclude source columns before they land in the raw layer.
- dbt Layer tab: Write a dbt SQL model on `{{ source('raw', 'schema__tablename') }}`.
- In the Config tab: set connection, final delivery schema, sync mode, write mode, and click Validate config.
- Click ▷ to run manually, check the history icon, then add a schedule in the Scheduling tab.
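As a sketch of Option A, a Transform node query could look like the following. The table and column names here are hypothetical; `{{ source }}` is the placeholder the builder resolves to your selected source table.

```sql
-- Hypothetical Transform node SQL: {{ source }} resolves to the
-- selected source table, and this runs before normalisation.
SELECT
    id,
    lower(email)     AS email,
    created_at::date AS signup_date
FROM {{ source }}
WHERE created_at >= '2024-01-01'
```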
The parts of a pipeline
- Pipeline shell: name plus source connection
- Source configuration: the concrete table or resource to sync
- Transform node (canvas): optional SQL SELECT using `{{ source }}` — runs before normalisation
- Destination panel — 5 tabs:
- Config: connection, final delivery schema, sync mode, replication key, write mode, validate
- Normalisation: column-level Rename and Exclude rules; lands in `raw.schema__tablename`
- dbt Layer: dbt SQL model on `{{ source('raw', 'schema__tablename') }}`; uses renamed column names
- Preview: inspect the target table before running
- Scheduling: recurring schedule configuration
- Run history: the operational record of every execution
- Sync state: the stored cursor checkpoint for incremental sync pipelines
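As a hedged sketch of the dbt Layer part above, a model over a synced `public.orders` table (hypothetical table and column names, following the `schema__tablename` raw-layer naming) might be:

```sql
-- Hypothetical dbt Layer model: reads the raw landing table and
-- delivers an aggregated reporting table to the final delivery schema.
SELECT
    customer_id,
    count(*)    AS order_count,
    sum(amount) AS total_amount
FROM {{ source('raw', 'public__orders') }}
GROUP BY customer_id
```

Note that the column names here must be the post-Normalisation (renamed) names, not the original source names.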
3-stage Normalisation + dbt pipeline
For pipelines that combine Normalisation and the dbt Layer (any supported source and SQL destination), the flow is:
- Stage 1: the source data is synced, with Normalisation rename and exclude rules applied, into `raw.schema__tablename`.
- Stage 2: the dbt Layer model reads `{{ source('raw', 'schema__tablename') }}` using the renamed column names.
- Stage 3: the model's output is written to the final delivery schema with the configured write mode.

What is true in the live builder today
- Stream selection happens in the Source panel (⚙️ on Source node).
- The destination panel has 5 tabs: Config, Normalisation, dbt Layer, Preview, Scheduling.
- SQL Transform node uses `{{ source }}`; dbt Layer uses `{{ source('raw', 'schema__tablename') }}`.
- Upsert is the available write mode. Append and Replace are coming soon.
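Upsert semantics can be illustrated with a minimal sketch (a toy model, not MantrixFlow's implementation): rows whose key matches an existing row update it in place, and rows with unseen keys are inserted.

```python
def upsert(target: dict, rows: list[dict], key: str) -> dict:
    """Merge rows into target keyed by `key`: update on match, insert otherwise."""
    for row in rows:
        target[row[key]] = row  # same key -> overwrite, new key -> insert
    return target

existing = {1: {"id": 1, "status": "pending"}}
incoming = [{"id": 1, "status": "shipped"}, {"id": 2, "status": "pending"}]
merged = upsert(existing, incoming, key="id")
# the matched row is updated; the new row is inserted; nothing is duplicated
```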
Typical production patterns
- one PostgreSQL table feeding one PostgreSQL reporting table
- one Shopify or Stripe resource feeding one SQL destination table
- one source stream branching to PostgreSQL for analytics and MySQL for internal operations
- a PostgreSQL source using full sync for first validation, then incremental sync (beta) for volume efficiency