Documentation Index

Fetch the complete documentation index at: https://docs.mantrixflow.com/llms.txt

Use this file to discover all available pages before exploring further.

Connectors and availability

Which connectors are documented as supported today? The current docs cover only connectors that complete the full live workflow: create, test, save, and run in a pipeline. A connector tile that appears early in rollout but cannot yet complete that workflow is intentionally excluded from the supported set.

Sources:
  • PostgreSQL
  • MySQL
  • MariaDB
  • SQLite
  • CockroachDB
  • Stripe
  • Shopify
  • HubSpot
  • GitHub
  • Notion
Destinations:
  • PostgreSQL
  • MySQL
  • MariaDB
  • SQLite
  • CockroachDB

Why are some future connectors missing from the docs? Because this documentation is intentionally product-aligned: connectors that are not fully available in the app are not documented as supported.

Pipeline behavior

Why does pipeline creation only ask for a source connection? The app creates a pipeline shell first, then opens the canvas builder where you configure everything else. Click Create & open canvas to enter the builder.

Where do I choose the source tables? Click ⚙️ on the Source node in the canvas. The Source panel opens with Discover schema, Refresh tables, and per-table Include and Preview controls.

Where do I configure the destination? Click ⚙️ on the Destination node. The Destination panel has five tabs: Config, Normalisation, dbt Layer, Preview, and Scheduling. Set the connection, Final delivery schema, sync mode, and write mode in the Config tab.

Where do I set the sync mode? In the Destination node → Config tab → Sync mode dropdown (FULL_TABLE or INCREMENTAL).

Where do I set the schedule? In the Destination node → Scheduling tab. Always run the pipeline manually first and validate the output before enabling a schedule.

Transforms and write behavior

How do transformations work today? MantrixFlow transforms are SQL-based. There are two approaches:
  • SQL Transform node — add a Transform node on the canvas and write a SELECT using {{ source }}. Supports casts, filters, derived columns, aggregations, and JSONB operators.
  • Normalisation + dbt Layer — use the Destination panel tabs: Normalisation (rename/exclude columns) then dbt Layer (write a dbt SQL model on {{ source('raw', 'schema__tablename') }}). Best for production pipelines with complex transformations.
Both approaches support preview before saving. See Transformation rules for the full reference.

Which write mode should I use? Use Upsert. It is the currently supported production write mode in the destination panel.
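
As an illustrative sketch only: the table and column names below (orders, amount_cents, payload) are hypothetical placeholders, not part of the MantrixFlow docs; only the {{ source }} and {{ source('raw', 'schema__tablename') }} conventions come from this page. A Transform node SELECT using the documented features (casts, filters, derived columns, JSONB operators) might look like:

```sql
-- Hypothetical Transform node SQL.
-- {{ source }} stands in for the upstream rows, per the docs above.
SELECT
  id,
  status,
  amount_cents / 100.0         AS amount_usd,     -- derived column via a cast
  payload ->> 'customer_email' AS customer_email  -- JSONB operator
FROM {{ source }}
WHERE status = 'paid'                             -- filter
```

A dbt Layer model on a raw table (here an assumed public__orders, following the documented schema__tablename naming) might aggregate it for delivery:

```sql
-- Hypothetical dbt Layer model.
SELECT
  status,
  COUNT(*)                  AS order_count,
  SUM(amount_cents) / 100.0 AS total_usd
FROM {{ source('raw', 'public__orders') }}
GROUP BY status
```

Either way, use the preview controls to inspect the output before saving, as noted above.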

Operations

Can I reset sync state from the UI? Not as a standard self-serve workflow in the current app. Treat sync-state changes as an operational recovery task, and verify the pipeline with a manual run before restoring schedules.

How do I troubleshoot a failed first run? Check these in order:
  1. Re-test the source and destination connections (Source panel → Test connection).
  2. Confirm the selected source table exists (Source panel → Refresh tables).
  3. Preview source rows (Source panel → Preview on the table).
  4. Click Validate config in the Destination Config tab.
  5. Use the Destination Preview tab to inspect the target table.
  6. Check the history icon in the builder top bar for run errors.
  7. Check the org-wide Activity Log for the full event trace.