In the builder, a branch is a concrete path from the selected source stream to a destination. Each branch can have its own transform logic, filter, destination connection, and schedule.

Transform approaches

MantrixFlow provides three SQL-based approaches:
Approach              Where                                        Source reference
SQL Transform node    Canvas (between Source and Destination)      {{ source }}
Normalisation tab     Destination panel → Normalisation            Automatic (rename/exclude)
dbt Layer tab         Destination panel → dbt Layer                {{ source('raw', 'schema__tablename') }}

SQL Transform node

Add a Transform node on the canvas and write a SELECT using {{ source }}:
SELECT
  id,
  LOWER(email) AS email,
  first_name || ' ' || last_name AS full_name,
  amount_cents::NUMERIC / 100 AS amount_usd
FROM {{ source }}
WHERE is_deleted = false

Normalisation + dbt Layer

For production pipelines, use the Destination panel tabs:
  1. Normalisation tab — Rename or Exclude source columns before data lands in the raw layer.
  2. dbt Layer tab — Write a dbt SQL model on {{ source('raw', 'schema__tablename') }} using the renamed column names.
See Normalisation and dbt Layer for the full reference.
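As a sketch, a dbt Layer model for a hypothetical orders stream might look like the following. The source identifier public__orders and every column name here are illustrative assumptions, not values from this page; the columns referenced are the renamed names produced by the Normalisation tab.

```sql
-- Illustrative dbt Layer model; table identifier and columns are assumptions.
-- References the renamed columns from the Normalisation tab, not raw source names.
SELECT
  order_id,
  customer_id,
  LOWER(status) AS status,
  amount_cents::NUMERIC / 100 AS amount_usd,
  created_at::DATE AS order_date
FROM {{ source('raw', 'public__orders') }}
WHERE NOT is_deleted
```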

A realistic branching example

A commerce team might branch one orders stream two ways:
  • Analytics branch writes cleaned rows to PostgreSQL in analytics.orders_live.
  • Operations branch writes a narrower customer-service view to MySQL in ops.order_lookup.
That pattern keeps one source read while allowing two different SQL destinations to serve different teams.
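A hedged sketch of the two branch transforms follows; in the builder each branch would carry its own Transform node, and all column names below are invented for illustration.

```sql
-- Analytics branch → PostgreSQL analytics.orders_live (illustrative columns)
SELECT
  id,
  customer_id,
  LOWER(status) AS status,
  amount_cents::NUMERIC / 100 AS amount_usd
FROM {{ source }}
WHERE is_deleted = false
```

```sql
-- Operations branch → MySQL ops.order_lookup: narrower customer-service view
SELECT
  id,
  customer_email,
  status,
  updated_at
FROM {{ source }}
WHERE is_deleted = false
```

Both branches read the same {{ source }} stream; only the SELECT list and destination differ.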

Safe transform workflow

  1. Pick the source table first so previews use the right record shape.
  2. Start with a pass-through (SELECT * FROM {{ source }}) and change one thing at a time.
  3. Use Preview after every meaningful edit.
  4. Save the transform before running the pipeline.
  5. Validate the destination output before turning the schedule on.

Good use cases for a transform

  • standardize enum values such as paid, Paid, and PAID
  • derive reporting-friendly columns such as order_date or mrr
  • remove fields that should not leave the source system
  • flatten JSONB payloads before they reach the destination
  • add metadata such as loaded_from or sync_batch_id
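Several of these use cases can be combined in a single Transform node. The snippet below is a sketch only; the column names (status, payload, internal_notes) and the literal metadata value are assumptions for illustration, not documented MantrixFlow features.

```sql
-- Sketch combining several transform use cases; all columns are illustrative.
SELECT
  id,
  LOWER(status) AS status,             -- standardize paid / Paid / PAID
  created_at::DATE AS order_date,      -- reporting-friendly derived column
  payload::JSONB ->> 'sku' AS sku,     -- flatten one field from a JSONB payload
  'orders_source' AS loaded_from       -- static metadata column
FROM {{ source }}
-- internal_notes is deliberately omitted so it never leaves the source system
```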

Filter node

The builder includes a dedicated Filter node that applies a SQL WHERE clause:
is_deleted = false AND status IN ('active', 'pending')
Use the Filter node for simple keep-or-drop logic; use the Transform node or dbt Layer when you also need reshaping. Current limitation: the filter panel is primarily a configuration surface, so treat source preview, destination preview, and a manual run as the authoritative validation path.