A Postgres-to-Postgres (P2P) pipeline reads from one PostgreSQL source and writes the result into a PostgreSQL destination. Both databases can be on any managed provider. P2P pipelines support a 3-stage transformation pipeline inside the Destination panel.
For lightweight, single-table cases, a SQL transform against {{ source }} is usually enough. For production pipelines with multiple tables, JSONB extraction, or derived metrics, use the Normalisation tab + dbt Layer tab instead.
When to use this
- Mirror a production Neon or RDS database into a reporting schema on Supabase or Aurora
- Consolidate multiple application databases into a single analytics PostgreSQL instance
- Promote data from a staging schema to a clean destination schema with type widening
- Build a separate analytics replica without touching the production database
Prerequisites
- A saved Source connection for the PostgreSQL database you read from
- A saved Destination connection for the PostgreSQL database you write to
- Source user: CONNECT, USAGE on the schema, SELECT on the tables
- Destination user: CONNECT, USAGE, INSERT, UPDATE, CREATE on the destination schema
- Both databases reachable from the MantrixFlow ELT server (public endpoint or IP allowlist)
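The privileges above can be granted like this; the role, database, and schema names below are placeholders for your own:

```sql
-- Source database: read-only role (names are placeholders)
GRANT CONNECT ON DATABASE app TO mantrixflow_reader;
GRANT USAGE ON SCHEMA public TO mantrixflow_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mantrixflow_reader;

-- Destination database: write role
GRANT CONNECT ON DATABASE analytics TO mantrixflow_writer;
GRANT USAGE, CREATE ON SCHEMA analytics TO mantrixflow_writer;
GRANT INSERT, UPDATE ON ALL TABLES IN SCHEMA analytics TO mantrixflow_writer;
```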
Setting up the source connection
Open Connections and create a source
Go to Connections → + New Connection. Set the role to Source and choose PostgreSQL.
Fill in the connection fields
Enter host, port (5432), database, username, password, schema (public), and SSL mode. Use require for Neon, Supabase, and AWS RDS.
Supported providers
| Provider | Setup note |
|---|---|
| Neon | Use the direct connection string. Add MantrixFlow’s IP under IP Allow on paid plans. |
| AWS RDS / Aurora | Add MantrixFlow’s /32 to the security group inbound rule for port 5432. |
| Supabase | Use the host from Connection Settings. Add MantrixFlow’s IP under Network Restrictions. |
| GCP Cloud SQL | Enable public IP and add MantrixFlow’s /32 as an authorized network. |
| Azure Database for PostgreSQL | Add a firewall rule with start and end IP both set to MantrixFlow’s static IP. |
| Aiven | Add MantrixFlow’s /32 to the service IP filter. |
| DigitalOcean Managed PostgreSQL | Add MantrixFlow’s IP as a trusted source on the cluster. |
| Self-hosted | Requires a public IP or port-forward reachable from MantrixFlow. |
Setting up the destination connection
Create a destination connection
Go to Connections → + New Connection. Set the role to Destination and choose PostgreSQL.
Fill in the fields
Same fields as the source form. The schema here is the default — you can override it per-pipeline in the builder.
Creating the pipeline
Create the pipeline shell
Go to Data Pipelines → + New Pipeline. Enter a name, select the PostgreSQL source connection, and click Create & open canvas.
Configure the Source node
On the canvas, click ⚙️ on the Source node. In the panel that opens:
- Click Discover schema to load available tables.
- Tick Include next to each table to sync.
- Click Preview on a table to verify raw rows.
Add a SQL transform (optional)
For lightweight pre-processing, add a Transform node on the canvas and write a SELECT using {{ source }}. Skip this step if you are using the Normalisation + dbt Layer approach.
Configure Normalisation (optional)
Click ⚙️ on the Destination node and open the Normalisation tab. For each source column:
- Rename — enter the column name you want in the raw layer (e.g. customer_id → customer_uuid)
- Exclude — tick to drop the column entirely (e.g. an auto-generated serial_col, or PII you do not want in the warehouse)
Write the dbt Layer model (optional)
Click the dbt Layer tab in the Destination panel. Write a dbt SQL model referencing the normalised raw table. The source reference is {{ source('raw', 'schema__tablename') }} — not {{ source }}.
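As an illustrative sketch, a minimal model might look like this; the raw table name and columns below are hypothetical:

```sql
-- Hypothetical model: raw table and columns are examples only
SELECT
    customer_uuid,                      -- renamed in the Normalisation tab
    order_total_cents / 100.0 AS order_total,
    created_at::date          AS order_date
FROM {{ source('raw', 'public__orders') }}
```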
Column names here are the renamed names from the Normalisation tab. See dbt Layer for more examples.
Configure the Destination node
Click ⚙️ on the Destination node. In the Config tab:
- Connection — select the PostgreSQL destination connection
- Final delivery schema — e.g. analytics
- Sync mode — FULL_TABLE for all first runs
- Write mode — select Upsert
- Click Validate config.
Run and verify
Click ▷ on the Destination node or the Run button in the top bar. Check run results via the history icon in the top bar.
Write modes
| Mode | Status | Behavior |
|---|---|---|
| Upsert | Available | Inserts new rows; updates existing rows matched by primary key |
| Append | Coming soon | Insert only — no deduplication |
| Replace | Coming soon | Drops and recreates the destination table on each run |
SQL transform examples
These examples cover the most common SQL patterns for P2P pipelines. All use {{ source }} as the table reference; MantrixFlow replaces it with the qualified stream name at run time. None of them are tied to a specific provider.
1. Copy a table as-is
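The simplest transform passes every column through unchanged:

```sql
SELECT * FROM {{ source }}
```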
2. Select specific columns
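A sketch that keeps only the columns you need; the column names here are illustrative:

```sql
-- Column names are examples only
SELECT id, email, created_at
FROM {{ source }}
```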
3. Rename columns
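Renaming happens with plain SQL aliases; assumed column names:

```sql
-- Column names are examples only
SELECT
    id         AS customer_id,
    created_at AS signed_up_at
FROM {{ source }}
```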
4. Cast types explicitly
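A possible shape for explicit casts, assuming illustrative columns:

```sql
-- Column names are examples only
SELECT
    id,
    amount::NUMERIC(12, 2)    AS amount,
    created_at::timestamptz   AS created_at
FROM {{ source }}
```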
5. Filter rows before loading
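Filtering in the transform keeps unwanted rows out of the destination entirely; the predicate below is a hypothetical example:

```sql
-- Filter conditions are examples only
SELECT *
FROM {{ source }}
WHERE deleted_at IS NULL
  AND created_at >= '2024-01-01'
```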
6. Compute derived columns
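Derived columns are ordinary SQL expressions; assumed source columns:

```sql
-- Column names are examples only
SELECT
    id,
    first_name || ' ' || last_name  AS full_name,
    EXTRACT(YEAR FROM created_at)   AS signup_year
FROM {{ source }}
```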
7. Convert cents to currency
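A sketch of the cents-to-currency pattern, assuming an integer cents column:

```sql
-- amount_cents is an example column name
SELECT
    id,
    amount_cents / 100.0 AS amount_usd
FROM {{ source }}
```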
8. Flatten a JSONB column
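JSONB fields can be pulled into plain columns with the `->>` operator; the payload keys below are hypothetical:

```sql
-- payload column and its keys are examples only
SELECT
    id,
    payload->>'event_type'          AS event_type,
    (payload->>'value')::NUMERIC    AS value
FROM {{ source }}
```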
9. Aggregate before writing
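Aggregating in the transform means only the summary rows reach the destination; assumed columns:

```sql
-- Column names are examples only
SELECT
    customer_id,
    COUNT(*)                    AS order_count,
    SUM(total_cents) / 100.0    AS lifetime_value
FROM {{ source }}
GROUP BY customer_id
```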
10. Pivot status into boolean flags
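Boolean comparisons pivot a status column into flags directly; the status values are illustrative:

```sql
-- status values are examples only
SELECT
    id,
    status = 'active'     AS is_active,
    status = 'cancelled'  AS is_cancelled,
    status = 'trialing'   AS is_trialing
FROM {{ source }}
```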
11. Cast range types to TEXT
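A sketch of the cast, using illustrative range column names:

```sql
-- Range column names are examples only
SELECT
    id,
    int4range_col::TEXT  AS int4range_col,
    tstzrange_col::TEXT  AS tstzrange_col
FROM {{ source }}
```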
PostgreSQL range types must be cast explicitly — they are not mapped natively by the extraction layer.
12. Cast geometric types to TEXT
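The same explicit-cast pattern applies to geometric types; the column names below are hypothetical:

```sql
-- Geometric column names are examples only
SELECT
    id,
    location_point::TEXT  AS location_point,
    delivery_area::TEXT   AS delivery_area
FROM {{ source }}
```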
13. Extract from arrays
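Array elements and metadata can be projected into scalar columns; the tags column is an assumed example:

```sql
-- tags is an example TEXT[] column
SELECT
    id,
    tags[1]                    AS primary_tag,
    array_length(tags, 1)      AS tag_count,
    array_to_string(tags, ',') AS tags_csv
FROM {{ source }}
```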
14. NULL-safe coalesce
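COALESCE returns the first non-NULL argument, which gives NULL-safe fallbacks; assumed columns:

```sql
-- Column names are examples only
SELECT
    id,
    COALESCE(nickname, first_name, 'unknown') AS display_name,
    COALESCE(discount_pct, 0)                 AS discount_pct
FROM {{ source }}
```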
15. Full cross-type widening in one transform
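A sketch combining the patterns above in one transform; every column name here is illustrative:

```sql
-- Column names are examples only
SELECT
    id,
    amount_cents / 100.0             AS amount,         -- cents to currency
    payload->>'plan'                 AS plan,           -- JSONB extraction
    tags[1]                          AS primary_tag,    -- array element
    int4range_col::TEXT              AS int4range_col,  -- range to TEXT
    location_point::TEXT             AS location_point, -- geometric to TEXT
    COALESCE(updated_at, created_at) AS last_touched_at -- NULL-safe fallback
FROM {{ source }}
WHERE deleted_at IS NULL
```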
Scheduling
Open the Scheduling tab in the Destination panel (⚙️ on the Destination node → Scheduling tab). Choose a cron or fixed interval such as every 1 hour, every 6 hours, or daily at 02:00 UTC. Always validate data correctness with a manual run before enabling a schedule.
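If you use cron, the intervals above correspond to standard five-field expressions (minute, hour, day of month, month, day of week):

```
0 * * * *    # every 1 hour, on the hour
0 */6 * * *  # every 6 hours
0 2 * * *    # daily at 02:00 UTC
```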
Running multiple destinations from the same source
Create two pipelines pointing to the same source connection but different destinations. MantrixFlow serialises runs for the same source to avoid contention — the second run starts automatically after the first completes.
Supported data types
All standard PostgreSQL types are supported. Complex types without a native destination representation (geometric, range, text search, composite, enum) land as TEXT for maximum cross-provider compatibility.
See Data type compatibility for the full mapping table.
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| Connection test times out | Provider firewall blocking MantrixFlow | Add MantrixFlow’s /32 to the provider allowlist |
| No tables discovered | Source user missing USAGE on the schema | GRANT USAGE ON SCHEMA public TO <user> |
| Permission denied at run time | Destination user missing INSERT or UPDATE | Grant write privileges on the destination table |
| Type error at load time | Source type incompatible with destination column | Check data type compatibility and update the destination DDL |
| Duplicate rows on re-run | Destination table has no primary key | ALTER TABLE analytics.your_table ADD PRIMARY KEY (id) |
| Range type columns empty | Range types pass through as NULL by default | Cast to TEXT in the transform: int4range_col::TEXT AS int4range_col |