This guide walks you through creating your first data pipeline from sign-up to seeing data in your destination. You will use the Faker source (synthetic data) and a PostgreSQL destination. No external accounts are required beyond MantrixFlow and a PostgreSQL database.
2. Navigate to connections
In the workspace, open Connections from the sidebar. You will create a source connection and a destination connection.

3. Create a Faker source connection
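Faker-style sources generate deterministic synthetic rows from a count and a seed, which is why the defaults in this step are safe to keep: the same seed reproduces the same rows on every sync. A minimal sketch of the idea (illustrative field names, not the connector's actual code):

```python
import random

def generate_users(count: int, seed: int) -> list[dict]:
    """Generate deterministic synthetic user rows, Faker-style."""
    rng = random.Random(seed)  # same seed -> same rows on every run
    first_names = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
    return [
        {
            "id": i,
            "name": rng.choice(first_names),
            "score": rng.randint(0, 100),
        }
        for i in range(count)
    ]

rows = generate_users(count=3, seed=42)
print(len(rows))
```

Because the generator is seeded, rerunning a sync with unchanged settings produces the same dataset, which makes the rest of this tutorial reproducible.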
Click + New Connection (the same button appears in the empty state). In the catalog, pick Faker as the connector and give it a name (e.g. “Test Data”). The Faker connector generates synthetic data for testing; you can leave the default count and seed values. Save the connection.

4. Create a PostgreSQL destination connection
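The values you enter in this step (host, port, database name, username, password) combine into a standard PostgreSQL connection URL, which is handy for checking credentials outside the app. A sketch with made-up example values; note that special characters in the username or password must be percent-encoded:

```python
from urllib.parse import quote

def build_postgres_url(host: str, port: int, dbname: str,
                       user: str, password: str) -> str:
    """Combine separate connection fields into a standard PostgreSQL URL."""
    # Percent-encode user and password so characters like @ or : do not
    # break the URL structure.
    return (
        f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{dbname}"
    )

url = build_postgres_url("db.example.com", 5432, "analytics", "app", "p@ss:word")
print(url)  # -> postgresql://app:p%40ss%3Aword@db.example.com:5432/analytics
```

Hosted providers (Supabase, Neon, etc.) often hand you a URL in exactly this shape, which you can split back into the individual fields the form asks for.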
Click + New Connection again, toggle the role to Destination if needed (/workspace/connections/new?role=destination), and select PostgreSQL. Enter the host, port (default 5432), database name, username, and password. For hosted Postgres (Supabase, Neon, etc.), use your provider’s connection values. Save, and use Test Connection when it is offered.

5. Navigate to Data Pipelines
In the sidebar, open Data Pipelines (/workspace/data-pipelines) and click New Pipeline (the label on the header action).

6. Create the pipeline (name + source only)
On Create Pipeline, enter a Pipeline name, choose your Faker source under Source connection, then click Create & open canvas. The app creates a pipeline shell and sends you to the builder; you do not pick the PostgreSQL destination on this screen (that happens on the canvas).

7. Wire the destination on the canvas

In the builder, connect the source to your real PostgreSQL destination, replacing any placeholder destination with your destination connection and table as the UI allows. Select the stream (e.g. users from Faker), set the sync mode to Full and the write mode to Replace (or the equivalent labels on the destination node), then Save if the toolbar exposes a save action.

8. Run the pipeline
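Before running, it helps to be precise about what Full sync plus Replace write (chosen on the canvas) means: each run reads every source row and overwrites the destination table, so reruns are idempotent rather than append-only. In miniature (a simulation of the contract, not MantrixFlow code):

```python
def full_replace_sync(source_rows: list[dict], destination: list[dict]) -> list[dict]:
    """Full sync + Replace write: destination ends up exactly equal to source."""
    destination.clear()              # Replace: drop any existing rows first
    destination.extend(source_rows)  # Full: write every source row
    return destination

dest = [{"id": 99, "name": "stale"}]
full_replace_sync([{"id": 1}, {"id": 2}], dest)
print(dest)  # -> [{'id': 1}, {'id': 2}]
```

This is why running the pipeline twice leaves the same rows in PostgreSQL instead of duplicating them.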
Use Run pipeline / Run now from the builder or the pipeline detail page. Wait until the run finishes; open Run history on the pipeline detail view if you need status, row counts, or errors.

9. Verify the data
In PostgreSQL, confirm the destination table holds rows synced from Faker (shape depends on the stream you selected).
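For this check, a plain row-count query is usually enough, assuming the destination table is named after the stream you selected (e.g. users; adjust if your table name differs). A small helper that builds the query and rejects suspicious identifiers, since table names cannot be passed as bound parameters:

```python
import re

def count_query(table: str) -> str:
    """Build a row-count check for the destination table (identifier-validated)."""
    # Only allow plain SQL identifiers; table names can't be bound parameters.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", table):
        raise ValueError(f"suspicious table name: {table!r}")
    return f"SELECT count(*) AS row_count FROM {table};"

print(count_query("users"))  # -> SELECT count(*) AS row_count FROM users;
```

Run the resulting statement in psql or any SQL client; a nonzero count confirms the sync wrote rows.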
No PostgreSQL?

If you do not have PostgreSQL, you can use DuckDB as the destination for local testing, or any other supported destination you have access to.
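Finally, if you later script step 8 instead of clicking Run now, the usual shape is to poll run status until a terminal state, with a timeout. MantrixFlow's API is not documented in this guide, so `get_run_status` below is a hypothetical stand-in for whatever status lookup you have available:

```python
import time

TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def wait_for_run(get_run_status, run_id: str,
                 timeout_s: float = 600, poll_s: float = 1.0) -> str:
    """Poll a status callback until the run reaches a terminal state."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_run_status(run_id)
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_s)
    raise TimeoutError(f"run {run_id} still not finished after {timeout_s}s")

# Stub standing in for a real status lookup: running twice, then succeeded.
states = iter(["running", "running", "succeeded"])
result = wait_for_run(lambda _rid: next(states), "run-1", poll_s=0.01)
print(result)  # -> succeeded
```

The timeout guards against a run that hangs; in a real script you would surface `failed` and `cancelled` results rather than treating every terminal state as success.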