The Postgres destination writes structured pipeline output to a PostgreSQL database using batch inserts. It follows the same batching model as the ClickHouse destination with defaults tuned for PostgreSQL workloads.

Configuration

```yaml
destination:
  type: "postgres"
  url: "postgresql://host:5432/db"
  user: "${PG_USER}"
  password: "${PG_PASSWORD}"
  batch_size: 1000
  flush_interval_ms: 5000
```

Fields

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | string | yes | — | PostgreSQL connection URL (e.g. `postgresql://host:5432/db`) |
| `user` | string | yes | — | Database user |
| `password` | string | yes | — | Database password |
| `batch_size` | int | no | 1000 | Rows per batch insert |
| `flush_interval_ms` | long | no | 5000 | Max time (ms) between flushes |

How It Works

  1. Rows are buffered in memory as they arrive from the pipeline.
  2. When the buffer reaches batch_size or flush_interval_ms elapses since the last flush, a batch INSERT executes against the target table.
  3. The INSERT SQL is dynamically generated from the field names present in each row.
  4. All column names are automatically quoted in the generated SQL to handle PostgreSQL reserved words such as user, from, and order.
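The buffering, flush, and SQL-generation steps above can be sketched as follows. This is a minimal illustration, not the destination's actual implementation: `BatchBuffer`, `build_insert`, and the injected `execute_batch` callable are hypothetical names, and rows in a batch are assumed to share the same field set.

```python
import time


def build_insert(table: str, row: dict) -> tuple[str, list]:
    """Generate a parameterized INSERT from a row's field names,
    double-quoting every identifier so PostgreSQL reserved words
    such as "user", "from", and "order" are safe column names."""
    cols = ", ".join(f'"{c}"' for c in row)
    placeholders = ", ".join(["%s"] * len(row))
    sql = f'INSERT INTO "{table}" ({cols}) VALUES ({placeholders})'
    return sql, list(row.values())


class BatchBuffer:
    """Sketch of the flush policy: emit a batch when batch_size rows
    have accumulated or flush_interval_ms has elapsed since the last
    flush. `execute_batch(sql, rows)` stands in for the real DB call."""

    def __init__(self, table, execute_batch, batch_size=1000, flush_interval_ms=5000):
        self.table = table
        self.execute_batch = execute_batch
        self.batch_size = batch_size
        self.flush_interval = flush_interval_ms / 1000.0
        self.rows = []
        self.last_flush = time.monotonic()

    def add(self, row: dict) -> None:
        # Step 1: buffer rows in memory as they arrive.
        self.rows.append(row)
        # Step 2: flush on size or time threshold.
        if (len(self.rows) >= self.batch_size
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self) -> None:
        if self.rows:
            # Steps 3-4: build the quoted INSERT from the first row's
            # field names and send the whole batch in one call.
            sql, _ = build_insert(self.table, self.rows[0])
            self.execute_batch(sql, [list(r.values()) for r in self.rows])
            self.rows = []
        self.last_flush = time.monotonic()
```

Note the time threshold is only checked when a row arrives; a production destination would typically also flush from a background timer so a quiet pipeline does not hold rows past `flush_interval_ms`.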