The pipeline reads its configuration from a single YAML file. Secrets can be injected using `${ENV_VAR}` interpolation in destination credential fields. The source itself is managed by GoldRush; you configure only the topic, processing options, and destination.
## Minimal Example
```yaml
project: "analytics-prod"
topic: "base.mainnet.ref.block.logs"
destination:
  type: "postgres"
  url: "postgresql://db.example.com:5432/analytics"
  user: "${PG_USER}"
  password: "${PG_PASSWORD}"
```
## Top-Level Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `project` | string | yes | Project identifier. Used to derive the database schema, consumer group, and checkpoint path. |
| `topic` | string | yes | Source topic. Must follow the `{chain}.{network}.{qualifier}.block.{entity}` pattern. |
| `destination` | object | yes | Destination configuration. See Destination Types below. |
| `abi` | object | no | ABI decoding configuration. |
| `transforms` | map | no | SQL transforms per table. |
| `execution` | object | no | Execution mode and settings. |
Topics follow a strict naming convention:

```
{chain}.{network}.{qualifier}.block.{entity}
```

For example: `base.mainnet.ref.block.logs`, `base.mainnet.ref.block.transactions`.
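The naming convention above fixes only the dotted structure; as a minimal sketch, topic names can be validated and split with a regular expression. The character set allowed in each segment is an assumption here, not something the convention specifies:

```python
import re

# Hypothetical validator for {chain}.{network}.{qualifier}.block.{entity}.
# The per-segment charsets ([a-z0-9-], [a-z0-9_]) are assumptions.
TOPIC_PATTERN = re.compile(
    r"^(?P<chain>[a-z0-9-]+)\.(?P<network>[a-z0-9-]+)\."
    r"(?P<qualifier>[a-z0-9-]+)\.block\.(?P<entity>[a-z0-9_]+)$"
)

def parse_topic(topic: str) -> dict:
    """Split a topic into its named parts, or raise ValueError."""
    m = TOPIC_PATTERN.match(topic)
    if m is None:
        raise ValueError(f"topic does not follow the naming convention: {topic!r}")
    return m.groupdict()
```

Running `parse_topic("base.mainnet.ref.block.logs")` yields the four named segments; anything that deviates from the five-part dotted shape is rejected.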
## Destination Types
Each destination type has its own set of required and optional fields. The `type` field within the `destination` object determines which destination is used.
| Destination Type | `type` Value | Description |
|---|---|---|
| ClickHouse | `clickhouse` | Column-oriented analytics database |
| Postgres | `postgres` | Relational database |
| Object Storage | `object_storage` | S3-compatible blob storage |
| Amazon SQS | `sqs` | Message queue |
| Webhook | `webhook` | HTTP POST to your endpoint |
| Kafka | `kafka` | Raw byte-level passthrough (raw pipeline only) |
Each destination type has its own reference page with full configuration details, batching options, and examples.
## ABI Decoding
The `abi` object configures optional ABI decoding for event logs and transaction input data.
| Field | Type | Required | Description |
|---|---|---|---|
| `abi.path` | string | yes | Path to the ABI JSON file. |
| `abi.contract_addresses` | list | no | Restrict decoding to specific contract addresses. If omitted, all matching signatures are decoded. |
| `abi.unmatched` | string | no | Behavior for entries that do not match the ABI: `skip` (default) drops them; `raw` writes them to a `raw_*` fallback table. |
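For instance, to keep undecodable entries in a `raw_*` fallback table instead of dropping them, set `unmatched` to `raw`. This is a sketch; the path and contract address are the illustrative values used elsewhere in this page:

```yaml
abi:
  path: "/etc/pipeline-api/uniswap-v3.json"
  contract_addresses:
    - "0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45"
  unmatched: "raw"
```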
## Transforms

The `transforms` map applies SQL statements per output table. Keys are table names (e.g., `evt_swap`, `raw_logs`); values are SQL strings.
```yaml
transforms:
  evt_swap: >
    SELECT block_number, tx_hash, contract_address, sender, recipient, amount0, amount1
    FROM evt_swap
    WHERE contract_address = '0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45'
```
Transforms run after ABI decoding (if configured) and before the destination stage. Standard SQL is supported, including `SELECT` projections, `WHERE` filters, joins across tables within the same pipeline, and aggregate functions.

A SQL syntax error or runtime error in a transform will fail the pipeline. Validate your SQL before deploying.
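Since aggregate functions are supported, a transform can also reshape a decoded table rather than just filter it. A hedged sketch, where the output table key `swap_counts` is illustrative rather than part of any real schema:

```yaml
transforms:
  # Hypothetical output table; counts decoded swaps per contract.
  swap_counts: >
    SELECT contract_address, COUNT(*) AS swaps
    FROM evt_swap
    GROUP BY contract_address
```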
## Execution
The execution object controls how the pipeline runs.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `execution.mode` | string | no | `unbounded` | `unbounded` (continuous) or `bounded` (batch). |
| `execution.start_from` | string | no | `earliest` | `earliest`, `latest`, or a block height. |
| `execution.stop_from` | string | no | `never` | `never`, `latest`, or a block height. Only meaningful in `bounded` mode. |
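As an example, a bounded backfill over a fixed block range might look like the following. The block heights are illustrative, and quoting them as strings is an assumption based on the field type above:

```yaml
execution:
  mode: "bounded"
  start_from: "18000000"
  stop_from: "18500000"
```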
## Complete Example
```yaml
project: "analytics-prod"
topic: "base.mainnet.ref.block.logs"

destination:
  type: "postgres"
  url: "postgresql://db.example.com:5432/analytics"
  user: "${PG_USER}"
  password: "${PG_PASSWORD}"
  batch_size: 1000

abi:
  path: "/etc/pipeline-api/uniswap-v3.json"
  contract_addresses:
    - "0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45"
  unmatched: "skip"

transforms:
  evt_swap: >
    SELECT block_number, tx_hash, contract_address, sender, recipient, amount0, amount1
    FROM evt_swap
    WHERE contract_address = '0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45'

execution:
  mode: "unbounded"
  start_from: "earliest"
```
Use `${ENV_VAR}` interpolation for all destination credentials. Never commit secrets directly in the YAML file.
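The pipeline performs the interpolation itself; as a minimal sketch of the semantics only (the real implementation and its error handling may differ), a `${ENV_VAR}` placeholder resolves to the value of that environment variable at load time:

```python
import os
import re

# Illustrative sketch of ${ENV_VAR} interpolation semantics. The allowed
# variable-name charset and the failure behavior are assumptions.
_VAR = re.compile(r"\$\{([A-Z_][A-Z0-9_]*)\}")

def interpolate(value: str, env=os.environ) -> str:
    """Replace every ${NAME} in value with env[NAME]; fail if NAME is unset."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in env:
            raise KeyError(f"environment variable {name} is not set")
        return env[name]
    return _VAR.sub(repl, value)
```

With `PG_USER=svc` in the environment, `interpolate("${PG_USER}")` returns `svc`; an unset variable fails loudly rather than silently passing the placeholder through.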