The GoldRush Pipeline API processes blockchain data from the Covalent Database and routes it to your infrastructure. There are two distinct processing paths depending on your use case.

Processing Paths

                          Covalent Database
                                |
                 +--------------+--------------+
                 |                             |
          Structured Pipeline            Raw Pipeline
                 |                             |
          Normalization                   Byte-level
                 |                        passthrough
          ABI Decoding (optional)              |
                 |                          Kafka
          SQL Transforms (optional)
                 |
              +--+--+--+--+
              |  |  |  |  |
             CH  PG OS SQS WH

  CH = ClickHouse    PG = Postgres    OS = Object Storage
  SQS = Amazon SQS   WH = Webhook

Structured pipeline

Data flows from the Covalent Database through a series of processing stages before landing in your chosen destination: ClickHouse, Postgres, Object Storage, SQS, or Webhook. Each stage is optional except normalization and the final destination. Structured destinations provide at-least-once delivery guarantees.
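The stage ordering above can be sketched as a small composition: normalization always runs, while decoding and transforms are applied only when configured. This is a conceptual sketch, not the GoldRush API; the function names and the toy schema are illustrative.

```python
def normalize(rec):
    # Mandatory stage: coerce source fields into a fixed, typed schema.
    return {"block": int(rec["block"]), "data": str(rec.get("data", ""))}

def run_structured(records, decode=None, transform=None):
    """Run the structured path: normalization always, other stages if given."""
    rows = [normalize(r) for r in records]      # normalization cannot be skipped
    if decode is not None:
        rows = [decode(r) for r in rows]        # optional ABI decoding
    if transform is not None:
        rows = transform(rows)                  # optional SQL-style transform
    return rows                                 # handed off to the destination writer
```

Passing `None` for an optional stage simply skips it, mirroring how unconfigured stages drop out of the structured path.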

Raw pipeline

Byte-level passthrough directly to a Kafka topic. No normalization, no decoding, no transforms. Use this when you need the lowest latency path to raw block data. The raw pipeline provides exactly-once delivery guarantees.
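The practical difference between the two delivery guarantees can be shown in miniature. With at-least-once delivery (structured destinations), a retry may redeliver a message, so consumers that need uniqueness deduplicate on a stable key such as the Kafka `(partition, offset)` pair. This is an illustrative consumer-side sketch, not GoldRush code.

```python
def dedupe_redeliveries(messages):
    """Collapse at-least-once redeliveries by their (partition, offset) key."""
    seen = set()
    out = []
    for partition, offset, payload in messages:
        key = (partition, offset)
        if key in seen:
            continue            # duplicate redelivery: drop it
        seen.add(key)
        out.append(payload)
    return out
```

Under exactly-once delivery, the deduplication above is unnecessary because each message is delivered to the consumer exactly one time.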

Processing Stages

Normalization

Converts source data into a consistent tabular schema. Each entity type (logs, transactions, traces, etc.) is mapped to a well-defined set of columns with predictable types. This stage runs on every record in the structured pipeline and cannot be skipped.
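Conceptually, normalization maps every record of an entity type onto the same set of columns with predictable types. The column set below is a made-up stand-in, not the actual GoldRush log schema.

```python
# Illustrative column schema for a "logs" entity: name -> expected type.
LOG_COLUMNS = {"block_number": int, "tx_hash": str, "address": str, "topic0": str}

def normalize_log(raw):
    """Coerce a raw record into the fixed column set with predictable types."""
    row = {}
    for col, typ in LOG_COLUMNS.items():
        # Missing fields get a typed default (0 for int, "" for str).
        row[col] = typ(raw.get(col, typ()))
    return row
```

Every output row has exactly the same keys and types regardless of what the source record contained, which is what makes the downstream stages and destinations schema-stable.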

ABI decoding

Optionally decodes raw event logs and transaction input data against a provided ABI. Matched entries are expanded into named, typed columns (one table per event or function signature). Unmatched entries can be skipped or routed to a raw_* fallback table depending on your configuration.
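The matched/unmatched routing described above can be sketched as a lookup on the event signature hash (`topic0`). The signature prefix, table names, and fallback flag here are all hypothetical; real decoding also expands the matched entry into named, typed columns.

```python
# Illustrative mapping from a topic0 prefix to its decoded per-event table.
KNOWN_EVENTS = {"0xddf252ad": "transfer"}

def route_log(log, fallback_to_raw=True):
    """Send a log to its decoded table, a raw_* fallback, or skip it."""
    table = KNOWN_EVENTS.get(log["topic0"][:10])
    if table is not None:
        return table, log          # matched: decoded into its per-event table
    if fallback_to_raw:
        return "raw_logs", log     # unmatched: kept in a raw_* fallback table
    return None, None              # unmatched: skipped per configuration
```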

SQL transforms

Optional SQL statements applied per table after decoding. Use these to filter, project, or reshape data before it reaches the destination. Transforms run as standard SQL against the in-flight table, so you can use WHERE, SELECT, joins across tables in the same pipeline, and aggregate functions.
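To illustrate the idea of a per-table transform, the sketch below runs user-supplied SQL against an in-memory SQLite table standing in for the in-flight pipeline table. The `transfers` table and its columns are made up for the example; the real pipeline applies your SQL to the decoded tables it carries.

```python
import sqlite3

def apply_transform(rows, sql):
    """Apply a SQL transform (filter/project/reshape) to an in-flight table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE transfers (sender TEXT, amount INTEGER)")
    con.executemany("INSERT INTO transfers VALUES (?, ?)", rows)
    return con.execute(sql).fetchall()

# Filter and project before the destination sees the data, e.g.:
#   apply_transform(rows, "SELECT sender FROM transfers WHERE amount > 100")
```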

Destination

The terminal stage that writes processed data to your destination. Each destination type has its own configuration, batching behavior, and retry semantics. See the individual destination reference pages for details.
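A generic shape for destination-side batching and retries looks roughly like the sketch below. The batch size, retry count, and `write` callable are illustrative knobs; each real destination type documents its own batching behavior and retry semantics.

```python
import time

def flush_batches(rows, write, batch_size=2, retries=3):
    """Write rows to a destination in batches, retrying transient failures."""
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        for attempt in range(retries):
            try:
                write(batch)
                break                      # batch delivered
            except OSError:
                if attempt == retries - 1:
                    raise                  # retries exhausted: surface the error
                time.sleep(0)              # placeholder for a real backoff delay
```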

Execution Modes

Continuous (default)

The pipeline runs continuously, consuming new data as it arrives. There is no defined stop point. Use this for real-time indexing, monitoring, and any workload that should stay current with the chain.

Bounded

The pipeline processes data between a start point and a defined stop point, then terminates. Use this for backfills, one-off analyses, or any workload with a known data range. Configure stop_from and optionally stop_offsets to define the end boundary.
Checkpointing is enabled in both modes. If a pipeline restarts, it resumes from the last successful checkpoint rather than reprocessing from the beginning.
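Checkpoint-based resumption can be sketched as follows: progress is recorded after each successfully processed unit of work, and a restart picks up just past the checkpoint instead of reprocessing from the beginning. The dict-backed store here is a stand-in for the pipeline's durable checkpoint storage.

```python
def run_with_checkpoint(blocks, process, checkpoint):
    """Process blocks in order, persisting progress after each success."""
    start = checkpoint.get("last_done", -1) + 1   # resume just past the checkpoint
    for i in range(start, len(blocks)):
        process(blocks[i])
        checkpoint["last_done"] = i               # record only after success
```

If `process` raises partway through, the checkpoint still points at the last block that completed, so a restart repeats at most the failed unit of work.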