The Pipeline API pushes structured blockchain data directly to your infrastructure. Instead of polling an API, you configure a pipeline once and data flows continuously into your database, warehouse, queue, or webhook - decoded, transformed, and ready to query.

Architecture

How It Works

1. Create a New Pipeline

Log in to the GoldRush Platform and navigate to Manage Pipelines. Click Create New Pipeline.

2. Select your Destination Type

Select a destination type - ClickHouse, Postgres, Kafka, Object Storage (S3/GCS/R2), AWS SQS, or Webhook - and provide your connection credentials. You can also test whether the Pipeline API can connect to your destination before proceeding.

3. Select your Data Object Type

Choose a chain and data type. Pick from event logs, transactions, token transfers, or protocol-specific data streams such as Hyperliquid fills or Solana DEX trades.

4. Select your Data Range

Configure the block range for your pipeline. A pipeline can run in bounded mode (with start and end block heights) or in unbounded mode for continuous data delivery.

5. Apply SQL Transforms and ABI Decoding

Optionally add SQL transforms to filter rows, select specific columns, or compute derived fields before data reaches your destination. Additionally, you can provide ABI definitions to decode raw event logs or call data into structured, typed columns.

6. Deploy and Monitor

Review your configuration, deploy the pipeline, and monitor throughput and status from the Pipelines dashboard. Data begins flowing in real time.

Destination Types

ClickHouse

Stream data into ClickHouse for high-performance analytical queries over large volumes of blockchain data.

Postgres

Push decoded blockchain data into Postgres for application backends and transactional workloads.

Kafka

Publish raw data to Kafka topics for downstream consumers, stream processing, and event-driven architectures.

Object Storage

Write data to S3, GCS, or Cloudflare R2 as Parquet or JSON files for data lake and batch processing workflows.

AWS SQS

Deliver events to SQS queues for reliable, decoupled message processing with at-least-once delivery.
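At-least-once delivery means the same event can occasionally be delivered more than once, so SQS consumers should process messages idempotently. A minimal pure-Python sketch of the deduplication pattern, assuming a hypothetical unique `event_id` field on each message (a production consumer would persist seen IDs rather than keep them in memory):

```python
import json


def make_handler(process):
    """Wrap a processing function with in-memory deduplication.

    Assumes each delivered message carries a unique `event_id` field.
    Duplicate deliveries are acknowledged but not reprocessed.
    """
    seen = set()

    def handle(raw_message):
        event = json.loads(raw_message)
        if event["event_id"] in seen:
            return False  # duplicate delivery; already processed
        seen.add(event["event_id"])
        process(event)
        return True

    return handle


# The same message delivered twice is processed exactly once.
processed = []
handle = make_handler(processed.append)
msg = json.dumps({"event_id": "tx-1", "value": 42})
handle(msg)  # processed
handle(msg)  # ignored as duplicate
print(len(processed))  # 1
```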

Webhook

Receive HTTP POST callbacks for each event, enabling lightweight integrations without managing infrastructure.
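A webhook destination only needs an HTTP endpoint that accepts POSTs and returns a 2xx status. The sketch below is a minimal stdlib-only receiver; the JSON payload shape is an assumption for illustration, not the documented Pipeline API event format:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class PipelineWebhook(BaseHTTPRequestHandler):
    """Accept one JSON event per POST and acknowledge with 200."""

    received = []  # collected events (demo only)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        PipelineWebhook.received.append(event)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging


# To run: HTTPServer(("127.0.0.1", 8080), PipelineWebhook).serve_forever()
```

Returning a non-2xx status typically signals the sender to retry, so keep handlers fast and defer heavy processing to a queue or worker.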

Key Features

ABI Decoding

Automatically decode raw EVM event logs into structured, typed columns using standard Solidity ABI definitions.
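The Pipeline API performs this decoding server-side; the pure-Python sketch below just illustrates what ABI decoding produces for the standard ERC-20 `Transfer(address,address,uint256)` event, where the indexed parameters arrive in the log's topics and the amount arrives ABI-encoded in the data field:

```python
def decode_transfer_log(topics, data):
    """Decode an ERC-20 Transfer(address,address,uint256) event log.

    Indexed parameters (from, to) are in `topics` as 32-byte
    left-padded values; the address is the last 20 bytes (40 hex
    chars). The non-indexed amount is ABI-encoded in `data`.
    """
    # keccak256("Transfer(address,address,uint256)")
    transfer_topic = (
        "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
    )
    assert topics[0] == transfer_topic, "not a Transfer event"
    return {
        "from": "0x" + topics[1][-40:],
        "to": "0x" + topics[2][-40:],
        "value": int(data, 16),
    }


decoded = decode_transfer_log(
    topics=[
        "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
        "0x000000000000000000000000a9d1e08c7793af67e9d92fe308d5697fb81d3e43",
        "0x00000000000000000000000028c6c06298d514db089934071355e5743bf21d60",
    ],
    data="0x0000000000000000000000000000000000000000000000000de0b6b3a7640000",
)
print(decoded["value"])  # 1000000000000000000 (1 token at 18 decimals)
```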

SQL Transforms

Apply SQL expressions to filter rows, select specific columns, or compute derived fields before data reaches your destination. Reduce storage costs and processing overhead at the source.
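For illustration only, here is the effect of a typical transform (filter rows, derive a column) expressed in plain Python; the row shape and SQL snippet in the comment are hypothetical, not the Pipeline API's documented schema or dialect:

```python
rows = [
    {"tx_hash": "0xaa", "value_wei": 10**18},
    {"tx_hash": "0xbb", "value_wei": 0},
    {"tx_hash": "0xcc", "value_wei": 5 * 10**17},
]

# Equivalent in spirit to a SQL transform such as:
#   SELECT tx_hash, value_wei / 1e18 AS value_eth
#   FROM source
#   WHERE value_wei > 0
transformed = [
    {"tx_hash": r["tx_hash"], "value_eth": r["value_wei"] / 1e18}
    for r in rows
    if r["value_wei"] > 0
]
print(transformed)
# [{'tx_hash': '0xaa', 'value_eth': 1.0}, {'tx_hash': '0xcc', 'value_eth': 0.5}]
```

Because the filtering happens before delivery, rows dropped here never consume storage or bandwidth at the destination.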

Protocol-Native Schemas

Purpose-built normalizers for Hyperliquid and Solana DEX data produce clean, protocol-aware tables rather than raw byte arrays.

Supported Chains

| Chain | Status |
| --- | --- |
| Base | Live |
| Hyperliquid | Live |
| Solana | Live |
| Polygon | Coming Soon |
| Arbitrum | Coming Soon |
| Ethereum Mainnet | Coming Soon |
For the full list of supported chains and available data entities, see the Supported Chains page.

Pipeline API vs Foundational and Streaming

Each GoldRush API serves a different access pattern. Choose based on how your application consumes data.
| | Foundational API | Streaming API | Pipeline API |
| --- | --- | --- | --- |
| Pattern | Pull (REST) | Push (WebSocket) | Push (to your infra) |
| Best for | On-demand queries, wallet lookups, historical data | Real-time UI updates, trading bots, live feeds | Data warehousing, analytics, ETL, backend indexing |
| Data delivery | Request/response | WebSocket subscription | Continuous destination delivery |
| Destination | Your application | Your application | Your database, warehouse, queue, or webhook |
| Transforms | None (query parameters only) | None (filter on subscribe) | SQL transforms, ABI decoding, normalizers |
Use the Foundational API when you need data on demand. Use the Streaming API when you need real-time events in your application. Use the Pipeline API when you need blockchain data flowing continuously into your own infrastructure.