## Use Case

Uniswap V3 is one of the highest-volume DEX protocols on Base. If you are building trading analytics, monitoring liquidity, or tracking volume trends, you need a reliable stream of decoded swap events landing in a queryable database. This guide walks you through configuring a pipeline that:

- Subscribes to decoded log events on Base Mainnet
- Filters for the Uniswap V3 `Swap` event using ABI decoding
- Writes structured swap rows into a Postgres table
## Prerequisites

- A GoldRush API key with Pipeline API access
- A Postgres database reachable from the internet (or via VPN peering)
- The Uniswap V3 Pool ABI (the `Swap` event signature)
## Pipeline Configuration

The following YAML is generated by the GoldRush Platform when you complete the setup steps below.

### Key Configuration Details
| Field | Purpose |
|---|---|
| `topic` | `base.mainnet.ref.block.logs` delivers all decoded log events on Base Mainnet. |
| `abi.path` | Points to the Uniswap V3 Pool ABI. The platform provides common ABIs; you can also upload your own. |
| `abi.contract_addresses` | Restricts decoding to the specified contract. Add more addresses to cover additional pools. |
| `abi.unmatched` | Set to `skip` so that log events not matching the ABI are silently dropped. |
| `transforms.base_evt_swap` | A SQL transform that selects specific columns from the decoded `evt_swap` table. |
| `destination.batch_size` | Rows are flushed to Postgres in batches of 1,000 for throughput. |
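Assembled from the fields above, the generated configuration can be expected to look roughly like the sketch below. The overall key layout, the ABI file path, and the `PG_JDBC_URL` variable name are illustrative assumptions; the topic, contract address, `unmatched` setting, and batch size come from this guide.

```yaml
# Sketch of a generated pipeline configuration (key layout is illustrative)
pipeline:
  topic: base.mainnet.ref.block.logs          # all decoded log events on Base Mainnet
  abi:
    path: abis/uniswap-v3-pool.json           # illustrative path to the pool ABI
    contract_addresses:
      - "0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45"
    unmatched: skip                           # drop logs that do not match the ABI
  transforms:
    base_evt_swap: |
      SELECT block_height, tx_hash, contract_address,
             sender, recipient, amount0, amount1
      FROM evt_swap
  destination:
    type: postgres
    url: ${PG_JDBC_URL}                       # illustrative variable name
    user: ${PG_USER}
    password: ${PG_PASSWORD}
    batch_size: 1000
```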
## ABI Snippet

The pipeline uses the `Swap` event from the Uniswap V3 Pool contract:
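The JSON ABI entry for this event, as published in the Uniswap `v3-core` pool ABI, is:

```json
{
  "anonymous": false,
  "inputs": [
    { "indexed": true,  "internalType": "address", "name": "sender",       "type": "address" },
    { "indexed": true,  "internalType": "address", "name": "recipient",    "type": "address" },
    { "indexed": false, "internalType": "int256",  "name": "amount0",      "type": "int256" },
    { "indexed": false, "internalType": "int256",  "name": "amount1",      "type": "int256" },
    { "indexed": false, "internalType": "uint160", "name": "sqrtPriceX96", "type": "uint160" },
    { "indexed": false, "internalType": "uint128", "name": "liquidity",    "type": "uint128" },
    { "indexed": false, "internalType": "int24",   "name": "tick",         "type": "int24" }
  ],
  "name": "Swap",
  "type": "event"
}
```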
## Deploy via the GoldRush Platform
### Create a new pipeline
Log in to the GoldRush Platform and navigate to Manage Pipeline API in the left sidebar. Click Create Pipeline.
### Configure the Postgres destination
Select Postgres as the destination type. Enter your JDBC connection URL, and use environment variable references (`${PG_USER}`, `${PG_PASSWORD}`) for credentials. Set Batch size to 1000.
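As an illustration, a JDBC URL of this shape would be typical (host, port, and database name below are placeholders):

```text
jdbc:postgresql://db.example.com:5432/base_swaps
```

Keeping `${PG_USER}` and `${PG_PASSWORD}` as environment variable references means credentials never appear in the stored pipeline definition.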
### Select source topic

Choose Base Mainnet as the network and Block Logs (`base.mainnet.ref.block.logs`) as the data topic.
### Configure ABI decoding

In the ABI Decoding section, select the built-in Uniswap V3 Pool ABI or upload your own JSON ABI file. Enter the contract address `0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45`. Set Unmatched events to Skip.
### Add a SQL transform

In the Transforms section, add a transform for the `base_evt_swap` output table. Paste the SQL SELECT statement from the configuration above to project only the columns you need.
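A transform along these lines fits this step. The column names are assumptions based on the decoded `Swap` event fields; check the generated `evt_swap` schema for the exact names:

```sql
-- Project only the columns needed for swap analytics
SELECT
  block_height,
  tx_hash,
  contract_address,   -- the pool that emitted the event
  sender,
  recipient,
  amount0,
  amount1,
  sqrt_price_x96,
  liquidity,
  tick
FROM evt_swap;
```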
## Verify the Pipeline

Once the pipeline is running, connect to your Postgres database and confirm that data is arriving:
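Assuming the output table is named `base_evt_swap` as configured in the transform step, a quick sanity check could look like:

```sql
-- Row count and the most recent block ingested
SELECT COUNT(*) AS swap_rows,
       MAX(block_height) AS latest_block
FROM base_evt_swap;

-- Inspect a few recent swaps
SELECT block_height, tx_hash, sender, recipient, amount0, amount1
FROM base_evt_swap
ORDER BY block_height DESC
LIMIT 5;
```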
## Tips

### Scaling for production
Increase `batch_size` to 5,000 or higher if your Postgres instance can handle larger write batches. This reduces the number of round trips and improves throughput during high-volume periods.
### Monitoring multiple pools
Add additional contract addresses to the `abi.contract_addresses` list to capture swaps across multiple Uniswap V3 pools in a single pipeline.
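In the configuration this is a simple list extension; the second address below is a placeholder for whichever additional pool you want to track:

```yaml
abi:
  contract_addresses:
    - "0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45"
    - "0x<additional-pool-address>"   # placeholder
```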
### Indexing for query performance
Create indexes on `block_height`, `tx_hash`, and `contract_address` in your Postgres table to speed up common analytical queries.
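For example, with the output table named `base_evt_swap` as above (index names are illustrative):

```sql
CREATE INDEX IF NOT EXISTS idx_swap_block    ON base_evt_swap (block_height);
CREATE INDEX IF NOT EXISTS idx_swap_tx       ON base_evt_swap (tx_hash);
CREATE INDEX IF NOT EXISTS idx_swap_contract ON base_evt_swap (contract_address);
```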
### Handling schema changes
If the ABI changes (for example, a new version of the pool contract), create a new pipeline with the updated ABI. The platform does not modify existing table schemas automatically.