Use Case

Uniswap V3 is one of the highest-volume DEX protocols on Base. If you are building trading analytics, monitoring liquidity, or tracking volume trends, you need a reliable stream of decoded swap events landing in a queryable database. This guide walks you through configuring a pipeline that:
  • Subscribes to decoded log events on Base Mainnet
  • Filters for the Uniswap V3 Swap event using ABI decoding
  • Writes structured swap rows into a Postgres table

Prerequisites

  • A GoldRush API key with Pipeline API access
  • A Postgres database reachable from the internet (or via VPN peering)
  • The Uniswap V3 Pool ABI (the Swap event signature)

Pipeline Configuration

The following YAML is generated by the GoldRush Platform when you complete the setup steps below.
project: "uniswap-analytics"
topic: "base.mainnet.ref.block.logs"

destination:
  type: "postgres"
  url: "postgresql://your-host:5432/analytics"
  user: "${PG_USER}"
  password: "${PG_PASSWORD}"
  batch_size: 1000

abi:
  path: "uniswap-v3-pool.json"
  contract_addresses:
    - "0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45"
  unmatched: "skip"

transforms:
  base_evt_swap: >
    SELECT block_height, block_signed_at, tx_hash, contract_address,
           sender, recipient, amount0, amount1, sqrt_price_x96, liquidity, tick
    FROM base_evt_swap

Key Configuration Details

  • topic: The base.mainnet.ref.block.logs topic delivers all decoded log events on Base Mainnet.
  • abi.path: Points to the Uniswap V3 Pool ABI. The platform provides common ABIs; you can also upload your own.
  • abi.contract_addresses: Restricts decoding to the specified contract. Add more addresses to cover additional pools.
  • abi.unmatched: Set to skip so that log events not matching the ABI are silently dropped.
  • transforms.base_evt_swap: A SQL transform that selects specific columns from the decoded base_evt_swap table.
  • destination.batch_size: Rows are flushed to Postgres in batches of 1,000 for throughput.
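The writer inside the Pipeline API is managed by the platform, so the following is only a standalone sketch of what batch_size controls: rows are buffered and flushed in a single multi-row write per batch rather than one round trip per row. Python's stdlib sqlite3 stands in for Postgres here; table and column names are illustrative.

```python
import sqlite3

# Illustration only: buffer incoming rows and flush them with one
# executemany() round trip per BATCH_SIZE rows, mirroring the effect
# of destination.batch_size in the pipeline configuration.
BATCH_SIZE = 1000

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE base_evt_swap (block_height INTEGER, tx_hash TEXT)")

buffer, flushes = [], 0

def flush():
    global flushes
    if buffer:
        conn.executemany("INSERT INTO base_evt_swap VALUES (?, ?)", buffer)
        buffer.clear()
        flushes += 1

for i in range(2500):  # simulate 2,500 incoming swap events
    buffer.append((i, f"0x{i:064x}"))
    if len(buffer) >= BATCH_SIZE:
        flush()
flush()  # flush the partial final batch

print(flushes)  # 3 batched writes instead of 2,500 single-row inserts
```

With a batch size of 1,000, the 2,500 simulated events produce only three writes, which is why raising batch_size improves throughput during high-volume periods.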

ABI Snippet

The pipeline uses the Swap event from the Uniswap V3 Pool contract:
{
  "anonymous": false,
  "inputs": [
    { "indexed": true, "name": "sender", "type": "address" },
    { "indexed": true, "name": "recipient", "type": "address" },
    { "indexed": false, "name": "amount0", "type": "int256" },
    { "indexed": false, "name": "amount1", "type": "int256" },
    { "indexed": false, "name": "sqrtPriceX96", "type": "uint160" },
    { "indexed": false, "name": "liquidity", "type": "uint128" },
    { "indexed": false, "name": "tick", "type": "int24" }
  ],
  "name": "Swap",
  "type": "event"
}
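Under the hood, a decoder matches a log to this ABI entry by comparing the log's first topic (topic0) against the keccak-256 hash of the event's canonical signature. A minimal Python sketch of how that signature string is derived from the JSON above (the hash itself is omitted, since keccak-256 is not in the standard library):

```python
import json

# The Swap event ABI entry from above.
abi_entry = json.loads("""
{
  "anonymous": false,
  "inputs": [
    { "indexed": true,  "name": "sender",       "type": "address" },
    { "indexed": true,  "name": "recipient",    "type": "address" },
    { "indexed": false, "name": "amount0",      "type": "int256"  },
    { "indexed": false, "name": "amount1",      "type": "int256"  },
    { "indexed": false, "name": "sqrtPriceX96", "type": "uint160" },
    { "indexed": false, "name": "liquidity",    "type": "uint128" },
    { "indexed": false, "name": "tick",         "type": "int24"   }
  ],
  "name": "Swap",
  "type": "event"
}
""")

def canonical_signature(entry: dict) -> str:
    """Build the canonical event signature; keccak-256 of this
    string is what appears as topic0 on matching logs."""
    types = ",".join(inp["type"] for inp in entry["inputs"])
    return f'{entry["name"]}({types})'

print(canonical_signature(abi_entry))
# Swap(address,address,int256,int256,uint160,uint128,int24)
```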

Deploy via the GoldRush Platform

1. Create a new pipeline

Log in to the GoldRush Platform and navigate to Manage Pipeline API in the left sidebar. Click Create Pipeline.

2. Configure the Postgres destination

Select Postgres as the destination type. Enter your Postgres connection URL, and use environment variable references (${PG_USER}, ${PG_PASSWORD}) for credentials so that secrets stay out of the configuration. Set Batch size to 1000.

3. Select the source topic

Choose Base Mainnet as the network and Block Logs (base.mainnet.ref.block.logs) as the data topic.

4. Configure ABI decoding

In the ABI Decoding section, select the built-in Uniswap V3 Pool ABI or upload your own JSON ABI file. Enter the contract address 0x68b3465833fb72a70ecdf485e0e4c7bd8665fc45. Set Unmatched events to Skip.

5. Add a SQL transform

In the Transforms section, add a transform for the base_evt_swap output table. Paste the SQL SELECT statement from the configuration above to project only the columns you need.

6. Deploy

Review the generated YAML configuration, then click Create. The pipeline begins consuming events within a few seconds.

Verify the Pipeline

Once the pipeline is running, connect to your Postgres database and confirm that data is arriving:
-- Check row count
SELECT COUNT(*) FROM base_evt_swap;

-- View the most recent swaps
SELECT block_height, tx_hash, sender, recipient,
       amount0, amount1, tick
FROM base_evt_swap
ORDER BY block_height DESC
LIMIT 10;

-- Aggregate swap volume over the last hour
SELECT DATE_TRUNC('minute', block_signed_at) AS minute,
       COUNT(*) AS swap_count,
       SUM(ABS(amount0)) AS total_amount0
FROM base_evt_swap
WHERE block_signed_at > NOW() - INTERVAL '1 hour'
GROUP BY 1
ORDER BY 1 DESC;
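Note that amount0 and amount1 arrive as raw int256 values denominated in each token's smallest unit, so downstream analytics usually rescale them by the token's decimals. A small Python helper sketch; the decimals and sample values below are assumptions for illustration (an 18-decimal token0 and a 6-decimal token1), not values taken from the pipeline:

```python
from decimal import Decimal

def to_token_units(raw_amount: int, decimals: int) -> Decimal:
    """Scale a raw int256 amount into human-readable token units.
    Sign is preserved: in Uniswap V3's convention, positive amounts
    flow into the pool and negative amounts flow out to the recipient."""
    return Decimal(raw_amount) / Decimal(10) ** decimals

# Hypothetical swap row; decimals are assumptions for illustration.
amount0, amount1 = -1_500_000_000_000_000_000, 4_200_000_000

print(to_token_units(amount0, 18))  # -1.5  (token0 out of the pool)
print(to_token_units(amount1, 6))   # 4200  (token1 into the pool)
```

Using Decimal rather than float avoids precision loss on large int256 amounts.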

Tips

  • Increase batch_size to 5,000 or higher if your Postgres instance can handle larger write batches. This reduces the number of round trips and improves throughput during high-volume periods.
  • Add additional contract addresses to the abi.contract_addresses list to capture swaps across multiple Uniswap V3 pools in a single pipeline.
  • Create indexes on block_height, tx_hash, and contract_address in your Postgres table to speed up common analytical queries.
  • If the ABI changes (for example, a new version of the pool contract), create a new pipeline with the updated ABI. The platform does not modify existing table schemas automatically.