Build a Real-Time Agent on Base with GoldRush (Part 1) — Set Up Streaming Data and Configure a Deterministic Signal

Joseph Appolos
Content Writer
A guide to building a real-time agent on Base using GoldRush, Part One: connect streaming data, define a deterministic signal, and prep your pipeline for automation.

Over the past year, Base has gone from “promising L2” to a throughput workhorse. In July 2025, Dune’s analysis estimated that Base was contributing over 80% of L2 transaction fees, a proxy for real usage and demand.

Around the same time, Base rolled out Flashblocks, reducing block times from roughly two seconds to about 200 milliseconds, which significantly improves the speed at which on-chain systems can react. Pair that with GoldRush’s sub-second streaming feed (GraphQL over WebSockets), and you finally have the ingredients for practical, latency-aware AI agents that make decisions on fresh data—on Base, in production.

This post is a hands-on guide to building one of those agents end-to-end on Base. We’ll tap GoldRush for real-time market signals (GraphQL over WebSockets), convert those updates into a transparent prediction rule, and wire guarded contract calls that respect Base’s fee model. The aim is practical: a minimal agent you can read, run, and extend, grounded in the actual network settings, Flashblocks endpoints, and the data plane you’ll use in production.

Why Base + Why Now

Base is an OP Stack Layer-2—EVM-compatible, inexpensive, and fast enough that “reactive” logic actually pays off, which makes it great for bots and agents that need to try ideas without burning a hole in your wallet. The Flashblocks integration (built with Flashbots) streams sub-blocks at roughly 200 ms latency, improving effective confirmation times by up to 10x.

Just as important: there’s already meaningful agent activity on Base. Coinbase’s developer team reports that over 20,000 agents have been deployed via AgentKit, and more than 600,000 on-chain transactions have been executed on Base and Base Sepolia as of April 14, 2025, a clear signal that agents are already active on this stack.

To cap it off, Covalent opened the GoldRush Streaming API to public beta in July 2025 with sub-second updates, then followed in August with “Speed Runs on Base.” Together, these make the Base ecosystem especially attractive for agents.

GoldRush (Covalent) gives you two data planes:

  1. Streaming API (GraphQL over WebSockets): sub-second, structured events like OHLCV, new DEX pairs, wallet activity, and token balances. You subscribe and get pushed updates with no polling.

  2. Foundational REST API: historical, decoded data across 100+ chains (transactions v3, logs, balances, etc.). Use it for backtests, PnL attribution, and audits.
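To make the second plane concrete, here is a minimal sketch of pulling decoded history over HTTPS for a backtest or audit job. The path follows the transactions_v3 pattern named above; the exact base URL, chain slug, and Bearer-auth header are assumptions to verify against the current GoldRush reference:

```typescript
// Hypothetical helper: build a Foundational API URL for decoded history.
// The path follows the transactions_v3 pattern; verify the base URL and
// chain slug against the current GoldRush docs before use.
export function txHistoryUrl(chainName: string, address: string): string {
  return `https://api.covalenthq.com/v1/${chainName}/address/${address}/transactions_v3/`;
}

// Node 18+ ships a global fetch; the Bearer header shown here is an assumption.
export async function fetchTxHistory(chainName: string, address: string, apiKey: string) {
  const res = await fetch(txHistoryUrl(chainName, address), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Foundational API error: ${res.status}`);
  return res.json();
}
```

Use the Streaming API for the live loop and calls like this only for offline jobs, so rate limits never touch the hot path.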

What “AI agent” means here (and what it does)

In Web3, an AI agent isn’t just a model; it’s a program that perceives on-chain state, makes decisions based on a policy, and executes transactions with a bounded identity (an EOA or smart account). The chain provides verifiability (every action is signed and auditable), composability (it can call any contract it’s authorized to), and accountability (including permissions, limits, and pause controls).

Most practical agents run off-chain loops that watch live data and submit on-chain intents when rules are met. The “AI” part can be as simple as a statistical signal or as involved as an LLM planner; what matters is that decisions are reproducible, logged, and constrained by risk guards.

What the agent does in this guide

In this tutorial, an AI agent is a loop that:

  • Subscribes to live market/chain signals on Base (e.g., OHLCV for a pair).

  • Scores those signals with a transparent predictive rule (stat/ML).

  • Acts via smart-contract calls only if guards pass (fees, slippage, exposure caps).

You can integrate an LLM planner (e.g., LangChain.js or LangGraph) for multi-step tasks, which will be covered in later sections.
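The subscribe → score → act loop above can be sketched as a pure decision function. The Signal and Intent names and the fixed $100 size are illustrative, not part of any SDK:

```typescript
// Minimal sketch of the perceive → decide → act loop described above.
// Names (Signal, Intent, decide) are illustrative, not from the GoldRush SDK.
type Signal = { t: number; z: number; stale: boolean };
type Intent = { side: 'buy' | 'sell'; sizeUsd: number; reason: string } | null;

function decide(s: Signal, threshold = 2.0): Intent {
  if (s.stale) return null;                        // never act on stale data
  if (s.z > threshold)  return { side: 'sell', sizeUsd: 100, reason: `z=${s.z}` };
  if (s.z < -threshold) return { side: 'buy',  sizeUsd: 100, reason: `z=${s.z}` };
  return null;                                     // no edge: do nothing
}
```

Because `decide` is pure, the same inputs always produce the same intent—which is exactly the reproducibility property the paragraph above calls for.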

Where does GoldRush fit?

GoldRush is the data plane that lets the agent “see” in real time and “remember” accurately. The Streaming API pushes sub-second events, so the loop reacts to fresh signals instead of polling. For backtests, attribution, and audits, the Foundational API provides decoded historical data (transactions v3, logs, balances, traces) so you can measure what the agent actually did and why.

Beyond trading, you can automate risk by pausing or throttling when volatility or netflows spike, running inventory-aware market making (widen spreads as volatility or gas rises), and catching anomalies like unusual wallet bursts or contract call patterns for operations and governance. GoldRush’s streams supply the real-time signals these workflows need.

Agent Architecture: Roles, Interfaces & Guardrails

Structure the agent as a small set of responsibilities with clear interfaces. This keeps decisions explainable, failures easy to isolate, and upgrades (including LLM planning) low-risk. These parts include:

1) Stream Ingestion (real-time)

The primary task here is to maintain a single WebSocket connection to the GoldRush stream and normalize events into a compact envelope that your code understands. No model logic is needed here, just parsing, clocking, and handing off. Pipe these into a bounded ring buffer (e.g., 1–5 minutes of ticks). If the stream hiccups, you drop the oldest entry first and emit a “data_stale” flag (more on this in subsequent sections), so the rest of the system knows to stand down.
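A minimal sketch of that bounded buffer, assuming millisecond bar timestamps: drop-oldest on overflow, plus the staleness check the rest of the system reads.

```typescript
// Bounded ring buffer sketch for stream ingestion (illustrative names).
// Holds the most recent ticks; overflow drops the oldest entry first.
class TickBuffer<T extends { t: number }> {
  private buf: T[] = [];
  constructor(private capacity: number, private staleMs: number) {}

  push(tick: T) {
    this.buf.push(tick);
    if (this.buf.length > this.capacity) this.buf.shift(); // drop oldest first
  }

  // "data_stale" interlock: true when the newest tick is too old to trust
  isStale(nowMs: number): boolean {
    const last = this.buf[this.buf.length - 1];
    return !last || nowMs - last.t > this.staleMs;
  }

  size() { return this.buf.length; }
}
```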

2) Feature Store

Converts raw events into lightweight features that the policy can consume. These include: rolling returns and volatility, liquidity depth/impact estimates, wallet net flows, and recent fill status. Version the feature set and snapshot state on a cadence, so any decision is reproducible.

3) Policy Engine

This is the brain, but it is still small. It consumes features and outputs intents with reasons attached. Intents are not transactions; they’re proposals with a confidence score and the exact checks they passed/failed. Later, when you add an LLM planner, it should write the same Intent shape so you can A/B the logic without touching execution.
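One possible Intent shape, with field names as assumptions; the key property is that an intent carries its checks and reasons, not a signed transaction:

```typescript
// Illustrative Intent shape: a proposal with attached evidence, not a transaction.
type Check = { name: string; passed: boolean };

interface Intent {
  pair: string;            // pool/pair address the intent targets
  side: 'buy' | 'sell';
  notionalUsd: number;
  confidence: number;      // 0..1, from the policy
  checks: Check[];         // the exact guards evaluated, pass/fail
  reason: string;          // human-readable rationale for the ledger
  createdAt: number;       // bar timestamp the intent was derived from
}

// Execution only sees intents whose every check passed
const approved = (i: Intent) => i.checks.every(c => c.passed);
```

An LLM planner added later would emit this same shape, so execution code never needs to know which brain produced the proposal.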

4) Transaction Pipeline

A narrow service that turns a valid Intent into a signed transaction. It handles:

  • Idempotency: hash(intent) as a key; refuse duplicates within a window.

  • Quoting & slippage limits: fetches quotes just-in-time; aborts on drift.

  • Timeouts & backoff: If a transaction isn’t mined quickly enough, cancel or reduce its size.

  • Dry-run mode: same path, but it logs instead of sending—great for canaries.
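The idempotency rule can be sketched with a content hash over the intent and a sliding window; the names and the 60-second window are illustrative:

```typescript
// Idempotency sketch: hash(intent) as the key; refuse duplicates within a window.
import { createHash } from 'node:crypto';

const seen = new Map<string, number>(); // key -> first-seen timestamp (ms)

function intentKey(intent: object): string {
  return createHash('sha256').update(JSON.stringify(intent)).digest('hex');
}

// Returns true if the intent may proceed; false if it is a duplicate in-window
function admit(intent: object, nowMs: number, windowMs = 60_000): boolean {
  const key = intentKey(intent);
  const first = seen.get(key);
  if (first !== undefined && nowMs - first < windowMs) return false; // duplicate
  seen.set(key, nowMs);
  return true;
}
```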

5) Observability & Ledger

Record inputs, features, intent, reason codes, fees, fills, and realized PnL deltas in an append-only store. Publish live health metrics (hit rate, PnL per gas, stale-feed ratio, cancel rate). This layer serves as the basis for reviews, incident analysis, and model updates.
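A minimal append-only ledger sketch with one derived health metric; the field names are assumptions you would extend with fees, fills, and gas data:

```typescript
// Illustrative append-only decision ledger; field names are assumptions.
interface DecisionRecord {
  barTime: number;                    // anchor every row to the bar timestamp
  features: Record<string, number>;   // snapshot used for the decision
  intent: string | null;              // serialized intent, or null if none fired
  reasonCodes: string[];
  realizedPnlUsd?: number;            // present once the outcome is known
}

const ledger: DecisionRecord[] = [];

// Append-only: past rows are never mutated after the fact
function record(r: DecisionRecord) { ledger.push(r); }

// Example health metric: share of closed decisions with positive PnL
function hitRate(): number {
  const closed = ledger.filter(r => r.realizedPnlUsd !== undefined);
  if (closed.length === 0) return 0;
  return closed.filter(r => r.realizedPnlUsd! > 0).length / closed.length;
}
```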

6) Safety layer

This is the global kill switch and the set of hard limits, including notional caps, maximum transactions per minute, allow-lists for routers/contracts, and the staleness interlock from the ingestion layer. When any guard trips, new intents are quarantined until conditions clear or a human approves an override.
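A sketch of that guard evaluation, assuming the limits named above; in production these values would come from config, not constants:

```typescript
// Guard sketch: hard limits evaluated before any intent leaves quarantine.
// Field names are illustrative; load real values from config.
interface Guards {
  maxNotionalUsd: number;
  maxTxPerMinute: number;
  allowedRouters: Set<string>;  // lowercase addresses
  dataStale: boolean;           // staleness interlock from ingestion
}

function guardsPass(g: Guards, notionalUsd: number, router: string, txLastMinute: number): boolean {
  if (g.dataStale) return false;                              // stand down on stale data
  if (notionalUsd > g.maxNotionalUsd) return false;           // notional cap
  if (txLastMinute >= g.maxTxPerMinute) return false;         // rate limit
  if (!g.allowedRouters.has(router.toLowerCase())) return false; // allow-list
  return true;
}
```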

How it runs, day to day

A single process can host all six for small bots, but the interfaces let you scale: you can put the Ingestor on its own worker, shard by pair/wallet key, or swap the ring buffer for a queue. The boundaries stay the same, which keeps your debugging short and your future upgrades painless.

Building the Agent—Step-by-Step Process: Streaming Data → Deterministic Signal

1) Set up your environment

Prerequisites

  1. Node 18+ and a package manager (npm/yarn/pnpm). The GoldRush SDK recommends Node v18 or newer.

  2. RPC access for Base: use a provider for Base Mainnet (chainId 8453) or Base Sepolia (chainId 84532). For production, refer to the Base docs for the list of node providers.

  3. Wallet & keys: a funded EOA/smart account (start on Base Sepolia to test safely).

  4. GoldRush API key for Streaming + Foundational.

  5. Packages: @covalenthq/client-sdk, ethers, and (if rolling your own WS) graphql-ws. The SDK manages WebSocket connections for Streaming.

Install:

npm i @covalenthq/client-sdk ethers graphql-ws
npm i -D typescript tsx @types/node

Create .env:

GOLDRUSH_API_KEY=your_key
BASE_RPC_URL=your_provider_url
PAIR_ADDRESS=0xYourBasePoolOrPairContract
BOT_WALLET_PK=0xabc...   # test key; never commit

2) Live data: subscribe to OHLCV on Base

Before we dive into the actual code, it’s worth aligning on what you’re actually streaming, how GoldRush delivers it, and what “good” looks like in production.

What “OHLCV” means in this context

OHLCV stands for Open, High, Low, Close, Volume—a candlestick that aggregates many swaps into a single bar. On Base, these bars are derived from DEX pools (e.g., Uniswap v3). Each bar represents a fixed window (e.g., 1 minute) for a specific pool/pair contract.

GoldRush’s OHLCV Pairs stream expects a pool/pair contract address, not a token address. The pool address is typically listed on the DEX pool page, and it's a good idea to keep a small map of token addresses/decimals (such as WETH, USDC, or any other token you’re working with) for later trading logic.
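The token map mentioned above might look like the sketch below. The WETH address is the OP Stack predeploy on Base; treat anything else as a placeholder and confirm every address on an explorer before trading:

```typescript
// Small token map for later trading logic. WETH is the OP Stack predeploy on
// Base; the USDC entry is an explicit placeholder — verify on an explorer.
const TOKENS: Record<string, { address: string; decimals: number }> = {
  WETH: { address: '0x4200000000000000000000000000000000000006', decimals: 18 },
  USDC: { address: '0xYourVerifiedUsdcAddress', decimals: 6 }, // placeholder
};

// Convert a raw on-chain amount to a human-readable number.
// Fine for display/logging; use bigint math for actual trade sizing.
function toUnits(raw: bigint, decimals: number): number {
  return Number(raw) / 10 ** decimals;
}
```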

How the stream behaves

When you connect, you choose an interval and a timeframe. The timeframe triggers an initial backfill of historical bars that warms up your model, so you don’t start cold; then you switch to the live bar when you’re ready.

The GoldRush SDK multiplexes multiple subscriptions over a single WebSocket, allowing you to add Token Balances or Wallet Activity streams later without opening new sockets. Keep the event handler light—do heavy work off-thread if you scale up.

  1. Interval = the size of each candle (e.g., ONE_MINUTE).

  2. Timeframe = how many historical bars to send immediately on connect (e.g., one_hour gives ~60 bars).

Staying correct over time (ordering, staleness, replay)

You may occasionally see reconnects and duplicated or slightly out-of-order bars. Deduplicate by bar timestamp, and ignore out-of-order updates older than the last processed minute. Consider data stale if the latest bar is older than approximately twice your interval, and do not act until fresh bars are available.

Finally, some production hygiene:

  1. Use a real provider RPC URL in .env (check the Base docs for the official list); avoid the public, rate-limited endpoint.

  2. Confirm pool addresses on an explorer,

  3. Call unsubscribe() on shutdown to avoid dangling sockets, and

  4. Store only a bounded window (e.g., the last 60 bars) to keep memory predictable.

Tutorial

Step 1: Create the file src/stream.ts inside your environment.

This is a small module that opens one GoldRush WebSocket and normalizes OHLCV bars (in this case, 1-minute) for the specified Base pool/pair. It deduplicates by timestamp and returns an unsubscribe() for clean shutdowns.
P.S.: You have already set up the SDK and .env files in the prerequisites. We’ll re-use GOLDRUSH_API_KEY, BASE_RPC_URL, and PAIR_ADDRESS from there.
Paste this code into src/stream.ts:
import 'dotenv/config';
import {
  GoldRushClient,
  StreamingChain,
  StreamingInterval,
  StreamingTimeframe,
} from '@covalenthq/client-sdk';
export type OhlcvBar = { t:number; o:number; h:number; l:number; c:number; v?:number };

/**
 * Subscribe to 1m OHLCV for a Base pool/pair.
 * - Dedupes by bar timestamp (drop replays/out-of-order).
 * - Emits only the latest bar via onCandle.
 * - Returns unsubscribe() for clean shutdown.
 */
export function startOHLCVStream(
  pairAddress: string,
  onCandle: (bar: OhlcvBar) => void
) {
  const client = new GoldRushClient(process.env.GOLDRUSH_API_KEY!, {}, {
    onConnecting: () => console.log('[goldrush] connecting…'),
    onOpened:     () => console.log('[goldrush] connected'),
    onClosed:     () => console.log('[goldrush] closed'),
    onError:      (e) => console.error('[goldrush] error', e),
  });
  let lastTs = 0;
  const unsubscribe = client.StreamingService.subscribeToOHLCVPairs(
    {
      chain_name: StreamingChain.BASE_MAINNET,              // Base mainnet (chainId 8453)
      pair_addresses: [pairAddress],                        // MUST be a pool/pair address
      interval: StreamingInterval.ONE_MINUTE,               // 1m candles
      timeframe: StreamingTimeframe.ONE_HOUR,               // ~60 bars backfill
    },
    {
      next: (evt) => {
        const bar = evt?.data?.ohlcv?.[0];
        if (!bar) return;
        if (bar.t <= lastTs) return;                        // drop duplicates/out-of-order
        lastTs = bar.t;
        onCandle({ t: bar.t, o: bar.o, h: bar.h, l: bar.l, c: bar.c, v: bar.v });
      }
    }
  );
  return () => unsubscribe();
}
Step 2: Create the file src/agent.ts

This is a simple harness that imports startOHLCVStream, prints your subscription config before connecting so you can verify the target pair, and logs each 1-minute bar (timestamp + close) with a basic staleness check that flags when data falls behind.
Paste this code into src/agent.ts:
import 'dotenv/config';
import { startOHLCVStream } from './stream';
const PAIR = process.env.PAIR_ADDRESS ?? '';
if (!process.env.GOLDRUSH_API_KEY) throw new Error('Missing GOLDRUSH_API_KEY in .env');
if (!PAIR || !PAIR.startsWith('0x') || PAIR.length !== 42) {
  throw new Error('PAIR_ADDRESS must be a valid Base pool/pair contract address.');
}
// Log the exact config BEFORE connecting so you can verify it’s correct
const CONFIG = {
  chain: 'base-mainnet',
  interval: '1 minute',
  timeframe: 'last 1 hour (backfill)',
  pair: PAIR,
};
console.log('[config]', CONFIG);
let last = 0;
const TWO_MIN = 2 * 60_000;
// Start the subscription
const stop = startOHLCVStream(PAIR, (bar) => {
  const stale = last > 0 && bar.t - last > TWO_MIN;
  last = bar.t;
  console.log('[bar]', {
    time: new Date(bar.t).toISOString(),
    close: bar.c,
    stale
  });
});
// Clean shutdown
process.on('SIGINT', () => { stop(); process.exit(0); });
Step 3: Configure the TypeScript runner with tsx

Tell Node how to run src/agent.ts using tsx so you can see your OHLCV subscription working. Update the package.json in your project root (npm created it when you installed the prerequisites) so it includes the entries below; keep the dependencies npm added.
Note: tsx was installed as a prerequisite. If you see “tsx: not found,” install it with npm i -D tsx and try again.
{
  "name": "base-agent",
  "version": "1.0.0",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "tsx src/agent.ts",
    "typecheck": "tsc --noEmit"
  }
}
Step 4: Run and confirm

Run the harness from the project root:

npm run dev

You should see a single connection line, e.g., [goldrush] connected, then one [bar] line per minute. If your connection pauses for about two minutes, the next line should show stale: true, then return to false once fresh bars resume.
Step 5: Troubleshooting

  • “connected” but no bars: likely a token address; switch to a pool/pair contract address from the DEX pool page.
  • Auth error: confirm GOLDRUSH_API_KEY in .env.
  • Repeated reconnects: some networks/proxies block WebSockets—try another network.

3) Add a tiny predictive loop (z-score, deterministic)

Now that the stream is live, we’ll add a tiny, deterministic predictive loop: compute a rolling z-score on 1-minute returns and emit a per-bar signal (with a staleness guard). This remains simple by design and serves as the input to the cost gate in Part 2.

What is a z-score?

A z-score turns the latest log return into a standardized value relative to recent history:
z = (r_t − μ) / σ, where r_t = ln(close_t / close_{t−1}), and μ/σ are the rolling mean and standard deviation of returns over a fixed window (e.g., the last 60 one-minute bars).

Keep the window size fixed during a run for determinism; when σ ≈ 0 (indicating a flat market), treat the signal as neutral (z = 0). Anchor calculations to the bar timestamp from the stream and log (bar_time, z, stale) so every decision is replayable.

Why does this fit?

With GoldRush OHLCV, you already have clean, minute-bucketed candles on Base. That means no tick deduping or pool math in your model: push each candle, update μ/σ, and you have a consistent, latency-aware signal. Base’s fast confirmations make “reactive” strategies more viable; the z-score gives you a deterministic way to detect breakouts/regime shifts without heavy ML.

How it connects with the further parts

In Part 2, you’ll map |z| to an expected edge (bps), compare that to fees + slippage + a safety buffer, and only act when the inequality clears. In Part 3, you’ll record each (features → checks → action → PnL) in a decision ledger.
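As a preview, the Part 2 cost gate reduces to one inequality: act only when expected edge clears fees plus slippage plus a buffer, all in basis points. The linear z-to-bps mapping below is purely illustrative and must be calibrated per market:

```typescript
// Preview of the Part 2 cost gate (names and mapping are assumptions).
// Map |z| to an expected edge in basis points; calibrate bpsPerZ per market.
function expectedEdgeBps(z: number, bpsPerZ = 5): number {
  return Math.abs(z) * bpsPerZ;
}

// The one inequality guarding every trade: edge > fees + slippage + buffer
function shouldAct(z: number, feeBps: number, slippageBps: number, bufferBps = 2): boolean {
  return expectedEdgeBps(z) > feeBps + slippageBps + bufferBps;
}
```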

Tutorial

Step 1: Create the file src/predict.ts (the model)

It’s a tiny, pure TypeScript class that maintains a rolling window of log returns and emits the current z-score. (No SDK calls here, just math.) It keeps a bounded window of returns and computes a per-bar z-score that is deterministic, testable, and requires no external libraries.
P.S.: You already created src/stream.ts and confirmed live bars in the previous section; we’ll reuse it.
Paste this into src/predict.ts:
// File: src/predict.ts
// Purpose: Rolling z-score over OHLCV candles (deterministic, no external deps).
// Where your values come from:
// - LIVE: GoldRush OHLCV stream provides these fields directly: { t, o, h, l, c, v }.
//         Map them as: timestamp=t, open=o, high=h, low=l, close=c, volume=v.
// - MANUAL TEST (optional): you can paste a candle from a DEX pool page (time + OHLC),
//         but in real use, you won't type these—stream supplies them.
export type Candle = {
  timestamp: number;   // ms since epoch (from the stream's bar.t)
  open: number;
  high: number;
  low: number;
  close: number;
  volume?: number;
};
export class ZScoreModel {
  private closes: number[] = [];
  private rets: number[] = [];
  constructor(private maxBars = 60) {} // e.g., 60×1m = 1h window
  // Push a new candle; updates rolling closes/returns
  push(c: Candle) {
    const prev = this.closes[this.closes.length - 1];
    this.closes.push(c.close);
    if (this.closes.length > this.maxBars) this.closes.shift();
    if (prev !== undefined) {
      const r = Math.log(c.close / prev);  // log return
      this.rets.push(r);
      if (this.rets.length > this.maxBars - 1) this.rets.shift();
    }
  }
  // Current z-score of the most recent return vs recent history
  z(): number {
    if (this.rets.length < 5) return 0; // need a little history
    const n = this.rets.length;
    const mu = this.rets.reduce((a,b)=>a+b,0) / n;
    const varsum = this.rets.reduce((a,b)=>a + (b - mu) * (b - mu), 0);
    const sd = Math.sqrt(varsum / n);
    if (!sd) return 0;
    return (this.rets[n - 1] - mu) / sd;
  }
}
Step 2: Replace src/agent.ts (wire stream → model → signal)

Replace your current harness so that it imports the model, converts each streamed bar into a Candle, pushes it into the model, applies staleness, and logs a deterministic signal you can gate later.
If you want to keep the old harness for reference, rename it to src/agent.harness.ts. Your entry point remains src/agent.ts.
Paste the code block into src/agent.ts (replacing the previous harness):
import 'dotenv/config';
import { startOHLCVStream, OhlcvBar } from './stream';
import { ZScoreModel, Candle } from './predict';
const PAIR = process.env.PAIR_ADDRESS ?? '';
if (!process.env.GOLDRUSH_API_KEY) throw new Error('Missing GOLDRUSH_API_KEY in .env');
if (!PAIR || !PAIR.startsWith('0x') || PAIR.length !== 42) {
  throw new Error('PAIR_ADDRESS must be a valid Base pool/pair contract address.');
}

const ONE_MIN_MS = 60_000;
const STALE_MS   = 2 * ONE_MIN_MS;
// Model config (tune later per market)
const WINDOW_BARS = 60;       // 60×1m history
const Z_THRESHOLD = 2.0;      // example threshold for "interesting" moves
const model = new ZScoreModel(WINDOW_BARS);
let lastBarTs = 0;

// Helper: normalize a streamed bar to our Candle type
function toCandle(b: OhlcvBar): Candle {
  return { timestamp: b.t, open: b.o, high: b.h, low: b.l, close: b.c, volume: b.v };
}

// Log the config first so it’s clear what we’re running
console.log('[config]', {
  chain: 'base-mainnet',
  interval: '1 minute',
  window_bars: WINDOW_BARS,
  z_threshold: Z_THRESHOLD,
  pair: PAIR,
});

// Start the stream and compute z on each bar
const stop = startOHLCVStream(PAIR, (bar) => {
  const c = toCandle(bar);
  const stale = lastBarTs > 0 && c.timestamp - lastBarTs > STALE_MS;
  lastBarTs = c.timestamp;
  if (stale) {
    console.log('[signal]', { time: new Date(c.timestamp).toISOString(), stale: true, z: 0 });
    return;
  }

  model.push(c);
  const z = model.z();
  const payload = {
    time: new Date(c.timestamp).toISOString(),
    z: Number(z.toFixed(3)),
    stale: false,
    crosses_threshold: Math.abs(z) > Z_THRESHOLD
  };
  console.log('[signal]', payload);
});

// Clean shutdown
process.on('SIGINT', () => { stop(); process.exit(0); });
Step 3: Run the script

Run the harness from the project root:

npm run dev

Expect one [signal] line per minute. After about 60 bars (warm-up), z becomes responsive. If connectivity pauses for ~2 minutes, you should see stale: true on the next line.
Step 4: Troubleshooting

  • z sticks near 0: not enough history yet or σ≈0 (flat). Let it run.
  • Timestamps appear odd: ensure you’re logging the bar timestamp (from the stream), not your wall clock time.
  • No bars: double-check PAIR_ADDRESS is a pool/pair contract (not a token).

Closing thoughts

You’ve got the core loop running: a clean OHLCV stream on Base, a deterministic z-score over 1-minute returns, and a simple staleness guard so your agent only “thinks” on fresh data. It’s small on purpose—easy to reason about, trivial to replay, and ready to plug into execution logic without surprises.

What to do before Part 2

  • Swap PAIR_ADDRESS to a second pool and let both run for ~60 bars—watch how z behaves in different regimes.

  • Nudge WINDOW_BARS (e.g., 60 → 120) and Z_THRESHOLD (e.g., 2.0 → 2.5) to see how sensitivity changes.

  • Skim your logs: do the timestamps line up, does the stale flip as expected, and are the signals reproducible?

Next Step

When you’re satisfied the signal is stable, jump to Part 2: Cost-Gated Execution on Base. We’ll fetch firm quotes, convert fees and slippage to basis points (bps), enforce the one inequality that guards every trade, send transactions with ethers, and initiate a PnL backfill to verify results.
