Build a Real-Time Agent on Base with GoldRush (Part 3): Operational Hardening and Optional LLM Planning

Joseph Appolos
Content Writer
Harden your AI agent on Base with GoldRush—add monitoring, guardrails, and LangChain planning for smarter, safer on-chain execution.

By now, following the Part 1 and Part 2 guides, you’ve built a live agent that reads real-time OHLCV data from GoldRush Streaming, computes a z-score signal, checks cost and profitability, and executes trades safely on Base.
That’s a complete working loop — but not yet a production-ready one.

This final part focuses on the missing layer that turns your agent from functional into reliable: observability, security, and resilience.

We’ll also explore an optional step — integrating LangChain or LangGraph — to make your agent smarter, capable of planning multi-step actions with human approval.

Why Observability and Security Matter

Real-world agents don’t just need to act; they need to explain why they acted, prove how they acted, and recover gracefully when something fails. When you’re dealing with live data streams and on-chain transactions, even a small oversight (like a missed heartbeat or stale feed) can mean real losses. That’s why monitoring, transparency, and control aren’t extras; they’re the foundation for trust.

Observability means the agent constantly monitors its own behavior — not just logging prices, but recording the “why” behind every decision: signal strength, fees, slippage, safety buffers, and transaction outcomes. These logs serve as a decision ledger, allowing you to audit, replay, or explain every action.
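Concretely, a single ledger entry might look like the following sketch (hypothetical values; the exact shape is defined in Step 1 below):

```ts
// One illustrative decision record: the "why" behind a skipped trade
// (guard and tx fields omitted for brevity)
const entry = {
  ts: "2025-03-01T12:00:00.000Z",
  pair: "0xPoolAddress",
  features: { z: 1.2, regime: "normal", stale: false },
  costs: { fee_bps: 5, slippage_bps: 20, safety_bps: 10, hurdle_bps: 35 },
  edge_bps: 28, // below the 35 bps hurdle, so:
  action: "skip",
  reason: "edge_below_hurdle",
};
```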

Networks also hiccup. Even performant L2s see periodic congestion and elevated RPC latency, which can delay inclusion or deliver receipts out of order; your monitoring needs to detect that staleness and fail closed when it happens.

Security, on the other hand, is about protecting the system from both internal and external risks. That includes separating execution keys from logic, applying transaction caps, staleness guards, and allow-lists, and building a global kill switch that halts operations if things go wrong.

And the security backdrop is unforgiving. Independent reports put losses from crypto hacks and scams at over $2B across 2024–2025, with a large share tied to compromised wallets and keys, a reminder that key hygiene, allow-lists, and kill switches aren’t optional. Pair that with well-known on-chain patterns like role-based access control and Pausable on your executors to contain the blast radius.
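On the off-chain side, the simplest key-hygiene pattern is to construct the signer in exactly one module, so strategy code never touches the private key. A minimal sketch, assuming ethers v6 as in Part 2 (the file and helper names are illustrative; BOT_WALLET_PK is the executor key from .env):

```ts
// key.ts (hypothetical): the only module that ever reads the private key
import { ethers } from "ethers";

export function getExecutorSigner(provider: ethers.Provider): ethers.Wallet {
  const pk = process.env.BOT_WALLET_PK; // low-balance executor key, rotated often
  if (!pk) throw new Error("BOT_WALLET_PK not set");
  return new ethers.Wallet(pk, provider);
}
```

Everything else in the codebase passes trade intents to this signer; nothing else imports the key.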

Monitoring in the Context of Agents

Monitoring an AI or trading agent isn’t the same as tracking a website or app: uptime checks tell you the system is running, but here you also need to monitor the quality of its judgment calls.

A few key metrics matter:

  • Staleness ratio — how often data delays cause the agent to pause.

  • PnL per gas — how efficiently each transaction converts gas into value.

  • Hit rate — the percentage of signals that actually pass the profitability gate.

  • Guard trips — how often safety rules (caps, limits) stop execution.

By combining these with GoldRush Foundational APIs (Transactions v3, Balances, Logs), you get a continuous picture of your agent’s performance, from prediction accuracy to realized profit.

To put it simply, GoldRush helps your agent see and remember, while observability ensures you can trust what it sees and does.
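As a sketch of how those four metrics could be derived from the decision ledger (a hypothetical rollup; the Row type flattens fields from the DecisionLog shape introduced in Step 1, and the PnL/gas fields assume the Part 2 backfill has run):

```ts
// Hypothetical rollup over decision-ledger rows
type Row = {
  action: "skip" | "submit" | "failed";
  stale: boolean;          // from features.stale
  guardBlocked: boolean;   // true if any guard check failed
  realizedPnlUsd?: number; // filled in by the Part 2 PnL backfill
  gasCostUsd?: number;
};

function rollup(rows: Row[]) {
  const n = rows.length || 1;
  const stalenessRatio = rows.filter(r => r.stale).length / n;
  const hitRate = rows.filter(r => r.action === "submit").length / n;
  const guardTrips = rows.filter(r => r.guardBlocked).length;
  const pnl = rows.reduce((s, r) => s + (r.realizedPnlUsd ?? 0), 0);
  const gas = rows.reduce((s, r) => s + (r.gasCostUsd ?? 0), 0);
  const pnlPerGasUsd = gas > 0 ? pnl / gas : 0;
  return { stalenessRatio, hitRate, guardTrips, pnlPerGasUsd };
}
```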

Security in Practice

Most agents fail not because their models are bad, but because their safety assumptions are weak.

Here’s what security means in this context:

  1. Key hygiene: use a low-balance executor key, rotate it often, and never reuse the same signer across environments.

  2. Operational caps: limit how much notional value can be traded per epoch (hour, day, etc.).

  3. Staleness interlocks: if your data feed goes quiet for longer than 2× your interval, freeze execution.

  4. Allow-lists: explicitly approve the routers, tokens, and contracts your agent can interact with.

  5. Fail-closed design: if something breaks, the agent stops trading until reviewed.

You can even extend these into smart contracts: a guardian module that allows a human or DAO to pause the agent if conditions drift beyond safety bounds.
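As a sketch of how the agent could honor such a guardian before each cycle (the guardian address and its paused() interface are assumptions, not part of this tutorial's contracts; assumes ethers v6):

```ts
import { ethers } from "ethers";

// Hypothetical guardian contract exposing a single pause flag
const GUARDIAN_ABI = ["function paused() view returns (bool)"];

export async function guardianPaused(
  provider: ethers.Provider,
  guardianAddress: string
): Promise<boolean> {
  const guardian = new ethers.Contract(guardianAddress, GUARDIAN_ABI, provider);
  return guardian.paused(); // true -> halt all trading until a human/DAO unpauses
}
```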

LangChain Integration (Optional)

Once your base agent is stable, you can layer on intelligence with LangChain.js or LangGraph. The goal is to let an AI planner sequence actions, not to replace your z-score model.

For example:

“If gas fees spike and liquidity in the pool drops below 50%, pause new swaps; otherwise, rebalance positions across pools with the highest volume.”

LangChain lets you wrap your agent’s existing tools (read_stream_state, estimate_edge, submit_tx) in a natural-language planning layer. You can even keep a human in the loop for large notional decisions or contract upgrades that require manual approval before execution.

This hybrid design, with a deterministic policy for small trades and LLM oversight for large ones, is already being explored by major DeFi automation teams, and it’s a natural next step once you’ve hardened your agent.

Putting It All Together

So far, you’ve:

  • Built a reactive agent using GoldRush’s real-time data stream,

  • Added profitability checks and PnL backfill for validation, and

  • Understood why security and monitoring make or break real-world automation.

Now, in the steps that follow, we’ll make these ideas practical.
You’ll build:

  1. A decision ledger (logging.ts) to record every signal, check, and action.

  2. A guard module (guard.ts) to enforce limits and staleness checks.

  3. A heartbeat monitor (monitor.ts) that keeps the stream alive and halts execution if data stops.

  4. (Optional) a simple LangChain planner harness (planner.ts) for controlled multi-step logic.

By the end of this part, you’ll have an agent that’s not only fast and profitable but also safe, transparent, and resilient — ready for real deployment on Base.

Step-by-Step Tutorial — Operational Hardening & Optional LLM Planning

Dependencies & Setup (continuation of Parts 1 and 2)

You’re continuing the same project from Parts 1–2. Keep your existing environment:

  • Node 18+

  • Project structure under base-agent/

  • GoldRush SDK installed and working

  • Files from earlier parts: src/stream.ts, src/predict.ts, src/agent.ts, src/rpc.ts, src/quote.ts, src/costs.ts, src/trade.ts, src/pnl.ts

New packages to add for Part 3

These give us a decision ledger, guardrails, and health metrics (express powers the optional metrics server in Step 4):

npm i pino pino-pretty prom-client express

New environment variables

Add these to your existing .env:

```bash
LOG_LEVEL=info                # pino log level: trace|debug|info|warn|error
METRICS_PORT=9100             # if you want a /metrics endpoint (Prometheus)
KILL_SWITCH=false             # emergency stop: "true" halts execution
ALLOWED_ROUTERS=0xRouterA,0xRouterB
DAILY_NOTIONAL_CAP_USD=5000   # hard daily cap in USD-equivalent
STALE_MS=120000               # 2× your candle interval (e.g., 2 min) = stale
ALERT_WEBHOOK_URL=            # optional: your Slack/Discord webhook
```

Complete file structure for this project

```
base-agent/
├─ .env              # Env vars (API keys, ports, caps, guards)
├─ package.json      # Scripts & deps (SDKs, ethers, logging)
├─ tsconfig.json     # TS config (ESNext, strict)
└─ src/
   ├─ stream.ts      # Part 1: GoldRush OHLCV stream (Base)
   ├─ predict.ts     # Part 1: z-score model → signal
   ├─ agent.ts       # Part 1: Wires stream + predict
   ├─ rpc.ts         # Part 2: Base RPC provider (chainId 8453)
   ├─ quote.ts       # Part 2: 0x firm quote (Base)
   ├─ costs.ts       # Part 2: Fee/slippage math
   ├─ trade.ts       # Part 2: Profitability gate + execute
   ├─ pnl.ts         # Part 2: PnL backfill (GoldRush Tx v3)
   ├─ logging.ts     # Part 3: Decision logs + metrics
   ├─ guard.ts       # Part 3: Caps, allow-list, kill switch, staleness
   ├─ monitor.ts     # Part 3: Heartbeat / stale-feed watcher
   └─ planner.ts     # Part 3 (optional): LangChain planner harness
```

Step 1: Create the decision layer — src/logging.ts

This module provides you with a structured, append-only decision ledger that makes every action explainable and auditable. It doesn’t trade or stream; it records what happened and why:
  • Inputs: candles/features (z-score, regime, staleness), quote (slippage), cost math (fees, buffers), guard evaluations, and the action/result (skip/submitted/failed).
  • Output: compact JSON logs (to stdout) + Prometheus counters for live metrics (optional).
This is the foundation for your post-mortems, regulatory reviews, and PnL reconciliation later.
Create src/logging.ts and paste this code in it:
```ts
// src/logging.ts
import pino from "pino";
import client from "prom-client";

const level = process.env.LOG_LEVEL ?? "info";

export const logger = pino({
  level,
  base: undefined, // no pid/hostname noise
  transport: { target: "pino-pretty", options: { colorize: true } },
});

// ---- Prometheus metrics (optional; only used if you expose /metrics elsewhere)
export const registry = new client.Registry();
client.collectDefaultMetrics({ register: registry });

export const metricSignalsTotal = new client.Counter({
  name: "agent_signals_total",
  help: "Count of evaluated signals",
  labelNames: ["pair", "decision"], // decision: skip|proceed
});

export const metricTradesSubmitted = new client.Counter({
  name: "agent_trades_submitted_total",
  help: "Submitted on-chain tx count",
  labelNames: ["pair"],
});

export const metricGuardTrips = new client.Counter({
  name: "agent_guard_trips_total",
  help: "Count of times a guard blocked execution",
  labelNames: ["type"], // type: stale|caps|router|killswitch|costs
});

export const metricStaleRatio = new client.Gauge({
  name: "agent_stale_ratio",
  help: "Share of intervals detected as stale (0..1)",
  labelNames: ["pair"],
});

export const metricPnlPerGas = new client.Gauge({
  name: "agent_pnl_per_gas",
  help: "Realized PnL per gas unit (rollup windowed)",
  labelNames: ["pair"],
});

registry.registerMetric(metricSignalsTotal);
registry.registerMetric(metricTradesSubmitted);
registry.registerMetric(metricGuardTrips);
registry.registerMetric(metricStaleRatio);
registry.registerMetric(metricPnlPerGas);

// ---- Decision entry shape
export type DecisionLog = {
  ts: string;   // ISO timestamp
  pair: string; // pool/pair contract
  features: {
    z: number;
    regime: "low" | "normal" | "high";
    stale: boolean;
  };
  costs: {
    fee_bps: number;
    slippage_bps: number;
    safety_bps: number;
    hurdle_bps: number;
  };
  edge_bps: number;
  guard: {
    killswitch: boolean;
    staleness_clear: boolean;
    within_caps: boolean;
    router_allowed: boolean;
    costs_clear: boolean;
  };
  action: "skip" | "submit" | "failed";
  tx?: { hash?: string; blockNumber?: number };
  reason?: string; // for skip/failed
};

// ---- Public helpers
export function logSignalEvaluated(pair: string, decision: "skip" | "proceed") {
  metricSignalsTotal.labels({ pair, decision }).inc(1);
}

export function logGuardTrip(type: "stale" | "caps" | "router" | "killswitch" | "costs") {
  metricGuardTrips.labels({ type }).inc(1);
  logger.warn({ at: "guard", type }, "guard tripped");
}

export function writeDecision(entry: DecisionLog) {
  logger.info({ at: "decision", ...entry });
  if (entry.action === "submit") {
    metricTradesSubmitted.labels({ pair: entry.pair }).inc(1);
  }
}

// Optionally expose /metrics in your main app:
// import http from "http";
// http.createServer(async (req, res) => {
//   if (req.url === "/metrics") {
//     res.setHeader("Content-Type", registry.contentType);
//     res.end(await registry.metrics());
//   } else {
//     res.statusCode = 404; res.end("not found");
//   }
// }).listen(Number(process.env.METRICS_PORT ?? 9100));
```

What output to expect

  • Console: pretty JSON logs with at: "decision" entries when you call writeDecision(...).
  • Prometheus: counters/gauges increment when you call the metric helpers (you’ll wire them from guards/executor in later steps).
No direct trading here—this file doesn’t produce trades; it records them.
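To sanity-check the module before wiring it in, you can log one decision from a scratch file (a hypothetical smoke test against the Step 1 version of the module; all values below are made up for illustration):

```ts
// scratch/smoke-logging.ts (hypothetical quick check)
import { writeDecision, logSignalEvaluated } from "../src/logging";

logSignalEvaluated("0xPoolAddress", "proceed");

writeDecision({
  ts: new Date().toISOString(),
  pair: "0xPoolAddress",
  features: { z: 2.1, regime: "normal", stale: false },
  costs: { fee_bps: 5, slippage_bps: 20, safety_bps: 10, hurdle_bps: 35 },
  edge_bps: 80,
  guard: {
    killswitch: false,
    staleness_clear: true,
    within_caps: true,
    router_allowed: true,
    costs_clear: true,
  },
  action: "submit",
  tx: { hash: "0xabc..." },
});
```

Running it should print one pretty-printed at: "decision" entry and bump agent_trades_submitted_total.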

Step 2: Create Safety Caps, Allow-lists, Kill Switch, and Staleness — src/guard.ts

What this file does
This module is the bouncer. Before any trade goes out, it checks four things fast:
  1. Kill switch is OFF,
  2. Your data isn’t stale,
  3. The router is allowed, and
  4. You’re under your daily notional cap.
It returns a single verdict that your executor must obey, using the .env values you set earlier in this part: KILL_SWITCH, STALE_MS, ALLOWED_ROUTERS, DAILY_NOTIONAL_CAP_USD.

Think of this file as the firewall layer for your trading agent: every trade idea (or “intent”) passes through it before execution. It doesn’t decide what to trade (your z-score and signal model already did that); it decides whether you’re allowed to trade right now.

In short, src/guard.ts adds four simple guardrails to our harness:

  • A kill switch lets you pause everything via .env without redeploying (e.g., flip KILL_SWITCH=true and restart during a volatility spike; the flag is read at startup).

  • A staleness check stops trading if data is outdated (e.g., the GoldRush stream drops for ~3 minutes).

  • A router allow-list ensures swaps only go through trusted contracts (e.g., it blocks a rogue 0x router address).

  • A daily notional cap limits total USD volume per day (e.g., halt after $5,000 in test runs).

Paste this code into src/guard.ts:
```ts
// src/guard.ts
import { logger, logGuardTrip } from "./logging";

/**
 * Runtime config pulled from .env (with sane defaults).
 */
function parseList(v?: string) {
  return (v ?? "")
    .split(",")
    .map(s => s.trim().toLowerCase())
    .filter(Boolean);
}

export const GuardConfig = {
  killSwitch: (process.env.KILL_SWITCH ?? "false").toLowerCase() === "true",
  staleMs: Number(process.env.STALE_MS ?? 120_000), // 2 min default
  allowedRouters: new Set(parseList(process.env.ALLOWED_ROUTERS)), // 0x, your router(s)
  dailyCapUsd: Number(process.env.DAILY_NOTIONAL_CAP_USD ?? 5_000), // USD
};

/**
 * Simple in-memory daily counter (UTC). For prod, persist (Redis/DB).
 */
let dayKey = new Date().toISOString().slice(0, 10); // YYYY-MM-DD UTC
let spentTodayUsd = 0;

function rotateDayIfNeeded(now = Date.now()) {
  const k = new Date(now).toISOString().slice(0, 10);
  if (k !== dayKey) {
    dayKey = k;
    spentTodayUsd = 0;
  }
}

export function addNotionalUsd(amount: number, now = Date.now()) {
  rotateDayIfNeeded(now);
  spentTodayUsd += Math.max(0, amount);
}

export function getSpentTodayUsd(now = Date.now()) {
  rotateDayIfNeeded(now);
  return spentTodayUsd;
}

/**
 * Inputs you provide at decision time.
 */
export type GuardInputs = {
  nowMs: number;       // Date.now() at decision time
  lastBarMs: number;   // timestamp of latest OHLCV bar from stream
  router?: string;     // destination contract (0x allowance-holder or your router)
  notionalUsd: number; // trade size in USD terms
};

export type GuardVerdict = {
  ok: boolean; // true -> may proceed to cost gate / execution checks
  checks: {
    killSwitch: boolean;
    stalenessClear: boolean;
    routerAllowed: boolean;
    withinDailyCap: boolean;
  };
  reasons: string[]; // non-empty when ok=false
};

/**
 * Evaluate all guards. If any fails, ok=false with reasons populated.
 */
export function evaluateGuards(inp: GuardInputs): GuardVerdict {
  const reasons: string[] = [];

  // 1) Kill switch
  const killSwitch = GuardConfig.killSwitch === false;
  if (!killSwitch) {
    reasons.push("kill_switch_on");
    logGuardTrip("killswitch");
  }

  // 2) Staleness
  const age = inp.nowMs - inp.lastBarMs;
  const stalenessClear = age <= GuardConfig.staleMs;
  if (!stalenessClear) {
    reasons.push(`stale_feed_${age}ms`);
    logGuardTrip("stale");
  }

  // 3) Router allow-list
  let routerAllowed = true;
  if (inp.router) {
    const r = inp.router.toLowerCase();
    if (GuardConfig.allowedRouters.size > 0 && !GuardConfig.allowedRouters.has(r)) {
      routerAllowed = false;
      reasons.push("router_not_allowed");
      logGuardTrip("router");
    }
  }

  // 4) Daily notional cap
  const spent = getSpentTodayUsd(inp.nowMs);
  const withinDailyCap = spent + inp.notionalUsd <= GuardConfig.dailyCapUsd;
  if (!withinDailyCap) {
    reasons.push(
      `daily_cap_exceeded_${spent.toFixed(2)}+${inp.notionalUsd.toFixed(2)}>${GuardConfig.dailyCapUsd.toFixed(2)}`
    );
    logGuardTrip("caps");
  }

  const ok = killSwitch && stalenessClear && routerAllowed && withinDailyCap;

  // Log a compact summary for the decision ledger
  if (!ok) {
    logger.warn(
      {
        at: "guards",
        ok,
        age_ms: age,
        spent_today_usd: spent,
        notional_usd: inp.notionalUsd,
        reasons,
      },
      "guard_block"
    );
  } else {
    logger.debug({ at: "guards", ok, age_ms: age, spent_today_usd: spent }, "guard_pass");
  }

  return {
    ok,
    checks: { killSwitch, stalenessClear, routerAllowed, withinDailyCap },
    reasons,
  };
}
```
How it plugs into your flow:

  • Before asking 0x for a quote, check the kill switch and staleness (cheap, local).

  • Once you know the notional and router, call evaluateGuards(...).

  • After a successful submit, call addNotionalUsd(notional) to accrue your daily spend.

Verifying src/guard.ts

To verify that your new guard.ts file works correctly, update the existing harness from Part 1 (src/agent.ts). This lets you run the agent and confirm the guard checks fire as expected. Add the import at the top of the file, then the guard block inside your existing signal or on-bar handler, right after your z-score calculation:
Insert after z-score computation in src/agent.ts
```ts
// at the top of src/agent.ts:
import { evaluateGuards, addNotionalUsd } from "./guard";

// inside the on-bar handler, after the z-score computation:
const verdict = evaluateGuards({
  nowMs: Date.now(),
  lastBarMs: bar.timestamp,
  router: "0xdef1c0ded9bec7f1a1670819833240f027b25eff", // 0x router on Base
  notionalUsd: 100,
});

if (!verdict.ok) {
  console.warn("[guard] BLOCKED", verdict.reasons.join(", "));
  return; // stop here — do not trade
}

console.log("[guard] PASS", verdict.checks);
addNotionalUsd(100);
```
Then rerun the harness with
npm run dev
If everything is wired correctly, your console should print a pass line like:

[guard] PASS { killSwitch: true, stalenessClear: true, routerAllowed: true, withinDailyCap: true }

or, when a check fails, a block line such as:

[guard] BLOCKED stale_feed_184223ms

Step 3: Heartbeat monitor, tracking staleness & fail-closed — src/monitor.ts

When you’re streaming candles, the most significant silent risk is acting on stale data (WS hiccup, network pause, provider blip). This step adds a tiny heartbeat watcher that tracks the last bar time, flips a stale flag when you cross a threshold, and lets the rest of the agent fail-closed until the feed is fresh again.
What it does: keeps an internal clock for the last-seen bar and periodically decides whether it's fresh or stale, firing callbacks on state changes.
Paste the code below into src/monitor.ts
```ts
// src/monitor.ts
export type MonitorOpts = {
  /** How often to check (ms). Example: 5_000 */
  checkEveryMs: number;
  /** Mark stale if now - lastBarMs > staleAfterMs. Example: 120_000 for 1m bars */
  staleAfterMs: number;
  /** Called when state flips from fresh -> stale */
  onStale?: (ageMs: number) => void;
  /** Called when state flips from stale -> fresh */
  onFresh?: () => void;
};

export class StreamMonitor {
  private timer?: ReturnType<typeof setInterval>;
  private lastBarMs = 0;
  private stale = false;

  constructor(private opts: MonitorOpts) {}

  /** Call this on each new bar with the bar's timestamp (ms since epoch) */
  updateLastBar(ms: number) {
    this.lastBarMs = ms;
  }

  /** Returns current view of freshness */
  isStale(): boolean {
    return this.stale;
  }

  /** Start periodic checks */
  start() {
    if (this.timer) return;
    this.timer = setInterval(() => this.tick(), this.opts.checkEveryMs);
  }

  /** Stop periodic checks */
  stop() {
    if (this.timer) clearInterval(this.timer);
    this.timer = undefined;
  }

  private tick() {
    if (!this.lastBarMs) return;
    const age = Date.now() - this.lastBarMs;
    const shouldBeStale = age > this.opts.staleAfterMs;
    if (shouldBeStale && !this.stale) {
      this.stale = true;
      this.opts.onStale?.(age);
    } else if (!shouldBeStale && this.stale) {
      this.stale = false;
      this.opts.onFresh?.();
    }
  }
}
```

Verify src/monitor.ts inside your existing harness

Open src/agent.ts (from Part 1) and wire the monitor with your bar handler. Paste these additions (keep all existing code intact):
```ts
// at the top of src/agent.ts:
import { StreamMonitor } from "./monitor";

// near startup (once):
const STALE_MS = Number(process.env.STALE_MS ?? 120_000); // ~2× 1m bars
const monitor = new StreamMonitor({
  checkEveryMs: 5_000,
  staleAfterMs: STALE_MS,
  onStale: (age) => console.warn(`[monitor] STALE feed (age=${age}ms)`),
  onFresh: () => console.log("[monitor] feed fresh again"),
});
monitor.start();

// inside your existing onBar / message handler:
monitor.updateLastBar(bar.timestamp); // bar.timestamp must be ms
```

Errors & quick fixes (unique to this step)

  • Stale triggers immediately: Your bar.timestamp is likely in seconds, not ms. Convert: * 1000.
  • Flapping fresh/stale: Increase STALE_MS (e.g., from 120_000 → 150_000) or reduce checkEveryMs.
  • No callbacks ever fire: You forgot to call monitor.start() or monitor.updateLastBar(...) on each bar.

Expected output (examples):

[monitor] STALE feed (age=184223ms)
[monitor] feed fresh again
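If your stream delivers timestamps in seconds (the first troubleshooting item above), a defensive normalization avoids the immediate-stale trap. A minimal sketch (toMs is a hypothetical helper, not part of the module):

```ts
// Hypothetical helper: normalize second-precision timestamps to ms
function toMs(ts: number): number {
  // Unix-second values are ~1e9–1e10; millisecond values are ~1e12+
  return ts < 1e12 ? ts * 1000 : ts;
}

// usage in the bar handler:
monitor.updateLastBar(toMs(bar.timestamp));
```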

Decision Logging & Metrics—logging.ts

Now that the agent can monitor and enforce safety, it needs to record why it acts. This step finalizes the unified decision ledger in src/logging.ts, using structured logs (pino) and Prometheus metrics (prom-client). If you created the fuller scaffold in Step 1, this version replaces it while keeping the logger and logGuardTrip exports that guard.ts imports. Together, these form the observability layer, which is the backbone of transparency, debugging, and long-term performance evaluation. The optional metrics server below also needs express (npm i express).
In simpler terms:
  • pino → readable + JSON-structured logs (great for audits or dashboards).
  • prom-client → metrics endpoint you can scrape (Grafana, Prometheus, or even curl).
This layer will track:
  • Signals: how strong was the z-score?
  • Decisions: what did the guard say?
  • Outcomes: PnL, fees, slippage, stale-feed ratio, etc.
Paste this into src/logging.ts
```ts
// src/logging.ts
import pino from "pino";
import client from "prom-client";

// Structured logger setup
export const logger = pino({
  level: process.env.LOG_LEVEL || "info",
  transport: {
    target: "pino-pretty",
    options: { colorize: true, translateTime: "SYS:standard" },
  },
});

// Prometheus metrics registry
export const register = new client.Registry();

// Core metrics
export const tradeCounter = new client.Counter({
  name: "agent_trades_total",
  help: "Total number of trades attempted by the agent",
  labelNames: ["status"],
});

export const staleFeedGauge = new client.Gauge({
  name: "agent_stale_feed_ratio",
  help: "Fraction of time the stream was marked stale",
});

export const pnlGauge = new client.Gauge({
  name: "agent_realized_pnl_usd",
  help: "Cumulative realized PnL (in USD)",
});

// Guard-trip counter + helper, kept so guard.ts (Step 2) still compiles
export const guardTripCounter = new client.Counter({
  name: "agent_guard_trips_total",
  help: "Count of times a guard blocked execution",
  labelNames: ["type"],
});

export function logGuardTrip(type: "stale" | "caps" | "router" | "killswitch" | "costs") {
  guardTripCounter.inc({ type });
  logger.warn({ at: "guard", type }, "guard tripped");
}

register.registerMetric(tradeCounter);
register.registerMetric(staleFeedGauge);
register.registerMetric(pnlGauge);
register.registerMetric(guardTripCounter);

// Expose metrics via local server (optional)
export async function startMetricsServer(port = Number(process.env.METRICS_PORT) || 9464) {
  const { default: express } = await import("express");
  const app = express();
  app.get("/metrics", async (_req, res) => {
    res.set("Content-Type", register.contentType);
    res.end(await register.metrics());
  });
  app.listen(port, () => logger.info(`[metrics] Server running on port ${port}`));
}

// Example structured logging
export function recordDecision(event: {
  signal: number;
  edge_bps: number;
  fee_bps: number;
  slippage_bps: number;
  safety_bps: number;
  decision: string;
  txHash?: string;
}) {
  logger.info({ event }, `[decision] ${event.decision.toUpperCase()}`);
}
```
Add to .env
Update these variables in your existing .env file (METRICS_PORT changes from the 9100 placeholder set earlier):

```bash
LOG_LEVEL=info
METRICS_PORT=9464
```

How to Verify

You can verify this module by adding it to your existing agent.ts harness. After your existing imports in agent.ts, add:
```ts
import { startMetricsServer, recordDecision, tradeCounter } from "./logging";

// Start Prometheus metrics server
startMetricsServer();

// Example log call
recordDecision({
  signal: 2.5,
  edge_bps: 75,
  fee_bps: 5,
  slippage_bps: 20,
  safety_bps: 10,
  decision: "executed",
});

tradeCounter.inc({ status: "executed" });
```

Expected Output

When you run npm run dev -- src/agent.ts, you’ll see a metrics-server line and a pretty-printed decision log similar to:

[metrics] Server running on port 9464
[decision] EXECUTED
and visiting http://localhost:9464/metrics in your browser will return Prometheus-formatted metrics:

```
# HELP agent_trades_total Total number of trades attempted by the agent
# TYPE agent_trades_total counter
agent_trades_total{status="executed"} 1

# HELP agent_realized_pnl_usd Cumulative realized PnL (in USD)
# TYPE agent_realized_pnl_usd gauge
agent_realized_pnl_usd 0
```

Error & Troubleshooting

  • “Cannot find module 'pino'”: you’re missing deps; run npm install pino pino-pretty prom-client express.

  • Metrics endpoint isn’t reachable: the port is probably in use; change METRICS_PORT in your .env to a free port and restart.

  • Logs aren’t showing: your log level is too strict; set LOG_LEVEL=info (or debug) and rerun.
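To make the stale-feed and PnL gauges actually move, wire them to the Step 3 monitor and the Part 2 backfill. A hedged sketch (the 5-second sampling window and totalRealizedPnlUsd hook are assumptions; adapt to however your pnl.ts reports results):

```ts
// In src/agent.ts: hypothetical wiring of gauges to existing modules
import { staleFeedGauge, pnlGauge } from "./logging";

// Coarse stale-time ratio, sampled on the same cadence as the monitor
let staleMsTotal = 0;
let observedMsTotal = 0;
setInterval(() => {
  observedMsTotal += 5_000;
  if (monitor.isStale()) staleMsTotal += 5_000; // monitor from Step 3
  staleFeedGauge.set(observedMsTotal ? staleMsTotal / observedMsTotal : 0);
}, 5_000);

// After each PnL backfill pass (Part 2), push the running total:
// pnlGauge.set(totalRealizedPnlUsd); // hypothetical value from pnl.ts
```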


Step 5 (Optional): Hooking LangChain into Your Agent

Now that your base agent can see, decide, and defend, you can teach it to plan with LangChain.js. The idea isn’t to replace your z-score logic — it’s to let an AI planner decide which tool to call (stream, estimate, or trade) and when. This makes your agent more autonomous while keeping humans in the loop for high-impact calls.
What This Step Does
  • Wraps your existing deterministic functions as LangChain Tools
  • Creates a simple planner that reasons over those tools using natural language
  • Adds a human-approval hook for large notional trades
You already have the Part 1–3 environment. Install LangChain and an LLM wrapper (e.g., OpenAI or Ollama) first:
npm i langchain @langchain/openai
Create src/planner.ts inside your project folder and paste this content into it:
```ts
// src/planner.ts
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicTool } from "langchain/tools";
import { tryTradeIfProfitable } from "./trade";
import { startOHLCVStream } from "./stream";
import { ZScoreModel } from "./predict";
import "dotenv/config";

// 1) Wrap existing functions as tools
const readStreamTool = new DynamicTool({
  name: "read_stream_state",
  description: "Read the latest price bar from GoldRush stream",
  func: async () => {
    const model = new ZScoreModel();
    const unsubscribe = startOHLCVStream(process.env.PAIR_ADDRESS!, (bar) => model.push(bar));
    await new Promise(r => setTimeout(r, 3000)); // wait 3s for first bar
    unsubscribe();
    return "Latest z-score: " + model.z().toFixed(2);
  },
});

const tradeTool = new DynamicTool({
  name: "submit_trade",
  description: "Attempt a USDC→WETH trade if profitable",
  func: async () => {
    const res = await tryTradeIfProfitable({
      sellAmountUSDC: "100000000", // 100 USDC
      expectedEdgeBps: 80,
      ethUsdPrice: 3500,
      botPrivateKey: process.env.BOT_WALLET_PK!,
    });
    return JSON.stringify(res, null, 2);
  },
});

// 2) Initialize LLM planner
export async function runPlanner() {
  const llm = new ChatOpenAI({
    temperature: 0.2,
    modelName: "gpt-4o-mini",
  });

  const executor = await initializeAgentExecutorWithOptions(
    [readStreamTool, tradeTool],
    llm,
    { agentType: "chat-zero-shot-react-description", verbose: true }
  );

  console.log("[planner] ready");
  const res = await executor.run(
    "Check the latest stream state and decide if a trade should be made."
  );
  console.log("[planner output]", res);
}

// optional manual guard for high-value trades
if (process.env.HUMAN_APPROVAL === "true") {
  console.log("Awaiting human approval before executing large trades…");
}

// kick off the planner when this file is run directly
runPlanner().catch((err) => {
  console.error("[planner] error:", err);
  process.exit(1);
});
```
Then run it using:

npm run dev -- src/planner.ts
Expected Output in terminal
[planner] ready
[planner output] Latest z-score: 2.87 → executing trade ✅
If the planner determines that conditions aren’t favorable, it’ll respond instead with:
[planner output] Latest z-score: 0.95 → skipping trade ❌

Error & Troubleshooting

  • OpenAI API error → Check your OPENAI_API_KEY in .env.
  • Agent stuck → Lower temperature or shorten the prompt.
  • Circular logs → Confirm only one WebSocket stream is open at a time.
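Note that the HUMAN_APPROVAL block above only prints a notice. A blocking approval gate could look like this minimal sketch (the confirmTrade helper and the $1,000 threshold are assumptions, not part of the earlier code):

```ts
// Hypothetical blocking approval gate using Node's readline/promises
import * as readline from "node:readline/promises";

async function confirmTrade(notionalUsd: number, thresholdUsd = 1_000): Promise<boolean> {
  if (notionalUsd < thresholdUsd) return true; // small trades pass automatically
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(`Approve trade of $${notionalUsd}? (y/N) `);
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

// usage inside tradeTool, before calling tryTradeIfProfitable(...):
// if (!(await confirmTrade(100))) return "Trade rejected by operator";
```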

Bringing It All Together

Over three parts, we’ve built a full real-time agent on Base — not just a demo, but a working foundation you can extend into production.

In Part 1, we focused on perception: setting up GoldRush’s OHLCV stream, building a z-score signal, and creating a deterministic harness to test live data in motion. That gave the agent eyes — a way to “see” what’s happening on-chain in real time.

In Part 2, we moved from seeing to acting. We introduced cost-aware execution on Base, combining 0x firm quotes with fee, slippage, and safety buffers to decide when a trade was truly worth it. Then we backfilled outcomes using GoldRush Transactions v3 — turning trades into measurable profit and loss.

Now, in Part 3, we’ve made the agent accountable and resilient. You added observability through a decision ledger and metrics, guardrails that enforce safety limits, and even optional LangChain planning for human-supervised automation. The agent can now explain what it did, prove why it acted, and shut itself down safely when conditions go wrong.

Together, these three layers — Data → Decision → Defense — form a blueprint for agentic systems on Base. Every module builds on the last, keeping the architecture simple, auditable, and production-ready.

Closing Thoughts: From Stream to Strategy

At this point, your agent isn’t just reacting — it’s thinking ahead, protecting itself, and explaining why. You’ve gone from subscribing to real-time data with GoldRush, to scoring signals, managing cost-aware execution, tracking PnL, and finally, enforcing safety and observability at scale.

What started as a streaming demo on Base is now a miniature version of how serious on-chain trading systems run: deterministic inputs, explicit guardrails, and a verifiable decision trail. You can extend this into more sophisticated setups — like connecting your logs to Grafana dashboards, exposing metrics to Prometheus, or adding a LangChain planner that evaluates scenarios before execution.

GoldRush remains your foundation here: real-time streaming for what’s happening, historical APIs for proving what happened, and the flexibility to serve agents anywhere on-chain. Whether you’re building for trading, risk management, or treasury automation, this same “see, decide, act, explain” loop applies.

The next frontier is scaling this safely — multiple pairs, multiple signals, multiple agents — all working together under the same transparent architecture you’ve just built.

Get Started

Get started with GoldRush API in minutes. Sign up for a free API key and start building.

Support

Explore multiple support options! From FAQs for self-help to real-time interactions on Discord.

Contact Sales

Interested in our professional or enterprise plans? Contact our sales team to learn more.