AURUM Handbook
Everything you need to understand, operate, and extend AURUM — the self-funding macro intelligence agent built on Anthropic, Polymarket, and public market data.
Start here
What is AURUM
Introduction to the agent, its purpose, and core loop.
Signal Flow
How chart, news, and Polymarket signals combine.
AI Architecture
How Anthropic powers the intelligence layer.
Fee Flywheel
Token economics and the self-funding loop.
Environment Variables
All required and optional env vars explained.
Roadmap
What is built, what is next, what is on the horizon.
What is AURUM
AURUM is a public autonomous macro intelligence agent. It is not a trading bot in the traditional sense — it is an AI-native market entity that reads chart structure, classifies news, interprets prediction market odds, and decides whether to act in macro markets.
Every analysis AURUM performs is public. Every decision it makes is logged transparently. Every thesis it publishes is available to anyone. The agent has no private feed, no hidden edge, and no opaque process. Public proof is the product.
Three pillars
Intelligence. The agent uses Anthropic's Claude as its primary reasoning engine. Claude processes chart context, classifies news batches, scores Polymarket signals, and synthesizes a composite decision — all via strongly typed, modular helper functions.
Transparency. The agent's decision log, confidence scores, reasoning, and invalidation conditions are public. Any holder, observer, or researcher can audit every move AURUM makes or does not make.
Self-funding. AURUM's operations — compute, data, infrastructure — are intended to be funded by its own token fee loop. The token generates fees. Fees fund intelligence. Intelligence generates proof. Proof generates attention. Attention generates fees.
What AURUM is not
AURUM is not a hedge fund. It is not a financial advisor. It is not a signals service. It is an experimental autonomous agent designed to demonstrate that disciplined, transparent, AI-powered market participation is possible at the protocol layer.
The core loop
Chart ingestion → News classification → Polymarket scoring
→ Composite signal → Decision (or no decision) → Public thesis
→ Logged permanently → Treasury updated
If conviction is not earned, the agent does not act. "No trade" is not a failure state. It is often the highest-conviction output the system can produce.
Why AURUM Exists
Most AI-powered market products are closed. They promise alpha, hide their process, and generate revenue regardless of whether their signals work. AURUM was designed as the opposite: a public agent whose decision-making process is entirely visible, whose confidence levels are displayed honestly, and whose "I don't know" is treated as a legitimate output.
Three reasons
The market for honest AI analysis is underserved. Most market participants have access to the same public data: charts, news, on-chain metrics. The edge is not exclusive data — it is better synthesis. AURUM is an experiment in whether a well-designed AI reasoning layer can synthesize public inputs more reliably than noise-driven human traders.
Autonomous agents need a public model. As AI agents become capable of real economic participation, the question of how they should operate transparently is urgent. AURUM is a prototype of what a public AI agent looks like: open decision log, visible reasoning, auditable treasury, explicit risk framework.
The self-funding loop is an important proof of concept. If an agent can generate enough trading signal to attract token holders, and if token fees can fund the compute cost of generating that signal, a genuinely self-sustaining AI market participant becomes possible. AURUM is an early attempt to close that loop.
Product Thesis
Over-trading is the primary failure mode of most market participants. They act on noise, miss structure, and confuse activity with edge.
AURUM's product thesis is the inverse: read more, act less, and only deploy capital when multiple independent signals converge above a confidence threshold.
The conviction model
AURUM does not act on a single signal. It waits for chart structure, news sentiment, and prediction market odds to converge. When they do, and when the composite score clears the threshold with sufficient confidence, the agent forms a bias. When they don't, the agent watches.
Token → fees → intelligence → execution → proof
The long-term thesis is that the agent can sustain its own intelligence through the fees its token generates. Every decision — even a no-trade — adds to the public proof record. Public proof drives attention. Attention drives token activity. Token activity drives fees. Fees fund more intelligence.
Signal Flow
AURUM's signal pipeline runs in four sequential stages before producing a decision.
Stage 1: Chart Structure
OHLC data for BTC/USD, ETH/USD, and DXY is fetched from Finnhub. The raw price data is passed to summarizeChartContext(), which uses Anthropic to produce trend direction, structure description, key insight, risk note, confidence score, and signal strength.
Chart signals are weighted at 35% of the composite signal.
Stage 2: News Classification
Market news is fetched from Finnhub's free news endpoint. Each headline is passed to classifyNewsBatch(), which returns sentiment, relevance score, category, and a one-sentence macro implication per item.
News signals are weighted at 30% of the composite.
Stage 3: Polymarket Scoring
Active Polymarket markets matching macro keywords (Fed, Bitcoin, CPI, inflation, recession) are fetched from the public API. Each market is passed to scorePolymarketContext(), which returns relevance score, macro implication, signal direction, and weight.
Polymarket signals are weighted at 25%.
Stage 4: Composite Signal
All three signal layers, plus treasury state and risk budget, are passed to buildCompositeSignal(). The engine calculates a weighted score from -100 to +100, applies decision thresholds, checks confidence and risk budget, and returns a typed decision with reasoning.
- long_bias — composite >= +35, confidence >= 65%
- short_bias — composite <= -35, confidence >= 65%
- watch — between thresholds with a directional lean
- no_trade — confidence < 50% or risk budget < 20%
- reduce_risk — deteriorating conditions with open exposure
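The threshold logic above can be sketched as a pure function. This is a simplified illustration, not the actual buildCompositeSignal: the real engine also folds in treasury state and full reasoning, and the precedence between rules shown here is an assumption.

```typescript
type Decision = "long_bias" | "short_bias" | "watch" | "no_trade" | "reduce_risk";

interface ThresholdInput {
  composite: number;      // weighted score, -100 to +100
  confidence: number;     // 0-100
  riskBudget: number;     // 0-100, remaining risk budget
  hasOpenExposure: boolean;
  deteriorating: boolean; // conditions worsening since the last run
}

// Simplified sketch of the decision thresholds described above.
function applyThresholds(s: ThresholdInput): Decision {
  if (s.confidence < 50 || s.riskBudget < 20) return "no_trade";
  if (s.deteriorating && s.hasOpenExposure) return "reduce_risk";
  if (s.composite >= 35 && s.confidence >= 65) return "long_bias";
  if (s.composite <= -35 && s.confidence >= 65) return "short_bias";
  return "watch"; // between thresholds: keep a directional lean, do not act
}
```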
Fallback behavior
If any stage fails, the system returns conservative defaults — neutral signals, confidence 0, and no_trade. The agent never crashes because of a failed API call.
Fee Flywheel
The AURUM fee flywheel is the mechanism by which the agent funds its own operations through the economic activity it generates.
Token holders → transaction fees → treasury
treasury → compute costs → intelligence
intelligence → decisions → public proof
public proof → attention → more token holders
How fees accumulate
Every transaction involving the AURUM token generates a small fee that flows into the agent's treasury. This is implemented via Bankr's fee loop infrastructure (integration staged in the roadmap). The treasury balance is visible on-chain and reflected in the token_metrics table.
How the treasury is spent
- Anthropic API costs (chart context, news classification, thesis generation)
- Finnhub API costs (market data and news feeds)
- Infrastructure costs (hosting, database, cron jobs)
- Polymarket execution costs (future roadmap)
Compute runway
Runway answers a single question: at the current burn rate, how many days can the agent operate without additional treasury inflows?
runwayDays = treasuryBalance / dailyCost
If runway falls below 30 days, the agent emits a reduce_risk signal and reduces operational spend.
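The formula and the 30-day guard translate directly into code. The field names here are illustrative, not the actual implementation:

```typescript
interface TreasuryState {
  treasuryBalance: number; // USD equivalent held in the treasury
  dailyCost: number;       // average daily spend (Anthropic + Finnhub + infra)
}

const RUNWAY_WARNING_DAYS = 30;

// runwayDays = treasuryBalance / dailyCost, guarded against zero burn.
function computeRunway(t: TreasuryState): { runwayDays: number; reduceRisk: boolean } {
  const runwayDays = t.dailyCost > 0 ? t.treasuryBalance / t.dailyCost : Infinity;
  return { runwayDays, reduceRisk: runwayDays < RUNWAY_WARNING_DAYS };
}
```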
Public Agent Model
AURUM is designed as a fully public agent. This is not a limitation — it is the core design principle.
What is public
- Every decision the agent makes or does not make
- The confidence score and reasoning behind each decision
- The invalidation conditions for every active position
- The current thesis and the signals that support it
- Treasury balance and compute runway
- Token metrics and fee accumulation
Why this matters
Public operation creates accountability. The agent cannot quietly bury bad decisions or retroactively change its thesis. Every output is timestamped, logged, and available to anyone. This is how trust is built at the protocol layer.
It also creates a self-reinforcing loop: transparent performance attracts attention, attention drives token activity, token activity funds more intelligence.
Dashboard
The AURUM dashboard at /dashboard is a real-time intelligence terminal displaying all active signal modules.
Modules
- Agent Status — Current decision, confidence, reasoning, risk level, composite score
- Market Overview — Sparklines and trend for BTC, ETH, DXY with 4H structure
- Chart Analysis — Detailed chart view with key levels, signal strength, structure notes
- News Pulse — Classified headlines with sentiment, relevance score, and source
- Polymarket Context — Active macro markets with odds and implication
- Composite Signal — Weighted score gauge, decision badge, weight breakdown
- Decision Feed — Full history of agent decisions with outcomes
- Treasury / Fees — Token price, market cap, fees collected, treasury balance
- Compute Runway — Daily cost, total spent, runway in days
- Recent Actions — Latest agent pipeline actions with status
All modules display realistic mock data by default. Connect live APIs via the environment variables to switch to real data.
Why Anthropic
AURUM uses Anthropic's Claude Sonnet as its primary reasoning engine. The choice is deliberate.
Structured output reliability
AURUM's entire signal pipeline depends on AI returning clean JSON — not prose, not markdown, not explanations. Claude Sonnet reliably follows structured output instructions, which makes it ideal for production pipelines where a malformed response would break downstream logic.
Reasoning quality
Chart structure analysis, news classification, and composite signal synthesis all require genuine macro reasoning — not just pattern matching. Claude's reasoning quality at the Sonnet tier is sufficient for AURUM's current signal depth.
API design
The Anthropic SDK is clean, well-typed, and easy to wrap. All five of AURUM's AI helpers are implemented in under 200 lines total, each following the same pattern: system prompt → user message → JSON parse → typed return.
All calls use claude-sonnet-4-20250514. The model constant is defined in lib/anthropic/index.ts and can be updated in one place.
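The shared pattern is easy to sketch. In this illustration the network call is injected as a parameter so the parse-and-fallback logic stays self-contained; in the real helpers the call goes through the Anthropic SDK with the model constant above. Function and type names here are illustrative, not the actual code:

```typescript
type ModelCall = (system: string, user: string) => Promise<string>;

// Shared helper shape: system prompt → user message → JSON parse → typed
// return. Any API error or malformed response yields the conservative fallback.
async function callTyped<T>(
  call: ModelCall,
  system: string,
  user: string,
  fallback: T,
): Promise<T> {
  try {
    const raw = await call(system, user);
    return JSON.parse(raw) as T;
  } catch {
    return fallback; // API error or malformed JSON → conservative default
  }
}
```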
Why Polymarket
Prediction markets are one of the most underused signals in macro analysis. Polymarket aggregates real-money probability estimates on macro events — Fed decisions, CPI prints, crypto price levels — that are often more accurate than analyst consensus.
Free, public, real-money
Polymarket's public API requires no API key for read-only access. The gamma API endpoint returns active markets, current odds, volume, and liquidity. This makes it a zero-cost, high-quality signal source for AURUM's MVP.
Execution roadmap
In a future phase, AURUM will not just read Polymarket — it will participate. When the agent forms a strong view on a macro event, it will take a position on the relevant prediction market. This closes the loop between analysis and on-chain action.
Why Bankr
Bankr provides the execution and token infrastructure layer that AURUM is built to integrate with. Specifically:
- Token launch — AURUM's token is launched and managed via Bankr
- Fee loop — Transaction fees route to the agent treasury via Bankr's fee infrastructure
- Agent execution — Long/short bias decisions become real orders via Bankr's execution layer
- Leveraged trading — Future roadmap for leveraged macro positions
- Polymarket actions — On-chain prediction market participation
MVP status
The Bankr adapter in lib/bankr/index.ts is a typed stub. All interfaces are in place. Live integration is staged for Phase 2. Setting BANKR_API_KEY in the environment variables activates the live adapter.
Why Supabase
Supabase provides AURUM's persistence layer. PostgreSQL under the hood, with a clean REST API, real-time subscriptions, and Row Level Security out of the box.
What is stored
- Agent status snapshots (every pipeline run)
- Market signals (chart analysis outputs)
- News items (classified headlines)
- Polymarket snapshots (odds at time of analysis)
- Decision logs (full history with context and outcomes)
- Thesis posts (published editorial analyses)
- Token metrics (price, volume, fees, treasury)
- Compute metrics (daily cost, runway, API usage)
The full schema is in supabase/schema.sql. Run it in your Supabase SQL editor to initialize all tables with proper indexes and RLS policies.
Why Finnhub
Finnhub's free tier provides the market data AURUM needs for the MVP: general news, crypto news, and OHLC candle data. The free tier allows 60 API calls per minute — sufficient for AURUM's 15-minute cron cadence.
Free-first strategy
AURUM is designed to minimize data costs in the MVP phase. Finnhub free tier, Polymarket public endpoints, and Anthropic's pay-per-token pricing keep the daily operational cost under $5 at standard signal frequency.
The Finnhub adapter in lib/market/index.ts caches responses at the Next.js layer (next: { revalidate: 300 }) to avoid redundant API calls and stay within rate limits.
AI Architecture
All AI interactions are mediated through five typed helper functions in lib/anthropic/index.ts.
Helper functions
summarizeChartContext(asset, timeframe, priceData, currentPrice)
→ ChartContextSummary
classifyNewsBatch(headlines)
→ NewsClassification[]
scorePolymarketContext(markets)
→ PolymarketContextScore[]
buildCompositeSignal(inputs)
→ CompositeSignal
generatePublicThesis(signals, news, polymarkets, decision, confidence)
→ ThesisPost
Prompt design
All prompts follow the same structure. The system prompt defines the agent's role, specifies JSON-only output, and sets behavioral constraints (conservative, no forced trades). The user message provides structured input data and the exact JSON schema expected as output.
Fallback behavior
Every helper returns a typed default object if the API call fails. Fallbacks are always conservative — no_trade, neutral signals, confidence 0. The application never crashes because of a malformed AI response.
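The parse-with-fallback step might look like the following sketch. The fence-stripping guard and the function name are assumptions for illustration, not the actual helper:

```typescript
// Parse a model's JSON-only reply into a typed object, returning a
// conservative fallback on any failure (malformed JSON, stray markdown).
function parseModelJson<T>(raw: string, fallback: T): T {
  try {
    // Strip accidental markdown fences before parsing, just in case.
    const clean = raw
      .trim()
      .replace(/^```(?:json)?\s*/, "")
      .replace(/\s*```$/, "");
    return JSON.parse(clean) as T;
  } catch {
    return fallback;
  }
}
```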
Model
All calls use claude-sonnet-4-20250514. The model is declared as a constant and can be updated in one place.
Data Model
AURUM uses nine Supabase tables. The full schema with indexes and RLS policies is in supabase/schema.sql.
Tables
- agent_profiles — Static agent identity, mission, risk framework, market focus
- agent_status_snapshots — Rolling snapshots of decision, confidence, composite score
- market_signals — Chart analysis outputs per asset per run
- news_items — Classified headlines with sentiment and relevance
- polymarket_snapshots — Market odds at time of each pipeline run
- decision_logs — Full decision history with input signals, invalidation conditions, outcomes
- thesis_posts — Published editorial analyses with support signals and event odds
- token_metrics — Price, volume, holders, fees, treasury balance
- compute_metrics — Daily cost, total spent, runway days, API usage
Access pattern
All tables have public read access via RLS policy. The anon key can read everything. Only the service role key (server-side only, never exposed to the client) can write. This keeps the public agent model intact while protecting write operations.
API Design
AURUM exposes a set of internal Next.js Route Handlers under /api/. These are not a public API — they are internal endpoints consumed by the frontend and by cron jobs.
Endpoints
GET /api/signals — Current composite signal and agent status
GET /api/news — Latest classified news items
GET /api/polymarket — Current Polymarket snapshots
GET /api/thesis — Latest published thesis
GET /api/agent — Agent profile and metrics
GET /api/cron/signal — Run full signal pipeline (cron, protected)
GET /api/cron/thesis — Generate and publish thesis (cron, protected)
Response format
All endpoints return a consistent wrapper:
{ data: T | null, error: string | null, cached: boolean, generatedAt: string }
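In TypeScript terms, the wrapper is a small generic interface. The constructor helpers below are assumptions for illustration, not the actual route handler code:

```typescript
// The shared envelope returned by every /api/* route handler.
interface ApiResponse<T> {
  data: T | null;
  error: string | null;
  cached: boolean;
  generatedAt: string; // ISO 8601 timestamp
}

// Illustrative constructors (names are assumptions).
function ok<T>(data: T, cached = false): ApiResponse<T> {
  return { data, error: null, cached, generatedAt: new Date().toISOString() };
}

function fail<T>(error: string): ApiResponse<T> {
  return { data: null, error, cached: false, generatedAt: new Date().toISOString() };
}
```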
Cron protection
Cron endpoints require the x-cron-secret header matching CRON_SECRET in your environment. On Vercel, cron jobs are called internally and can pass this header automatically.
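A minimal sketch of the guard (the function name is illustrative; a production version might also use a constant-time comparison to avoid timing side channels):

```typescript
// Guard for /api/cron/* handlers: reject any request whose x-cron-secret
// header does not match the CRON_SECRET environment variable. An unset
// secret fails closed rather than allowing all requests through.
function isAuthorizedCron(
  headerSecret: string | null,
  envSecret: string | undefined,
): boolean {
  return Boolean(envSecret) && headerSecret === envSecret;
}
```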
Caching & Cron Strategy
Caching
AURUM uses Next.js fetch caching with next: { revalidate: N } hints on all external data fetches:
- Finnhub news — 300s (5 min)
- Polymarket markets — 300s (5 min)
- Agent status — 60s (1 min)
- Thesis — 600s (10 min)
This keeps the dashboard fast and avoids redundant API calls while staying within free tier rate limits.
Cron schedule
# vercel.json
/api/cron/signal — every 15 minutes
/api/cron/thesis — every 6 hours
The signal pipeline runs every 15 minutes. Each run ingests fresh chart data, classifies new headlines, scores Polymarket odds, and builds a new composite signal. If the decision changes, it is logged to Supabase. The thesis pipeline runs every 6 hours and generates a new public thesis from current signals.
Security Model
Secrets
All sensitive keys are stored in environment variables and never exposed to the client. The NEXT_PUBLIC_ prefix is used only for values that are safe to expose (Supabase URL and anon key). The service role key, Anthropic key, Finnhub key, and cron secret are server-only.
Supabase RLS
Row Level Security is enabled on all tables. Public read is allowed via the anon key. All writes require the service role key, which is only used in server-side Route Handlers and never reaches the client.
Cron protection
The /api/cron/* endpoints are protected by the CRON_SECRET environment variable. Any request without the matching x-cron-secret header returns 401.
Wallet separation
The agent's execution wallet (used for Bankr integration) is separated from the treasury wallet. The execution wallet holds only the risk budget allocated for the current cycle. The treasury is never directly exposed to market risk.
Environment Variables
Copy .env.example to .env.local and fill in your values. Never commit .env.local to version control.
Required
# Anthropic — required for all AI functionality
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# Supabase — required for persistence
NEXT_PUBLIC_SUPABASE_URL=your_supabase_project_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key
# Finnhub — free tier key for market data
FINNHUB_API_KEY=your_finnhub_api_key_here
Optional
# Bankr — leave empty for MVP
BANKR_API_KEY=
BANKR_AGENT_ADDRESS=
# App
NEXT_PUBLIC_APP_URL=http://localhost:3000
NEXT_PUBLIC_APP_ENV=development
# Cron protection (32+ random chars)
CRON_SECRET=your_long_random_secret_here
Polymarket requires no API key. The public gamma API is used for read-only access.
Deployment
AURUM is a standard Next.js 15 application. Vercel is the recommended deployment target.
Vercel
npm i -g vercel
vercel --prod
Set all environment variables in Vercel project settings under Settings → Environment Variables.
Supabase setup
- Create a new project at supabase.com
- Open the SQL editor and run the contents of supabase/schema.sql
- Copy the project URL and anon key to your environment variables
- Copy the service role key (Settings → API) to SUPABASE_SERVICE_ROLE_KEY
Cron jobs (Vercel Pro)
// vercel.json — already included in the project
{
"crons": [
{ "path": "/api/cron/signal", "schedule": "*/15 * * * *" },
{ "path": "/api/cron/thesis", "schedule": "0 */6 * * *" }
]
}
Developer Guide
Getting started
git clone https://github.com/your-org/aurum
cd aurum
npm install
cp .env.example .env.local
# Fill in your keys
npm run dev
Adding a new signal source
- Create a new adapter in lib/<source>/index.ts with a typed fetch function and mock fallback
- Add the signal to the composite input in the cron pipeline
- Add a weight constant and adjust the other weights proportionally (weights must sum to 1.0)
- Add a dashboard card in components/dashboard/cards.tsx
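The sum-to-1.0 invariant is worth enforcing in code. The weights below reuse the figures from the Signal Flow section; the treasury/risk share is an assumption, since the handbook states the chart, news, and Polymarket weights (0.35 + 0.30 + 0.25) but not what covers the remainder:

```typescript
// Signal weights from the Signal Flow section. The treasury share is an
// assumption for illustration; it is not stated explicitly in the handbook.
const WEIGHTS = {
  chart: 0.35,
  news: 0.3,
  polymarket: 0.25,
  treasury: 0.1, // assumed remainder
} as const;

// Guard: weights must sum to 1.0 (within floating-point tolerance).
function weightsSumToOne(w: Record<string, number>): boolean {
  const sum = Object.values(w).reduce((a, b) => a + b, 0);
  return Math.abs(sum - 1) < 1e-9;
}
```

Running this check in a unit test catches the common mistake of adding a new weight without rebalancing the others.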
Modifying prompts
All Anthropic prompts are in lib/anthropic/index.ts. Each helper's prompt is self-contained. Edit systemPrompt for persona/format changes. Edit userMessage for input structure or schema changes.
Type checking
npm run type-check
Roadmap
Phase 1 — Foundation
- Next.js 15 with App Router and TypeScript
- AURUM design system (dark cinematic, Cormorant + DM Sans)
- Landing page with all sections
- Dashboard with 10 signal modules
- Agent profile, thesis, and docs pages
- Anthropic intelligence layer (5 typed helpers)
- Polymarket, Finnhub, and Bankr adapters
- Supabase schema and client
- Mock data for all modules
- Cron pipeline structure
Phase 2 — Live Data
- Live Finnhub market data integration
- Live Polymarket public endpoint integration
- Supabase persistence for all signal tables
- Decision log persistence and display
- Public thesis feed with historical archive
Phase 3 — Bankr Integration
- Token launch via Bankr
- Fee loop implementation
- Treasury dashboard with on-chain data
- Leveraged trading execution
- Polymarket on-chain participation
Disclaimer
AURUM is an experimental autonomous AI agent. Nothing it produces constitutes financial advice.
Not financial advice
AURUM is a technology demonstration. Its outputs are the result of automated AI reasoning over public data. They may be wrong. They may be biased. They may fail to account for information the system does not have access to. Past performance of the agent's decisions does not indicate future results.
No guarantees
The operators of AURUM make no guarantee of accuracy of any signal, uptime or availability, profitability of any implied strategy, or correctness of any Polymarket odds interpretation.
AURUM token
The AURUM token is a utility token designed to fund the agent's operations. It is not a security, not a share in any entity, and not a promise of return. Token value may go to zero. Holding AURUM tokens does not entitle you to any share of profits or fees.
By using AURUM, viewing its outputs, or interacting with its token, you acknowledge that you have read and understood this disclaimer.