LLM inference for agents
LLM API Infrastructure for Web3 AI Agents
Build onchain analyst agents, DAO assistants, crypto research bots, and smart contract coding workflows without managing five LLM providers. Router One handles model access, routing, budgets, and traces; your app owns wallets, tools, and agent orchestration.
- Layer: LLM inference gateway
- Agent controls: Per-key budget caps and traces
- Data source: Use your own indexer or Web3 API
- Billing: Stablecoin-friendly wallet top-ups
Build Web3 agents without managing model providers
Router One sits between your agent framework and upstream LLMs. It does not fetch blockchain data or execute wallet actions; it reasons over the data your app provides.
One endpoint for model choice
Call GPT, Claude, Gemini, DeepSeek, Mistral, and Llama from one OpenAI-compatible endpoint.
Fallback when providers degrade
Smart routing can shift traffic when latency, timeouts, or 5xx rates spike on an upstream provider.
Budget caps before autonomy
Set spending limits before an agent enters a retry loop or starts processing a large backlog.
Observability for every run
Trace model, provider, tokens, status, latency, and cost for every LLM call made by an agent.
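As a hedged illustration of what those trace fields look like client-side, the usage block that every OpenAI-format chat completion response carries can be summarized in one line. Field names follow the OpenAI chat completions schema; `traceSummary` is an illustrative helper, not part of any SDK:

```javascript
// Sketch: read per-call usage from an OpenAI-format response object.
// `traceSummary` is a made-up helper for logging, not a Router One API.
function traceSummary(resp, latencyMs) {
  const usage = resp.usage;
  return `${resp.model} | ${usage.total_tokens} tokens | ${latencyMs}ms`;
}
```

Router One records these fields server-side per request; a client-side summary like this is only useful for local logs and dashboards.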
Agent use cases
Wallet and portfolio assistants
Explain wallet activity, summarize portfolio changes, and generate human-readable reports from indexed chain data.
Onchain analyst agents
Combine protocol metrics, token flows, governance events, and alerts into research summaries.
DAO governance summarizers
Summarize proposals, voting history, forum threads, Discord discussions, and treasury updates.
Smart contract review agents
Pair Claude Code or Codex CLI with Router One for Solidity review, test generation, and deployment checklists.
Telegram and Discord community bots
Answer protocol questions, explain docs, and summarize updates inside community channels.
Customer support for Web3 apps
Route support prompts to major LLMs while keeping usage, latency, and cost visible to the team.
Reference architecture
Use Web3 data providers for chain data and Router One for LLM reasoning over that data.
Data layer
Moralis, The Graph, Dune, Alchemy, QuickNode, Etherscan, or your own indexer supplies normalized onchain context.
Agent layer
LangGraph, Inngest, Mastra, MCP tools, or your own service decides what to fetch and when to call the model.
LLM layer
Router One routes prompts to the right model, applies budget controls, and records per-request traces.
Product layer
Telegram, Discord, dashboards, internal tools, or customer-facing apps receive the final response.
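The four layers above can be sketched as plain functions. Everything below is illustrative: the data and LLM layers are stubbed so the sketch runs offline, and none of the names belong to a real SDK.

```javascript
// Data layer: in production, your indexer or Web3 API fills this in.
async function fetchWalletContext(address) {
  return { address, txCount: 12, netFlowUsd: -340.5 };
}

// Agent layer: decide what to fetch and shape the prompt.
function buildPrompt(ctx) {
  return `Summarize ${ctx.txCount} transactions for ${ctx.address} (net flow $${ctx.netFlowUsd}).`;
}

// LLM layer: in production, this is a Router One chat completion call.
async function askModel(prompt) {
  return `[model answer for: ${prompt}]`;
}

// Product layer: hand the result to Telegram, Discord, or a dashboard.
async function handleRequest(address) {
  const ctx = await fetchWalletContext(address);
  return askModel(buildPrompt(ctx));
}
```

The separation matters: swapping Dune for Moralis changes only the data layer, and swapping models changes only the Router One call.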
Agent call pattern
Fetch chain context with your preferred provider, then send the normalized context through Router One for analysis.
```javascript
import OpenAI from "openai";

// Base URL and API key come from your Router One dashboard
const client = new OpenAI({
  baseURL: process.env.ROUTER_ONE_BASE_URL,
  apiKey: process.env.ROUTER_ONE_API_KEY,
});

// 1. Fetch onchain data from your data provider
const walletSummary = await getWalletActivity(address);

// 2. Ask a model through Router One
const answer = await client.chat.completions.create({
  model: "auto",
  messages: [{ role: "user", content: summarize(walletSummary) }],
});

// 3. Return the answer to Telegram, Discord, or your app
```
FAQ
- Is Router One a Web3 data provider?
- No. Use Moralis, The Graph, Dune, Alchemy, QuickNode, Etherscan, or your own indexer for onchain data. Router One is the LLM gateway that reasons over that data.
- Does Router One execute wallet transactions?
- No. Router One does not control wallets or execute blockchain transactions. Your app or agent framework owns those actions.
- Can each agent have a different budget?
- Yes. Create separate API keys and budget caps for individual agents, bots, environments, or customers.
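A minimal sketch of that pattern, with a hypothetical key table (the placeholder values stand in for real keys loaded from your secret store or environment):

```javascript
// Hypothetical: one Router One API key per agent, each created with
// its own budget cap so a runaway bot can't drain the shared balance.
const agentKeys = {
  "portfolio-bot": "<portfolio-bot-key>",
  "dao-summarizer": "<dao-summarizer-key>",
};

function keyFor(agent) {
  const key = agentKeys[agent];
  if (!key) throw new Error(`No Router One key configured for ${agent}`);
  return key;
}
```

Each agent then instantiates its client with `keyFor(name)`, so spend and traces stay separated per agent in the dashboard.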
- Can I use Claude Code for Solidity through Router One?
- Yes. Claude Code can point at Router One's Anthropic-compatible base URL, while Codex and OpenAI SDK clients use /v1.
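A hedged sketch of the environment setup, assuming Claude Code honors `ANTHROPIC_BASE_URL` and OpenAI SDK clients honor `OPENAI_BASE_URL`; the angle-bracket values are placeholders for the URLs and key from your Router One dashboard:

```shell
# Claude Code → Router One's Anthropic-compatible endpoint
export ANTHROPIC_BASE_URL="<router-one-anthropic-base-url>"
export ANTHROPIC_API_KEY="<your-router-one-key>"

# Codex and OpenAI SDK clients → Router One's /v1 endpoint
export OPENAI_BASE_URL="<router-one-base-url>/v1"
export OPENAI_API_KEY="<your-router-one-key>"
```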
- Can I pay with stablecoins?
- Yes. Router One supports USDT/USDC top-ups on supported networks for teams that prefer stablecoin billing.
- Does Router One provide financial advice?
- No. Router One is infrastructure. Your product is responsible for user-facing disclaimers and regulated workflows.