OpenAI's Codex CLI is one of the most exciting developer tools of recent years. It brings the power of OpenAI's reasoning models directly into your terminal: autonomous code generation, multi-file edits, test writing, all from a single command. If you have used it, you know how productive it can be.
But there is a catch. To use Codex CLI, you need an active OpenAI API subscription. That means committing to a monthly plan, managing payment methods that may not be available in your region, and dealing with rate limits that can throttle you mid-session. If OpenAI's API has an outage, your entire workflow stops. There is no fallback, no alternative — you just wait.
What if you could use the exact same Codex CLI, with the exact same models, but without the subscription lock-in? That is what Router One enables.
What Router One Changes
Router One sits between Codex CLI and the model providers. Instead of pointing Codex at OpenAI's API directly, you point it at Router One's OpenAI-compatible endpoint. From Codex's perspective, nothing changes — it sends the same requests and gets the same responses. But behind the scenes, you get several things that the direct connection does not offer:
- Pay-per-token billing — no subscription, no monthly commitment. You pay only for the tokens you actually use.
- Chinese payment support — top up your balance with WeChat Pay or Alipay. No international credit card required.
- Automatic failover — if one upstream provider has issues, Router One routes your request to an alternative provider transparently. Your Codex session continues uninterrupted.
- Budget controls — set hard spending limits per API key, per project, or per month. No surprise bills.
- Real-time usage tracking — see every request, every token, every cost in your dashboard as it happens.
None of this requires any changes to Codex CLI itself. You change two environment variables and everything else stays the same.
How It Works
Codex CLI reads its configuration from environment variables, specifically OPENAI_BASE_URL and OPENAI_API_KEY. By default, these point to OpenAI's API. Router One provides a fully OpenAI-compatible endpoint, so you simply redirect these variables:
export OPENAI_BASE_URL=https://api.router.one/v1
export OPENAI_API_KEY=sk-your-router-one-key
That is the entire setup. Codex CLI sends requests to Router One, Router One forwards them to the optimal upstream provider, and the response comes back through the same path. The models available — including o3, o4-mini, and gpt-4.1 — are the same ones you would access directly.
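Because the endpoint is OpenAI-compatible, you can also talk to it with a plain HTTP request before ever launching Codex. The sketch below assumes Router One exposes the standard /v1/chat/completions route (as any OpenAI-compatible endpoint does) and that the two environment variables above are already set with a real key:

```shell
# Sketch: send a minimal OpenAI-style chat request through Router One.
# Assumes OPENAI_BASE_URL and OPENAI_API_KEY are set as shown above;
# the path and payload follow the standard OpenAI chat completions shape.
curl -s "$OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o4-mini",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

If this returns a normal chat completion JSON, Codex CLI will work against the same endpoint without further changes.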
For a detailed walkthrough, see the Codex CLI use case page.
Cost Comparison: Subscription vs Pay-Per-Use
Here is where the math gets interesting. Let us compare what a typical developer actually pays.
OpenAI Direct (API subscription model):
A typical Codex CLI workload runs somewhere between 500K and 2M tokens per month, depending on how intensively you use it. With OpenAI's API pricing for o4-mini (the default Codex model):
- Input: $1.10 per 1M tokens
- Output: $4.40 per 1M tokens
A moderate user might spend around $5–15 per month on actual token usage. But with OpenAI, you also need a funded API account, and you are locked into their payment infrastructure — which means an international credit card and potential friction with billing in certain regions.
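To make the arithmetic concrete, here is a back-of-the-envelope calculation at the prices above, for a hypothetical month of 1M input and 1M output tokens (pick your own numbers; these are illustrative):

```shell
# Back-of-the-envelope o4-mini cost for one hypothetical month:
# 1M input tokens at $1.10 / 1M, plus 1M output tokens at $4.40 / 1M.
awk 'BEGIN {
  input_tokens  = 1000000
  output_tokens = 1000000
  cost = (input_tokens / 1e6) * 1.10 + (output_tokens / 1e6) * 4.40
  printf "Estimated monthly cost: $%.2f\n", cost   # $5.50
}'
```

At 1M tokens each way, the month lands at $5.50, squarely inside the $5–15 range above; scale the token counts to match your own usage.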
Router One (pay-per-token):
- Same models, same token pricing (with Router One's transparent margin)
- Top up any amount with WeChat Pay or Alipay
- No minimum balance, no monthly commitment
- Unused balance carries forward indefinitely
For light users who run Codex a few times a week, the savings are significant — you might spend $2–5 per month instead of managing a subscription. For heavier users, the cost is comparable, but you gain failover, budget controls, and usage visibility that the direct API does not provide.
The bottom line: you never pay for capacity you do not use, and you gain operational features that would otherwise require building your own proxy infrastructure.
Setup in 2 Minutes
Getting started takes less time than reading about it:
Step 1 — Sign up at router.one and create an API key.
Step 2 — Add funds to your account using your preferred payment method (including Alipay and WeChat Pay).
Step 3 — Set the environment variables in your shell:
export OPENAI_BASE_URL=https://api.router.one/v1
export OPENAI_API_KEY=sk-your-router-one-key
Step 4 — Run Codex as usual:
codex "refactor this function to use async/await"
To make the configuration persistent, add the export lines to your ~/.zshrc or ~/.bashrc.
Verify it works by checking the Router One dashboard after running a Codex command — you should see the request logged with model, tokens, and cost.
What You Get for Free
Beyond the basic proxy functionality, routing through Router One gives you capabilities that would take weeks to build yourself:
Automatic failover. If an upstream provider returns an error or times out, Router One retries against an alternative provider. Your Codex session does not crash or hang — the request simply takes a moment longer while it reroutes. This is especially valuable for long-running Codex tasks that can take minutes to complete.
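The retry-and-reroute pattern is easiest to picture with a toy simulation. Nothing below is Router One's actual implementation; it is just the general shape, with two stand-in "providers" as shell functions, the first of which always fails:

```shell
# Toy failover: try each provider in order, return the first success.
provider_primary() { return 1; }                  # simulates an upstream outage
provider_backup()  { echo "response from backup"; }

route_request() {
  for provider in provider_primary provider_backup; do
    if out=$("$provider"); then
      echo "$out"
      return 0
    fi
  done
  echo "all providers failed" >&2
  return 1
}

route_request   # prints: response from backup
```

From the caller's point of view (here, Codex), only the final successful response is visible; the failed first attempt costs a little latency, nothing more.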
Budget guardrails. Set a daily or monthly spending cap on your API key. When you hit the limit, requests are rejected gracefully instead of silently accumulating charges. This is critical for teams where multiple developers share the same billing account.
Per-project tracking. Create separate API keys for different projects and see exactly how much each one costs. No more guessing which project consumed your API credits.
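One low-friction way to wire this up is a per-repository env file. The sketch below uses direnv (a real tool, but an assumption here: nothing in Codex or Router One requires it), with a hypothetical project-scoped key:

```shell
# .envrc at the root of one project, loaded automatically by direnv.
# The key below is a placeholder; create one key per project in the
# Router One dashboard so each project's costs show up separately.
export OPENAI_BASE_URL=https://api.router.one/v1
export OPENAI_API_KEY=sk-frontend-project-key
```

Run `direnv allow` once in the repository, and every Codex session started from that directory is billed to that project's key.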
Real-time observability. Every Codex request shows up in your Router One dashboard with full details — model used, input and output tokens, latency, cost, and status. Over time, this data reveals usage patterns that help you optimize how you work with Codex.
Model flexibility. While Codex CLI defaults to o4-mini, you can use any model available on Router One, including models from other providers. If you want to try running Codex with a different model for comparison, the infrastructure supports it without any additional setup.
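Switching models happens entirely on the Codex side; Router One needs no per-model configuration. Recent Codex CLI versions accept a model flag, but check `codex --help` on your installed version and treat the exact flag name as an assumption here:

```shell
# Run the same prompt against two models and compare the results.
codex --model o4-mini "write unit tests for the parser module"
codex --model o3 "write unit tests for the parser module"
```

Both runs go through the same Router One endpoint, so the dashboard shows the token count and cost of each side by side.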
For a full breakdown of supported models and pricing, visit the model marketplace.
Who This Is For
Router One as a Codex gateway makes the most sense for:
- Developers in China who cannot easily access OpenAI's payment system or face network reliability issues with direct API calls.
- Individual developers who want pay-as-you-go pricing without committing to a subscription or maintaining a funded API account.
- Teams that need budget controls, per-project cost tracking, and centralized usage visibility across multiple developers using Codex.
- Anyone who has been burned by API outages and wants automatic failover so their coding sessions are not interrupted.
Get Started
Sign up at router.one, create an API key, set two environment variables, and start using Codex CLI with pay-per-token billing, Chinese payment support, and automatic failover.
For step-by-step setup instructions, visit the Codex CLI use case page. To understand how Router One can reduce your overall LLM costs, read our guide on reducing LLM API costs.