
Claude Mythos Is $25/$125 per Million Tokens. The Frontier Just Got Locked Behind a $100M Partnership.

Anthropic gated its strongest model behind Project Glasswing, available to roughly 50 partner organizations at $25 input / $125 output per million tokens. Most engineering teams will never see this tier. The pricing still matters.

claude mythos · anthropic api cost · frontier llm pricing · model routing · ai infrastructure cost

On April 7, 2026, Anthropic announced Claude Mythos Preview and Project Glasswing. The model is not generally available: it ships as a closed research-preview tier accessible only to roughly 50 launch partners, including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The pricing is $25 per million input tokens and $125 per million output tokens. Anthropic committed $100M in usage credits to seed the program.

For context, Anthropic's previous top model, Opus 4.6, sits at $5/$25, a price it inherited from Opus 4.5, which took a 67% cut when it launched in November 2025. Mythos costs 5x that. OpenAI's o3-pro, currently the most expensive widely available model, sits at $20/$80. Mythos is 25% more expensive on input and roughly 56% more expensive on output than the priciest other API on the market.

If you are an engineering team running a normal SaaS production stack, you will not be calling Mythos this quarter. Your cost-routing decisions will not change because of this release. But the announcement says something specific about where frontier-model pricing is heading, and it does change two things you should plan for.

What the gating actually signals

Anthropic's framing of Project Glasswing is cybersecurity-focused: the model identified thousands of zero-days across major operating systems and browsers in pre-release testing. The capability story is real. The pricing and gating story is more interesting.

Three things stand out.

Pricing tiers are bifurcating. Through 2024 and most of 2025, frontier-model pricing followed a relatively smooth curve: every six months a new top model launched at roughly the price of the previous top model, and the previous tier dropped 30-50%. Opus 4.5's 67% price drop fit that pattern. Mythos at $25/$125 breaks it. Anthropic is signaling that the most capable models will live in a separate pricing tier accessible only via partnership, not via the public API.

The frontier is being underwritten by partners, not customers. $100M in usage credits across ~50 organizations works out to roughly $2M per org. That is closer to a strategic partnership budget than a typical enterprise API spend. The model is not paying for itself per token; it is paying for itself by anchoring a security ecosystem.

Access is gated at every distribution channel. Mythos is available through the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry, but only to participating partners. That means the OpenAI-format proxy layer most production stacks rely on will not see Mythos as a routing target for the foreseeable future. If your routing logic assumes that any Anthropic model can be a target, that assumption now needs an asterisk.

What this changes for normal teams

Two practical things.

1. Opus 4.6 is going to stay at $5/$25 for longer than you might expect.

Historically, when a new frontier model launches, the previous frontier drops 30-50%. Opus 4.5 took that drop in November 2025, from $15/$75 down to $5/$25, and Opus 4.6 launched at the same price in February 2026. With Mythos taking the new "frontier" position behind a partnership wall, Anthropic has no public-API competitor pulling Opus 4.6 down further. The next price drop on Opus 4.6 will likely come when GPT-6 or a publicly-released Anthropic model competes with it on the open API, not from Mythos.

For routing decisions, this means the cost ceiling on Anthropic's public-API tier is stable for the next two-to-three quarters. You can plan budgets against $5/$25 without worrying that a sudden price drop will reshuffle your routing logic.
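As a quick sanity check, budgeting against a stable $5/$25 ceiling is one multiplication. A minimal sketch, where the monthly token volumes are illustrative and the rates are the Opus 4.6 prices quoted above:

```python
# Worst-case quarterly spend if every request hit the public-API
# frontier tier at $5/$25 per million tokens (Opus 4.6 pricing above).
INPUT_RATE, OUTPUT_RATE = 5, 25  # $ per million tokens

def quarterly_ceiling(monthly_input_m: float, monthly_output_m: float) -> float:
    """Worst-case quarterly cost for the given monthly volumes (in millions of tokens)."""
    monthly = monthly_input_m * INPUT_RATE + monthly_output_m * OUTPUT_RATE
    return monthly * 3

# e.g. 40M input + 10M output tokens per month, all on the frontier tier:
print(f"${quarterly_ceiling(40, 10):,.0f}")  # $1,350
```

Any routing optimization then only moves the real number below that ceiling; it cannot surprise you upward.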

2. The "best available model" routing pattern needs to be redefined.

A common production routing pattern is "send the hardest 5% of requests to whatever the best frontier model is right now." If that pattern's definition of "best" tracks benchmark leaderboards, it will start trying to route to Mythos and failing. The routing layer needs to treat "best available to me" as the routing target, not "best in existence."

This is a small but real change. We laid out the broader version of this trap in our complete LLM model routing guide: routing should be defined by what the customer can actually call, not by what exists. Mythos makes this distinction load-bearing.

The cost calculation if you could access Mythos

Suppose for the sake of argument that you could call Mythos. Should you? Take a workload of 10M monthly tokens (5M input, 5M output) routed to a frontier model:

  • On Opus 4.6: $25 input + $125 output = $150/month
  • On o3-pro: $100 input + $400 output = $500/month
  • On Mythos: $125 input + $625 output = $750/month
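The arithmetic above as a runnable sketch, using the per-million-token prices quoted in this post (model names are shorthand labels, not API identifiers):

```python
# Monthly cost for a 10M-token workload (5M input + 5M output),
# at the per-million-token prices quoted in the article.
PRICES = {  # model: (input $/M tokens, output $/M tokens)
    "opus-4.6": (5, 25),
    "o3-pro": (20, 80),
    "mythos": (25, 125),
}

def monthly_cost(model: str, input_m: float, output_m: float) -> float:
    """USD cost for input_m / output_m million tokens on `model`."""
    in_rate, out_rate = PRICES[model]
    return input_m * in_rate + output_m * out_rate

for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 5, 5):,.0f}/month")
```

Swapping in your own token volumes is the fastest way to see where the 5x multiplier starts to hurt.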

That is 5x Opus 4.6 for capability gains that, outside of cybersecurity-specific tasks, are unverified. For non-security workloads, the per-token cost-quality ratio almost certainly favors Opus 4.6 or DeepSeek R2. We walked through the equivalent math on the reasoning side in our DeepSeek R2 vs o3 analysis: the cheaper reasoning model is sometimes also the better one for the specific task.

The honest read on Mythos for general engineering teams: even if access opened up tomorrow, most production routing logic should keep the model out of the hot path. The ROI math favors mid-tier models for almost everything except security-vulnerability hunting.

What partners are getting

For the ~50 organizations in Project Glasswing, the math is different. The deal is:

  • Access to a model meaningfully ahead of public frontier on at least one capability axis (security)
  • Pricing that, while expensive per-token, comes with $2M-ish in credits that effectively make the first quarter free
  • Bedrock, Vertex AI, and Microsoft Foundry availability, so the integration cost is bounded
  • A coordinated industry response to the security-AI race that reduces the chance of the model's capabilities being misused by adversaries

The non-partners also get something, indirectly: the security industry overall benefits from coordinated zero-day discovery and patching. That is the public-good framing.

What partners do not get: the right to redistribute Mythos access via their own products. If you are an AWS customer, you cannot call Mythos through Bedrock unless you are also a Glasswing partner. The partnership is upstream of your API key.

The two routing implications worth tracking

One: watch for tier consolidation. Mythos at $25/$125 is the highest-priced widely-discussed model. If OpenAI responds with a similarly-gated tier ("GPT-6 Foundation," available only to enterprise partners at premium pricing), the public-API price ceiling could solidify around $5-$10 input. That is good news for cost-optimizing teams: it caps the worst-case routing target, and makes mid-tier models genuinely competitive on a wider range of tasks.

Two: watch for capability bifurcation. If the gated tier consistently leads on specialized capabilities (security, biology, physics simulation), the routing decision for those specialized workloads becomes "build it ourselves with mid-tier models" or "partner up." For most B2B SaaS teams, the answer will be the former. The mid-tier models are good enough for 95% of workloads, and the operational cost of partnering for 5% rarely pencils out.

We covered the broader pattern of "the cheapest routing target that meets the quality bar wins" in our cross-provider LLM routing post. Mythos is the high end of that pattern: a tier that is technically a routing target for ~50 organizations, and a non-target for everyone else.

How PromptUnit handles this

PromptUnit's router only routes to models the customer can actually call. The quality-fingerprint signal that drives routing decisions is built from the models in active rotation: Claude Opus 4.6, GPT-5.4, Gemini 2.5 Pro, DeepSeek R2, and the open-weight tier. Mythos is not in the routing graph until customer access exists, which keeps the routing logic honest. If access opens up to a customer in the future, the router incorporates Mythos as another node and lets the quality-fingerprint signal decide whether the per-token cost is worth it for any specific workload.

If you are planning your 2026 LLM budget against the published price tiers, the practical advice is to anchor against Opus 4.6 and GPT-5.4 as the realistic ceiling, not Mythos. The math at $5/$25 still has plenty of room for routing optimization. Start the free observation period at promptunit.ai and see how much of your current frontier-tier spend belongs on a cheaper route.

Start your 14-day observation period

See exactly how much you'd save before paying anything. Zero risk: if we save you $0, you pay $0.

Get started free →