Claude Mythos: Anthropic's April 2026 AI Preview
Posted: April 14, 2026, in Technology.
On April 7, 2026, Anthropic confirmed what researchers had already extracted from a short-lived staging endpoint: Claude Mythos Preview is real, it is more capable than Opus 4, and almost nobody can use it. This guide covers the verified facts about Mythos Preview, where it fits in the Claude family alongside Opus 4, Sonnet 4.6, and Haiku 4.5, what it changes for cybersecurity and compliance teams, and exactly how to get building with the Claude models you can use today.
I lead Petronella Technology Group, a cybersecurity, digital forensics, and private-AI firm in Raleigh, NC. We run Claude Opus 4 for agentic work, Sonnet 4.6 for high-volume analysis, and Haiku 4.5 for low-latency tool calls, day in and day out. This post is written from inside that stack, not from a press release summary.
What Is Claude Mythos Preview
Claude Mythos Preview is an Anthropic frontier research model positioned above the public Claude 4 family. Anthropic has publicly described it as a "step change" in capability, particularly for long-horizon reasoning and software security tasks. As of this writing in April 2026, the model is not available through the public API. Access is gated through an invitation-only consortium that Anthropic calls Project Glasswing.
In practical terms, Mythos Preview is what Anthropic uses to stress-test the safety envelope before a general-release model inherits those capabilities. If you follow the public Claude roadmap, expect features, tool-use patterns, and guardrails seen in Mythos to show up in the next Opus release long before the Mythos name appears on a price sheet.
How Mythos Was Revealed
On March 28, 2026, an unannounced model identifier surfaced in responses from a staging cluster Anthropic had briefly exposed to a partner. Within hours, independent researchers had archived sample outputs. The story moved from Discord to the security press over the following 48 hours. On April 7, Anthropic published a statement acknowledging the model, confirmed its name, outlined the Glasswing program, and declined to release benchmark numbers.
For our purposes the revelation is less interesting than the consequence: a higher-capability tier exists, regulators are now aware of it, and compliance frameworks will be updated to reflect that yesterday's assumption about model ceilings is no longer valid.
Mythos vs Opus 4, Sonnet 4.6, and Haiku 4.5
For the models you can actually call today, here is the practical breakdown based on documented Anthropic specs as of April 2026. We use all three in production and the guidance below reflects how they behave under our real workloads, not benchmark tables.
| Model | API Status (Apr 2026) | Primary Job | When to Reach For It |
|---|---|---|---|
| Claude Opus 4 | Generally available | Agentic work, long-horizon planning, code | Multi-step refactors, complex incident response walk-throughs, deep research |
| Claude Sonnet 4.6 | Generally available | Balanced speed plus quality | Ticket triage, log summarization, default chat workloads |
| Claude Haiku 4.5 | Generally available | Fast, cheap, structured output | Classification, tool-call routing, latency-sensitive endpoints |
| Claude Mythos Preview | Consortium only (Glasswing) | Frontier research, security review, red-team | Not available to most teams. Plan around its eventual descendants. |
The honest takeaway: you do not need Mythos Preview to build real AI products today. Opus 4 handles almost every agentic workload we throw at it, Sonnet 4.6 is the workhorse, and Haiku 4.5 keeps costs sane for high-traffic endpoints. For deeper coverage of the public Claude family, see our Claude by Anthropic Enterprise AI overview.
Project Glasswing: Who Has Access and Why
Project Glasswing is Anthropic's name for a small, NDA-bound consortium of companies running Mythos Preview against internal workloads in exchange for safety telemetry. Anthropic has not published a full list of members. Public reporting names a mix of large cloud, security, and defense-adjacent organizations, plus a small number of independent safety evaluators.
Your company is almost certainly not in it. That is fine. The program exists so Anthropic can observe how a frontier model behaves on real, consequential tasks before capabilities propagate to the general-availability tier. The program also gives regulators a visible structure to engage with, which matters as frameworks like NIST AI RMF and EU AI Act enforcement mature.
For businesses outside the consortium, the Glasswing structure tells you two useful things. First, frontier capability is now being rehearsed against real operational workloads, not just lab benchmarks. Second, the results of that rehearsal will shape the next Opus release, the next set of Anthropic safety requirements, and the compliance language regulators adopt. If you map your AI strategy to public benchmarks alone, you will be twelve months behind the teams reading Anthropic's responsible-scaling updates every month.
Cybersecurity and Compliance Implications
Here is where Mythos Preview becomes directly relevant to teams that are not in the consortium. Three shifts matter:
1. The vulnerability-discovery bar has moved
Anthropic has stated publicly that Mythos Preview is being used to surface previously unknown vulnerabilities in widely deployed software. Whether or not your stack has been reviewed by a Mythos-class model, you should assume your adversaries will have access to comparable capability on a one-to-three-year horizon. Patch cadence, minimum-viable-secure baselines, and attack-surface reduction are no longer optional quarterly exercises.
2. AI-assisted red teaming is now table stakes
If your penetration-testing vendor is not using LLMs to expand coverage, their report is already behind where defender tooling is heading. For clients in regulated industries we pair human-led testing with Claude Opus 4 for triage, reachability analysis, and exploit-chain hypothesis generation. That is today's floor, not a differentiator. Our network forensics practice covers how we fold AI-assisted analysis into real incident work.
3. Compliance documentation has to name the model
Auditors under CMMC, HIPAA, and SOC 2 scopes are starting to ask which AI models touch covered data and how those models are isolated. "We use an LLM" is not an answer. You need a named model, a deployment pattern (hosted API vs private inference), a data-flow diagram, and a retention stance. Petronella Technology Group builds that documentation into every AI engagement by default, and if you are pursuing CMMC compliance the AI section is now a routine line item.
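To make the four required artifacts concrete, here is an illustrative sketch of the kind of AI-inventory record that answers those auditor questions. The field names and file path are our convention for this example, not a requirement of CMMC, HIPAA, or SOC 2.

```python
# Illustrative AI-inventory record: one entry per named model that touches
# covered data. Field names and the diagram path are example conventions,
# not mandated by any framework.
AI_SYSTEM_RECORD = {
    "model_id": "claude-sonnet-4-5",            # named model, pinned by ID
    "deployment": "hosted-api-zero-retention",  # hosted API vs private inference
    "covered_data": ["support-tickets"],        # regulated data the model sees
    "data_flow_diagram": "docs/dfd-ai-triage.pdf",
    "retention": "prompts logged 365 days; provider retention: none",
}
```

A record like this, kept in the risk register and updated on every model change, turns "we use an LLM" into an auditable answer.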
API Access, Pricing, and Availability for Today
Mythos Preview has no public price. The models you can buy through the Anthropic API as of April 2026 follow the pattern below. Check https://docs.anthropic.com/en/docs/about-claude/models for the live price sheet before you build pricing into a product, because Anthropic adjusts it periodically.
- Opus 4: highest price tier, aimed at agentic and long-context work.
- Sonnet 4.6: mid tier, the default for most production workloads.
- Haiku 4.5: lowest tier, built for high-volume structured calls.
Prompt caching, batch processing, and the 1M-token context window (currently available on Sonnet and Opus tiers) can cut your effective cost dramatically. Most of the optimization work we do for clients is not about picking a cheaper model, it is about structuring prompts so caching actually lands.
A concrete example: one of our managed-IT clients runs Claude on every inbound support ticket for triage. By lifting their 8,000-token runbook out of the per-request prompt and into a cached system block, their Sonnet 4.6 spend dropped by roughly two-thirds in the first week. No model change, no quality change, just an architecture tweak. That kind of win is available on the public Claude API today; it does not need a Mythos-class model to deliver real dollars back to the budget.
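The architecture tweak above can be sketched as follows. `RUNBOOK` stands in for the client's 8,000-token document, and the `cache_control` field follows Anthropic's prompt-caching API; verify the current syntax in the live docs before shipping, since caching details evolve.

```python
# Sketch: move the static runbook into a cacheable system block so repeat
# requests pay the cached-input rate. RUNBOOK is a stand-in for the real
# 8,000-token document.
RUNBOOK = "Tier-1 triage runbook: ... (static, ~8,000 tokens in production)"

def build_triage_request(ticket_text: str) -> dict:
    """Build messages.create kwargs with the runbook in a cached system block."""
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        # The static runbook lives in the system prompt and is marked
        # cacheable; it must stay byte-identical across requests for the
        # cache to land.
        "system": [
            {
                "type": "text",
                "text": RUNBOOK,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Only the per-ticket text changes between requests.
        "messages": [{"role": "user", "content": ticket_text}],
    }

request = build_triage_request("Ticket #4411: user reports VPN disconnects.")
```

The call itself is then `client.messages.create(**request)`. The savings come entirely from keeping the cached block stable while only the cheap per-ticket suffix varies.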
Getting Started With the Claude API Today
If Mythos Preview inspired you to finally commit to Claude in production, the good news is that the API you will ship on is available right now. Below are the minimum runnable examples we use to bootstrap new client projects.
Step 1: Install the SDK
The Python SDK is the fastest path. A TypeScript SDK exists too.
```bash
pip install anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```
Step 2: First API call (Sonnet 4.6 as the sensible default)
```python
from anthropic import Anthropic

client = Anthropic()

resp = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize this firewall log entry for a tier-1 analyst: ..."}
    ],
)
print(resp.content[0].text)
```
Note the model ID format: Anthropic uses versioned strings like claude-sonnet-4-5, claude-opus-4-1, and claude-haiku-4-5. The examples in this post pin the IDs documented at the time of writing, so a newer release such as Sonnet 4.6 may ship under a different string. Always check the live docs for the exact current ID before you pin it in production.
Step 3: Extended thinking for security review
For tasks like CVE triage or compliance-control gap analysis, extended thinking gives the model room to reason before answering. Keep max_tokens generous: thinking tokens count against it, so it must comfortably exceed budget_tokens.
```python
resp = client.messages.create(
    model="claude-opus-4-1",
    max_tokens=8192,
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=[
        {"role": "user", "content": (
            "Review this Terraform module for AWS IAM least-privilege violations. "
            "List each violation, severity, and a fix."
        )},
    ],
)

for block in resp.content:
    if block.type == "text":
        print(block.text)
```
Step 4: Tool use for incident-response automation
Tool use is how you turn Claude from a chat endpoint into an agent. Below is the minimal shape we use when wiring Claude into an internal SOAR.
```python
tools = [
    {
        "name": "lookup_host",
        "description": "Fetch asset-inventory record for an internal hostname.",
        "input_schema": {
            "type": "object",
            "properties": {"hostname": {"type": "string"}},
            "required": ["hostname"],
        },
    },
    {
        "name": "isolate_host",
        "description": "Quarantine a host at the EDR level. Requires approval.",
        "input_schema": {
            "type": "object",
            "properties": {"hostname": {"type": "string"}, "reason": {"type": "string"}},
            "required": ["hostname", "reason"],
        },
    },
]

resp = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=2048,
    tools=tools,
    messages=[
        {"role": "user", "content": "Host FIN-WRK-042 is beaconing to a new C2 IP. Triage."}
    ],
)
```
Your SOAR loops while resp.stop_reason == "tool_use": it runs the requested tool, appends the assistant turn plus a matching tool_result block, and calls messages.create again until the model returns a normal text response. That is the entire agent pattern in 30 lines.
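The loop can be sketched as below. Here `client` is an `Anthropic()` instance (or anything with the same `messages.create` signature) and `run_tool` is your own dispatcher into the SOAR; both are assumptions you supply, not part of the SDK.

```python
# Minimal sketch of the tool-use agent loop. `run_tool(name, input) -> str`
# is your own dispatcher that actually executes lookup_host / isolate_host.
def agent_loop(client, run_tool, model, tools, messages, max_tokens=2048):
    """Loop until the model stops requesting tools, then return the response."""
    while True:
        resp = client.messages.create(
            model=model, max_tokens=max_tokens, tools=tools, messages=messages
        )
        if resp.stop_reason != "tool_use":
            return resp  # normal text response; the loop is done
        # Echo the assistant turn back, then answer each tool_use block
        # with a tool_result keyed by that block's id.
        messages.append({"role": "assistant", "content": resp.content})
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": run_tool(block.name, block.input),
            }
            for block in resp.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Human approval for destructive tools like isolate_host belongs inside `run_tool`, before the EDR call, so the model never triggers a quarantine unreviewed.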
Step 5: Cost-aware model routing
The fastest way to cut an AI bill in half is to stop sending classification tasks to Opus. We use a router that looks like this in most client builds.
```python
MODEL_FOR_TASK = {
    "classify": "claude-haiku-4-5",
    "summarize": "claude-sonnet-4-5",
    "analyze": "claude-sonnet-4-5",
    "investigate": "claude-opus-4-1",
    "plan": "claude-opus-4-1",
}

def route(task_type, prompt, max_tokens=1024):
    return client.messages.create(
        model=MODEL_FOR_TASK[task_type],
        max_tokens=max_tokens,
        messages=[{"role": "user", "content": prompt}],
    )
```
When Mythos-class capability becomes publicly available, this pattern is how you absorb it: add one row, shift the investigate and plan classes up, keep the rest on cheaper tiers.
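That absorption step looks like this. Note that "claude-mythos-1" is an invented placeholder ID for illustration only; the real string will come from Anthropic's model docs when the tier ships.

```python
# Sketch of absorbing a newly GA'd frontier tier: only the deep-reasoning
# task classes move up, the cheap classes stay where they are.
# "claude-mythos-1" is a hypothetical ID, not a real model string.
def upgrade_router(model_for_task: dict, frontier_id: str) -> dict:
    """Copy the routing table, pointing only investigate/plan at the new tier."""
    upgraded = dict(model_for_task)
    for task in ("investigate", "plan"):
        upgraded[task] = frontier_id
    return upgraded

ROUTER_V2 = upgrade_router(
    {
        "classify": "claude-haiku-4-5",
        "summarize": "claude-sonnet-4-5",
        "investigate": "claude-opus-4-1",
        "plan": "claude-opus-4-1",
    },
    "claude-mythos-1",
)
```

Because callers go through `route()` and never hardcode a model ID, the upgrade is a one-dictionary change with no application code touched.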
How Petronella Deploys Claude for Clients
For regulated clients we almost always run Claude through one of three patterns, depending on their data-sensitivity and compliance posture.
- Direct API with zero retention. Anthropic's zero-retention endpoint for eligible workloads, plus our own prompt logging for audit. This is the default for most SMB and mid-market clients.
- Enterprise AI through AWS Bedrock or Google Vertex. Keeps data inside an existing cloud tenancy with its own BAA or compliance wrapper. Common for healthcare and financial services.
- Hybrid with private inference for the hottest data paths. Claude handles generalist reasoning; an in-house open-weight model on our enterprise AI workstations or rack systems handles the classified-adjacent slice. This is the only pattern we recommend for firms inside the CMMC Level 2 or Level 3 scope where CUI must never leave controlled infrastructure.
In every pattern the same governance scaffolding applies: named model IDs in the risk register, logged prompts and outputs in a tamper-evident store, human approval on any high-impact tool call, and a documented rollback if the model is deprecated. For a deeper walk-through of how we design these deployments, see AI Services Raleigh NC.
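One piece of that scaffolding, the tamper-evident prompt store, can be illustrated with a simple hash chain: each record carries the SHA-256 of the previous record, so any after-the-fact edit breaks the chain. This is a minimal sketch of the idea; a production store would add timestamps, signatures, and WORM storage.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel digest for the first record in the chain

def append_record(log: list, model: str, prompt: str, output: str) -> None:
    """Append a prompt/output record chained to the previous record's digest."""
    prev = log[-1]["digest"] if log else GENESIS
    body = {"model": model, "prompt": prompt, "output": output, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "digest": digest})

def verify_chain(log: list) -> bool:
    """Recompute every digest; any edited record or broken link fails."""
    prev = GENESIS
    for rec in log:
        body = {k: rec[k] for k in ("model", "prompt", "output", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or recomputed != rec["digest"]:
            return False
        prev = rec["digest"]
    return True
```

Run `verify_chain` as part of the audit-evidence collection step; a False result means the log was altered after the fact and cannot be trusted.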
How to Prepare Your Organization
You cannot get Mythos Preview. You can get ready for what comes next. Five moves worth making this quarter:
- Tighten patch cadence. Move critical-severity windows from 30 days to 7 days or better. Assume attackers will have near-frontier discovery capability soon.
- Fund an AI-assisted red team. Either pick a vendor that has already integrated LLMs into their workflow or run an internal one. If you need a Raleigh-area partner, see our network forensics and penetration-testing practice.
- Stand up model governance now. Named models, deployment diagrams, prompt logging, and a deprecation runbook. Your auditor will ask within a year.
- Write a model-flexibility clause into AI vendor contracts. No hard dependency on a single model ID. Give yourself freedom to upgrade when the next tier ships.
- Invest in your people. Pay for Anthropic's AI Fluency course, Claude Code CLI hands-on time, and an internal AI-review channel. Engineers who can ship Claude code today will be the ones who adopt Mythos-class capability productively when it arrives.
Frequently Asked Questions
Can I use Claude Mythos Preview right now?
Not unless your organization is a member of Project Glasswing. As of April 2026 Mythos Preview has no public API, no pricing sheet, and no published timeline for general availability. Build on Opus 4, Sonnet 4.6, and Haiku 4.5 today; the next public model in the Claude family will inherit a meaningful share of the Mythos capability improvements.
How does Mythos Preview compare to GPT and Gemini models?
Anthropic has not published benchmark numbers for Mythos Preview, so any comparison chart circulating online is speculative. What is known is that Anthropic positions Mythos above Opus 4, which itself sits at or near the top of the public Claude family for software-engineering and long-horizon reasoning tasks. Treat OpenAI and Google frontier models as peers until rigorous third-party benchmarks exist.
When will Mythos Preview be generally available?
Anthropic has not committed to a public release date. Based on past Claude releases, frontier-tier capabilities typically show up in a generally available Opus refresh within six to twelve months of first consortium access. Plan for that window but do not build a product around it.
What is the context window for Mythos Preview?
Anthropic has not disclosed the context window for Mythos Preview. For the models you can actually use, Opus 4 and Sonnet 4.6 currently support context windows up to 1M tokens for eligible workloads, and Haiku 4.5 supports 200K tokens. Always check the official Anthropic docs for the current, binding figure because these limits evolve.
Is Claude Mythos Preview dangerous?
Anthropic's published stance is that the model is capable enough that gated access is the responsible path. The practical risk for a business that cannot use Mythos Preview is strategic, not operational: the capability bar for offensive security is moving, and defenders who are not investing in AI-assisted tooling today will be behind within eighteen months.
Does Petronella Technology Group have access to Mythos Preview?
No. Petronella Technology Group is not a member of Project Glasswing. We run Claude Opus 4, Sonnet 4.6, and Haiku 4.5 in production today for our clients and we will adopt the next publicly available Claude tier on the day it ships.
Should I switch AI vendors because of Mythos Preview?
No. The Mythos announcement does not change the rational vendor-selection criteria for the next twelve months: API stability, compliance posture, context window, pricing, tooling maturity, and engineering fit. Anthropic, OpenAI, and Google are all shipping frontier-class capability at a similar cadence. Pick the model whose ergonomics your team enjoys and whose compliance story fits your data.
The Bottom Line
Claude Mythos Preview is a genuine capability leap that almost nobody can actually call. The people who benefit most from this announcement are the teams who treat it as a deadline: patch faster, adopt AI-assisted red team, formalize model governance, and ship a real Claude integration on Opus 4 or Sonnet 4.6 this quarter. When the Mythos-class capabilities roll into the next public Opus release, those teams will absorb it by changing one model ID. Everyone else will spend six months catching up.
If you want help doing any of that, Petronella Technology Group builds Claude-powered cybersecurity, compliance, and custom AI systems for businesses across Raleigh, NC and the United States. Call us at (919) 348-4912 or visit our AI Services page to start a conversation. We will tell you honestly whether AI belongs in the problem you are trying to solve.
Petronella Technology Group
5540 Centerview Dr., Suite 200
Raleigh, NC 27606
Phone: (919) 348-4912
Founded 2002. BBB A+ since 2003. CMMC-AB Registered Provider Organization (RPO) #1449. Team CMMC-RP certified. Craig Petronella: CMMC-RP, CCNA, CWNE, DFE #604180.