Hey {{first_name}},
Anthropic had the most consequential week in its history. A new Claude model is autonomously hacking the world's most secure software to protect it. The Pentagon banned Claude from the U.S. military - and the courts just let it stand. And in the middle of all of it, Anthropic shipped more product than most companies do in a quarter.
Here's everything you need to know.
In today’s AI debrief:

The model that breaks everything, so nothing else can

The debrief: Anthropic quietly launched Claude Mythos Preview - a new model built specifically for cybersecurity. It's invitation-only, not available to the public. And it just found a 27-year-old vulnerability in OpenBSD and multiple zero-days in the Linux kernel - the software running most of the world's servers.
The details: Anthropic simultaneously announced Project Glasswing - a partnership with Apple, Nvidia, Amazon, Google, and JPMorgan to use Mythos to find and fix critical vulnerabilities in their foundational systems before bad actors do. The model runs autonomously: it reads code, hypothesizes vulnerabilities, tests them, and files detailed bug reports - no human in the loop.
Why it matters: This is the first time an AI has demonstrably matched - and likely exceeded - the world's best human security researchers at finding flaws. Every company running software (every company) is now operating in a world where AI can find the holes in their walls faster than they can patch them. The only question is whether the AI doing the finding is working for you or against you.
Anthropic won't let the military use Claude for killer drones. The courts just sided with the Pentagon

The debrief: The D.C. Circuit Court of Appeals denied Anthropic's emergency request to halt the Pentagon's blacklist of Claude from all U.S. military contracts. The ban stays in force while litigation continues.
The details: The dispute started when the Pentagon demanded Anthropic remove two things from its terms of service - a ban on fully autonomous weapons (armed drone swarms with no human oversight) and a prohibition on mass surveillance of U.S. citizens. Anthropic refused. Defense Secretary Pete Hegseth called those guardrails "irrational obstacles." Anthropic called them safety requirements. The result: Claude is now designated a supply chain risk, meaning major DoD contractors, including Amazon, Microsoft, and Palantir, cannot use it. Oral arguments are set for May 19.
Why it matters: This is the defining tension in AI right now - safety guardrails vs. government power. Anthropic built the most capable security model on the planet this week and was simultaneously banned by the world's largest military for refusing to remove its ethics policies. How this case resolves in May will set the precedent for every AI company's relationship with the U.S. government.
New model, new tools, new problems - Claude's biggest product week ever

The debrief: Buried under the Mythos news, Anthropic had a massive product week. Claude Sonnet 4.6 launched as their new everyday model - faster, smarter on agentic tasks, and equipped with a 1M-token context window. Claude Managed Agents hit public beta, letting developers run Claude as a fully autonomous agent with sandboxing and built-in tools. And Claude Code shipped major updates, including a Focus view, a Google Vertex AI setup wizard, and subprocess sandboxing.
The details: Put together, these releases mean Claude can now run longer, think deeper, and operate more autonomously than ever - all shipped in the same week. Meanwhile, back-to-back outages on April 7 and 8 hit thousands of users, exposing just how dependent developers and professionals have become on a single AI platform. Anthropic fixed both within hours, but the pattern raises a real infrastructure question as demand accelerates.
Why it matters: Claude is quietly becoming the operating system for knowledge work. Sonnet 4.6 plus Managed Agents plus Claude Code means entire workflows - research, coding, analysis, communication - can now run on Claude end to end with minimal human input. The outages were a reminder that when that's your stack, uptime isn't a feature. It's everything.

You’re reading about AI.
AI Insider members are using it.
Every week I go live and break down one AI tool or workflow - exactly how I'm using it, what's working, what's not. You ask questions, I answer live. Everything gets saved to the vault.
Join today, and you unlock instantly:
20+ recorded live sessions in the vault
Weekly live breakdowns with Q&A
AI tools database + step-by-step courses
Early access to new AI companies
300+ founders and builders doing the same thing
Total value: $7,482 - yours for $97/month

Also this week:
Claude Haiku 3 is being retired April 19 - if you're building on the API, migrate to Haiku 4.5 now or your calls will break
Anthropic's Messages API is now on Amazon Bedrock as a research preview - same request shape as the native API, running on AWS infrastructure
Project Glasswing partners include Apple, Nvidia, Google, JPMorgan, and Palo Alto Networks - the coalition defending global software infrastructure just got an AI upgrade
May 19 is the date to watch - oral arguments in the Pentagon vs. Anthropic case that will define AI's relationship with U.S. government contracts
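If you're curious what "same request shape" means for the Bedrock preview, here's a minimal sketch in Python. The model ID below is illustrative, not a confirmed Bedrock identifier - check the preview docs before relying on it. The point is that the JSON body is identical either way; only the endpoint and auth differ.

```python
import json

def build_messages_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble a Messages API-style request body.

    The same shape works against Anthropic's native endpoint or the
    Bedrock research preview - only the transport (URL, auth headers)
    changes, so migrating mostly means swapping the model string.
    """
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

# Example: the Haiku 3 retirement above is a one-line change in this shape -
# point "model" at a current Haiku ID instead of the retired one.
body = build_messages_request("claude-haiku-4-5", "Summarize this week's AI news.")
print(json.dumps(body, indent=2))
```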

Everything in one place:
🧰 AI Toolkit - our curated stack of the best AI tools
🛒 Marketplace - prompts, templates, and workflows
💸 Referrals - refer friends, earn real rewards
🎨 Creators - build and sell in our ecosystem

This week’s top referrer:
Jonathan S. - Founder, Green Bay WI
Jonathan found The AI Debrief through a Google search two weeks ago. Since then he's referred 10 readers and earned $100 back.
"I wasn't even looking for a newsletter. Now I read it every week and I've already sent it to everyone on my team."
These are the readers building with AI - not just reading about it.
First person to refer someone next week gets a shoutout just like this one - in front of 25,000 readers.

Share the AI Debrief. Get paid.
Refer 1 → your name shouted out to 25,000 readers
Refer 5 → mystery reward with real dollar value
Refer 10 → $100 back, every time. No cap. Forever.
Your referral link is below - one share is all it takes.
P.S. - Jonathan from Wisconsin just hit 10 referrals and got $100 back.
You're one share away from seeing your name in next week's edition - 25,000 people will see it.
Thanks for reading. Our mission is to bring AI literacy to as many people as possible - see you next week.
- Drew & the rest of the humans behind The AI Debrief



