Good morning, AI Enthusiast!

In the last seven days, Anthropic accidentally exposed its most powerful unreleased model, then accidentally leaked 500,000 lines of source code. Mercor - the company that trains AI models for OpenAI and Anthropic - publicly confirmed it was breached by one of the most notorious hacking groups in the world. And OpenAI closed the largest private funding round in history at an $852 billion valuation.

This is the week the AI industry's security perimeter broke down - at the exact moment the stakes have never been higher.

In today’s AI debrief:

  • Anthropic's Claude Mythos leak: a new model tier above Opus, unprecedented cybersecurity capabilities, a market crash, and a second breach that exposed Claude Code's entire source code

  • Mercor confirms 4TB breach via LiteLLM supply chain attack - Lapsus$ claims your passport, source code, and VPN credentials are now for sale on the dark web

  • OpenAI closes $122 billion at $852 billion valuation - the largest private funding round in history, anchored by Amazon, Nvidia, and SoftBank

  • Quick hits on the Anthropic injunction ruling, VC hitting a record $297B in Q1, Perplexity's data lawsuit, and the week's model wars

CLAUDE MYTHOS

The debrief: On March 26, a misconfiguration in Anthropic's content management system left nearly 3,000 internal files publicly accessible - including a draft blog post describing a new AI model called Claude Mythos that Anthropic describes as "by far the most powerful AI model we've ever developed." The company confirmed the leak is real, calling Mythos a "step change" that is currently in testing with early access customers. Cybersecurity stocks immediately flash-crashed, with CrowdStrike, Palo Alto Networks, and Zscaler each dropping 6-7%. Then on March 31, Anthropic accidentally leaked 500,000 lines of Claude Code's source code in a separate incident - its second major data breach in five days.

The details: The leaked draft blog post revealed several things Anthropic wasn't ready to announce. First, a new model tier called "Capybara" - positioned above Opus, which was previously Anthropic's most capable tier. Mythos and Capybara appear to refer to the same underlying model, with Capybara being the tier name and Mythos the specific model name. The leaked document says Capybara delivers "dramatically higher scores" than Claude Opus 4.6 on software coding, academic reasoning, and cybersecurity tasks. The most alarming detail is the cybersecurity capability. The leaked draft describes Mythos as "currently far ahead of any other AI model in cyber capabilities" - capable of autonomously discovering and exploiting software vulnerabilities at speeds that outpace human defenders. Anthropic's own draft language warned the model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." Stifel analyst Adam Borg put it plainly: Mythos has "the potential to become the ultimate hacking tool, and one that can elevate any ordinary hacker into a nation-state adversary." As a result, Anthropic is pursuing a deliberately slow rollout - seeding the model to enterprise security teams first so defenders can use it before it reaches general availability. The draft also acknowledged Mythos is "very expensive for us to serve, and will be very expensive for our customers to use," with efficiency work needed before any broad release. The second breach - 500,000 lines of Claude Code source code exposed in a release packaging error - is potentially more strategically damaging. While it didn't expose model weights or customer data, it gives competitors a blueprint for reverse-engineering Claude Code's agentic harness, and security researchers have already confirmed the leaked code contains additional evidence that a "fast" and "slow" version of Capybara is in active preparation.

Why it matters: The Mythos story is three stories running simultaneously. The first is a capability story: Anthropic has apparently built something meaningfully more powerful than anything currently on the market, and the cybersecurity implications are severe enough that the company is privately briefing government officials. The second is a market story: the cybersecurity flash crash signals that Wall Street is now pricing AI-driven disruption of the $200B+ security industry in real time. Raymond James analyst Adam Tindle's note is worth reading - he warns that "defensive approaches based on known signatures or prior threat intelligence could be pressured as AI enables continuous discovery of unknown exploits." Traditional cybersecurity moats are starting to look fragile. The third is a trust story: Anthropic is simultaneously fighting the Pentagon over safety guardrails, privately warning governments about Mythos's dangers, and accidentally leaking both its most sensitive upcoming product and its flagship coding tool's source code in the same week. For a company whose entire brand is built on being the "safety-first" AI lab, these two data breaches are a serious reputational problem that its critics will use for months.

THE MERCOR BREACH

The debrief: Mercor - the $10 billion AI talent platform that trains models for OpenAI and Anthropic - publicly confirmed on March 31 that it was hit by a supply chain cyberattack through the compromised open-source LiteLLM library. Extortion group Lapsus$ claimed responsibility, alleging it stole 4TB of data, including 939GB of source code, a 211GB database, 3TB of video and identity verification files, and full access via the company's Tailscale VPN. Mercor became the first company to publicly acknowledge being a victim of the broader TeamPCP supply chain campaign - a distinction that set off alarm bells across the entire AI startup ecosystem.

The details: The attack didn't start with Mercor. It started with LiteLLM - an open-source API gateway with 97 million monthly downloads that lets developers route calls to over 100 different AI models, including OpenAI and Anthropic. Hacking group TeamPCP first compromised Trivy, a widely used vulnerability scanner, through a misconfigured GitHub Actions workflow. That gave them the PyPI publishing credentials for LiteLLM, and they pushed two malicious versions - 1.82.7 and 1.82.8 - directly to the public registry. Any developer who updated LiteLLM during that window had malware running on their infrastructure that automatically harvested SSH keys, .env files, cloud provider credentials, cryptocurrency wallets, and AI API keys. Mercor was using LiteLLM in production. The company confirmed the breach on X, posting that it had "moved promptly to contain and remediate the incident" and was working with third-party forensics experts. But Lapsus$ had already listed Mercor on its dark web site as a live auction - "Make an offer" - with the full alleged 4TB dataset. The scope of what Lapsus$ is claiming is significant: Mercor manages 100,000+ domain experts, including physicians, lawyers, and PhD researchers, and works directly with every major AI lab. Its databases could contain sensitive contractor information, identity documents from verification processes, and - per the leaked video files - recordings of AI model training sessions with expert contractors. As of publication, Mercor has not disclosed how many users were affected or confirmed the specific data Lapsus$ claims to hold. A class action law firm investigation has already been launched.

Why it matters: This story is bigger than Mercor. Mercor is the first company to publicly confirm impact from the TeamPCP campaign - but SANS ISC's threat intelligence analysts are tracking AstraZeneca and Databricks as potential additional victims, with those organizations not yet acknowledging any breach. The LiteLLM attack vector is the nightmare scenario for the AI industry: a single compromised package with 97 million monthly downloads sitting at the exact point where every developer's API credentials flow. If you used LiteLLM versions 1.82.7 or 1.82.8 and haven't rotated credentials, SANS is explicitly warning that exploitation is actively underway. The broader context makes this week extraordinary: Anthropic accidentally exposed Mythos in a data misconfiguration, then accidentally exposed 500,000 lines of Claude Code source code in a packaging error, and now Mercor - which trains Anthropic's own models - confirmed a major supply chain breach in the same seven-day window. The AI industry is moving fast and breaking things, and the things breaking this week are security perimeters. For founders: conduct an immediate audit of every open-source dependency in your production stack. The TeamPCP campaign specifically targeted developer credential stores - the exact infrastructure that most AI startups have deprioritized securing.
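If you want a concrete starting point for that audit, here is a minimal sketch of the idea: check your pinned dependencies against the compromised releases named in the reports above. The version numbers (1.82.7 and 1.82.8) come from this story; the function names and the known-bad list are hypothetical illustrations, not any official tooling - in practice you'd feed this from a real advisory source like the OSV database and follow it with credential rotation.

```python
# Sketch of a pinned-dependency audit against known-compromised releases.
# The compromised LiteLLM versions come from the reporting above; the
# helper names and the hard-coded list are illustrative assumptions.

COMPROMISED_RELEASES = {
    "litellm": {"1.82.7", "1.82.8"},  # malicious versions pushed to PyPI
}

def is_compromised(package: str, version: str) -> bool:
    """True if this exact package release is on the known-bad list."""
    return version in COMPROMISED_RELEASES.get(package, set())

def audit(pinned: dict[str, str]) -> list[str]:
    """Scan a name -> pinned-version mapping (e.g. parsed from a lockfile
    or requirements.txt) and return packages matching a bad release."""
    return [name for name, ver in pinned.items()
            if is_compromised(name, ver)]

if __name__ == "__main__":
    pins = {"litellm": "1.82.7", "requests": "2.32.3"}
    print(audit(pins))  # flags the compromised litellm pin
```

A matching version here means assume compromise: rotate every SSH key, API key, and cloud credential that machine could reach, not just the package.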

OPENAI'S $122 BILLION

The debrief: While the AI security world was on fire, OpenAI quietly closed the largest private funding round Silicon Valley has ever seen - $122 billion in committed capital at an $852 billion post-money valuation. The company is generating $2 billion in revenue per month, serves more than 900 million weekly active users on ChatGPT, and is now describing itself not as an AI lab but as "the core infrastructure for AI." With a Q4 2026 IPO expected, this round is as much about anchoring public market expectations as it is about the capital itself.

The details: The round topped OpenAI's previously announced $110 billion in commitments, with the additional capital coming from a broader investor pool. SoftBank co-led alongside Andreessen Horowitz, D.E. Shaw Ventures, MGX, TPG, and T. Rowe Price Associates. Amazon ($50B), Nvidia ($30B), SoftBank ($30B), and Microsoft all participated. In an unprecedented move, $3 billion came from individual retail investors through bank channels - the first time OpenAI has opened a round to individuals - and the company will now be included in multiple ARK Invest ETFs, giving retail investors pre-IPO exposure without an S-1. The numbers behind the rise are striking: Codex now serves 2 million weekly users, up 5x in three months, with usage growing 70% month over month. Enterprise revenue has climbed from 30% to 40% of total revenue and is on track to reach parity with consumer by year-end. OpenAI's APIs now process more than 15 billion tokens per minute - meaning the platform processes the equivalent of every book ever printed in human history roughly every 16 hours. The company is simultaneously planning a "superapp" combining ChatGPT, Codex, and agentic capabilities, and has already shut down the consumer-focused Sora video app to redirect resources toward enterprise productivity tools. Amazon's $50 billion investment includes a key condition: it's contingent on OpenAI either achieving AGI or completing an IPO - a clause that structurally accelerates the public market timeline regardless of Sam Altman's preferences.

Why it matters: At $852 billion, OpenAI is now valued higher than every public company except Apple, Nvidia, Microsoft, Alphabet, Amazon, Meta, and Berkshire Hathaway - without being public, without being profitable, and without a clear path to profitability until 2030. The round's structure tells you where the company is headed: ARK ETF inclusion and retail investor access are not things you do when you plan to stay private for years. This is IPO preparation dressed as a funding round. For founders and investors, the most important number is not the valuation - it's the 40% enterprise revenue share growing toward 50%. That's the signal that OpenAI is successfully transitioning from a consumer chatbot to enterprise infrastructure, the only business model that can justify a $1 trillion public market valuation. The risk is real: OpenAI is burning cash at record rates toward a 2030 profitability target, in a market where Anthropic, Google, and open-source competitors are closing the capability gap. The question isn't whether OpenAI is impressive - it clearly is. The question is whether the moat is durable enough to earn the multiple. Watch the IPO filing for the answer.

Global AI Quick Hits

Federal judge rules in favor of Anthropic - preliminary injunction granted - Judge Rita F. Lin sided with Anthropic, pausing the Pentagon's supply-chain risk designation while the full lawsuit proceeds. The ruling calls out the government's conduct as likely unconstitutional retaliation and gives Anthropic breathing room to fight from a position of relative strength. Watch the full case closely - the ruling will set the precedent for every AI company's government contract negotiations going forward.

Global VC hits a record $297 billion in Q1 2026 - AI captures 81% of it - Crunchbase reported that Q1 2026 set a record for global venture investment, with AI startups capturing more than four out of every five dollars deployed. Just four companies raised 64% of the total. The AI funding boom is not slowing - it's concentrating.

Anthropic issues 8,000+ copyright takedowns after Claude Code source leak - After the Claude Code source code spread across GitHub and developer forums, Anthropic's legal team began issuing DMCA takedown requests to remove copies. Developers noted the leaked code contains anti-distillation systems Anthropic built into Claude Code - including one that injects fake tool calls into sessions to corrupt competitor training data.

Perplexity faces a class action lawsuit for allegedly sharing user data with Meta and Google - Bloomberg reported that Perplexity is being sued for allegedly sharing personal user data from chat sessions with Meta and Google. The case is the latest sign that AI companies' data practices are entering a new phase of legal scrutiny - and that the "privacy by default" positioning many AI tools claim may not hold up under examination.

Most people are reading about AI. AI Insider members are using it to work faster and stay ahead.

Every week I go live and break down one AI tool or workflow - exactly how I'm using it, what's working, what's not. You ask questions, I answer live, and everything gets saved to the vault.

Join today, and you'll unlock 20+ past sessions instantly, plus weekly live breakdowns, an AI tools database, step-by-step courses, early access to new AI companies, and a community of 300+ founders and builders actually putting this stuff to work.

Total value is over $7,400 - click below to unlock the vault today.

Thanks for reading. Our mission is to bring AI literacy to as many people as possible - see you next week.

- Drew & the rest of the humans behind The AI Debrief
