Hey, debriefer!

The White House just compared AI regulation to the FDA - and they're not joking. Google got caught sneaking a 4-gigabyte AI model onto your computer while you weren't looking. And the EU pulled an all-nighter to rewrite the rules that govern every AI company on Earth. Let's get into it.

In today’s AI debrief:

  • The White House wants to vet AI models before they're released to the public

  • Google Chrome is silently installing a 4GB AI on your computer

  • The EU just rewrote its AI Act in an all-night negotiation

A Policy Earthquake

THE WHITE HOUSE WANTS AN "FDA FOR AI"

Image Source: aol.com

The debrief: The Trump administration is actively considering an executive order that would create a government review process for new AI models before they're released to the public - essentially an FDA-style approval system for artificial intelligence.

The details: Kevin Hassett, director of the National Economic Council, laid it out on Fox Business on Wednesday: "We're studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AI that also potentially create vulnerabilities should go through a process so that they're released in the wild after they've been proven safe, just like an FDA drug." The comments came days after the New York Times reported that White House officials had already briefed executives from Anthropic, Google, and OpenAI on the plans during private meetings last week.

The proposed executive order would create a government-industry "AI working group" bringing together tech executives, the NSA, the Office of the National Cyber Director, and the Director of National Intelligence.

The catalyst is impossible to ignore: Anthropic's Claude Mythos Preview - a model so powerful it autonomously discovered thousands of zero-day vulnerabilities in every major operating system and web browser, some hiding in plain sight for over 25 years. Then the UK's AI Security Institute tested OpenAI's GPT-5.5 on the same gauntlet and found it scored 71.4% on expert-level cybersecurity tasks versus Mythos's 68.6% - essentially a tie. AISI's conclusion was blunt: these offensive capabilities aren't unique to one lab. They're an emergent byproduct of general improvements in reasoning, coding, and autonomy across all frontier models.

A White House official told Reuters they wouldn't confirm or deny the report: "Any policy announcement will come directly from the president."

Why it matters: This is a 180-degree turn from January 2025, when Trump revoked Biden's AI safety executive order on his first day in office. The irony is hard to miss - the administration that dismantled the last round of AI guardrails is now proposing something potentially more aggressive. An FDA-style review process would fundamentally change the release cadence for every frontier lab. Open-source projects face the hardest enforcement questions. And the global implications are massive: the US would be moving closer to the regulatory posture the UK's AI Security Institute already operates under, narrowing one of the biggest transatlantic policy gaps. For builders and investors, the message is clear - the era of "ship it and see what happens" with frontier models may be ending. The question is whether government review can move at the speed of AI development, or whether it becomes the bottleneck that pushes the next breakthrough overseas.

Your browser just got 4 gigabytes heavier. Google didn't ask.

GOOGLE CHROME IS SECRETLY INSTALLING AI ON YOUR COMPUTER

Image Source: tech.yahoo.com

The debrief: Security researcher Alexander Hanff discovered that Google Chrome is silently downloading a 4GB AI model called Gemini Nano onto users' devices - without notification or consent. And if you delete it, Chrome downloads it again.

The details: Hanff, a computer scientist and lawyer who publishes as "That Privacy Guy," ran a controlled test using a fresh Chrome profile on macOS with zero human input. The browser evaluated the system's hardware, deemed it eligible, and downloaded the full 4GB payload during what appeared to be idle time - completing in just over fourteen minutes.

The file is called "weights.bin" and sits in a folder labeled OptGuideOnDeviceModel. It's been getting bigger over time - 3GB in April 2025, 4GB by November 2025.

But here's the kicker Hanff flagged: Chrome 147 now renders an "AI Mode" pill in the address bar that looks like it's powered by the local model. It's not. Every query typed into AI Mode is sent to Google's cloud servers for processing. The local model sits on your disk unused while the cloud-backed version collects your data.

At Chrome's global scale - roughly 3.5 billion devices - the mass distribution generates an estimated 6,000 to 60,000 tonnes of CO2-equivalent emissions. Hanff argues the behavior violates the EU's ePrivacy Directive and GDPR, and noted that Anthropic's Claude Desktop app pulled a similar move - quietly installing a browser integration bridge across multiple Chromium browsers without asking.
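That 6,000-to-60,000-tonne range is a back-of-envelope estimate. Here's a minimal sketch of how a figure in that ballpark can be derived, assuming roughly 0.001 to 0.01 kWh of network energy per gigabyte transferred and an average grid intensity of about 0.4 kg CO2e per kWh - both illustrative assumptions on our part, not parameters from Hanff's analysis:

```python
# Back-of-envelope CO2e estimate for distributing a 4 GB model to
# ~3.5 billion Chrome devices. All parameters below are illustrative
# assumptions, not figures taken from the original analysis.

DEVICES = 3.5e9               # approximate Chrome install base
MODEL_GB = 4                  # size of the downloaded model in GB
KWH_PER_GB = (0.001, 0.01)    # assumed network energy per GB (low, high)
KG_CO2E_PER_KWH = 0.4         # assumed average grid carbon intensity

def tonnes_co2e(kwh_per_gb: float) -> float:
    """Total emissions in tonnes CO2e for one full rollout."""
    total_gb = DEVICES * MODEL_GB
    kg = total_gb * kwh_per_gb * KG_CO2E_PER_KWH
    return kg / 1000  # kg -> tonnes

low, high = (tonnes_co2e(k) for k in KWH_PER_GB)
print(f"~{low:,.0f} to {high:,.0f} tonnes CO2e")
```

With these assumed inputs the sketch lands at roughly 5,600 to 56,000 tonnes - the same order of magnitude as the range Hanff cites, which shows how sensitive the headline number is to the per-gigabyte energy assumption.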

Why it matters: This is what happens when the AI race hits distribution. Google doesn't need you to opt in to Gemini - it just needs you to not opt out, and most people won't because they don't know it's happening. The pattern is becoming familiar: ship AI features as defaults, bury the toggle, and count on user inertia to build the install base. If you're using Chrome right now, there's a good chance 4GB of your storage is occupied by an AI model you never asked for and can't permanently remove without disabling experimental flags. The broader signal for the industry is clear - the consent model around on-device AI is broken, and regulators in the EU are watching. For anyone building AI-powered tools, this is a case study in how not to deploy: even if the technology works, silent installation erodes the trust you need to keep users long-term.

The rules that govern every AI company in Europe just got rewritten.

THE EU JUST PULLED AN ALL-NIGHTER TO REWRITE THE AI ACT

Image Source: atlanticcouncil.org

The debrief: The EU Council and European Parliament struck a provisional deal at dawn on May 7 to simplify the AI Act - pushing back deadlines, extending SME exemptions, and adding a new ban on AI-generated deepfake pornography and child abuse material.

The details: The deal, reached after negotiations that ran through the night, is part of the EU's Omnibus VII simplification package. The biggest change: high-risk AI rules that were scheduled to take effect on August 2, 2026, are now delayed. Stand-alone high-risk systems get until December 2, 2027. High-risk systems embedded in products like medical devices, machinery, and toys get until August 2, 2028.

The watermarking deadline for AI-generated content transparency moved to December 2, 2026. SME exemptions now extend to small mid-cap enterprises. And the deal clarifies that the AI Office handles oversight where a single provider builds both the general-purpose model and the AI system on top of it - but national authorities keep jurisdiction over law enforcement, border management, courts, and financial institutions.

The provision that broke new ground: a prohibition on AI systems that generate child sexual abuse material or create non-consensual intimate imagery of identifiable people. The European Commission proposed this package just five months ago and called it the fastest legislative turnaround on a major digital file in EU history.

Why it matters: The AI Act was supposed to be done. Companies had been scrambling to meet the August 2, 2026 deadline. Now they have 16 extra months for stand-alone systems and 24 months for embedded ones - because the harmonized standards and conformity assessment tools simply weren't ready. For US companies operating in Europe, this is a reprieve, but not a pardon. The architecture of the law stands. The penalties still reach EUR 35 million or 7% of worldwide turnover. And the new ban on AI-generated sexual abuse imagery signals that Parliament will keep adding obligations to any legislative vehicle that moves. If you build AI products that touch Europe, the playbook hasn't changed - prepare for compliance, just on a slightly longer timeline. The real takeaway: even regulators are admitting they can't keep pace with AI development. The law designed to govern the most powerful technology of the century needed a rewrite before it even took effect.

  • Anthropic is now the highest-revenue AI company in the world. Anthropic hit $30 billion in annualized recurring revenue in April, passing OpenAI's $24 billion - while spending roughly 4x less on model training. The jump from $9B to $30B happened in about eight weeks. Over 1,000 companies now spend more than $1 million per year on Claude. OpenAI disputes the figure, arguing it overstates revenue by roughly $8 billion due to accounting differences. Both companies are targeting IPOs in the second half of 2026.

  • Only 17.8% of the world's working-age population uses AI. Microsoft's new Global AI Diffusion Report shows AI adoption rose 1.5 percentage points in Q1 2026. The UAE leads at 70.1%. The US ranks 21st at 31.3%. The surprise: software developer employment hit a record 2.2 million in 2025, up 8.5% year over year - suggesting AI coding tools may be increasing demand for developers, not replacing them.

  • OpenAI shelved plans to spin out its robotics and hardware divisions before its IPO. The Wall Street Journal reported that OpenAI considered separating the units into independent entities but concluded they'd remain on its balance sheet regardless. The decision keeps OpenAI's IPO story simpler but signals the company's hardware ambitions are real enough to warrant a spinout discussion.

  • IREN is acquiring cloud infrastructure firm Mirantis in an all-stock deal worth roughly $625 million. The deal adds Kubernetes orchestration and enterprise support to IREN's AI cloud platform. The AI infrastructure build-out continues to accelerate as compute demand outstrips capacity.

  • Trump's White House is weighing government reviews of AI models before public release - and the move was prompted by Anthropic's Mythos model demonstrating offensive cybersecurity capabilities that matched or exceeded human experts. The UK's AI Security Institute found that GPT-5.5 hit similar levels, suggesting the risk is industry-wide.

Share the AI Debrief. Get paid.

Refer 1 → your name shouted out to 25,000 readers

Refer 5 → mystery reward with real dollar value

Refer 10 → $100 back, every time. No cap. Forever.

Your referral link is below - one share is all it takes.

Big thanks to these 151 readers who referred at least one person this week. You're the reason this newsletter keeps growing:

Amara T. · Jensen K. · Priya S. · Beckett L. · Zuri M. · Cade R. · Nalini W. · Otis F. · Celeste D. · Ibrahim G. · Wren H. · Marcelo V. · Tessa J. · Kian P. · Rosalie B. · Dimitri N. · Sable E. · Arlo C. · Juniper A. · Raheem S. · Lyra K. · Camden W. · Bianca F. · Soren D. · Mila T. · Jasper H. · Aniyah R. · Callum L. · Daphne G. · Eshan M. · Frankie P. · Hadley B. · Idris C. · Jolene N. · Kenji V. · Leona S. · Matteo A. · Nia K. · Omar W. · Poppy J. · Quinn F. · Reina D. · Samir T. · Thalia H. · Ulysses R. · Vera L. · Winston G. · Xiomara M. · Yosef P. · Zola B. · Adrian E. · Brielle C. · Cyrus N. · Dahlia V. · Emmett S. · Freya A. · Gage K. · Harlow W. · Inara F. · Jude D. · Kamila T. · Lachlan H. · Maeve R. · Niko L. · Opal G. · Phoenix M. · Raya P. · Stellan B. · Tatum E. · Uma C. · Viggo N. · Waverly V. · Xander S. · Yasmin A. · Zeke K. · Alessia W. · Byron F. · Corinne D. · Declan T. · Elara H. · Finley R. · Gianna L. · Heath G. · Ilana M. · Joaquin P. · Kaia B. · Lennox E. · Miriam C. · Nash N. · Orla V. · Penn S. · Ramona A. · Silas K. · Tamsin W. · Uri F. · Valentina D. · Wells T. · Ximena H. · Yael R. · Zander L. · Amira G. · Bodhi M. · Cleo P. · Dashiell B. · Eloise E. · Fletcher C. · Gemma N. · Henrik V. · Isadora S. · Jules A. · Khalil K. · Liora W. · Magnus F. · Nola D. · Orion T. · Petra H. · Ronan R. · Sage L. · Theron G. · Vienna M. · Wilder P. · Zadie B. · Alistair E. · Belen C. · Cassian N. · Dove V. · Elio S. · Fiona A. · Greyson K. · Hana W. · Iver F. · Jaya D. · Kenzo T. · Lila H. · Mercer R. · Nadia L. · Oakley G. · Paloma M. · Rio P. · Sloane B. · Tobias E. · Vesper C. · Willa N. · Yannis V. · Zinnia S.

Want your name here next week? One referral is all it takes.

Thanks for reading. Our mission is to bring AI literacy to as many people as possible - see you next week.

- Drew & the rest of the humans behind The AI Debrief
