THE SWELL #16
SAIL Weekly Digest May 4–8, 2026 | Issue #16
China is the primary focus this week. Ten members of the SAIL team recently returned from a ten-day trip to China for an inside look at Moonshot, MiniMax, Z.ai, Unitree, and ByteDance. Observations included researchers sleeping on cots through Labor Day and foundation models wired for monetization long before reaching scale. The local research culture came across as surprisingly humble, a picture that moves well beyond the standard geopolitical script.
ICYMI: Nathan Lambert and Florian Brand unpacked the trip on yesterday’s SAIL Live, along with insights on the current state of open models. → Watch the replay.
The Week in AI
Anthropic’s annualized revenue overtook OpenAI’s, hitting roughly $30B against OpenAI’s $24B — and in the same window Anthropic committed $200B over five years to Google Cloud while OpenAI raised $122B at an $852B valuation, formalizing the multi-cloud, multi-backer era of frontier scaling.
The Center for AI Standards and Innovation finalized pre-release evaluation agreements with Google DeepMind, Microsoft, and xAI on May 5 — extending the framework already in place with OpenAI and Anthropic and putting all five frontier labs under government red-teaming before launch.
Anthropic announced an enterprise AI services joint venture with Blackstone, Hellman & Friedman, and Goldman Sachs on May 4 — a signal that the next layer of frontier-lab revenue moves from APIs to full-stack deployment inside large institutions.
This Week from SAIL Authors
Notes from China
Notes from inside China’s AI labs — In China the LLM community feels more like an ecosystem than battling tribes; Nathan visits Z.ai, Moonshot, Tsinghua, Meituan, Xiaomi, and 01.ai across 36 hours and reports humble engineer-led research, students integrated as peers, and a build-everything mentality where Meituan and Xiaomi train their own LLMs because they can. — Interconnects
Exponential View #572: AI’s moats, myths and moral loopholes — Azeem reports from China alongside Hannah Petrovic — Zhipu serving 5.5T tokens a day, developers joining the platform at ten a minute, Claude as the de facto coding tool inside labs that nominally can’t access it — and pairs the trip with Jasmine Sun’s NYT essay on Silicon Valley’s private fatalism over AI’s labor effects. — Exponential View
We Spent 10 Days Touring Chinese AI Labs. Here’s What We Saw. — Lily Ottinger and Kai Williams recap visits to Unitree, MiniMax, Moonshot, and Z.ai, where AGI-as-religion meets monetization-first product strategy: MiniMax’s biggest revenue line is AI companions, Galbot’s pharmacy robots ship a million orders a year, and ByteDance’s Doubao has 350M MAU largely from older users displaced by Baidu’s decline. — SAIL Exclusive
The Compute Equation
The distillation panic — Anthropic’s “distillation attacks” framing risks branding a standard industry technique as criminal; Nathan argues the policy response forming around it — a House bill, an executive order, and congressional oversight — could effectively wall the U.S. off from Chinese open-weight models without producing a domestic substitute for 6+ months. — Interconnects
Data to start your week: AI boom, nowhere near the ceiling — B200 rental prices up 114% in six weeks, Lightning AI customers seeking 10x their current GPU fleet, and Sightline estimating that 30–50% of 2026 hyperscale capacity is delayed; the demand crunch hasn’t even arrived because most enterprises still aren’t buying agents. — Exponential View
AI and the Knowledge Frontier
Doing Vibe Physics — Breakthrough-Prize physicist Alex Lupsasca walks Swyx through how GPT-5.x reproduced his hardest paper in eleven minutes and then derived 110 pages of novel quantum-gravity calculations over three days — including a year-old graviton problem solved before his advisor’s plane landed. — Latent Space
I don’t think we are close to “AI scientists” — Timothy unpacks Marc Andreessen’s “your agent is just its files” claim and argues that’s exactly the problem: OpenClaw-style agents can’t form implicit knowledge from inference-time observations, which is the raw material humans use to fashion original insight. — Understanding AI
Ken Liu on AI and Freedom — The Three-Body Problem translator and Dandelion Dynasty author joins Jordan and Irene to argue technology is the most human thing we do — and that the real AI risk isn’t sci-fi superintelligence but humans gradually treating other humans as machine components. — ChinaTalk
Full Library
Access the complete, searchable archive of SAIL Media in our Sitemap.
We Spent 10 Days Touring Chinese AI Labs. Here’s What We Saw. — Lily Ottinger & Kai Williams | SAIL
Notes from inside China’s AI labs — Nathan Lambert | Interconnects
I don’t think we are close to “AI scientists” — Timothy B. Lee | Understanding AI
Ken Liu on AI and Freedom — Jordan Schneider & Irene Zhang | ChinaTalk
Doing Vibe Physics — Alex Lupsasca, OpenAI — Swyx | Latent Space
Exponential View #572: AI’s moats, myths and moral loopholes — Azeem Azhar & Hannah Petrovic | Exponential View
The distillation panic — Nathan Lambert | Interconnects
Data to start your week: AI boom, nowhere near the ceiling — Azeem Azhar, Greg Williams & Nathan Warren | Exponential View
Join us next Thursday for our Weekly Substack LIVE!

