THE SWELL #10
SAIL Weekly Digest · Week of March 22–27 | Issue #10
This week, the AI industry drew sharp lines — between training and inference, between focus and distraction, between building models and actually deploying them. OpenAI killed Sora to go all-in on coding and enterprise. Jensen Huang declared the inference era at GTC. And Nathan Lambert made the case that recursive self-improvement is real but lossy — the singularity has friction.
The Week in AI
OpenAI shuts down Sora to refocus on enterprise — The move was so abrupt that Disney found out hours before the public announcement. CEO of Applications Fidji Simo told staff: “We cannot miss this moment because we are distracted by side quests.” The company is pivoting entirely to coding agents and business users as Anthropic’s annualized revenue hits $19 billion.
GTC 2026: Jensen declares the inference era — NVIDIA unveiled the Vera Rubin + Groq architecture, promising a 35x improvement in throughput per megawatt over Blackwell. Jensen’s thesis: every company needs an “OpenClaw strategy” — the harness for deploying AI agents at planetary scale matters more than the model itself.
Iran strike threatens global helium supply — An Iranian attack on Qatar’s Ras Laffan facility damaged 14% of global helium exports. Liquid helium boils off within 35–48 days and, once vented, escapes Earth’s atmosphere forever. South Korea — home to 80% of the world’s HBM production — sourced 64% of its helium from Qatar, but leading chipmakers say supply is diversified and secure.
This Week from SAIL Authors
The Inference Economy & Agent Platforms
Jensen’s OpenClaw Thesis — Azhar argues that the training era is over and the inference economy has arrived. The harness — not the model — is what drives adoption at scale. He draws on his own token usage (870 million in a single day) to illustrate what agentic deployment looks like when the tooling crosses the reliability threshold. Exponential View → Read more
Dreamer: the Personal Agent OS — David Singleton — Swyx and Alessio interview the former Stripe CTO behind /dev/agents, now rebranded as Dreamer — a consumer-first platform for discovering, building, and using AI agents. The episode was recorded just before the team announced they were joining Meta Superintelligence Labs. Latent Space → Read more
AI Industry Shakeout
OpenAI Is Shutting Down Sora, Its AI Video App — Lee traces OpenAI’s decision to kill Sora back to intensifying competition from Anthropic and Google. With Claude commanding nearly 70% of US business AI subscriptions and Gemini at 650 million users, OpenAI needs every GPU it can free up for GPT-5. Understanding AI → Read more
How to Think About AI Company Finances — Lee lays out a framework for why OpenAI and Anthropic losing more money every year follows the standard tech playbook rather than signaling trouble — and why Amazon lost money for nine years before becoming one of the most valuable companies in the world. Understanding AI → Read more
Science, Models & the Frontier
Why There Is No “AlphaFold for Materials” — AI for Materials Discovery with Heather Kulik — Kulik, one of the first materials scientists to combine computational tools with data-driven modeling, explains why biology’s AlphaFold moment can’t simply transfer to materials. The datasets are noisy, the design space is enormous, and LLMs still can’t design a 22-atom ligand. Latent Space → Read more
A Visual Guide to Attention Variants in Modern LLMs — Raschka compiles a visual gallery of 45 LLM architectures and walks through every major attention variant — from MHA and GQA to MLA, sparse attention, and hybrid designs — explaining why DeepSeek chose MLA as a quality-preserving efficiency move at scale. Ahead of AI → Read more
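For readers new to the attention-variant zoo Raschka surveys, a toy sketch of grouped-query attention (GQA) shows the core efficiency idea: many query heads share a smaller set of key/value heads, shrinking the KV cache. Setting n_kv_heads equal to n_q_heads recovers standard MHA, and a single KV head gives multi-query attention. The function and parameter names below are illustrative, not taken from the article:

```python
import numpy as np

def grouped_query_attention(x, Wq, Wk, Wv, n_q_heads, n_kv_heads):
    """Toy GQA: n_q_heads query heads share n_kv_heads key/value heads.

    n_kv_heads == n_q_heads -> multi-head attention (MHA)
    n_kv_heads == 1         -> multi-query attention (MQA)
    """
    seq, d_model = x.shape
    d_head = d_model // n_q_heads
    group = n_q_heads // n_kv_heads  # query heads per shared KV head

    # Queries get the full head count; keys/values only n_kv_heads.
    q = (x @ Wq).reshape(seq, n_q_heads, d_head)
    k = (x @ Wk).reshape(seq, n_kv_heads, d_head)
    v = (x @ Wv).reshape(seq, n_kv_heads, d_head)

    outs = []
    for h in range(n_q_heads):
        kv = h // group  # which shared KV head this query head reads
        scores = q[:, h] @ k[:, kv].T / np.sqrt(d_head)
        # Numerically stable softmax over the key dimension.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        outs.append(weights @ v[:, kv])
    return np.concatenate(outs, axis=-1)
```

With 4 query heads sharing 2 KV heads, the KV projections (and cache) are half the size of MHA’s while the output shape is unchanged — the trade Raschka describes GQA making, and the baseline against which MLA’s latent compression goes further.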
Lossy Self-Improvement — Lambert makes the case that recursive self-improvement is real but won’t produce a fast takeoff. Instead of an exponential intelligence explosion, expect “lossy self-improvement”: models become core to the development loop, but friction erodes every assumption of the singularity thesis. Interconnects → Read more
AI & the Creative Process
Why LLMs Are Bad Writers but Good Editors — Sun’s new Atlantic essay investigates why models that can “fix the climate” still can’t write a good poem. The answer: verifiability, misaligned post-training, and a lack of grounding in real life. She then walks through, step by step, how she built a custom Claude editor from a personal rubric — and argues it’s as good as many human editors. Jasmi.News → Read more
The AI Economy & Geopolitics
Data to Start Your Week — Helium Special — The weekly data roundup breaks down why Iran’s strike on Qatar’s Ras Laffan facility matters for chips: 34% of global helium comes from Qatar, liquid helium expires in 48 days, and South Korea’s HBM fabs are directly exposed — but the cost impact on semiconductors remains modest at 0.5–1% of fab costs. Exponential View → Read more
Full Library
Access the complete, searchable archive of SAIL Media in our Sitemap
Dreamer: the Personal Agent OS — David Singleton — Swyx & Alessio | Latent Space
OpenAI Is Shutting Down Sora, Its AI Video App — Timothy B. Lee | Understanding AI
Why LLMs Are Bad Writers but Good Editors — Jasmine Sun | Jasmi.News
A Visual Guide to Attention Variants in Modern LLMs — Sebastian Raschka | Ahead of AI
Why There Is No “AlphaFold for Materials” — Heather Kulik — Swyx & Alessio | Latent Space
How to Think About AI Company Finances — Timothy B. Lee | Understanding AI
Data to Start Your Week — Helium Special — Azeem Azhar & Hannah Petrovic | Exponential View
Jensen’s OpenClaw Thesis — Azeem Azhar | Exponential View
Lossy Self-Improvement — Nathan Lambert | Interconnects
Join us next Thursday for our Weekly Substack LIVE!