SAIL Media

🌻 why LLMs are bad writers but good editors

my new Atlantic essay + Claude editor setup

Jasmine Sun
Mar 25, 2026
∙ Paid
[Illustration: a robotic hand writing with a pencil. Source: The Atlantic / Alicia Tatone]
This post originally appeared in Jasmine’s Substack.

“One problem: today’s AI chatbots don’t come with good taste.”

There’s a weird asymmetry between how tech people talk about AI’s incredible technical prowess and its attenuated capacity for art. Sam Altman has predicted that large language models will soon be capable of “fixing the climate, establishing a space colony, and the discovery of all of physics,” yet in an October interview with Tyler Cowen, guessed that even GPT-7 might be able to extrude only something equivalent to “a real poet’s okay poem.” Cowen himself is sunnier on LLM poetry, but not on visuals. In his “New Aesthetics” grant, co-funded with Patrick Collison (also an AI writing skeptic), the two note that “we haven’t seen much great work that only uses AI.” Neither Altman nor Cowen nor Collison is known for either understatement or techno-pessimism. So—what gives?

I tumbled down an investigative rabbit hole to answer this question: Why don’t large language models model language very well? Is it something about the way models are trained? The companies’ business priorities? Consumers’ bad taste? Or is literature really that special? I talked to a slew of researchers, engineers, authors, and data labelers, and tinkered relentlessly with the models myself. In my new essay for The Atlantic, I argue that the answer is something like D: All of the above. I think you should read the whole thing, but to boil it down to three brief reasons:
