why LLMs are bad writers but good editors
my new Atlantic essay + Claude editor setup
This post originally appeared in Jasmine's Substack.
"One problem: today's AI chatbots don't come with good taste."
There's a weird asymmetry between how tech people talk about AI's incredible technical prowess and its attenuated capacity for art. Sam Altman has predicted that large language models will soon be capable of "fixing the climate, establishing a space colony, and the discovery of all of physics," yet in an October interview with Tyler Cowen, guessed that even GPT-7 might be able to extrude only something equivalent to "a real poet's okay poem." Cowen himself is sunnier on LLM poetry, but not on visuals. In his "New Aesthetics" grant, co-funded with Patrick Collison (also an AI writing skeptic), the two note that "we haven't seen much great work that only uses AI." Neither Altman nor Cowen nor Collison is known for either understatement or techno-pessimism. So, what gives?
I tumbled down an investigative rabbit hole to answer this question: Why don't large language models model language very well? Is it something about the way models are trained? The companies' business priorities? Consumers' bad taste? Or is literature really that special? I talked to a slew of researchers, engineers, authors, and data labelers, and tinkered relentlessly with the models myself. In my new essay for The Atlantic, I argue that the answer is something like D: All of the above. I think you should read the whole thing, but to boil it down to three brief reasons: