SAIL Media

I don’t think we are close to “AI scientists”

Today's AI agents are not designed to extract deep insights from new observations.

Timothy B. Lee
May 06, 2026
This post originally appeared in Understanding AI.

“At the end of the day, your agent is just its files.”

In February, my colleague Kai Williams pointed out that LLMs have an uncanny ability to recognize authors based on their unpublished prose. In recent weeks, journalists like Megan McArdle and Kelsey Piper have confirmed this.

I decided to try it out for myself. Back in 2012, a friend paid me $500 to write an essay about the Great Canadian Maple Syrup Heist. It never got published. So on Friday, I opened ChatGPT in incognito mode and pasted in five paragraphs from the essay.

ChatGPT said it wasn’t sure who the author was, guessing that it might be Nate Silver or my former Vox.com colleague Matthew Yglesias. When I added four more paragraphs, the chatbot responded: “This one I can identify pretty confidently—it’s by Timothy B. Lee.”

But when I asked ChatGPT why it thought the essay was written by me, it couldn’t give me a specific reason. “Even though Timothy B. Lee often writes clear, explanatory pieces, there’s nothing here that acts like a fingerprint—no recurring phrases, specific policy framing, or known article structure that ties it definitively to him.”

I think there’s a lesson here that goes well beyond identifying authors.
