AI Beats the Clock

In 1997, IBM’s Deep Blue beat Garry Kasparov, who was the world’s best chess player at the time. Almost 30 years later, the insight that sticks with me is this: Deep Blue didn’t defeat Kasparov. It defeated the clock. Deep Blue did not understand chess in any meaningful way. Under the same clock, it just searched orders of magnitude more positions than Garry could. That is where we are with LLMs. ...

May 12, 2026 · 1 min

The Plateau Doesn’t Matter

The people who focus on the limitations of LLMs might be right. But it’s an academic discussion. The utility we’re getting from them is clearly accelerating. A simple thought experiment makes this clear. The human brain hasn’t meaningfully evolved in 2,000 years. That hasn’t stopped progress in science, technology, art, or literature. What changed was everything around the brain: tools, technology, institutions, society, and accumulated knowledge. LLMs are at the same kind of inflection point. The model is the brain; the agents, tools, and workflows around it are where the next decade of innovation happens. ...

May 12, 2026 · 1 min

Coding Agents and AGI

I’m impressed every time I use Claude Code. The stochastic-parrot people are right in a narrow sense. Even the latest models, if you use them directly, feel like using ChatGPT in 2023. But couple that capability with an agent that can read, research, reason, prioritize, manipulate files, and run tools, and the result is something else entirely. My reaction is invariably: “Holy shit. This thing is really good at problem solving.” ...

May 12, 2026 · 1 min