Karpathy: AGI Is Still a Decade Away
AI
AGI
agentic systems
LLMs
coding
links
Four takeaways from Karpathy’s chat with Dwarkesh — the decade of agents, RL through a straw, and why coding LLMs still aren’t reliable collaborators.
Takeaways from Andrej Karpathy’s recent chat with Dwarkesh Patel [1]:
- He believes full-on AGI is still about a decade away; rather than declaring this "the year of agents", he calls it the beginning of "the decade of agents". Don't expect a paradigm shift overnight.
- He thinks today’s RL (reinforcement learning) “sucks supervision through a straw” — rewarding or punishing an entire trajectory with a single number. “You’ve done all this work only to find at the end you get a single number… it’s just stupid and crazy.”
- He’s skeptical about “reflective” data loops where models train on their own outputs, warning that purely synthetic self-training could collapse into low-entropy, repetitive behavior. True reflection, he notes, is still uniquely human.
- He thinks coding LLMs are smart but not yet reliable collaborators: they struggle to reason about an unfamiliar codebase and default to boilerplate when nuance matters.
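The "straw" complaint about RL can be sketched in a few lines: in outcome-only reinforcement learning, an entire trajectory collapses into one scalar reward, and every step receives that same signal regardless of which steps actually mattered. A toy Python illustration (the function and step names here are hypothetical, chosen just to make the point):

```python
def trajectory_credit(actions, final_reward):
    """Spread a single end-of-episode reward uniformly over all steps.

    This is the crux of the complaint: one number must stand in for
    per-step feedback, so useful and useless steps are credited alike.
    """
    return [(action, final_reward) for action in actions]

# A five-step "episode": perhaps only one step was decisive,
# but a single scalar cannot say which one deserved the credit.
steps = ["plan", "lookup", "draft", "revise", "submit"]
for action, credit in trajectory_credit(steps, final_reward=1.0):
    print(f"{action}: {credit}")
# Every line prints a credit of 1.0, whatever the step contributed.
```

Per-step supervision (process rewards, dense feedback) is one of the directions proposed to widen that straw.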
References
[1] Patel, Dwarkesh. “Andrej Karpathy — AGI is still a decade away.” Dwarkesh Podcast. https://www.dwarkesh.com/p/andrej-karpathy
Originally posted on LinkedIn.
