A take on Artificial General Intelligence I found informative and helpful in framing all the discourse around the term
Robin Sloan argues that Artificial General Intelligence is already here, offering a big-picture framing of the whole discourse around LLMs and their implications that I found pretty compelling:
The key word in Artificial General Intelligence is General. That’s the word that makes this AI unlike every other AI: because every other AI was trained for a particular purpose and, even if it achieved it in spectacular fashion, did not do anything else. Consider landmark models across the decades: the Mark I Perceptron, LeNet, AlexNet, AlphaGo, AlphaFold … these systems were all different, but all alike in this way…
If you appeared in a puff of smoke before the authors of that paper, just after publication — a few months before half of them cleaved from OpenAI to form Anthropic — and carried with you a laptop linked through time to the big models of 2026, what would their appraisal be? There’s no doubt in my mind they would say: Wow, we really did it! This is obviously AGI!
…Pile up the tendencies: the Bay Area is the land of the overthinkers; a linguistic technology invites endless rumination about both language & intelligence; it’s more fun to define a cool new standard than go along with a boring old one; the feeling of every creative project, upon completion, is the same: It’s not quite how I imagined it … None of this should prevent us from using plain language to acknowledge an obvious capability.