AI 2027 Uber Alles
1 AI 2027
AI 2027 is a model written up by Eli Lifland, one of the world's top forecasters; Daniel Kokotajlo, who made a series of extremely accurate early predictions about AI; Scott Alexander, who I'm told has a blog or something; and various others. Its aim is to forecast how AI will go. The core claim of the forecast is that there is a reasonable probability that we will get AI that can do everything better than humans fairly soon, potentially within just a few years. Their modal scenario was AGI in 2027; their median scenario was AGI in 2028.
Now, maybe that sounds outlandish. But remember, there has been a consistent trend in which AI's maximum task length has been doubling roughly every seven months. What does that mean? Take a task that takes humans some number of hours to perform on average, and ask: can AIs do it? AIs have been able to complete longer and longer tasks, such that about every seven months, they gain the ability to do most tasks that take humans twice as long.
If you plot out a doubling every seven months, then over the course of 35 months (about three years) you get five doublings. Maximum task length increases by a factor of 2^5 = 32, and now, instead of consistently being able to do things that take a few hours, AI will be able to do things that take days. Double that a few more times and you get AI able to do things that take months and then years. By that point, AIs can take over more of the programming of AIs, and will have far surpassed us.
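To make the compounding concrete, here is a minimal Python sketch of that extrapolation. The one-hour starting point is an illustrative assumption of mine, not a figure from AI 2027 itself; only the seven-month doubling time comes from the trend above.

```python
# Extrapolate maximum task length under a fixed doubling time.
# BASELINE_HOURS is an assumed, illustrative starting point,
# not a number taken from the AI 2027 forecast.
DOUBLING_MONTHS = 7
BASELINE_HOURS = 1.0

def max_task_hours(months: int) -> float:
    """Maximum task length after `months`, doubling every DOUBLING_MONTHS."""
    return BASELINE_HOURS * 2 ** (months / DOUBLING_MONTHS)

for months in range(0, 71, 7):
    print(f"month {months:2d}: ~{max_task_hours(months):6,.0f} hours")
```

Five doublings (35 months) multiply the task length by 32, turning hours into days; ten doublings (just under six years) multiply it by 1,024, turning a one-hour task into roughly half a year of full-time work.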
Now, the AI 2027 people have a more sophisticated model. But hopefully the argument I’ve just given illustrates that something in the vicinity of AGI extremely soon is reasonable, even though the exact details are debatable. AI 2027 seems like a pretty decent forecast by some smart people and ought to be taken seriously.
Lots of people on the internet have taken to dunking on AI 2027. Some of these criticisms are reasonable. In their initial forecast, they made some subtle mathematical errors which, when corrected, push their timeline back a bit. Another reasonable criticism is that Daniel Kokotajlo is #problematic because as of 2017, he endorsed the (VERY FAILED) self-sampling assumption and has YET to ...