
What If Most Longtermists Are Wrong About The Primary Aim?

1 The case for flourishing

I have gotten one-shotted by every Forethought (for whom I now work) report I’ve ever read. The first one I read was Preparing For The Intelligence Explosion, which very quickly convinced me that AI was likely to be a big deal soon, and that this had totally worldview-upending implications. The second was Better Futures, which pretty dramatically changed my mind about what the best thing to do is. The third made the case for the realistic possibility of AI-enabled coups, which I agreed with, but it didn’t radically shatter my worldview, because my pre-existing worldview didn’t depend on the assumption that AIs wouldn’t be useful for coups (that would be a sort of weird pillar of a worldview).

In this article, I’m going to talk about the Better Futures series. I think the ideas in it are both:

  1. Hard to deny after you think about them.

  2. Not obviously the sorts of things you’d think about.

  3. Hugely important.

(I know I said both and then listed three things. Well fuck tha (grammar) police).

The core thesis of Better Futures is that we should be working more on trying to make good futures better. At the margin, that is a better thing to pursue than reducing existential risks. We should promote flourishing, not just survival. The core argument is very simple: most future value is lost through failure to reach near-optimal futures, and yet almost no one is working on getting a near-best future. If something is the source of the majority of lost future value and yet only like twelve people are working on it, it’s probably a pretty good thing to work on.

First, why think that missing near-best futures is where most value is lost? The answer is that value is fragile. Only a small fraction of possible futures are near-best, and there’s no inevitable force that guarantees a really good future. Thus, we should be surprised to get a near-best future for the same reason we’d be surprised by any other highly specific future.

There’s also an inductive point: no society in history has been near-optimal. For most of history, people owned slaves, which left society far worse than it might otherwise have been (Source???). We’ve built giant torture farms where we mistreat hundreds of billions of animals. We’re nowhere near trying to maximize value. So absent some highly specific force driving ...
