Effective Altruism Has Very Good Epistemics
For most of my time as an effective altruist, I didn’t know any of the famous EAs. I was just a guy with a blog. But over time, I’ve gotten to know more high-up effective altruists, especially since I’ve started working for Forethought. And as I’ve done this, I’ve grown increasingly impressed with their decision-making. The community leaders who shape beliefs are hugely impressive.
I used to wonder: to what extent are the crucial EA ideas carefully vetted? Did adequate research go into deciding that engineered pandemics were a bigger existential risk than nuclear weapons? Did people think enough before deciding that AI was a major existential risk? To what extent were the ideas taken up for mimetic reasons, rather than because of their accuracy?
But I’ve grown a lot more confident in the decision-making process. A huge amount of analysis and discussion goes into people’s views. More than a decade of full-time work went into writing What We Owe The Future. All the arguments were carefully vetted, and many smart people discussed these ideas before any were made public. If Toby Ord or Will MacAskill says something in a talk, I am now very confident that a lot of thought went into it. You should treat it more as an expert’s takeaway after careful consideration than as a random throwaway take.
The big-name effective altruist public intellectuals that I’ve met are both very clever and very meticulous. If you ever have the good fortune to speak with either Will MacAskill or Toby Ord, it will very quickly become clear that they are quite brilliant. I’ve had a number of conversations where I mention to Will some philosophy argument I’ve been thinking about for days, only for him to raise a number of very strong objections that I’d never thought of.
Toby is similarly brilliant: he has publications in math, computer science, and philosophy, and is somehow also the person responsible for all the best arguments against an imminent intelligence explosion, even though he thinks one is reasonably likely. He’s just OP.
It isn’t just that they’re smart; they are also extremely meticulous. I am not this careful by default. So it’s nice to know that the community that I am part of is, in general, a lot more careful than I am. I wouldn’t trust myself unilaterally to form beliefs about which existential threats
