AI and Folk Cartesianism - Part 1: Defining the Problem
“Let me see if I understand your thesis. You think we shouldn’t anthropomorphize people?” -Sidney Morgenbesser to B.F. Skinner
The 2020s have made my philosophy degree feel more worthwhile. Not only has social media gotten a ton of new people interested in philosophy, but the real world has also made many of philosophy’s main questions more relevant. AI especially has brought many philosophical topics into the mainstream, but it’s also polarized a lot of the debates. Back in 2011, I could say things like “The human mind is composed of natural processes, like everything else in the world, and therefore it could probably eventually be replicated by a machine, because in some sense it itself is a machine,” and people would have long, drawn-out conversations with me about whether that’s true or what it implies about language or experience or society. Now, when I say the same thing, it’s much more likely that I’ll get accused of “just buying into the recent AI hype,” or told that I’ve been tricked by secularism or capitalism or worse into denying the fundamental transcendent character of human minds. This used to be a pretty marginal question, but now it has entered mainstream partisan debate. Sides have been taken.
What’s odd is that a lot of critics of the “minds are machines” view used to implicitly defend it. Back in the 2010s, there was a general worry about anthropocentrism. The belief that humans were in some way unique or separate from the natural world was regularly sneered at as “naive humanism” at best and as a way of justifying exploitation of the natural world at worst. There was also a lot of talk about Cartesianism (the philosophy of René Descartes) as the origin of many of society’s problems. Specifically, mind-body dualism was accused of denigrating embodiment and elevating technocratic, authoritarian reason. I always thought both criticisms went too far, but analytic philosophy had turned me into an anti-Cartesian naturalist, so I was able to nod along and contribute to these conversations where I could. Since then, attitudes toward Cartesianism have flipped. Many of the same people now say that AI cannot, by definition, ever have knowledge, because it lacks subjective first-person experience, or that it’s missing some transcendent, non-physical characteristic that the human mind has; both are implicitly Cartesian takes. Accusations of anthropocentrism have also been replaced with accusations of anthropomorphizing. ...