AI Death Machines. No Human Oversight. What Could Go Wrong?

Another day, another round of unhinged, racist Truth Social attacks from the president of the United States against minority congresswomen:

When you watch Low IQ Ilhan Omar and Rashida Tlaib, as they screamed uncontrollably last night at the very elegant State of the Union, such an important and beautiful event, they had the bulging, bloodshot eyes of crazy people, LUNATICS, mentally deranged and sick who, frankly, look like they should be institutionalized. When people can behave like that, and knowing that they are Crooked and Corrupt Politicians, so bad for our Country, we should send them back where they came from—as fast as possible.

Happy Thursday.


(Getty Images/Shutterstock)

Hegseth’s AI Ultimatum

by Andrew Egger

Who gets to decide when the government AI-bots are ready to start killing people without direct human oversight—the Pentagon or the AI companies?

This remarkable—some might say insane—question is at the center of a major standoff between the Defense Department and Anthropic, creator of the AI platform known as Claude. While the Pentagon has contracts with all the leading AI labs, Anthropic until this month was the only one contracted for AI use in classified settings: Claude was, for instance, reportedly involved in the operation to capture Nicolas Maduro.

But Defense Secretary Pete Hegseth has grown unhappy with two elements of the DoD’s contract with Anthropic. One, Anthropic won’t let its AI be used to conduct mass surveillance of Americans. Two, it won’t let the DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement. To the Defense Department, the idea that a contractor would be able to tie the military’s hands like this is outlandish; they should be permitted, they argue, to use AI they contract for “for all lawful purposes.”1

Hegseth could simply drop Anthropic’s contract over this, pivoting instead to any of the AI labs—OpenAI, Google Gemini, Elon Musk’s xAI—that aren’t insisting on these contractual sticking points. But he doesn’t really want to. After all, Claude is supposed to be the best, and at any rate it’s already integrated into lots of DoD systems. It’d be a hassle.

So instead, Hegseth has issued Anthropic an ultimatum: Change your policy, or we’re going to start getting nasty. This could happen in a couple different ways. The Defense Department is threatening to use the Defense Production Act to compel Anthropic to drop its usage

...
Read full article on The Bulwark →