Is OpenAI Hellbent on Destruction? On the Privacy, Security, & Sociopolitical Nightmares of Atlas Browser & Sora 2
It’s always worth checking yourself when you feel a sense of doom. As Panic! at the Disco reminds us, “It’s much better to face these kinds of things with a sense of poise and rationality.” Are your concerns founded? Are they priorities (because, boy-o, we don’t lack for things to feel concern about these days)? So I’ve been self-evaluating for a while as my thoughts coalesced.
It started a few weeks ago, as my timeline was increasingly dotted with videos of folks deepfaking OpenAI chief Sam Altman into various scenarios, running the gamut from banal to silly to alarming. Then I started seeing friends and colleagues deepfaking their own likenesses into popular IP. A whole host of other Sora 2 trends and viral videos sprang up. The latest is cats with guns.
This is the scenario that I—learning from security researchers much smarter than me—have been concerned about for years, dating back to my reporting on deepfakes and mis-/disinformation in the 2010s and early 2020s. Video generators have been getting exponentially more capable over the past few years, so I knew this moment was going to come at some point. Google’s release of Veo 2 was the first time it felt like it was happening, and Sora 2 has sealed it for me.
I don’t mean to condemn anyone who has used Sora 2 this way, or who is curious to. It’s a perfect hack of evolutionary psychology—of course we’d want to try out a tool like this. Moreover, I continue to believe that even the most critical AI scholars have a responsibility to understand how these tools work (i.e., “know your enemy”), and some folks will learn best through direct experimentation. I discuss this in a bit more depth on a recent episode of Urgent Futures with media scholar Danny Snelson (for that and so many other reasons, it’s worth a listen or a watch!):
Still, it has filled me with dread. And then, a few days ago, it got worse. OpenAI launched Atlas, its “agentic browser”: a Chromium-based browser with ChatGPT built in natively to do tasks for you. An example of what this looks like in practice: you find a recipe you like, then ask ChatGPT in the sidebar to assemble your Instacart order for you. Now apply this to every facet of your web experience and you have a sense of the allure of such a tool.