OpenAI is a normal company now

This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here.
OpenAI turned 10 today.
For most of its life, the company has been defined by its weirdness.
There was that weird corporate structure — the world’s most valuable startup, tucked somehow inside a nonprofit organization. There was the tumultuous corporate history, with Sam Altman’s now-legendary firing and quick re-hiring. There was the series of high-profile departures, with Altman’s top lieutenants regularly leaving in frustration to found their own multi-billion-dollar AI ventures. And there was the unprecedented promise that the company would spend more than a trillion dollars building infrastructure to serve its clients, long before such demand arrives.
Perhaps weirdest of all, though, was the series of promises the company made when it was founded. There was the mission to “ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.” (At the time of its founding in 2015, the suggestion that AGI would soon be possible was seen as quite weird.) And there were the promises it once made to achieve that mission, including that if a rival lab came close to safely achieving AGI, OpenAI would stop its own work and help them.
As 2025 draws to a close, though, much of that weirdness has faded. The company has converted its for-profit arm from a “capped-profit” enterprise to a more normal one. It has a steady leader in Altman, who is building out a growing roster of seasoned corporate deputies, including most recently former Slack CEO Denise Dresser as chief revenue officer. (She is expected to push hard into enterprise sales, where Anthropic has gained an advantage.)
And while OpenAI continues to acknowledge the peril that powerful AI models will bring, over the past year it has shifted its focus to much more normal business risks: that revenue growth will slow; that engagement will decline; that a competitor will steal market share.
All of this has been evident in the run-up to ChatGPT 5.2, which OpenAI released today. It comes out a little over a week after Altman declared a “code red” at the company, instructing employees to put more focus on the core ChatGPT experience and to delay work on ads, e-commerce agents, and its Pulse daily news digest. Altman is concerned about ...