Wikipedia Deep Dive

OpenAI

Based on Wikipedia: OpenAI

In November 2022, a small San Francisco company released a chatbot that would change how the world thinks about artificial intelligence. Within five days, ChatGPT had a million users. Within two months, it had a hundred million—the fastest-growing consumer application in history. The company behind it, OpenAI, had spent seven years in relative obscurity. Suddenly, everyone wanted to know: who are these people, and what exactly are they building?

The answer is complicated. And getting more complicated by the month.

A Nonprofit with a Trillion-Dollar Dream

OpenAI began in December 2015 as something unusual in Silicon Valley: a nonprofit. The founding team included Sam Altman, then running the prestigious startup accelerator Y Combinator, and Elon Musk, already famous for Tesla and SpaceX. They announced a billion dollars in pledged funding from a who's who of tech luminaries—Reid Hoffman of LinkedIn, Peter Thiel of PayPal, Amazon Web Services, and others.

The stated mission was ambitious to the point of audacity: to develop artificial general intelligence, commonly abbreviated as AGI, in a way that benefits all of humanity. AGI represents something fundamentally different from the AI we use today. Current AI systems are specialists—they can beat grandmasters at chess or generate impressive text, but they can't do both, and they certainly can't learn to do something entirely new without extensive retraining. AGI would be a generalist, capable of matching or exceeding human performance across essentially any intellectual task.

No one has built AGI yet. Many researchers doubt it's possible in the near term, or perhaps ever. But Altman and Musk believed it was coming, possibly within decades, and they worried about what would happen if it arrived in the wrong hands.

Their reasoning went something like this: if AGI is possible, it will be the most powerful technology ever created. A system that can outthink humans at everything—science, strategy, persuasion, invention—would reshape civilization. In the wrong hands, it could be catastrophic. Better, they argued, to have a nonprofit develop it with humanity's interests at heart than to leave it to profit-driven corporations or authoritarian governments.

There was just one problem. The billion dollars in pledges didn't materialize as cash. By 2019, OpenAI had received only $130 million of the announced funding. And the compute costs for training cutting-edge AI were astronomical and climbing fast.

The Pivot That Changed Everything

In 2019, OpenAI made a decision that would define its future—and ignite a controversy that hasn't stopped burning. It created a new entity called OpenAI Global, LLC, structured as what the company called a "capped-profit" subsidiary. Outside investors could now put money in and earn returns, but those returns were capped at 100 times their original investment.

A hundred times your money might not sound like much of a cap. But in the context of building something that could be worth trillions, it was meant to ensure the nonprofit's mission stayed paramount.

Microsoft stepped in with a billion-dollar investment. Then, in January 2023, they announced ten billion more. OpenAI's systems began running on Microsoft's Azure cloud infrastructure—a partnership that would give Microsoft the inside track on incorporating OpenAI's technology into everything from search engines to spreadsheets.

The money kept flowing. By October 2024, OpenAI completed a $6.6 billion fundraising round that valued the company at $157 billion. For context, that's more than the market capitalization of most Fortune 500 companies. For a nonprofit's subsidiary. Chasing a technology, AGI, that doesn't exist yet.

The Technology Behind the Hype

What exactly has OpenAI built that's worth all this money and attention?

The company is best known for three families of products. GPT—which stands for Generative Pre-trained Transformer—is a series of large language models. These are AI systems trained on vast amounts of text from the internet, books, and other sources, learning to predict what words come next in a sequence. The result is a system that can generate remarkably human-like text, answer questions, write code, and carry on conversations.
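To make "predicting what comes next" concrete, here is a deliberately tiny sketch in Python. It counts which word tends to follow which in a toy corpus, then samples a continuation one word at a time. Real GPT models do this with transformer neural networks, billions of parameters, and sub-word tokens rather than a lookup table, so treat this as an illustration of the training objective, not of OpenAI's implementation.

```python
import random
from collections import Counter, defaultdict

# A toy "training corpus". GPT models see trillions of words; this is three sentences.
corpus = (
    "the model predicts the next word . "
    "the model learns patterns from text . "
    "the next word depends on the words before it ."
).split()

# For each word, count which words tend to follow it (a simple bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Repeatedly sample a plausible next word: the core loop behind text generation."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        candidates, counts = zip(*options.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the model predicts the next word ."
```

The difference between this toy and a frontier GPT model is one of scale and architecture, not of goal: both are trained to get better at guessing what comes next.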

ChatGPT is essentially a conversational interface layered on top of GPT. When you type a question and get a response, you're interacting with a GPT model that has been further trained, partly with human feedback, to follow instructions and stay helpful and honest in dialogue.
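For readers curious what that interaction looks like from a developer's side, here is a minimal sketch using OpenAI's openai Python package, assuming it is installed and an API key is set in the environment; the model name is illustrative, since the available models change over time.

```python
# pip install openai  (and set the OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# A conversation is just a list of messages; the system message steers the model's behavior.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise, helpful assistant."},
        {"role": "user", "content": "Explain what a large language model is in one sentence."},
    ],
)

print(response.choices[0].message.content)
```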

DALL-E (a playful combination of the artist Salvador Dalí and the Pixar robot WALL-E) generates images from text descriptions. Type "an astronaut riding a horse on Mars in the style of a renaissance painting," and DALL-E will create exactly that. The implications for art, advertising, and visual communication are still being worked out.
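The image models are exposed through a similar interface. The sketch below, again assuming the openai Python package and an API key, requests a single image from the prompt above and prints the temporary URL the service returns; the model name and size are illustrative.

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # illustrative; the available models change over time
    prompt="an astronaut riding a horse on Mars in the style of a renaissance painting",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # a link to the generated image, valid for a limited time
```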

More recently, OpenAI unveiled Sora, a text-to-video model. Feed it a description, and it generates video that looks increasingly realistic. The technology is newer and less refined than the others, but the trajectory is clear.

What makes these systems remarkable—and concerning to some—is that no one fully understands how they work. The models learn patterns from their training data in ways that emerge from the interaction of billions of mathematical parameters. The developers can observe what goes in and what comes out, but the reasoning process in between remains largely opaque. This is sometimes called the "black box" problem, and it has significant implications for questions of safety, bias, and accountability.

The Boardroom Coup

If you wanted a symbol of the tensions roiling OpenAI, you couldn't do better than the five days in November 2023 that nearly destroyed the company.

On Friday, November 17th, OpenAI's board of directors fired Sam Altman as CEO. The announcement was terse and cryptic, saying only that he had not been "consistently candid in his communications with the board" and that the board no longer had confidence in his leadership. No specifics. No warning. Altman learned of his termination via video call.

Within hours, the tech world erupted. OpenAI's president, Greg Brockman, resigned in protest. Rumors swirled about what Altman had done to warrant such dramatic action. Was there a safety concern? A financial scandal? A fundamental disagreement about the company's direction?

The board wasn't talking. But others were. Microsoft CEO Satya Nadella appeared on television to express his surprise and displeasure. More than 700 of OpenAI's roughly 770 employees signed a letter threatening to quit and join Altman at Microsoft unless the board resigned and reinstated him.

Five days later, it was over. Altman was back as CEO. Three of the four board members who had voted to fire him were gone. The nonprofit's ability to control its for-profit subsidiary had been severely tested—and found wanting.

What actually happened? The full story has never been publicly told. But the outlines suggest a fundamental tension at the heart of OpenAI: the board, tasked with ensuring the company prioritizes safety over profit, moved against a CEO who had become the face of AI's commercial promise. And the commercial interests won.

The Musk Factor

Elon Musk's relationship with OpenAI reads like a particularly bitter divorce.

He was there at the beginning, co-chairing the nonprofit with Altman, pledging money, lending his celebrity to the cause. But he departed from the board in 2018, citing potential conflicts of interest with Tesla's own AI work. Relations deteriorated from there.

By 2024, Musk was suing OpenAI and Altman personally, alleging that they had abandoned the nonprofit's founding mission in favor of profit maximization. He called the organization's evolution a betrayal of everything it was supposed to stand for. OpenAI called his lawsuit "incoherent" and "frivolous."

Then things got stranger. In February 2025, a consortium led by Musk submitted a $97.4 billion bid to buy the nonprofit that controls OpenAI. The offer was rejected—OpenAI declared it wasn't for sale—but the bid complicated ongoing efforts to restructure the company's convoluted corporate hierarchy.

OpenAI countersued, accusing Musk of "bad-faith tactics" designed to slow the company's progress and seize its innovations for his own benefit. They claimed he had previously supported creating a for-profit structure and had even expressed interest in controlling OpenAI himself.

The legal battles continue. But they illuminate something important: the people who built OpenAI can't agree on what it should become.

The Great Restructuring

By late 2024, OpenAI's corporate structure had become a Gordian knot. The nonprofit, OpenAI Inc., controlled a capped-profit subsidiary, OpenAI Global, LLC, which was responsible for the actual AI development. The board had fiduciary duties to the nonprofit's mission, but the company was raising billions from investors who expected returns.

In December 2024, OpenAI proposed cutting the knot. The plan: convert the capped-profit subsidiary into a Delaware public benefit corporation (PBC)—a relatively new legal structure that allows companies to pursue both profit and social mission—and release it from the nonprofit's control. The nonprofit would receive equity in exchange for giving up control, and would use that equity to fund separate charitable projects.

Critics were scathing. Former employees released a legal letter titled "Not For Private Gain," arguing that the restructuring was illegal and would strip away the governance safeguards that made OpenAI different from any other tech giant. The complex structure, they argued, wasn't a bug—it was a feature, deliberately designed to keep the company accountable to its mission.

By October 2025, the restructuring was complete. The attorneys general of California and Delaware approved the new arrangement. OpenAI's for-profit branch became OpenAI Group PBC. The renamed OpenAI Foundation retained a 26% stake. Microsoft held 27%. Employees and other investors held the remaining 47%.

There's a crucial detail in the new structure: the Foundation appoints all board members of the for-profit entity and can remove them at any time. Foundation board members also serve on the for-profit board. Whether this provides genuine mission protection or merely the appearance of it remains to be seen.

The Money Machine

The numbers involved in OpenAI's business are staggering, even by Silicon Valley standards.

In April 2025, the company raised $40 billion at a $300 billion valuation—the highest-value private technology deal in history. By July, it reported annualized revenue of $12 billion, up from $3.7 billion just a year earlier. ChatGPT had 20 million paid subscribers. Five million business users.

But the costs are equally staggering. Training large language models requires massive computing infrastructure—thousands of specialized chips running for weeks or months at a time. In 2017, OpenAI spent $7.9 million on cloud computing alone—a quarter of its entire budget that year. By 2018, training a single project (bots that could play the video game Dota 2) required renting 128,000 CPU cores and 256 GPUs from Google for multiple weeks.

The company projects an $8 billion operating loss in 2025. Its spending projections through 2029 total approximately $115 billion—$17 billion in 2026, $35 billion in 2027, $45 billion in 2028. Most of this goes toward compute infrastructure, proprietary AI chips, data centers, and training new models.

OpenAI hopes to become cash-flow positive by 2029 and projects revenue of $200 billion by 2030. These are projections, not guarantees. But they illustrate the bet the company is making: spend now, scale fast, and hope the technology delivers before the money runs out.

The Stargate Gambit

On January 21, 2025, President Donald Trump announced something called The Stargate Project—a joint venture between OpenAI, Oracle, SoftBank, and a UAE sovereign wealth fund called MGX. The goal: build an AI infrastructure system in conjunction with the US government, with an estimated cost of $500 billion over four years.

The name comes from an existing OpenAI supercomputer project. The scale is unprecedented. For comparison, the entire Apollo program that put humans on the Moon cost about $260 billion in today's dollars.

By July 2025, the Department of Defense had awarded OpenAI a $200 million contract for military AI applications—alongside contracts for competitors Anthropic, Google, and Musk's own company xAI. OpenAI also struck a deal with the UK government to deploy ChatGPT and other AI tools in public services.

The nonprofit that began in Greg Brockman's living room, worried about AI falling into the wrong hands, was now working directly with governments and militaries around the world.

The Safety Problem

Throughout 2024, something concerning happened at OpenAI: roughly half of its AI safety researchers left the company.

AI safety research focuses on ensuring that AI systems behave as intended, don't produce harmful outputs, and remain under human control. It's the field dedicated to making sure that as AI becomes more powerful, it doesn't become more dangerous.

The departing researchers cited the company's "prominent role in an industry-wide problem"—a diplomatic way of saying that the race to build and deploy ever-more-capable AI was outpacing the work to make it safe.

This is the central tension that has haunted OpenAI since its founding. The company was created explicitly because its founders worried about the risks of AGI. They believed a nonprofit focused on safety would be better positioned to develop powerful AI responsibly than profit-driven corporations would be.

But OpenAI is now one of those profit-driven corporations—or at least a public benefit corporation with investors expecting significant returns. It's racing to stay ahead of Google, Anthropic, and a growing field of competitors. It's deploying products to hundreds of millions of users before fully understanding how those products work or what harms they might cause.

The safety researchers who left didn't claim that OpenAI was uniquely reckless. They said it was part of an industry-wide problem. But for a company founded on the premise that it would be different, that's a damning assessment.

The Copyright Wars

There's another set of problems that has nothing to do with superintelligent AI destroying humanity and everything to do with present-day economics: OpenAI trained its models on enormous amounts of copyrighted material without permission or payment.

In 2023 and 2024, the lawsuits started piling up. Authors sued, claiming their books had been used to train GPT without consent. Media companies sued, arguing their journalism was being repurposed without compensation. The New York Times filed a particularly high-profile case, providing examples of ChatGPT reproducing near-verbatim excerpts from Times articles.

OpenAI's defense has rested largely on the doctrine of fair use—the legal principle that allows limited use of copyrighted material for purposes like criticism, commentary, or transformation. The company argues that training AI on copyrighted works is transformative and therefore legal. Courts haven't definitively ruled on this question yet, and the outcomes could reshape the economics of AI development.

If OpenAI and similar companies are required to license all their training data, the costs would be enormous. If they're not, creators face a world where their work can be used to train systems that might eventually replace them—without any compensation whatsoever.

The Original Sin

In its founding charter, OpenAI committed to making its research and patents publicly available and to collaborating openly with other institutions. The idea was that transparency would promote safety: if everyone could see how powerful AI systems were being developed, they could identify risks and contribute solutions.

That commitment has eroded. OpenAI now restricts access to its most capable models, citing both competitive concerns (other companies would copy their work) and safety concerns (bad actors might misuse the technology). The company that criticized AI development happening behind closed doors at Google and Facebook now keeps much of its own work behind closed doors.

This is perhaps the original sin from which all other criticisms flow. OpenAI asked the world to trust it with one of the most consequential technologies in human history. It claimed moral authority based on being a nonprofit focused on benefiting humanity. Then it became something else—something much more like the companies it defined itself against.

Whether that transformation was necessary or inevitable, whether the nonprofit structure was ever realistic for a project requiring billions of dollars, whether the current structure adequately protects the original mission—these are questions without easy answers.

What Comes Next

Sam Altman has said he expects developing AGI to be a decades-long project. But the capabilities of AI systems have advanced faster than almost anyone predicted. ChatGPT went from non-existent to ubiquitous in months. Each new version of GPT demonstrates capabilities that would have seemed like science fiction a few years earlier.

OpenAI is betting that this trajectory continues—that the systems will keep getting more capable, that AGI or something close to it will eventually emerge, and that when it does, OpenAI will be positioned to lead.

The company has the resources for that bet. Billions in funding. Partnerships with Microsoft and various governments. Some of the most talented AI researchers in the world. A user base of hundreds of millions who have already integrated AI into their daily lives.

What it doesn't have is the moral clarity of its founding vision. The nonprofit that would ensure AGI benefits all humanity has become a company that needs to show returns to investors. The research organization committed to transparency has become a corporation guarding trade secrets. The safety-first outfit has seen half its safety researchers walk out the door.

Maybe that's fine. Maybe the original vision was naive, and this is simply what it takes to actually build transformative technology. Maybe OpenAI will still achieve its founding goal, just through different means than its founders imagined.

Or maybe not. Maybe the compromises accumulated along the way will matter when the decisions matter most. Maybe the structure designed to keep humans in control will have been dismantled just as control becomes most important.

We're about to find out. One way or another, we're all about to find out.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.