Removal of Sam Altman from OpenAI
Based on Wikipedia: Removal of Sam Altman from OpenAI
It lasted five days. That's how long Sam Altman stayed fired from OpenAI, the company he co-founded and transformed into the most valuable artificial intelligence startup in history. From Friday afternoon to Wednesday morning, the tech world watched one of the most bizarre corporate dramas in Silicon Valley history unfold in real time.
What happened in those five days revealed something profound about the precarious balance between idealism and capitalism in artificial intelligence, about corporate governance structures that look clever on paper but crumble under pressure, and about just how much power a handful of determined employees can wield when they threaten to walk out the door.
The Friday Afternoon Massacre
On November 17, 2023, Sam Altman was watching the Las Vegas Grand Prix. He was on a Google Meet video call—nothing unusual for the CEO of a company valued at eighty billion dollars. Then, with somewhere between five and ten minutes of notice, he learned he was being fired.
The board of directors had concluded that Altman was not "consistently candid in his communications." That was the official language. What it actually meant would take months to fully understand.
Within thirty minutes, Ilya Sutskever—OpenAI's chief scientist and a co-founder of the company—invited Greg Brockman to another Google Meet call. Brockman was OpenAI's chairman and president. Sutskever informed him that Altman was out and that Brockman himself was being removed from the board, though he could keep his role as president. The board announced it all publicly thirty minutes after that.
The speed was intentional. OpenAI's bylaws, written back in January 2016, allow a majority of board members to remove any director without prior warning or a formal meeting—just written consent. It's the kind of provision that seems like good governance until someone actually uses it.
The Unusual Structure That Made This Possible
To understand why this firing could happen so abruptly, you need to understand OpenAI's deliberately strange corporate structure.
OpenAI was founded in December 2015 as a nonprofit organization. The explicit goal was to develop artificial intelligence safely, for the benefit of humanity, without being beholden to shareholders demanding quarterly returns. The founders—a group that included Elon Musk, Altman, and several prominent researchers—wanted to ensure that the people building potentially world-changing technology couldn't be pressured by investors to cut corners on safety.
But training large language models costs hundreds of millions of dollars. Running them costs even more. So in 2019, OpenAI created a complicated nested structure. A nonprofit, OpenAI Inc., sits at the top and controls everything. Below it sits a for-profit company, which in turn controls a "capped-profit" company called OpenAI Global. Microsoft invested billions of dollars into the capped-profit layer—reportedly around ten billion in cash and computing credits—but crucially, Microsoft has no voting control. The nonprofit board makes all the decisions.
This meant that when the board decided to fire Altman, Microsoft learned about it approximately one minute before the public announcement. Their billions bought them no advance warning, no veto power, nothing.
The Board That Pulled the Trigger
The group that fired Altman was remarkably small: four of the board's six members, the other two being Altman himself and Brockman. There was Sutskever, the chief scientist who had been with the company since its founding. Adam D'Angelo, the CEO of Quora, the question-and-answer website. Tasha McCauley, an entrepreneur with a background in robotics. And Helen Toner, who worked at Georgetown University's Center for Security and Emerging Technology, a think tank focused on how emerging technologies affect national security.
Three other board members had recently resigned—Reid Hoffman, the LinkedIn co-founder; Shivon Zilis, a venture capitalist; and Will Hurd, a former Republican congressman from Texas. Their departures cleared the path for what happened next. With fewer people to convince, the remaining members could act decisively.
The Fault Lines
The conflict didn't emerge from nowhere. According to reporting from Kara Swisher and The Wall Street Journal, divisions within OpenAI had been growing for years.
The release of ChatGPT in November 2022 changed everything. Suddenly this nonprofit research lab had created a consumer product used by hundreds of millions of people. Money started flowing. Altman raised the company's valuation to eighty billion dollars through a series of tender offers—transactions where employees and early investors could sell some of their shares. He convinced Satya Nadella, Microsoft's CEO, to pour billions more into the company. He described OpenAI's relationship with Microsoft as the "best bromance in tech."
But as the money grew, so did the tension between two factions that Altman himself had identified in a 2019 email obtained by The Atlantic. He called them "tribes." One tribe saw OpenAI as a for-profit company that should move fast and ship products. The other tribe remembered why OpenAI was founded as a nonprofit in the first place—to develop artificial intelligence cautiously, with safety as the primary concern.
In the weeks before his firing, Altman was pursuing ambitious deals that put him firmly in the first camp. He was seeking billions from Middle Eastern sovereign wealth funds to develop artificial intelligence chips that could compete with Nvidia, the company whose graphics processors power most of the AI industry. He was courting Masayoshi Son, the chairman of SoftBank, about building AI hardware. He had been meeting with Jony Ive, the legendary former Apple designer, about creating AI-powered devices.
Sutskever and his allies saw these moves as using the OpenAI brand—built on promises of safety and nonprofit idealism—to chase profit and scale. In October 2023, just weeks before the firing, Altman had reduced Sutskever's role within the company. Sutskever appealed to other board members.
Then came DevDay.
The Straw That Broke the Camel's Back
On November 6, 2023, OpenAI held its first developer conference. Altman took the stage and announced a parade of new products: GPT-4 Turbo, a faster and cheaper version of their most powerful model. Custom GPTs, which would let anyone create their own specialized chatbots. A GPT Store, where people could sell these creations to each other. It felt less like a research presentation and more like an Apple product launch.
According to Swisher and Alex Heath of The Verge, this was too much for the safety-focused faction. The commercialization of OpenAI had reached a point where they felt they had to act.
Around this same time, reports later emerged about a project internally codenamed Q-star. According to sources who spoke to reporters after the firing, Q-star was aimed at developing AI capabilities in logical and mathematical reasoning—specifically, getting AI systems to perform math at the level of grade-school students. This might sound modest, but current large language models are notoriously unreliable at mathematics. Getting them to reason reliably through multi-step problems would represent a genuine breakthrough. Some board members allegedly believed Altman had mishandled safety concerns related to this research.
The Coup, If That's What It Was
The firing itself was surgical. Mira Murati, OpenAI's Chief Technology Officer, was immediately appointed interim CEO. But within hours, things started falling apart.
Brockman, though invited to stay on as president, quit the company outright. Jakub Pachocki, the director of research, walked out. Aleksander Mądry and Szymon Sidor, two prominent researchers, followed. During an all-hands meeting, Sutskever defended the decision and denied accusations of a hostile takeover. An OpenAI representative frantically tried to reach Will Hurd, who had just left the board, asking for his help.
The investors moved fast. Tiger Global Management and Sequoia Capital tried to get Altman reinstated. So did Microsoft and Thrive Capital. By Saturday, The Verge reported that the board was already discussing bringing Altman back.
They came close. The board agreed in principle to resign and let Altman return. But they missed their own deadline. Meanwhile, Altman was making clear that if he came back, he would want significant changes—including replacing the entire board.
The Microsoft Move
On Sunday, November 19, with negotiations still ongoing, Microsoft made a stunning announcement. They were hiring Altman to lead a new advanced AI research team within the company. Brockman would join him, along with Pachocki, Sidor, and Mądry—some of OpenAI's most talented researchers.
It was a masterful chess move. Microsoft had invested billions into OpenAI but had no control over its board. Now they could simply hire the CEO that board had just fired, along with key members of his team. If OpenAI wanted to implode, fine. Microsoft would build its own version with the same people.
The board responded by naming Emmett Shear, the former CEO of Twitch, as OpenAI's interim CEO. Other candidates had turned the job down. Nat Friedman, the former CEO of GitHub, said no. Alex Wang, who runs Scale AI, declined. Even Dario Amodei, the CEO of Anthropic—a company founded by former OpenAI employees specifically concerned about AI safety—refused to negotiate a deal that might have merged the two companies.
Shear expressed interest in commercializing OpenAI with the board's support and said he would investigate why Altman had been fired. It was too little, too late.
The Letter
The employees had been watching all of this unfold. On Monday, November 20, they made their move.
A letter signed by 745 of OpenAI's 770 employees threatened mass resignations if the board did not step down. Ninety-seven percent of the company was prepared to walk out the door. Among the signatories, remarkably, was Ilya Sutskever—the same board member who had engineered Altman's removal just three days earlier. Sutskever publicly apologized for his participation in the board's actions.
It was an extraordinary reversal. The chief scientist who had convinced the board to fire the CEO was now signing a letter demanding the board resign so that CEO could return.
The Return
On November 22, five days after his firing, Sam Altman was reinstated as CEO of OpenAI.
The terms of his return revealed how completely the board had lost. Altman came back with an entirely new board: Bret Taylor, a former Salesforce executive who had also served as Twitter's chairman, would chair it. Adam D'Angelo, the only original board member to survive, remained. Larry Summers, the economist and former Treasury Secretary, joined as well. As part of the deal, neither Altman nor Brockman would reclaim seats on the board—a small concession—but Altman would face an internal investigation into his conduct.
Helen Toner and Tasha McCauley were out. Ilya Sutskever's fate was left uncertain; he would eventually leave OpenAI entirely in May 2024.
What Actually Happened?
In the months that followed, more details emerged about why the board had acted.
In December 2023, The Washington Post reported that concerns about Altman's allegedly abusive behavior had been a major factor. The same publication reported that a pattern of what they called deception and subversiveness—behavior that had allegedly led to Altman's departure from Y Combinator, the startup accelerator he had led before OpenAI—had ultimately convinced the board they needed to act.
In May 2024, Helen Toner went public with the board's rationale. She said Altman had withheld information from the board—for example, he had not told them about the release of ChatGPT before it happened. She also alleged that he had not disclosed his ownership stake in OpenAI's startup fund, creating a potential conflict of interest. She said he had provided "inaccurate information about the small number of formal safety processes that the company did have in place."
Most damning, she alleged that two OpenAI executives had reported "psychological abuse" from Altman to the board, providing screenshots and documentation of what they described as lying and manipulative behavior. Many employees, she said, feared retaliation if they didn't support Altman publicly.
Toner also pointed out that Altman's previous leadership role had nearly ended the same way: at Loopt, a location-sharing startup he founded, the management team had twice gone to the board and asked it to fire him over what they called "deceptive and chaotic behavior."
The Investigation and Its Aftermath
In March 2024, the internal investigation that Altman had agreed to as a condition of his return concluded. The finding: Altman's behavior "did not mandate removal."
Note the careful language. Not "Altman did nothing wrong." Not "The board was mistaken." Just that his behavior did not rise to the level that required firing him. It was the kind of finding that allowed everyone to save face while settling nothing.
Alongside that finding, Taylor announced that Altman would rejoin OpenAI's board of directors—the same board that had fired him. He would be joined by Sue Desmond-Hellmann, the former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, the former general counsel of Sony; and Fidji Simo, the CEO of Instacart.
It was a complete vindication, or at least it looked like one on paper.
But in May 2024, after journalists exposed OpenAI's non-disparagement agreements—contracts that required departing employees to sign away their right to criticize the company in exchange for keeping their vested equity—Altman faced accusations of lying. He had publicly claimed to be unaware of the provision that would cancel departing employees' equity if they refused to sign. Former employees and journalists questioned whether that was true.
The Collateral Damage
While the five-day drama unfolded, the ripple effects spread far beyond OpenAI's San Francisco headquarters.
Microsoft's stock fell nearly three percent on the initial announcement—a significant move for one of the world's most valuable companies. After Microsoft announced it was hiring Altman, the stock rose more than two percent to an all-time high. The market had made clear whom it considered the valuable asset.
Worldcoin, an iris-scanning cryptocurrency project that Altman had co-founded, saw its value drop twelve percent.
OpenAI's competitors saw opportunity. Anthropic, the AI safety-focused company founded by former OpenAI researchers, received inquiries from more than one hundred companies that had been using OpenAI's services. Google DeepMind saw a surge in job applicants. Cohere and Adept, smaller AI companies, actively recruited OpenAI employees. Some investors considered writing down their OpenAI investments to zero.
A share sale that had been in progress—led by Thrive Capital and valuing OpenAI at eighty-six billion dollars—was suddenly at risk. It would eventually proceed after Altman's reinstatement, but the uncertainty had been real.
OpenAI had to delay the launch of its online chatbot store, a product Altman had announced at DevDay just weeks earlier.
The Regulatory Fallout
The chaos drew the attention of regulators on both sides of the Atlantic.
In December 2023, Britain's Competition and Markets Authority announced it was beginning a preliminary investigation into Microsoft's relationship with OpenAI, including Microsoft's non-voting observer seat on OpenAI's board. Hours later, Bloomberg reported that the United States Federal Trade Commission was conducting its own examination of the relationship.
In February 2024, reports emerged that the Securities and Exchange Commission was investigating whether Altman's internal communications had been used to mislead investors. The U.S. Attorney's Office for the Southern District of New York—the federal prosecutors who handle Wall Street cases—had opened an investigation into Altman's statements back in November.
Microsoft's official position, stated in response to the regulatory inquiries, was that it did not hold a stake in OpenAI itself—only in the capped-profit subsidiary. It was technically accurate, even as it obscured the billions Microsoft had invested and the deep integration between the two companies.
The Commentary
The tech industry's reaction was swift and mostly sided with Altman.
Paul Graham, the co-founder of Y Combinator, called the board members "misbehaving children." Daniel Loeb, a Microsoft shareholder who runs the hedge fund Third Point, said OpenAI had "stunningly poor governance." Eric Schmidt, Google's former CEO, wrote that Altman was "a hero of mine." Jean-Noël Barrot, France's digital transition minister, announced that Altman would be "welcome in France"—a not-so-subtle attempt to attract AI investment to Europe.
Elon Musk, who had co-founded OpenAI before resigning in 2018 amid disagreements with Altman, said the board should be transparent about its reasons. At The New York Times' DealBook Summit in November, Musk called the situation "troubling" and said he had "mixed feelings" about Altman.
Satya Nadella, despite being blindsided, publicly expressed confidence in OpenAI. Privately, according to Bloomberg, he was furious.
What It All Meant
The five-day crisis revealed something fundamental about the structure of OpenAI and perhaps about the AI industry more broadly.
The nonprofit governance structure that OpenAI's founders had designed—the structure meant to ensure that safety concerns could override commercial pressures—had been tested. In theory, a board focused on the long-term interests of humanity could remove a CEO who was moving too fast, commercializing too aggressively, taking too many risks. That's exactly what the board tried to do.
But the structure couldn't survive contact with economic reality. When 97 percent of employees threatened to walk, when investors mobilized, when Microsoft offered to hire the entire leadership team, the board's authority evaporated. The people with the leverage—the employees whose skills were in demand, the investors whose money the company needed, the partner whose computing power ran the models—sided with Altman.
The Economist wrote that the removal could slow down the entire artificial intelligence industry. But that didn't happen. If anything, the episode demonstrated that slowing down wasn't really an option. The market wouldn't allow it. The employees wouldn't accept it. The only direction was forward, faster.
Sam Altman had described OpenAI's relationship with Microsoft as the best bromance in tech. After the crisis, that bromance looked less like a partnership between equals and more like a marriage where one partner controls the money. Microsoft had invested billions, and when the board tried to exercise its theoretical independence, Microsoft's implicit threat—to simply hire everyone—proved more powerful than any bylaws.
The nonprofit structure survived, technically. But everyone now understood its limits. The board could fire the CEO, in theory. In practice, only if the CEO didn't have the support of the employees, the investors, and the partners who made the whole operation possible.
Sam Altman had all three. The board had a legal technicality. The outcome was never really in doubt.