Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Based on Wikipedia: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
The Bill That Almost Changed Everything
In September 2024, California Governor Gavin Newsom killed what might have been the most ambitious artificial intelligence safety law ever proposed in the United States. With a single veto, he ended months of fierce debate over a question that has divided technologists, ethicists, Hollywood celebrities, and venture capitalists alike: How do you regulate something so powerful it might not even exist yet?
Senate Bill 1047, officially titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was designed to prevent catastrophic harm from AI systems that exceed current capabilities. Not the chatbots and image generators we have today. The ones coming next.
What Made This Bill Different
Most technology regulation responds to problems after they've occurred. We got seatbelt laws after decades of car crashes. Social media regulations emerged only after widespread documentation of their harms. SB 1047 attempted something unusual: regulating technology based on its potential future capabilities rather than demonstrated past harms.
The bill targeted what it called "frontier models": AI systems trained using more than 10^26 computing operations (a one followed by twenty-six zeros) at a training cost of over one hundred million dollars in computing resources. Only models crossing both thresholds would have fallen under the bill.
These thresholds weren't arbitrary. They represented a deliberate attempt to focus only on the most powerful systems—the kind that could, theoretically, help someone design a biological weapon or launch devastating cyberattacks on power grids and financial systems.
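For a rough sense of scale, here is a back-of-the-envelope estimate of what a 10^26-operation training run might cost. The accelerator throughput and rental price below are illustrative assumptions, not figures from the bill.

```python
# Back-of-the-envelope estimate of the compute cost of a 10^26-operation training run.
# The throughput and price figures below are illustrative assumptions, not bill text.

THRESHOLD_FLOPS = 1e26            # SB 1047's compute threshold (operations)

gpu_flops_per_second = 4e14       # assumed sustained throughput of one modern accelerator
gpu_hourly_cost_usd = 2.50        # assumed cloud rental price per accelerator-hour

gpu_seconds = THRESHOLD_FLOPS / gpu_flops_per_second
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * gpu_hourly_cost_usd

print(f"Accelerator-hours needed: {gpu_hours:,.0f}")   # roughly 69 million hours
print(f"Estimated compute cost: ${cost_usd:,.0f}")     # roughly $174 million
```

Under these assumptions the compute bill alone lands well above one hundred million dollars, which is roughly the scale the legislation had in mind.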
The Fear Behind the Legislation
The bill emerged from a genuine anxiety spreading through parts of the AI research community. In November 2022, OpenAI released ChatGPT. Suddenly, millions of people experienced firsthand how capable these systems had become. The technology's rapid advancement startled even some of its creators.
By May 2023, hundreds of technology executives and AI researchers signed an extraordinary statement. They called for treating the "risk of extinction from AI" as a global priority, placing it alongside pandemics and nuclear war. Among the signatories were Geoffrey Hinton and Yoshua Bengio, two of the three researchers commonly called the "Godfathers of AI"—pioneers whose foundational work on neural networks helped create the current generation of AI systems.
When the people who invented a technology warn it might end civilization, regulators tend to pay attention.
But Is Extinction Really on the Table?
Not everyone agrees these fears are warranted. Many AI researchers view existential risk concerns as science fiction dressed up in academic language. They argue that focusing on hypothetical future catastrophes distracts from documented present harms: AI systems that discriminate against job applicants based on race, that spread misinformation, that enable new forms of fraud and harassment.
This tension—between those worried about AI ending humanity and those worried about AI making existing problems worse—shaped much of the debate around SB 1047.
What the Bill Would Have Required
Companies developing frontier AI models would have needed to create a "safety and security protocol" before training began. Before releasing their models to the public, they would have submitted compliance statements confirming they had taken reasonable care to prevent catastrophic misuse.
The bill defined "critical harms" in specific terms:
- Helping create chemical, biological, radiological, or nuclear weapons
- Enabling cyberattacks on critical infrastructure causing mass casualties or at least five hundred million dollars in damage
- Autonomously committing crimes that cause mass casualties or comparable financial damage
Starting in 2026, companies would have faced annual third-party audits. The bill also required what the press variously called a "kill switch" or "circuit breaker": the technical ability to enact a full shutdown of a model if something went wrong.
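The legislation did not prescribe how such a shutdown capability should be built. As a purely illustrative sketch (the class and function names here are hypothetical), one minimal form is a serving wrapper that refuses further requests once an operator trips a flag:

```python
import threading


class ModelServer:
    """Hypothetical serving wrapper with an operator-controlled full shutdown.

    Illustrative only: SB 1047 required the capability for a full shutdown
    but did not say how developers should implement it.
    """

    def __init__(self, model):
        self.model = model
        self._shutdown = threading.Event()

    def full_shutdown(self) -> None:
        # Trip the circuit breaker: every subsequent request is refused.
        self._shutdown.set()

    def generate(self, prompt: str) -> str:
        if self._shutdown.is_set():
            raise RuntimeError("model has been shut down by operator order")
        return self.model(prompt)


# Usage: an operator (or an automated monitor) can halt serving at any time.
server = ModelServer(model=lambda p: f"echo: {p}")
print(server.generate("hello"))      # serves normally
server.full_shutdown()
# server.generate("hello")           # would now raise RuntimeError
```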
Perhaps most controversially, SB 1047 included whistleblower protections. Employees who reported safety problems would have been shielded from retaliation. Given the secrecy surrounding AI development at major companies, this provision particularly alarmed some industry leaders.
CalCompute: The Lesser-Known Provision
Buried beneath the safety requirements was something genuinely novel: the creation of CalCompute, a public cloud computing cluster affiliated with the University of California. This would have given startups, academic researchers, and community organizations access to computing resources typically available only to well-funded corporations.
Training frontier AI models requires enormous computing power. By making that power publicly available, CalCompute aimed to democratize AI development—ensuring that safety research and innovative applications wouldn't be the exclusive province of a few wealthy companies.
The Political Journey
State Senator Scott Wiener introduced SB 1047 in February 2024, building on earlier legislative groundwork. California has a history of stepping into regulatory vacuums left by federal inaction. The state pioneered consumer privacy protections with the California Consumer Privacy Act. It enacted net neutrality rules after federal protections were rolled back.
Wiener explicitly modeled the bill on President Biden's October 2023 executive order on artificial intelligence, adapting federal principles into state law. Without unified federal legislation, California—home to most major AI companies—would set the de facto national standard.
The bill's trajectory through the legislature seemed promising. It passed the State Senate 32 to 1 in May. After significant amendments responding to industry feedback, it cleared the State Assembly 48 to 16 in late August. A final Senate vote approved the amended version 30 to 9.
Then it landed on Governor Newsom's desk.
Why Newsom Said No
On September 29, 2024, Newsom vetoed the bill. His reasoning centered on what he saw as a fundamental flaw in its design: the focus on large models based purely on computational size.
A model that costs one hundred million dollars to train might pose certain risks. But a smaller, cheaper model deployed in a hospital's diagnostic system or a self-driving car might pose greater immediate danger. By targeting only the largest models, Newsom argued, the bill could create a "false sense of security" while ignoring genuinely dangerous applications of smaller systems.
He also expressed concern about adaptability. AI technology evolves rapidly. A regulatory framework locked to specific computational thresholds might become obsolete before the ink on it dried.
Newsom committed to working with technology experts and research institutions, including Stanford's Human-Centered AI Institute, to develop more flexible approaches. Whether this represents a genuine commitment to alternative regulation or a polite way of killing the concept entirely remains to be seen.
The Battle Lines
The debate over SB 1047 produced unusual alliances and unexpected divisions.
The Supporters
Geoffrey Hinton and Yoshua Bengio—those Godfathers of AI—supported the bill. So did Elon Musk, whose company xAI is developing its own large language models. Stuart Russell, a leading AI researcher at Berkeley and author of a standard textbook on artificial intelligence, endorsed it.
Former New York Mayor Bill de Blasio signed on, as did over one hundred twenty Hollywood celebrities including Mark Hamill, Jane Fonda, and director J.J. Abrams. SAG-AFTRA, the actors' union still processing AI's implications for its members after the 2023 strike, sent a letter of support to the governor.
Several whistleblowers from OpenAI publicly backed the bill, including Daniel Kokotajlo and William Saunders—insiders who had grown alarmed at what they witnessed during their employment.
Max Tegmark, an MIT physicist who has written extensively about AI risk, compared the bill's approach to the Food and Drug Administration requiring clinical trials before drug companies can release new medications. The analogy was apt: both frameworks require demonstrating safety before deployment, shifting the burden of proof from regulators to developers.
The Opposition
The opposition included some equally prominent names. Andrew Ng, a Stanford professor who helped lead AI efforts at Google and Baidu, argued for more targeted regulations—addressing specific harms like deepfake pornography rather than attempting to regulate an entire technology category.
Fei-Fei Li, another Stanford luminary who helped create ImageNet (the dataset that sparked the deep learning revolution), opposed the bill. So did Yann LeCun, the third Godfather of AI and Chief AI Scientist at Meta.
Perhaps most striking was the opposition from California's own congressional delegation. Nancy Pelosi, Ro Khanna, Anna Eshoo, and Zoe Lofgren—all Democrats representing districts with substantial tech industry employment—came out against the bill.
Major companies were divided. Meta and OpenAI opposed or raised concerns. Google, Microsoft, and Anthropic proposed substantial amendments rather than outright opposition. After the August amendments, Anthropic CEO Dario Amodei wrote that the revised bill's "benefits likely outweigh its costs"—hardly a ringing endorsement, but a notable shift from earlier skepticism.
The Open Source Controversy
One of the most heated debates concerned open source AI. Companies like Meta have released powerful AI models freely, allowing anyone to download, modify, and deploy them. This democratizes access but creates regulatory complications: if Meta releases a model and someone else fine-tunes it for malicious purposes, who bears responsibility?
Yann LeCun argued the bill would "kill open source AI models." The AI Alliance, a coalition of open source advocates, formally opposed the legislation. Their concern was straightforward: faced with potential liability, companies might simply stop releasing models publicly.
Lawrence Lessig, the Harvard professor who co-founded Creative Commons and has spent decades advocating for open knowledge sharing, disagreed. He argued that clear liability rules would actually make open source AI safer and more popular, since developers would face reduced risk when using properly tested models.
The Regulatory Capture Question
A subtler criticism came from those who worried about regulatory capture—the phenomenon where regulations ostensibly designed to protect the public end up serving the interests of dominant industry players.
Critics argued that only large companies could afford the compliance costs associated with training frontier models. Safety testing, annual audits, compliance documentation—these requirements would create barriers to entry that established players could absorb but startups could not.
Supporters countered that the bill's thresholds were deliberately set high. Models costing over one hundred million dollars to train are not startup projects. The requirements would apply to perhaps a handful of companies globally.
Interestingly, OpenAI—the company that might seem most protected by regulation limiting new entrants—opposed the bill. This complicated the regulatory capture narrative, though critics noted that OpenAI's opposition focused on specific provisions rather than the concept of regulation itself.
The Polling Wars
Both sides commissioned polls to demonstrate public support for their positions. The results revealed as much about polling methodology as public opinion.
The Artificial Intelligence Policy Institute, which supported regulation, found support ranging from 54 to 74 percent across three surveys. Their question described the bill as requiring safety tests and creating liability for developers who fail to take "appropriate precautions."
The California Chamber of Commerce, which opposed the bill, found only 28 percent support. But their question described a "new state regulatory agency" with the power to force small startups to "pay tens of millions of dollars in fines" based on orders from "state bureaucrats." Observers described this framing as "badly biased."
A YouGov poll commissioned by the Economic Security Project, a bill sponsor, found 78 percent national support and 80 percent agreement that Newsom should sign. Their question emphasized safety testing and prevention of catastrophic harms like "disrupting the financial system, shutting down the power grid, or creating biological weapons."
The divergent results demonstrate a fundamental truth about polling on complex policy issues: how you ask the question largely determines the answer you receive.
What Happens Now
The deadline for California legislators to override Newsom's veto passed on November 30, 2024. SB 1047 is dead.
But the questions it raised remain very much alive. As AI systems continue advancing, the pressure for some form of regulation will intensify. The European Union has already enacted the AI Act, a comprehensive regulatory framework. China has implemented its own AI governance rules. The United States remains an outlier among major powers in lacking federal AI legislation.
California will likely see new AI safety bills in future legislative sessions, perhaps designed to address Newsom's concerns about focusing on model size rather than deployment context. Federal legislation may eventually preempt state efforts, for better or worse.
The Deeper Question
Beyond the specific provisions of SB 1047, the debate illuminated a fundamental challenge in technology governance. How do you regulate something based on what it might become rather than what it currently is? How do you balance innovation against precaution when experts genuinely disagree about the nature and magnitude of potential risks?
The pharmaceutical industry offers one model: extensive testing before release, with liability for harms that occur despite precautions. The software industry has largely operated under a different paradigm: release early, patch problems as they emerge, accept that some harm is inevitable.
Which model should apply to AI systems that might—or might not—pose existential risks? The answer matters enormously. And we haven't settled it yet.
A Note on Uncertainty
Perhaps the most honest assessment of SB 1047 came from Anthropic CEO Dario Amodei, who wrote that the amended bill's benefits "likely" outweighed its costs—but added, "we are not certain of this."
That uncertainty pervades the entire field of AI governance. We don't know how capable future AI systems will become. We don't know whether the risks that motivate safety concerns will materialize. We don't know whether regulation will prevent harm or merely drive development to less regulated jurisdictions.
What we do know is that decisions made now—including the decision not to act—will shape how this technology develops. Governor Newsom's veto was itself a choice, one that preserved the status quo of minimal AI regulation in California.
Whether that choice was wise or foolish, we may not learn for years. By then, of course, it may be too late to choose differently.