Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to reduce the risks posed by frontier artificial intelligence models, the largest and most powerful foundation models. If passed, the bill would also establish CalCompute, a public cloud computing cluster for startups, researchers and community groups.

Background
The bill was motivated by the rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022.

In May 2023, AI pioneer Geoffrey Hinton resigned from Google, warning that humankind could be overtaken by AI within the next 5 to 20 years. Later that same month, the Center for AI Safety released a statement signed by Hinton and other AI researchers and leaders: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Governor Newsom and President Biden issued executive orders on artificial intelligence in late 2023. Senator Wiener says his bill draws heavily on the Biden executive order.

Provisions
SB 1047 initially covers AI models with training compute over 10²⁶ integer or floating-point operations. The same compute threshold is used in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In contrast, the European Union's AI Act set its threshold at 10²⁵, one order of magnitude lower.

In addition to this compute threshold, the bill has a cost threshold of $100 million. The goal is to exempt startups and small companies, while covering large companies that spend over $100 million per training run.
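The dual-threshold test described above can be expressed as a short, purely illustrative sketch. This is not statutory language: the function and constant names are assumptions made for clarity, and the sketch assumes a model is covered only when it exceeds both the compute threshold and the cost threshold.

```python
# Illustrative sketch of SB 1047's initial coverage test (not legal text).
# Constant and function names are assumptions, not terms from the bill.

COMPUTE_THRESHOLD_OPS = 1e26        # training compute: 10^26 operations
COST_THRESHOLD_USD = 100_000_000    # training cost: $100 million

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True only if a model exceeds BOTH thresholds.

    The cost threshold is what exempts startups and small companies
    even when a model's raw compute is large.
    """
    return (training_ops > COMPUTE_THRESHOLD_OPS
            and training_cost_usd > COST_THRESHOLD_USD)
```

Under this reading, a model trained with 2 × 10²⁶ operations at a cost of $50 million would not be covered, because only one of the two thresholds is exceeded.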

Developers of models that exceed the compute and cost thresholds are required to conduct safety testing for the following risks:
 * Creation or use of a weapon of mass destruction
 * Cyberattacks on critical infrastructure causing mass casualties or at least $500 million of damage
 * Autonomous crimes causing mass casualties or at least $500 million of damage
 * Other harms of comparable severity

Developers of covered models are required to implement reasonable safeguards to reduce risk, including the ability to shut down the model. Whistleblowing provisions protect employees who report safety problems and incidents.

The bill establishes a Frontier Model Division to review the results of safety tests and incidents, and issue guidance, standards and best practices. It also creates a public cloud computing cluster called CalCompute to enable research into safe AI models, and provide compute for academics and startups.

Reception
Supporters of the bill include Turing Award recipients Geoffrey Hinton and Yoshua Bengio. The Center for AI Safety, Economic Security California and Encode Justice are sponsors.

The bill is opposed by industry trade associations including the California Chamber of Commerce, the Chamber of Progress, the Computer & Communications Industry Association and TechNet. Meta and Google have argued that the bill would undermine innovation.

Public opinion
A May 2024 David Binder Research poll, commissioned by the Center for AI Safety Action Fund, found that 77% of Californians support a proposal to require companies to test AI models for safety risks before releasing them. A poll by the AI Policy Institute likewise found that 77% of Californians think the government should mandate safety testing for powerful AI models.