California’s Focus on AI Development: An Analysis of SB 1047

Good morning, and welcome to multistate.ai. It’s been an eventful week in state AI policy as lawmakers begin moving legislation through hearings and floor votes. A sexual deepfake bill was signed into law in South Dakota, an electoral deepfake bill passed the full legislature in New Mexico, and other deepfake bills cleared their chambers of origin in South Dakota, Utah, and Wisconsin. Plus, we saw new AI executive orders in Massachusetts and Washington, D.C. But this week we’re going to focus our analysis on an important bill recently introduced in California, which might be the most comprehensive piece of state AI legislation to date.

With Congress slow to move, states are looking to act quickly on artificial intelligence as the use of the technology becomes more widespread. California was the first state to enact comprehensive privacy legislation to protect consumer data, and it may become a trendsetter again with AI policy. State Sen. Scott Wiener (D) is hoping his new bill (CA SB 1047) can lead the way.

Cited as the “Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act,” the legislation attempts to set broad guardrails and establish a safety valve for large AI models, which the bill defines as those models “trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024.” This is the same threshold used by the Biden Administration’s Executive Order on AI and would only apply to the most advanced foundation models today (e.g., OpenAI’s GPT-4 and Google’s Gemini Ultra).
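For a sense of scale, a common rule of thumb (not part of the bill) estimates a training run’s compute as roughly 6 × parameters × training tokens. The minimal sketch below uses that approximation to show what kind of run would cross the 10^26 line; the parameter and token counts are illustrative assumptions, not figures disclosed for any real model.

```python
# Back-of-envelope check against SB 1047's 10^26 operation threshold.
# Uses the common ~6 * N * D approximation for training compute
# (N = parameters, D = training tokens). All model figures are hypothetical.

THRESHOLD = 1e26  # operations, per the bill's covered-model definition

def training_flops(params: float, tokens: float) -> float:
    """Rough training compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical training runs (parameter and token counts are assumptions):
runs = {
    "70B params, 2T tokens": training_flops(70e9, 2e12),    # ~8.4e23
    "1T params, 20T tokens": training_flops(1e12, 20e12),   # ~1.2e26
}

for name, flops in runs.items():
    covered = "covered" if flops > THRESHOLD else "not covered"
    print(f"{name}: ~{flops:.1e} operations -> {covered}")
```

Estimates like this are rough at best, but they illustrate how the 10^26 line is meant to capture only a handful of frontier-scale training runs while leaving smaller models untouched.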

The bill would establish a new state office, the Frontier Model Division, to regulate large AI models and certify compliance. Before training an AI model, developers would have to implement certain cybersecurity protections, adopt a safety and security protocol with assurances the model will not produce “hazardous capabilities,” and include the ability to fully shut down the covered model. The definition of “hazardous capability” is notable: it covers a model that could be used to enable “mass casualties” or to cause damages of at least $500 million “in a way that would be significantly more difficult to cause without access to” the AI model.

Once training on the model is complete, the developer would need to perform capability testing to determine whether a positive safety determination can be made. Upon the model’s release to the public, the developer would still need to implement safeguards that prevent users from causing critical harm.

Developers would submit an annual certification of compliance to the Division that includes the nature and magnitude of any hazardous capabilities and an assessment of risk. Developers would also be required to report any AI safety incidents to the Division within 72 hours.

The bill would also promote AI development by establishing CalCompute, a public cloud computing cluster to give small businesses and researchers the computing power needed to harness AI technology. However, the bill does not allocate a specific budget amount to fund CalCompute. 

The proposed law highlights the tension lawmakers face as they want to protect against the potential harms of AI without stifling a nascent industry. The bill requires a certification process for the largest models while allowing smaller startups to develop models unconstrained by bureaucracy. “When we’re talking about safety risks related to extreme hazards, it’s far preferable to put protections in place before those risks occur as opposed to trying to play catch up,” said Sen. Wiener. 

Governor Gavin Newsom’s office issued a report last fall outlining some of the risks associated with AI. Sen. Wiener’s bill is one of several AI bills proposed early on in the California legislative session. An Assembly bill (CA AB 2013) would require developers to publicly disclose training data for their models. Other bills would establish study groups (CA AB 2652), require non-discrimination in AI systems used by the state (CA SB 892), and establish an AI research hub (CA SB 893). And, of course, the California Privacy Protection Agency circulated a draft regulation targeting “automated decision making technology” late last year. 

We recently examined the newest wave of legislative proposals that take a more comprehensive approach to regulating AI. Sen. Wiener’s bill, by contrast, proposes a framework for regulating AI systems during the development and testing stages. This is an approach many expected the federal government to take, but with the introduction of this legislation, California announced that it has no intention of waiting around for Congress to act first.

With much of the AI industry at its doorstep, California lawmakers will have a lot of parties interested in how the state plans to regulate. But if the state’s landmark privacy law serves as a roadmap, any attempts to regulate AI this year will be just the first chapter of many to come.

Recent Policy Developments

In the News

  • Copyright Court Battle: On Monday, a federal judge dismissed parts of a copyright lawsuit brought by authors against OpenAI over the company’s alleged use of their books to train its large language models. Several additional copyright lawsuits against the company remain pending, including a major suit by the New York Times.

  • AI Patents: On Monday, the U.S. Patent and Trademark Office clarified when it will grant patents for inventions created with the aid of AI, issuing new guidance that patents can cover AI-assisted inventions "for which a natural person provided a significant contribution."

  • Polling AI Policy: On Wednesday, our friends at Seven Letter released public polling results on AI policy. A bipartisan 77% of respondents told pollsters that government should be doing more to regulate AI, with majority support for job training, combating fraud, and preventing discrimination in AI models. However, when asked which entity is best equipped to regulate AI, the most common response (34%) was that “AI cannot be regulated effectively.”

Major Policy Action

  • South Dakota: On Tuesday, Gov. Noem (R) signed a sexual deepfake bill (SD SB 79) into law. The new law adds computer-generated content to child pornography laws, a measure backed by the attorney general. This is the first sexual deepfake bill to pass this session after nine states passed sexual deepfake laws in the last few years. The South Dakota Senate also passed an electoral deepfake bill (SD SB 96) that would limit the use of deepfake media within 90 days of an election unless the communication includes a disclaimer. The measure now heads to the House. Currently, five states have enacted laws to regulate the use of deepfake media in elections with 34 states debating 78 bills this year. 

  • Utah: The Senate passed two bills related to AI this week, sending those bills to the House for consideration. On Monday, the Senate passed an electoral deepfake bill (UT SB 131) requiring a disclosure for political advertising containing synthetic media. The Senate passed a broader “AI Policy Act” bill (UT SB 149) on Tuesday that sets up various offices dedicated to AI and would establish liability for the use of AI that violates consumer protection laws if not properly disclosed.

  • New Mexico: On Wednesday, as they wrapped up their quick 30-day legislative session, lawmakers passed an electoral deepfake bill (NM HB 182), sending it to the governor for her signature. The bill requires a political advertisement generated in whole or in part by using AI to include a disclaimer and prohibits distribution within 90 days of an election.

  • Wisconsin: On Tuesday, the Senate passed a deepfake bill (WI SB 553), which would make it a felony to post, publish, distribute, or exhibit a deepfake of an identifiable person with intent to coerce, harass, or intimidate that person. The measure now heads to the Assembly, which has its own companion bill (WI AB 609).

  • Massachusetts: Gov. Healey (D) issued an executive order this week that establishes the Artificial Intelligence Strategic Task Force to study AI and generative AI technology and its impact on the state, including businesses and higher education institutions. The governor will also propose using $100 million in upcoming economic development legislation to create an Applied AI Hub.

  • DC: Last Thursday, Mayor Bowser (D) signed a Mayor’s Order outlining steps the District will take to harness AI in government services. The order also establishes an AI Task Force to examine the District’s internal AI governance posture and recommend ways agencies can use AI tools responsibly and effectively.

Notable Proposals

  • California: Community college instructors could not be replaced by AI under a proposed bill (CA AB 2370) that would limit AI use in instruction to a “peripheral tool.”

  • Illinois: Lawmakers proposed a bill (IL HB 4672) that would require certain protections for performers before allowing the use of a digital replica of the performer's voice or likeness. Similar bills have been introduced in California, New York, and Tennessee. 
