California

AI Policy Overview

California has taken a leading role in the regulation of artificial intelligence (AI). State lawmakers have enacted legislation limiting electoral and sexual deepfakes, requiring disclosure of chatbot use, mandating bias audits for state criminal justice agencies that use AI tools, and limiting facial recognition use by police. Additionally, state regulators have drafted, but have not yet formally proposed, comprehensive regulations on automated decision-making technology.

Governor Newsom (D) signed an executive order on September 6, 2023, directing state agencies to study the potential uses and risks of generative AI and to engage with legislative partners and key stakeholders in a formal process to develop policy recommendations for the responsible use of AI. In March 2024, the state released formal guidelines, pursuant to the 2023 executive order, for state agencies to follow when buying generative AI tools for government use.

On October 13, 2023, Governor Newsom signed a bill (CA AB 302) into law mandating a thorough inventory of all “high-risk” AI systems “that have been proposed for use, development, or procurement by, or are being used, developed, or procured” by the state. The first annual report is due on January 1, 2025.

Transparency

In 2018, California enacted a law (CA SB 1001) that requires disclosure when a “bot” is used to communicate or interact with another person with the intent to mislead that person about its artificial identity, where the deception is intended to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. The disclosure must be “clear” and “conspicuous.” Under the law, a “bot” is defined as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” The law excludes service providers of online platforms, including web hosts and internet service providers (ISPs).

Bias Prevention

In 2019, California enacted a law (CA SB 36) requiring state criminal justice agencies to evaluate potential biases in AI-powered pretrial tools. Specifically, the law requires each pretrial services agency that uses a pretrial risk assessment tool to validate the tool by January 1, 2021, and on a regular basis thereafter (no less frequently than once every three years), and to make specified information about the tool, including validation studies, publicly available.

Deepfakes

California was one of the first states to address deepfake use in electoral campaigns. In 2019, California enacted a law (CA AB 730) that prohibits producing, distributing, publishing, or broadcasting, with actual malice, campaign material that contains (1) a picture or photograph of a person or persons into which the image of a candidate for public office is superimposed or (2) a picture or photograph of a candidate for public office into which the image of another person is superimposed, unless the campaign material includes a specified disclosure. The law contains specific exceptions. The original law was set to sunset in 2023; however, a bill enacted in 2022 (CA AB 972) extended the sunset provision until 2027.

California enacted a sexual deepfake law (CA AB 602) in 2019 that provides a private right of action (but no criminal penalty) for the depicted individual, who is defined as an "individual who appears, as a result of digitization, to be giving a performance they did not actually perform or to be performing in an altered depiction."

Facial Recognition

In 2019, California enacted a law (CA AB 1215) that prohibits using facial recognition to analyze images captured by police body cameras.

Regulations

On November 27, 2023, the California Privacy Protection Agency (CPPA) released draft text of regulations governing businesses' use of “automated decisionmaking technology.” The CPPA emphasizes that this regulatory text is only a draft and that the agency has not yet started the formal rulemaking process on the issue. The draft rules would define “automated decisionmaking technology” (which the agency abbreviates as ADMT) as “any system, software, or process — including one derived from machine-learning or artificial intelligence — that processes personal information and uses computation to make or execute a decision or facilitate human decision making.”

Importantly, under the draft rules, consumers must be given notice that ADMT is being used, and that notice must include a specific, plain-language explanation of how the business is using the technology. In addition to the notice requirement, the draft regulations establish a process for consumers to request information from a business about how it is using ADMT.

A primary issue area of the draft regulations is the ability of consumers to opt out of having their information collected, evaluated, and used by an ADMT. The regulations would establish that consumers have a right to opt out of a business's use of an ADMT if its use:

  • produces a legal or similarly significant effect (e.g., evaluating a candidate for hire or deciding whether to give a loan to an applicant);

  • profiles individuals who are acting in their capacity as employees, independent contractors, job applicants, or students; or

  • profiles a consumer when they are in a publicly accessible place, such as a shopping mall, restaurant, or park.

The draft regulations also establish situations in which consumers will not have a right to opt out. As currently written, the regulations state that consumers will be unable to opt out when ADMT is used to:

  • prevent, detect, or investigate security incidents;

  • prevent or resist malicious, deceptive, fraudulent, or illegal actions directed at a business;

  • protect the life and safety of consumers; or

  • provide a good or service the consumer requested, where use of ADMT is necessary to do so.

On December 8, 2023, the CPPA held a public board meeting at which board members criticized the draft regulatory text as so broad that it could cover essentially any technology. As a result, the CPPA Board directed staff to prepare revised drafts that take the board members' feedback into account.

On March 8, 2024, the CPPA Board voted to take a step toward formal rulemaking on regulations for automated decision-making technology. The proposed update clarifies the required contents of a risk assessment, amends the considerations for impacts on consumer privacy, and addresses safeguards. A final vote on whether to proceed with formal rulemaking may not occur until the summer of 2024, and the rules may not be finalized until 2025.

Legislative & Regulatory History

  • 2024 - California issued formal guidelines for state agencies to follow when buying generative AI tools for government use. 

  • 2023 - Gov. Newsom issued Executive Order N-12-23 on September 6, 2023, directing state agencies to study the potential uses and risks of generative AI and to engage with legislative partners and key stakeholders in a formal process to develop policy recommendations for the responsible use of AI.

  • 2023 - California enacted CA AB 302, which mandates a thorough inventory of all “high-risk” AI systems “that have been proposed for use, development, or procurement by, or are being used, developed, or procured” by the state.

  • 2022 - California enacted CA AB 972, which extended until 2027 the sunset provision of CA AB 730, the law requiring disclosure of deepfake use in campaign material.

  • 2019 - California enacted CA AB 1215, which prohibits the use of facial recognition to analyze images captured by police body cameras.

  • 2019 - California enacted CA AB 730, which requires disclosure of deepfake use in campaign material.

  • 2019 - California enacted CA SB 36, which requires state criminal justice agencies to evaluate potential biases in AI-powered pretrial tools.

  • 2018 - California enacted CA SB 1001, which requires disclosure when using a “bot” to communicate or interact with another person.