California

AI Policy Overview

California has taken a leading role in the regulation of artificial intelligence (AI). State lawmakers have enacted legislation limiting electoral and sexual deepfakes, requiring disclosure of chatbot use, mandating bias audits for state criminal justice agencies utilizing AI tools, and limiting facial recognition use by police. After a failed attempt in 2024, California enacted a first-in-the-nation AI safety law in 2025. Additionally, state regulators finalized rules creating obligations for businesses that use automated decision-making technology (ADMT) for “significant decisions” about California consumers.

Governor Newsom (D) signed an executive order on September 6, 2023, directing state agencies to study the potential uses and risks of generative AI and to engage with legislative partners and key stakeholders in a formal process to develop policy recommendations for the responsible use of AI. In March 2024, the state released formal guidelines, pursuant to the 2023 executive order, for state agencies to follow when buying generative AI tools for government use.

In 2023, Governor Newsom signed a bill (CA AB 302) into law mandating a thorough inventory of all “high-risk” AI systems “that have been proposed for use, development, or procurement by, or are being used, developed, or procured” by the state. The first annual report was due on January 1, 2025.

The Generative AI Accountability Act of 2024 (CA SB 896) requires a state report examining significant, potentially beneficial uses for the deployment of generative AI tools by the state and a joint risk analysis of potential threats posed by the use of generative AI to California’s critical energy infrastructure. State agencies using generative AI must disclose that fact and provide clear instructions on how to contact a human employee.

In 2024, California enacted a law (CA AB 2885) establishing a uniform definition for “artificial intelligence” for existing provisions in state law relating to studies of AI and inventories of AI use in state government. The law defines AI to mean an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

In 2025, California also enacted a law (CA AB 979) requiring the development of a California AI Cybersecurity Collaboration Playbook to facilitate information sharing and strengthen defenses against emerging threats across the cyber and AI communities.

AI Safety 

In 2025, California enacted the Transparency in Frontier Artificial Intelligence Act (CA SB 53), which establishes provisions aimed at ensuring the safety of foundation AI models developed by frontier developers. The law applies to AI models trained using a quantity of computing power greater than 10^26 integer or floating-point operations and requires developers to create and publish a frontier AI framework that applies to their frontier AI model(s) and describes how the developer incorporated national standards, international standards, and industry-consensus best practices into that framework.
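For a sense of scale, the 10^26-operation threshold can be illustrated with a back-of-the-envelope estimate. The sketch below uses the common approximation of roughly 6 × parameters × training tokens total operations for training a large model; the model sizes and token counts are hypothetical examples, and the statute's actual coverage turns on measured training compute, not this heuristic.

```python
# Rough illustration of SB 53's compute threshold. The 6 * N * D rule of
# thumb for training FLOPs is a widely used approximation, not statutory
# language; the example model sizes below are hypothetical.

TFAIA_THRESHOLD = 10**26  # integer or floating-point operations

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * parameters * tokens

def exceeds_threshold(parameters: float, tokens: float) -> bool:
    """True if the estimated training compute exceeds 10^26 operations."""
    return estimated_training_flops(parameters, tokens) > TFAIA_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 ≈ 6.3e24 operations, below the threshold.
print(exceeds_threshold(7e10, 1.5e13))  # False

# A hypothetical 1T-parameter model trained on 30T tokens:
# 6 * 1e12 * 3e13 ≈ 1.8e26 operations, above the threshold.
print(exceeds_threshold(1e12, 3e13))    # True
```

As the examples suggest, today's largest publicly described training runs sit near this boundary, which is why the statute targets only the frontier of the industry.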

TFAIA also requires developers of frontier models to provide the Office of Emergency Services with a summary of any catastrophic risk assessment resulting from the use of their frontier AI model. The law defines a catastrophic risk as a foreseeable and material risk that the use, storage, or development of the frontier model will result in the death or serious injury of 50 or more people, or cause more than $1 billion in damages, where the model:

  • Provides expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon;

  • Engages in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense if committed by a human; or 

  • Evades the control of its frontier developer or user.

Additionally, the TFAIA creates whistleblower protections for employees of frontier model developers who believe the developer’s activities present a risk to public health and safety resulting from a catastrophic risk or other violation of the TFAIA.

TFAIA also establishes a framework for the creation of a public cloud computing cluster that will be known as CalCompute to advance the safe, ethical, equitable, and sustainable development and deployment of AI.

Transparency

Chatbots/Customer Service

In 2018, California enacted a law (CA SB 1001) that requires disclosure when a “bot” is used to communicate or interact with another person with the intent to mislead about its artificial identity, for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. The disclosure must be “clear” and “conspicuous.” Under this law, “bot” is defined as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” The law excludes service providers of online platforms, including web hosts and ISPs.

In 2024, California enacted a law (CA AB 2905) requiring a call from an automatic dialing-announcing device to inform the person called if the prerecorded message uses an artificial voice that is generated or significantly altered using AI.

In 2025, California enacted a law (CA SB 243) establishing requirements for companion chatbots. The law defines a companion chatbot as an AI system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and sustaining a relationship across multiple interactions. Under the law, users must receive a clear and conspicuous notification that a companion chatbot is artificially generated if a reasonable person would otherwise be misled into believing they are interacting with a human, and chatbot operators must establish a protocol for preventing suicidal ideation, suicide, or self-harm content. Additionally, chatbot operators must disclose to minors that they are interacting with an AI and provide a notification reminding minor users to take a break every three hours. The law exempts customer service bots, video game bots, and voice command interfaces that act as virtual assistants.

In 2025, California also enacted a law (CA AB 578) that allows food delivery platforms to use an automated system to address customer service concerns. The law also requires that food delivery platforms allow consumers to promptly connect with a natural person to address their concerns.

Training Data

In 2024, California enacted a law (CA AB 2013), set to go into effect on Jan. 1, 2026, that requires generative AI developers to post documentation on their websites providing, among other things, a summary of the datasets used in the development and training of the AI technology or service. Notably, the law applies both to original developers and to those making “substantial modifications” to a generative AI technology or service.

In 2025, California enacted a law (CA AB 325) prohibiting using or distributing a pricing algorithm that uses nonpublic competitor data.

AI Detection Tools

In 2024, lawmakers enacted the California AI Transparency Act (CA SB 942), which requires a provider of a generative AI system with over 1 million monthly visitors to make available an AI detection tool, at no cost to the user, and to offer the user an option to include a disclosure that identifies content as AI-generated and is clear, conspicuous, appropriate for the medium of the content, and understandable to a reasonable person.

Bias Prevention

In 2019, California enacted a law (CA SB 36) requiring state criminal justice agencies to evaluate potential biases in AI-powered pretrial tools. Specifically, the law requires each pretrial services agency that uses a pretrial risk assessment tool to validate the tool by January 1, 2021, and on a regular basis thereafter, but no less frequently than once every 3 years, and to make specified information regarding the tool, including validation studies, publicly available.

Deepfakes

Political Deepfakes

California was one of the first states to address deepfake use in electoral campaigns. In 2019, California enacted a law (CA AB 730) that prohibits producing, distributing, publishing, or broadcasting, with actual malice, campaign material that contains (1) a picture or photograph of a person or persons into which the image of a candidate for public office is superimposed or (2) a picture or photograph of a candidate for public office into which the image of another person is superimposed, unless the campaign material contains a specified disclosure. The law includes specific exceptions. The original law was set to sunset in 2023; however, a bill enacted in 2022 (CA AB 972) extended the sunset provision until 2027.

In 2024, California enacted three additional laws addressing political deepfakes. The Defending Democracy from Deepfake Deception Act (CA AB 2655) requires a large online platform to block the posting of materially deceptive content related to an election within 120 days of an election and up to 60 days after an election, and requires the platform to label certain additional content as inauthentic, fake, or false during specified periods before and after an election. In August 2025, a federal judge struck down CA AB 2655, ruling that the requirement that social media companies remove reported content within 72 hours violated Section 230 of the Communications Decency Act.

A second 2024 law focused on transparency (CA AB 2355) requires a committee that creates, originally publishes, or originally distributes a qualified political advertisement to include in the advertisement a specified disclosure that the ad was generated or substantially altered using AI.

Finally, a third 2024 law (CA AB 2839) prohibits an entity from knowingly distributing, with malice, an advertisement or other election communication within 120 days of an election and up to 60 days after an election that contains materially deceptive deepfake content, if the content is reasonably likely to harm the reputation or electoral prospects of a candidate. When accompanied by a disclaimer, materially deceptive content can be used for parody or satire, or a candidate may use a deepfake of themselves. On Oct. 2, 2024, a federal judge granted a preliminary injunction blocking enforcement of AB 2839. The judge found that the law likely violates the First Amendment, writing, “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”

Sexual Deepfakes

In 2019, California enacted a sexual deepfake law (CA AB 602) that provides a private right of action (but not a criminal violation) for the depicted individual, which is defined as an "individual who appears, as a result of digitization, to be giving a performance they did not actually perform or to be performing in an altered depiction."

In 2024, California enacted additional laws addressing sexual deepfakes. The pair of laws (CA SB 926 & CA SB 896) makes it a crime for a person to intentionally distribute or cause to be distributed certain sexual images, including digital and computer-generated images, without consent in a manner that would cause a reasonable person to believe the image is an authentic image of the person depicted, under circumstances in which the person distributing the image knows or should know that distribution of the image will cause serious emotional distress, and the person depicted suffers that distress.

Another 2024 law is aimed at social media (CA SB 981) and requires a social media platform to provide a mechanism that is reasonably accessible to users to report digital identity theft to the platform. The law requires immediate removal of a reported instance of sexually explicit digital identity theft from public view on the platform if there is a reasonable basis to believe it is sexually explicit digital identity theft.

In 2024, California enacted a pair of laws (CA AB 1831 & CA SB 1381) that amended child pornography laws to include matters that are digitally altered or generated by the use of AI.

In 2025, California enacted a law (CA AB 621) adding digitized sexually explicit material to provisions creating a civil action against a person who creates and intentionally discloses sexually explicit material without consent or creates such material of a minor. The law also authorizes a civil action against a person who knows, or reasonably should know, that the depicted individual was a minor when the material was created.

Digital Replicas

In 2024, California enacted two laws to protect performers from deepfake digital replicas. The first law (CA AB 1836) creates liability against a person who produces, distributes, or makes available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without specified prior consent. The second law (CA AB 2602) requires that a contract between an individual and any other person for the performance of personal or professional services is unenforceable only as it relates to a new performance by a digital replica of the individual if the provision meets specified conditions relating to the use of a digital replica of the voice or likeness of an individual in lieu of the work of the individual.

In 2025, California enacted a law (CA SB 683) allowing a party to seek an injunction for the unauthorized use of another’s name, voice, signature, photograph, or likeness in products, merchandise, or goods, or for advertising. If an injunction is granted, a respondent will have two business days from the day the order is served to comply.

Health Care

In 2024, California enacted two laws related to AI use in health care settings. The first law (CA AB 3030) requires a health facility, clinic, physician’s office, or office of a group practice that uses generative AI to generate written or verbal patient communications to ensure that those communications include both a disclaimer that indicates to the patient that the communication was generated by AI and clear instructions permitting a patient to communicate with a human health care provider.

The second law (CA SB 1120) requires a health or disability insurer that uses AI, algorithms, or other software tools for utilization review or utilization management decisions to comply with requirements pertaining to the approval, modification, or denial of services, inclusive of federal rules and guidance regarding the use of AI, algorithms, or other software tools. Such software must be applied equitably and fairly across the patient population.

In 2025, California enacted a law (CA AB 489) prohibiting an individual from using AI to falsely indicate or imply the possession of a license or certificate to practice a health care profession.

Education

In 2024, California enacted two laws related to AI use in education. The first law (CA AB 2876) requires the Instructional Quality Commission to consider incorporating AI literacy content into the mathematics, science, and history-social science curriculum frameworks after 2025 and to consider including AI literacy in its criteria for evaluating instructional materials when the state board next adopts mathematics, science, and history-social science instructional materials.

The second law (CA SB 1288) establishes a working group related to AI in public schools, to provide guidance for local educational agencies and charter schools on the safe use of AI in education, and to develop a model policy regarding the safe and effective use of AI in ways that benefit, and do not negatively impact, pupils and educators.

Facial Recognition & Law Enforcement Use of AI

In 2019, California enacted a law (CA AB 1215) that prohibits using facial recognition to analyze images captured by police body cameras. 

In 2025, California enacted a law (CA SB 524) establishing requirements for law enforcement’s use of AI. The law requires law enforcement agencies to maintain an AI policy and to disclose whether a report was made, in full or in part, with AI. The law also prohibits any draft report made with AI from being considered an officer’s official statement; however, final reports made with the use of AI may be considered official statements. Additionally, the law prohibits vendors to law enforcement agencies from sharing or selling information provided by a law enforcement agency, or otherwise allowing it to be processed by AI.

Regulations

California Privacy Protection Agency (CPPA)

On November 27, 2023, the California Privacy Protection Agency (CPPA) released a draft text of regulations related to businesses' use of “automated decision-making technology.” The CPPA emphasized that this regulatory text was only in draft form and that the agency had not yet started the formal rulemaking process on this issue. The draft rules would define “automated decision-making technology” (which the agency abbreviates as ADMT) as “any system, software, or process — including one derived from machine-learning or artificial intelligence — that processes personal information and uses computation to make or execute a decision or facilitate human decision making.”

Importantly, consumers must be given notice that ADMT is being used and that notice must include a specific, plain-language explanation of how the business is using the technology. In addition to the notice requirement, the draft regulations establish a process for consumers to request information from a business about how they are using ADMT.

A primary issue area of these draft regulations is the ability of consumers to opt out of having their information collected, evaluated, and used by an ADMT. The regulations would establish that consumers have a right to opt out of a business's use of an ADMT if its use:

  • produces a legal or similarly significant effect (e.g., used to evaluate a candidate for hire or determine whether or not to give a loan to an applicant);

  • profiles individuals who are acting in their capacity as employees, independent contractors, job applicants, or students; or

  • profiles a consumer when they are in a publicly accessible place, such as a shopping mall, restaurant, or park.

The draft regulations also establish situations in which consumers will not have a right to opt out. As currently written, the regulations state that consumers will be unable to opt out when ADMT is used to:

  • prevent, detect, or investigate security incidents;

  • prevent or resist malicious, deceptive, fraudulent, or illegal actions directed at a business;

  • protect the life and safety of consumers; or

  • provide the good or service requested, where the use of ADMT is necessary to do so.

On Dec. 8, 2023, the CPPA held a public board meeting where board members criticized the draft regulatory text as so broad that it could cover essentially any technology. As a result, the CPPA Board directed staff to prepare revised drafts that take into account the feedback from board members.

On Mar. 8, 2024, the CPPA Board voted to take a step toward formal rulemaking on regulations for automated decision-making technology. The proposed update clarifies the contents of a risk assessment, amends considerations for impacts on consumer privacy, and addresses safeguards. In September 2025, the Office of Administrative Law approved the CPPA's final rulemaking.

California Civil Rights Department (CRD)

On May 17, 2024, the CRD’s Civil Rights Council issued a notice of proposed rulemaking and new proposed modifications to California’s employment discrimination regulations. This follows draft modifications to its anti-discrimination law in March 2022. “Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems” seeks to restrict how employers can use AI to screen workers and job applicants. The proposed regulations would affirm that the state’s anti-discrimination laws and regulations apply to potential discrimination caused by the use of “automated systems” and make it clear that the use of third-party service providers is included in the regulation. Stakeholders had until July 18, 2024, to submit written comments on the proposed CRD regulations.

Legislative & Regulatory History

  • 2025 - The CPPA finalized regulations that create obligations for businesses that use automated decision-making technology (ADMT) for “significant decisions” about California consumers.

  • 2025 - California enacted CA SB 243, which establishes requirements for companion chatbots. 

  • 2025 - California enacted CA AB 361, which prohibits a defendant in a civil case who developed, modified, or used AI from asserting a defense that the AI caused harm to the plaintiff.

  • 2025 - California enacted CA AB 621, which adds digitized sexually explicit material to provisions creating a civil action against a person who creates sexual deepfakes.

  • 2025 - California enacted CA AB 853, which delays the enforcement of the California AI Transparency Act until August 2, 2026.

  • 2025 - California enacted CA AB 489, which prohibits using AI to use specified terms, letters, or phrases to falsely indicate or imply possession of a license or certificate to practice a health care profession.

  • 2025 - California enacted CA SB 524, which establishes requirements for law enforcement’s use of AI. 

  • 2025 - California enacted CA SB 683, which allows a party in a civil action to sue for the unauthorized use of a digital replica. 

  • 2025 - California enacted CA SB 361, which requires data brokers to disclose if they have sold consumer data to the developer of a generative AI system in the last year. 

  • 2025 - California enacted CA AB 325, which prohibits using or distributing a pricing algorithm that uses nonpublic competitor data.

  • 2025 - California enacted CA AB 578, which allows food delivery platforms to use an automated system to address customer service concerns.

  • 2025 - California enacted CA AB 979, which requires the development of a California AI Cybersecurity Collaboration Playbook to facilitate information sharing and strengthen defenses against emerging threats across the cyber and AI communities.

  • 2025 - California enacted CA SB 53, which establishes requirements for frontier AI models. 

  • 2024 - California enacted CA SB 1288, which establishes a working group to provide guidance on AI in public schools.

  • 2024 - California enacted CA AB 2876, which requires consideration of AI literacy in the K-12 curriculum. 

  • 2024 - California enacted CA SB 896, which requires state agencies using generative AI to disclose that fact, with clear instructions on how to contact a human employee.

  • 2024 - California enacted CA SB 1120, which requires health insurers that utilize AI in decision-making to follow additional requirements.

  • 2024 - California enacted CA AB 3030, which requires communications to a patient by a health care provider that uses generative AI to disclose the AI use and give the patient the option to communicate directly with a human.

  • 2024 - California enacted CA AB 2905, which requires a call from an automatic dialing-announcing device to inform the person called if the prerecorded message uses an artificial voice generated using AI.

  • 2024 - California enacted CA SB 942, which requires generative AI providers to make AI detection tools available to users.

  • 2024 - California enacted CA AB 1831 & CA SB 1381, which amend child pornography laws to include matter that is digitally altered or generated by the use of AI.

  • 2024 - California enacted CA AB 2885, which establishes a uniform definition for “artificial intelligence” for existing provisions in state law relating to studies of AI and inventories of AI use in state government.

  • 2024 - California enacted CA AB 2013, which requires the developer of a generative AI system or service to publish documentation regarding the data used to train the AI system.

  • 2024 - California enacted CA SB 981, which requires a social media platform to provide a mechanism to report digital identity theft and requires immediate removal of a reported instance of sexually explicit digital identity theft.

  • 2024 - California enacted CA SB 926 & CA SB 896, which criminalize the intentional distribution of certain sexual images, including digital and computer-generated images, without consent.

  • 2024 - California enacted CA AB 2839, which prohibits an entity from knowingly distributing an election communication with malice during specified periods before and after an election that contains materially deceptive deepfake content, unless used for parody or satire and accompanied by a disclaimer. In October 2024, a federal judge blocked this law with a preliminary injunction, ruling that the law likely violates the First Amendment. 

  • 2024 - California enacted CA AB 2355, which requires a committee that distributes a qualified political advertisement to include a disclosure that the ad was generated or substantially altered using AI.

  • 2024 - California enacted CA AB 2655, which requires large online platforms to block the posting of materially deceptive content related to an election and to label such content during specified periods before and after an election. In August 2025, a federal judge struck down CA AB 2655.

  • 2024 - California enacted CA AB 2602, which renders certain contracts unenforceable as related to a new performance by a digital replica of an individual.

  • 2024 - California enacted CA AB 1836, which limits digital replicas of a deceased personality’s voice or likeness without specified prior consent. 

  • 2024 - California issued formal guidelines for state agencies to follow when buying generative AI tools for government use. 

  • 2023 - Gov. Newsom issued Executive Order N-12-23 on Sep. 6, 2023, directing state agencies to study the potential uses and risks of generative AI and to engage with legislative partners and key stakeholders in a formal process to develop policy recommendations for the responsible use of AI.

  • 2023 - California enacted CA AB 302, which mandates a thorough inventory of all “high-risk” AI systems “that have been proposed for use, development, or procurement by, or are being used, developed, or procured” by the state.

  • 2022 - California enacted CA AB 972, which extended the sunset provision of CA AB 730 until 2027, which requires disclosure of deepfake use in campaign material.

  • 2019 - California enacted CA AB 1215, which prohibited the use of facial recognition to analyze images captured by police body cameras.

  • 2019 - California enacted CA AB 730, which required disclosure of deepfake use in campaign material.

  • 2019 - California enacted CA SB 36, which requires state criminal justice agencies to evaluate potential biases in AI-powered pretrial tools.

  • 2018 - California enacted CA SB 1001, which requires disclosure when using a “bot” to communicate or interact with another person.
