Utah's Measured Approach to Mental Health AI
Key highlights this week:
We’re tracking 1,000 AI-related bills across all 50 states during the 2025 legislative session.
In an unexpected twist, the proposal to scale back Colorado’s AI law, which we analyzed last week, failed to make it across the finish line in time for Wednesday’s adjournment deadline.
A key Connecticut Senate committee approved Sen. Maroney’s (D) algorithmic discrimination bill this week. But the governor, who derailed last year’s effort, remains noncommittal.
The California Privacy Protection Agency released new draft regulations that trim back earlier proposals to regulate automated decision-making technology (ADMT). Stakeholders have until June 2 to submit comments.
And as legislative sessions wrap up in many states, and lawmakers have officially introduced 1,000 bills this year by our count, we dive into Utah’s underrated AI policymaking in this week’s deep dive.
Last year, I declared Utah’s moderate approach to AI regulation a potential model for other states to follow. This year, lawmakers and regulators in Utah built on last year’s AI Policy Act with a targeted approach to mental health AI chatbots.
Since 2021, Utah has enacted 14 bills related to AI (including 6 this year). The state’s signature AI law is last year’s Artificial Intelligence Policy Act (UT SB 149), which clarifies that the use of an AI system is not a defense for violating the state’s consumer protection laws and requires disclosure when an individual interacts with AI in a regulated occupation. This emphasis on transparency is a hallmark of Utah’s strategy of light-touch AI regulation.
Utah’s Office for AI Policy
But an underappreciated aspect of Utah’s AI Act is that it established a first-in-the-nation Office for AI Policy, charged with consulting businesses, academic institutions, and other stakeholders on regulatory proposals that foster innovation while safeguarding public safety. Many states have established AI working groups and committees, along with economic incentives to attract AI research, without notable success. But Utah’s AI Policy Office stands out from the crowd.
Utah’s Office for AI Policy is led by BYU professor Zach Boyd. “I am mostly focused on the practical reality that Congress is going to be incredibly slow-moving on this if they accomplish anything at all about it,” Boyd said in an interview. “Even if some of this may eventually be resolved at the federal level, the states are the laboratories of democracy. This is where things get tried out quickly.”
The Office runs the state’s AI Learning Lab program, serving as a collaborative space for stakeholders to discuss and develop AI policies. Importantly, the Office has the authority to create regulatory mitigation agreements, which can provide temporary relief from certain regulations for AI companies participating in the Learning Lab. This allows companies to test new AI technologies with limited liability.
Regulatory Mitigation Agreements
The Office wasted no time. Shortly after it was established last summer, the Office began a dialogue with local health providers, national mental health companies, and startups on how to provide proper oversight for AI use in mental health services. Late last year, the Office announced its first regulatory mitigation agreement with ElizaChat, a company developing an app that schools can offer teenage students to help them improve their mental wellness.
This agreement requires ElizaChat to implement a robust internal safety protocol for escalating severe cases to trusted adults and grants the company 30 days to rectify instances where the app may inadvertently engage in conversations that fall under the “practice of mental health therapy,” which requires state licensure. Boyd said of the agreement, “This is an example of how regulatory mitigation can empower innovative companies. Through our collaborative discussions with ElizaChat, we connected them with regulators to create a framework that safeguards consumers while allowing their service to flourish.”
Mental Health Chatbot Law
The result of this work with mental health stakeholders was legislation (UT SB 452) signed into law this year, regulating the use of mental health AI chatbots. The new law, which went into effect this week, addresses data privacy and transparency concerns in consumer interactions with AI-powered mental health chatbots and incentivizes the development and implementation of detailed safety and efficacy policies. The law defines “mental health chatbot” as AI technology that uses generative AI to engage in interactive conversations similar to confidential communications with a licensed mental health therapist.
Data Privacy & Advertising
First, the law prohibits suppliers of mental health chatbots from selling or sharing a Utah user's individually identifiable health information or user input with third parties. There are limited exceptions for user-consented sharing of data with a healthcare provider and certain scientific research. The law also places restrictions on advertising within mental health chatbot conversations, requiring clear disclosure of advertisements and prohibiting the use of user input to determine or customize advertisements. These provisions are meant to protect consumers without mandating blanket restrictions.
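To make these rules concrete, here’s a minimal sketch of how a supplier might gate data sharing and ad rendering under the law’s provisions. The data model, function names, and string labels are hypothetical illustrations for this newsletter, not anything prescribed by the statute or drawn from a real implementation:

```python
from dataclasses import dataclass

# Hypothetical data model for illustration; the statute does not prescribe one.
@dataclass
class ShareRequest:
    recipient_type: str           # e.g., "healthcare_provider", "advertiser"
    user_consented: bool          # explicit user consent on file
    is_scientific_research: bool  # qualifies under the research exception

def may_share_health_data(req: ShareRequest) -> bool:
    """Approximates the law's data-sharing rule: identifiable health
    information and user input may not be sold or shared with third
    parties, with narrow exceptions for user-consented sharing with a
    healthcare provider and certain scientific research."""
    if req.recipient_type == "healthcare_provider" and req.user_consented:
        return True
    if req.is_scientific_research:
        return True
    return False  # default: no selling or sharing with third parties

def render_ad(ad_text: str, targeted_with_user_input: bool) -> str:
    """Ads inside chatbot conversations must be clearly disclosed, and
    user input may not be used to determine or customize them."""
    if targeted_with_user_input:
        raise ValueError("user input may not drive ad targeting")
    return f"[ADVERTISEMENT] {ad_text}"  # clear disclosure prepended
```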
Transparency
Along with data privacy, transparency requirements are the low-hanging fruit of AI regulation. Last year’s AI Policy Act requires certain licensed professionals in Utah (e.g., accountants) to proactively disclose when a consumer is interacting with AI technology, while other professionals (e.g., telemarketers) must disclose AI use when asked by the consumer. This year’s law builds on these transparency mandates by requiring mental health chatbots to clearly disclose to users that they are an AI technology and not human. These disclosures need to occur before the initial interaction with the chatbot, at the beginning of any interaction if the user hasn’t accessed the chatbot in the previous seven days, and anytime a user asks the chatbot whether AI is being used.
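The disclosure triggers amount to a small piece of state logic, and a short sketch may make them concrete. The function below is a hypothetical illustration, assuming the supplier tracks each user’s last access time; the statute specifies the triggers, not any particular implementation:

```python
from datetime import datetime, timedelta
from typing import Optional

REDISCLOSURE_WINDOW = timedelta(days=7)  # the law's seven-day inactivity rule

def needs_ai_disclosure(last_access: Optional[datetime],
                        now: datetime,
                        user_asked_if_ai: bool) -> bool:
    """Returns True when the chatbot must disclose it is AI, not human:
    before the user's first interaction, at the start of any interaction
    after seven or more days of inactivity, or whenever the user asks."""
    if user_asked_if_ai:
        return True
    if last_access is None:  # no prior interaction on record
        return True
    return (now - last_access) >= REDISCLOSURE_WINDOW
```

On this reading, the seven-day window resets with each interaction, so a regular user sees the disclosure once up front while a lapsed user is reminded on return.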
Enforcement
Under the law, violations are enforced by the state’s Division of Consumer Protection and are not subject to a private right of action. The Division may impose administrative fines of up to $2,500 per violation and may also pursue injunctive relief.
Robust Policies Offer Protection from Licensing Violations
Finally, instead of mandating an impact assessment-style list of reporting requirements and paperwork, the new law outlines a detailed list of policies ensuring the safety and efficacy of the mental health chatbot. In return, the mental health chatbot supplier receives an affirmative defense against licensure violations. The deal is this: if a supplier creates, maintains, and implements a policy as prescribed under the law, and files that policy with the Division of Consumer Protection (the policy doesn’t need to be published publicly, unlike many proposed impact assessment requirements in other states), then the supplier is protected from potential liability for violations of the state’s licensing laws.
The reporting requirements in the law are fairly extensive; overall, the law requires policies to include licensed mental health therapist involvement, testing protocols, risk assessment mechanisms, measures to prevent discriminatory treatment of users, and compliance with federal health privacy regulations.
Notably, the policy standard outlined in the law requires suppliers to “ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in therapy with a licensed mental health therapist.” The Abundance Institute’s Taylor Barkley called this “one of my favorite provisions I’ve ever seen in any legislation anywhere.” Barkley explained that when setting a risk standard for AI, policymakers often demand pure perfection. He pointed to autonomous vehicles as a recent example, where expectations are often zero accidents rather than safety performance somewhere in the wide range between the current human standard and perfection. He believes this provision in the Utah law “strikes the right balance.”
Striking the Right Balance
Policymakers in Utah have been particularly careful to strike the right balance in regulating an emerging technology that offers both plenty of promise and the potential for harm. Utah remains an underrated model for other states to look toward as they struggle with the same dilemma.
Recent Developments
Major Policy Action
Federal: The Trump Administration plans to rescind and modify the Framework for Artificial Intelligence Diffusion, a Biden-era rule that would have curbed the export of sophisticated AI chips. The rule, which was set to take effect May 15, restricts the export of advanced AI chips from the U.S. without a license, based on the destination country.
Colorado: A proposed bill (CO SB 318) to amend the AI law enacted last year was tabled Monday when Sen. Rodriguez (D) could not find consensus to move forward. A last-minute effort to delay the effective date of the law to 2027 fell short on Tuesday night, despite Gov. Jared Polis (D) and other state officials sending a letter to lawmakers imploring a delay. The session adjourned on Wednesday, leaving the law set to go into effect in February 2026, although lawmakers could be brought back for a special session this summer.
Connecticut: The Senate Judiciary Committee approved Sen. Maroney’s (D) algorithmic discrimination bill (CT SB 2). Gov. Ned Lamont (D) failed to support a similar measure last year, causing the bill to stall. This week, a spokesman for the governor was non-committal on the bill, saying, "We continue to meet with and hear from industry leaders, local developers, and legislators to encourage innovation and move our economy forward, while ensuring Connecticut's laws protecting individual rights cover evolving technologies."
California: The California Privacy Protection Agency released new draft regulations that trim back earlier proposals to regulate automated decision-making technology (ADMT). The original regulations had drawn criticisms from industry groups and policymakers, causing board members to revise the rules to reflect changing political winds. Stakeholders will have until June 2 to submit comments on the new rules.
Arizona: On Wednesday, the legislature sent the governor a bill (AZ SB 1295) that would prohibit the use of deepfake images, video, or voice recordings with the intent to defraud or harass others. A violation would be punishable as a Class 5 felony.
Florida: A bill (FL HB 369) that would have required generative AI tools to apply provenance data was indefinitely postponed and withdrawn from consideration on Saturday.
Oklahoma: On Monday, Gov. Kevin Stitt (R) signed a sexual deepfake bill (OK HB 1364) into law, which prohibits the dissemination of sexual deepfakes without consent. The law takes effect on November 1, 2025.
South Carolina: The Legislature sent a pair of bills (SC SB 28 and SB 29) to Gov. Henry McMaster (R) that address sexual deepfakes of minors. Both chambers also approved a bill (SC HB 3058) that makes it unlawful to intentionally disseminate a digitally forged intimate image of another without consent.
Texas: The House passed a package of AI-related bills, including legislation on sexual deepfakes (TX HB 449), a bill (TX HB 3133) to require social media platforms to take down sexual deepfakes, and a bill (TX HB 3512) to require that certain state and local government officials receive training in AI. They now head to the Senate, which must approve them before the legislature is scheduled to adjourn on June 2.
Notable Proposals
Maine: A bill proposed this week (ME LD 1944) would add child sexual images created by generative AI or machine learning to provisions prohibiting child sexual abuse material. Maine is one of only a handful of states that have yet to pass any sexual deepfake legislation.
New Jersey: A bill (NJ A 5611) introduced on Monday would provide consumer protections against ticket resellers, including provisions prohibiting software that disguises a purchaser's identity or bypasses ticket purchase limits, as well as the use of bots to obtain tickets. Similar bills have been introduced in DC, Iowa, Pennsylvania, and Texas this year.