States Have AI Chatbots in Their Crosshairs

After several years of debating sweeping artificial intelligence bills, state lawmakers in 2026 are shifting toward narrower, use-case-driven regulation, with chatbots at the center of that transition. That shift is driven in large part by the growing intersection between AI policy and online child safety, as legislators focus on how conversational systems interact directly with minors. Alongside child safety-focused proposals, states are also advancing chatbot regulations that impose disclosure obligations and guardrails around therapeutic or mental-health-related interactions, further expanding the scope of compliance considerations for businesses.

Chatbots have rapidly become a common feature across consumer apps, customer service platforms, education tools, and wellness services, shifting from experimental deployments to routine points of user interaction in everyday digital life. As their use becomes more prevalent, lawmakers have sought to add guardrails to protect consumers. Thus far, regulation of chatbots has primarily focused on three main areas: disclosure requirements, regulation of “companion” chatbots, and regulation of chatbots that provide professional services. 

Chatbot disclosure requirements

Lawmakers have pushed for chatbot disclosure requirements to ensure users understand when they are interacting with an automated system rather than a human, so they are not deceived or misled about the limitations of the interaction. Several states have already enacted laws that require chatbot interactions to be disclosed. 


For companies, the most consequential issues in emerging chatbot disclosure legislation are when disclosure is required and which entities must provide it. Utah lawmakers originally passed a law requiring proactive disclosure at the outset of an interaction for certain regulated occupations, with a more reactive disclosure requirement for certain activities under the consumer protection framework triggered only if a consumer asked. Last session, the more reactive requirement was amended (UT SB 226) to apply to any “supplier” that uses generative AI to interact with a consumer in connection with a transaction.

California enacted the first chatbot disclosure law, which took effect in 2019, but it was limited to requiring disclosure only when a bot knowingly deceives a person to incentivize a sale or influence an election. One proposed bill (CA AB 410) would expand disclosure requirements to any person who uses a bot to autonomously communicate with another. Colorado’s major AI law (CO SB 205) requires AI developers and deployers to disclose to consumers that they are interacting with AI, although that law has yet to take effect and is expected to be amended in the 2026 session. Two of last year’s proposals to amend the Colorado AI law would have scrapped most of its provisions but retained the disclosure requirement for consumer interactions with AI.

Maine lawmakers also passed a bill last session (ME LD 1727) that requires a “clear and conspicuous” disclosure that the consumer is not engaging with a human being. The law applies to artificial intelligence chatbots and “any other computer technology” used to “engage in trade and commerce with a consumer,” although only if a “reasonable consumer” would believe the interaction is with a human.

While disclosure requirements establish a baseline transparency obligation across chatbot interactions, a distinct category of AI systems — companion chatbots designed to simulate ongoing personal relationships — has prompted lawmakers to impose additional safeguards that go well beyond simply informing users they are interacting with AI.

Targeting companion chatbots

Concern over companion chatbots has accelerated rapidly as a handful of high-profile lawsuits and media investigations have raised alarms about AI systems that simulate human companionship, particularly when used by minors. That issue salience spurred California lawmakers to enact the first landmark companion chatbot law last year (CA SB 243), which requires a “clear and conspicuous notification” that the companion chatbot is artificially generated and not human, as well as protocols for addressing users’ suicidal ideation or expressions of self-harm. For known minors, operators must provide an additional notice once every three hours and take reasonable measures to prevent the chatbot from engaging in sexual conduct.

New York’s companion chatbot law (NY A 3008/S 3008), also enacted last year, has similar requirements for protocols for detecting and addressing suicidal ideation or expression of self-harm, but requires a “clear and conspicuous” notification at the outset of each interaction and at least every three hours for all users, not just minors.

California and New York each attempt to distinguish between these types of companion chatbots and those used for customer service or business purposes. California’s law applies to systems that provide “adaptive, human-like responses to user inputs,” are “capable of meeting a user’s social needs, including by exhibiting anthropomorphic features,” and are “able to sustain a relationship across multiple interactions.” New York’s law applies to systems that can “simulate a sustained human or human-like relationship,” by retaining information from prior interactions or user sessions to personalize engagement, asking unprompted emotion-based questions that go beyond direct responses to user prompts, and sustaining ongoing dialogue on matters personal to the user.

By contrast, New Hampshire lawmakers focused on prohibited acts in a law (NH HB 143) enacted last year. Under the New Hampshire law, certain online operators are prohibited from communicating with the intent to facilitate, encourage, offer, solicit, or recommend that a child “imminently engage in” sexual conduct, illegal use of drugs or alcohol, acts of self-harm or suicide, or any crime of violence against another. Liability applies to any computer online service or internet service, including an AI chat program or bot, whose sole purpose is to provide responsive, open-ended generative communication through AI.


Lawmakers in 15 states have already introduced companion chatbot bills. Some of these bills would require companion chatbot operators to verify the user’s age, with certain content blocked for minors, parental controls required, or, in some cases, a complete ban on use by minors. California Senator Steve Padilla (D), whose companion chatbot bill (CA SB 300) failed to pass last year, has introduced a bill (CA SB 867) that would ban, until 2031, the manufacture, sale, or exchange of a toy that includes a companion chatbot for a child 12 or under.

Professional services chatbots

Consumers are turning to both specialized therapy and mental-health chatbots and general-purpose AI chatbots as low-cost, always-available tools for emotional support, cognitive behavioral exercises, or stress management, often integrated into wellness apps and other digital health programs. But concerns have arisen about scope of practice, consumer confusion, and patient harm, particularly because these tools can cross the line into providing licensed health care services.

Illinois (IL HB 1806) and Nevada (NV AB 406) each enacted laws in 2025 that prohibit artificial intelligence from practicing professional mental or behavioral health care or making such representations. These laws allow licensed therapists to use AI for supplementary services, such as administrative tasks related to scheduling or organizing files, but prohibit AI from making independent therapeutic recommendations, directly interacting with clients, or detecting emotional or mental states. Lawmakers in Utah took a different approach last year when they enacted a law (UT HB 452) allowing mental health chatbots to operate, so long as the chatbot discloses to the consumer that the interaction is not with a human, refrains from selling or sharing personal information with third parties, and does not advertise products without disclosing that the content is an advertisement.

Other states have taken a narrower, harm-based approach rather than directly regulating therapy chatbots as a category. For example, Texas enacted a limited AI regulation law (TX HB 149) that prohibits developing or deploying an AI system in a manner intended to incite or encourage physical self-harm, harm to others, or criminal activity. While not specific to mental health or therapy applications, the law reflects a broader trend of regulating specific AI use cases rather than imposing comprehensive requirements.

Beyond mental health, chatbots can be used for a number of professional services. Lawmakers in Virginia introduced a bill (VA HB 669) last week that would prohibit chatbots from being used to provide a range of licensed services, including architecture, engineering, surveying, landscape architecture, geology, dentistry, medicine, nursing, optometry, pharmacy, physical therapy, certain mental health professions, psychology, social work, or veterinary medicine.

Rather than adopting comprehensive AI frameworks, lawmakers are layering targeted obligations onto chatbot deployments based on how they interact with users, what roles they perform, and the risks they present. As chatbot capabilities continue to converge across different contexts, navigating this fragmented and rapidly evolving landscape is likely to remain one of the central compliance challenges of state-level AI regulation in 2026 and beyond.
