Connecticut Scales Back AI Regulation in Third Legislative Push
Key Takeaways
- Connecticut SB 5 takes a narrower approach to AI regulation than previous attempts, focusing on specific use cases rather than comprehensive developer and deployer requirements.
- The bill introduces companion chatbot regulation that requires disclosure to users and includes protections for minors, such as prohibiting chatbots that could encourage self-harm or provide unlicensed mental health services.
- Employers using automated systems for employment decisions would need to notify workers and job applicants, provide explanations for adverse decisions, and allow opportunities to correct data and appeal.
- Developers would be required to implement synthetic content labeling by October 2027, ensuring AI-generated images, audio, and video are marked in a way consumers can detect.
- The bill includes workforce development provisions like an AI Policy Office and regulatory sandbox program, designed to address Governor Lamont's concerns about making Connecticut attractive to AI companies.
Connecticut Sen. James Maroney (D) has been an outspoken advocate for artificial intelligence regulation. Yet Connecticut has failed to pass comprehensive AI legislation in each of the last two legislative sessions. Is the third time a charm? Lawmakers have drafted a new bill that scales back many of the regulatory obligations on developers and deployers, reflecting a new political reality. But will it be enough to win the support of a skeptical governor, whose signature is needed for enactment?
Last Friday, the Joint Committee on General Law released the draft of this year’s artificial intelligence legislation (CT SB 5) with 23 co-sponsors in the Senate, including Sen. Maroney. The proposal represents a departure from past efforts, which required documentation from developers and deployers, and at one point, “integrators” of “high-risk artificial intelligence systems.”