Connecticut’s AI Vision: An Analysis of SB 2

Lawmakers continue to move deepfake bills through the legislative process during a week full of major announcements from the AI industry. Some key highlights:

  • Sen. Maroney (D) unveiled bill language for the highly anticipated comprehensive AI bill in Connecticut, which is the subject of this week’s deep dive.

  • Sexual deepfake bills passed their chamber of origin in West Virginia and a key committee in Kentucky.

  • And Georgia lawmakers advanced an electoral deepfake bill through the House.


This week, lawmakers in Connecticut released the text of their much-anticipated AI bill (CT SB 2). The product of months of work by the AI Task Force created by legislation last session, the bill lays out a comprehensive framework for the development and deployment of AI models, with a longer enforcement runway built on rolling effective dates and safe harbor provisions. The second half of the bill compiles many of the narrower measures we’ve seen enacted in a handful of other states, addressing issues like deepfakes, educating workers and students on AI tools, creating access to AI computing power for entrepreneurs and researchers, government use of AI, and further study of AI policy issues.

The first section of the bill addresses “high-risk artificial intelligence systems,” defined as any AI system that makes, or is a controlling factor in, a consequential decision. Effective July 1, 2025, developers of such systems would have to take certain measures to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination, document the system’s limitations and uses, establish data governance measures for training datasets, and explain how consumers can monitor such systems once deployed.

Deployers of AI systems would be required to use “reasonable care” to protect against algorithmic discrimination, with a safe harbor available if they follow certain provisions. The bill requires them to implement a risk management policy and program and to perform an impact assessment. Developers and deployers are both required to state publicly how they manage known or foreseeable risks. If a system is discovered to have caused, or to be capable of causing, algorithmic discrimination, it must be reported to the state attorney general.

The next section of the bill regulates generative AI systems, defined as models able to produce synthetic digital content. Effective July 1, 2026, developers will be required to mitigate risks, ensure that the model is not trained on unsuitable datasets (such as child pornography), achieve appropriate levels of performance and safety, and incorporate techniques that allow for authentication of content. Notably, open-source software would be exempt from these requirements.

The legislation would require notice to consumers who are subject to a consequential decision by a high-risk AI system or who interact with any AI system. Sole enforcement authority would rest with the state attorney general. Developers and deployers would have a 60-day right to cure violations during the first year of implementation.

The second half of the bill is a cornucopia of other AI-related issues. The bill bans the dissemination of sexual deepfake content without consent and of political deepfakes within 90 days of an election without a disclaimer. Each state agency would be required to study how AI can help improve government efficiency, and state employees would receive training on the use of generative AI. AI would be integrated into workforce training programs, and colleges would need to offer online AI programs and certifications. The bill would also establish a computing cluster with colleges and universities to give small businesses and researchers access to the computing power needed to harness AI tools for innovation.

Finally, the bill creates an Artificial Intelligence Advisory Council to make recommendations on regulation of AI in the private sector and adoption of an AI bill of rights for individuals, suggesting this is just one chapter in a multi-year effort to develop AI guidelines in Connecticut. 

Bill sponsor Sen. James Maroney (D) has been a point person among state lawmakers, not just in the Nutmeg State but nationwide, organizing a bipartisan, multi-state study group on AI. Having already shepherded a comprehensive privacy law through the legislature a few years ago, he has laid the groundwork to pass landmark artificial intelligence legislation as well. Time will tell if Connecticut builds the framework other states look to adopt.

Recent Policy Developments

In the News

  • Sora, Gemini 1.5, and Groq: The AI industry is abuzz this week with three major developments. First, OpenAI announced and previewed its text-to-video model, Sora. Only a few weeks after launching its GPT-4 rival Gemini Advanced, Google announced an upgraded Gemini 1.5, which among other improvements will include a context window of 1 million tokens, giving the chatbot a much larger memory than its rivals. Finally, a company called Groq has impressed with demos of its custom AI chips, which allow AI models to generate responses remarkably quickly, opening up opportunities like real-time voice conversations with AI models.

  • FTC Targets Deepfakes: The Federal Trade Commission made two major announcements last Thursday. First, the FTC finalized a rule giving the agency stronger tools to combat scammers who impersonate businesses or government agencies, enabling the FTC to file cases directly in federal court to force scammers to return money obtained through impersonation schemes. Second, the FTC is seeking public comment on a supplemental notice of proposed rulemaking that would extend those impersonation protections to individuals. The proposed rule would make companies liable if they "know or have reason to know" that their technology "is being used to harm consumers through impersonation."

  • U.S. House AI Task Force: On Tuesday, Speaker Johnson (R) and Leader Jeffries (D) announced the establishment of a bipartisan Task Force on AI to explore how Congress can ensure America continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats. The Task Force has 24 members, will be led by Chair Jay Obernolte (R) and Co-Chair Ted Lieu (D), and will produce a comprehensive report that will include guiding principles, forward-looking recommendations, and bipartisan policy proposals developed in consultation with committees of jurisdiction.

Major Policy Action

  • West Virginia: On Tuesday, the Senate passed two sexual deepfake bills related to minors, sending them to the House for consideration. The first bill (WV SB 740) would criminalize inserting a visual image of an actual minor into a recording to create the appearance that the minor is engaged in sexually explicit conduct. The second bill (WV SB 741) would prohibit the creation, production, distribution, or possession of artificially generated child pornography.

  • Kentucky: On Wednesday, the Senate Committee on State and Local Government reported favorably an electoral deepfake bill (KY SB 131), sending the bipartisan legislation to the full Senate for its consideration. The substitute bill approved by the committee would allow political candidates to bring a lawsuit against the sponsor of a deepfake electoral communication when the communication appears without a conspicuous disclosure.

  • Georgia: On Thursday, the House approved an electoral deepfake bill (GA HB 986), sending the legislation to the Senate for that chamber’s consideration. The bill would prohibit political deepfakes within 90 days of an election and would create a new felony for deepfakes intended to influence a candidate’s chances of being elected.

  • Virginia: Lawmakers decided this week to wait until the 2025 legislative session to address deepfakes, citing the need for more research, after legislative committees failed to move forward on legislation that would have created penalties for deceptive use of deepfake media (VA HB 697 & SB 571) or nonconsensual sexual deepfakes (VA HB 1525).

Notable Proposals

  • California: A bill (CA AB 3024) introduced by Assemblymember Rebecca Bauer-Kahan would require “data digesters,” defined as businesses that use personal information to train artificial intelligence, to register with the government. Registration would require a fee and certain disclosures on the categories of personal information used to train the AI system.

  • Mississippi: An electoral deepfake bill (MS SB 2423) introduced last week would not only regulate AI-generated political advertising but also prohibit AI-generated robocalls, like the ones used in the New Hampshire Democratic primary, unless accompanied by a disclaimer.
