AI Industry Weighs In: Amended Connecticut SB 2

This week, governors continue to sign the dozen or so AI-related bills sent to their desks as lawmakers debate additional AI measures. Some key highlights:

  • Governors in Indiana, Washington, and Idaho signed deepfake bills into law. With these additions, 13 states have now enacted laws to address sexual deepfakes and 8 states have adopted laws to address political deepfakes. 

  • Tennessee Gov. Lee signed into law the ELVIS Act, which protects performers’ voices from AI, along with a bill directing educational institutions to adopt AI policies.

  • And a joint committee in Connecticut made major changes to the Senate’s comprehensive AI bill, which is the subject of this week's in-depth analysis. 


After months of work by a legislative work group, Connecticut lawmakers proposed a landmark comprehensive AI bill last month that could set a template for other states to follow. Not surprisingly, lawmakers have already amended the original bill to address numerous concerns raised after weeks of testimony from stakeholders. The proposal reflects one of the first major attempts to set broad guardrails for the emerging AI industry, but the issues raised thus far reflect just how difficult it will be to set rules for a technology that policymakers — and even industry professionals — are still struggling to wrap their heads around.

The much-watched comprehensive artificial intelligence bill in Connecticut (CT SB 2) passed out of committee last week with significant amendments. When the bill was first introduced last month, we analyzed it as a framework setting out obligations for developers and deployers of artificial intelligence systems. The original bill also banned sexual and political deepfakes, provided for AI education and certification programs, established a computing cluster for small businesses, and created an Artificial Intelligence Advisory Council to make further recommendations.

After weeks of hearings, the Joint General Law Committee (made up of members of both the Connecticut House and Senate) decided to keep most of the major provisions of the original bill but made modifications in response to concerns raised during the hearings. Lawmakers amended some definitions in the bill, such as adding anyone who creates or substantially modifies general-purpose AI systems to the definition of “developer.” The Committee also clarified the definition of “algorithmic discrimination” to exclude self-testing and to allow efforts to increase diversity or redress historic discrimination.

Section 2 of the bill, which requires certain disclosures from developers of high-risk AI systems, was broadened to apply to developers of all AI models, and the required disclosures were modified. Section 3, which requires reasonable care to protect against foreseeable risks of discrimination against consumers, was narrowed to apply only to deployers of high-risk artificial intelligence systems.

Lawmakers rewrote Section 4 to require developers of general-purpose AI models to disclose technical documentation covering testing, tasks, and policies. Developers would also be required to provide deployers with documentation sufficient to understand the capabilities of the model and to disclose to the state attorney general's (AG) office the content used to train the model.

The amended draft gives the Commissioner of Consumer Protection oversight duties shared with the AG’s office and clarifies that both offices are tasked with enforcing the law. The bill makes clear that there is no private right of action for violations of the requirements on developers and deployers, a change that drew concern from Senator Tony Hwang (R), who warned that “private citizens would lose their right to be able to have cause of action and remove the ability for them to address legal as well other damaging impacts caused by AI.”

The original bill provided for a ban on sexual deepfakes, but the committee tweaked those provisions and specified criminal penalties. Political deepfake provisions were also amended to specify disclaimer requirements and to exempt parody and satire. Lawmakers added two new sections requiring developers and deployers of models that create “synthetic digital content” to disclose to consumers, in a machine-readable format, that the content is synthetic.

The amended bill creates a new section requiring a deployer of a high-risk AI system to use reasonable care to protect consumers from reasonably foreseeable risks of algorithmic discrimination. It includes affirmative defenses for programs that integrate certain risk management frameworks, along with a 60-day right to cure during the first year of implementation; after that, the right to cure would be left to the discretion of the Commission on Human Rights and Opportunities. Deployers would also be required to provide impact assessments to the Commission.

Finally, lawmakers tacked on a competitive grant program to fund pilot studies that use AI to reduce health inequities in the state, as well as programs to help hospitals, fire departments, schools, nonprofits, and criminal justice institutions integrate algorithms. The bill provides for the appointment of a primary point of contact for economic development in the field of AI. It also provides for a study on the use of AI by health providers.

While the bill has support in the Senate, it’s less clear how strong the support in the House will be or whether Gov. Ned Lamont (D) will sign such far-reaching legislation. The bill faces opposition from industry groups, and Connecticut’s Chief Innovation Officer, Daniel O’Keefe, told the joint committee that “AI regulations that are unique to Connecticut will cause developers and startups to pause before operating here, no matter how well-intentioned and well-crafted the law,” warning lawmakers “to be wary of unintended consequences.” And even if the full measure is enacted, lawmakers like State Rep. David Yaccarino (R) expect to revisit the topic in the near future, saying “It's so fast, so evolving, that we will have to come back.”

Recent Developments

In the News

  • Nvidia’s “Blackwell” AI Chip: On Monday, at Nvidia’s annual developer conference, dubbed “AI Woodstock” this year, CEO Jensen Huang unveiled the chipmaker’s latest semiconductor, aimed at AI developers, which the company claims is 30 times faster than its previous chips, along with additional software for AI use.

Major Policy Action 

  • Indiana: Gov. Holcomb (R) signed two deepfake bills into law, one targeting political deepfakes and the other sexual deepfakes. The sexual deepfake law (IN HB 1047) provides that certain AI-generated images constitute an “intimate image” for purposes of the crime of distributing an intimate image. And the political deepfake law (IN HB 1133) prohibits the use of deepfakes in political advertising without a disclaimer and provides a civil cause of action for the person depicted. Both laws are effective immediately. The governor also signed a law (IN SB 150) that creates an AI task force to study and assess the use of AI technology by state agencies and requires an inventory of all AI use by the state.

  • Washington: Gov. Inslee (D) signed a sexual deepfake bill (WA HB 1999) and an AI study bill (WA SB 5838) into law. The deepfake law criminalizes disclosing fabricated intimate images, expands criminal offenses to cover fabricated depictions of minors, and provides a civil cause of action for the nonconsensual, intentional disclosure or threatened disclosure of a fabricated intimate image. The deepfake law goes into effect on June 6, 2024. The study bill creates an AI Task Force to assess uses, develop guiding principles, and make recommendations for the regulation of generative AI.

  • Idaho: On Wednesday, Gov. Little signed a deepfake bill (ID HB 575) into law. The law makes it unlawful to disclose explicit synthetic media without consent if the disclosure would cause the identifiable person substantial emotional distress, or if it is made to harass, intimidate, or humiliate a person or to obtain money through fraud.

  • Tennessee: This week, Gov. Lee signed into law both the ELVIS Act (TN HB 2091) and a bill (TN SB 1711) directing educational institutions to adopt AI policies. The ELVIS Act, which we previously wrote about here, adds “voice” as a protected personal right. The education law requires universities and local school boards to adopt a policy on the use of AI technology by students, faculty, and staff for instructional and assignment purposes, to be implemented by 2025.

  • Maryland: This week, lawmakers passed several deepfake bills out of their legislative chamber of origin, sending them to the opposite chamber for consideration. Two bills (MD HB 145 & MD SB 858) address sexual deepfakes, and the third (MD SB 978) addresses political deepfakes.

  • Oklahoma: Last Thursday, the House passed a bill (OK HB 3453) that would provide a “right to know” when AI is being used, require reasonable security measures for AI models to protect personally identifiable information, provide a right to consent to the use of a citizen's voice or image in derivative media generated by AI, and prohibit unlawful discrimination through an algorithm. The bill now heads to the Senate for consideration before the legislature is scheduled to adjourn at the end of the month.

Notable Proposals

  • California: Senator Hurtado (D) introduced the Preventing Algorithmic Collusion Act (CA SB 1154), which would require reporting on each pricing algorithm upon request from the attorney general. The bill would also prohibit the use of a pricing algorithm that uses or was trained on nonpublic competitor data and require certain users of pricing algorithms to make disclosures.

  • California: On Wednesday, lawmakers amended a shell bill to introduce the California AI Transparency Act (CA SB 942), which would establish a Generative AI Registry and require certain generative AI providers to include a visible disclosure in AI-generated content and to provide an AI detection tool that lets a person query the covered provider as to whether, or to what extent, content was created by generative AI.

  • New York: Concerns over rising housing costs spurred a proposal (NY AB 9473) to prohibit a landlord from using an algorithmic device to determine the amount of rent to charge a tenant. Lawsuits were recently filed accusing major landlords in Tennessee and Washington of using algorithmic pricing software, and the U.S. Department of Justice is reportedly investigating the practice.
