Lessons from Regulating Facial Recognition Technology

The highly anticipated launch of Google’s Gemini AI model this week illustrates how quickly the chatbot application of large language models (LLMs) has captivated both the public and policymakers. But AI itself is not all that new, and policymakers were addressing AI use cases long before ChatGPT burst onto the scene a year ago. 

One example is facial recognition technology. After decades of failed research, facial recognition took off when programmers utilized “facial geometry” to turn faces into math problems that AI algorithms could solve. The new technique dramatically improved the accuracy of facial recognition software, to the point that tech companies use facial recognition to unlock user devices or recognize and tag friends in photos, and law enforcement has used the technology to identify suspects. However, even after these advances, the technology has raised concerns among researchers over bias, with studies showing it is less accurate when used on women and people of color.

The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, actually provides the most respected industry benchmark to test the accuracy of facial recognition technologies. And that’s one reason why President Biden’s recent AI executive order tasked NIST with setting the standards to test AI systems for safety. 

Clearview AI, one of the leading facial recognition companies, controversially built its database of faces by scraping public photos and profile pictures from social media websites such as Twitter, Facebook, and even Venmo, eventually collecting 30 billion face photos to train its technology. The controversy over Clearview AI’s public data scraping is similar to the current debates about scraping LLM training data off the internet. Today, over 3,000 law enforcement agencies in the U.S. use facial recognition technology to match images of suspects to results in Clearview AI’s database. 

Policymakers quickly became skeptical of facial recognition, especially when used on the public without consent. Several states initially moved to broadly ban facial recognition use by law enforcement but eventually backtracked. The leader in regulating how technology like facial recognition is used, however, is Illinois. 

In 2008, lawmakers enacted the Illinois Biometric Information Privacy Act (BIPA). The idea behind BIPA was spurred by a staffer at the Illinois ACLU after he observed that a local supermarket offered customers the option to pay using a fingerprint. The definition of biometric identifiers in the law includes facial geometry in addition to fingerprints, voiceprints, and retina scans. 

Under Illinois’ BIPA, companies collecting biometric data from individuals (1) must publish a general notice about the company’s biometric data policy; (2) must provide specific notice and obtain consent from anyone whose biometric information is collected; and (3) are prohibited from selling or trading the personal biometric information for profit. 

Critically, BIPA provides a private right of action for anyone whose biometric information is handled in violation of the law, and this has led to a flood of class action lawsuits. This year, the Illinois Supreme Court clarified that a company is in violation every time a biometric scan takes place, even if the same scan is repeated over time, and that claims can reach back as far as five years before filing, placing a huge financial risk on any company looking to utilize biometric information. 

Settlements stemming from BIPA lawsuits include multi-hundred-million-dollar payouts from many of the major tech companies. In 2022, Clearview AI settled a BIPA lawsuit, agreeing not to sell its services to any private-sector businesses in the U.S. and to delete any photos geolocated in Illinois from its database. 

Notably, Washington and Texas have also enacted biometric privacy laws, but neither of these statutes includes a private right of action for citizens to bring their own lawsuits. While no state has matched the reach of Illinois’ BIPA law, policymakers’ interest in privacy protections has only grown. And as AI technology proliferates, the AI regulatory debate will be closely entwined with the privacy protection laws stacking up in the states. 

In addition to the privacy aspect, other parallels between the rise of facial recognition and today’s LLMs include the controversial scraping of public data to train the models, the role of NIST as a benchmarker, and concerns over potential bias built into the models. Policymakers have struggled to find a balance between privacy protections and promoting technological advancement with facial recognition, and they’re likely to face similar challenges regulating AI. As with Illinois’ BIPA, policymakers have focused on notice and consent requirements in AI regulations, with a private right of action serving as a particularly powerful lever. 

Recent Policy Developments

  • New Mexico: Last Monday, the Science, Technology & Telecommunications Committee discussed transparency in AI. The committee also considered Vermont’s law on the use of AI in state government. 

  • Michigan: Last Thursday, Gov. Whitmer (D) signed a package of AI-related bills into law. These new laws require disclosures (MI HB 5141) for pre-recorded phone messages and political advertisements that were created with AI, prohibit (MI HB 5144) distributing media that manipulates the speech or conduct of an individual within 90 days of an election without a disclaimer, and establish (MI HB 5145) sentencing guidelines for election law offenses related to deceptive media created with AI. Another new law (MI HB 5143) defines "artificial intelligence." Keep track of state laws regulating the use of deepfakes in electoral campaigns with our dedicated issue page on multistate.ai. 

  • Illinois: Last Thursday, the Illinois House passed a bill (IL SB 382), sending it to Gov. Pritzker (D) for his signature, that would amend the Civil Remedies for Nonconsensual Dissemination of Private Sexual Images Act by adding a definition for “digitally altered sexual image.” By adding this definition, lawmakers are attempting to address the harms of deepfakes that are used to depict individuals in a sexual manner without their consent, a problem that has received increased attention from lawmakers. 

  • Ohio: On Monday, the Department of Administrative Services released guidelines for the use of AI by state government workers. The policies require agencies to define a formal process for AI use cases, establish AI training for state workers, set requirements for procurement, require certain security provisions, and regulate data governance. 

  • Washington: Rep. Clyde Shavers (D) is considering a bill on autonomous decision-making tools for next year’s session. The proposed legislation would require impact assessments for such tools, require statements regarding intended uses, mandate disclosure of developer policies, and prohibit discrimination.

  • Connecticut: The AI Task Force chaired by Sen. James Maroney will meet on Wednesday, Dec. 13, 2023 with a focus on workforce development and AI use in health technology.

  • Industry: On Tuesday, a coalition of Meta, IBM, and over 50 AI companies launched the AI Alliance, which will advocate for an open-source approach to AI development.
