How Facial Recognition Regulation Previews AI Policy Battles (Illinois BIPA Law Sets Template)
Key Takeaways
Illinois biometric privacy law (BIPA) requires companies to obtain consent before collecting biometric data like facial geometry and gives individuals the right to sue for violations. This has resulted in hundreds of millions of dollars in settlements from major tech companies.
Facial recognition regulation in states has evolved as policymakers balance privacy concerns with technological advancement. While some states initially moved to ban the technology, Illinois remains the leader with its enforceable privacy protections.
The BIPA private right of action allows individuals to file lawsuits when their biometric information is misused, with recent court rulings clarifying that violations occur each time a scan happens and claims can reach back five years.
AI biometric data consent requirements under laws like BIPA parallel today's debates about LLM training data, as both involve scraping public information and raise concerns about bias and privacy protections.
If you're a subscriber, click here for the full edition of this update. Or, click here to learn more about our MultiState.ai+ subscription.
The highly anticipated launch of Google's Gemini AI model this week illustrates how quickly the chatbot application of large language models (LLMs) has captivated both the public and policymakers. But AI itself is not all that new, and policymakers have addressed AI use cases long before ChatGPT burst onto the scene a year ago.
Facial Recognition Technology as an AI Regulatory Precedent
The Rise of Facial Recognition and Bias Concerns
One example is facial recognition technology. After decades of failed research, facial recognition took off when programmers used "facial geometry" to turn faces into math problems that AI algorithms could solve. The new technique improved the accuracy of facial recognition software to the point that tech companies use it to unlock user devices and to recognize and tag friends in photos, and law enforcement has used the technology to identify suspects. Even after these advances, however, the technology has raised bias concerns among researchers, with studies showing it is less accurate when used on women and people of color.
The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, provides the most respected industry benchmark for testing the accuracy of facial recognition technologies. That track record is one reason President Biden's recent AI executive order tasked NIST with setting the standards to test AI systems for safety.
Clearview AI and the Data Scraping Controversy
Clearview AI, one of the leading facial recognition companies, controversially built its database of faces by scraping public photos and profile pictures from social media websites such as Twitter, Facebook, and even Venmo, eventually collecting 30 billion face photos to train its technology. The controversy over Clearview AI's public data scraping parallels the current debates about scraping LLM training data off the internet. Today, over 3,000 law enforcement agencies in the U.S. use facial recognition technology to match images of suspects to results in Clearview AI's database.
Policymakers quickly became skeptical of facial recognition, especially when used on the public without consent. Several states initially enacted broad bans on law enforcement's use of facial recognition but later rolled them back. The leader in regulating how technologies like facial recognition are used, however, is Illinois.
Illinois Biometric Information Privacy Act Sets the Standard
In 2008, lawmakers enacted the Illinois Biometric Information Privacy Act (BIPA). The idea behind BIPA was spurred by a staffer at the Illinois ACLU after he observed that a local supermarket offered customers the option to pay using a fingerprint. The definition of biometric identifiers in the law includes facial geometry in addition to fingerprints, voiceprints, and retina scans.
BIPA Requirements and Private Right of Action
Under Illinois' BIPA, companies collecting biometric data from individuals (1) must publish a general notice about the company's biometric data retention policy; (2) must provide specific notice and obtain consent from anyone whose biometric information is collected; and (3) are prohibited from selling or trading the personal biometric information for profit.
Critically, BIPA provides a private right of action for anyone whose rights under the law are violated, and this has led to a flood of class action lawsuits. This year, the Illinois Supreme Court clarified that a company commits a separate violation every time a biometric scan takes place, even if the same scan is repeated over time, and that claims can reach back as far as five years before the BIPA claim is filed, creating enormous financial exposure for any company looking to utilize biometric information.
Major Settlements and Financial Impact on Tech Companies
The settlements stemming from BIPA lawsuits include multi-hundred-million-dollar payouts from many of the major tech companies. Clearview AI settled a BIPA lawsuit filed in 2020, agreeing not to sell its services to any private-sector businesses in the U.S. and to delete any photos geolocated in Illinois from its database.
Notably, Washington and Texas have also enacted biometric privacy laws, but neither of these statutes includes a private right of action for citizens to bring their own lawsuits. While no state has matched the reach of Illinois' BIPA law, policymakers' interest in privacy protections has only grown. And with the proliferation of AI technology, the AI regulatory debate will be closely entwined with the privacy protection laws stacking up in the states.
Lessons from Facial Recognition for Future AI Regulation
In addition to the privacy aspect, other parallels between the rise of facial recognition and today's LLMs include the controversial scraping of public data to train the models, the role of NIST as a benchmarker, and concerns over potential bias built into the models. Policymakers have struggled to balance privacy protections with technological advancement in facial recognition, and they are likely to face similar challenges regulating AI. As with Illinois' BIPA, policymakers have focused on notice and consent requirements in AI regulations, with a private right of action serving as a particularly powerful enforcement lever.