The FTC Cracks Down on Rite Aid’s Deployment of AI-Based Technology
On December 19, 2023, the U.S. Federal Trade Commission (FTC) put a big lump of coal in Rite Aid’s stocking. The agency filed a complaint and proposed settlement regarding the pharmacy chain’s use of artificial intelligence (AI)-based facial-recognition surveillance technology. The complaint alleges that Rite Aid violated Section 5 of the FTC Act, 15 U.S.C. § 45, by using facial-recognition technology to identify shoplifters in an unfair manner that harmed consumers. The FTC further alleges that Rite Aid violated a 2010 FTC consent order (the 2010 Order) by failing to employ reasonable and appropriate measures to prevent unauthorized access to personal information. While some may have missed this news in the run-up to the holidays, it marks a major step by the FTC to discipline businesses deploying AI systems — and provides lessons for companies seeking to avoid similar consequences (our more detailed analysis of this case and its implications may be found in our January 8 Advisory).
For several years, the FTC has warned that it will use its Section 5 power against unfair and deceptive trade practices to penalize deployers of AI and other automated decision-making (ADM) systems that fail to take reasonable steps to protect consumers from harms resulting from inaccuracy, bias, lack of transparency, and breaches of privacy, among others (see the FTC’s blog post; our prior Advisory; and our March 7, 2023, March 29, 2023, and April 26, 2023 blog posts). The FTC’s crackdown proves that those threats were not empty.
Background of the Case
According to the FTC’s complaint, Rite Aid deployed AI-based facial-recognition technology to identify potential shoplifters at certain of its stores. The FTC claims that, in operation, the system yielded many erroneous matches and that Rite Aid employees, relying inappropriately on those results, increased surveillance of certain customers, forced customers to leave stores, falsely accused customers of shoplifting, and even reported customers to the police. The FTC alleges that the misidentifications disproportionately involved people of color and women. The complaint’s description of Rite Aid’s practices (see Paragraph 32) reads like a how-to manual for maximizing the risks of AI deployment.
Although the FTC’s allegations regarding Rite Aid’s deployment of AI do not focus on privacy and security violations, the complaint also claims that Rite Aid breached the 2010 Order, which required Rite Aid to implement and maintain a comprehensive information security program and retain documents relating to its compliance with that requirement.
Under the proposed settlement of the FTC’s current charges, Rite Aid will be subject to an array of obligations. Here are some key features:
- Rite Aid may not use facial-recognition technology for the next five years, other than for certain employment and healthcare uses, and then only if it obtains “Affirmative Express Consent” from targeted persons.
- Rite Aid will be required to destroy all photos and videos used or collected in connection with its facial-recognition program and, notably, all “data, models, or algorithms derived in whole or in part therefrom.”
- Before using any AI-based “Automated Biometric Security or Surveillance System” (including, but not limited to, facial recognition once the five-year prohibition is over), Rite Aid must establish, implement, and maintain a risk-management program. As with the prohibition on facial-recognition technology, this requirement would not apply in certain employment and healthcare contexts if Affirmative Express Consent is obtained.
- Rite Aid must implement an information security program satisfying numerous detailed requirements. It also must engage an independent, third-party “Information Security Assessor” (satisfactory to the FTC) to perform biennial reviews of the information security program; its effectiveness; and any gaps or weaknesses in, or instances of material noncompliance with, the information security program. Rite Aid must provide these assessments to the FTC.
- Rite Aid’s CEO must certify compliance with the settlement to the FTC annually.
Other than the five-year ban on using facial-recognition technology, these provisions will last for 20 years.
Risk-management programs like the one to which Rite Aid agreed, however, are prudent not only for companies that come under FTC scrutiny. The FTC’s crackdown is likely to be neither a one-off nor limited to biometric surveillance. Instead, we expect it to mark the beginning of active FTC enforcement against allegedly unfair or deceptive use of ADM systems.
Businesses wishing to stay off the FTC’s “naughty list” should take stock of their consumer-affecting ADM systems and put in place a comprehensive, ongoing system for managing their risks. The U.S. National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the accompanying AI RMF Playbook explain how to govern, map, measure, and manage AI risks. While NIST is a U.S. agency, it designed the Framework and Playbook to “[b]e law- and regulation-agnostic.” Organizations operating solely in the United States, solely in another country, or globally can all use AI RMF 1.0 and the AI RMF Playbook to build AI compliance programs that contain their regulatory and litigation exposures.
For help with understanding or managing your company’s AI risks, please feel free to contact the authors or Arnold & Porter’s multidisciplinary Artificial Intelligence team.
© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.