Enforcement Edge
March 29, 2023

Another Month, Another FTC Warning About AI Deceptions


Hard on the heels of its February 27 blog post cautioning against deceptive marketing of artificial intelligence (AI) systems (see our earlier coverage), the U.S. Federal Trade Commission (FTC) was at it again on March 20, this time warning about using AI systems for deception. The growing capabilities of AI tools to generate highly realistic — but absolutely fake — video, images, recordings, and other media potentially threaten grave harm to individuals, institutions, and even entire societies. (It’s not hard to envision society-wide harm: imagine a realistic but fake video of a head of state announcing a military attack on another country.) The latest FTC guidance seeks to head off such mischief.

The FTC has a wide-ranging enforcement tool at its disposal. Section 5 of the FTC Act broadly prohibits “unfair or deceptive acts or practices in or affecting commerce” and authorizes the agency to stop such practices and, in some cases, to penalize offenders. In the March 20 post, the FTC explains that this prohibition can apply to making, selling, or using any tool “that is effectively designed to deceive — even if that’s not its intended or sole purpose.”

So, how do you keep the FTC (among other federal, state, local, and foreign authorities) from taking notice of your AI-enabled products and services (in a bad way, that is)?

(1) Consider the risks, both from intended uses and reasonably foreseeable misuses. (Risk involves both the likelihood of a harm and the magnitude of the harm if it occurs.) If the residual risks after mitigation exceed either (a) the benefits from the product or service or (b) your risk tolerance, the FTC urges you to “ask yourself whether . . . you shouldn’t offer the product [or service] at all.” (A simplified sketch of this risk calculus appears after these numbered points.)

(2) Take all reasonable precautions to mitigate the risks from intended uses and foreseeable misuses before you offer the product or service for sale. “The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.” Implement protection by design: build AI systems and AI-enabled products and services from the ground up to minimize their potential for harm. Merely warning customers and end users against misuse, or relying on easily defeated safeguards, may well be inadequate mitigation of significant risks (in the FTC’s eyes, at least).

(3) If your AI system can create realistic fakes, consider how to make their synthetic origin obvious to the public. Indelible watermarks, conspicuous labels, and the like can keep fraudsters from passing fakes off as genuine. As the FTC sees it, the “burden shouldn’t be on consumers, anyway, to figure out if a generative AI tool is being used to scam them.” If your tool makes fakes that crooks use to defraud people, and you haven’t made the fakery obvious, the FTC or other regulators may seek to hold you liable. (A minimal labeling sketch also appears below.)
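To make point (1) concrete, here is a deliberately simplified sketch of the risk calculus in Python. The scenarios, the 1-to-5 scoring scales, and the tolerance threshold are our own illustrative assumptions, not a methodology prescribed by the FTC or NIST; real AI risk assessments are qualitative and considerably more involved.

```python
# Hypothetical risk-scoring sketch. The scenarios, 1-5 scales, and
# tolerance threshold are illustrative assumptions only.

RISK_TOLERANCE = 6  # hypothetical ceiling on a 1-25 residual-risk scale

# scenario -> (likelihood 1-5, magnitude of harm 1-5), scored AFTER mitigations
scenarios = {
    "intended use: clearly labeled synthetic voiceovers": (1, 2),
    "foreseeable misuse: voice-cloning impersonation scams": (4, 5),
}

for name, (likelihood, magnitude) in scenarios.items():
    residual_risk = likelihood * magnitude  # risk = likelihood x magnitude
    if residual_risk > RISK_TOLERANCE:
        print(f"{name}: residual risk {residual_risk} exceeds tolerance; "
              "mitigate further or reconsider offering the product")
    else:
        print(f"{name}: residual risk {residual_risk} within tolerance")
```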
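As a minimal illustration of point (3), the sketch below stamps a conspicuous “AI-GENERATED” label onto an image using the Pillow library. This is only a visible label under assumed file paths and label text: production provenance schemes, such as robust invisible watermarks or signed metadata under standards like C2PA, are far more involved.

```python
# Minimal visible-labeling sketch using Pillow (pip install Pillow).
# File paths and label text are hypothetical; this is not a robust,
# tamper-resistant watermark.
from PIL import Image, ImageDraw

def label_ai_image(path_in: str, path_out: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y = 10, img.height - 30              # lower-left corner
    bbox = draw.textbbox((x, y), text)      # bounding box of the label text
    draw.rectangle(bbox, fill="black")      # contrasting backdrop
    draw.text((x, y), text, fill="white")
    img.save(path_out)

label_ai_image("generated.png", "generated_labeled.png")
```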

Conclusion

Effective AI risk management can be hard. The AI Risk Management Framework published by the U.S. National Institute of Standards and Technology (see our prior coverage) calls for attention to four core functions: governing, mapping, measuring, and managing AI risks and benefits. With limited resources and pressure to get to market, it can be tempting to cut corners or simply to “move fast and break things.” Companies that don’t heed the FTC’s warnings, however, may find themselves on the wrong end of an expensive enforcement action.

For more information about managing AI’s regulatory and other risks, please feel free to contact the authors or Arnold & Porter’s multidisciplinary Artificial Intelligence team.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.