Enforcement Edge
March 7, 2023

FTC Warns: All You Need To Know About AI You Learned in Kindergarten


Okay, not exactly. But, in a February 27, 2023, blog post, the U.S. Federal Trade Commission (FTC) reminded businesses that several timeless principles apply to the marketing and use of even cutting-edge artificial intelligence (AI) systems. Seemingly every day brings new headlines about the amazing (or troubling) things that AI systems can do. AI is hot right now. And the FTC cautioned that “one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.”

Using an AI system entails some risks. The system’s prediction or other output sometimes will be incorrect, and that inaccuracy can arise for a variety of reasons. For example, AI systems today tend to be probabilistic: each output carries some probability of being right (and, conversely, some probability of being wrong). In other words, we should expect AI systems to make mistakes.
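To make that point concrete, here is a minimal, purely illustrative sketch in Python. The 95% accuracy figure is a hypothetical assumption, not drawn from the FTC’s post; the sketch simply shows that even a well-performing probabilistic system errs at a predictable rate:

```python
import random

random.seed(0)

ACCURACY = 0.95  # hypothetical: the system is right 95% of the time


def toy_ai_prediction(truth: bool) -> bool:
    """Simulate a probabilistic AI system: return the correct answer
    with probability ACCURACY, and the wrong answer otherwise."""
    return truth if random.random() < ACCURACY else not truth


# Even a "95% accurate" system makes mistakes, at a rate we can predict.
trials = 10_000
errors = sum(toy_ai_prediction(True) is not True for _ in range(trials))
print(f"Observed error rate: {errors / trials:.1%}")  # roughly 5%
```

Run over many trials, the observed error rate converges on the assumed 5%: the mistakes are not anomalies but a built-in property of the system.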

AI’s risks don’t stop there. AI systems—as human constructions—are likely to reflect the biases of both their developers and society at large, including biases against historically disadvantaged groups. These biases not only increase the risk that AI systems will get things wrong; they may also result in violations of antidiscrimination laws.

Furthermore, AI systems can lack transparency (“what happened”), explainability (“how did it happen”), and interpretability (“why did it happen” or “what does it mean”). (For more on transparency, explainability, and interpretability, see Sections 3.4–3.5 of the U.S. National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework.)

And that’s on top of the privacy concerns raised by all the data that drive AI systems.

The FTC is acutely aware of these risks. Previously, it advised businesses “to avoid using automated tools that have biased or discriminatory impacts.” In the agency’s latest salvo, it urged companies—especially their marketing teams—not to make “false or unsubstantiated claims” about the efficacy of AI products, noting “that some products with AI claims might not even work as advertised in the first place.”

When marketing AI-enabled products and services, put yourself in the FTC’s shoes. To help you, the FTC identified several questions about which it “may be wondering” when it sees advertising about AI:

  1. Are you exaggerating about what your AI product can do? Don’t make deceptive performance claims, such as those that lack scientific support or those that are true only for certain users or under certain conditions.
  2. Are you promising that your AI product does something better than a non-AI product? Make sure the facts back you up. These sorts of comparative claims have to be supported by adequate proof.
  3. Are you aware of the risks? Businesses should understand the “reasonably foreseeable risks and impact” of an AI-enabled product or service before marketing it. In the FTC staff’s words, “If something goes wrong—maybe it fails or yields biased results—you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test.” Or, as we have said elsewhere, “‘AI did it’ is, by and large, not an affirmative defense.”
  4. Does the product actually use AI at all? Given all the hype around AI these days, it may be tempting to claim a product or service is AI-enabled. Resist the temptation. FTC technologists, among others, can “look under the hood” to vet whether a product or service really does use AI. Before using an “AI-enabled,” “AI-powered,” or similar claim, bear in mind the agency’s position that “merely using an AI tool in the development process” is inadequate substantiation.

Conclusions

In a nutshell, don’t be so taken with the magic of AI that you forget the basics. Deceptive advertising exposes a company to liability under federal and state consumer protection laws, many of which allow for private rights of action in addition to government enforcement. Misled customers—especially B2B ones—might also seek damages under various contractual and tort theories. And public companies have to worry about SEC or shareholder assertions that the unsupported claims were material.

Come to think of it, didn’t we learn “tell the truth” in kindergarten?

For more information about managing AI’s regulatory and other risks, please feel free to contact the authors or Arnold & Porter’s multidisciplinary Artificial Intelligence team.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.