Enforcement Edge
February 6, 2023

(Less) Risky Business: NIST’s Framework for Managing Risk Throughout the AI Value Chain

The amazing capabilities of artificial intelligence (AI) systems have been splashed across the news of late. Businesses, governments, and consumers increasingly depend on AI systems—some jaw-dropping and many no longer so—for a wide range of activities. And these AI systems are, for the most part, beneficial. They make organizations more efficient, free people from time-consuming tasks, and expand our horizons.

AI systems have dark sides, though. (Not that AI systems can be malicious—at least not yet!) AI systems can:

  • Discriminate against historically disadvantaged groups in lending, employment, educational opportunities, allocation of healthcare services, and many other areas;
  • Cause accidents on factory or warehouse floors or on roadways;
  • Serve up harmful content to kids or other vulnerable populations;
  • Generate racist, sexist, or just plain ol’ inaccurate content, whether through intentional misuse (as in the creation of unauthorized “deepfake” pictures, audio, or video) or through unintentional SNAFUs; and
  • Make myriad other types of mistakes of greater or lesser consequence.

To protect their organizations, and others, from potential harm while still delivering on AI systems’ benefits, designers, developers, sellers, deployers, and users (collectively, “AI actors”) need to manage these risks appropriately.

After almost two years of work, the US National Institute of Standards and Technology (NIST) recently made this undertaking much more accessible to the broad range of AI actors. On January 26, 2023, NIST published the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0). AI RMF 1.0 breaks AI risk management into four core functions (governing, mapping, measuring, and managing) and outlines an approach to each. Accompanying AI RMF 1.0 is NIST’s AI RMF Playbook (still in draft), which suggests specific actions organizations can take to carry out each of those four functions.

NIST is not a regulator, and the use of AI RMF 1.0 is voluntary. But legislatures and regulators, both in the United States and around the globe, have focused more and more on AI in recent years. They have proposed (and in some cases adopted) new mandates and adapted enforcement of existing regulatory regimes to the new risks posed by AI. (Our discussions of these developments can be found here.) Whether or not a regulatory regime applies, those harmed by AI systems may also resort to private litigation. In short, AI actors will increasingly face both regulatory and litigation threats.

While NIST is a US agency, it designed AI RMF 1.0 and the AI RMF Playbook to “[b]e law- and regulation-agnostic.” Organizations operating solely in the United States, solely in another country, or globally can all use AI RMF 1.0 and the AI RMF Playbook to build AI compliance programs that contain their regulatory and litigation exposure. Go check them out!

If you need assistance with understanding NIST’s framework, please feel free to contact the authors or Arnold & Porter’s multidisciplinary Artificial Intelligence team.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This blog post is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.