“AI … Is Here To Stay”: Schumer Unveils Legislative Framework to Advance American AI
Senate Majority Leader Chuck Schumer (D-NY) unveiled a new legislative initiative aimed at advancing and regulating American artificial intelligence (AI) technology.
What Happened? In an April 13 press release, Leader Schumer announced a four-part legislative framework aimed at promoting innovation while also providing transparency and protections for American consumers. The first three topics of the framework, dubbed “guardrails” by Leader Schumer, focus on the Who, What, and How of AI and aim to allow the government to regulate AI technology properly. The final guardrail, Protect, aims to “align AI systems with American values” and ensure AI will “create a better world.” The press release also highlights Leader Schumer’s engagement with AI policy experts, researchers, and industry professionals and stresses the importance of ensuring the United States stays ahead of China to lead the world on innovative AI technology.
More specifically, Leader Schumer’s framework addresses: (1) Who trained the algorithm and who its intended audience is; (2) What the source of the data is; (3) How the AI system generates its responses; and (4) How the AI system must Protect customer information and society at large with transparent and robust ethical boundaries. The framework “will require companies to allow independent experts to review and test AI technologies ahead of public release or update” and to “give users access to those results.” It is not clear whether this requirement will be limited to riskier AI applications, although such a limitation would be consistent with U.S. policy and emerging global norms.
What’s Next? Although the press release couches the framework as a “proposal,” reporting indicates that “many of the particulars have yet to be determined” and notes that Leader Schumer’s spokesperson said, “This is the beginning of a broader effort.” Leader Schumer did not provide a timeline for the continued development of his AI legislative framework, but he did emphasize that time is of the essence to promote continued U.S. leadership in developing this technology and, in particular, to “stay ahead of China.” Given the embryonic state of the framework, Leader Schumer’s spokesperson observed that the effort might carry over into future Congresses (i.e., to 2025 or beyond).
While there has been recent bipartisan interest in AI policy among members of Congress, no comprehensive legislation regulating AI is poised for adoption. In light of the work that will be required in multiple committees in each house to educate members and build consensus, this process is likely to take several years. And while Leader Schumer has identified this effort as a priority, only matters with strong public and bipartisan support have a chance of advancing through a closely divided Congress. His initiative might be viewed as an effort to move the discussion to the middle of the public square.
In the meantime, concerns over AI may contribute new momentum to efforts to pass a federal data privacy bill like the American Data Privacy and Protection Act (H.R. 8152 - 117), which contains provisions regulating algorithms and already has significant support among members of Congress from both parties. For now, AI-specific legislation is more likely to be focused on narrower objectives such as increasing AI transparency and preventing bias, like the Algorithmic Accountability Act (H.R. 6580 - 117/S. 3572 - 117), which has been introduced in the last two Congresses.
Outside of Congress, broad regulation of AI could come out of the Federal Trade Commission’s commercial surveillance rulemaking proceeding (see our Advisory for further analysis), and the FTC could open an investigation into OpenAI and its GPT-4 technology as urged in the recent complaint by The Center for Artificial Intelligence and Digital Policy. The National Telecommunications and Information Administration is seeking comments to shape federal policy on AI accountability, and an array of federal agencies are beginning to enforce longstanding powers against alleged misuses of AI in the economic spheres they oversee. Companies that are focused solely on potential future regulation of AI are behind the curve.
As we have discussed previously, the various state privacy laws apply additional privacy protections or requirements to certain uses of automated decision-making. We expect other states to adopt similar provisions, and debates over AI-specific regulatory legislation have already begun in various statehouses, including California’s.
Leader Schumer’s legislative framework, while bare scaffolding at the moment, demonstrates how the federal government is increasingly leaning into AI policymaking. With bipartisan concerns around AI’s risks growing rapidly, expect congressional AI legislative and oversight efforts to surge. Industry expertise will be crucial as Congress weighs how to regulate and promote AI technology. Companies that do not participate in the process may find that the legislation that ultimately emerges unnecessarily interferes with their products, services, or operations.
Also, while it will take months (likely years) before Congress passes broad AI regulation, businesses developing, distributing, deploying, or using AI systems face real regulatory—not to mention litigation—risks under laws on the books right now. They should give careful consideration to creating programs to manage AI risks comprehensively and proactively.
For more information about AI policy efforts or managing AI’s regulatory and other risks, please feel free to contact the authors or Arnold & Porter’s multidisciplinary Artificial Intelligence team.
© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.