November 9, 2023

Executive Order on AI Privacy: Balancing Innovation With Personal Data Protection

Advisory

In President Biden’s artificial intelligence (AI) executive order (EO), the administration attempts to address the concern that AI may undermine personal privacy, acknowledging that “[a]rtificial [i]ntelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires.” In announcing the EO, the administration called upon Congress to address AI privacy risks through broad privacy legislation, which members of Congress have attempted to enact numerous times in the past decade without success. Implicit in the administration’s call for such legislation is the understanding that an EO can accomplish only so much. What an EO can do, and what this one does, is direct executive agencies to take measures within the scope of their existing authority to help mitigate potential privacy harms in the development and implementation of AI.

With respect to tools already available to the Executive Branch, the EO states that “[t]he Federal Government will enforce existing consumer protection laws,” including with respect to consumer privacy, in the growth of AI, and that federal agencies should “consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy,” including by “clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use.” With this directive, it is reasonable to expect that agencies with existing privacy expertise and authority — such as the Federal Trade Commission and the Department of Health and Human Services (HHS) — will closely scrutinize how businesses protect consumer privacy in connection with AI. It also appears likely that these and other agencies will use their rulemaking authority under existing law to further prescribe privacy protections in the development and use of AI tools.

The EO places substantial emphasis on the development of “privacy-enhancing technologies” (PETs). The EO defines the term to mean “any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality” and offers examples such as “secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic-data-generation tools.” The EO encourages the development of PETs by directing the Director of the National Science Foundation (NSF) to (1) fund the creation of a Research Coordination Network to advance “the development, deployment, and scaling of PETs”; (2) engage “with agencies to identify ongoing work and potential opportunities to incorporate PETs into their operations”; and (3) use the results of the United States-United Kingdom PETs Prize Challenge to inform approaches and opportunities for researching and adopting PETs. The Director of the NSF is also required to take a number of other actions “[t]o develop and strengthen public-private partnerships for advancing innovation, commercialization, and risk-mitigation methods for AI,” including with respect to privacy protections.

Additionally, to encourage the use of PETs, the Director of the National Institute of Standards and Technology (NIST) is directed to “create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI” that “at a minimum, describe the significant factors that bear on differential-privacy safeguards and common risks to realizing differential privacy in practice.” NIST is also directed to coordinate with the Secretary of Energy and the Director of the NSF to develop and help ensure the availability of AI testing environments “to support the design, development, and deployment of associated PETs.” To the extent the federal government can adopt standards, identify use cases, and establish guidelines for the development and deployment of PETs, the private sector may ultimately leverage federally researched PETs to support its own AI tools, much as it has relied on NIST frameworks and standards in designing and implementing cybersecurity protection measures.
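To make the last point concrete, the sketch below illustrates differential privacy, one of the PETs the EO names, in its simplest form: perturbing a query’s answer with Laplace noise calibrated to the query’s sensitivity and a privacy parameter epsilon. This is a minimal illustrative example, not drawn from the EO or any NIST guideline; the data set, function name, and parameter values are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy.

    Noise is drawn from a Laplace distribution with scale sensitivity / epsilon,
    where sensitivity is the most the query's answer can change when a single
    individual's record is added to or removed from the data set.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: privately release a count over a small data set.
# A counting query has sensitivity 1, since adding or removing one person
# changes the count by at most 1.
ages = [34, 29, 41, 52, 38, 45]
true_count = sum(1 for age in ages if age > 40)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.2f}")
```

Smaller values of epsilon yield stronger privacy guarantees but noisier outputs; characterizing that tradeoff in practice is the kind of “significant factor” bearing on differential-privacy safeguards that the NIST guidelines described above are meant to address.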

More generally, the EO requires agencies to scrutinize how they collect and use personal information in connection with AI. Of particular concern is the use of commercially available information (CAI), essentially data sets containing personal information that can be purchased from data brokers and other commercial actors. The EO directs the Director of the Office of Management and Budget (OMB) to (1) evaluate and take steps to identify CAI procured by agencies “in appropriate agency inventory and reporting processes”; (2) evaluate agency standards and procedures associated with the collection, processing, maintenance, use, sharing, dissemination, and disposition of CAI; and (3) issue a request for information “to inform potential revisions to guidance to agencies on implementing the privacy provisions of the E-Government Act of 2002,” specifically with respect to how privacy impact assessments (PIAs) may address privacy risks in AI. Privacy laws increasingly require PIAs where businesses intend to use personal information to engage in profiling, automated decisionmaking, biometric identification, and other AI or AI-adjacent activities. Although OMB’s actions under the EO are, by their terms, directed only at federal agencies, the approaches to PIAs developed for those agencies may be adopted by the private sector as well.

The EO provides more granular direction to specific agencies for mitigating privacy risks in AI. Most notably, the Secretary of HHS is directed to establish an HHS AI Task Force to develop a strategic plan — including, potentially, regulatory action — “on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector.” Among other areas, the HHS AI Task Force will specifically consider “incorporation of safety, privacy, and security standards into the software-development lifecycle for protection of personally identifiable information, including measures to address AI-enhanced cybersecurity threats in the health and human services sector.” The Secretary of HHS is further directed to “develop a strategy, in consultation with relevant agencies, to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality, including” with respect to privacy. Additionally, the EO directs the Secretary of HHS to “consider appropriate actions to advance the prompt understanding of, and compliance with, Federal nondiscrimination laws by health and human services providers that receive Federal financial assistance, as well as how those laws relate to AI,” including by providing technical assistance to providers and payers about their obligations under privacy laws and by issuing guidance or taking other action in response to complaints or reports of noncompliance with federal privacy laws in the AI context. The administration’s focus on the privacy risks of AI in the healthcare context is consistent with its focus on healthcare privacy generally, as demonstrated by Executive Order 14076, which directs HHS to consider ways to strengthen the protection of sensitive information related to reproductive healthcare.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.