July 19, 2023

Three’s Company: European Parliament Adopts Its Version of AI Act, Commencing Negotiations with Council and Commission

Advisory

An overwhelming majority of the European Parliament (Parliament) recently voted to pass the Artificial Intelligence Act (AI Act), marking another major step toward the legislation becoming law. As we previously reported, the AI Act regulates artificial intelligence (AI) systems according to risk level and imposes highly prescriptive requirements on systems considered to be high-risk. The AI Act has a broad extraterritorial scope, sweeping into its purview providers and deployers of AI systems regardless of whether they are established in the EU. Businesses serving the EU market and selling AI-derived products or deploying AI systems in their operations should continue preparing for compliance.

Where are we in the legislative process? The European Commission (Commission) began the process by proposing legislation (EC Proposal) in April 2021.1 The Council of the European Union (Council) then adopted its own common position (Common Position) on the AI Act in December 2022.2 On June 14, 2023, the Parliament created a third version of the legislation by adopting a series of 771 discrete amendments to the EC Proposal. Now, the Parliament, Council, and Commission have embarked on the trilogue, a negotiation among the three bodies to arrive at a final version for ratification by the Parliament and Council. They aim for ratification before the end of 2023 with the AI Act to come into force two (or possibly three) years later.

Below, we summarize the major changes introduced by the Parliament and guide businesses on preparing for compliance with the substantial new mandates the legislation will impose.

Key Takeaways

  • The Parliament’s action kicks off the trilogue to resolve differences among the three versions of the legislation, with the target of producing a final version by year end.
  • The Parliament adopted the OECD definition of “AI system.”
  • The Parliament added provisions regulating general purpose AI (in contrast, the Council left regulation to future development by the Commission), foundation models, and generative AI. These provisions likely will be the basis for trilogue negotiations on these topics because they are more advanced than the earlier proposals from the Council or the Commission.
  • The Parliament proposed different lists of prohibited practices and high-risk use cases, as well as modified requirements for high-risk uses.
  • The Parliament increased the maximum fine for violations of the prohibitions of certain AI uses to €40 million or, if the offender is a company, up to 7% of its global annual revenue for the preceding financial year, whichever is higher. However, the Parliament decreased the maximum fine for violations of provisions other than those related to prohibited practices, data governance, and transparency to €10 million or 2% of global annual revenue, whichever is higher.
  • For a practical approach to preparing for compliance, see A Practical Approach to Compliance below.

 

The Parliament’s Major Changes

The Parliament introduced several important changes to the AI Act.

A. Narrower Scope of Definition of AI System

Defining “AI system” has been one of the most controversial aspects of the legislative process because the definition will determine the legislation’s reach. The Commission’s initial proposed definition was criticized as overly broad because it could have reached statistical processes and other techniques in wide use that fall outside the common conception of “AI.” The Council attempted to address these criticisms by narrowing the scope, but its efforts also were criticized — in part for lacking “interoperability” because they diverged from the Organization for Economic Cooperation and Development (OECD) definition to which EU members of the OECD had agreed several years ago. The Parliament resolved that problem by adopting the OECD definition:

“Artificial intelligence system” (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments.3

The Parliament also replaced the confusing term “user” with the more precise term “deployer.”4

B. Expansion of Prohibited Practices

The Parliament expanded the list of prohibited practices proposed by the Council and the Commission, adding the following:

  • “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces”*5
  • “Post” remote biometric identification systems, except where pre-judicial authorization is obtained where “strictly necessary” for a targeted search related to a serious crime6
  • AI systems used by law enforcement to assess the likelihood of natural persons offending or reoffending, or the occurrence or reoccurrence of an actual or potential criminal offense(s), based on profiling*7
  • Indiscriminate and untargeted scraping of biometric data from the internet or closed-circuit television footage to create or expand facial recognition databases8
  • AI systems that recognize emotions or physical or physiological features when deployed for law enforcement or border control or in workplaces or educational institutions*9
  • AI systems that categorize natural persons by known or inferred sensitive or protected characteristics. The characteristics enumerated include “gender, gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability or any other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union” or under General Data Protection Regulation (GDPR) article 9.10

    (Asterisked practices were classified by the Council and the Commission as high-risk, but not prohibited, use cases.)

These additional prohibitions set up perhaps the toughest political dispute for resolution in the trilogue. Many of the national governments represented on the Council want greater freedom to deploy remote biometric-identification systems for law enforcement purposes than does the Parliamentary majority, which is more protective of civil liberties. Press reports suggest the Parliament may yield on this point in exchange for other concessions, rather than see the legislation fail to emerge from the trilogue. However, if the trilogue breaks down, it most likely will be over this issue.

For a discussion of how the Council changed the Commission’s proposed list of prohibited practices, please see our prior Advisory.

C. Clarification of Prohibited Practices

The Parliament further clarified some of the practices it wishes to proscribe. First, Parliament specified that the prohibition with respect to AI systems with the objective to or the effect of materially distorting human behavior includes “neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces insofar as they are materially distorting the behavior of a natural person in a manner that causes or is likely to cause that person or another person significant harm.”11

Second, the Parliament enlarged the prohibition against distorting behaviors or exploiting the vulnerabilities of certain groups of people to include prohibiting exploitation based on “known or predicted personality traits,”12 in addition to age, physical or mental incapacity, and social or economic situation. The Parliament also explained that “it is not necessary for the provider or the deployer to have the intention to cause the significant harm, as long as such harm results from the manipulative or exploitative AI-enabled practices.”13

(In several charts below, we summarize selected aspects of the AI Act, combining the Commission proposal with the changes made by the Council and the Parliament, and noting in parentheses where the versions diverge.)

Comparison of the EC Proposal, Council’s Common Position, and the Parliament Position 
 Prohibited Uses
  • Exploitation of known or predicted personality traits, age, physical or mental disability, or social and/or economic position.
  • Real-time remote biometric identification systems in public places for law enforcement (with some limited exceptions).
  • Remote biometric identification systems in public places without judicial process and “strictly necessary” in connection with serious criminal offense.
  • Biometric categorization of people by sensitive or protected attributes (therapeutic exception).
  • Predictive criminal risk assessments of individuals.    
  • Harmful or material distortion of human behavior through subliminal, or purposefully manipulative or deceptive techniques, including with neuro-technologies assisted by AI systems (therapeutic exception for subliminal).
  • Social scoring of natural persons or groups of individuals leading to discriminatory treatment across contexts or disproportionate to behavior (the Commission version was limited to scoring by governments).
  • Creation or expansion of facial recognition databases from scraping of internet or CCTV footage.
  • Emotion inference in certain settings.

 

D. High-Risk AI Use Cases 

In addition to relocating multiple use cases to the prohibited uses list, the Parliament made several further modifications to the list of high-risk use cases. The Parliament added AI systems intended to influence voter behavior or the outcome of an election (except AI systems where natural persons are not directly exposed to outputs — principally internal campaign-management tools).14 It also included the AI systems used by “very large online platforms” (as designated under Digital Services Act article 33) to recommend user-generated content.15

Some of the high-risk categories are quite broad. Recognizing that not all use cases in those categories actually present significant risks, the Parliament joined the Council in exempting from treatment as high-risk those applications that are not likely to lead to a significant risk to health, safety, or fundamental rights.16 The Parliament also proposed exempting critical infrastructure uses that do not pose a significant risk to the environment.17 The Parliament added a process for providers to apply to take advantage of these exemptions.18

For a discussion of how the Council changed the Commission’s proposed list of high-risk use cases, please see our prior Advisory.

 Comparison of the EC Proposal, Council’s Common Position, and the Parliament Position
 High-Risk Uses
 
  • Safety products or safety components for various types of critical infrastructure (e.g., digital infrastructure) or products.
  • Use by law enforcement, or by EU agencies, offices, or bodies in support of law enforcement agencies, for individual risk assessments, credibility determinations, emotion detection, identification of “deep fakes,” evidentiary reliability evaluations, predictive policing, profiling of individuals, and crime analytics.
  • Remote biometric identification and categorization of people (exception for identity verification).
  • Public assistance determinations.
  • Immigration and border control determinations, including applications for asylum, visas, and residence permits, verification of the authenticity of travel documents, and certain other border management uses.
  • Admission, assignment, and assessment of students, or monitoring prohibited behaviors during test-taking.
  • Influencing elections or voting (limited exception).
  • Content recommendation by certain social media platforms. 
  • Recruitment and other employment decisions.
  • Decision-making by judicial or administrative authorities or in alternative dispute resolution.
  • Access to private and public services, such as healthcare services, essential services (e.g., housing, electricity, heating/cooling, and internet), emergency services dispatch, and evaluation of creditworthiness and credit scoring (limited exception for fraud detection).
  • Other uses the Commission later designates as high-risk.
 

 

E. Requirements for High-Risk AI Systems

Once an AI system is classified as high-risk, the AI Act subjects it to numerous detailed requirements. The Parliament further clarified and expanded on existing obligations for providers and deployers of high-risk systems and other parties, including with respect to risk management systems;19 data sets used for training, validation, and testing;20 and recordkeeping and technical documentation requirements.21 Recognizing that deployers are in the best position to identify risks related to their high-risk systems, the Parliament proposed to require them to conduct fundamental rights impact assessments22 prior to use of any such system,23 in addition to any data protection impact assessments that may be required under the GDPR.24

The Parliament’s clarifications of the EC Proposal, like the Council’s, would make it easier for providers and deployers to comply with the requirements for high-risk AI systems, although the two co-legislators took slightly different approaches. How burdensome compliance proves to be will depend on the precise details of the legislation that emerges from the trilogue.

For a discussion of how the Council changed the Commission’s requirements for high-risk use cases, please see our prior Advisory.

Comparison of the EC Proposal, Council’s Common Position, and the Parliament Position 
 Requirements for High-Risk AI Systems
Compliance (Providers) 
  • Quality management system (compliance program), including regularly updated prescribed risk management system reflecting state of the art. No duplicative quality management systems are required for providers that already have such systems in place (e.g., under ISO 9001); only adaptation to certain aspects of the AI Act is required.
  • Pre-market conformity assessment (certifications valid for up to 5 years absent substantial modifications) and post-market monitoring with immediate correction and reporting requirements upon reason to consider the system noncompliant or risky to health, safety, or fundamental rights. Third-party suppliers of AI components may voluntarily apply for a third-party conformity assessment.
  •  Registration in EU database. Substantial modifications to high-risk AI systems must also be registered.
 Compliance (Others)
  • Third party that sells or installs a system under its own brand, alters its intended purpose, substantially modifies it, or incorporates it into a product is treated as a provider.
  • Deployers must perform fundamental rights impact assessment (limited exception), including:
    • Identification of reasonably foreseeable impacts
    • Mitigation plan
    • Involvement of representatives of likely affected groups to the extent possible
  • Importers and distributors, among other obligations, will be treated as providers if they make a substantial modification to an AI system (or general purpose AI system) such that the system becomes high-risk. They must verify upstream compliance, not degrade compliance, and report if a system is risky to health, safety, or fundamental rights; distributors have a correction duty upon reason to consider a system noncompliant.
  • Deployers also must perform data protection impact assessments when required by GDPR.
 
  • Deployers (previously “users”) must operate consistently with provider instructions, monitor operations, input only relevant data, and assess data protection impact.
  • Data-governance obligations may be shifted contractually to deployer if deployer, not provider, has access to data.
  • Limits on contractual terms unilaterally imposed by provider on SME or startup deployers and downstream providers.
 

Human Oversight

  • Humans with sufficient AI literacy and the necessary competence, training, and authority must oversee operation and be able to:
    • Stop operation through a procedure that allows the system to halt in a safe state, except where human interference would increase the risks.
    • Disregard, override, or reverse the output.
  • Enhanced human oversight over certain biometric identification systems and requirement of verification by at least two humans with the necessary competence, training, and authority before action can be taken based on the identification.

 Documentation, Disclosure, and Explainability

  • To help users understand and control operation and to facilitate governmental oversight, providers must supply:
    • Instructions that are intelligible, concise, correct, clear, complete to the extent possible, reasonably relevant, comprehensible, and accessible to users, describing:
      • Characteristics, capabilities, and limitations of performance, including, where appropriate, the foreseeable unintended outcomes and other risks
      • Human oversight and related technical safety measures
      • Necessary maintenance and care through the AI system’s expected lifetime
      • The computational and hardware resources needed
      • Description of mechanism within AI system that lets users properly collect, store, and interpret the logs in accordance with Article 12(1)          
    • Detailed, continuously updated technical documentation covering (in part):

      • Descriptions of system and development process, including compliance trade-offs
      • Monitoring, functioning, and control, including system’s risks and mitigation
      • Provider’s risk management system
      • For small and medium enterprises, including start-ups, any equivalent documentation meeting the same objectives, unless deemed inappropriate by the competent national authority.
 

Robustness, Accuracy, and Cybersecurity

  • High-risk AI systems must:
    • Perform consistently at appropriate levels throughout their lifecycles, notwithstanding attempts at manipulation of training or operation or unauthorized alteration; security by design and default. Performance metrics and their expected level should be defined with the main objective of mitigating risks and the negative impact of the AI system. The AI Office should develop non-binding guidance to address technical aspects and measurement of appropriate levels of performance and robustness.
    • Meet training, validation, and testing data quality requirements.
 

Retention of Records and Data

  • Automatic logging of operations, ensuring traceability, and retention of such records by providers and deployers of high-risk AI systems for at least six months. Retention periods must align with industry standards and be appropriate to the high-risk AI system’s intended purpose.
  • For ten years:

    • Technical documentation
    • Documentation of quality management system
    • Certain conformity records
 Requirements for Certain AI Systems
 

Transparency — For high-risk and low-risk systems, if applicable:

  • System must inform people they are interacting with an AI system in a timely, clear, and intelligible manner, unless obvious.
  • Deployers (previously “users”) of emotion recognition or (unless for law enforcement) biometric categorization systems must notify natural persons exposed thereto and obtain their consent prior to the processing of their biometric and other personal data in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725, and Directive (EU) 2016/680, as applicable.
  • “Deep fakes” must be identified as such (qualified law enforcement and expressive, artistic, and scientific freedom exceptions), along with, whenever possible, the name of the natural or legal person that generated or manipulated the content.
 

Penalties for Violations

Up to the greater of:

  • €40 million (raised by the Parliament from the originally proposed €30 million)
  • 7% of global annual revenue (raised from 6%)
  • For small and medium enterprises, including start-ups, up to 3% of global annual revenue.

 

F. New Plan to Address Foundation Models and Generative AI

During the more than two years since the Commission first proposed the AI Act, AI technology has advanced dramatically. The speed of these changes is reflected in the evolution of the legislation from version to version. For example, the Council introduced provisions on “general purpose AI,” which the Commission had not contemplated. Likewise, ChatGPT™ burst onto the scene around the time the Council completed its work on the Common Position. Having had several more months to consider the impact of foundation models and generative AI, the Parliament was able to address these more recent technological developments.

The Parliament’s position included several new provisions related to general purpose AI systems, including a revised definition: “[a]n AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”25 Parliament also proposed restrictions on foundation models and generative AI — a subcategory of foundation models, which are themselves a type of general purpose AI.

The Parliament defined a foundation model as “[a]n AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.”26 The Parliament proposed a number of obligations for providers of foundation models, similar to the regime established under the AI Act for providers of high-risk AI systems.27 The requirements include:

  • Reducing reasonably foreseeable risks
  • Establishing data governance measures to assess the suitability of datasets, protect against bias, and mitigate risks
  • Achieving appropriate levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity
  • Designing models capable of measuring their environmental impact
  • Creating technical documentation
  • Establishing a quality management system
  • Registering the model in the EU database28

Finally, the Parliament defined generative AI as “foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio or video.”29 In addition to complying with the requirements on foundation models, generative AI providers would have to:

  • Observe transparency requirements
  • Safeguard against generating unlawful content
  • Publish summaries of their use of copyright-protected materials in training data30

Because the Parliament took the fullest and most sophisticated approach to general purpose AI, foundation models, and generative AI, its proposals likely will serve as the basis for the trilogue negotiations on these points.

G. Increased Opportunity for Redress

The Parliament proposed to give individuals and groups additional avenues for redress for asserted violations.31 It introduced a new complaint process, which allows individuals or groups to file complaints with the relevant national supervisory authority alleging infringement of the AI Act.32 Complaints may be lodged without prejudice to any other administrative or judicial remedy.33 National supervisory authorities are required to keep complainants informed throughout the review process and notify them of the outcome, including whether a judicial remedy is available.34

H. Administration of the AI Act

Administration of the AI Act has been a source of debate throughout the negotiation process. The Parliament proposed creating the AI Office35 — an independent body intended to support, advise, and cooperate with member states on various matters, including the coordination of cross-border cases.36 The AI Office replaces the Commission and Council’s original proposal to establish a European Artificial Intelligence Board, which was intended to function as a cooperation mechanism responsible for facilitating the implementation of the AI Act.37 How to structure administration inside each member state also is a major difference among the three versions of the legislation. Participants in the trilogue will have to balance various budgetary and resource concerns, competing bureaucratic interests, and disagreements over how much to centralize control and how much to disperse responsibility among and within the member states.

I. Regulatory Sandboxes and Additional Support for Smaller Businesses

Like the Council, the Parliament added support for innovation — especially for smaller businesses. The Parliament would require member states to establish “regulatory sandboxes” (the Council and Commission made this optional) to allow innovative AI systems to be developed, trained, tested, and validated under supervision by regulatory authorities before commercial marketing or deployment. The Parliament also provided more elaborate guidance to the member states about what the sandboxes may or must entail, including the possibility of subnational or cross-border sandboxes. In addition, the Parliament would permit the Commission, as well as the European Data Protection Supervisor, to create sandboxes.38

The Parliament also expanded on the Council’s proposals for relieving burdens on smaller businesses.39 In addition, the Parliament sought to protect small and medium enterprises and startups from certain unfair contractual terms unilaterally imposed by providers of high-risk AI systems on deployers or downstream providers.40

Changes to Potential Penalties

The Parliament proposed even higher potential penalties for violations of the AI Act’s prohibitions of certain practices. Under the Parliament Position, the maximum fine would be €40 million or, if the offender is a company, up to 7% of its global annual revenue for the preceding financial year, whichever is higher.41 These amounts reflect an increase from the originally proposed maximum fine of €30 million, or up to 6% of global annual revenue.42 (Small and medium enterprises should hope the Council prevails with its proposal that penalties for them be capped at 3% of global annual revenue.)43 However, the Parliament also decreased the maximum fine for violations of provisions other than those related to prohibited practices, data governance, and transparency to €10 million or 2% of global annual revenue, whichever is higher.44
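
To make the “whichever is higher” mechanics concrete, the following minimal Python sketch computes the cap for a corporate offender under the Parliament Position (the function name and the example revenue figure are our own illustrations; the amounts come from the provisions described above):

    def max_fine_eur(global_annual_revenue_eur: float) -> float:
        """Illustrative cap for prohibited-practice violations by a company
        under the Parliament Position: the greater of EUR 40 million or 7%
        of global annual revenue for the preceding financial year."""
        return max(40_000_000.0, 0.07 * global_annual_revenue_eur)

    # Example: EUR 2 billion in revenue -> 7% is EUR 140 million, which
    # exceeds the EUR 40 million floor, so the cap is EUR 140 million.
    print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000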

A Practical Approach to Compliance

The AI Act is one response — albeit a prominent one — to the risks posed by AI systems. These risks include inaccuracy; bias; lack of transparency, explainability, and interpretability; privacy and cybersecurity; undermining of intellectual property rights; and harms to competition, all magnified by rapid and massive increases in AI systems’ power.

Businesses will be managing these risks for years to come. How these risks manifest themselves will vary from company to company, even within a sector, depending on how each seeks to capitalize on the benefits and efficiencies afforded by the emerging technology, its risk appetite, its corporate culture, and other factors. Where a company sits on the value chain (upstream developer, downstream developer, deployer, etc.) also will have a significant impact. Whatever the case may be, businesses operating in Europe (or whose customers operate in Europe using their AI systems or those systems’ outputs) should get a jump on preparing to comply with the AI Act.

With at least two years until the AI Act takes effect, businesses have some breathing room. While best practices are constantly evolving, early steps in the right direction will lower the likelihood that an expensive course correction will be needed later. Once the AI Act comes into force, companies will only be allowed to introduce AI systems for which the development process complies with the legislation’s requirements. Businesses working on AI systems they anticipate launching after the effective date should ensure now that their development processes satisfy those requirements unless they are willing to retrofit before launching — assuming retrofitting is even technically feasible.

Moreover, existing laws in a number of jurisdictions, including the United States, the United Kingdom, Japan, and the EU itself, already address many of the harms at which the AI Act is aimed. In other words, even though the AI Act may not take effect for a couple of years, companies developing, distributing, procuring, or deploying AI systems have current obligations to ensure they do not violate privacy, antidiscrimination, consumer-protection, and other laws on the books. Given the various existing, new, and, at times, overlapping mandates, businesses should not wait any longer before commencing their compliance efforts.

An important first step is to establish policies to align legal, privacy, marketing, sales, development, and procurement professionals across all relevant departments within your organization to put clear guardrails in place with respect to AI systems and develop procedures to mitigate and manage risks. Providers of high-risk AI systems and foundation models, including generative AI systems, should also consider what changes they may need to make to comply with the AI Act. While the exact contours of the final legislation remain in doubt, enough is apparent for businesses to begin this work, including drafting technical documentation, creating recordkeeping practices, and preparing for various regulatory reporting responsibilities, among other tasks.

For a comprehensive approach to managing AI risks, consult the Artificial Intelligence Risk Management Framework (AI RMF) released by the U.S. National Institute of Standards and Technology (NIST).45 Accompanying the AI RMF is NIST’s AI RMF Playbook.46 The AI RMF Playbook provides a recommended program for governing, mapping, measuring, and managing AI risks. While prepared by a U.S. agency, the AI RMF and AI RMF Playbook are intended to “[b]e law- and regulation-agnostic.”47 They should support a global enterprise’s compliance with laws and regulations across jurisdictions.

Finally, businesses should continue to monitor regulatory developments. They should track the AI Act trilogue as it unfolds and be prepared to refine their compliance preparations as the legislation’s final form takes shape. Likewise, businesses should pay attention as lawmakers (and the plaintiffs’ bar) in the United States and globally scramble to respond to the risks presented by AI systems. Whether through new horizontal (cross-sector) legislation like the AI Act or through adaptation of existing sectoral laws, legislators and regulators around the world are striving to meet this moment with the right balance between precautions and promotion of innovation. The differences among jurisdictions may prove a challenge to companies operating globally. For now, though, firms can best prepare themselves by focusing on identifying, mitigating, and managing the risks arising from the AI systems they develop, distribute, procure, and deploy. Successful attention to these processes will go a long way toward ensuring compliance with the various regimes that are emerging.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.

  1. Commission Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21, 2021) (EC Proposal).

  2. Council Common Position, 2021/0106 (COD), Proposal for a Regulation of the European Parliament and of the Council — Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts - General Approach (Common Position).

  3. Amendments adopted by the European Parliament on June 14, 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, P9_TA(2023)0236, Amendment 165, art. 3(1)(1) (Parliament Position); see also id., Amendment 18, recital 6.

  4. Id., Amendment 172, art. 3(1)(4).

  5. Id., Amendment 220, art. 5(1)(d); see also id., Amendment 41, recital 18.

  6. Id., Amendment 227, art. 5(1)(dd); see also id., Amendment 41, recital 18.

  7.  Id., Amendment 224, art. 5(1)(da); see also id., Amendment 50, recital 26 a.

  8. Id., Amendment 225, art. 5(1)(db); see also id., Amendment 51, recital 26 b.

  9. Id., Amendment 226, art. 5(1)(dc); see also id., Amendment 52, recital 26 c.

  10. Id., Amendment 217, art. 5(1)(ba); see also id., Amendment 39, recital 16 a.

  11. Compare id., Amendment 38, recital 16 with Common Position recital 16.

  12. Compare id., Amendment 38, recital 16 with Common Position recital 16.

  13. Compare id., Amendment 38, recital 16 with Common Position recital 16.

  14. Id., Amendment 739, Annex III (1)(8)(aa); see also id., Amendment 72, recital 40 a.

  15. Id., Amendment 740, Annex III (1)(8)(ab); see also id., Amendment 73, recital 40 b.

  16. Common Position art. 6(3).

  17. Parliament Position, Amendment 596, art. 65(1); see also id., Amendment 60, recital 32.

  18. Id., Amendment 235, art. 6(2)(a).

  19.  Id., Amendment 261, art. 9(1); see also id., Amendment 76, recital 42.

  20. Id., Amendment 288, art. 10(3); see also id., Amendment 78, recital 44.

  21. Id., Amendment 336, art. 16(1)(c); see also id., Amendment 337, art. 16(1)(d); Amendment 81, recital 46.

  22. Id., Amendment 413, art. 29 a; see also id., Amendment 92, recital 58 a.

  23. Id., Amendment 410, art. 29 (6).

  24. Id., Amendment 169, art. 3(1)(1)(d).

  25.  Id., Amendment 168, art. 3(1)(1)(c); see also id., Amendment 99, recital 60 e.

  26. Id., Amendment 399, art. 28 b.

  27. Id.

  28. Id.

  29. Id.

  30.  Id., Amendment 627, art. 68.

  31. Id., Amendment 628, art. 68 a.

  32. Id., Amendment 629, art. 68 b.

  33. Id., Amendment 628, art. 68 a.

  35. Id., Amendment 525, art. 56(1).

  36. Id., Amendment 525, art. 56(1); see also id., Amendment 529, art. 56 b.

  37. EC Proposal art. 56; Common Position art. 56.

  38. Parliament Position, Amendment 289, art. 53(1); see also id., Amendment 490, art. 53(1)(a); Amendment 491, art. 53(1)(b); Amendment 116, recital 71.

  39. Id., Amendment 517, art. 55; see also id., Amendment 518, art. 55(1)(a); Amendment 519, art. 55(1)(b); Amendment 520, art. 55(1)(c); Amendment 521, art. 55(1)(ca); Amendment 522, art. 55(2).

  40. Id., Amendment 398, art. 28 a.

  41. Id., Amendment 647, art. 71(3).

  42. Common Position art. 71.

  43. Id.

  44. Parliament Position, Amendment 651, art. 71(4).

  45. U.S. Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Jan. 2023).

  46. See NIST, AI RMF Playbook.

  47. AI RMF 1.0, at 42.