European Commission Proposes Legislation Regulating AI
On April 21, the European Commission (Commission) proposed legislation (EC Proposal) that would comprehensively regulate artificial intelligence systems in the European Union, including externally located systems with output used inside the EU.1 The Commission simultaneously proposed updated legislation regulating machinery, which, among other changes, would address the integration of AI into machinery, consistent with the EC Proposal.2 In the transportation sector, high-risk AI safety components, products or systems covered by eight existing legislative acts would be exempt from the proposal although those acts would be amended to take the proposal’s requirements for high-risk systems into account.3 In the words of two commentators, “the heavy hand of the regulatory state has undoubtedly arrived in AI.”4 This statement is a bit hyperbolic as the General Data Protection Regulation and other laws already restrict AI in various ways.5 Nevertheless, developers, installers, adapters, importers, distributors, and operators of AI systems—whether inside the EU or not—all need to consider how the EC Proposal might require changes to their businesses.
The EC Proposal builds on the Commission’s February 2020 white paper on key priorities for AI regulation6 and an October 2020 resolution containing the European Parliament’s recommendations.7 The EC Proposal is undergoing a consultation period for stakeholder comments through July 6, 2021, while making its way to the European Parliament and the Council of the EU for their consideration, amendment and likely adoption.
Prohibited AI Practices
The EC Proposal would proscribe four AI practices:
- Subliminal Behavior Manipulation: An “AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.”
- Exploitative Behavior Manipulation: An “AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.”
- Governmental Social Scoring: AI systems used “by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
- “detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;” or
- “detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity.”
- Certain Real-Time Remote Biometric ID: “[T]he use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement” except in certain, specified circumstances.8
The EC Proposal classifies AI systems as either high-risk or low-risk based on their intended use. High-risk uses include:
- Many types of safety components and products (e.g., for medical devices, toys, many modes of transportation, radio equipment, and critical infrastructure);
- Public assistance determinations;
- Law enforcement use for individual risk assessments, credibility determinations, emotion detection, identification of “deep fakes,” evidentiary reliability evaluations, predictive policing, profiling of individuals, and crime analytics;
- Remote biometric identification and categorization of people;
- Evaluation of creditworthiness and credit scoring (limited exception);
- Immigration determinations;
- Admission, assignment and assessment of students;
- Emergency services dispatch;
- Judicial decision-making;
- Recruitment and other employment decisions; and
- Other uses the Commission later designates as high-risk.9
All other uses would be considered low-risk.
Requirements for High-Risk AI Systems
High-risk AI systems would be subject to numerous requirements.
A system would have to be designed to perform consistently at appropriate levels of robustness, accuracy and cybersecurity throughout its lifecycle. It would have to incorporate appropriate technical redundancy such as backup or fail-safe plans to ensure resiliency against “errors, faults or inconsistencies,” whether internal or external to the system. System design would have to enable users to monitor for “signs of anomalies, dysfunctions and unexpected performance” and otherwise facilitate human oversight, so users can stop operation or disregard, override or reverse the output. A system would have to include appropriate technical security solutions to prevent others from exploiting system vulnerabilities to manipulate its operations or output. If a system is trained with data, the training, validation and testing data would have to satisfy various requirements to ensure performance as intended.10
Conformity Assessments and Provider Compliance Programs
Before being placed on the market or into service, high-risk systems generally would have to undergo an assessment of their conformity with the legislation. For some systems, a provider (defined as the individual or entity “who develops an AI system or has it developed and places it on the market under its own name or trademark or puts it into service under its own name or trademark or for its own use, whether for payment or free of charge”)11 could perform the assessment itself, while assessment by an independent “notified body” would be required in other cases. Conformity would be presumed for systems that adhere to EU harmonized standards or common specifications.12
Certifications of conformity would last for five years (absent substantial modification of the system). Providers would have to register their systems in an EU database and conduct post-market monitoring for noncompliance with correction and reporting obligations.13
Providers also would have to establish a compliance program (quality management system), including a risk management system that follows detailed prescriptions in the legislation.14
Documentation, Disclosure and Explainability
A provider of a high-risk AI system would have to supply extensive documentation with its system to ensure user understanding and control and to facilitate oversight by governmental authorities. For users, among other information, providers would have to:
- Furnish concise, complete, correct, clear, relevant, accessible, and comprehensible instructions.
- Specify the characteristics, capabilities and limitations of performance, including foreseeable unintended outcomes and other risks.
- Describe the human oversight measures for proper operation.
- Identify the expected lifetime and any necessary maintenance and care processes.15
AI systems also would have to be accompanied by highly detailed and continuously updated technical documentation demonstrating compliance with the EC Proposal’s extensive requirements. Among the prescribed elements for this technical documentation would be:
- Detailed descriptions of the system’s:
- Development process;
- Design specifications, choices and compliance tradeoffs;
- Training data sets and techniques;
- Human oversight measures (with an assessment);
- Predetermined changes (for adaptive AI, which continues to learn from its operations); and
- Validation and testing procedures;
- Presentation of detailed information about the monitoring, functioning and control of the AI system, including expected accuracy levels and foreseeable unintended outcomes and other risks; and
- The provider’s risk management system (for which there would be extensive specifications).16
Retention of Records and Data
AI systems would have to log their results automatically to permit tracing their operations throughout their lifecycles for detection of risks to the health, safety or fundamental rights of people, as well as post-market monitoring of the AI systems. A provider or a professional user would have to retain these logs for a period appropriate to the intended purpose of the AI system and consistent with other applicable legislation.17
In addition, for 10 years after placing a system on the market or putting it into service, a provider would have to retain the prescribed technical documentation, required documentation about the provider’s compliance program and certain records related to the system’s conformity assessment.18
Obligations of Other Parties
Like other EU legislation,19 the EC Proposal also would impose duties on additional parties. A provider from outside the EU would have to appoint an authorized representative (unless it has an importer), which could fulfill many of the provider’s compliance obligations.20 A third party that sells or installs a system under its own brand, alters a system’s intended purpose or substantially modifies a system would be treated as the provider of that system.21
Among other requirements, importers and distributors would have to verify compliance upstream from them in the supply chain; not degrade compliance while products containing high-risk AI systems are in their custody; and report if a system they import or distribute presents a risk to people’s health, safety or fundamental rights.22 Distributors would have a further duty to correct, withdraw or recall a system that they have reason to consider noncompliant.23
Professional users, for their part, would have to operate systems as directed by providers. They also would have to input only data relevant to the system’s intended purpose, conduct data protection impact assessments and monitor operations for anomalies.24 Although not expressly required under the EC Proposal, it would be prudent for companies employing high-risk AI systems to ensure that humans with the necessary competence, training and authority oversee their operations and are empowered to take full advantage of the system’s human-oversight precautions when warranted.
Requirements for Certain AI Systems
All AI systems, high-risk or not, would have to comply with three transparency requirements if applicable. A system that interacts with humans would have to inform them that they are interacting with an AI system unless it is obvious from the context. People would have to be notified if they are “exposed” to an emotion-recognition system or (unless for law enforcement purposes) a system that categorizes people by sex, age, hair or eye color, tattoos, ethnicity, sexual orientation, political orientation, or similar characteristics on the basis of biometric data. And deep-fake images or audio or video content would have to be identified as such, with qualified exceptions for law enforcement or expressive, artistic and scientific freedom. (A “deep fake” is defined as “content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.”)25
Low-risk AI systems that are not used in these applications, which the Commission believes to be the “vast majority of AI systems,”26 would remain unregulated under the EC Proposal.
Penalties for Violations
The authority to define and impose “effective, proportionate and dissuasive” penalties for violations would remain with the member states. Maximum fines would be:
- The greater of €30 million or six percent of global turnover for violations of:
- The prohibitions against certain AI practices; or
- The regulations on data and data governance in Article 10.
- The greater of €20 million or four percent of global turnover for violations of other requirements or obligations.
- The greater of €10 million or two percent of global turnover for provision of “incorrect, incomplete or misleading information” in response to requests of notified bodies or national competent authorities.27
For the implementation of the proposed legislation and other AI governance tasks, the European Commission suggests a mix of agencies. Sectoral authorities (e.g., medical device regulators) at both the EU and member state levels would continue to implement their mandates.28 The proposed legislation would direct member states to designate national supervisory and national competent authorities for enforcement.29 Finally, the European Commission proposed a new European Artificial Intelligence Board to coordinate among the Commission and the member state authorities; assist with ensuring consistent application of the legislation; share best practices among member states; and issue opinions, recommendations and written contributions on implementation.30 How this mix of authorities gels (or not) will significantly influence how burdensome the contemplated regulatory regime becomes.
Wide Extraterritorial Scope
Like the GDPR, the proposed legislation would have wide extraterritorial scope. In particular, providers exporting AI-enabled products and services into the EU would be covered and would need to implement the requirements put forward in the EC Proposal. Moreover, the proposed legislation extends to providers and professional users of AI systems outside the EU if the system’s output is used within the EU.31
Key Differences From the European Parliament’s Recommendations
The EC Proposal differs in several important ways from the recommendations the European Parliament adopted in October 2020. Unlike the Commission, Parliament did not propose banning specific AI practices.32 On the other hand, Parliament defined high-risk somewhat more expansively (covering sectors and not just uses),33 and it suggested requiring high-risk AI not to “interfere in elections or contribute to the dissemination of disinformation” or cause certain other social harms.34 Parliament also proposed that all conformity assessments be performed by approved third parties, while the Commission would allow providers of certain types of high-risk AI to assess themselves.35 Moreover, Parliament included an individual right to redress for violations and whistleblower protections,36 which the Commission did not. Any of these proposals could find their way into the final legislation that ultimately is adopted.
Parliament also recommended separate legislation amending the civil liability regime for AI systems.37 An earlier Commission report had indicated such changes might be necessary,38 so a Commission proposal on liability may be forthcoming.
The EC Proposal marks a major milestone for AI. It would be “the first-ever legal framework on AI.”39 The EC Proposal will now be subject to standard legislative procedures in the Parliament and the Council of the EU, which will probably be lengthy and complex, as befits the subject. Once the final text is adopted and enters into force, most of its provisions will start applying 24 months later.40
Given the proposed legislation’s broad imposition of obligations and wide extraterritorial reach, companies all along the AI value chain should monitor the legislation’s evolution and start planning for compliance, regardless of their geographic location. Interested businesses—especially those with existing or planned operations that cannot comply with the EC Proposal—have an opportunity to influence the final legislation by contributing their views in the public consultation ending on July 6, 2021.41
We will continue to monitor developments with this EU legislation and other efforts to regulate AI. We invite you to watch for our further publications in this fast-moving area.
*A version of this Advisory will be published in Bloomberg Law.
© Arnold & Porter Kaye Scholer LLP 2021 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.
Commission Proposal for a Regulation on a European Approach for Artificial Intelligence, COM (2021) 206 final (Apr. 21, 2021) (EC Proposal).
Commission Proposal for a Regulation of the European Parliament and of the Council on Machinery Products, COM (2021) 202 final (Apr. 21, 2021).
EC Proposal arts. 2(2), 75-82, recital 29. The amended acts are Regulation 300/2008, 2008 O.J. (L 97) 72 (EC); Regulation 167/2013, 2013 O.J. (L 60) 1 (EU); Regulation 168/2013, 2013 O.J. (L 60) 52; Directive 2014/90, 2014 O.J. (L 257) 146 (EU); Directive 2016/797, 2016 O.J. (L 138) 44; Regulation 2018/858, 2018 O.J. (L 151) 1 (EU); Regulation 2018/1139, 2018 O.J. (L 212) 1 (EU); and Regulation 2019/2144, 2019 O.J. (L 325) 1 (EU).
Thomas Burri & Fredrik von Bothmer, The New EU Legislation on Artificial Intelligence: A Primer 6 (Apr. 21, 2021).
See Peter J. Schildkraut, Arnold & Porter, AI Regulation: What You Need to Know to Stay Ahead of the Curve (forthcoming).
Commission White Paper on Artificial Intelligence—A European Approach to Excellence and Trust, COM (2020) 65 final (Feb. 19, 2020).
Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies, P9_TA(2020)0275 (Ethics Regulation Resolution) (proposing legislation in Annex B).
Id. arts. 6-7; see id. annexes II-III.
Id. arts. 21, 40-41, 43, annexes VI-VII.
Id. arts. 11, 18, annex IV; see id. art. 9 (risk management system requirements).
See, e.g., Regulation 2017/745, arts. 11, 13-14, 16, 2017 O.J. (L 117) 1, 25-30 (EU) (on medical devices).
EC Press Release, European Commission, Europe Fit for the Digital Age: Commission Proposes New Rules and Actions for Excellence and Trust in Artificial Intelligence (Apr. 21, 2021).
E.g., id. arts. 9(7), 19(2), 43(3), 61(4), 63(3)-(5), (7), 64(3).
Compare EC Proposal art. 6, annexes II-III with Ethics Regulation Resolution annex B, arts. 4(e), 14(1), annex.
Ethics Regulation Resolution annex B, arts. 10-11.
Compare EC Proposal art. 43(2) with Ethics Regulation Resolution annex B, art. 15.
Ethics Regulation Resolution annex B, arts. 13, 22.
Civil Liability Regime for Artificial Intelligence, P9_TA(2020)0276, annex B.
Commission Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics, COM (2020) 64 final (Feb. 19, 2020).
European Commission, Artificial Intelligence—Ethical and Legal Requirements (last visited Apr. 26, 2021).