August 29, 2022

Global AI Regulation: Canadian Edition

Advisory

Over the summer, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of the broader Digital Charter Implementation Act, 2022 (C-27), which also would update Canada’s privacy and data protection legal framework. This artificial intelligence (AI) legislation adds another piece to the jigsaw puzzle for companies constructing their global AI regulatory compliance programs.

The AIDA would create an entirely new risk-based regime for Canada’s international and interprovincial trade and commerce in “AI systems,” including their design, development, sale, and operation, as well as related data processing. At first blush, the legislation appears to be far less prescriptive than the proposed Artificial Intelligence Act (AIA) under consideration by the EU’s co-legislators. However, the AIDA leaves many of the details of the requirements to the federal government to specify in future regulations, which might lead to a more burdensome regime.

Because the AIDA may have some extraterritorial effect, US and other non-Canadian businesses should watch the bill’s evolution through the legislative process and consider the steps they would need to take to comply.

Scope

If passed, the AIDA would govern private-sector “regulated activities” regarding AI systems.

An AI system is defined as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.” The similar Organisation for Economic Co-operation and Development (OECD) definition does not expressly capture content-generation systems, and it reaches only systems making decisions, recommendations or predictions that influence real or virtual environments. It is unclear whether these and other distinctions in wording would result in material differences in scope.

A “regulated activity” is defined broadly to encompass a wide range of activities related to AI development and use, including “designing, developing or making available for use an AI system or managing its operations” as well as “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an AI system.” The legislation would reach regulated activities in interprovincial and international trade and commerce, leaving open the possibility of additional intraprovincial regulation of AI by the provinces.

The AIDA likely would have some extraterritorial application, although how much would be determined through future regulations. As such, it would be prudent for multinationals with global AI systems composed of components designed, developed, managed, or used in Canada to start familiarizing themselves with the proposed framework.

Requirements for All AI Systems

Like the EU’s proposed AIA, the AIDA follows a risk-based approach. However, the AIDA would be far less burdensome, even for high-impact systems, at least pending elaboration in future regulations. Perhaps as a result, the AIDA would divide AI systems into only two categories, high-impact and not high-impact, instead of the four tiers in the proposed AIA (unacceptable risk, high risk, limited (or “transparency”) risk, and minimal or no risk).

Legal persons who design, develop or make available for use any AI system or manage its operation would have to: (1) establish measures regarding how data are anonymized and the use or management of anonymized data (to the extent such data are processed or made available for use in the course of an activity regulated by the AIDA); (2) assess whether their systems qualify as “high-impact systems” pursuant to criteria established in future regulations; and (3) maintain general records describing their compliance measures and supporting their impact assessment.

The term “anonymized data” is not defined in the AIDA. “Anonymize” is defined in another part of the Digital Charter Implementation Act, the Consumer Privacy Protection Act (CPPA), as “to irreversibly and permanently modify personal information, in accordance with generally accepted best practices, to ensure that no individual can be identified from the information, whether directly or indirectly, by any means.” The government may have intended the AIDA’s definition of “anonymized data” to align with the CPPA definition of “anonymize,” but this is unclear. A definition likely will be set out in future regulations.

Additional Requirements for High-Impact Systems

For high-impact systems, the person(s) responsible also would have to:

  • publish on a public-facing website a plain-language description of the system, including explanations of:
    • how the system is intended to be used;
    • the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make;
    • the mitigation measures set up as part of required risk-management; and
    • any other information prescribed by regulation;
  • establish and monitor measures to identify, assess and mitigate risks of unlawful discrimination[1] and other physical, psychological, property, or economic harms that could result from the use of such a system; and
  • notify the government of any “material harm” likely to result from its use.

Authority

The AIDA would create a range of new order-making powers and audit rights for the designated government Minister. Beyond the significant recordkeeping, audit, publication, and disclosure obligations discussed above, the Minister could order any person responsible for a high-impact system to cease using it or making it available for use if there are reasonable grounds to believe that its use creates a serious risk of imminent harm.

If passed, the AIDA would also establish the Office of the Artificial Intelligence and Data Commissioner.

Enforcement

Breaches of the AIDA generally would be civil violations, but breaches of certain provisions could be pursued as either a civil violation or a criminal offense. For civil violations, the AIDA would authorize regulations to establish an administrative monetary penalty regime, the stated purpose of which is to “promote compliance” and “not to punish.”

An organization acting in contravention of any AIDA requirement, or obstructing or providing false or misleading information during an audit or investigation, could face a criminal fine of up to the greater of $10 million (CAD) and three percent of its global revenues, whereas a convicted individual would be fined in an amount at the court’s discretion.
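
To illustrate the “greater of” formula with hypothetical figures: an organization with $1 billion (CAD) in global revenues could face a fine of up to $30 million (CAD), because three percent of its revenues exceeds the $10 million (CAD) floor, while for an organization with $100 million (CAD) in revenues, the $10 million (CAD) figure would govern, since three percent of its revenues would amount to only $3 million (CAD).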

Three criminal offenses would bear even steeper potential penalties:

  • possessing or using personal information in any stage of AI development, or in operating or providing AI systems, knowing or believing that the information was obtained unlawfully;
  • knowingly or recklessly making available an AI system “likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property” and which causes such harm or damage; and
  • making an AI system available for use with intent “to defraud the public and to cause substantial economic loss to an individual,” where its use causes that loss.

An organization that commits one of these three offenses could be fined up to the greater of $25 million (CAD) and five percent of its global revenues. A convicted individual could be fined in an amount at the court’s discretion, imprisoned for up to five years less a day, or both.

Transparency Around Algorithms

In addition to the rights that would be granted to individuals under the AIDA, the CPPA would give individuals certain rights regarding the use of their personal information for automated decision-making. Similar—but not identical—rights are granted under Article 22 of the EU/UK General Data Protection Regulation (GDPR) and privacy laws in various US states as well as countries like Brazil and South Africa.

Under the CPPA, individuals would have a right to an explanation of any prediction, recommendation or decision that an automated decision system makes using their personal information and that could significantly affect them. The explanation would have to include the type and source of the personal information used, as well as the reasons or principal factors that led to the prediction, recommendation or decision.

Looking Forward and Around the Globe

It is difficult to predict whether the Digital Charter Implementation Act will be adopted; indeed, its proposed privacy statutes already are facing some of the same criticism leveled at its similar predecessor, Bill C-11, which died when the Canadian Parliament was dissolved for last September’s election. Nonetheless, the AIDA represents Canada’s contribution to the ongoing movement around the world to create new regulatory regimes to govern AI.

Other examples of this trend abound. In addition to the EU’s proposed AIA, the UK government is seeking comment on its proposed AI regulatory framework and plans to introduce its AI-governance strategy late this year. In January, the Cyberspace Administration of China adopted its Internet Information Service Algorithmic Recommendation Management Provisions, and it is now finalizing its regulation of algorithmically created content, including virtual reality, text generation, text-to-speech, and “deep fakes.” Brazil, too, is crafting a law regulating AI.

In the United States, the leading congressional privacy law proposals contain algorithmic-governance provisions roughly comparable to the AIDA’s. Meanwhile, the Federal Trade Commission has requested comments on an Advance Notice of Proposed Rulemaking (ANPR) to address AI and other automated decision-making practices as well as privacy and data security.

US state and local governments also have begun to regulate algorithmic decision-making more stringently. Illinois requires employers that use AI systems to evaluate video interviews to notify job applicants and obtain their consent. New York City recently adopted a law subjecting automated employment decision tools to an annual “bias audit” by an independent auditor.

It is not yet clear how these various regulatory initiatives will evolve, let alone how they will coexist. While these pieces fall into place, however, multinational corporations should not let the uncertainty paralyze their efforts to stand up compliance programs. Instead, they can take practical steps to stay ahead of the curve and prepare for what is coming.

© Arnold & Porter Kaye Scholer LLP 2022 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.

  [1] Specifically, this requirement addresses the risks of “biased output”: content generated, or a decision, recommendation, or prediction made, by an AI system that adversely differentiates on one of the prohibited grounds of discrimination set out in the Canadian Human Rights Act without justification.