Virtual and Digital Health Digest
This digest covers key virtual and digital health regulatory and public policy developments during October 2023 from the United States, United Kingdom, and European Union.
Letter from the Editors:
Spurred, in part, by the COVID-19 pandemic and the need for new ways to reach patients at home, 2023 saw a boom in digital technologies and healthcare solutions: one-stop-shop telemedicine platforms, app-based remote patient monitoring, direct-to-consumer online pharmacies, software-based medical devices, and artificial intelligence/machine learning to bolster delivery of telehealth services. Then came a robust government response: FDA’s Discussion Paper on use of AI/ML in Drug Development and the formation of the Digital Health Advisory Committee, the Office of Inspector General’s Special Fraud Alert on Telefraud, the end of COVID-19 public health emergency regulations and complementary state-level telemedicine reforms, congressional briefings on AI, and the Biden Administration’s Executive Order on AI, to name a few examples of government action.
And that was just in the U.S. In the EU and UK, regulatory bodies also grappled with the introduction of machine learning, AI, and other software into healthcare services through, for example, new guidance from the EU Medical Device Coordination Group and the UK Medicines and Healthcare products Regulatory Agency on software medical devices, the EU’s AI Act and the UK government’s AI White Paper, the European Medicines Agency’s reflection paper on the use of AI in the product lifecycle, the EU-U.S. Data Privacy Framework and the equivalent UK-U.S. data bridge, and the European Health Data Space.
We call this the “Race to Regulate.” This push-pull dynamic between digital health innovation and government regulation is key to evaluating regulatory risks in today’s shifting legal landscape. This Digest seeks to keep up with these changes and provide you with an overview of the key guidelines and developments as the landscape develops. As we come to the end of 2023, join us on December 13 as we unpack pivotal moments in the 2023 Race to Regulate and discuss what’s next for virtual and digital health.
Do you have thoughts or feedback on our newsletter? We would like to hear from you. Contact us at email@example.com.
We look forward to hearing from you.
Jackie and Allison
In this issue, you will find the following:
- FDA Regulatory Updates
- Healthcare Fraud and Abuse Updates
- Corporate Transactions Updates
- Provider Reimbursement Updates
- Policy Updates
- Privacy Updates
- EU and UK News
FDA Regulatory Updates
FDA Issues Guiding Principles for Predetermined Change Control Plans for Machine Learning Devices. To help support high-quality advancement in digital health technologies, on October 24, FDA, Health Canada, and the U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA) issued guiding principles for predetermined change control plans for machine-learning medical devices (PCCP Guiding Principles). PCCPs, which are being developed and implemented in different ways across regulatory jurisdictions, are increasingly being used to manage certain predefined device changes without requiring additional regulatory authorizations. The PCCP Guiding Principles provide that a PCCP should be focused and bounded, risk-based, evidence-based, and transparent. Issuance of the PCCP Guiding Principles follows the prior issuance of the Good Machine Learning Practice guiding principles in 2021. Both sets of guiding principles are intended to lay a foundation for machine learning practices and to encourage international harmonization.
Feedback on the PCCP Guiding Principles can be submitted to FDA’s docket on a proposed framework for modification to AI/ML-based devices.
FDA Seeks Public Comment on Digital Health Technologies (DHTs) for Detecting Prediabetes and Diabetes. On November 2, FDA’s Center for Devices and Radiological Health announced that it is seeking public comments on how DHTs, including artificial intelligence/machine learning (AI/ML), may help with early detection of risk factors for type 2 diabetes, prediabetes, and undiagnosed type 2 diabetes. In requesting comments, FDA noted that while many health care stakeholders are embracing DHTs to transform the way health care is delivered in patients’ homes, the full potential of DHTs for the detection of prediabetes and diabetes, especially in diverse populations, has yet to be realized. FDA is seeking input on several questions, including questions relating to community engagement, consortia research, current diabetes-related uses of DHTs, potential uses of AI/ML on healthcare datasets for the detection of prediabetes, and the integration of digitally derived biomarkers into clinical decision support systems to identify undiagnosed diabetes.
Comments, which can be submitted to the docket, are due by January 31, 2024.
FDA Updates List of AI/ML-Enabled Devices. On October 19, FDA updated its publicly available list of AI/ML-enabled medical devices to add 171 devices to the list. Of those devices newly added to the list, 155 were devices with final decision dates between August 1, 2022 and July 30, 2023, and 16 were devices with decisions from prior periods that were identified through a revision of methods used to generate the list. With this latest update, the AI/ML devices list now has nearly 700 devices, reflecting the continued growth and adoption of technologies employing AI/ML in patient care. Although the AI/ML devices span therapeutic areas, radiology devices continue to account for the majority of devices on the list. FDA explains that “in addition to having the largest number of submissions, Radiology has experienced the steadiest increase of AI/ML-enabled device submissions of any specialty.”
FDA Updates Enforcement Policy for Certain Remote Monitoring Devices. On October 19, FDA issued final guidance titled “Enforcement Policy for Non-Invasive Remote Monitoring Devices Used to Support Patient Monitoring.” This guidance supersedes the COVID-era enforcement policy guidance on remote monitoring devices first issued in March 2020. The updated policy applies to modified devices where the original device was a legally marketed, noninvasive remote monitoring device of a type listed in the guidance that measures or detects common physiological parameters and that is used to support patient monitoring. The guidance describes an enforcement discretion policy for limited modifications to the indications, functionality, or hardware or software of device types within the scope of the guidance, without prior submission of a 510(k), provided the modifications do not create undue risk and do not directly affect the physiological parameter measurement algorithm. For subject devices, examples of such modifications include hardware or software changes that allow for increased remote monitoring capability, as well as a change in indications regarding use in the home setting. The guidance sets forth recommendations for labeling, as well as for design control and validation.
Healthcare Fraud and Abuse Updates
Doctor Convicted for Engaging in Genetic Testing and Durable Medical Equipment Telemedicine Scheme. On October 18, Dr. Alex Gloster, an independent contractor for several purported telemedicine companies, pleaded guilty to defrauding Medicare by ordering medically unnecessary durable medical equipment (DME) and cancer genetic testing (CGx). Between September 2017 and August 2019, Dr. Gloster signed thousands of doctors’ orders for DME and CGx tests for patients he never treated or spoke to. Dr. Gloster’s orders amounted to over US$5.6 million in false and fraudulent claims submitted to Medicare, of which Medicare reimbursed over US$2.4 million. Dr. Gloster made numerous false statements in the scheme, such as falsely certifying in medical records and requisition forms that he was the beneficiaries’ treating physician, that he had personally examined patients, and that he used the ordered DME and CGx tests to manage patients’ conditions. For ordering DME and CGx tests and electronically reviewing patient charts, Dr. Gloster was paid a set fee per doctor’s order and generated US$270,570 in fees under the scheme.
Why does this matter?
While we have covered several similar cases this year involving the ordering of medically unnecessary testing or DME, this case sets itself apart in that (1) it involves fraudulent ordering of both DME and genetic testing, two areas we flagged in the October Digest as likely to draw additional claim scrutiny due to the potential for high reimbursement, and (2) Dr. Gloster did not appear to have any contact with the beneficiaries prior to ordering the medically unnecessary services. The US$5.6 million in fraudulent claims at issue in this particularly brazen scheme is among the highest amounts we saw in similar cases this year.
DOJ Investigates Telehealth Provider Cerebral. A June 2022 qui tam complaint against Cerebral Inc. and Cerebral Medical Group, P.A. (Cerebral) was unsealed on October 16. The documents detail how Cerebral allegedly engaged in improper inducement of providers to prescribe stimulant controlled substances to treat ADHD. Specifically, Katherine Keaton, a nurse practitioner who provided telemedicine care to patients and prescribed medication for a variety of mental health disorders, worked at Cerebral Medical Group from approximately May 4, 2021 until her resignation on January 27, 2022. Keaton brought this action, alleging that Cerebral induced its providers to prescribe a “specific and excessive” amount of controlled substances to ADHD patients. In return, Cerebral would allegedly offer to reimburse the providers’ federal DEA certification costs. According to Keaton, Cerebral purportedly offered her reimbursement for her US$800 DEA certification in exchange for Keaton writing approximately 400 prescriptions for stimulant drugs to treat ADHD. As a result of this improper inducement, Cerebral allegedly caused federal health insurance programs, such as Medicare, Medicaid, and TRICARE, to pay false and fraudulent claims for the reimbursement of Cerebral’s telemedicine mental health care services and medications. This case is still ongoing.
Corporate Transactions Updates
Still in Digital Health IPO Drought: Shaky Market Delays Waystar’s Initial Public Offering. Waystar Holding Corp., a digital health company that facilitated over US$4 billion in healthcare payment transactions last year by assisting hospitals and clinics in managing their finances, made its initial public offering (IPO) filing public in mid-October after filing confidentially in August. Waystar, the first and only digital health company to make its IPO filing public in 2023, was projected to be valued at up to US$8 billion (including debt) in its upcoming IPO. However, this excitement was short-lived — it was announced on November 1 that Waystar would delay its IPO until December or 2024 to ride out the market volatility and avoid a similar fate to other high-profile companies that recently went public only to trade below their IPO prices.
Waystar expressed its intention to list on the Nasdaq and has indicated that it still plans to go through with the IPO under the ticker symbol “WAY” when the market stabilizes. News of the Waystar IPO delay surfaced in early November, almost concurrently with the announcement that Olive AI, a health automation company once dubbed a “Telehealth Unicorn” after raising US$848 million and implementing its enterprise AI in over 900 hospitals, had reportedly joined the club of “Fallen Unicorns” and sold its remaining assets to a number of corporations, including Waystar.
Digital Health Meets the Military: Telehealth Giant Amwell Strikes a Deal With the Defense Health Agency. In late October, telehealth giant Amwell (formerly American Well) and technology firm Leidos announced that they had been awarded a contract worth US$180 million with the Department of Defense’s Defense Health Agency (DHA). The deal will allow the government to use Amwell and Leidos technology to provide a hybrid care platform that will power the “digital first” transformation of the Military Health System. It has been reported that the new health technology platform will initially be implemented at five locations, followed by an enterprise-phased rollout. In 2023, Amwell’s losses rose to US$629 million, but executives have remained outwardly optimistic, stating that the new contract with the DHA will have a significant impact on Amwell’s future financials.
Provider Reimbursement Updates
CMS Issues Physician Fee Schedule Final Rule. On November 2, the Centers for Medicare & Medicaid Services (CMS) issued the calendar year (CY) 2024 Medicare Physician Fee Schedule final rule. The rule implements provisions of the Consolidated Appropriations Act, 2023 (CAA, 2023) that temporarily extend the telehealth flexibilities established during the COVID-19 Public Health Emergency (PHE) through December 31, 2024. As we detailed in the July Digest, CMS proposed several changes related to telehealth reimbursement to align with the CAA, 2023; the final rule adopts those proposals in full and reminds stakeholders that many limitations on telehealth in effect prior to the PHE will resume effective January 1, 2025, unless Congress changes the Medicare statute. In particular, the rule:
- Continues to permit all telehealth services to be furnished in a patient’s home through December 31, 2024. Beginning in 2025, Medicare will only cover telehealth services furnished in a patient’s home for (1) mental health services; (2) substance use disorder services; and (3) clinical assessments related to End-Stage Renal Disease for beneficiaries receiving home dialysis. (p. 105). 42 U.S.C. § 1395m(m)(4)(C)(ii)(X).
- Delays in-person visit requirements for mental health telehealth services through December 31, 2024. Beginning in 2025, beneficiaries must have an in-person visit with their practitioners before beginning mental health telehealth services and, with some exceptions, every 12 months thereafter. (p. 149). 42 C.F.R. § 410.78(b)(3)(xiv).
- Continues to authorize occupational therapists, physical therapists, speech pathologists, and audiologists to offer telehealth services through December 31, 2024. Beginning in 2025, these professionals will not be eligible telehealth practitioners. (p. 152). 42 U.S.C. § 1395m(m)(4)(E).
In addition to temporarily extending telehealth flexibilities, the final rule also establishes a new categorization scheme and approval process for the Medicare Telehealth Services List. Beginning in CY 2025, all telehealth services will be designated either “permanent” or “provisional,” and CMS will use a five-step process to consider requests to add a new telehealth service or to make a provisional telehealth service permanent. (pp. 120-139). CMS also provided clarification on changes in billing requirements for remote physiological and therapeutic monitoring that took effect when the PHE expired in May. After the end of the PHE, remote monitoring can only be furnished to established patients and requires at least 16 days of data collection over a 30-day period in order to bill for remote monitoring codes. CMS responded to questions about billing for remote monitoring during global surgical periods and in conjunction with care management services. (pp. 182-185).
Senate Finance Committee Considers Telehealth Reform. On November 2, Senate Finance Committee Chair Ron Wyden (D-OR) and Ranking Member Sen. Mike Crapo (R-ID) released a draft of health care legislation with several telehealth provisions. The draft legislation would require CMS to establish a code or modifier to identify mental health services furnished through telehealth. It would also task the agency with issuing guidance on telehealth best practices, with a focus on patients with limited English proficiency and patients who are visually or hearing impaired. The committee is scheduled to vote on the draft legislation on Wednesday, November 8.
House Elects New Speaker as Federal Funding Deadline Looms. Federal government funding is set to expire on November 17, and both the House and Senate will be in session for the majority of November, except for the week of Thanksgiving. On October 25, House Republican Conference Vice Chair Mike Johnson (R-LA) was elected Speaker of the House in a 220-209 vote, with the unanimous support of House Republicans, ending the 22-day period since former Speaker Kevin McCarthy (R-CA) was ousted. Since being elected the 56th Speaker of the House, Speaker Johnson has focused on passing the rest of the House Republicans’ Fiscal Year 2024 appropriations bills. All of the House’s appropriations bills have been partisan and thus are very unlikely to be considered by the Democratic-controlled Senate. On November 1, the Senate passed its first appropriations “minibus” package (S.Amdt. 1092 to H.R. 4366) by a bipartisan vote of 82-15. While the House has passed more individual appropriations bills on the floor than the Senate has so far this year, the passage of the Senate minibus package is notable because it includes the only appropriations bills that have passed either chamber of Congress with broad bipartisan support this year. Speaker Johnson has previously suggested a continuing resolution extending government funding through January 15 or April 15, in an effort “to ensure the Senate cannot jam the House with a Christmas omnibus.”
Biden Executive Order on Artificial Intelligence Highlights Privacy Risks. On October 30, President Biden signed an Executive Order titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” addressing many issues related to AI with directives to over 40 federal agencies. President Biden highlighted the risks that AI may pose to personal privacy, including by “making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires.” To address these risks, the EO calls on federal agencies to develop “privacy-enhancing technologies” (PETs), meaning a “software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality.” To complement this effort, the EO directs the National Institute of Standards and Technology (NIST) to “create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI” that “at a minimum, describe the significant factors that bear on differential-privacy safeguards and common risks to realizing differential privacy in practice.” NIST is also directed to coordinate with the Secretary of Energy and the Director of the National Science Foundation (NSF) to develop and help ensure the availability of AI testing environments “to support the design, development, and deployment of associated PETs.”
In addition, the EO directs the Secretary of HHS to establish an HHS AI Task Force on the responsible deployment of AI and to “develop a strategy, in consultation with relevant agencies, to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality, including” with respect to privacy. The EO suggests that HHS provide technical assistance to providers and payers about their obligations under privacy laws and issue guidance or take other action in response to complaints or reports of noncompliance with federal privacy laws in the AI context.
American Hospital Association Sues HHS Over Directives on the Use of Online Tracking Technologies. In a complaint filed on November 2, the American Hospital Association (AHA) and three other plaintiffs, the Texas Hospital Association, the United Regional Health Care System, and Texas Health Resources, allege that HHS overstepped its authority in seeking to discipline HIPAA-covered entities and their business associates for the use of web tracking technologies. As reported in our October Digest, the AHA complained about the HHS position on tracking technologies in the association’s September 2023 letter to Senator Bill Cassidy (R-LA), urging that HHS withdraw its guidance on the potential privacy and security violations that may occur through the use of web tracking technologies.
In its complaint, the AHA claims that the data collected through web tracking technologies is not regulated under HIPAA because it does not constitute “individually identifiable health information,” and that HHS exceeded its authority in asserting otherwise. Invoking the Administrative Procedure Act, as well as the First Amendment, as the basis for its claims, the AHA is seeking an injunction prohibiting HHS from pursuing enforcement actions against AHA member hospitals, as well as members of the other plaintiff organizations named in the lawsuit, for actions involving web tracking technologies.
EU and UK News
The MDCG Adopts Guidance on Medical Device Software Intended To Work in Combination With Hardware or Hardware Components. On October 18, the Medical Device Coordination Group (MDCG) adopted guidance on Medical Device Software (MDSW) intended to work in combination with hardware or hardware components that generate or provide input data to the software. For example, MDSW downloaded to or available on wearables (e.g., smartwatches or augmented reality goggles) may achieve its intended purpose by receiving and analyzing data provided by hardware or a hardware component (e.g., a camera or optical sensors). The guidance clarifies how to identify whether the hardware or hardware component is regulated as a medical device or as an accessory to a medical device, and sets out the qualification criteria and the appropriate regulatory pathway for complying with the respective requirements. The guidance also sets out the three regulatory options for manufacturers of such products:
- The hardware or hardware component is placed on the market as an accessory to an MDSW.
- The hardware or hardware component is placed on the market as a medical device either (1) as part of a system, (2) as a combination with another medical device, or (3) as an integral part of a medical device.
- The hardware or hardware component is an integral part of a general consumer product or wearable digital product and is not a medical device or an accessory to a medical device and has no intended medical purpose.
UK Regulatory Sandbox Coming Soon. On October 30, the MHRA announced that it aims to launch the “AI-Airlock” in April 2024. The AI-Airlock will be a novel regulatory sandbox, which will allow developers of software and AI medical devices to test their products in a safe environment, generate robust evidence for regulatory submissions, and address any challenges with a technology’s safety and efficacy evaluation. The sandbox will be monitored by the MHRA and will consist of a collaborative approach between regulators, developers, academia, and the NHS. It is hoped the AI-Airlock will ultimately mean that patients can access new technologies faster.
UK Government Announces Large Investments in Innovative Technologies. On October 3, the UK Department of Health and Social Care (DHSC) announced a £30 million investment to support the rollout of innovative technologies for the NHS. According to the DHSC, this investment will help ease the pressures on the NHS this winter and could include expanding virtual wards, investing in wearable medical devices for use by patients at home to aid the diagnosis and management of chronic conditions, and investing in diagnostic imaging technologies. On October 29, the Prime Minister announced the launch of a £100 million investment in AI in healthcare, particularly in areas such as dementia, mental health, and oncology. Finally, on October 30, the DHSC also announced £21 million of funding to deploy AI tools to speed up the diagnosis and treatment of lung cancer.
CPI Report Reveals Challenges and Opportunities for UK MedTech. On October 23, the UK Centre for Process Innovation (CPI) published two reports calling for an urgent MedTech industrial strategy to avoid the UK falling behind in the rapidly growing HealthTech sector. The first report, written in collaboration with the Association of British HealthTech Industries, is titled “Challenges and Opportunities for UK HealthTech Manufacturing Scale Up.” It highlights that many companies may be moving from the UK to other countries to benefit from more competitive pricing and more flexible manufacturing processes. The second report, titled “An Action Plan: Driving Growth of the UK Digital Health Industry,” maps the changes that may be needed for the UK to maximize its global potential in the digital health market.
Digital Transformations for Health Lab Launched During World Health Summit. On October 16, the Digital Transformations for Health Lab (DTH-Lab) was launched during the World Health Summit. The DTH-Lab is a global consortium that will implement the Lancet and Financial Times Commission Report on Governing Health Futures 2030. The report contained four actions to address health inequalities and promote public health in the era of digitalization:
- Recognize digital technologies as determinants of health.
- Build a governance architecture that creates trust in digital health.
- Develop a new approach to the collection and use of health data based on data solidarity.
- Invest in digitally transformed health systems.
These recommendations will now be implemented by the DTH-Lab, which will explore how digital and AI transformations can improve health and well-being and strengthen citizenship and empowerment.
WHO Publishes Guidance on Regulatory Principles Applicable To Use of AI in Health. On October 18, the World Health Organization (WHO) published guidance on “Regulatory considerations on artificial intelligence for health.” The publication aims to outline key principles that governments and regulatory authorities can follow to develop new guidance or adapt existing guidance on AI at national or regional levels. The new guidance emphasizes the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders, including developers, regulators, manufacturers, health workers, and patients. It outlines six areas for regulation of AI for health: transparency and documentation; risk management; validating data and being clear about intended use; data quality; privacy and data protection; and collaboration between relevant bodies and individuals.
G7 Leaders Agree on Guiding Principles and Voluntary Code of Conduct for AI Developers. On October 30, G7 leaders agreed on International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers under the Hiroshima AI process. These principles and the voluntary Code of Conduct will complement, at an international level, the legally binding rules that the EU co-legislators are currently finalizing under the EU AI Act. The aim of the Code of Conduct and the Guiding Principles is to promote safe and trustworthy AI. As discussed in our September Digest, the voluntary Code of Conduct will provide practical guidance and attempt to create a non-binding rulebook for AI developers. Both documents will be reviewed and updated as necessary, including through multistakeholder consultations, to ensure they remain fit for purpose and responsive to this rapidly evolving technology.
GC Dismisses Request for Interim Relief Against the EU-U.S. Data Privacy Framework. On October 12, the European General Court dismissed the application for interim measures lodged by Philippe Latombe, a French member of the European Parliament, to suspend the application of the EU-U.S. Data Privacy Framework (the Data Bridge), which was discussed in the October Digest. The General Court dismissed the application on the grounds that the urgency required for the adoption of such measures had not been demonstrated. Accordingly, the Data Bridge remains fully applicable for the time being. However, the dismissal has been appealed, although it is not yet clear when the appeal will be determined. The results of these pending proceedings are relevant not only for entities concerned by the Data Bridge, but also for those concerned by the UK-U.S. Data Bridge, which, as discussed in our October Digest, is an extension of the EU Data Bridge.
Updated Code of Practice for Operators and Developers of Apps. On October 13, the UK’s Department for Science, Innovation and Technology published an updated version of the code of practice for app store operators and app developers (Code). The Code was first published on December 9, 2022, with the aim of setting out minimum security and privacy requirements of apps to protect users. As mentioned in our January Digest, the Code applies to all apps, including health-related apps. Some of the changes include:
- Principle 2.7: Instead of the previous requirement that developers should provide users with a mechanism to delete locally held data, developers need only provide a mechanism for users to request deletion of their personal data.
- Principles 3.1 and 3.3.1: The vulnerability disclosure process, which the developer must create and maintain for every app, must be accessible within the app store.
- Principle 8.1: The reporting process for personal data breaches has been clarified such that the operator must inform the developer, and the developer informs other relevant stakeholders.
Operators and developers were initially granted nine months to implement the Code, but based on feedback that some provisions required clarification and that certain barriers to implementation existed, this has been extended by a further nine months. Operators and developers should now comply with the Code by June 2024.
Opinions From the EDPS on AI Act. On October 23, the European Data Protection Supervisor (EDPS) adopted Opinion 44/2023 on the EC proposal for the AI Act in the light of legislative developments. Details on the AI Act can be found in our Advisories here and here. The EDPS sets out a number of recommended changes to the proposal. These include:
- Broadening the scope of the AI Act (e.g., to high-risk AI systems existing prior to its application date)
- Introducing explicit prohibitions on the use of AI systems (e.g., using AI to infer emotions if not used for health or research purposes)
- Introducing additional specifications for high-risk AI systems
- Clarifying elements for cross-border cases involving AI systems (e.g., the definition of national territory)
- Clarifying the tasks, duties and powers of the authorities involved in the implementation of the AI Act, including those of the EDPS
Product Liability Updates
European Parliament Adopts Negotiating Position on the New EU Product Liability Directive. On October 18, the European Parliament (EP) adopted its negotiating mandate on the European Commission’s (EC) proposal for the revised Product Liability Directive (PLD), as discussed in our November 2022 Digest. The EP’s proposed revisions to the PLD are set out in a report dated October 12, 2023. Some of the key changes include clarification that the PLD will not apply to free and open-source software, an extension of the limitation period to 30 years for latent defects, and clarification that the liability of economic operators that make substantial modifications to a product should be limited to the modified part of the product only. The European Council’s negotiating position was published in June 2023 (discussed in our July Digest), and on October 23 the EC, European Council, and EP began trilogue negotiations to agree on the final text of the PLD. The next trilogue is likely to take place in December 2023.
Statement From Industry on the Proposed EU Product Liability Directive. On October 23, the European Federation of Pharmaceutical Industries and Associations, MedTech Europe, and others published an industry statement calling for “a major rethink” of the EC’s proposal for a revised PLD. Industry argues that, as currently proposed, the PLD is unbalanced in favor of consumers. For example, the statement notes that the current draft disproportionately shifts the burden of proof onto defendants and could lead to abusive disclosure exercises. Industry also calls for compensation thresholds to be reintroduced and for further investigation into the effects of including software in the strict liability regime. Overall, industry is concerned that the PLD would lead to an increase in litigation, a reduction in innovation, and much more uncertainty for businesses.
Opinion From the EDPS on the AI Liability Directive. On October 11, the EDPS adopted Opinion 42/2023 on the EC proposals for the revised PLD and the AI Liability Directive. The proposal for the AI Liability Directive aims to ensure that victims of damage caused by AI can obtain protection equivalent to that for damage caused by other products. The EDPS fully endorses this aim and sets out a number of recommended changes to the proposal. These include:
- Ensure individuals who suffer damage caused by AI systems produced or used by EU institutions enjoy the same protection as if the damage were caused by AI systems produced or used by private entities or national authorities.
- Extend the disclosure of evidence mechanism and the rebuttable presumption of a causal link to all AI systems, not just those defined as “high-risk.”
- State that the proposal is without prejudice to the EU GDPR, such that individuals can obtain redress through different avenues.
The following individuals contributed to this Newsletter:
Amanda Cassidy is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Amanda is not admitted to the practice of law.
Eugenia Pierson is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Eugenia is not admitted to the practice of law.
Mickayla Stogsdill is employed as a Senior Policy Specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
Katie Brown is employed as a Policy Advisor at Arnold & Porter’s Washington, D.C. office. Katie is not admitted to the practice of law.
Heba Jalil is employed as a Trainee Solicitor at Arnold & Porter's London office. Heba is not admitted to the practice of law.
© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.