This digest covers key virtual and digital health regulatory and public policy developments during March and early April 2024 from the United States, United Kingdom, and European Union.

In this issue, you will find the following:

U.S. News

U.S. Featured Content

The Centers for Medicare & Medicaid Services (CMS) has been particularly busy so far this spring. On April 2, 2024, CMS issued a final rule updating network adequacy standards for qualified health plans (QHPs). QHPs are insurance plans that are certified by the Health Insurance Marketplace. Then, on April 4, 2024, CMS issued the Medicare Advantage and Part D final rule, including several telehealth-friendly provisions. Taken together, these final rules highlight the continuing transformative impact of telemedicine on federal health care programs.

EU and UK News

EU/UK Featured Content

The EU institutions had a busy week in mid-March. On March 12, 2024, the European Parliament (EP) formally adopted the revised Product Liability Directive, which makes several important changes to the existing European Union (EU) product liability regime, including that software and artificial intelligence (AI) technologies will now fall within the scope of a product. On March 13, 2024, the EP formally adopted the Artificial Intelligence Act, meaning the legislative process for the world’s first binding law on AI is nearing its conclusion. Finally, on March 15, 2024, the Council of the European Union and the EP reached a provisional agreement on the European Health Data Space (EHDS), which aims to improve access to health data electronically across the EU. Each of these important legislative measures should shortly be finalized and will then become law in the EU.

U.S. News

FDA Regulatory Updates

FDA Issues Additional Guidance on Definition of “Cyber Device” Under FDORA. On March 12, 2024, the Food and Drug Administration (FDA) issued draft guidance titled “Select Updates for the Premarket Cybersecurity Guidance: Section 524B of the FD&C Act” (the Draft Guidance), proposing to update a cybersecurity guidance issued in September 2023. The new Draft Guidance helps clarify FDA’s interpretation of the term “cyber device” as used in the cybersecurity provision of Section 3305 of the Food and Drug Omnibus Reform Act of 2022 (FDORA). As codified in Section 524B of the FDCA, FDORA established certain cybersecurity requirements that apply to marketing submissions and certain other applications for “cyber devices.” Section 524B defines a “cyber device” as a device that “(1) includes software validated, installed, or authorized by the sponsor as a device or in a device; (2) has the ability to connect to the internet; and (3) contains any such technological characteristics validated, installed, or authorized by the sponsor that could be vulnerable to cybersecurity threats.”

FDA explains in the Draft Guidance that it “considers a ‘cyber device’ to include devices that are or contain software, including software that is firmware or programmable logic.” This FDA definition is informed in part by the definitions recognized by the National Institute of Standards and Technology (NIST) for the term “software.” Of note, FDA considers the “ability to connect to the internet” to include devices that are able to connect to the internet, whether intentionally or unintentionally, through any means (including at any point identified in the evaluation of the threat surface of the device and the environment of use). Citing reports of medical device ransomware attacks, FDA asserts that “it is well-demonstrated that if a device has the ability to connect to the Internet, it is possible that it can be connected to the Internet, regardless of whether such connectivity was intended by the device sponsor.” Although not an exhaustive list, FDA regards devices that include any of the following features to have the ability to connect to the internet: Wi-Fi or cellular; network, server, or Cloud Service Provider connections; Bluetooth or Bluetooth Low Energy; radiofrequency communications; inductive communications; and hardware connectors capable of connecting to the internet (e.g., USB, ethernet, serial port).

The Draft Guidance also includes recommendations on documentation manufacturers of cyber devices should submit in applicable premarket submissions to comply with the FDORA requirements. Comments on the Draft Guidance are due by May 13, 2024.

FDA Releases Paper on Artificial Intelligence and Medical Products. On March 15, 2024, FDA released a paper titled “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together” (the Paper). The Paper discusses plans by FDA’s Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and the Office of Combination Products (OCP) to align their efforts to advance the responsible use of AI for medical products. These FDA centers and offices intend to take actions in the following four focus areas as further detailed in the Paper: (1) fostering collaboration to safeguard public health; (2) advancing the development of regulatory approaches that support innovation; (3) promoting the development of harmonized standards, guidelines, best practices, and tools; and (4) supporting research related to the evaluation and monitoring of AI performance. In discussing AI, the Paper employs the definition of AI from the Biden Administration’s October 2023 Executive Order on the safe, secure, and trustworthy development and use of AI, under which AI is defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

With respect to what industry can expect in future FDA AI guidance, plans include final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled device software functions; draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions; and draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products.

The Paper is intended to complement the “Artificial Intelligence and Machine Learning Software as Medical Device Action Plan,” which FDA published in January 2021.

FDA Clears First Prescription Digital Therapeutic App for the Treatment of Depression. On March 30, 2024, FDA granted a 510(k) clearance to Otsuka America Pharmaceutical (Otsuka) for Rejoyn™ (also known as CT-152), a prescription digital therapeutic for the treatment of Major Depressive Disorder (MDD) symptoms as an adjunct to clinician-managed outpatient care for adult patients with MDD aged 22 years and older who are on antidepressant medication. Rejoyn is a prescription smartphone app-based digital therapeutic, which delivers a proprietary interactive cognitive-emotional and behavioral therapeutic intervention. In an April 1, 2024 press release, Otsuka and Click Therapeutics, Inc. describe Rejoyn as offering “a novel approach to the treatment of depression symptoms as it is designed to target neural networks affected by depression and is hypothesized to leverage the brain’s inherent neuroplasticity to alter those connections leading to symptom reduction over time.” The device description in the 510(k) decision summary states that Rejoyn is not intended to be used as a stand-alone therapy or as a substitution for the patient’s clinician prescribed medications.

FDA Announces Grant Opportunity for Research Related to Digital Health Technologies. On March 14, 2024, FDA announced a grant opportunity to address topics related to the use of digital health technologies for remote data acquisition in clinical investigations to support drug development. The closing date for applications is May 20, 2024.

Health Care Fraud and Abuse Updates

Nurse Pleads Guilty in Telemedicine Pay-Per-Referral Scheme. On March 8, 2024, Jean Wilson, a licensed nurse practitioner and the owner of two telemedicine companies, pleaded guilty for her role in a US$136 million conspiracy to defraud Medicare. Wilson owned Advantage Choice Care LLC and Tele Medcare LLC, as well as two orthotic brace suppliers, Southeastern DME and Choice Care Medical. Through these companies, Wilson allegedly recruited and bribed medical professionals into signing prescriptions for orthotic braces and prescription drugs that were medically unnecessary, ineligible for Medicare reimbursement, and/or not provided as presented to federal health care programs. In some instances, the Department of Justice (DOJ) alleges, Wilson only paid the providers when they signed the orthotic brace orders.

This pay-per-referral scheme is the exact type of payment structure that the Department of Health and Human Services (HHS) Office of Inspector General (OIG) noted as a “suspect characteristic,” suggesting a “heightened risk of fraud and abuse,” in its July 2022 Special Fraud Alert (SFA). Specifically, OIG stated in the SFA that paying medical practitioners a fee corresponding with the volume of reimbursable items or services ordered or prescribed implicates the federal anti-kickback statute and risks “corrupt[ing] medical decision-making, driv[ing] inappropriate utilization, and result[ing] in patient harm.” See Arnold & Porter’s Advisory on OIG’s July Special Fraud Alert for more information.

Corporate Transactions Updates

Artificial Intelligence in the Operating Room: Johnson & Johnson Partners With NVIDIA to Utilize AI in Surgery. On March 18, 2024, Johnson & Johnson MedTech announced it partnered with Silicon Valley giant NVIDIA to accelerate the integration of AI use during surgical procedures. The partnership aims to improve real-time data analysis in the operating room by integrating NVIDIA’s health care AI platform into Johnson & Johnson MedTech’s surgical technologies. If successful, the collaboration could improve surgical decision-making and promote better outcomes for patients.

The memorandum of understanding provides that Johnson & Johnson MedTech will utilize NVIDIA’s Holoscan edge AI platform to create infrastructure to deploy AI-powered software applications in the operating room. Johnson & Johnson MedTech has an extensive portfolio of technologies for use during surgical procedures that could benefit from real-time analysis with AI, including Shockwave’s technologies, which Johnson & Johnson announced this month it would acquire in a US$13.1 billion deal.

The announced partnership is part of a larger trend. This past year, a number of surgical technology companies have obtained FDA clearance to utilize their AI-powered tools in the operating room, including Proprio, which developed a surgical navigation platform utilizing light field technology for real-time 3D visualization of surgery, and Zeta Surgical, which created technology offering real-time guidance with millimeter precision during surgery. Further, AI continues to be the number one funding priority in the digital health world, with AI-enabled companies capturing 40% of total first-quarter 2024 digital health funding, or US$1.1 billion across 45 deals.

Provider Reimbursement Updates

CMS Issues Notice of Benefit and Payment Parameters for 2025 Final Rule. On April 2, 2024, CMS issued a final rule regarding certain health care marketplace regulations. 89 Fed. Reg. 26218. With respect to QHPs, the final rule provides that, for plan years beginning on or after January 1, 2026, state exchanges and state-based exchanges on the federal platform must require that all issuers seeking certification of a plan as a QHP submit information to the exchange on whether network providers offer telehealth services. Id. at 26332. The agency stated that this data will not be displayed to customers but will instead help inform the future development of telehealth standards. Id. at 26388.

CMS Issues 2025 Medicare Advantage and Part D Final Rule. On April 4, 2024, CMS issued a final rule that made a variety of changes to the Medicare Advantage Program and the Medicare Prescription Drug Benefit Program. 89 Fed. Reg. 30448. The provisions below reflect the evolving role of telehealth in federal health care programs.

  • Medicare Advantage rules require that certain types of plans meet specified network adequacy standards to ensure there is a sufficient number of providers and facilities to furnish covered services to enrollees in the network service area. Plans may receive a 10-percentage point credit towards the percentage of beneficiaries that must reside within required time and distance standards when the plans contract with telehealth providers of certain specialty types; a simplified illustration of how such a credit applies appears after this list. The final rule adds a new Outpatient Behavioral Health facility-specialty type to the list. Id. at 30491. The agency emphasized that the telehealth credit is designed to encourage the use of telehealth services but is not a replacement for in-person care. Id. at 30494.
  • Facility-based institutional special needs plans (FI-SNPs) are a type of Medicare Advantage plan that restricts enrollment to residents of long-term care facilities. Given the unique nature of the FI-SNP model of care, the final rule outlines an exception to current network adequacy requirements for FI-SNPs that provide sufficient and adequate access to basic benefits through additional telehealth benefits. Id. at 30674.
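
As a simplified illustration of the telehealth credit described above, the sketch below applies a 10-percentage-point reduction to a hypothetical base standard; the 85% figure is an assumed example, not a number taken from the final rule.

```python
# Hypothetical sketch of how a 10-percentage-point telehealth credit could be applied
# to a network adequacy percentage standard; the 85% base figure is illustrative only.
TELEHEALTH_CREDIT_POINTS = 10

def required_beneficiary_share(base_standard: float, has_telehealth_contract: bool) -> float:
    """Share of beneficiaries that must reside within the time and distance standards."""
    if has_telehealth_contract:
        return max(base_standard - TELEHEALTH_CREDIT_POINTS, 0.0)
    return base_standard

print(required_beneficiary_share(85.0, has_telehealth_contract=False))  # 85.0
print(required_beneficiary_share(85.0, has_telehealth_contract=True))   # 75.0
```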

Policy Updates

Federal Appropriations Funding Update. In March, Congress passed, and the President signed into law, a Fiscal Year 2024 minibus appropriations funding package, which included provisions such as (1) an extension of the Geographic Practice Cost Index floor for physician work under the Medicare Physician Fee Schedule (MPFS) until January 1, 2025; (2) a one-year extension (from 2025 to 2026) of incentive payments for participation in certain alternative payment models, with a reduction of the incentive from 3.5% to 1.88% for that extension year; and (3) an increase of 1.68% in the 2024 conversion factor under the MPFS, in addition to the 1.25% increase that Congress passed as part of the Consolidated Appropriations Act, 2023 (Pub. Law 117-328), for a total increase of 2.93%. The minibus bill also includes a one-month extension of the Medicare sequester but does not include more contentious provisions related to pharmacy benefit managers, hospital transparency requirements, and site-neutral payments, nor does it reauthorize the SUPPORT Act or the Pandemic and All-Hazards Preparedness Act. These additional policies could be considered during the lame-duck session in December 2024.

House Ways & Means Committee Considers Telehealth Permanency. On March 12, 2024, the House Ways & Means Committee held a hearing titled “Enhancing Access to Care at Home in Rural and Underserved Communities.” Several members discussed their support for making COVID-19 pandemic-era telehealth waivers permanent, including Chair Jason Smith (R-MO), who expressed his support for increasing access to audio-only telehealth for individuals living in rural and underserved areas.

HHS Secretary Testifies to Congress on 2025 Budget Request. On March 14, 2024, the Senate Finance Committee held a hearing on President Joe Biden’s fiscal year 2025 budget request for HHS, which included an exchange between HHS Secretary Xavier Becerra and Ranking Member Mike Crapo (R-ID) on the importance of extending telehealth flexibilities that currently expire at the end of 2024. On March 20, 2024, the House Appropriations and Ways & Means Committees held back-to-back hearings with Secretary Becerra, which resulted in a commitment from HHS to work with Congress toward long-term solutions related to telehealth. Rep. Kevin Hern (R-OK) discussed his support for the Access to Prescription Digital Therapeutics Act of 2023 (H.R. 1458), which would expand Medicare and Medicaid coverage of prescription digital therapeutics, citing their “unique and promising ways to help cancer patients and those with behavioral health issues,” and Secretary Becerra said he would be open to working with Congress on the issue.

Privacy and AI Updates

House Committee Considers New Privacy and AI Legislation. On April 17, 2024, the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce held a hearing on several proposed bills related to information privacy and artificial intelligence, including a discussion draft of the proposed American Privacy Rights Act (the APRA). The draft APRA, released on April 7, 2024 by House Committee on Energy and Commerce Chair Cathy McMorris Rodgers (R-WA) and Senate Committee on Commerce, Science, and Transportation Chair Maria Cantwell (D-WA), has garnered significant attention as a possible solution to the roadblocks faced by its predecessor bill in the last Congress, the American Data Privacy and Protection Act. The key provisions of the draft APRA that apparently contributed to its bipartisan sponsorship are:

  • Preemption. The draft APRA would preempt many provisions of current state privacy laws. The draft expressly states its purpose to “establish a uniform national privacy and data security standard in the United States to prevent administrative cost burdens placed on interstate commerce.”
  • Private right of action. Individual consumers could sue an entity subject to the APRA for violations of their rights, and if successful, recover actual damages, injunctive relief, declaratory relief, and reasonable attorney’s fees and costs. (The Federal Trade Commission and State Attorneys General would also have enforcement authority.)
  • Exemptions for information handled in compliance with certain existing privacy laws. The APRA would consider entities otherwise within its applicable scope to be exempt from its privacy requirements to the extent that such entities (1) are regulated under one or more of certain other federal privacy regimes, including HIPAA’s privacy regulations and the federal rules governing the protection of substance use disorder information and (2) are in compliance with those other privacy regimes.

For health care providers, health insurance plans, and their service providers in their role as HIPAA “business associates,” the APRA would likely have minimal impact on their operations if the exemption for HIPAA-regulated entities is maintained as currently drafted. The impact of the proposed bill on others involved in processing personal health information, however, would be considerably less advantageous. While the draft bill preempts many aspects of state law, it expressly preserves from preemption “provisions of laws that protect the privacy of health information, health care information, medical information, medical records, HIV status, or HIV testing.” Thus, even though many aspects of state laws such as the California Consumer Privacy Act would be preempted, it appears that the provisions in those laws that are specific to “sensitive” personal information, including health information, would remain intact, as would health-specific state laws such as the Washington State My Health My Data Act.

At its April 17, 2024 hearing, the House subcommittee also heard testimony on the Algorithmic Accountability Act (AAA), H.R. 5628, which was introduced in the House by Rep. Yvette Clarke (D-NY) and in the Senate by Sens. Ron Wyden (D-OR) and Cory Booker (D-NJ). The AAA would require the Federal Trade Commission (FTC) to adopt regulations requiring impact assessments of “automated decision systems and augmented critical decision processes.” Under those regulations, entities meeting certain revenue or personal data processing thresholds that use (or expect to use) an automated decision system in an “augmented critical decision process” would be required to undertake various assessment activities, including performing ongoing testing and evaluation of the current and historical performance of the system and decision process, assessing the information security measures in place with respect to such system and process, and documenting the current and potential future or downstream positive and negative impacts of such system and process on consumers.

FTC Takes Action Against Two Online Health Care Providers for Failure To Protect Patient Privacy. In two separate actions, the FTC recently used its enforcement powers to seek settlements with online health care providers.

First, the FTC referred to the DOJ claims against the online alcohol addiction treatment service Monument, Inc. for allegedly deceiving the users of its service about how it used and shared sensitive health data. Based on those claims, the DOJ filed a federal district court complaint against Monument on April 12, 2024, alleging that Monument deceptively shared its users’ sensitive health data with third-party advertising platforms such as Meta and Google. Although Monument did not share customers’ actual medical records, it allegedly disclosed users’ health information to third-party advertising platforms via tracking technologies integrated into Monument’s website and also shared hashed versions of its users’ email addresses, which Meta allegedly could re-identify to the individual users. The charges in the complaint include violations of both Section 5 of the FTC Act and the Opioid Addiction Recovery Fraud Prevention Act of 2018 (OARFPA), which prohibits deceptive practices with respect to any substance use disorder treatment service or product. If executed, the proposed order of settlement, filed together with the complaint, would require Monument, among other things, to pay a fine of US$2.5 million, to permanently cease sharing health data for advertising purposes, to obtain affirmative express consent for any other disclosure of health data, and to direct all third parties with whom it shared user data to delete such data.
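
To illustrate why sharing hashed email addresses does not by itself anonymize users, a general technical point rather than a detail drawn from the complaint, the minimal sketch below uses entirely hypothetical email addresses and account data. Because a cryptographic hash is deterministic, any platform that already holds a user’s email address can compute the same hash and match the shared value back to that individual.

```python
import hashlib

def hash_email(email: str) -> str:
    """Return the SHA-256 hash of a normalized (trimmed, lowercased) email address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# A service shares only the hash of a user's email address with an advertising platform...
shared_hash = hash_email("patient@example.com")

# ...but the platform already holds email addresses for its own account holders, so it can
# hash them the same way and look for a match, linking the shared hash back to a person.
platform_accounts = {
    hash_email("someone@example.org"): "account-1",
    hash_email("patient@example.com"): "account-2",
}

print(platform_accounts.get(shared_hash))  # -> "account-2"
```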

Second, as announced by the FTC on April 15, 2024, telehealth provider Cerebral, Inc. has agreed to settle the FTC’s charges, submitted in a complaint in federal district court in Florida, that it failed to protect its patients’ sensitive health information. Like the complaint against Monument, the complaint against Cerebral alleges that the company repeatedly acted inconsistently with its representations as to its protections of users’ privacy, and thereby violated Section 5 of the FTC Act, and violated OARFPA by engaging in unfair and deceptive practices with respect to substance use disorder treatment services. Specifically, the complaint alleges that Cerebral shared sensitive information of almost 3.2 million consumers with social media companies such as LinkedIn, Snapchat, and TikTok through tracking technologies employed on Cerebral’s website or apps. Through the use of these tracking tools, Cerebral allegedly shared personal data about its users’ medical and prescription histories; email addresses; phone numbers; birthdates; IP addresses; pharmacy and health insurance information; and other health information.

Cerebral reportedly has agreed to a proposed settlement order under which it will be required to pay more than US$7 million and, among other things, will (1) be permanently banned from using or disclosing consumers’ personal and health information to third parties for most marketing or advertising purposes; (2) generally be required to obtain consumers’ consent before disclosing such information to outside parties; (3) implement a comprehensive privacy and data security program that, among other things, addresses the specific problems outlined in the complaint; (4) post a notice on its website alerting users to the allegations outlined in the complaint and detailing the steps it is required to take under the order; and (5) implement a data retention schedule and provide consumers with a clear mechanism to request that their data be deleted.

EU and UK News

Regulatory Updates

European Parliament Adopts the AI Act. On March 13, 2024, the Members of the European Parliament formally adopted the Artificial Intelligence Act (AI Act). Following a lengthy negotiation period since the initial proposal by the European Commission in April 2021, the legislative process for the world’s first binding law on AI is nearing its conclusion. For further details on the negotiations surrounding the text of the AI Act, see our January 2023 Advisory.

There are several provisions of the AI Act that are worth mentioning; in particular:

  • Harmonized rules for placing on the market, putting into service, and using AI systems in the EU, such as producing codes of practice for low-risk AI systems; making the placing on the market of high-risk systems (which will include medical devices) subject to the presentation of technical documentation proving compliance with requirements; and mandatory registration in the EU database prior to the use of certain AI systems
  • Prohibitions of certain AI practices considered to threaten citizens’ rights, such as biometric categorization systems based on sensitive characteristics, with exceptions for AI systems intended strictly for medical purposes
  • Specific requirements for high-risk AI systems and their operators, such as classifying AI systems based on risk levels in accordance with their potential risks and level of impact and imposing strict requirements on high-risk AI systems, including medical devices (although there are provisions that acknowledge parallel requirements under the EU Medical Devices Regulation)
  • Harmonized transparency rules for certain AI systems, such as specifying the minimum information that the instructions for use accompanying high-risk AI systems must contain; requiring disclosure when content has been artificially generated or manipulated; marking the outputs of AI systems in a machine-readable format so that they are detectable as artificially generated or manipulated; and complying with EU copyright law
  • Rules on market monitoring, market surveillance governance, and enforcement, for instance, obligations to report serious incidents to the authorities
  • Measures to foster innovation, such as establishing regulatory sandboxes and real-world testing to facilitate the development and testing of innovative AI before placing on the market

The legislation extends its jurisdiction broadly, applying to AI systems operating within the EU and to AI systems outside the EU whose output is introduced into the EU market. Therefore, companies at every stage of the AI development process must carefully monitor the AI Act’s provisions, irrespective of where they are located geographically. The AI Act now awaits formal adoption by the Council of the EU, expected to take place in April, before it can become law.

DHSC and MHRA Accept Recommendations to Tackle Biases in Medical Devices. On March 11, 2024, the UK government’s Department of Health and Social Care (DHSC) and Medicines and Healthcare products Regulatory Agency (MHRA) set out their planned measures to address racial, ethnic, and other biases in the design and use of medical devices. The measures are in response to an independent report on “equity in medical devices,” which was commissioned by DHSC over concerns that pulse oximeters were not as accurate for patients with darker skin tones. The report made 18 recommendations in order to tackle potential bias, which the DHSC and MHRA have fully accepted. The MHRA will now request that applicants describe how bias will be addressed in applications for approvals of medical devices and will publish strengthened guidance for developers on how to improve diversity in the development and testing stages. The DHSC will also support work to remove racial bias in data used in clinical studies and improve the transparency of data used in the development of medical devices using AI.

Second Reading of Private Members’ Bill in the UK’s House of Lords on the Topic of AI Regulation. On March 22, 2024, a Private Members’ bill, called the Artificial Intelligence (Regulation) Bill, had its second reading in the House of Lords. The bill was first introduced in the House of Lords on November 23, 2023. The main purpose of the bill is to establish a central AI authority to coordinate and monitor the regulatory approach to AI, while promoting transparency, reducing bias, and balancing regulatory burden against risk. Although the bill largely tracks the government’s white paper setting out its pro-innovation approach to the regulation of AI, it seeks to introduce those provisions into law. In contrast, and as set out in previous digests, it is currently not the government’s intention to introduce AI-specific legislation; instead, it intends to develop a set of core principles for regulating AI while leaving regulatory authorities, like the MHRA, discretion over how the principles apply in their respective sectors. A briefing report, published on March 18, 2024, describes the bill in more detail, and a full debate was held on March 22, 2024. The bill will now move to the committee stage in the House of Lords, where it will be scrutinized line by line. It will then proceed through a number of additional stages within the House of Lords before repeating the process in the House of Commons. Only a minority of Private Members’ bills become legislation, and it is unlikely that this bill will become law given the government’s position. Even so, it is clear there is a growing debate in the UK about whether the government’s proposed approach to AI is correct.

Privacy Updates

Council and European Parliament Reach Provisional Agreement on the EHDS. On March 15, 2024, the Council of the EU and the EP reached a provisional agreement on the regulation creating an EHDS. The regulation aims to improve access to health data electronically across the EU. It is important to note that the text of the agreement has not yet been published and that this summary is solely based on press releases from March 15, 2024 and March 22, 2024. More details on the provisional agreement are included in our March 2024 blog. The agreement now needs to be formally adopted by the Council of the EU and the EP before it can become law.

Key elements of the agreement that are worth mentioning include:

  • Broad definition of health data, including health records, clinical trial data, health claims and reimbursement information; pathogen genetic and other human molecular data; or aggregated data on health care resources, expenditure, and financing
  • Limits to access to health data:

    • Permission is necessary prior to accessing data and is granted by a health data access body.
    • Patients have the right to object to secondary use of their data, subject to certain conditions (i.e., an opt-out mechanism), except when requested by a public body for public interest purposes; Member States may introduce further measures for certain data (e.g., genomic data).
    • Measures are in place in case of non-compliance, such as revoking data permits, excluding access to the EHDS for up to five years, or imposing periodic penalty payments.
    • Patients will be informed each time their data is accessed, and information about the data applicant, including the purposes for accessing the data, the expected benefit, the safeguards in place, and the justified estimated processing period, will be made public.
  • Limits to sharing of health data:

    • Data can only be shared in an anonymized or pseudonymized format with third parties mentioned in the data permit and only for public interest purposes (such as research and innovation).
    • Data is not permitted to be shared for advertising or assessing insurance requests.
    • Member States may introduce stricter measures regarding access to specific types of sensitive data (such as genetic, epigenomic and genomic data, and human molecular data).
  • The secondary use of electronic health data covered by IP and regulatory data protection rights, as well as trade secrets, is possible if it follows principles outlined in the regulation (for instance, informing the health data access body and justifying what exactly needs protection).
  • Health data transfers to third countries must comply with General Data Protection Regulation requirements and additional measures will be specified in a Delegated Act; data must be stored in the EU or in a country subject to a data protection adequacy decision by the European Commission.
  • A stakeholder forum will provide input on the EHDS and facilitate cooperation to ensure implementation.

DARWIN EU Calls for New Data Partners To Add to Its Network. On March 6, 2024, the European Medicines Agency (EMA) announced that the Data Analysis and Real World Interrogation Network (DARWIN EU) will expand and is looking for 10 new data partners to add in 2024. DARWIN EU is a coordination center created by the EMA and the European Medicines Regulatory Network to provide timely and reliable real-world evidence (RWE) on the use, safety, and effectiveness of medicines for human use from real-world health care databases across the EU. The results of the studies carried out are made public in the new Heads of Medicines Agencies-European Medicines Agency (HMA-EMA) Catalogue of real-world data studies. DARWIN EU obtains anonymized patient data from data partners, who generate RWE from sources such as hospitals, primary care, health insurance, registries, and biobanks to support regulatory activities of EMA’s scientific committees and national regulators in the EU. At the moment, DARWIN EU’s data partners include 20 public or private institutions from 13 European countries. The call to become a data partner is open to any data custodian in Europe and will remain open until the end of 2024. However, applications will be reviewed and selected twice a year; applications received on or before April 30 and October 31, 2024 will be considered.

Reimbursement Updates

Flash Glucose Monitoring Systems Can Now Be Prescribed and Reimbursed in an Italian Region. The Italian Lombardy Region has adopted Resolution No. XII/1827, introducing new regional eligibility criteria allowing Flash Glucose Monitoring Systems to be prescribed and reimbursed. The resolution recommends Flash Glucose Monitoring Systems for patients with type 1 diabetes mellitus (whether decompensated or non-decompensated); patients with type 2 diabetes mellitus on basal insulin therapy; and, for a limited period of three months, patients with type 2 diabetes receiving oral hypoglycemic therapy. It is the first time that glucose monitoring devices are reimbursed in Europe for type 2 diabetic patients on oral hypoglycemic therapy. Flash Glucose Monitoring Systems allow diabetic patients to self-monitor their health status.
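
As a purely illustrative sketch of the eligibility categories summarized above, the function below encodes them as a simple decision rule; the function name, parameters, and return strings are hypothetical and are not taken from the resolution’s text.

```python
# Hypothetical, simplified sketch of the reimbursement categories described in
# Lombardy Resolution No. XII/1827 as summarized above; not the resolution's wording.
def flash_glucose_monitoring_eligibility(diabetes_type: int, therapy: str) -> str:
    if diabetes_type == 1:
        return "eligible"  # type 1 diabetes mellitus, decompensated or non-decompensated
    if diabetes_type == 2 and therapy == "basal insulin":
        return "eligible"
    if diabetes_type == 2 and therapy == "oral hypoglycemic":
        return "eligible for a limited period of three months"
    return "not covered by the resolution"

print(flash_glucose_monitoring_eligibility(2, "oral hypoglycemic"))
# -> "eligible for a limited period of three months"
```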

Product Liability Updates

Revised EU Product Liability Directive One Step Closer To Becoming Law. On March 12, 2024, the EP formally adopted the revised Product Liability Directive (PLD). The Council of the EU is now expected to do the same without further amendments, after which the PLD will be published and will enter into force. This follows the provisional trialogue agreement reached on December 14, 2023 between the European Commission, EP, and Council (discussed in our January 2024 digest). The revised PLD makes several important changes to the existing EU product liability regime. For example, software and AI technologies will now fall within the scope of a product, there will be disclosure obligations on manufacturers, and the burden of proof for claimants has been alleviated through the introduction of several rebuttable presumptions. The EU Member States will have 24 months to transpose the measures into national law from the date the revised PLD enters into force.

IP Updates

Getty Images v. Stability AI: Getty Files Its Reply to Stability AI’s Defense. As we reported in our January 2023 digest, there is a significant ongoing case between Getty Images and Stability AI in the UK High Court testing the boundaries of copyright infringement; in particular, whether the use of a dataset obtained by scraping millions of images from websites owned by Getty Images (a dataset that also included images from a number of other well-known sources, including Pinterest, Flickr, Wikimedia, and Tumblr) to train Stability AI’s text-to-image generator, Stable Diffusion, amounts to copyright infringement in the UK. Getty Images also claims that the output produced by Stable Diffusion reproduces a substantial part of Getty Images’ copyrighted works, amounting to a separate act of copyright infringement. There are also claims of database right infringement and trademark infringement, which we do not discuss further in this update.

In the latest pleading filed by Getty Images on March 28, 2024, Getty Images disputes Stability AI’s allegation that there is a separate database and, instead, it asserts that the calculations described in Stability AI’s defense are part of the AI model itself. Getty Images maintains that, to generate synthetic images, an AI model must learn from images contained within the datasets on which it is trained. In relation to the output generated by Stability AI, Getty Images argues that Stability AI has a high degree of control over the features of Stable Diffusion and that it has failed to design the model so as to prevent it from generating synthetic image outputs which comprise a reproduction of a substantial part of the input image and which therefore infringe Getty Images’ copyright. Getty Images also intends to rely on indirect and/or “subconscious” copying arising from the fact that Stable Diffusion was trained on the copyrighted images. We will continue to monitor this high-profile case.

*The following individuals contributed to this Newsletter:

Amanda Cassidy is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Amanda is not admitted to the practice of law.
Eugenia Pierson is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Eugenia is not admitted to the practice of law.
Sonja Nesbit is employed as a Senior Policy Advisor at Arnold & Porter’s Washington, D.C. office. Sonja is not admitted to the practice of law.
Mickayla Stogsdill is employed as a Senior Policy Specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
Katie Brown is employed as a Policy Advisor at Arnold & Porter’s Washington, D.C. office. Katie is not admitted to the practice of law.
Jonathan Mellor is employed as a Trainee Solicitor at Arnold & Porter's London office. Jonathan is not admitted to the practice of law.

© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.