
This digest covers key virtual and digital health regulatory and public policy developments during February 2024 from the United States, United Kingdom, and European Union.

In this issue, you will find the following:

U.S. News

U.S. Featured Content

On February 28, 2024, the White House released an Executive Order (EO) aimed at restricting access by “certain countries of concern” to Americans’ sensitive personal data, including genomic and personal health data, and U.S. government-related data. According to the EO, certain foreign governments are seeking access to such data “to engage in a wide range of malicious activities,” which may involve the use of “advanced technologies, including artificial intelligence (AI), to analyze and manipulate bulk sensitive personal data to engage in espionage, influence, kinetic, or cyber operations or to identify other potential strategic advantages over the United States.” The EO directs the Department of Justice (DOJ), the Department of Homeland Security (DHS), and other federal agencies to take a variety of regulatory actions to prevent the adverse collection, use, and disclosure of bulk U.S. sensitive personal data and U.S. government-related data.

The DOJ has already issued a draft Advance Notice of Proposed Rulemaking (ANPRM) pursuant to the EO. Comments on the ANPRM are due on April 19, 2024.

EU and UK News

EU/UK Featured Content

The UK continues to pursue a “pro-innovation,” flexible approach to the regulation of AI. As outlined in the UK government's response to the public consultation, the government will develop a set of core principles for regulating AI, while leaving regulatory authorities, like the Medicines and Healthcare products Regulatory Agency (MHRA), discretion over how the principles apply in their respective sectors. A central governmental function will coordinate regulation across sectors and encourage collaboration. The government’s aim with this approach is to enable the UK to remain flexible enough to address the changing AI landscape, while being robust enough to address key concerns. This is in sharp contrast to the position in the EU, where the EU AI Act is reaching the conclusion of the legislative process.

U.S. News

FDA Regulatory Updates

FDA Adds the Digital Health Advisory Committee (DHAC) to the List of Standing Advisory Committees. On February 22, 2024, FDA issued an immediately effective rule amending its advisory committee regulations (21 C.F.R. Part 14) to add the DHAC to the list of standing advisory committees. The DHAC, which FDA first announced in October 2023, is being created to help the agency explore the complex, scientific, and technical issues related to digital health technologies (DHTs), such as artificial intelligence/machine learning, augmented reality, virtual reality, digital therapeutics, wearables, remote patient monitoring, and software. The newly amended regulation describes the function of the DHAC as advising the FDA commissioner or designee in discharging responsibilities as they relate to ensuring that DHTs intended for use as a stand-alone medical product, as part of a medical product, or as a companion, complement, or adjunct to a medical product are safe and effective for human use. The DHAC will consist of a core of nine voting members including the chair.

FDA Warns Against Using Unapproved Smartwatches To Measure Blood Glucose Levels and Clears First OTC Continuous Glucose Monitor. On February 21, 2024, FDA issued a safety communication warning consumers, patients, caregivers, and health care providers about risks related to using unapproved smartwatches or smart rings that claim to measure blood glucose levels without piercing the skin. According to FDA, these smartwatches and smart rings are manufactured by dozens of companies and are sold under various brand names. The devices at issue differ from applications that display data from FDA-authorized blood glucose measuring devices that pierce the skin (e.g., continuous glucose monitors (CGMs)). FDA explains that it has not authorized, cleared, or approved any smartwatch or smart ring that is intended to measure or estimate blood glucose levels on its own. FDA cautions that inaccurate blood glucose measurements can result in improper diabetes management.

Two weeks after issuance of the safety communication, on March 5, 2024, FDA announced that it cleared for marketing the first over-the-counter (OTC) continuous glucose monitor. The Dexcom Stelo Glucose Biosensor System is an integrated CGM intended for individuals 18 years and older who do not use insulin, such as individuals with diabetes treating their condition with oral medications or those without diabetes who want to better understand how diet and exercise may impact blood sugar levels. Per FDA, the system is not for individuals with problematic hypoglycemia (low blood sugar) as the system is not designed to alert the user to this potentially dangerous condition. The Stelo Glucose Biosensor System consists of a wearable sensor, paired with an application installed on a user’s smartphone or other smart device, to continuously measure, record, analyze, and display glucose values. Commenting on the clearance of the device, Dr. Jeff Shuren, Director of FDA’s Center for Devices and Radiological Health (CDRH), described CGMs as a powerful tool for monitoring blood glucose, and stated that “[g]iving more individuals valuable information about their health, regardless of their access to a doctor or health insurance, is an important step forward in advancing health equity for U.S. patients.”

FDA Clears First Over-The-Counter Fingertip Pulse Oximeter. On January 31, 2024, FDA granted a 510(k) clearance to Masimo Corporation’s MightySat-OTC. According to Masimo, this is the first FDA clearance for an OTC fingertip pulse oximeter. MightySat-OTC is indicated for spot-checking functional oxygen saturation of arterial hemoglobin and pulse rate. It is intended only for use by individuals 18 years and older, who are well or poorly perfused, under no-motion conditions. The cleared indication specifies that MightySat-OTC is not intended for the diagnosis or screening of lung disease, and treatment decisions based on the device should only be made under the advice of a health care provider.

FDA Official Comments on Experiences With Predetermined Change Control Plans (PCCPs). On February 21, 2024, at the Association for the Advancement of Medical Instrumentation/FDA neXus medical device standards conference, Jessica Paulsen, associate director for digital health at CDRH’s Office of Product Evaluation and Quality, discussed learnings to date from FDA’s and industry’s experience with PCCPs. As reported by Regulatory Focus, Paulsen indicated that industry is struggling with how much information to include in PCCPs and the types of changes that a PCCP can be used for. Notably, Paulsen discussed attempts by some sponsors to use PCCPs for modifications to a device’s intended use, which she stated would not be appropriate for a PCCP in most cases. Paulsen also addressed comments about the potential use of PCCPs for certain types of manufacturing and materials changes (e.g., per- and polyfluoroalkyl substances), suggesting that such areas should be further explored with FDA. Paulsen recommended that sponsors include only a “handful” of proposed modifications in a PCCP and that sponsors considering use of a PCCP meet with FDA early on through the presubmission process.

Health Care Fraud and Abuse Updates

Health Care Fraud and Abuse Composes Lion’s Share of Recoveries in Banner Year for DOJ FCA Judgments and Settlements; Trend Continues in 2024. Some health care entities, including providers, hospitals, pharmacies, and laboratories, helped set a new record in 2023, and not the kind of record that helped their bottom line.

In February, the DOJ released its False Claims Act (FCA) year-in-review, citing a monumental 543 settlements and judgments, the highest number that the government and whistleblowers have been party to in a single year to date. Health care matters composed the majority of the recoveries. Of the more than US$2.68 billion in FCA settlements and judgments reported by the DOJ, over US$1.8 billion related to matters that involved the health care industry, including one settlement involving false claims for remote cardiac monitoring.

As we reported in our February 2024 digest, the uptick in telemedicine fraud and abuse enforcement continues into 2024. On February 16, 2024, Steven Richardson was charged with, and pleaded guilty to, participating in a US$110 million telemedicine fraud scheme involving medically unnecessary durable medical equipment (DME), including orthotics like knee and back braces. Richardson owns Expansion Media (Expansion) and Hybrid Management Group (Hybrid) and allegedly used both companies to enter into business relationships that generated leads by targeting Medicare beneficiaries. Telemarketers would then allegedly pay Expansion and Hybrid on a per-order basis to generate prepopulated DME orders for these beneficiaries. The orders were signed by doctors and nurses Richardson allegedly found by working with medical staffing companies; these doctors and nurses would typically sign the orders without having any contact with the beneficiaries. Richardson would then allegedly provide the signed orders to the telemarketing companies, which sold the orders to DME suppliers, who in turn used the orders to submit medically unnecessary claims to Medicare. In a similar case, Kareem Memon pleaded guilty on February 8, 2024, for his role in a DME kickback scheme. He and his co-conspirators owned and operated marketing call centers and telemedicine companies, using these entities to obtain doctors’ orders for medically unnecessary DME for Medicare beneficiaries.

Corporate Transactions Updates

Let's Get Together, Yeah Yeah Yeah, Why Don't You and I Combine (Digital Health Version). Digital health companies are facing the restraints of a challenging funding environment in Q1 2024. To find profitability in a crowded and underfunded market, digital health companies are turning to their peers and larger, more established digital health players to benefit from each other’s customer bases.

On February 21, 2024, digital chronic condition management company DarioHealth announced it acquired digital mental health company Twill in a deal valued at approximately US$40 million. DarioHealth’s customer base is primarily self-insured, middle-market companies, while Twill’s clients include more prominent digital health players including Microsoft, Google, and Amazon. Equipped with Twill’s impressive client base and technology capabilities, DarioHealth aims to create a best-in-class digital health platform to support mental well-being, maternal health, and costly chronic conditions.

Under the cash-and-stock deal, DarioHealth paid US$10 million in cash and will issue approximately 10 million shares of common stock in the form of pre-funded warrants for the benefit of Twill's debt and equity holders. The warrants will vest in four equal amounts at 270 days, 360 days, 540 days, and 720 days post-deal closing. Also concurrent with the acquisition, Dario priced a US$22.4 million private placement of convertible preferred stock.

On February 29, 2024, WebMD Health Corp. announced it will acquire the operating assets of Healthwise, Incorporated, a nonprofit that develops engagement technology for patients. The operating assets to be acquired by WebMD include Healthwise’s client relationships and patient-engagement technology, which will be used to bolster WebMD’s patient engagement and expand its footprint to over 650 health care organizations, including over 50% of hospitals in the U.S. and 85% of the top 20 payers. The acquisition makes WebMD’s patient engagement platform, Ignite, the largest health care growth engagement platform in the country.

The financial details of the deal between WebMD and Healthwise have not been disclosed.

Provider Reimbursement Updates

Health Groups Advocate Telehealth Permanency. On February 22, 2024, a coalition of 216 provider and patient groups released a letter urging Congress to make telehealth flexibilities established during the COVID-19 public health emergency (PHE) permanent. Signatories include hospital systems, patient advocacy groups, and numerous medical professional societies.

As we covered in our November 2023 digest, many flexibilities established during the PHE that expanded telehealth access for Medicare beneficiaries will expire at the end of 2024. These flexibilities include waiving in-person visit requirements for mental health telehealth services and allowing beneficiaries to use telehealth without geographic restrictions. The letter signatories urge Congress to act well in advance of the December 31 deadline in order to create certainty for Medicare beneficiaries, providers, and the Medicare program itself. They further contend that continued investment in telehealth technology and infrastructure, particularly in rural and underserved communities, depends on predictable and consistent telehealth reimbursement policy.

Policy Updates

Senate Finance Committee Holds Hearing on AI’s Impact on Federal Health Programs. On February 8, 2024, the Senate Finance Committee held a hearing titled Artificial Intelligence and Healthcare: Promises and Pitfalls, which examined the use of algorithms and other AI-enabled tools in federal health care programs. Members of both parties suggested making changes to certain Centers for Medicare and Medicaid Services reimbursement policies, which could be effective for regulating the use of AI in the future. Democratic members highlighted the potential for algorithmic discrimination if AI is widely used without federal guardrails, with Chair Ron Wyden (D-OR) expressing support for legislation to require companies to conduct impact assessments of their AI models.

White House Creates AI Safety Consortium. On February 9, 2024, the Biden administration announced the creation of the AI Safety Institute Consortium (AISIC), which convenes over 200 representatives from industry, academia, and civil society to develop AI safety guidelines. The creation of AISIC was mandated by the Biden administration’s Executive Order on Safe, Secure, and Trustworthy AI and is designed to promote standards for AI development while continuing to foster innovation. AISIC’s work will focus on standards for AI risk management, watermarking, capability evaluations, and other considerations.

House Leaders Establish Bipartisan Task Force on AI. On February 20, 2024, House Speaker Mike Johnson (R-LA) and Minority Leader Hakeem Jeffries (D-NY) announced the establishment of a bipartisan Task Force on AI to produce a report of bipartisan policy recommendations related to federal regulation of evolving AI technologies. Led by Reps. Jay Obernolte (R-CA) and Ted Lieu (D-CA), the task force consists of 24 members who will likely work with the existing House and Senate AI caucuses to draft the House’s AI-related legislative package. The package’s timing is still unclear, but it is intended to be available by the end of 2024 or sometime in 2025.

Privacy and AI Updates

Executive Order on Preventing Access to Americans’ Sensitive Data by Countries of Concern. The new EO issued by President Biden to restrict access to bulk U.S. sensitive personal data and certain government-related data by “countries of concern” charges DOJ with designating which nations (foreign governments) are “countries of concern.” The DOJ ANPRM identifies China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela as such countries.

The EO also charges DOJ with promulgating regulations to prohibit or restrict access by such countries to bulk U.S. sensitive personal data and government-related data. As envisaged by the ANPRM, DOJ’s anticipated rules would prohibit, subject to certain exceptions or authorizations, any “U.S. person” from knowingly engaging in (1) certain data transactions with a country of concern; (2) certain transactions involving data brokerage with any foreign person (unless the U.S. person implements certain contractual requirements); or (3) certain data transactions with a country of concern or covered person that provide access to a certain threshold of bulk human genomic or biospecimen data — whether in a single transaction or aggregated across transactions.

In the ANPRM, DOJ suggests definitions of both “sensitive personal data” and “bulk U.S. sensitive personal data.” DOJ would define “sensitive personal data” as including (1) “covered personal identifiers” (for example, a government identification or account number such as a Social Security, passport, or driver's license or state identification number; a device-based or hardware-based identifier; demographic or contact data; an advertising identifier such as a Google Advertising ID or Apple ID for Advertisers; or a network-based identifier such as an Internet Protocol address or cookie data); (2) precise geolocation data; (3) biometric identifiers; (4) human genomic data; (5) personal health data; and (6) personal financial data.

With respect to “bulk U.S. sensitive personal data,” the ANPRM suggests that this term be defined, in relation to relevant transactions, as a threshold amount of certain types of sensitive personal data relating to U.S. persons, in any format, regardless of whether the data is anonymized, pseudonymized, de-identified, or encrypted, if such dataset is accessed through one or more covered data transactions by the same foreign person or covered person. The specific types of data, and the threshold numbers of U.S. persons, that would make data “bulk U.S. sensitive personal data” if so accessed are: (1) human genomic data (100 to 1,000 U.S. persons); (2) biometric identifiers (100 to 1,000 U.S. persons or U.S. devices); (3) precise geolocation data (100 to 1,000 U.S. persons or U.S. devices); (4) personal health data (1,000 to 1 million U.S. persons); (5) personal financial data (1,000 to 1 million U.S. persons); and (6) covered personal identifiers (10,000 to 1 million U.S. persons).
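For readers mapping these category-specific thresholds onto data inventories, the following sketch illustrates the structure of the proposed definition. It is purely illustrative: the ANPRM proposes threshold ranges rather than final figures, and the category names, the low-end threshold values, and the `is_bulk` function below are our own assumptions for the example, not part of any official rule.

```python
# Illustrative only: the ANPRM proposes *ranges* for each threshold; the final
# rule will fix specific values. The low-end figures below are assumptions.
BULK_THRESHOLDS = {
    "human_genomic_data": 100,
    "biometric_identifiers": 100,
    "precise_geolocation_data": 100,
    "personal_health_data": 1_000,
    "personal_financial_data": 1_000,
    "covered_personal_identifiers": 10_000,
}

def is_bulk(category: str, us_persons_count: int) -> bool:
    """Return True if a dataset of this category and size would meet the
    assumed low-end bulk threshold. Note that under the ANPRM's approach
    the test applies regardless of anonymization, de-identification, or
    encryption of the data."""
    threshold = BULK_THRESHOLDS.get(category)
    if threshold is None:
        return False  # not a listed sensitive-data category
    return us_persons_count >= threshold

# Example: health records on 5,000 U.S. persons would exceed the assumed
# low-end threshold for personal health data (1,000 persons).
print(is_bulk("personal_health_data", 5_000))  # True
```

Note that the sketch evaluates a single dataset; the ANPRM contemplates aggregation across transactions with the same foreign or covered person, which a real compliance check would also need to track.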

The ANPRM is just the first indication of the breadth and significance of the EO for U.S. entities that handle sensitive personal information in contexts where such information might be shared with or accessible to a country of concern, including for research and development. Also forthcoming, among numerous other initiatives, are data security requirements to be proposed by the secretary of Homeland Security, acting through the director of the Cybersecurity and Infrastructure Security Agency, to address the “unacceptable risk” posed by restricted transactions. As directed in the EO, the data security requirements are to be based on the Cybersecurity and Privacy Frameworks developed by the National Institute of Standards and Technology.

EU and UK News

Regulatory Updates

UK Government Publishes Further Details on the Regulation of AI in the UK. On February 6, 2024, the UK government published its response to its consultation on regulating AI in the UK. Consistent with its initial consultation, the government proposes to establish a “pro-innovation” framework of principles for regulating AI, while leaving regulatory authorities, such as the MHRA for medicines and devices, discretion over how the principles apply in their respective sectors. According to the government, the aim is that this approach will enable the UK to remain flexible enough to deal with the speed at which AI is developing, while being robust enough to address key concerns.

To assist in this process, the government has committed to providing regulators with funding to train and upskill their workforce to deal with AI and also to develop tools to monitor and address risks and opportunities. In addition, the government has proposed, and has already started to establish, a new central function to coordinate regulatory activity and help address regulatory gaps. The initial key roles of the central function will be to increase coherence between regulators, promote information sharing, and publish cross-sectoral guidance.

The consultation response concludes with the government setting out a roadmap of next steps on AI regulation in 2024. In addition to the steps discussed above, these include collaborating with the AI Safety Institute to address the risks of AI and sharing knowledge with international partners. Although development of a regulatory sandbox is not included in this list of next steps, the response notes that a majority of respondents stated that health care and medical devices would benefit most from an AI sandbox, and it will likely be left to individual regulatory authorities to develop sector-specific sandboxes. The MHRA has already announced its intention to launch a regulatory sandbox, called the “AI-Airlock,” in April 2024 for software and AI medical devices. Further information can be found in our February 22, 2024 blog post.

IMDRF SaMD Working Group Opens Public Consultation on Considerations for Device and Risk Characterization for Medical Device Software. On February 2, 2024, the Software as a Medical Device (SaMD) Working Group of the International Medical Device Regulators Forum (IMDRF) published a Proposed Document titled “Medical Device Software: Considerations for Device and Risk Characterization.” The guidance will apply to the subset of software that meets the definition of a medical device, as defined by the IMDRF. The purpose of the guidance is to promote and inform clear, accurate characterizations of medical device software and to introduce a general strategy for characterizing software-specific risks that leverages the key features of a comprehensive medical device software characterization. The IMDRF guidance is referred to in the EU Medical Device Coordination Group guidance and in the ongoing consultation on medical devices in the UK, so the guidance, once finalized, is likely to have implications for the EU and UK approaches. The working group is inviting comments and feedback from the public until May 2, 2024.

European Parliament Informally Adopts Provisional Agreement on the AI Act. On February 13, 2024, Members of the European Parliament voted in favor of the provisional agreement reached with the Council of the European Union (Council) on December 9, 2023, on the Artificial Intelligence Act (AI Act), discussed in our January 2024 digest. The text now needs to be formally adopted by the European Parliament and the Council, which is expected shortly. The AI Act is then expected to become law later in 2024 and to apply two years after entry into force, except for some provisions that will apply earlier.

European Artificial Intelligence Office Established. On February 14, 2024, the European Commission’s decision of January 24, 2024, establishing the European Artificial Intelligence Office (AIO), was published in the Official Journal of the European Union. The decision forms part of the European Commission’s package of measures to deliver on the twin objectives of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology. The decision entered into force on February 21, 2024, after which the AIO began its operations. The AIO will support the development and use of trustworthy AI, while protecting against AI risks. The AIO was established within the European Commission as the center of AI expertise and forms the foundation for a single European AI governance system.

EU Council Endorses Extension to IVDR Transition Periods and Accelerated Launch of Eudamed. On February 21, 2024, the Council of the European Union endorsed the European Commission proposal to amend the Medical Devices Regulation (EU) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR), as applicable, to extend the transition provisions for certain in vitro diagnostic medical devices under the IVDR; allow for a gradual roll-out of Eudamed so that certain modules will be mandatory from late 2025; and include a notification obligation in case of interruption of supply of a critical device. The details are discussed in our previous February 2024 digest and in our February 7, 2024 blog post. The text will now need to be formally adopted by the EU Parliament and Council.

UK DHSC Announces £10 Million Funding for Innovative Medical Devices. On February 14, 2024, the UK Department of Health and Social Care announced it will provide a £10 million funding package to support eight health tech companies in bringing their innovative medical devices to market. The funding forms part of the UK’s Innovative Devices Access Pathway (IDAP) pilot scheme. As discussed in our October 2023 digest, the IDAP initiative was launched to accelerate the development of innovative medical devices and to help bring those technologies to the NHS. The companies include Avegen Ltd., which is being supported in its development of a multiple sclerosis fatigue smartphone app that delivers exercises, cognitive behavioral therapy, and targeted physical activity in a personally customizable format, and Presymptom Health Ltd., which has developed a new test and algorithm with the potential to predict infection status up to three days before conventional diagnosis is possible. These eight companies will also receive ongoing support from UK government bodies, including the MHRA, to help accelerate the process of obtaining regulatory approval.

UK Advertising Regulator Upholds Complaints Against Advertising of Two Digital Health Apps. On February 21, 2024, the UK Advertising Standards Authority (ASA) issued rulings against two digital health app developers, finding that each app was marketed as a medical device without the requisite conformity marking. The ASA is responsible for enforcing the UK Code of Non-Broadcast Advertising and Direct & Promotional Marketing (CAP Code), a self-regulatory code governing consumer advertising in the UK.

The ASA held that the advertising for the Impulse Brain Training app implied that the app could diagnose Attention Deficit Hyperactivity Disorder (ADHD). Similarly, the ASA found that the advertising for the Happyo app amounted to claims that it could diagnose and treat ADHD, as well as alleviate the symptoms of ADHD. As such, the respective claims were medical claims that presented each app as a medical device despite the apps not having the appropriate conformity marking. The ASA also held that the advertising for each app breached the CAP Code provision that advertisers must not discourage consumers from seeking essential treatment for a condition for which medical supervision should be sought.

These two rulings serve as a reminder to digital health app providers that the ASA can also take enforcement action over medical claims made for such products; jurisdiction does not rest exclusively with the UK medical devices regulator, the MHRA. Although the ASA’s powers are limited compared with those of the MHRA, it is generally a more active regulator, and these rulings may signal greater scrutiny of claims made for health apps. The ASA may also refer a matter to the MHRA for enforcement if a company continues to make unlawful medical claims about an app despite a negative ASA ruling.

UK and France Announce New Funding to Further Global AI Safety. On February 29, 2024, the UK and France announced a new partnership to boost research collaboration and further global AI safety. Alongside £800,000 of new funding for cutting-edge research, the UK and French ministers announced a landmark partnership between the UK AI Safety Institute and France’s Inria (the National Institute for Research in Digital Science and Technology) to jointly support the safe and responsible development of AI technology. On the same day, the French-British joint committee on Science, Technology and Innovation met for the first time. It will continue to meet every two years to discuss a variety of opportunities for shared research and collaboration, from low-carbon hydrogen and space observation to AI and research security.

Privacy Updates

UK ICO Reminds App Developers to Comply With Data Privacy Laws. On February 8, 2024, the UK Information Commissioner’s Office (ICO) issued a reminder to app developers to comply with data protection laws and protect the privacy of their users. The reminder follows a review conducted by the ICO in 2023 into how various period and fertility apps processed personal data and the impact of such processing on users. Although the ICO states that “no serious compliance issues or evidence of harms were identified” in the review, it is clear that there was room for improvement in app developers meeting their privacy obligations, especially for health apps where the data are particularly sensitive. The ICO provided four tips to app developers to ensure compliance:

  • Be transparent. Developers should clearly and concisely explain the purposes for processing a user’s data, the retention periods, and who the data will be shared with. This information should be easily accessible to the user.
  • Obtain valid consent. Users must provide explicit and unambiguous consent to the processing of their data, with a clear affirmative action to opt in. Default methods (e.g., a pre-ticked box) are not appropriate. Users must also be able to easily withdraw their consent.
  • Establish the correct lawful basis. Developers should carefully consider the legal basis (consent, contract, legitimate interests) under which they will process the data. The legal basis should be specific to each purpose of processing and not adopted on a blanket basis.
  • Be accountable. Developers must be accountable for complying with their obligations under relevant data protection laws.

The Department for Science, Innovation and Technology has also published a code of practice for app store operators and app developers, which builds upon some of these core principles.

UK Government Publishes Guidance on AI Assurance. On February 12, 2024, the UK government published new guidance on AI assurance to help industry and regulators build and monitor trustworthy and responsible AI systems. It sets out a range of techniques for businesses to measure, evaluate, and communicate that their technologies are trustworthy and comply with the core principles proposed by the UK government in its white paper in March 2023 (and endorsed by the government response to the consultation discussed above). Businesses are encouraged to routinely assess the risks and impact of bias and data protection by employing a range of assurance tools and using global technical standards. They should also put in place various internal policies and processes, such as those addressing data collection, processing, and sharing; risk mitigation; key staffing responsibilities; and avenues for staff to escalate concerns. The guidance concludes with five key actions for organizations:

  • Consider existing regulations applicable to AI systems (e.g., UK GDPR).
  • Train the organization’s workforce.
  • Review internal governance and risk management.
  • Monitor publication of new regulatory guidance.
  • Participate in the development of AI standards.

EFPIA Raises Concerns Over the Negotiations on the Text of the EHDS. On February 26, 2024, the European Federation of Pharmaceutical Industries and Associations (EFPIA) expressed concerns about the ongoing negotiations regarding the text of the regulation establishing a European Health Data Space (EHDS). EFPIA had already shared concerns regarding the regulation in the preceding months (see our January 2024 digest).

It highlighted the rush shown by the European legislators (European Parliament and Council of the European Union) to finalize the regulation before the upcoming European elections taking place in June. EFPIA urged legislators to take the necessary time to finalize the regulation to ensure the quality and robustness of the legal instrument forming the basis of EHDS creation.

Among its concerns, EFPIA presented the worry shared within the European health care ecosystem that the EHDS lacks the required level of legal certainty and consistency with the existing regulatory frameworks.

EFPIA also pointed out key issues that have not been adequately addressed, including:

  • Unclear and incoherent definitions regarding the type of data and actors involved in the EHDS
  • Lack of clarification on the interaction between the EHDS and other legal frameworks
  • Failure to reduce legal fragmentation or ensure harmonization and incentivize consistent implementation
  • Absence of specifications regarding the scope of electronic health data for secondary use
  • Regarding opt-in/opt-out mechanisms, the regulation should allow only an opt-out mechanism, and solely where there is no risk of inconsistent implementation or of health data disparities
  • Lack of incentives for health research and innovation
  • Failure to leverage existing health data infrastructures
  • Absence of measures to avoid excessive data localization and restrictions on international health data transfers
  • Failure to involve all relevant health stakeholders

*The following individuals contributed to this Newsletter:

Amanda Cassidy is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Amanda is not admitted to the practice of law.
Eugenia Pierson is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Eugenia is not admitted to the practice of law.
Sonja Nesbit is employed as a Senior Policy Advisor at Arnold & Porter’s Washington, D.C. office. Sonja is not admitted to the practice of law.
Mickayla Stogsdill is employed as a Senior Policy Specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
Katie Brown is employed as a Policy Advisor at Arnold & Porter’s Washington, D.C. office. Katie is not admitted to the practice of law.
Heba Jalil is employed as a Trainee Solicitor at Arnold & Porter's London office. Heba is not admitted to the practice of law.

© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.