Virtual and Digital Health Digest
Thank you to all who joined us for our December 13 panel, “Race to Regulate.” In case you missed it, you can unpack the year’s pivotal legal challenges shaping the 2023 and 2024 digital health legal landscape in our Year in Review Pocket Book. Do you have a 2023 digital health highlight, or a topic you would like to see covered in the Digest in 2024? Tell us about it: firstname.lastname@example.org.
This digest covers key virtual and digital health regulatory and public policy developments during November 2023 from the United States, United Kingdom, and European Union.
In this issue, you will find the following:
- FDA Regulatory Updates
- Healthcare Fraud and Abuse Updates
- Corporate Transactions Updates
- Provider Reimbursement Updates
- Policy Updates
- Privacy Updates
- EU and UK News
- Intellectual Property Updates
FDA Regulatory Updates
Center for Devices and Radiological Health (CDRH) Issues Update on Its Artificial Intelligence (AI) Program. On November 16, CDRH issued an update on its AI program. The program conducts regulatory science research to ensure patient access to safe and effective medical devices using AI or machine learning (ML), and is one of 20 research programs in CDRH’s Office of Science and Engineering Laboratories. In this update, CDRH noted a number of regulatory science gaps that the AI program is focused on addressing. These gaps include (1) a lack of methods to analyze training and test methods to understand, measure, and minimize the bias of AI-enabled devices; (2) a lack of methods to evaluate the safety and effectiveness of continuously learning AI algorithms; and (3) a lack of methods to evaluate the safety and effectiveness of emerging clinical applications of AI-enabled medical devices. CDRH also described the research areas it focuses on, including methods to measure and quantify algorithmic bias, reduce performance differences among subpopulations, and ensure generalizability.
CDRH Announces New Initiatives on Pulse Oximeters. On November 16, CDRH also announced two new initiatives relating to pulse oximeters. First, CDRH announced that it will hold a virtual public meeting of the Anesthesiology and Respiratory Therapy Devices Panel of the Medical Devices Advisory Committee on February 2, 2024 to discuss pulse oximeters. Planned discussion topics for this meeting include “a new approach to improve the quality of premarket studies and associated methods used to evaluate the performance of pulse oximeters submitted for premarket review, taking into consideration a patient’s skin pigmentation and patient-reported race and ethnicity” and “the type and amount of data that should be provided by manufacturers to the FDA to evaluate the performance of pulse oximeters submitted for premarket review ... to ensure pulse oximetry is equitable and accurate for all patients.” Second, CDRH announced that it published a discussion paper, “Approach for Improving the Performance Evaluation of Pulse Oximeter Devices Taking Into Consideration Skin Pigmentation, Race and Ethnicity.” CDRH stated that it “hope[s] that this discussion paper and request for feedback will help to engage stakeholders and obtain public comment about concerns with pulse oximetry before the upcoming virtual public meeting.” According to CDRH, these new initiatives were informed by an earlier November 2022 advisory committee meeting, where stakeholders shared perspectives about ongoing concerns that pulse oximeters may be less accurate in individuals with darker skin pigmentation.
Owlet, Inc. (Owlet) Receives De Novo Clearance for Over-the-Counter Pulse Oximeter Sock for Infants. On November 9, Owlet announced that it received de novo marketing authorization for Dream Sock®, its over-the-counter pulse oximeter sock for infants. According to Owlet, the cleared Dream Sock® monitors and displays health readings from infants wearing the sock, including pulse rate and oxygen saturation level. It also alerts caregivers if their infant’s readings fall outside of preset ranges. This clearance follows Owlet’s receipt of a Warning Letter from the FDA in October 2021, relating to the regulatory status of the company’s Smart Sock products.
Healthcare Fraud and Abuse Updates
OIG Issues Consumer Alert on Remote Patient Monitoring Schemes. On November 21, OIG issued a consumer alert warning individuals of fraud involving telemedicine and remote patient monitoring (RPM). The alert identified a marked rise in medically unnecessary orders for equipment used to monitor patients remotely, arising from consumer-directed ads and patient cold-calling by durable medical equipment (DME) companies or pharmacies. According to the OIG, the ordered equipment may never be sent, or the equipment sent may not be FDA-approved. The alert urged consumers to review their explanation of benefits and report fraud to the dedicated U.S. Department of Health and Human Services (HHS) hotline. Notably, the alert also gave insight into OIG’s views on when RPM is medically necessary: the OIG stated RPM “is beneficial for those whose condition might deteriorate quickly, where monitoring can reduce complications, hospitalizations, or death.”
The November RPM consumer alert is the latest in a range of OIG enforcement tactics (e.g., the July 2022 Special Fraud Alert and the U.S. Department of Justice (DOJ) announcement of coordinated law enforcement action against telemedicine health care fraud) and guidance (OIG issued updated Compliance Program Guidance last month) to stymie a sharp rise in fraudulent claims following the industry’s major shift to telemedicine spurred by the COVID-19 pandemic. The alert comes on the heels of hefty fines and penalties for medically unnecessary DME schemes (see coverage of select key enforcement actions in this area in the October and November digests). DOJ and HHS summarized some of the telemedicine enforcement actions in their most recent annual report, published in November. Consumer alerts are usually followed by increased OIG enforcement action. Looking ahead, expect to see continued enforcement in the RPM space, including scrutiny of providers’ DME arrangements with third-party vendors.
Corporate Transactions Updates
New Amazon Prime Benefit: Unlimited Doctor Visits for $9 a Month Through One Medical. On November 8, Amazon announced that it now offers 24/7 on-demand virtual care nationwide as a Prime membership benefit for an additional $9 a month. The benefit will offer healthcare services for preventive care, immediate concerns, and chronic conditions like diabetes. The $9 monthly fee also gives Prime members access to in-person office visits at any of One Medical’s hundreds of U.S. locations, with the company advertising “longer appointments so you don’t feel rushed.” This announcement comes less than a year after Amazon first announced its US$3.9 billion acquisition of primary care provider One Medical, which had promising financials, with US$1 billion in revenue, but was not yet profitable at the time of the acquisition.
At $9 a month, the offering appears to be a slam dunk for Amazon, but it is too early to tell whether it will be Amazon’s golden ticket to a lasting spot in the digital health market.
Will Relaxation of Certain California Telehealth Laws Promote More Investment? California recently passed two bills that will make it easier for California residents to access telehealth services. AB 1369 authorizes specialists not licensed in California to provide telehealth care to California residents who have life-threatening conditions or diseases. This could make virtual on-call specialist or subspecialist platforms easier to access in smaller, more rural communities without the associated employment costs, and thus create opportunities for geographically independent multi-specialty groups to market outside of their primary service areas. Likewise, AB 232 addresses a critical shortage of behavioral health counselors by authorizing marriage and family therapists, professional clinical counselors, and clinical social workers licensed in other states to provide mental health services to California residents for up to 30 days. Investment in and consolidation of behavioral health providers have remained strong as a result of ongoing high demand, and this bill will likely promote continued growth in the sector.
Provider Reimbursement Updates
Senate Finance Subcommittee Considers Telehealth Permanency. On November 14, the Senate Finance Subcommittee on Health Care held a hearing titled “Ensuring Medicare Beneficiary Access: A Path to Telehealth Permanency” to discuss making a broad range of pandemic-era telehealth flexibilities permanent. The senators heard testimony from telehealth practitioners and researchers on a range of telehealth flexibilities set to expire at the end of 2024, which include removing geographic restrictions and waiving in-person visit requirements for mental health telehealth services.
Witnesses stressed that Congress should make these flexibilities permanent well in advance of their expiration so that regulatory uncertainty does not hinder telehealth investment. While most witnesses agreed that reimbursement for telehealth services should continue at parity with in-person visits to account for technology costs, telehealth researcher and Harvard professor Dr. Ateev Mehrotra argued in his testimony that payment parity would give virtual-only companies an unfair competitive advantage.
A group of bipartisan lawmakers expressed support for continued telehealth flexibility. Many lawmakers commented specifically on the ability of telehealth to expand Medicare beneficiaries’ access to care. Remarking on the value of telemedicine to rural communities, Ranking Member Steve Daines (R-MT) shared, “It’s safe to say there’s no going back now, as we’ve seen how transformative telehealth can be.” Earlier this year, subcommittee Chairman Ben Cardin (D-MD) and Sen. Daines co-sponsored the CONNECT for Health Act of 2023, a bipartisan bill that would make most telehealth flexibilities permanent.
MedPAC Debates Reimbursement for Software as a Medical Service. On November 2, commissioners on the Medicare Payment Advisory Commission (MedPAC) held a preliminary meeting to discuss how Medicare should cover and pay for software as a medical service.
Since 2018, Medicare has covered software that helps providers make clinical decisions, such as algorithmic diagnostic systems that analyze images of the eye to detect retinal diseases. Such services generally have been separately payable under the Outpatient Prospective Payment System and Physician Fee Schedule. While payments for software programs are typically bundled under the Inpatient Prospective Payment System, manufacturers can apply for new technology add-on payments, which provide additional payments for two to three years.
Several commissioners voiced concern over this payment program, arguing that Medicare should not be providing additional payments for technology that should make healthcare more efficient.
While Medicare spending on separately payable software programs has remained low, the MedPAC commissioners acknowledged that spending on software as a medical service may increase rapidly as technology evolves. Other commissioners advocated an active role for the Medicare program in driving innovation and suggested reforms such as creating a new benefit category or adjusting the definition of durable medical equipment to include standalone software products.
The MedPAC commissioners agreed the payment for software as a medical service would require further study. In concluding the discussion, Commissioner Robert Cherry stated, “[w]e are far away from having a set of recommendations about how these things should be dealt with.”
Policy Updates
DEA Expects to Propose New Telemedicine Regulations. According to the Biden administration’s updated regulatory agenda, the Drug Enforcement Administration (DEA) expects to propose new rules governing the prescription of controlled substances via telemedicine later this month.
As we covered in the October digest, the DEA allowed physicians to prescribe controlled substances without an in-person visit during the public health emergency (PHE). In February 2023, anticipating the end of the PHE, the agency proposed two rules that, if finalized, would have significantly curtailed the telemedicine flexibilities permitted during the PHE. After receiving thousands of comments in opposition to the proposals, the agency issued a temporary rule extending the telemedicine flexibilities until December 31, 2024. The agency stated it expects to promulgate new standards or safeguards by the fall of 2024.
Data Shows Telehealth Decline. According to recently released data from Epic Research, telehealth usage has dropped nearly 25 percentage points since its 2020 peak but remains above pre-public health emergency levels. The dataset, gathered from health systems using Epic Electronic Health Records software, shows that telehealth still accounts for 37% of mental health encounters and 11% of infectious disease appointments.
Congress Averts Shutdown and Pushes Funding Debate to 2024. Throughout November, Congress considered various pathways to avoid a potential federal government shutdown on November 17. On November 16, President Biden signed into law a “laddered” continuing resolution that extends current federal funding through January 19 and February 2. Before the end of the year, Congress must pass the “National Defense Authorization Act (NDAA) for Fiscal Year 2024” (H.R. 2670/S. 2226) and may consider a national security supplemental funding package requested by President Biden earlier this fall. The House and Senate have met throughout the fall to reconcile significant differences between the two versions of the NDAA, and the bill is expected to receive a vote by mid- to late December. If passed, the supplemental funding package could include funding for Israel, Ukraine, the Indo-Pacific region (Taiwan), humanitarian assistance for the Gaza Strip, border security, and the processing of migrants at the southern U.S. border.
CRS Reports on Biosecurity Concerns From Use of AI. On November 22, the Congressional Research Service published a report titled “Artificial Intelligence in the Biological Sciences: Uses, Safety, Security, and Oversight.” The report discusses some of the challenges that digital technologies’ AI-enabled software may pose to the U.S. health system, including laboratory biosecurity and biosafety concerns.
House Hearings on Health-Related AI Issues. On November 29, the House Energy & Commerce Health Subcommittee held a hearing titled “Understanding How AI is Changing Health Care.” Members discussed the importance of establishing robust privacy standards to protect patients’ data; Rep. Greg Pence (R-IN), for example, spoke in support of expanding privacy protections beyond those already established under the Health Insurance Portability and Accountability Act (HIPAA), given the advent of new biometric and wearable health technologies. The Energy & Commerce Committee plans to host government officials from HHS, the Department of Commerce, and the Department of Energy for a December 13 hearing titled “Leveraging Agency Expertise to Foster American AI Leadership and Innovation.”
Class Action Suit Claims UnitedHealthcare Used AI to Wrongfully Deny Claims. On November 14, the estates of two deceased beneficiaries of UnitedHealthcare’s Medicare Advantage plans filed a class action lawsuit against UnitedHealthcare (UHC) for its alleged deployment of AI to make adverse coverage decisions about elderly patients. According to the complaint, which was filed in the U.S. District Court for the District of Minnesota, UHC knew that the AI model known as “nH Predict” had a 90% error rate and that roughly 0.2% of policyholders would appeal denied claims while “the vast majority [would] either pay out-of-pocket costs or forgo the remainder of their prescribed post-acute care.” According to the complaint, the nH Predict AI Model, as used by UHC, directs the insurer’s medical review employees to cease covering care without considering an individual patient's needs. By “eliminating the labor costs associated with paying doctors and other medical professionals for the time needed to conduct an individualized, manual review of each of its insured’s claims,” the plaintiffs assert, UHC saves money by denying claims they otherwise would have paid. As alleged in the complaint, UHC’s use of the tool to deny the members' post-acute coverage is “systematic, illegal, malicious, and oppressive.”
UHC reportedly plans to defend the suit as meritless. A spokesperson for naviHealth, UHC’s care management company behind the algorithm, stated that the nH Predict tool was not used for making coverage determinations, but rather “as a guide to help us inform providers, families, and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home.”
UHC is not the only health insurer to face allegations of misuse of AI. In July, Cigna was sued by a purported class (represented by the same firm that filed the suit against UHC) in the U.S. District Court for the Eastern District of California. The suit alleges that Cigna Corp. and Cigna Health and Life Insurance Co. engaged in a scheme “to systematically, wrongfully, and automatically deny its insureds the thorough, individualized physician review of claims guaranteed to them by California law and, ultimately, the payments for necessary medical procedures owed to them under Cigna’s health insurance policies.” According to the complaint, Cigna developed an algorithm known as PXDX “to enable its doctors to automatically deny payments in batches of hundreds or thousands at a time for treatments that do not match certain preset criteria, thereby evading the legally-required individual physician review process.”
Privacy Updates
HHS Releases Strategic Plan to Improve Cybersecurity in the Healthcare Sector. On December 6, HHS issued a plan of action for improving cybersecurity protections in the healthcare sector. According to HHS, large cybersecurity breaches reported to the HHS Office for Civil Rights increased 93% from 2018 to 2022, with a 278% increase in large breaches involving ransomware. To help prevent future cybersecurity incidents, the HHS plan of action, building on the National Cybersecurity Strategy announced by President Biden in March 2023, sets forth four targeted strategic steps HHS will take in the coming year:
- Publish cybersecurity performance goals. HHS will publish “Healthcare and Public Health Sector-specific Cybersecurity Performance Goals” (HPH CPGs) designed to help healthcare institutions prioritize implementation of high-impact cybersecurity practices. The HPH CPGs will distinguish between “essential” goals for minimum foundational cybersecurity practices and “enhanced” goals encouraging adoption of more advanced practices.
- Provide resources to incentivize and implement cybersecurity practices. HHS plans to work with Congress to obtain new authority and funding to administer financial support and incentives for high-impact cybersecurity practices in the healthcare sector.
- Support greater enforcement and accountability. HHS will propose new enforceable cybersecurity standards, informed by the HPH CPGs, which would be incorporated into existing programs. CMS will propose new cybersecurity requirements for hospitals through Medicare and Medicaid, and the HHS Office for Civil Rights will propose adding new cybersecurity requirements to the HIPAA Security Rule.
- Enhance the HHS “one-stop shop” for healthcare sector cybersecurity. HHS will enhance its “one-stop shop” cybersecurity support function for the healthcare sector within the Administration for Strategic Preparedness and Response, making it easier for the healthcare industry to access the support and services the federal government has to offer. This would enable industry members to more readily obtain technical assistance and guidance from federal agencies with sophisticated cybersecurity expertise.
EU and UK News
MedTech Europe Proposes Changes in the IVDR and MDR. On November 7, the European trade association for the medical technology industry, MedTech Europe, published a position paper proposing changes to the In Vitro Diagnostic Medical Devices Regulation (IVDR) and Medical Devices Regulation (MDR). In the position paper, MedTech Europe outlines what it believes are the regulations’ structural issues, stressing that they hamper innovation. The structural issues identified are:
- The unpredictability and inefficiency of the certification processes regarding the information expected from companies, the requirements, and the timelines
- The inefficiencies caused by the existing decentralized system of notified bodies
According to MedTech Europe, these issues risk widening a gap in access to medical technology, and the association proposes certain measures, in particular:
- Introducing an efficient CE Marking System that guarantees access to devices and innovations, including solutions such as cutting down on bureaucracy, fully digitizing the EU system, and allowing digital labelling
- Incorporating an innovation principle, including solutions such as creating accelerated assessment pathways for medical technology innovations addressing unmet medical needs, or pre-certification access models
- Introducing an Accountable Governance Structure able to coordinate and manage the decentralized network of notified bodies, take system-level decisions, issue guidance, and represent the network within Europe and globally
The MDCG Issues Revised Position Paper on Compliance With the MDR and IVDR. On November 29, the Medical Device Coordination Group (MDCG) published a revised version of its June 2022 notice to manufacturers and notified bodies on ensuring timely compliance with MDR and IVDR requirements. In the position paper, the MDCG calls on manufacturers to transition to the regulations and submit their certification applications as soon as possible, as delaying submissions could lead to a backlog of requests to notified bodies, resulting in delays and, ultimately, product shortages. The call is particularly urgent for manufacturers of class D IVD devices, which must transition to the IVDR by May 2025.
In addition, and in line with some of the industry recommendations above, the MDCG calls on notified bodies to make the certification process more efficient, transparent, and predictable, and highlights the importance of properly guiding and assisting manufacturers through the conformity assessment application. The MDCG also calls on notified bodies to regularly provide data on the state of certifications and to increase transparency about their capacity and timelines, ideally on a common website compiling this information for every notified body in Europe.
Updates on the Regulation of AI in the UK. On November 16, the UK government published its response to the Science, Innovation and Technology Committee’s interim report dated August 31, 2023 (discussed in our September digest). The interim report highlighted 12 key challenges for the governance of AI, and the government’s response describes its progress in addressing those challenges, as well as actions set out in the white paper it published in March 2023 (see our April digest). The most notable updates are:
- Confirmation that the government will not introduce new AI-specific legislation at this stage, continuing instead an evidence-based and iterative approach to regulation
- The establishment of a “Central AI Risk Function” within the Department for Science, Innovation and Technology to identify and monitor developing risks from AI and coordinate their mitigation using broad expertise
- The plan to pilot a multi-agency advice service known as the “DRCF AI and Digital Hub” for innovators of AI technologies to access tailored support from multiple regulators simultaneously
- The establishment of the “AI Safety Institute” (previously called the Frontier AI Taskforce) to provide insights into the capabilities and risks of frontier AI and foundation models
The government’s response to the AI white paper consultation, with updates on its regulatory approach to AI, is expected before the end of 2023.
On the topic of AI regulation, on November 23, a Private Members’ Bill was introduced in the House of Lords. The bill’s main purpose is to establish a central AI authority to coordinate and monitor the regulatory approach to AI, while promoting transparency, reducing bias, and balancing regulatory burden against risk. This largely tracks the government’s white paper, but seeks to enshrine its terms in law. While only a minority of Private Members’ Bills become legislation, it is clear there is a growing debate in the UK about whether the proposed approach to regulation is the right one.
European Parliament Agrees on Text of the EHDS Regulation. On November 28, the members of the European Parliament working on the European Health Data Space (EHDS) regulation reached an agreement on the text of the regulation. The agreed text aims to promote the use of aggregated health data for public interest reasons but introduces limits on the use of these data, including bans on certain uses (e.g., in advertising or sharing with third parties) and a requirement that access be requested from national bodies.
The agreed text requires explicit permission from patients for the use of aggregated sensitive health data, provides patients with an opt-out mechanism for other health data, and gives patients the option to challenge a decision of a health data access body, either personally or through a non-profit organization acting on their behalf. In addition, the agreed text underlines the importance of providing for sanctions in case of misuse of personal health data and includes an obligation to store health data in the EU. The text must be formally adopted by the European Parliament in a plenary vote in December and, if approved, will then need to be adopted by the Council.
Council of the European Union Adopts Data Act. On November 27, the Council of the European Union formally adopted the regulation on harmonized rules on fair access to and use of data (Data Act), following the formal adoption by the European Parliament on November 9. The Data Act aims to make data more accessible and ensure fair access and use, and establishes harmonized rules on sharing data generated through the use of connected products and services. The adopted text includes measures related to:
- Trade secrets, including a definition and adequate safeguards
- Data sharing, including measures to prevent abuse of contractual imbalances in data sharing contracts, safeguards against unlawful data transfers, and the possibility for the European Commission, the European Central Bank, and EU bodies to access and use data held by the private sector in cases of public emergency or public interest
- Governance, including an option for member states to have a data coordinator authority, which would act as a single point of contact
The Data Act will now be published in the EU Official Journal in the coming weeks and will enter into force 20 days after its publication. The new rules will apply 20 months after the Data Act’s entry into force.
Global Guidelines for AI Security Published. On November 27, the UK’s National Cyber Security Centre published its Guidelines for Secure AI System Development, developed in collaboration with the U.S. Cybersecurity and Infrastructure Security Agency. The guidelines have been endorsed by cybersecurity agencies from 16 additional countries, including France, Germany, and Japan, and are intended to help developers make informed cybersecurity decisions at all stages of the development process and beyond. The guidelines are split into four key areas (secure design, secure development, secure deployment, and secure operation and maintenance), with suggested considerations and mitigations to help improve security at each stage of the AI system life cycle. The guidelines are voluntary, but all stakeholders are urged to read and take account of them. It is possible that the guidelines will inform the minimum cybersecurity requirements expected to be imposed through the proposed EU AI Act and AI Liability Directive.
Intellectual Property Updates
Landmark UK High Court Decision Makes It Easier to Patent AI-Related Inventions. On November 21, the UK High Court handed down its judgment in Emotional Perception AI Ltd. v. Comptroller-General of Patents, Designs, and Trade Marks, overturning the UK Intellectual Property Office’s (UKIPO) refusal to recognize a trained Artificial Neural Network (ANN) as patentable. The applicant’s invention concerned a process that used an ANN to identify media files semantically similar to an input media file, in order to provide end users with recommended media files, for example, similar songs. The UKIPO refused to grant the patent on the basis that the application was for a “computer program as such,” thus falling under the exclusion from patentability in s.1(2)(c) of the Patents Act 1977. On appeal, the High Court disagreed with the UKIPO, concluding that the trained ANN was not a computer program and that, in any case, the invention made a substantial technical contribution and therefore could be patentable. Although the case was not directly related to digital health, the decision is welcome news for AI innovators, including those developing medical technologies, as it provides an important route to avoid engaging the computer program exclusion in the UK. The UKIPO has taken prompt action, already publishing a notification of an immediate change to its practice for the examination of ANNs, such that inventions involving an ANN should not be objected to under the “program for a computer” exclusion.
*The following individuals contributed to this Newsletter:
Amanda Cassidy is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Amanda is not admitted to the practice of law.
Eugenia Pierson is employed as a Senior Health Policy Advisor at Arnold & Porter’s Washington, D.C. office. Eugenia is not admitted to the practice of law.
Mickayla Stogsdill is employed as a Senior Policy Specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
Katie Brown is employed as a Policy Advisor at Arnold & Porter’s Washington, D.C. office. Katie is not admitted to the practice of law.
© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.