Virtual and Digital Health Digest
This digest covers key virtual and digital health regulatory and public policy developments during February and early March 2026 from the United States, United Kingdom, and European Union.
In this issue, you will find the following:
U.S. News
- Health Care Fraud and Abuse Updates
- Provider Reimbursement Updates
- Privacy and Artificial Intelligence (AI) Updates
- Policy Updates
U.S. Featured Content
Over the past month, U.S. developments have highlighted a converging focus on telehealth fraud enforcement and artificial intelligence (AI)-driven healthcare innovation. The U.S. Department of Justice (DOJ) continued to target telemedicine-enabled Durable Medical Equipment (DME) schemes involving alleged kickbacks, sham physician relationships, and medically unnecessary orders, reinforcing scrutiny under the Anti-Kickback Statute and False Claims Act. In parallel, policymakers advanced a more risk-based approach to AI oversight, including Senate proposals to streamline U.S. Food and Drug Administration (FDA) regulation and state-level pilots like Utah’s AI-enabled prescription renewal program. Federal agencies also accelerated digital health adoption, with the U.S. Department of Health and Human Services (HHS) promoting “clinical AI,” the Centers for Medicare & Medicaid Services (CMS) launching a Medicare App Library, and the administration’s Comprehensive Regulations to Uncover Suspicious Healthcare (CRUSH) initiative signaling expanded use of AI to detect fraud, alongside ongoing legislative efforts on cybersecurity and digital health policy.
EU and UK News
EU/UK Featured Content
February 2026 saw a period of substantial regulatory activity across both the UK and EU, particularly in relation to AI governance, medical technologies, and data protection. In the UK, the policy landscape continued to evolve with initiatives affecting the regulation of medical devices, clinical research, and AI deployment. Key developments included the Medicines and Healthcare products Regulatory Agency’s (MHRA) consultation on the indefinite recognition of CE-marked medical devices, record levels of medical device testing, and the Prescription Medicines Code of Practice Authority’s (PMCPA) revised guidance on the use of social media. AI remained a major focus in the UK, with the UK government’s response to the consultation on the AI Management Essentials tool, increased industry involvement in the UK AI Security Institute’s alignment program, and feedback relating to governmental research on AI adoption across UK businesses. Additional international collaboration efforts included UK engagement at the India AI Impact Summit and an expanded science and technology partnership with Japan, as well as the launch of the first-ever AI Strategy for UK Research and Innovation.
At the EU level, regulatory activity centered predominantly on data protection, with the adoption of several important outputs from the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS). These included a joint opinion on the European Commission’s proposed Digital Omnibus amendments, a report following public consultation on anonymization and pseudonymization, and the publication of the EDPB’s 2026-2027 work program. These developments indicate a renewed emphasis on maintaining high standards of data protection while ensuring clarity for organizations navigating complex digital and AI-driven ecosystems.
In parallel, the UK implemented major reforms to its domestic data protection framework through the Data (Use and Access) Act 2025, which entered into force this month. Together, these UK and EU developments highlight a regulatory environment increasingly focused on the safe deployment of advanced technologies, the strengthening of data protection safeguards, and the continued modernization of medical device oversight.
U.S. News
Health Care Fraud And Abuse Updates
DOJ Pursues Medically Unnecessary Telemedicine Schemes Across the U.S. Recently, four individuals were sentenced for their roles in Medicare fraud schemes, demonstrating DOJ’s continued interest in DME fraud enforcement. In all four cases, the individuals allegedly used telemedicine companies to order medically unnecessary DME, resulting in fraudulent claims to Medicare.
On February 26, 2026, Reinaldo Wilson, owner of two New Jersey-based telemedicine companies, was sentenced to prison and ordered to pay restitution for orchestrating a Medicare fraud scheme. As part of the scheme, Wilson’s companies allegedly paid illegal kickbacks to providers to sign orthotic brace orders for Medicare beneficiaries who had no clinical need. The signed orders were then sold to marketing companies, which resold them to brace suppliers that submitted over $56 million in Medicare claims. Wilson then allegedly attempted to conceal the fraud by establishing a successor company in a third party’s name while retaining actual control.
In another case, on March 6, 2026, Kartik Bhatia, an Illinois man, was sentenced to prison for conspiring to defraud Medicare of over $2 million through a scheme involving medically unnecessary orthotic braces. Bhatia’s DME company allegedly paid telemarketing companies for orders, shipped braces that beneficiaries neither needed nor requested, and used physician signatures on orders from doctors who had no treating relationship with those patients. Similar to Wilson, Bhatia opened a second company to conceal his continued fraudulent conduct.
Additionally, on March 6, 2026, Dr. Scott Taggart Roethle, a Kansas anesthesiologist, was sentenced to prison and ordered to pay restitution for his role in a telemarketing scheme where overseas call centers collected Medicare beneficiary information. Roethle then signed fraudulent DME prescriptions without examining patients or establishing treating physician relationships, falsely certifying medical necessity. Medicare paid out at least $8 million based on his orders.
In another case, Georgia chiropractor Teflyon Cameron was sentenced on March 9, 2026, after pleading guilty to conspiracy to commit health care fraud and conspiracy to violate the federal Anti-Kickback Statute. In addition to using marketing call centers and telemedicine companies to order medically unnecessary DME, Cameron and her co-conspirators allegedly entered sham contractual arrangements designed to disguise kickback payments to a clinical laboratory on a per-beneficiary lead basis. The scheme resulted in over $14.9 million in Medicare losses.
Provider Reimbursement Updates
CMS Issues Request for Information on AI Tools for Medicare Plan Selection. On February 24, 2026, CMS issued a Request for Information (RFI) seeking input from companies developing artificial intelligence tools that could assist Medicare beneficiaries in evaluating health plan options. The agency noted that approximately 70 million Medicare beneficiaries currently evaluate coverage options across Medicare.gov, Medicare Plan Finder, and the Medicare Call Center, but that these tools may be difficult to navigate or subject to extended wait times.
CMS expressed its interest in AI solutions that can provide personalized plan recommendations, real-time conversational support, predictive analytics, accessible decision-support tools, and call center automation to help beneficiaries make informed coverage decisions. The RFI seeks information on companies’ existing AI tools, pricing models, and experience working with Medicare plan selection, coverage guidance, or beneficiary support systems. Respondents must not be affiliated with or owned by insurance carriers, health plans, or any entity with a financial incentive to steer beneficiaries toward specific plans or carriers.
CMS stated that following this RFI, it anticipates issuing formal solicitations for “AI Tools for Medicare Experience Modernization,” subject to funding availability and agency priorities. Responses to the RFI are due by March 31, 2026.
Privacy and AI Updates
Senate Health Committee Recommends Streamlined FDA Regulation of Digital Health Technologies. On February 17, 2026, Senate Health Committee Chair Bill Cassidy (R‑LA) released a paper titled “Patients and Families First: Building the FDA of the Future.” The paper describes various proposed reforms to modernize FDA procedures through the use of AI-powered tools. It recommends that the FDA focus AI regulation on AI uses that directly influence regulatory submissions or risk-benefit assessments where risk-mitigation safeguards are not already in place, and that the agency expand its internal AI expertise by hiring qualified specialists and building partnerships with external AI experts. The paper cautions against regulating clinical decision support tools, which often integrate AI to analyze patient data and produce in-house data.
Utah Partners With AI-Native Health Platform to Establish an AI Prescription Renewal Program. The Office of Artificial Intelligence Policy within the Utah Department of Commerce is facilitating a partnership between Utah and the AI-native healthcare startup Doctronic to provide the first state-approved AI system to autonomously renew certain prescription medications. The pilot program will operate under Utah’s regulatory sandbox framework, which allows the state to temporarily modify or suspend regulatory requirements, including telehealth requirements, while gathering empirical evidence to inform regulatory decision-making. The prescription renewal platform will be available only for patients with chronic conditions who are physically present in Utah, but may be used to renew prescriptions for a wide range of commonly prescribed medications. Doctronic’s AI system will evaluate renewal requests based on prescription history and clinical questions designed to detect contraindications, adverse effects, or changes in condition. If any risk is detected, the system will flag the request and escalate it to a physician for human review. Utah intends to make the findings of the pilot program public in an effort to inform future AI policy.
Policy Updates
HHS’ ASTP/ONC Holds 2026 Annual Meeting and Launches EHIgnite Challenge. On February 11-12, 2026, HHS’ Office of the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health IT (ASTP/ONC) held its 2026 Annual Meeting focusing on health IT policy and technology. The meeting prioritized the administration’s directives to lower healthcare costs through leveraging tools in AI and other healthcare infrastructure. During the keynote presentation, Assistant Secretary for Technology Policy and National Coordinator for Health IT, Dr. Thomas Keane, highlighted recent ASTP/ONC accomplishments, including “leading the Department-wide push to unlock clinical AI.”
- The agency recently issued a Request for Information seeking comments on the adoption and use of AI as part of clinical care. The deadline to submit comments was February 23, 2026.
- ASTP/ONC also launched the EHIgnite Challenge, which solicits health IT developers to create tools or platforms to “transform” electronic health information into actionable and usable information for clinical care and patient engagement. Submissions for proposals for Phase 1 are due by May 13, 2026.
FDA Hires New Director for Digital Health Center of Excellence. The FDA has reportedly hired Dr. Rick Abramson as the agency’s new director of FDA’s Digital Health Center of Excellence. Abramson was most recently a contracted consultant in the Office of the FDA Commissioner until his appointment to this new role. He was also formerly the Chief Medical Officer at a subsidiary of Harrison.ai.
CMS Launches Medicare App Library. On February 23, 2026, CMS launched a new Medicare App Library as part of the agency’s Digital Health Tech Ecosystem. The library will compile vetted “health apps” (mobile and web applications, technology-enabled care services, digital health platforms, and care delivery tools) that comply with CMS’ Aligned Networks requirements and the Health Insurance Portability and Accountability Act (HIPAA). These apps must address at least one of three use cases: “killing the clipboard,” conversational AI assistants, and diabetes and obesity prevention and management. The agency is also requiring that vetted apps include a feature for digital identity verification through ID.me or CLEAR, which the agency has recently rolled out for identity verification for Medicare.gov accounts.
Trump Administration Announces Efforts To Address Health Care Fraud. On February 25, 2026, Vice President J.D. Vance, HHS Secretary Robert F. Kennedy Jr., and CMS Administrator Dr. Mehmet Oz announced the CRUSH initiative, which includes an RFI seeking input on strategies to strengthen CMS’ ability to respond to and prevent fraud, waste, and abuse in Medicare, Medicaid, the Children’s Health Insurance Program, and the Health Insurance Marketplace. Secretary Kennedy noted that the initiative and related strategies aim to replace older methods with AI tools to identify fraud and prevent improper payments. Responses to the RFI are due by March 30, 2026. Feedback may be used to inform an upcoming CRUSH proposed rule.
Senate Markup Includes Health Care Cybersecurity Bill. On February 26, 2026, the Senate HELP Committee held a markup of four bills, including the Health Care Cybersecurity and Resiliency Act of 2025 (S. 3315). The bill is led by HELP Chairman Cassidy (R-LA), Sen. Maggie Hassan (D-NH), Sen. John Cornyn (R-TX), and Sen. Mark Warner (D-VA). Chairman Cassidy emphasized the need to safeguard patients and ensure providers can deliver care without disruption, referencing the 2024 Change Healthcare cyberattack. The bill passed out of committee, as amended, by a vote of 22-1.
House Markup of Kids Online Safety Bills. On March 5, 2026, the House Energy and Commerce Committee held a markup of eight bills, including three bills related to protecting minors online: the Kids Internet and Digital Safety (KIDS) Act (H.R. 7757), Sammy’s Law (H.R. 2657), and the App Store Accountability Act (H.R. 3149) — all of which were approved, largely along party lines. The Children and Teens’ Online Privacy Protection Act (COPPA 2.0, H.R. 6291) also was on the agenda, but Chairman Brett Guthrie (R-KY) announced mid-markup that the committee would postpone consideration to allow Republican and Democratic staff more time to negotiate a bipartisan agreement.
EU and UK News
Regulatory Updates
PMCPA Publishes Revised Guidance for the Use of Social Media. The PMCPA has issued revised guidance on the use of social media, reflecting the rising number of code breaches linked to online activity and the growing complexity of digital engagement. The update replaces the 2023 version with a redesigned, web-based format that includes Q&As, practical examples, and links to PMCPA cases, making it easier for companies to navigate the rules in an evolving social media environment. Notably, the guidance introduces expanded sections on clinical trial recruitment, responding to misinformation, news for an investor audience/the media, pharmacovigilance responsibilities, and engaging with influencers. For more details on the guidance, read our February 2026 BioSlice Blog.
MHRA Consultation on Indefinite Recognition of CE-Marked Medical Devices. The MHRA has launched a consultation seeking views on proposed changes to the recognition of EU CE-marked medical devices in Great Britain, as part of wider efforts to protect patient access to safe and effective medical technologies and refine the UK’s post-Brexit regulatory landscape. The proposals include: (1) extending the existing transitional arrangements for devices certified under the former Medical Devices Directive to align with the EU’s transition timelines under the Medical Devices Regulation 2017/745 (EU MDR), (2) the indefinite recognition of devices compliant with the EU MDR and the In Vitro Diagnostic Medical Devices Regulation 2017/746 (EU IVDR), and (3) a proposed international reliance route for devices that comply with the EU MDR or EU IVDR but are classified at a higher risk level under UK MDR 2002. For more details, read our February 2026 BioSlice Blog. In the meantime, the MHRA has published an infographic of the current timelines in place for placing CE-marked medical devices on the Great Britain market.
UK Medical Device Testing Hits Record High. The MHRA has announced that UK medical device testing reached a record high in 2025, with a 17% rise in approved clinical investigations. This growth has been driven by investments in neurotechnology and a surge in AI-powered medical devices. These developments form part of the MHRA’s broader work to promote innovation and remove barriers for smaller companies, including initiatives such as a fee waiver pilot, early market access to promising devices, and enhanced support for high-impact technologies.
MHRA Sponsors a New Standard on Clinical Studies for Digital Mental Health Technologies. The MHRA has sponsored the British Standards Institution to develop a standard providing recommendations for performing clinical studies to generate clinical evidence for digital mental health technologies. The MHRA intends for the standard to apply to the pre-market phase and to real-world data collection in the early implementation, post-market phase. The standard is likely to address factors such as controls, sample characteristics, safety, effectiveness, and engagement endpoints, as well as follow-up periods. A public consultation on a draft will take place in mid-2026.
The International AI Safety Report 2026 Published. Released on February 3, 2026, the report, which was led by Turing Award winner Yoshua Bengio and authored by over 100 international experts, provides a scientific assessment of general-purpose AI capabilities, focusing on three key questions: (1) what can general-purpose AI do today, (2) what emerging risks does it pose, and (3) how can those risks be mitigated. The report seeks to support policymakers in addressing the difficulties of gathering and evaluating evidence on the risks associated with rapidly developing and increasingly capable AI systems, a challenge described as the “evidence dilemma.” It highlights that performance remains uneven and “jagged,” with capabilities varying widely across tasks and contexts, as AI systems that deliver in controlled settings such as pre-deployment evaluations often perform less effectively in real-world conditions. For general-purpose AI to reach its full potential, the report emphasizes the need to prioritize the effective management of risks such as malicious use, malfunctions, and systemic disruption.
UK Government Publishes Response to Consultation on AI Management Essentials (AIME) Tool. In November 2024, the UK government sought feedback on AIME, a self-assessment tool that distils key principles from existing AI governance frameworks to help businesses establish robust governance and management practices for AI development and use. The consultation outcome was published on February 6, 2026. An analysis of 65 responses indicated that organizations view AIME as a valuable foundation for AI governance, although concerns were raised regarding its complexity for non-expert users, particularly small and medium enterprises (SMEs) that struggled to operate under the tool’s size- and occupation-agnostic approach. This feedback will inform both the refinement of the tool and the development of further guidance focused on the foundational governance measures necessary to support responsible AI deployment, with a specific emphasis on improving accessibility for SMEs.
OpenAI and Microsoft Join AI Security Institute’s Flagship Alignment Project. Contributions from OpenAI and Microsoft have increased the total funding available through the UK AI Security Institute’s initiative to more than £27 million, supporting international research that aims to enhance the reliability and safety of AI systems. The project combines funding for research, access to compute infrastructure, and ongoing academic mentorship to drive progress on alignment. The first Alignment Project grants have been awarded to 60 projects across eight countries, with a second round expected to open later this year.
UK Government Publishes Analysis of Research on AI Adoption. Consistent with the ambitions set out in the January 2025 AI Opportunities Action Plan to embed AI across the UK economy, the government conducted research to assess the use of AI among UK businesses. The study, published on February 13, 2026, and based on 3,500 interviews (weighted to reflect business size and sector), indicates that AI adoption remains modest, with only 16% of businesses using at least one AI technology and many citing a lack of identified need and limited AI skills as key barriers. Businesses reported the greatest difficulties when implementing agentic AI, while natural language processing and text generation presented comparatively fewer barriers. Among organizations that raised ethical concerns, those concerns were regarded as the most significant obstacle to adoption, followed by high costs and regulatory uncertainty. While the research demonstrates varying levels of trust in AI systems, most organizations remain willing to explore new technologies, with 75% of businesses reporting that AI has increased workforce productivity.
UK and International Partners Support Commitment To AI at India AI Impact Summit. The UK government, together with international partners, has engaged in discussions on the potential for AI to drive growth, create new jobs, improve public services, and deliver benefits globally. These discussions form part of the UK’s broader collaboration with India to advance shared priorities in science, technology, and innovation. The New Delhi Declaration on AI, presented at the India AI Impact Summit, seeks to build an inclusive, accessible, and efficient global AI framework. The declaration has been endorsed by 92 countries, including the UK, and is expected to be signed at an international summit later this year.
UK and Japan Strengthen Science and Technology Partnership. On February 3, 2026, the UK and Japan announced a package of life sciences and technology collaborations, placing a strong emphasis on developing treatments for rare genetic diseases. The projects include an £11 million investment into drug manufacturing in the UK, undertaking joint quantum technologies research to address challenges in drug discovery, and a multi-year strategic partnership to establish a national pilot focused on transforming screening for rare diseases.
First-Ever AI Strategy for UK Research and Innovation. On February 19, 2026, the UK government announced the first-ever AI Strategy for the UK’s largest public research funder: UK Research and Innovation (UKRI). The investment is intended to ensure AI delivers “cutting-edge science and research efforts” in the UK. Under the new strategy, UKRI will provide up to £137 million as part of the government’s AI for Science Strategy to back AI-enabled scientific discovery, starting with drug discovery and new treatments. It will also help deliver £36 million to upgrade the University of Cambridge’s “DAWN” supercomputer, supporting breakthroughs in areas like healthcare and environmental modelling.
Privacy Updates
Implementation of UK Data (Use and Access) Act. The Data (Use and Access) Act 2025 (DUAA) represents the UK’s first major reform of data protection law since leaving the EU. On February 5, 2026, most of the data protection provisions of the DUAA came into force. The reforms expand the use of automated decision-making capabilities, but this does not apply to special categories of data such as health information. The new standard for international transfers has changed from ensuring UK General Data Protection Regulation (GDPR) protections are “not undermined” to requiring protection that is “not materially lower” than UK standards. For more details, see our February 2026 BioSlice Blog and May 2025 Advisory.
EDPB and EDPS Issue Joint Opinion on the European Commission’s Proposal To Amend the Digital Legislation (Digital Omnibus). The joint opinion, adopted on February 10, 2026, follows a formal consultation by the Commission on its proposal for a Digital Omnibus. (See our December 2025 Digest.) While supporting the efforts to reduce compliance burdens, the EDPB and EDPS stress that simplification must not weaken key safeguards of the EU GDPR. In particular, the EDPB and EDPS urge the European Parliament and Council of the European Union not to adopt: (1) the amended definition of personal data, which would assess identifiability based on the means reasonably available to the specific company and which, according to the joint opinion, could narrow the GDPR’s scope and create legal uncertainty, and (2) the proposal to include an exhaustive list of permitted cases for automated decision-making, whereas currently fully automated decision-making is prohibited. At the same time, the EDPB and EDPS support: (1) raising the threshold for personal data breach notifications to cases “likely to result in a high risk” to individuals’ rights, and (2) the development of EU-level Data Protection Impact Assessment (DPIA) tools, provided supervisory authorities retain primary responsibility. Further details on the joint opinion and Commission proposal can be read in our February 2026 Advisory.
EDPB Publishes Report on Results of Public Consultation on Anonymization and Pseudonymization. The report summarizes the feedback received during an event held in December 2025 to support the preparation of EDPB guidelines on anonymization and pseudonymization, following the Court of Justice of the European Union (CJEU) judgment in Case C-413/23 P. In that judgment, the CJEU clarified how identifiability must be assessed when determining whether pseudonymized data qualify as personal data. (See our October 2025 Digest and September 2025 BioSlice Blog.) Participants, who were mainly companies, highlighted the need for further guidance on joint controllership scenarios, controller-to-controller/third-party data sharing, and on specific contexts such as clinical trials. Participants also requested clarification on when data processing agreements are required, the concept of “means reasonably likely to be used” to identify individuals, and the safeguards that can limit re-identification risks. Debate also arose on topics such as whether online identifiers should always be treated as personal data and whether a separate legal basis under Article 6 GDPR is required when transmitting pseudonymized data.
EDPB Publishes Its Work Program for 2026-2027. The work program aims to facilitate compliance with the EU GDPR and sets out the actions that the EDPB plans to undertake over the next two years. Key actions of the EDPB include developing guidance on AI, telemetry, and diagnostic data; further guidance on data anonymization; and developing guidelines on the interplay between the AI Act and the GDPR, as previously announced by the EDPB. The EDPB also expects to adopt guidance on data pseudonymization and on data processing for research purposes. In addition, the EDPB plans to publish practical templates to support SMEs, including templates for DPIAs, legitimate interest assessments, records of processing activities, and privacy notices and policies. The EDPB also intends to issue opinions on standard and ad-hoc contractual clauses.
IP Updates
UK Supreme Court Decision in Emotional Perception AI Limited v. Comptroller General of Patents, Designs and Trade Marks. On February 11, 2026, the UK Supreme Court handed down its much-anticipated judgment in Emotional Perception AI Limited v. Comptroller General of Patents, Designs and Trade Marks [2026] UKSC 3. Following the approach endorsed by the Enlarged Board of Appeal of the European Patent Office (EPO) in its G1/19 decision, the UK Supreme Court firmly rejected the long-standing Aerotel four-step test for assessing patentability in the UK for failing to be a good-faith implementation of the European Patent Convention (EPC). In doing so, the UK Supreme Court has now, at least in part, aligned the UK’s approach to computer-implemented inventions with the EPO.
The UK Supreme Court has also confirmed that Artificial Neural Networks constitute a “program for a computer” and thereby fall within the exclusion from patentability under Article 52(2)(c) EPC. Whether the claimed subject matter falls within that exclusion depends on the application of the “any hardware” approach endorsed in G1/19, according to which an application will not be excluded from patentability if it embodies or involves physical hardware within the subject matter of the claims. Applying the G1/19 decision has also introduced an “intermediate step” in the UK, whereby elements not contributing to (or interacting with) the invention’s technical character are excluded when subsequently assessing novelty and inventive step.
This decision represents a major shift in the UK approach to patentability of AI-related and computer-implemented inventions.
Mickayla Stogsdill is employed as a senior policy specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
Aishwarya Grandhe is employed as a policy specialist at Arnold & Porter’s Washington, D.C. office. Aishwarya is not admitted to the practice of law.
Caroline Oliver is employed as a policy specialist at Arnold & Porter’s Washington, D.C. office. Caroline is not admitted to the practice of law.
Amalia White is employed as a trainee solicitor at Arnold & Porter’s London office. Amalia is not admitted to the practice of law.
Jack Chisem is employed as a paralegal at Arnold & Porter’s London office. Jack is not admitted to the practice of law.
© Arnold & Porter Kaye Scholer LLP 2026 All Rights Reserved. This Newsletter is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.