Virtual and Digital Health Digest
This digest covers key virtual and digital health regulatory and public policy developments during January and early February 2026 from the United States (U.S.), United Kingdom (UK), and European Union (EU).
In this issue, you will find the following:
U.S. News
- Health Care Fraud and Abuse Updates
- Provider Reimbursement Updates
- Privacy and Artificial Intelligence (AI) Updates
- Policy Updates
U.S. Featured Content
In this issue, we cover significant federal health care fraud and abuse developments, including guilty pleas and convictions in major Medicare and Medicaid schemes involving telehealth and durable medical equipment, as well as a $197 million fraud case tied to medically unnecessary orthotics. We also highlight the Centers for Medicare & Medicaid Services’ (CMS’s) release of payment rates and performance targets for the new Advancing Chronic Care with Effective, Scalable Solutions (ACCESS) Model, as well as a multi-payer pledge to align on outcomes-based reimbursement for technology-supported care. In privacy and AI news, we report on the Federal Trade Commission’s (FTC’s) upcoming workshop on consumer harms in the data-driven economy and the National Institute of Standards and Technology’s (NIST’s) request for information on risks posed by “agentic AI.” Finally, we summarize key policy updates, including the extension of Medicare telehealth flexibilities through 2027, new federal AI initiatives and leadership appointments at the Department of Health and Human Services (HHS), congressional perspectives on AI regulation, the U.S. Food and Drug Administration’s (FDA’s) and European Medicines Agency’s (EMA’s) guiding principles for AI in drug development, and the Advanced Research Projects Agency for Health’s launch of a new agentic AI program focused on cardiovascular care.
EU and UK News
EU/UK Featured Content
January 2026 saw significant activity as UK and EU authorities advanced major initiatives affecting the use of AI, digital technologies, data governance, and cybersecurity in healthcare and life sciences. Notable developments include the EMA’s and FDA’s joint principles on the use of AI across the medicinal product lifecycle, the European Commission’s call for evidence on the proposed amendments to the Medical Devices Regulation (EU) 2017/745 (MDR) and In Vitro Diagnostic Regulation (EU) 2017/746 (IVDR), proposals to strengthen the EU Cybersecurity Act, and important data protection interventions. In parallel, UK and EU regulators continued to focus on the safe deployment of digital tools in healthcare, including new Medicines and Healthcare products Regulatory Agency (MHRA) guidance on mental health technologies and ongoing work to refine AI governance. These updates, alongside developments in Intellectual Property (IP) and product liability, signal a rapidly evolving regulatory environment that will help to shape digital innovation and compliance expectations throughout 2026.
U.S. News
Health Care Fraud And Abuse Updates
- Florida DME Owner and Manager Pleads Guilty to Conspiracy to Violate the Anti-Kickback Statute. On January 30, 2026, Deane Gilmore, the owner and manager of two durable medical equipment companies, pleaded guilty to conspiring to violate the Anti-Kickback Statute. Gilmore allegedly paid telemarketers and call centers on a per-order basis to collect Medicare beneficiaries’ information, which was then used to generate orders for unnecessary durable medical equipment. Over the course of the scheme, Gilmore submitted or caused to be submitted $6.5 million in claims to Medicare, of which $3 million was ultimately paid.
- Florida Man Sentenced for New Hampshire Medicaid Telehealth Fraud. On February 3, 2026, Erik X. Alonso was sentenced for defrauding New Hampshire Medicaid. According to court documents, beginning in March 2022, Alonso worked for a telehealth mental health provider, where he provided services billed to New Hampshire Medicaid despite his inclusion on the Medicaid exclusion list. Additionally, Alonso caused the telehealth provider to submit claims to New Hampshire Medicaid for services that were not provided. As a result of this scheme, New Hampshire Medicaid paid $173,998 in false and fraudulent claims.
- Former Professional Football Player Convicted of $197M Medicare Fraud. On February 3, 2026, a federal jury convicted a former professional football player, Joel Rufus French, for his role in a $197 million Medicare and Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA) fraud scheme. French allegedly worked with call centers to collect beneficiary information, which was routed to telemedicine providers. Those providers signed orders for medically unnecessary durable medical equipment, including orthotic braces, without examining or speaking with patients. French allegedly sold the orders to marketers and medical supply companies who submitted claims to Medicare. He also allegedly billed the CHAMPVA program for orthotic braces through durable medical equipment supply companies that he owned and managed, using false documents to conceal his ownership of the companies from Medicare.
Provider Reimbursement Updates
- CMS Announces Payment Rates for ACCESS Model. As we covered in our December 2025 Digest, in early December, CMS announced a voluntary pilot model called ACCESS, which will test an outcome-aligned payment approach in Original Medicare to “expand access to new technology-supported care options that help people improve their health and prevent and manage chronic disease.” Under the model, care organizations are expected to offer “integrated, technology-supported care” to manage beneficiaries’ qualifying conditions within one of four clinical “tracks”: (1) early cardio-kidney-metabolic; (2) cardio-kidney-metabolic; (3) musculoskeletal; and (4) behavioral health.
CMS has now released the payment amounts and performance targets for model participants. Under the model, CMS will pay ACCESS participants an annual amount per enrolled beneficiary, with the maximum payment varying by the beneficiary’s “track.” This annual payment will be split into monthly payments, with 50% of the total withheld and reconciled at the end of the 12-month care period. If at least 50% of an ACCESS participant’s beneficiaries meet the required clinical and patient-reported outcomes during a care period, the participant can earn the remaining payment amount.
Separately, on February 12, CMS announced that several major health payers have pledged to adopt an outcomes-based payment structure “aligned to” the ACCESS Model. The agency stated that the payers cover 165 million Americans through Medicare Advantage, Medicaid, and private health insurance plans and said the pledge would help align payments across payers for “technology-supported care” that delivers “measurable improvements in patient health outcomes.” CMS also announced it is developing a set of optional “alignment resources” for health plans, including sample provider agreements, standardized billing codes, and outcome reporting infrastructure, which the agency expects to be available later in 2026.
Privacy and AI Updates
- FTC Announces Workshop on Consumer Injuries and Benefits in the Data-Driven Economy. The FTC has announced a workshop to be held on February 26 to examine “consumer injuries and benefits that may result from the collection, use, or disclosure of consumer data.” The FTC held a similar workshop almost a decade ago, and the plan for the upcoming event is to gather current information on, among other things, consumer privacy preferences and the impacts of data breaches on consumers. In a document summarizing findings from the 2017 workshop, FTC staff reported that participants emphasized medical identity theft as a serious harm resulting from data breaches or unauthorized disclosure of data. This year’s workshop is free and open to the public and will be held in person at the FTC’s Constitution Center at 400 7th St SW, Washington, DC 20024.
- NIST AI Center Issues Request for Information on Agentic AI. On January 8, the Center for AI Standards and Innovation at NIST issued a Request for Information seeking information and insights on methods for measuring and improving the development and deployment of AI agent systems (which NIST defines as systems consisting of “at least one generative AI model and scaffolding software that equips the model with tools to take on a range of discretionary actions” that “can be deployed with little to no human oversight”). The Request focuses on three specific risks posed by agentic AI: (1) security risks arising from adversarial attacks, (2) security risks posed by “models with intentionally placed backdoors,” and (3) risks that otherwise uncompromised models may still pose a threat to “confidentiality, availability, and integrity.” The deadline for submitting responses to the NIST request is March 9, 2026.
Policy Updates
- Medicare Telehealth Flexibilities Extended in FY 2026 Appropriations Package. On February 3, President Trump signed into law a $1.2 trillion appropriations package, the Consolidated Appropriations Act, 2026 (P.L. 119-75). The package included five of the six outstanding Fiscal Year (FY) 2026 appropriations bills: Labor, Health and Human Services, and Education; Defense; Transportation, Housing and Urban Development; State-Foreign Operations; and Financial Services-General Government (FSGG), as well as a two-week continuing resolution through February 13 for the Department of Homeland Security. In addition to federal appropriations, the minibus contained several health policy items, including the extension of COVID-era Medicare telehealth flexibilities through December 31, 2027.
- ACL Launches Caregiver AI Prize Competition. On February 5, the Administration for Community Living (ACL) announced the Phase 1 launch of the Caregiver AI Prize Competition, a national challenge to support the development of AI tools that strengthen caregiving and the caregiving workforce. Phase 1 will award up to $2.5 million to as many as 20 winners and includes two tracks: (1) AI tools to support family and professional caregivers, and (2) AI workforce tools to improve efficiency, scheduling, and training for home care organizations. ACL plans to host an informational webinar in March, and Phase 1 applications will be due in July, with winners announced in September.
- HHS Hires New Deputy Chief Artificial Intelligence Officer. HHS has reportedly hired Arman Sharma as the agency’s new deputy chief artificial intelligence officer. Sharma graduated from Stanford University in 2024. While at Stanford, he co-authored a health economics textbook with Dr. Jay Bhattacharya, who is now the Director of the National Institutes of Health. The new hire follows HHS’s release of its AI Strategy in early December.
- E&C Chairman Authors Essay on The Path for American AI Leadership. House Energy and Commerce (E&C) Chairman Brett Guthrie (R-KY) recently authored an essay titled “Dominance, Deployment, and Safeguards: The Path for American AI Leadership,” which was published in the Orrin G. Hatch Foundation’s 2025 Hatch Center Policy Review. In the essay, Chairman Guthrie expresses concern that America faces a threat to its leadership in AI from China. The Chairman says the committee’s approach to regulating AI will be guided by three pillars: “dominance, deployment, and safeguards.” Notably, Chairman Guthrie supports the AI Action Plan and warns against a patchwork of state AI laws and regulations.
- Food and Drug Administration and European Medicines Agency Release Principles for AI Use in Drug Development. On January 14, FDA’s Center for Drug Evaluation and Research and Center for Biologics Evaluation and Research, in collaboration with the EMA, released ten “Guiding Principles of Good AI Practice in Drug Development.” These principles include (1) Human-centric by design, (2) Risk-based approach, (3) Adherence to standards, (4) Clear context of use, (5) Multidisciplinary expertise, (6) Data governance and documentation, (7) Model design and development practices, (8) Risk-based performance assessment, (9) Life cycle management, and (10) Clear, essential information. The announcement comes as FDA is reportedly developing a new regulatory framework for AI.
- Advanced Research Projects Agency for Health Announces Agentic AI Program. On January 13, the Advanced Research Projects Agency for Health announced a new research opportunity through the Agentic AI-EnableD CardioVascular CAre TransfOrmation (ADVOCATE) program, which aims to develop an FDA-authorized agentic AI system that can provide 24/7 care for advanced cardiovascular disease management. Funding opportunities will “support the development of clinical AI agents that can be trusted to autonomously adjust changes in appointments, medications, diet, and exercise.” The program also aims to develop a “supervisory” AI agent to monitor other clinical agents to ensure safety and efficacy. Summaries for proposals are due February 27.
EU and UK News
Regulatory Updates
- EMA and FDA issue joint principles on AI in the medicines lifecycle. The joint principles are intended to promote the use of AI while ensuring its safe, responsible, and reliable application throughout the lifecycle of medicines, providing broad guidance covering all phases of a medicine, from early research and clinical trials through to manufacturing and post-market safety monitoring. The ten guiding principles set out regulatory expectations for the development and use of AI systems. Examples include implementing robust data governance and privacy protections, and providing clear, accessible information on intended use, performance, limitations, data sources, update processes, and explainability. Although not legally binding, the principles provide helpful insight into the aligned regulatory expectations of the EMA and FDA and are expected to inform future EU-level and national regulatory guidance and to advance good practice in medicines development.
- European Commission Publishes an Open Call for Evidence on the Revisions to the EU MDR and IVDR. Following the publication of the European Commission’s proposals to amend the MDR and the IVDR (the Proposals) in December 2025 (see our January 2026 Digest), the Commission is now seeking views, through its call for evidence, on whether the Proposals adequately address implementation challenges, including by reducing administrative burdens, enhancing predictability and alignment across EU Member States in the certification process, and ensuring that regulatory requirements are proportionate and aligned with other relevant EU legislation. The feedback received will inform discussions within the European Parliament and the Council of the European Union during the negotiations on the Proposals. The call for evidence is open until March 23, 2026. See our recent Advisory for an overview of what international companies should be preparing for and to understand the impact on AI-based software.
- European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) Issue Joint Opinion on the European Commission's Proposal to Amend the AI Act (Digital Omnibus on AI). The joint opinion was adopted following a formal consultation by the Commission on its proposal for a Digital Omnibus on AI (See our December 2025 Digest). While welcoming the efforts to reduce administrative burdens, the opinion stresses that simplification should not undermine fundamental rights or the accountability of AI system providers. In particular, the EDPB and EDPS caution against removing registration obligations for AI systems listed in Annex III of the AI Act when providers classify them as “non-high risk,” noting that this could weaken accountability and regulatory oversight. They also support the creation of EU-level AI regulatory sandboxes to promote innovation and help small and medium-sized enterprises, provided that Data Protection Authorities are involved in supervising the data processing activities. The EDPB and EDPS have indicated that joint guidelines on the interaction between the EU General Data Protection Regulation (EU GDPR) and the AI Act are under development and expected later in 2026.
- Publication of Impact Statement for 10 Year Health Plan for England. Building upon the announcement of the Government’s 10 Year Health Plan in July 2025 (as described in our blog), this impact statement explains the rationale and potential effects of the digital transformation of the National Health Service (NHS). It highlights the need for digitally enabled care pathways, improved data sharing, and expanded use of digital tools. These changes are expected to enhance system efficiency, patient empowerment, and the financial sustainability of the NHS. For example, the adoption of AI technologies is expected to result in operational efficiencies (e.g., reduction in reporting times and triage times), improve data quality (through standardization of reporting), and improve health outcomes (e.g., earlier diagnoses). This is particularly true as patients become increasingly accustomed to using technology to self-manage their health. Many of the digital reforms will be designed and implemented locally, meaning their full impacts will evolve over time.
- MHRA publishes new guidance on the use of mental health apps and technologies. The MHRA has published new guidance to promote the safe and effective use of digital mental health technologies and strengthen the regulatory framework governing them. The guidance outlines five key areas to consider before using the tools, including: what the technology claims to do, who the intended audience is, the available evidence supporting its use, how personal data is collected and used, and whether it is regulated as a medical device. For those regulated as devices (for example, those that claim to diagnose, treat, or manage a mental health condition), the public can check that the technology has the appropriate marking and has therefore met UK safety standards. A package of new online resources has also been made available, consisting of animations and real-world examples of safe, well-evidenced digital mental health technologies in practice. These resources have been tailored for the general public and healthcare professionals.
Privacy Updates
- European Commission Publishes Proposal to Revise and Replace the EU Cybersecurity Regulation 2019/881. The proposal forms part of a broader EU cybersecurity package aimed at strengthening resilience, aligns with previous plans to strengthen cybersecurity in the health sector (see also our May 2025 Digest), and is linked to the Commission's proposal to amend Directive 2022/2555 (also known as NIS2). Some measures of the proposal include (i) establishing an EU-level framework for information and communications technology (ICT) supply chain security across NIS2 sectors, including the health sector. Under that framework, the Commission could restrict or require mitigation measures for the use of ICT components from designated non-EU high-risk suppliers in certain identified key ICT assets; (ii) expanding the EU cybersecurity certification framework, including by allowing certification to cover a company’s overall cybersecurity posture; and (iii) expanding the mandate of the European Union Cybersecurity Agency, in areas such as risk assessments, incident response, and certification. The proposal will now be reviewed by the European Parliament and the Council of the European Union.
- Information Commissioner’s Office (ICO) publishes updated guidance on international data transfers. On January 15, 2026, the ICO issued updated and simplified guidance on international data transfers to assist businesses with compliance with the UK GDPR. The guidance includes a three-part test for businesses to identify if they are making restricted transfers: (i) confirm the UK GDPR applies to the data, (ii) determine whether the transfer is to a country outside the UK, and (iii) check whether the recipient is a separate legal entity. If all three conditions are met, organizations must comply with the UK GDPR transfer regime, which may include using adequacy decisions, appropriate safeguards, or specific derogations. The guidance includes additional information on multi-layered transfers, the roles and responsibilities for controllers and processors, and a set of FAQs. The ICO has also indicated that it intends to revisit its guidance on transfer risk assessments, and its International Data Transfer Agreement and cloud services.
- ICO publishes Tech Futures report on agentic AI. The ICO explains that emerging agentic AI systems (AI tools that can autonomously plan and act) pose novel data protection risks beyond those seen in standard generative AI. For digital health, these risks are highly relevant because agentic systems may inadvertently process special category data (including health data), scale automated decision-making, and create complex controller/processor chains. Furthermore, the purposes for processing personal information may be set too broadly, exceeding what is necessary to achieve the aim. The ICO stresses that autonomy in AI does not absolve organizations of their accountability for responsible deployment.
IP Updates
- Digital Europe publishes policy paper on EHDS implementation and IP protection. Digital Europe, together with the European Federation of Pharmaceutical Industries and Associations, the European Confederation of Pharmaceutical Entrepreneurs, the European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry, and MedTech Europe, have published a joint policy paper setting out recommendations on the implementation of the European Health Data Space (EHDS) to protect intellectual property, trade secrets and commercially confidential information while enabling the secondary use of health data for research, innovation and public health. The paper states that although the EHDS offers significant opportunities for data-driven innovation and improved patient outcomes, its success depends on a governance framework that balances data accessibility with the protection of proprietary information that underpins investment and innovation. It highlights that the EHDS extends data-sharing obligations to privately held and pre-commercial datasets and warns that, in the absence of implementing acts under Article 52, divergent national approaches risk fragmentation and a loss of trust among data holders.
- Digital Europe Policy Paper Calling for Stronger Safeguards in EHDS. On January 15, 2026, Digital Europe published a policy paper with its recommendations for implementing the EHDS. While recognizing the EHDS’ potential to support research, innovation, and improved patient outcomes, the paper stresses that its success depends on a robust governance framework that safeguards intellectual property, commercially confidential information, and trade secrets.
The paper highlights that the EHDS may require access to privately held and pre-commercial datasets, including early-stage research and development, clinical trial, and device-generated data, with associated risks if proprietary information is not adequately safeguarded. It warns that inconsistent interpretation of Article 52 of the EHDS Regulation by Member States may cause fragmentation, legal uncertainty, and reduced trust among data holders.
Key recommendations for consistent EU-wide implementation guidelines include establishing specialized IP and trade secret task forces within Health Data Access Bodies to support classification of datasets and metadata by confidentiality level, and to promote structured cooperation between authorities, data holders, and rights holders.
For healthcare companies, the EHDS presents both a significant opportunity and material compliance challenges. Industry confidence will depend on harmonized safeguards that protect sensitive information while enabling responsible secondary use of health data.
- AI-Related Patent Filings Quadruple in a Decade. On January 28, 2026, the UK government’s AI Skills for Life and Work: Patent Analysis reported that the share of patents that are AI-related grew sharply from 5.2% in 2014 to 20.3% in 2023, reinforcing the rapid pace of AI innovation and adoption. Notably, the dominant technologies remain algorithms, artificial intelligence, neural networks, and machine learning, while technologies such as deep learning and generative adversarial networks are growing quickly in prominence.
The analysis also shows that patents now draw on a wider range of AI technologies, increasing from an average of around two AI-related concepts per patent in 2014 to more than three and a half by 2023. These technologies cluster into distinct “knowledge packages,” some focused on developing AI itself and others on applying AI in areas such as healthcare, chemistry, and medical technology. This highlights the rising need to combine AI expertise with sector-specific knowledge.
For healthcare companies, these findings suggest that AI will play an increasingly significant role across research, development, and clinical workflows. Life sciences corporations will likely need to prioritize cross-disciplinary talent, data capabilities, and robust IP strategies reflecting the shift from AI being optional to becoming a central driver of healthcare innovation.
- UK Court of Appeal Reinstates Abbott’s Patent, Emphasizing Importance of Consistent Claim Construction. In our July 2024 Digest, we reported on the UK High Court’s decision that an Abbott patent relating to continuous glucose monitoring technology was invalid for obviousness following a challenge by Dexcom, as part of a broader global dispute between the parties. On December 18, 2025, the UK Court of Appeal overturned that decision and reinstated Abbott’s patent.
By the time of the appeal, Abbott accepted the first instance judge’s narrow construction of claim 1, which required (among other features) that the introducer needle be coupled to the device housing and manually inserted. However, when assessing obviousness, the judge had relied on prior art systems involving automatic insertion of an integrated sensor and sensor electronics as satisfying key integers of claim 1. Abbott successfully argued in the appeal that this amounted to applying a different interpretation to claim 1 for the purposes of obviousness and that this was flatly inconsistent with the judge’s construction of claim 1. The Court of Appeal agreed, holding that obviousness must be assessed by reference to the claim as properly construed, and not by reference to a system that falls outside that construction. As there was no evidential basis to support a finding of obviousness on the accepted narrow construction, the appeal was allowed.
Product Liability Updates
- Draft statement on liability for AI harms from the UK Jurisdiction Taskforce. The UK Jurisdiction Taskforce (UKJT) of Lawtech UK launched a public consultation on its draft legal statement addressing liability for AI harms under English law. Whilst the lack of an AI-specific liability regime in the UK gives a perception of legal uncertainty, the statement explains that England’s common law system already provides a flexible framework for addressing the majority of potential physical or economic harm caused by AI. It emphasizes that AI itself cannot bear legal responsibility, so liability must be attributed to developers, users, and other human or corporate actors through established principles such as duty of care, foreseeability, and contractual allocation of risk. The statement also addresses whether vicarious liability applies to loss caused by AI, whether a professional can be liable for using or failing to use AI in the provision of their services, and whether liability attaches to false statements made by an AI chatbot. The UKJT has requested feedback on the draft statement before publication in final form.
Mickayla Stogsdill is employed as a senior policy specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
Caroline Oliver is employed as a policy specialist at Arnold & Porter’s Washington, D.C. office. Caroline is not admitted to the practice of law.
Sophia Kim is employed as a trainee solicitor at Arnold & Porter’s London office. Sophia is not admitted to the practice of law.
© Arnold & Porter Kaye Scholer LLP 2026 All Rights Reserved. This Newsletter is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.