Three Regulators, One Radiology Algorithm
A mid-size health system deploys an AI-powered chest X-ray triage tool. The FDA has cleared it as Class II Software as a Medical Device. HIPAA requires a Business Associate Agreement with the vendor. And because the system serves patients across EU member states, Annex III of the EU AI Act classifies it as high-risk, triggering conformity assessment obligations the health system's compliance program has no process for yet.
This is not a hypothetical. We've worked with compliance teams facing exactly this convergence. The regulatory surface area for medical AI is expanding faster than most governance structures can absorb.
As of mid-2025, the FDA's public database lists over 1,250 AI-enabled medical devices authorized for marketing in the United States — up from 950 just a year earlier. Roughly 76% of those sit in radiology. Cardiology accounts for another 10%. The pace is accelerating, and the governance requirements are stacking.
The FDA's Evolving Approach to AI/ML-Based SaMD
The FDA's January 2025 draft guidance on Artificial Intelligence-Enabled Device Software Functions introduced a Total Product Life Cycle (TPLC) approach. This is a significant shift. Rather than treating an AI device as a static product that gets cleared once, the FDA now expects manufacturers to document the model description, data lineage and splits, performance tied to clinical claims, bias analysis and mitigation strategies, human-AI workflow design, ongoing monitoring plans, and predetermined change control plans (PCCPs).
That last item — PCCPs — deserves attention. A PCCP lets manufacturers modify their AI model after clearance without submitting a new 510(k), provided the changes fall within pre-specified boundaries. The December 2024 final guidance on PCCPs formalized this pathway. For compliance officers, this means your vendor management program now needs to track whether a cleared AI device is operating under a PCCP and whether its modifications remain within approved parameters.
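For teams building that tracking capability, here is a minimal sketch of what a PCCP-aware vendor record might look like. The schema, field names, device name, and change-type labels are hypothetical illustrations, not an FDA data model; a real program would source the approved change boundaries from the device's actual PCCP documentation.

```python
from dataclasses import dataclass, field

# Hypothetical internal record for tracking a cleared AI device's PCCP
# status. Field names and structure are illustrative, not an FDA schema.
@dataclass
class PCCPRecord:
    device_name: str
    k_number: str                         # 510(k) clearance number
    has_pccp: bool
    approved_change_types: set[str] = field(default_factory=set)

def modification_within_pccp(record: PCCPRecord, change_type: str) -> bool:
    """Return True if a vendor-reported model change falls inside the
    pre-specified boundaries of the device's PCCP. Anything outside the
    envelope (or any change to a device with no PCCP) should escalate to
    compliance review, since it may require a new premarket submission."""
    return record.has_pccp and change_type in record.approved_change_types

# Example: retraining within the cleared architecture is in scope;
# a new indication for use is not.
triage_tool = PCCPRecord(
    device_name="ChestXR-Triage",         # hypothetical device
    k_number="K250000",                   # placeholder number
    has_pccp=True,
    approved_change_types={"retrain_same_architecture", "threshold_tuning"},
)
assert modification_within_pccp(triage_tool, "retrain_same_architecture")
assert not modification_within_pccp(triage_tool, "new_indication")
```

The value of the structure is the check itself: any vendor-reported modification that falls outside the approved envelope should surface automatically in your vendor management workflow rather than in a post-hoc audit.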
Here are the uncomfortable statistics: of 717 radiology AI devices with submission documentation reviewed in a 2025 JAMA Network Open study, only 33 (5%) underwent prospective testing, 56 (8%) included a human-in-the-loop evaluation, and 208 (29%) incorporated any clinical testing at all. The FDA cleared them. But clinical governance demands a higher bar.
Clinical Decision Support vs. Autonomous AI — A Line That Keeps Moving
The 21st Century Cures Act carved out certain Clinical Decision Support (CDS) software from the definition of a medical device. If the software meets four criteria — it doesn't acquire or process signals from a device, it's intended for a clinician, it presents recommendations rather than directives, and the clinician can independently review the basis for the recommendation — it falls outside FDA jurisdiction.
This exclusion matters enormously for governance scoping. An AI tool that flags potential sepsis risk and presents the underlying data to a physician may qualify as excluded CDS. The same algorithm, deployed to automatically adjust medication dosing without clinician review, is regulated SaMD.
In practice, the line blurs. We've seen vendors market the same product as CDS in one deployment configuration and as SaMD in another. Your governance program needs to classify each AI system based on how it's actually used, not how the vendor describes it.
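To make that scoping concrete, here is a minimal sketch of a deployment-level screen against the four criteria above. The profile fields and function are our illustration, not a legal test; its output should feed a formal regulatory determination, never replace one.

```python
from dataclasses import dataclass

# Illustrative screen against the four Cures Act criteria described above.
# A real determination requires regulatory and legal review of the actual
# deployment, not the vendor's marketing description.
@dataclass
class DeploymentProfile:
    acquires_or_processes_signals: bool   # e.g., reads images or sensor data directly
    intended_for_clinician: bool          # recommendations go to a healthcare professional
    recommendations_not_directives: bool  # advises; does not act autonomously
    basis_independently_reviewable: bool  # clinician can inspect the underlying inputs

def classify(profile: DeploymentProfile) -> str:
    """Classify one deployment configuration as excluded CDS or regulated
    SaMD, based on how the system is actually used."""
    excluded = (
        not profile.acquires_or_processes_signals
        and profile.intended_for_clinician
        and profile.recommendations_not_directives
        and profile.basis_independently_reviewable
    )
    return "excluded CDS" if excluded else "regulated SaMD"

# The sepsis-flagging example above, in its two configurations:
advisory = DeploymentProfile(False, True, True, True)
autonomous_dosing = DeploymentProfile(False, False, False, False)
assert classify(advisory) == "excluded CDS"
assert classify(autonomous_dosing) == "regulated SaMD"
```

Note that the screen runs per deployment configuration, not per product, which is exactly why the same vendor algorithm can land on either side of the line.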
HIPAA's Intersection with AI Training and Deployment
HIPAA was not designed for machine learning. But it governs every piece of electronic Protected Health Information that touches an AI system — training data, inference inputs, and outputs.
The Office for Civil Rights now explicitly states that the HIPAA Security Rule governs ePHI used in AI training data and in algorithms developed by regulated entities. And training an AI model may not qualify as treatment, payment, or healthcare operations under HIPAA; where it doesn't, covered entities need individual patient authorization before using PHI for model training. This is a significant constraint that many AI vendors underestimate.
The January 2025 proposed update to the HIPAA Security Rule — the first major revision in 20 years — tightens expectations for encryption, risk analysis, and resilience. Organizations deploying AI must now update their risk analysis to explicitly address how AI software interacts with and processes PHI.
A $12.5 million penalty against a major health system in 2025 demonstrated that standard Business Associate Agreements cannot adequately address AI-related data risks. Your BAAs need specific provisions for model training data use, de-identification standards, and ongoing monitoring obligations.
Nearly half of healthcare organizations permitting generative AI use lack governance frameworks, and only 31% actively monitor these systems. That gap is where enforcement actions will land.
EU AI Act: Healthcare as Annex III High-Risk
Annex III of the EU AI Act designates several healthcare AI applications as high-risk. These include AI systems intended as safety components of medical devices, AI used to evaluate eligibility for healthcare services and benefits, and AI used to evaluate and classify emergency calls or dispatch emergency first response services, including emergency healthcare patient triage.
For health systems operating in or serving EU patients, the compliance timeline is critical. High-risk AI system obligations take full effect on August 2, 2026. However, because medical AI is already regulated under the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), the AI Act requirements for those products extend to August 2, 2027. The European Commission intends to integrate AI Act conformity assessments into existing MDR/IVDR Notified Body reviews to prevent duplicate audits.
The practical challenge: the AI Act requires proof that an AI algorithm is trustworthy, its training data is representative, its logic is explainable, and its performance is stable across populations. These requirements layer on top of MDR clinical evaluation requirements. The WHO has warned of a "regulatory vacuum" in the interim, with Regional Director Hans Kluge stating that "the rapid rise of AI in healthcare is happening without the basic legal safety nets needed to protect patients and healthcare workers."
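What does "performance is stable across populations" look like operationally? One common approach, sketched below, is to compute a headline metric per demographic subgroup and flag any subgroup that deviates from the pooled rate beyond a tolerance. The five-percentage-point tolerance and the subgroup labels are illustrative assumptions, not values drawn from the AI Act or MDR.

```python
# Sensitivity per subgroup, then flag subgroups that drift from the overall
# rate. The tolerance of 0.05 (five percentage points) is an assumed value.

def subgroup_sensitivity(cases: list[tuple[str, bool, bool]]) -> dict[str, float]:
    """cases: (subgroup, model_flagged_positive, truly_positive) per case."""
    hits: dict[str, int] = {}
    positives: dict[str, int] = {}
    for group, flagged, truth in cases:
        if truth:
            positives[group] = positives.get(group, 0) + 1
            if flagged:
                hits[group] = hits.get(group, 0) + 1
    return {g: hits.get(g, 0) / n for g, n in positives.items()}

def unstable_subgroups(per_group: dict[str, float], overall: float,
                       tolerance: float = 0.05) -> list[str]:
    """Subgroups whose sensitivity deviates from the overall rate by more
    than the tolerance; each hit should trigger documented investigation."""
    return [g for g, s in per_group.items() if abs(s - overall) > tolerance]

# Toy data: subgroup A catches half its true positives, B catches all.
cases = [("A", True, True), ("A", False, True),
         ("B", True, True), ("B", True, True)]
overall = 3 / 4                            # 3 of 4 true positives flagged
print(unstable_subgroups(subgroup_sensitivity(cases), overall))  # ['A', 'B']
```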
Building a Healthcare AI Governance Program That Works
Stop treating AI governance as a bolt-on to your existing compliance structure. A radiology AI triage tool touches FDA post-market surveillance, HIPAA privacy and security, clinical quality and patient safety, AI bias and fairness evaluation, and potentially EU AI Act conformity. No single compliance silo owns all of that.
We recommend a dedicated AI governance committee with representation from clinical informatics, privacy, legal, quality, and IT security. This committee should maintain a living AI risk assessment registry that maps each deployed AI system to its applicable regulatory requirements.
For each AI system, document the regulatory classification (FDA-cleared SaMD, excluded CDS, or unregulated), the data flows including PHI touchpoints, the clinical workflow integration and human oversight design, performance monitoring metrics and drift detection thresholds, and the vendor's PCCP status if applicable.
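As one way to structure that documentation, the sketch below shows a possible shape for a single registry entry. The schema and example values are ours; adapt the fields to your own quality system rather than treating this as a standard.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative schema for one entry in the living AI risk assessment
# registry (Python 3.10+ for the "str | None" syntax). Field names mirror
# the documentation items above; the structure itself is our assumption.
class RegClass(Enum):
    FDA_CLEARED_SAMD = "FDA-cleared SaMD"
    EXCLUDED_CDS = "excluded CDS"
    UNREGULATED = "unregulated"

@dataclass
class AISystemEntry:
    system_name: str
    regulatory_classification: RegClass
    phi_touchpoints: list[str]            # data flows that carry PHI
    oversight_design: str                 # where clinicians review or override
    monitoring_metrics: dict[str, float]  # metric name -> drift alert threshold
    pccp_status: str | None = None        # None if no PCCP applies
    applicable_regimes: list[str] = field(default_factory=list)

# Example entry for the radiology triage tool from the opening scenario:
entry = AISystemEntry(
    system_name="ChestXR-Triage",
    regulatory_classification=RegClass.FDA_CLEARED_SAMD,
    phi_touchpoints=["PACS ingest", "inference API", "report write-back"],
    oversight_design="radiologist reviews every flagged study",
    monitoring_metrics={"sensitivity": 0.05, "turnaround_minutes": 10.0},
    pccp_status="active; retraining within approved envelope",
    applicable_regimes=["FDA TPLC", "HIPAA", "EU AI Act Annex III"],
)
```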
Align your documentation with NIST AI RMF categories. The GOVERN and MAP functions map directly to the organizational and AI-specific risk assessments that both the FDA TPLC approach and the EU AI Act require.
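One way to make that crosswalk auditable is to record it explicitly. The mapping below reflects our reading of the alignment described in this section, extended across all four RMF functions; it is not an official NIST, FDA, or EU crosswalk.

```python
# Crosswalk sketch: NIST AI RMF functions mapped to the regulatory
# expectations discussed in this section. Pairings are our reading,
# not an official NIST, FDA, or EU mapping.
AI_RMF_CROSSWALK: dict[str, list[str]] = {
    "GOVERN": [
        "AI governance committee charter and accountability",
        "Organizational risk management obligations under the EU AI Act",
    ],
    "MAP": [
        "FDA TPLC model description, data lineage, and intended use",
        "EU AI Act high-risk classification and context documentation",
    ],
    "MEASURE": [
        "Performance tied to clinical claims; bias analysis and mitigation",
        "Subgroup stability evidence for conformity assessment",
    ],
    "MANAGE": [
        "Ongoing monitoring plans and PCCP change tracking",
        "Post-market surveillance and drift response",
    ],
}
```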
Transparency is non-negotiable. Clinicians need to understand what an AI system does, what it doesn't do, and when to override it. Patients increasingly expect disclosure when AI influences their care. Both the FDA and the EU AI Act require clear user information about AI system capabilities and limitations.
Drug discovery, patient risk scoring, population health analytics, clinical documentation — AI is embedding itself across the care continuum. The organizations that build governance infrastructure now will have a structural advantage as enforcement catches up to deployment.
Healthcare AI governance is not a compliance exercise you finish. It's a capability you build.