Field guide · AI governance for healthcare · Updated April 2026

Healthcare AI compliance.
Operationalized.

Healthcare AI compliance covers the regulatory, governance, and operational discipline required to deploy AI in HIPAA-regulated environments: AI Readiness, AI Risk Assessment, AI policy enforcement, and the audit trail that survives a CMS or OCR review. This guide is written by the team behind CPS One, the privacy and AI governance platform with 96% three-year customer retention.

See CPS One · Talk to the privacy team
The short version

AI compliance is HIPAA, plus four new layers.

Regular HIPAA compliance covers privacy and security of PHI in conventional systems — access controls, BAAs, breach notification, encryption. Healthcare AI compliance adds four new layers: training-data provenance (was PHI used to train the model?), inference-time PHI handling (does the model see PHI in prompts?), model behavior assurance (does the model do what its policy says it does?), and AI-specific audit trails (which model version produced this output?).
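The fourth layer, the AI-specific audit trail, is the most mechanical of the four. As an illustrative sketch only (the record schema and `log_inference` helper below are hypothetical, not CPS One's implementation), an inference audit record that can answer "which model version produced this output?" might capture:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InferenceAuditRecord:
    """One audit-trail entry per model call (illustrative schema, not CPS One's)."""
    model_id: str               # which model was called
    model_version: str          # which version produced this output
    prompt_contained_phi: bool  # inference-time PHI handling, recorded per call
    output_sha256: str          # tamper-evident fingerprint of the output
    timestamp_utc: str

def log_inference(model_id: str, model_version: str,
                  prompt_contained_phi: bool, output_text: str) -> dict:
    """Build an audit record; a real system would append it to immutable storage."""
    record = InferenceAuditRecord(
        model_id=model_id,
        model_version=model_version,
        prompt_contained_phi=prompt_contained_phi,
        output_sha256=hashlib.sha256(output_text.encode()).hexdigest(),
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)

record = log_inference("triage-assist", "2.4.1", True,
                       "Recommend cardiology referral.")
print(json.dumps(record, indent=2))
```

The point of the sketch: without a per-call record tying model version to output, "which model version produced this output?" is unanswerable in an audit.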

Most existing compliance programs were not built for these questions. The 2026 update to the HIPAA Security Rule, NIST's AI Risk Management Framework, and ONC's HTI-1 rule are forcing the questions whether the program is ready or not.

The five regulatory anchors

What healthcare AI regulations matter most in 2026.

Five anchors collectively define the floor for healthcare AI compliance. Each is enforceable, each maps to specific controls, and each carries weight in OCR and CMS audits.

| Anchor | Scope | 2026 status |
| --- | --- | --- |
| HIPAA Security Rule (2026 update) | Foundation for any system handling PHI; tightening AI-relevant controls (access logging, encryption-at-rest, vendor risk management) | Effective; OCR enforcing |
| FDA AI/ML SaMD framework | Software as a Medical Device pathway for AI used in clinical decision-making | Active; predetermined change-control plans (PCCPs) now standard |
| State-level AI laws | Bias, transparency, and clinical-decision-support disclosure (Colorado SB 24-205, California, Illinois, NY) | Effective in 12+ states; expanding |
| NIST AI Risk Management Framework | The federal benchmark for AI risk assessment; cited by OCR, CMS, and federal procurement | Reference standard; healthcare profile in development |
| ONC HTI-1 Final Rule | Transparency on predictive Decision Support Interventions (DSIs) in certified EHRs | Compliance phased through 2026 |
The three operational disciplines

What it takes to actually be compliant.

Compliance is not a document; it is an ongoing operational discipline. CPS One implements three named workflows, each tied to a regulatory requirement and each producing audit-grade artifacts.

Discipline 1

AI Readiness assessment

An organizational evaluation of whether the entity has the policies, controls, and operational maturity to deploy AI safely. It covers AI governance policies, vendor due diligence, BAA terms for AI vendors, model documentation, monitoring capabilities, incident response, training-data provenance, and the surrounding data-governance program.

Discipline 2

AI Risk Assessment

A structured evaluation of a specific AI use case across dimensions: clinical risk (could the AI's output harm a patient?), privacy risk (does it touch PHI?), regulatory risk (does CMS, state insurance, or OCR have jurisdiction?), bias risk (does the AI behave differently across populations?), and operational risk (what happens when the AI is wrong?). The assessment produces a risk score and a set of required controls.
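The shape of that workflow can be sketched in a few lines. The dimension names come from the text above; the ratings scale, weights, thresholds, and control mapping below are invented for illustration and are not CPS One's actual rubric:

```python
# Hypothetical risk-scoring rubric: each dimension rated 0 (none) to 3 (severe).
DIMENSIONS = ("clinical", "privacy", "regulatory", "bias", "operational")

# Illustrative controls triggered once a dimension's rating reaches 2.
REQUIRED_CONTROLS = {
    "clinical": "human-in-the-loop review of AI output",
    "privacy": "PHI minimization and BAA coverage for the AI vendor",
    "regulatory": "jurisdiction review (CMS / state insurance / OCR)",
    "bias": "subpopulation performance monitoring",
    "operational": "documented fallback when the AI is wrong",
}

def assess(ratings: dict) -> dict:
    """Return an overall score, a tier, and the controls required by the ratings."""
    if set(ratings) != set(DIMENSIONS):
        raise ValueError(f"expected ratings for exactly: {DIMENSIONS}")
    score = sum(ratings.values())  # 0..15 under this toy scale
    tier = "low" if score <= 4 else "medium" if score <= 9 else "high"
    controls = [REQUIRED_CONTROLS[d] for d in DIMENSIONS if ratings[d] >= 2]
    return {"score": score, "tier": tier, "required_controls": controls}

result = assess({"clinical": 3, "privacy": 2, "regulatory": 1,
                 "bias": 2, "operational": 1})
print(result["tier"], result["required_controls"])
```

The design point the sketch makes: the output of a risk assessment is not just a number but the specific controls the number obligates.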

Discipline 3

AI policy enforcement

Continuous monitoring that deployed AI systems are operating within their stated policies. Drift detection, evidence-trail review, BAA-term verification, audit-log analysis. The discipline closes the loop between "we have a policy" and "the AI actually follows the policy."
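Closing that loop amounts to periodically comparing observed runtime facts against the stated policy. A minimal sketch, assuming a hypothetical policy shape and `check_deployment` helper (neither is CPS One's API):

```python
from datetime import date

# Hypothetical stated policy for one deployed AI system.
POLICY = {
    "approved_model_versions": {"2.4.0", "2.4.1"},
    "phi_in_prompts_allowed": False,
    "baa_expiry": date(2026, 12, 31),
}

def check_deployment(observed: dict, today: date) -> list:
    """Compare observed runtime facts against the stated policy; list violations."""
    violations = []
    if observed["model_version"] not in POLICY["approved_model_versions"]:
        violations.append("unapproved model version (possible silent vendor update)")
    if observed["phi_seen_in_prompts"] and not POLICY["phi_in_prompts_allowed"]:
        violations.append("PHI observed in prompts but policy forbids it")
    if today > POLICY["baa_expiry"]:
        violations.append("BAA term has lapsed")
    return violations

# In practice the observed facts would come from audit-log analysis.
issues = check_deployment(
    {"model_version": "2.5.0", "phi_seen_in_prompts": True},
    today=date(2026, 6, 1),
)
print(issues)
```

Note the first check: a vendor silently shipping a new model version is exactly the drift this discipline exists to catch.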

See CPS One in detail →
A nuance worth stating

Most of CPS One uses no AI by design.

Privacy and AI compliance is not an "AI everywhere" category. The controls that matter most are deterministic by design — and that's a feature, not a limitation.

Deterministic by design

Privacy operations don't need AI.

Incident management, breach notification, BAA tracking, disclosure accounting, policy enforcement, AI Readiness, and AI Risk Assessment all run on deterministic, rule-based workflows in CPS One. Audit-grade by construction. No model drift, no hallucination, no "we trust the AI" risk.

The single AI exception

CPS Insights uses Aether One — for analytics only.

The CPS Insights reporting module uses Aether One™ for analytical pattern detection across aggregated reporting data. PHI is never used for AI training. The boundary is contractual and architectural — not a marketing choice. Privacy officers expect this discipline; CPS One delivers it.

What buyers ask

Buyer's questions, answered.

The seven questions that surface in every privacy officer's CPS One evaluation.

What is healthcare AI compliance?

Healthcare AI compliance covers the regulatory, governance, and operational discipline required to deploy AI in HIPAA-regulated environments. It includes: AI Readiness assessments (does the organization have the policies and controls in place?), AI Risk Assessment (what's the risk profile of a specific AI use case?), AI policy enforcement (are deployed AI systems actually operating within policy?), incident response when AI behaves unexpectedly, and audit trail generation that survives a CMS or OCR review.

How is healthcare AI compliance different from regular HIPAA compliance?

Regular HIPAA compliance covers the privacy and security of PHI in conventional systems — access controls, BAAs, breach notification, encryption. Healthcare AI compliance adds: training-data provenance (was PHI used to train the model?), inference-time PHI handling (does the model see PHI in prompts?), model behavior assurance (does the model do what its policy says it does?), and AI-specific audit trails (which model version produced this output?). Most existing compliance programs were not built for these questions.

What is AI Risk Assessment in healthcare?

AI Risk Assessment is a structured evaluation of a specific AI use case across dimensions like: clinical risk (could the AI's output harm a patient?), privacy risk (does it touch PHI?), regulatory risk (does CMS, state insurance, or OCR have jurisdiction?), bias risk (does the AI behave differently across populations?), and operational risk (what happens when the AI is wrong?). The assessment produces a risk score and a set of required controls. CPS One operationalizes this workflow.

Does CPS One use AI?

Most of CPS One uses no AI by design. The core privacy operations modules — incident management, breach notification, BAA tracking, disclosure accounting, policy enforcement, AI Readiness, and AI Risk Assessment — run on deterministic, rule-based workflows. The CPS Insights reporting module is the single exception; it uses Aether One™ for analytical pattern detection across aggregated reporting data only. PHI is never used for AI training. This scoping is contractual and architectural — not a marketing choice.

What healthcare AI regulations matter most in 2026?

Five anchors: HIPAA — still the foundation for any system handling PHI, with the 2026 Security Rule update tightening AI-relevant controls; the FDA's framework on AI/ML medical software, including the SaMD pathway; state-level AI regulations on bias, transparency, and clinical-decision support; the NIST AI Risk Management Framework, increasingly cited as the federal benchmark; ONC's HTI-1 rule, requiring transparency on predictive Decision Support Interventions in certified EHRs.

Is CPS One HIPAA compliant?

Yes. CPS One is built specifically for HIPAA privacy and security operations: incident management, breach notification, BAA tracking, disclosure accounting, and audit-ready compliance reporting. CPS One has been deployed in healthcare privacy programs for over a decade — the platform was previously called CompliancePro, with 96% three-year customer retention.

What is AI Readiness?

AI Readiness is an organizational assessment of whether a healthcare entity has the policies, controls, and operational maturity to deploy AI safely. It typically covers: AI governance policies, vendor due diligence, BAA terms for AI vendors, model documentation, model monitoring capabilities, incident response, training data provenance, and the surrounding data-governance program. AI Readiness is a prerequisite to deploying any AI system that touches PHI or affects clinical decisions.

Ready to operationalize?

A privacy conversation, not a vendor pitch.

A 30–45 minute conversation with the team running CPS One in privacy programs at named US health systems. We bring the AI Readiness rubric and the audit packet templates. You bring the program you're trying to mature.

Talk to the privacy team · Explore CPS One