
Is ChatGPT HIPAA Compliant? How FHE Changes the Equation

Why AI language models fail HIPAA requirements by default, and how homomorphic encryption enables medical AI without exposing patient data

The short answer is no. ChatGPT is not HIPAA compliant in its standard configuration. The longer answer involves understanding what HIPAA actually requires, where AI language models create compliance gaps, and how a fundamentally different approach to encrypted computation can close those gaps permanently. Every healthcare organization exploring AI faces this question, and the answer matters because getting it wrong carries civil penalties of up to $1.5 million per violation category per year, plus potential criminal liability for knowing misuse of PHI.

Healthcare organizations are under enormous pressure to adopt AI. Clinical decision support, diagnostic assistance, administrative automation, patient communication, and research analysis all benefit from large language models. But HIPAA was written for a world where data processing required access to plaintext, and AI models require exactly that access. This creates a fundamental tension: the most powerful AI tools require the most access to sensitive data, and HIPAA exists specifically to limit that access.

What HIPAA Actually Requires

HIPAA (the Health Insurance Portability and Accountability Act) establishes national standards for protecting sensitive patient health information, known as Protected Health Information (PHI). The HIPAA Security Rule requires covered entities and their business associates to implement administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and availability of electronic PHI (ePHI).

The technical safeguards relevant to AI include access controls ensuring only authorized persons access ePHI, audit controls recording who accessed what and when, integrity controls ensuring ePHI is not improperly altered or destroyed, and transmission security protecting ePHI during electronic transmission. The critical requirement for AI systems is the minimum necessary standard: covered entities must limit the use and disclosure of PHI to the minimum necessary to accomplish the intended purpose.

A Business Associate Agreement (BAA) is a contract required between a covered entity and any third-party vendor that creates, receives, maintains, or transmits PHI on behalf of the covered entity. The BAA specifies what the business associate can do with PHI, what safeguards it must implement, and what happens in the event of a breach. Without a BAA in place, sharing PHI with any third party including an AI service is a HIPAA violation regardless of the security measures the third party has in place.

Where ChatGPT Falls Short

OpenAI offers a BAA for ChatGPT Enterprise and certain API configurations. In principle, then, an organization with a ChatGPT Enterprise subscription and a signed BAA can use the service with PHI under certain conditions. However, having a BAA does not make the service HIPAA compliant. The BAA is a necessary condition, not a sufficient one.

The fundamental problem is that ChatGPT processes data in plaintext. When you send a patient's medical history to ChatGPT for analysis, the model sees the full text of that medical history. The text is processed on OpenAI's servers, stored in logs for quality assurance and safety monitoring, and potentially used to improve the model, though enterprise agreements may exclude training use. At every step, the plaintext PHI exists in memory on infrastructure that the healthcare organization does not control.

Even with a BAA, the healthcare organization is trusting that OpenAI's infrastructure is secure, that its employees follow access policies, that its logging systems do not retain PHI beyond agreed periods, and that a breach of OpenAI's systems will not expose patient data. This is considerable trust to place in a third party. HIPAA's enforcement history shows that trust is not a security control. The Office for Civil Rights has imposed penalties on organizations that relied on vendor assurances without verifying the technical controls themselves.

The consumer and Plus tiers of ChatGPT do not offer a BAA at all. Using these tiers with any PHI is an unambiguous HIPAA violation. Yet healthcare workers routinely paste patient information into ChatGPT to draft clinical notes, summarize lab results, or research treatment options. Each instance is a potential breach that the organization may not even detect because there is no audit trail connecting the ChatGPT session to the patient record.

The Training Data Problem

A subtler compliance issue involves model training. If PHI is used to train or fine-tune an AI model, that PHI is effectively encoded into the model's parameters. The model may generate outputs that contain fragments of the training data, a phenomenon known as memorization. If a language model memorizes a patient's name, diagnosis, or treatment history from training data, it can potentially reproduce that information in responses to other users. This constitutes unauthorized disclosure of PHI, and it can happen long after the original data was supposedly deleted.

De-identification does not fully solve this problem. HIPAA's Safe Harbor method requires removing 18 specific identifiers, but de-identified medical records can often be re-identified using auxiliary information. Research has shown that combinations of diagnoses, procedures, and temporal patterns can uniquely identify individuals even without explicit identifiers. If a model is trained on data that can be re-identified, the resulting model carries the compliance risk of the original identifiable data.

How FHE Changes Everything

Fully homomorphic encryption offers a fundamentally different approach. Instead of trusting the AI provider not to mishandle plaintext PHI, you encrypt the data before it leaves your infrastructure and the AI processes it while it remains encrypted. The AI provider never sees plaintext. The servers never store plaintext. The model never processes plaintext. There is no plaintext to breach, to memorize, to log, or to disclose.

Here is how this works in practice. A healthcare organization encrypts a patient record using FHE. The encrypted record is sent to the AI service. The AI performs inference on the encrypted data, producing an encrypted result. The encrypted result is returned to the healthcare organization, which decrypts it using its private key. At no point during this process does any entity other than the healthcare organization have access to plaintext PHI.
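
A minimal sketch of this round trip, using the open-source TenSEAL library as a stand-in for illustration (this is not H33's API, and the feature values and model weights below are invented):

```python
# Minimal sketch of the encrypt -> compute -> decrypt flow using the
# open-source TenSEAL library (pip install tenseal). Illustrative only;
# MedVault's actual interfaces are not shown here.
import tenseal as ts

# --- Healthcare organization (client): generate keys and encrypt PHI ---
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2**40
context.generate_galois_keys()   # needed for the dot product below

patient_features = [0.72, 1.4, 0.3, 0.03]             # plaintext never leaves here
encrypted_record = ts.ckks_vector(context, patient_features)

# --- AI service (server): inference on ciphertext only ---
model_weights = [0.5, -1.2, 0.9, 3.0]                  # model parameters, public
encrypted_score = encrypted_record.dot(model_weights)  # homomorphic inference step

# --- Healthcare organization: decrypt with the key it never shared ---
print(f"risk score: {encrypted_score.decrypt()[0]:.4f}")
```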

This is not a theoretical concept. H33's MedVault platform implements exactly this architecture. Medical records are encrypted with BFV homomorphic encryption, processed through H33's pipeline at 2,293,766 operations per second, and every computation is verified with STARK proofs and signed with post-quantum signatures. The 74-byte H33-74 attestation primitive provides a permanent, verifiable record that the computation was performed correctly without ever exposing the underlying patient data.

FHE and the Minimum Necessary Standard

HIPAA's minimum necessary standard requires that PHI access be limited to what is needed for the intended purpose. With traditional AI, this is difficult to enforce. A language model processing a patient summary has access to the entire summary, even if it only needs specific fields. Access controls can limit which records reach the AI, but they cannot limit what the AI sees within a record once it has been sent.

FHE provides a mathematical guarantee that exceeds the minimum necessary standard. The AI does not see any of the PHI, not just the minimum necessary portion. It computes on ciphertext that reveals nothing about the underlying data. This is the strongest possible implementation of minimum necessary: zero disclosure to the compute environment. The only entity that ever sees plaintext is the healthcare organization that holds the decryption key.

Audit Trails and Attestation

HIPAA requires audit controls that record who accessed PHI, when, and what was done with it. For traditional AI services, audit trails depend on the service provider's logging infrastructure. If the provider's logs are incomplete, tampered with, or lost, the audit trail is compromised. The healthcare organization has no independent verification that its audit trail is accurate.

H33's pipeline produces a cryptographic attestation for every computation. Each operation generates a STARK proof that the computation was performed correctly and a three-family post-quantum signature that binds the proof to the input and output. This attestation is distilled into a 74-byte H33-74 primitive that can be stored alongside the patient record. Years later, the healthcare organization can independently verify that a specific computation was performed on specific encrypted data at a specific time, without relying on any third party's logs.

This cryptographic audit trail is qualitatively different from log-based audit trails. It cannot be fabricated, altered, or deleted without breaking the cryptographic signature. It is independently verifiable by any party with access to the public verification key. And it is quantum-resistant, meaning it will remain valid even after quantum computers can forge traditional digital signatures based on RSA or elliptic curves. For HIPAA compliance, this provides a level of accountability that no log-based system can match.
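
To make the verification step concrete, here is a hedged sketch using the open-source liboqs-python bindings and the Dilithium (ML-DSA) lattice scheme, one signature family of the kind described here. The H33-74 format itself is not reproduced, and the payload layout (input hash, output hash, timestamp) is an illustrative assumption:

```python
# Hedged model of signing and later verifying an attestation record with a
# post-quantum signature, via liboqs-python (pip install liboqs-python).
# Field layout below is invented for illustration, not the H33-74 format.
import hashlib
import oqs

ALG = "Dilithium3"  # may be named "ML-DSA-65" in newer liboqs releases

# --- At computation time: bind input, output, and time into one signed record ---
with oqs.Signature(ALG) as signer:
    public_key = signer.generate_keypair()
    attestation_payload = b"|".join([
        hashlib.sha3_256(b"<encrypted input ciphertext bytes>").digest(),
        hashlib.sha3_256(b"<encrypted output ciphertext bytes>").digest(),
        b"2026-01-15T10:32:00Z",  # illustrative timestamp
    ])
    signature = signer.sign(attestation_payload)

# --- Years later: anyone with the public key verifies independently ---
with oqs.Signature(ALG) as verifier:
    assert verifier.verify(attestation_payload, signature, public_key)
    print("attestation verified: computation record is intact")
```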

Practical Considerations

FHE does not solve all HIPAA compliance requirements. Administrative safeguards (policies, training, risk assessments), physical safeguards (facility access, workstation security), and certain technical safeguards (user authentication, automatic logoff) are organizational controls that must be implemented regardless of the encryption technology in use. FHE addresses the hardest technical challenge, protecting PHI during computation by a third party, but it is one component of a comprehensive compliance program.

Performance is no longer a barrier. Early FHE implementations were too slow for practical healthcare applications, but H33's pipeline processes operations at 38 microseconds each with full verification and signing. A typical medical record query that involves comparing encrypted patient features against a reference database can be completed in milliseconds. This is fast enough for interactive clinical decision support, not just batch processing.

Integration requires encrypting PHI before it leaves the healthcare organization's infrastructure. This means the FHE encryption happens on-premises or in the organization's cloud environment, using keys that the organization controls exclusively. The encrypted data can then be sent to any compute environment, including H33's API, without HIPAA exposure. Decryption likewise happens within the organization's boundary. Key management is the organization's responsibility, which is appropriate: HIPAA holds the covered entity accountable for PHI protection.
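
A short sketch of that key boundary, again with TenSEAL standing in for illustration: the private context never leaves the organization, while the compute environment receives only a public copy that cannot decrypt anything.

```python
# Illustrative key boundary with the open-source TenSEAL library,
# not H33's API: the secret key stays inside the covered entity.
import tenseal as ts

# Inside the organization's boundary: full context including the secret key
private_context = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096,
                             plain_modulus=1032193)

# What gets shipped to the compute environment: no secret key included
public_bytes = private_context.serialize(save_secret_key=False)

server_context = ts.context_from(public_bytes)
assert server_context.is_public()      # the server cannot decrypt anything
assert private_context.is_private()    # only the organization can decrypt
```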

The MedVault Approach

H33 MedVault is built specifically for healthcare organizations that need AI-powered analytics on sensitive patient data. It provides a HIPAA-aligned architecture where PHI is encrypted at the point of origin using BFV or CKKS homomorphic encryption, processed through H33's pipeline, and returned with post-quantum attestation. BFV handles exact queries and matching. CKKS handles statistical and machine learning workloads where approximate arithmetic is acceptable.
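
The practical difference between the two schemes is easy to demonstrate. A short hedged example using the open-source TenSEAL library (illustrative, not MedVault's API): BFV arithmetic decrypts to exact integers, while CKKS results carry a small approximation error.

```python
# Illustrating the BFV/CKKS split with TenSEAL as a stand-in for
# MedVault's scheme selection. Values below are invented.
import tenseal as ts

# BFV: exact integer arithmetic, suited to matching-style queries
bfv_ctx = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096,
                     plain_modulus=1032193)
codes = ts.bfv_vector(bfv_ctx, [250, 401, 272])   # e.g. encoded diagnosis codes
assert (codes - codes).decrypt() == [0, 0, 0]     # exact: differences are precisely zero

# CKKS: approximate real-number arithmetic, suited to statistics and ML
ckks_ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                      coeff_mod_bit_sizes=[60, 40, 40, 60])
ckks_ctx.global_scale = 2**40
vitals = ts.ckks_vector(ckks_ctx, [98.6, 120.0, 80.0])
print((vitals + vitals).decrypt())   # roughly [197.2, 240.0, 160.0], small error expected
```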

MedVault supports common healthcare AI workloads: encrypted patient similarity matching for clinical trial recruitment, encrypted risk scoring for population health management, encrypted anomaly detection for fraud prevention, and encrypted feature comparison for diagnostic assistance. Each workload runs entirely on encrypted data, and every result includes a verifiable attestation that can serve as part of the HIPAA audit trail. A sketch of the first of these workloads follows below.
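
Here is a hedged sketch of encrypted similarity matching, using TenSEAL rather than MedVault's API: the server computes an encrypted squared distance between the patient's encrypted feature vector and a plaintext trial profile, without ever seeing the patient's features. The profile vector and feature values are invented for illustration.

```python
# Encrypted patient similarity matching, sketched with TenSEAL.
# Illustrative only; not MedVault's API.
import tenseal as ts

ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2**40
ctx.generate_galois_keys()   # needed for the homomorphic sum below

# Client side: encrypt the patient's normalized feature vector
patient = ts.ckks_vector(ctx, [0.9, 0.1, 0.4, 0.8])

# Server side: a plaintext eligibility profile for one trial
trial_profile = [1.0, 0.0, 0.5, 0.9]
diff = patient - trial_profile        # ciphertext minus plaintext, elementwise
enc_distance = (diff * diff).sum()    # encrypted squared distance to the profile

# Client side: only the key holder learns how close the match is
print(f"squared distance: {enc_distance.decrypt()[0]:.4f}")
```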

The attestation uses three independent hardness assumptions for its post-quantum signatures: MLWE lattices, NTRU lattices, and stateless hash functions. This means the attestation record remains verifiable and tamper-proof even in a post-quantum future. A HIPAA audit in 2040 can verify computations performed in 2026 with the same mathematical certainty as the day they were created.

The Bottom Line for Healthcare AI

ChatGPT and similar AI language models are not HIPAA compliant by default. Enterprise tiers with BAAs provide a contractual framework, but the fundamental architecture still processes plaintext PHI on third-party infrastructure. This creates compliance risk that no contract fully mitigates, because the risk is technical, not contractual. Data exposure during processing is a technical problem that requires a technical solution.

FHE eliminates this technical risk by ensuring that PHI is never exposed during computation. H33's MedVault provides the production infrastructure to make this practical: 2,293,766 operations per second, 38-microsecond latency, STARK-verified computation, and post-quantum attestation in 74 bytes. For healthcare organizations that want to use AI without compromising patient privacy or HIPAA compliance, encrypted computation is not a future technology. It is available today, at production scale, with cryptographic guarantees that no BAA can match.

Explore MedVault for Healthcare AI

See how FHE enables HIPAA-aligned AI on encrypted patient data.
