Every regulated industry in the United States has the same AI problem. The technology works. The models are ready. The use cases are obvious. And the legal department says no.
The legal department is not being difficult. It is being accurate. Deploying AI in healthcare, banking, or insurance means sending regulated data to a system that processes it. If that system is operated by a third party, the data has left your control. Once data leaves your control, an entire body of law activates: HIPAA requires a Business Associate Agreement before any vendor touches protected health information. OCC Bulletin 2023-17 requires a full third-party risk management program before a bank sends customer data to an AI vendor. State insurance regulators require transparency about every data source and every algorithm used in underwriting and claims decisions. GDPR requires a Data Processing Agreement before personal data crosses any organizational boundary.
These are not bureaucratic inconveniences. They are legal requirements with real consequences. A HIPAA violation for sharing PHI without a BAA carries penalties up to $2.1 million per violation category per year. An OCC enforcement action for inadequate third-party risk management can result in consent orders, civil money penalties, and restrictions on the bank's operations. State insurance commissioners can revoke licenses for unauthorized use of consumer data in AI-driven underwriting.
The result is a bottleneck that has nothing to do with technology and everything to do with data access. AI vendors need data to process. Regulated entities cannot share data without legal agreements. Legal agreements take three to six months to negotiate. The agreements create ongoing monitoring obligations that cost six figures per year per vendor relationship. And even after all of that, the agreements do not prevent breaches at the vendor. They merely allocate liability after a breach occurs.
This post is a practical guide to eliminating that bottleneck entirely. The mechanism is fully homomorphic encryption. The principle is simple: if the AI vendor never accesses your data, the vendor is not a data processor, not a business associate, and not a third party that requires risk management. The legal agreements become unnecessary because the legal trigger -- data access by the vendor -- never occurs.
The Legal Trigger That Creates the Bottleneck
Every data sharing regulation follows the same logical structure. There is a category of protected data. There is an event that triggers legal obligations. And there are requirements that must be satisfied before and after the triggering event.
For HIPAA, the protected data is PHI -- protected health information. The triggering event is a business associate's access to PHI. The requirements include executing a BAA, implementing HIPAA-compliant safeguards, conducting risk assessments, maintaining breach notification procedures, and ongoing compliance monitoring.
For banking regulations under OCC Bulletin 2023-17 and FFIEC guidance, the protected data is customer financial information. The triggering event is a third party's access to that information. The requirements include due diligence, risk assessment, contract negotiation, ongoing monitoring, and contingency planning for the third party's failure.
For insurance regulations under state DOI rules and NAIC model bulletins, the protected data is policyholder information used in underwriting and claims. The triggering event is the use of that data by an AI system. The requirements include transparency about the data used, documentation of the algorithm's decision-making process, and demonstration that the AI does not produce unfairly discriminatory outcomes.
In every case, the triggering event is access. A vendor that never accesses the data does not trigger the obligations. This is not a loophole. It is the fundamental logic of the regulation. HIPAA does not require a BAA with your electricity provider, even though your servers run on their power grid, because the electricity provider does not access PHI. OCC does not require third-party risk management for the company that manufactures your server racks, because the rack manufacturer does not access customer data. The obligations attach to data access, not to the existence of a vendor relationship.
Fully homomorphic encryption makes the AI vendor equivalent to the electricity provider in terms of data access. The vendor processes ciphertext. The vendor never holds decryption keys. The vendor cannot access the underlying data. The vendor is, from a regulatory perspective, not accessing regulated data at all.
Healthcare: Eliminating the BAA Requirement
HIPAA's Privacy Rule defines a business associate as any person or entity that performs functions or activities on behalf of a covered entity that involve the use or disclosure of PHI. The Privacy Rule requires covered entities to enter into BAAs with all business associates, and the Security Rule requires business associates to implement administrative, physical, and technical safeguards for PHI.
When a hospital sends patient records to an AI vendor for clinical decision support, the AI vendor is a business associate. The vendor receives PHI, processes it, and returns results derived from it. The hospital must execute a BAA with the vendor before any data is transmitted. The BAA must specify the permitted uses and disclosures of PHI, require the vendor to implement HIPAA-compliant security measures, require the vendor to report breaches, and require the vendor to return or destroy PHI at the termination of the relationship.
Negotiating a BAA with a major AI vendor is not a simple process. The vendor's standard BAA may not meet the covered entity's requirements. The covered entity's legal counsel will review and redline the agreement. The vendor's legal counsel will respond. Provisions around breach notification timing, indemnification limits, subcontractor management, and data return obligations are routinely contested. The average BAA negotiation takes three to six months. For large health systems negotiating with major AI platforms, it can take longer.
And after the BAA is signed, the obligations continue. The covered entity must monitor the business associate's compliance. It must conduct periodic risk assessments that include the business associate's environment. It must update its notice of privacy practices to reflect the business associate relationship. It must maintain documentation of the BAA and all compliance monitoring activities for six years.
All of this exists because the AI vendor accesses PHI. Remove the access, and the entire apparatus becomes unnecessary.
How FHE Eliminates Business Associate Status
With fully homomorphic encryption, the hospital encrypts patient data before it reaches the AI system. The AI system receives ciphertext. It performs inference on the ciphertext using homomorphic operations -- additions and multiplications on encrypted values that produce encrypted results mathematically equivalent to what the same operations would produce on plaintext. The AI system returns encrypted results to the hospital. The hospital decrypts the results using its private key, which never leaves the hospital's environment.
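The flow is easier to see in code. Below is a minimal sketch of the encrypt, compute, decrypt pattern using the open-source TenSEAL library and the CKKS scheme. It illustrates the general mechanics, not H33's actual API, and the feature values and model weights are placeholders. The only lines that run on the vendor's side are the ones that touch ciphertext.

```python
import tenseal as ts

# --- Hospital side: keys are generated here and never leave ---
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

patient_features = [0.72, 0.15, 0.43, 0.91]        # e.g. normalized lab values
encrypted_features = ts.ckks_vector(context, patient_features)

# --- Vendor side: sees only ciphertext, holds no decryption key ---
model_weights = [0.3, -0.2, 0.5, 0.1]              # placeholder linear model
encrypted_score = encrypted_features.dot(model_weights)   # homomorphic inference

# --- Hospital side: decrypt the result locally with the private key ---
risk_score = encrypted_score.decrypt()[0]
print(f"risk score: {risk_score:.4f}")
```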
At no point does the AI system access PHI. It processes polynomials in a lattice-based encryption scheme. Without the hospital's private key, these polynomials are computationally indistinguishable from random noise. The AI vendor cannot determine whether a ciphertext encodes a blood pressure reading, a diagnosis code, or a patient name. It cannot extract any information about any individual patient from the ciphertext it processes.
Because the AI vendor never uses or discloses PHI, it is not a business associate under HIPAA's definition. Because it is not a business associate, no BAA is required. Because no BAA is required, the three-to-six-month negotiation cycle is eliminated. The hospital can deploy the AI system as soon as the technical integration is complete, without waiting for legal clearance.
The HIPAA Security Rule requires "access controls" to protect PHI. Traditional AI deployments satisfy this with role-based access policies, audit logs, and authentication systems. FHE satisfies it with mathematical impossibility. The AI system does not have controlled access to PHI. It has no access to PHI. The access control is not a policy that could be misconfigured or bypassed. It is a property of the encryption scheme that holds as long as the underlying lattice problem remains hard.
This distinction matters enormously in a breach scenario. If a traditional AI vendor suffers a breach, every patient whose PHI was processed through that vendor's system is potentially affected. The covered entity must conduct a risk assessment to determine the probability that PHI was compromised. Breach notification to affected individuals, HHS, and potentially the media may be required. The covered entity's reputation suffers even if the breach occurred entirely at the vendor's facility.
If an FHE-based AI vendor suffers a breach, the attacker obtains ciphertext. The ciphertext is useless without the hospital's private key. No PHI is compromised. No breach notification is required. The hospital's patients are unaffected because their data was never present on the vendor's systems in any accessible form.
Banking: Eliminating Third-Party Risk Management
OCC Bulletin 2023-17, "Third-Party Relationships: Interagency Guidance on Risk Management," rescinded and replaced the previous OCC Bulletin 2013-29 and establishes comprehensive requirements for banks that use third-party services. The guidance applies to all third-party relationships, but it specifically calls out technology vendors and AI systems as areas requiring heightened scrutiny.
The guidance requires banks to conduct risk-based due diligence on third parties before entering into relationships, to negotiate contracts that include specific provisions for data protection and access controls, to implement ongoing monitoring programs, and to develop contingency plans for the third party's failure or inability to perform. For AI vendors that process customer financial data, the requirements are particularly demanding because the data is sensitive and the processing is often opaque.
FFIEC guidance on information technology examination adds additional requirements. Banks must ensure that third-party AI vendors implement security controls commensurate with the sensitivity of the data being processed. Banks must verify that vendors' security practices meet the bank's standards through audits, certifications, or independent assessments. Banks must ensure that vendors can demonstrate compliance with applicable laws and regulations.
The practical cost of these requirements is substantial. A single third-party risk assessment for a major AI vendor can cost $50,000 to $150,000 when you include legal review, security assessment, due diligence documentation, and internal approval processes. Ongoing monitoring adds $25,000 to $75,000 per year per vendor relationship. Banks with dozens of AI vendor relationships spend millions annually on third-party risk management alone.
How FHE Eliminates Third-Party Data Access
When a bank uses an FHE-based AI system for credit decisioning, fraud detection, or customer analytics, the bank encrypts customer data before it reaches the AI vendor. The AI vendor processes encrypted data and returns encrypted results. The bank decrypts the results locally.
The AI vendor never accesses customer financial information. It processes ciphertext that it cannot interpret. From a regulatory perspective, the vendor is not a third party that handles customer data. It is a computation provider that operates on encrypted inputs and produces encrypted outputs. The data never exits the bank's encryption boundary.
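To make the encryption boundary concrete, the sketch below (again using TenSEAL as a stand-in, not H33's API, with placeholder values) shows what actually crosses it: a serialized copy of the encryption context without the secret key, plus the ciphertext. Everything the vendor receives can be evaluated but not decrypted.

```python
import tenseal as ts

# --- Bank side: the full context, including the secret key, stays in the bank ---
bank_ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
bank_ctx.global_scale = 2 ** 40
bank_ctx.generate_galois_keys()

customer_features = [0.61, 0.08, 0.35, 0.77]          # encoded financial attributes
enc_features = ts.ckks_vector(bank_ctx, customer_features)

# Only the public portion of the context and the ciphertext cross the boundary.
context_bytes = bank_ctx.serialize(save_secret_key=False)
ciphertext_bytes = enc_features.serialize()

# --- Vendor side: can run the model, cannot decrypt anything ---
vendor_ctx = ts.context_from(context_bytes)
vendor_input = ts.ckks_vector_from(vendor_ctx, ciphertext_bytes)
enc_score = vendor_input.dot([0.4, -0.1, 0.3, 0.2])   # placeholder credit model

# --- Bank side: relink the returned ciphertext to the full context and decrypt ---
score = ts.ckks_vector_from(bank_ctx, enc_score.serialize()).decrypt()[0]
print(f"credit score component: {score:.4f}")
```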
This changes the risk profile that OCC Bulletin 2023-17 is designed to manage. The bulletin's requirements exist because third-party access to customer data creates risks: data breach risk, data misuse risk, availability risk, and compliance risk. When the third party cannot access the data, the data breach and data misuse risks are eliminated by the encryption. The remaining risks -- availability and service performance -- are standard vendor management concerns that apply to any service provider, including non-data-handling vendors like network equipment manufacturers or facilities providers. These risks are managed through standard commercial contracts, not through the heightened third-party risk management framework that data access triggers.
For a bank deploying five AI models through traditional vendors, the annual third-party risk management cost is typically $500,000 to $1.5 million when you combine initial assessments, ongoing monitoring, legal review, and internal compliance overhead. With FHE, the bank eliminates the data-access component of those costs entirely. The vendor relationships still exist, but they are managed as standard technology procurement, not as high-risk third-party data relationships.
Insurance: Transparency Without Exposure
State insurance regulators and the NAIC have issued model bulletins on AI that require insurers to demonstrate transparency about how AI is used in underwriting and claims adjudication. Colorado's SB 21-169, one of the most comprehensive state AI regulations for insurance, requires insurers to conduct testing to ensure that AI systems do not produce unfairly discriminatory outcomes. The NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted in December 2023, requires insurers to maintain documentation of the data used by AI systems and the outcomes those systems produce.
These requirements create a tension. Regulators want to know what data the AI system is using and what decisions it is making. Insurers want to use AI for claims adjudication, underwriting risk assessment, and fraud detection. But the data involved -- policyholder health records, financial information, claims histories -- is highly sensitive. Sharing it with AI vendors creates the same data-access problems that arise in healthcare and banking.
FHE resolves this tension by allowing the insurer to demonstrate what the AI system does without revealing the data it operates on. The AI system processes encrypted policyholder data. It produces encrypted decisions -- approve the claim, deny the claim, flag for review, assign a risk tier. Each decision is accompanied by an H33-74 attestation that cryptographically proves the computation was performed correctly: the specified model was applied, the specified rules were evaluated, and the output corresponds to the input.
When a regulator asks the insurer to demonstrate that its AI system does not produce unfairly discriminatory outcomes, the insurer can provide the attestation chain. The attestation proves that every decision was produced by the same model, applied to data of the same structure, evaluated against the same rules. The insurer can demonstrate statistical properties of the decision distribution -- approval rates, denial rates, flag rates -- without revealing any individual policyholder's data to the regulator.
If the regulator requires deeper analysis, the insurer can provide encrypted test data and allow the regulator to verify the AI system's behavior on controlled inputs. The regulator submits encrypted test cases, the AI system processes them, and the regulator decrypts the results to verify that the system behaves as claimed. At no point does the AI vendor see the test data or the results. The verification is between the insurer and the regulator, with the AI vendor serving only as a computation engine that operates on ciphertext.
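The same mechanics support that regulator-facing flow. In the compact sketch below (TenSEAL as a stand-in, with a placeholder model), the regulator, not the insurer or the vendor, generates the keys, so neither of the other parties can read the test cases or the results.

```python
import tenseal as ts

# --- Regulator side: its own context and keys, generated locally ---
reg_ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
reg_ctx.global_scale = 2 ** 40
reg_ctx.generate_galois_keys()

synthetic_applicant = [0.3, 0.9, 0.1]                 # controlled test input
enc_test_case = ts.ckks_vector(reg_ctx, synthetic_applicant)

# --- Vendor side: evaluates the approved model on ciphertext it cannot read ---
enc_decision = enc_test_case.dot([0.5, 0.2, 0.3])     # placeholder model weights

# --- Regulator side: decrypts locally and compares against expected behavior ---
print(f"decision score: {enc_decision.decrypt()[0]:.4f}")
```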
The Legal Savings: A Concrete Accounting
The cost of deploying AI in regulated industries is not primarily a technology cost. It is a legal and compliance cost. Here is what a typical enterprise spends to deploy a single AI model through a traditional vendor relationship in a regulated industry.
BAA or DPA negotiation: $30,000 to $100,000 in legal fees, depending on the complexity of the agreement and the number of negotiation rounds. Timeline: three to six months.
Vendor risk assessment: $50,000 to $150,000 for the initial assessment, including security questionnaires, on-site audits, penetration testing review, and documentation. Timeline: two to four months, often running in parallel with BAA negotiation.
Internal compliance review: $20,000 to $50,000 for the compliance team to review the vendor relationship, update risk registers, modify policies and procedures, and prepare board reporting materials. Timeline: one to two months.
Ongoing monitoring: $25,000 to $75,000 per year for annual reassessments, continuous monitoring services, compliance audit support, and incident response coordination.
Breach response preparation: $10,000 to $30,000 per vendor for developing breach response plans that account for the vendor relationship, conducting tabletop exercises, and maintaining breach notification templates and contact lists.
Total first-year cost for a single AI vendor relationship in a regulated industry: $135,000 to $405,000. Ongoing annual cost: $35,000 to $105,000. For an enterprise deploying AI through five vendors, the first-year legal and compliance cost alone is $675,000 to $2 million, with ongoing costs of $175,000 to $525,000 per year.
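If you want to adapt these figures to your own vendor count, the totals are simple to recompute. The snippet below reproduces the ranges above (the ongoing figure assumes the monitoring and breach-preparation items recur each year, which is how the stated range is composed):

```python
# Per-vendor cost items from the accounting above, as (low, high) dollar ranges.
first_year_items = {
    "BAA/DPA negotiation":         (30_000, 100_000),
    "Vendor risk assessment":      (50_000, 150_000),
    "Internal compliance review":  (20_000, 50_000),
    "Ongoing monitoring (yr 1)":   (25_000, 75_000),
    "Breach response preparation": (10_000, 30_000),
}
first_year = [sum(bounds) for bounds in zip(*first_year_items.values())]
ongoing = [25_000 + 10_000, 75_000 + 30_000]          # monitoring + breach prep

vendors = 5
print(f"First year, one vendor:      ${first_year[0]:,} - ${first_year[1]:,}")
print(f"First year, {vendors} vendors:       ${first_year[0] * vendors:,} - ${first_year[1] * vendors:,}")
print(f"Ongoing per year, {vendors} vendors: ${ongoing[0] * vendors:,} - ${ongoing[1] * vendors:,}")
```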
With FHE, these costs are eliminated. Not reduced. Eliminated. The AI vendor does not access regulated data, so it does not trigger the legal requirements that create these costs. The enterprise deploys the AI model through a standard technology procurement process -- the same process it uses to buy servers, network equipment, or cloud compute capacity. Standard commercial terms. Standard security review. No BAA. No DPA. No vendor risk assessment for data handling. No ongoing compliance monitoring of data protection practices.
Traditional AI Deployment vs. H33 FHE Deployment
| Dimension | Traditional AI Deployment | H33 FHE Deployment |
|---|---|---|
| Legal agreements required | BAA, DPA, vendor risk assessment, custom data protection terms | Standard commercial terms only |
| Time to deploy | 3-6 months (legal negotiation) | Days to weeks (technical integration only) |
| First-year legal cost per vendor | $135,000 - $405,000 | $0 (no data-access agreements needed) |
| Ongoing monitoring cost per vendor/year | $35,000 - $105,000 | $0 (no data-handling to monitor) |
| Vendor breach liability | Contractual indemnification (capped, negotiated) | No breach surface -- vendor holds only ciphertext |
| Breach notification risk | Yes -- vendor breach may trigger notification to patients, customers, regulators | No -- ciphertext breach does not compromise regulated data |
| Regulatory audit burden | Must document vendor data handling, produce BAAs, demonstrate monitoring | Vendor not a data processor -- standard vendor documentation only |
| Data residency compliance | Must ensure vendor processes data in compliant jurisdictions | Data never leaves encryption boundary -- jurisdiction irrelevant |
| Proof of correct processing | Trust vendor's representations, audit logs, SOC 2 reports | H33-74 cryptographic attestation -- mathematical proof per operation |
H33-74 Attestation: Replacing Contractual Trust with Mathematical Proof
The traditional model for trust in vendor relationships is contractual. You sign a BAA. The BAA says the vendor will protect your data. If the vendor fails to protect your data, you have a breach of contract claim. The contract allocates liability, typically with caps that are a fraction of the contract value. The vendor carries cyber insurance. You carry cyber insurance. Both parties hope nobody needs to file a claim.
This is not trust. This is liability allocation. The contract does not prevent breaches. It does not ensure correct processing. It does not prove that the vendor did what it said it would do. It creates a legal framework for assigning blame after something goes wrong.
H33-74 attestation replaces this model with mathematical proof. Every computation performed on encrypted data produces a 74-byte attestation -- a cryptographic proof that the specified computation was performed on the specified input and produced the specified output. The attestation is post-quantum secure, signed with three independent cryptographic families built on three independent hardness assumptions. Any party can verify the attestation without accessing the underlying data.
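The exact H33-74 field layout and signature construction are outside the scope of this post, but the shape of verification can be sketched. Everything below is purely illustrative: hypothetical field names, a plain SHA-256 commitment, and a caller-supplied signature check stand in for the real post-quantum construction.

```python
import hashlib
from dataclasses import dataclass

def commit(blob: bytes) -> bytes:
    """Commitment to a blob (plain SHA-256 here; the real scheme may differ)."""
    return hashlib.sha256(blob).digest()

@dataclass
class Attestation:                # hypothetical structure, not the H33-74 layout
    model_id: bytes               # identifier of the model that was applied
    input_commitment: bytes       # commitment to the input ciphertext
    output_commitment: bytes      # commitment to the output ciphertext
    signature: bytes              # signature over the fields above

def verify(att: Attestation, input_ct: bytes, output_ct: bytes,
           approved_model_id: bytes, check_signature) -> bool:
    """Confirm this output came from this input, under the approved model."""
    return (
        att.model_id == approved_model_id
        and att.input_commitment == commit(input_ct)
        and att.output_commitment == commit(output_ct)
        and check_signature(att)  # caller supplies the signature verification
    )
```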
For a hospital, this means every clinical decision support query produces a verifiable proof that the correct model was applied to the patient's encrypted data and produced the result the hospital received. The hospital does not need to trust the vendor's representations. It verifies mathematically.
For a bank, this means every credit decision, every fraud score, every customer risk assessment produces a verifiable proof of correct execution. The bank's regulators can audit the attestation chain without accessing customer data. The proof is independent of the vendor's assertions.
For an insurer, this means every underwriting decision and every claims adjudication produces a verifiable proof that the decision was produced by the approved model, applied to the correct data, evaluated against the current rules. The state regulator can verify the attestation without the insurer having to expose policyholder data during the examination.
Contractual trust says: "We promise to protect your data, and if we fail, here is how we allocate liability."
Cryptographic trust says: "We never had your data, and here is a mathematical proof that we processed it correctly anyway."
Data Residency and Cross-Border Considerations
One of the most expensive complications in deploying AI across regulated industries is data residency. GDPR restricts transfers of EU residents' personal data to jurisdictions that lack adequate data protection unless an approved transfer mechanism is in place. HIPAA does not impose explicit data residency requirements, but many healthcare organizations adopt data residency policies as part of their risk management programs. Banking regulators in multiple jurisdictions impose data localization requirements for customer financial data.
These requirements constrain where AI infrastructure can be deployed. A hospital system in the EU cannot send patient data to an AI vendor's servers in the United States without a Data Processing Agreement that includes Standard Contractual Clauses or another GDPR-approved transfer mechanism. A bank in Singapore cannot process customer data through AI infrastructure hosted in a non-approved jurisdiction without regulatory approval.
FHE eliminates data residency as a constraint on AI deployment. When the AI vendor processes only ciphertext, the regulated data never leaves the organization's encryption boundary. The ciphertext that travels to the vendor's servers in any jurisdiction is not personal data, not PHI, not customer financial information. It is noise. The underlying data remains encrypted under keys held by the regulated entity in its home jurisdiction.
This means a hospital in Germany can use an AI vendor whose compute infrastructure is in the United States without triggering GDPR cross-border transfer requirements. The personal data of German patients is never transferred to the United States. Only ciphertext is transferred, and ciphertext is not personal data because no party outside the hospital can derive personal data from it.
Similarly, a bank in Singapore can use AI infrastructure hosted in any jurisdiction without seeking regulatory approval for cross-border data transfer. The customer data remains encrypted under the bank's keys. The jurisdiction where the computation occurs is irrelevant because the computation occurs on data that cannot be accessed in that jurisdiction.
Ongoing Monitoring: From Continuous to Unnecessary
Perhaps the most underappreciated cost of deploying AI through traditional vendor relationships is ongoing monitoring. Regulatory guidance across healthcare, banking, and insurance requires regulated entities to continuously monitor their third-party relationships. This includes annual risk reassessments, review of the vendor's SOC 2 or ISO 27001 certifications, evaluation of the vendor's incident response performance, and verification that the vendor's security practices remain adequate.
Ongoing monitoring is expensive not because any individual activity is complex, but because the activities never end and they multiply with every vendor relationship. A bank with ten AI vendor relationships that each require annual reassessment is conducting ten assessments per year, perpetually. Each assessment involves coordinating with the vendor, reviewing updated security documentation, evaluating any changes in the vendor's risk profile, and documenting the results for regulatory examination.
With FHE, the monitoring obligation for data protection evaporates. You still monitor the vendor's service availability, performance, and commercial terms -- the same monitoring you perform for any technology vendor. But you do not monitor the vendor's data protection practices because the vendor does not handle your data. The vendor's SOC 2 report is still relevant for operational reliability, but it is no longer relevant for data protection assurance. The vendor could have the worst data protection practices in the industry, and your regulated data would remain secure because the vendor never possesses it in accessible form.
This shifts the compliance model from continuous monitoring of vendor behavior to one-time verification of cryptographic guarantees. You verify once that the FHE implementation is correct -- that the encryption scheme provides the claimed security properties, that the homomorphic operations produce correct results, and that the key management architecture keeps private keys within your control. After that verification, the security guarantee holds as long as the underlying mathematical assumptions hold. You do not need to re-verify every year because mathematics does not change between audit cycles.
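In practice that one-time verification can be as simple as an acceptance test: run the homomorphic pipeline on known inputs and confirm it reproduces the plaintext computation. A minimal sketch, using TenSEAL's CKKS scheme as a stand-in for whatever FHE stack is deployed (CKKS is approximate, so the comparison uses a tolerance):

```python
import tenseal as ts

def ckks_context():
    ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
    ctx.global_scale = 2 ** 40
    ctx.generate_galois_keys()
    return ctx

def verify_pipeline(test_vectors, weights, tolerance=1e-3):
    """Check that encrypted inference matches plaintext inference on known inputs."""
    ctx = ckks_context()
    for features in test_vectors:
        expected = sum(f * w for f, w in zip(features, weights))    # plaintext
        actual = ts.ckks_vector(ctx, features).dot(weights).decrypt()[0]
        assert abs(actual - expected) < tolerance, (expected, actual)
    return True

verify_pipeline(
    test_vectors=[[0.1, 0.2, 0.3], [1.0, -0.5, 0.25]],
    weights=[0.4, 0.4, 0.2],
)
```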
What This Does Not Solve
FHE eliminates data-access-triggered legal requirements. It does not eliminate all regulatory requirements for AI in regulated industries.
Model governance requirements remain. Regulators increasingly require that AI models used in regulated decisions be validated, tested for bias, and documented. FHE does not change these requirements. The model still needs to be appropriate for its use case, tested for fairness, and documented for regulatory examination. FHE changes where the model runs and what data it can access, but it does not change the requirement to use a good model.
Algorithmic transparency requirements remain. When a regulator asks how an AI system makes decisions, the insurer or bank must be able to explain the model's methodology. FHE does not provide model interpretability. It provides data protection during computation. The organization still needs to understand and be able to explain its AI systems to regulators.
Consumer rights requirements remain. HIPAA gives patients the right to access their records. GDPR gives data subjects the right to access, correct, and delete their personal data. CCPA gives consumers similar rights. These rights apply to the data held by the regulated entity, not to the ciphertext processed by the AI vendor. The regulated entity must still fulfill these rights through its own systems.
What FHE eliminates is the layer of legal and compliance overhead that exists specifically because data is shared with a vendor. The BAAs, the DPAs, the vendor risk assessments, the ongoing monitoring, the breach response coordination, the contractual liability negotiations -- all of these exist because the vendor accesses regulated data. Remove the access, and this entire layer disappears.
The Practical Path
For organizations in healthcare, banking, or insurance that want to deploy AI without the data sharing bottleneck, the path is straightforward. First, identify the AI use cases where the data sensitivity is the deployment barrier. These are typically the highest-value use cases -- the ones where the data is most regulated and the AI benefit is most significant. Clinical decision support on patient data. Credit risk assessment on customer financial records. Claims adjudication on policyholder health information.
Second, evaluate the computation requirements. FHE supports a broad range of operations -- arithmetic, comparisons, lookups, conditional logic, matrix operations, neural network inference -- but the computation must be expressed in terms of operations that FHE can perform homomorphically. Most standard AI inference workloads can be expressed this way. H33's BFV scheme handles integer arithmetic at production speeds. CKKS handles floating-point inference. TFHE handles Boolean logic and discrete decisions.
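As a rough illustration of that scheme distinction, the sketch below uses the open-source TenSEAL library rather than H33's implementations: BFV returns exact integers, CKKS returns approximate reals. TFHE-style Boolean evaluation is provided by other open-source toolkits (for example, Zama's Concrete) and is not shown.

```python
import tenseal as ts

# BFV: exact integer arithmetic (counts, codes, fixed-point amounts)
bfv_ctx = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096,
                     plain_modulus=1032193)
claims = ts.bfv_vector(bfv_ctx, [3, 0, 7])            # e.g. encrypted claim counts
print((claims + [1, 1, 1]).decrypt())                 # -> [4, 1, 8], exact

# CKKS: approximate floating-point arithmetic (weights, scores, probabilities)
ckks_ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                      coeff_mod_bit_sizes=[60, 40, 40, 60])
ckks_ctx.global_scale = 2 ** 40
ckks_ctx.generate_galois_keys()
score = ts.ckks_vector(ckks_ctx, [0.2, 0.5, 0.3]).dot([1.0, 2.0, 3.0])
print(score.decrypt()[0])                             # approximately 2.1
```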
Third, deploy. Without a BAA negotiation. Without a DPA. Without a vendor risk assessment for data handling. Without ongoing monitoring of the vendor's data protection practices. The AI vendor processes ciphertext. Your data stays encrypted. Your legal team can focus on actual legal work instead of negotiating the same data protection clauses for the twentieth time this year.
The bottleneck in deploying AI in regulated industries has never been the AI. It has been the data sharing. FHE eliminates the data sharing. The AI deploys.
Deploy AI Without the Legal Bottleneck
See how FHE eliminates BAAs, DPAs, and vendor risk assessments for AI deployment in healthcare, banking, and insurance. Schedule a technical walkthrough with the H33 team.
Schedule a Demo