The Benchmark
The number requires context because FHE benchmarks have historically been unreliable. Vendors report micro-operation throughput — individual additions or multiplications on encrypted integers — which inflates the numbers by orders of magnitude. A single "operation" in those benchmarks might be one modular addition on a ciphertext polynomial. Performing a useful computation — like comparing two encrypted biometric templates or evaluating a fraud detection model on encrypted transaction data — requires thousands of those micro-operations. The gap between reported micro-op throughput and real-world application throughput has been the primary source of confusion in the FHE market.
The 190 billion figure is different. Each operation in this benchmark is a full-lifecycle operation. That means: accept an encrypted input, perform the computation on ciphertext (including key switching, relinearization, and noise management as needed), produce an encrypted result, and verify the result integrity. This is not a count of individual polynomial multiplications. It is a count of complete, end-to-end encrypted computations. The type of computation varies by engine — BFV integer operations, CKKS floating-point operations, TFHE boolean gate evaluations — but each one represents a full unit of useful work on encrypted data. For full benchmark details, see the H33 Benchmarks page.
The sustained throughput is 2,209,429 operations per second. The key word is "sustained" — this is not a peak burst rate measured over milliseconds. It is the steady-state throughput measured over extended periods under production-representative load. The latency per operation is 38.5 microseconds end-to-end, which includes the computation, key management, and integrity verification. The hardware is a single ARM-based compute node — not a cluster, not a GPU farm, not specialized FHE accelerator hardware. Commodity cloud compute.
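The headline figures are internally consistent, and the arithmetic is worth checking directly. A brief sketch (using only the numbers stated above — the concurrency figure is derived, not reported):

```python
# Sanity-check the headline figures: sustained throughput, daily volume,
# and the concurrency implied by the per-operation latency.
ops_per_sec = 2_209_429          # sustained throughput (reported)
latency_s = 38.5e-6              # end-to-end latency per operation (reported)

ops_per_day = ops_per_sec * 86_400
print(f"{ops_per_day:.3e} ops/day")      # ~1.909e11, i.e. ~190 billion

# Throughput times latency gives the average number of operations in
# flight (Little's law): the node sustains roughly 85 concurrent ops.
in_flight = ops_per_sec * latency_s
print(f"~{in_flight:.0f} operations in flight")
```

The derived in-flight count shows why latency and throughput are separate claims: 38.5 microseconds is the time one operation takes, while 2.2 million per second reflects many operations proceeding in parallel on the node.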
What Each Operation Includes
The definition of "operation" matters enormously for understanding the insurance implications. In the H33 architecture, a full-lifecycle operation includes six distinct phases:
1. Input reception. The encrypted input arrives at the compute endpoint. The input is already FHE-encrypted by the client using parameters that match the server's evaluation key. The server never possesses the decryption key.
2. Computation. The server performs the requested computation on the ciphertext. This may be an encrypted comparison (biometric matching), an encrypted evaluation (fraud score), an encrypted search (database query), or an encrypted inference (ML model). The computation is performed homomorphically — the mathematical properties of the encryption scheme allow the computation to proceed on ciphertext as if it were plaintext.
3. Noise management. Every FHE computation adds noise to the ciphertext. If the noise exceeds the scheme's tolerance, the result becomes undecryptable. Noise management operations — bootstrapping in TFHE, modulus switching in BFV/CKKS — are performed as needed to keep the noise within bounds. This is the most computationally expensive phase and the primary reason FHE has historically been slow.
4. Key switching. When different parts of a computation use different evaluation keys (for example, when combining results from multiple encrypted inputs), key switching operations convert between key spaces without decrypting. This enables multi-party computation and cross-tenant analysis on encrypted data.
5. Result production. The encrypted result is packaged for return to the client. The result is encrypted under the client's key and can only be decrypted by the client. The server has performed the computation without ever seeing the data or the result in plaintext.
6. Integrity verification. The result includes a cryptographic proof that the computation was performed correctly on the provided input. This prevents a malicious server from returning fabricated results. The client can verify that the server performed the claimed computation without trusting the server.
All six phases are included in the 38.5-microsecond latency and the 2.2 million ops/sec throughput. This is the honest measure of FHE performance — the time from encrypted input to verified encrypted output.
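The six phases can be sketched as a single function. This is a minimal illustration of the lifecycle described above, not the H33 API: every name here (`Ciphertext`, `full_lifecycle_op`, the noise constants, the proof tag) is a hypothetical stand-in, and the "encryption" is a placeholder so the control flow is runnable.

```python
# Illustrative sketch of the six-phase full-lifecycle operation.
# All names are stand-ins, not the H33 API; "ciphertext" here is a
# placeholder payload plus a tracked noise budget.
from dataclasses import dataclass

@dataclass
class Ciphertext:
    payload: int      # stand-in for an encrypted polynomial
    noise: float      # fraction of the noise budget consumed (0.0 - 1.0)

NOISE_LIMIT = 1.0     # beyond this, the result would be undecryptable

def full_lifecycle_op(ct_a: Ciphertext, ct_b: Ciphertext) -> tuple[Ciphertext, str]:
    # 1. Input reception: the server holds only ciphertext, never a secret key.
    # 2. Computation: a homomorphic operation on ciphertext (placeholder add),
    #    which also accumulates noise.
    result = Ciphertext(ct_a.payload + ct_b.payload, ct_a.noise + ct_b.noise + 0.2)
    # 3. Noise management: bootstrap (reset the noise) only when near the limit,
    #    since this is the most expensive phase.
    if result.noise > 0.8 * NOISE_LIMIT:
        result.noise = 0.1
    # 4. Key switching would convert between evaluation keys here (omitted).
    # 5. Result production: the ciphertext is returned still encrypted.
    # 6. Integrity verification: attach a proof of correct computation
    #    (placeholder tag standing in for a cryptographic proof).
    proof = f"proof-of-computation:{result.payload % 97}"
    return result, proof
```

A call like `full_lifecycle_op(Ciphertext(3, 0.3), Ciphertext(4, 0.4))` triggers the bootstrap branch, mirroring the point in phase 3 that noise management runs only as needed rather than on every operation.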
Comparison to Global Financial Volume
To contextualize 190 billion operations per day: the global financial system processes approximately 10 billion transactions per day across all payment networks, stock exchanges, settlement systems, and banking platforms combined. This includes Visa (approximately 700 million transactions/day), Mastercard (approximately 400 million), SWIFT (approximately 46 million), and all other financial transaction networks.
A single H33 compute node processes 19 times that volume in encrypted operations per day. This means every financial transaction on Earth could be processed on encrypted data — with the data never becoming plaintext — using less than 6% of a single node's capacity. The remaining capacity, more than 94%, is available for everything else: compliance checks, fraud detection, identity verification, risk scoring, and any other computation that currently requires decrypting sensitive data.
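The comparison reduces to two divisions, using the approximate volumes stated above:

```python
# The capacity comparison in the text, as arithmetic.
h33_ops_per_day = 190e9          # single-node FHE operations per day
global_txns_per_day = 10e9       # approximate global financial transactions/day

multiple = h33_ops_per_day / global_txns_per_day
utilization = global_txns_per_day / h33_ops_per_day
print(multiple)                  # 19.0 — node capacity vs. global volume
print(f"{utilization:.1%}")      # ~5.3% — under 6% of node capacity
```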
This comparison is not about financial transactions specifically. It is about demonstrating that the scale of FHE computation is no longer a constraint for any real-world application. Every use case that currently operates on plaintext data can now operate on encrypted data without performance degradation that matters at human timescales. Coverage in the National Law Review noted this threshold as significant for regulated industries, where encrypted computation enables new compliance architectures.
The Four-Engine Architecture
The performance comes from H33's four-engine FHE architecture. Each engine is optimized for a different class of computation, and the FHE-IQ system automatically routes operations to the optimal engine based on the computation requirements.
BFV (Brakerski/Fan-Vercauteren): Optimized for exact integer arithmetic on encrypted data. Used for database queries, counting operations, comparisons, and integer-based analytics. The H33-128 engine operates at 128-bit security with optimized NTT (Number Theoretic Transform) performance. Applications: encrypted search, access control evaluation, compliance counting.
CKKS (Cheon-Kim-Kim-Song): Optimized for approximate floating-point arithmetic on encrypted data. Used for machine learning inference, statistical analysis, risk scoring, and any computation involving real numbers. The H33-CKKS engine provides configurable precision (16–128 bit mantissa equivalent) with automatic rescaling. Applications: encrypted ML inference for fraud detection, encrypted actuarial computation, encrypted risk scoring.
BFV-32: A specialized 32-bit integer variant optimized for high-throughput batch processing. Used for bulk verification operations, batch authentication, and high-volume screening. The engine processes 8 million encrypted authentications per second in batch mode. Applications: policyholder verification at scale, bulk compliance checks, mass screening operations.
TFHE (Torus FHE): Optimized for boolean circuit evaluation on encrypted data. Used for arbitrary program execution on encrypted inputs, including complex branching logic, conditional operations, and programmable encrypted computation. The H33-TFHE engine provides programmable bootstrapping with sub-millisecond gate evaluation. Applications: encrypted policy evaluation, encrypted claims rules processing, encrypted underwriting logic.
The four-engine architecture means that no single FHE scheme bottlenecks the system. Integer operations run on BFV. Floating-point runs on CKKS. Boolean logic runs on TFHE. High-volume batch processing runs on BFV-32. The FHE-IQ router selects the optimal engine automatically based on the operation type, input characteristics, and required precision. This is why a single node can sustain 2.2 million operations per second across diverse workloads — each operation runs on the engine best suited for it.
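The routing logic described above can be sketched as a simple dispatch function. The engine names and routing criteria come from the text; the function itself and its signature are illustrative, not the FHE-IQ API.

```python
# Illustrative sketch of four-engine routing. The selection criteria
# mirror the text (operation type plus batch mode); the real router also
# considers input characteristics and required precision.
def route_engine(op_type: str, batch: bool = False) -> str:
    if op_type == "integer":
        return "BFV-32" if batch else "BFV"   # exact integer arithmetic
    if op_type == "float":
        return "CKKS"                         # approximate real-number math
    if op_type == "boolean":
        return "TFHE"                         # boolean circuit evaluation
    raise ValueError(f"unknown operation type: {op_type}")

# Workload classes from the text mapped to their engines:
assert route_engine("integer") == "BFV"                  # encrypted search
assert route_engine("integer", batch=True) == "BFV-32"   # bulk verification
assert route_engine("float") == "CKKS"                   # encrypted ML inference
assert route_engine("boolean") == "TFHE"                 # encrypted policy rules
```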
Why FHE Performance Was the Blocker
For the past 15 years, FHE has been a theoretical promise without a practical implementation. The mathematics have been sound since Craig Gentry's breakthrough in 2009. The security properties were exactly what regulated industries needed. But the performance was prohibitive. Early FHE implementations measured operation latency in minutes. By 2020, leading implementations measured in seconds. By 2023, the best implementations measured in hundreds of milliseconds. Every generation was an improvement, but none crossed the threshold from academic curiosity to production deployment.
The threshold for production deployment in insurance applications is approximately 100 milliseconds per operation. At that latency, an encrypted fraud check on a wire transfer adds a sub-perceptible delay. An encrypted identity verification at a claims portal adds a barely noticeable pause. An encrypted risk assessment during a quote request completes before the agent finishes the next question. Above 100 milliseconds, the delay becomes a user experience problem. Above one second, it becomes a workflow problem. Above 10 seconds, it becomes a deployment blocker.
H33's 38.5-microsecond latency is 2,597 times faster than the 100-millisecond production threshold. It is not on the edge of viability. It is orders of magnitude past it. The performance overhead of encrypted computation relative to plaintext computation is now measurable in microseconds — a difference that is invisible to users, invisible to workflows, and invisible to SLA measurements. The performance blocker is completely eliminated.
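The headroom figure follows from one division of the two latencies stated above:

```python
# The latency headroom claim, checked directly.
threshold_s = 100e-3     # ~100 ms production-viability threshold
latency_s = 38.5e-6      # measured end-to-end latency

headroom = threshold_s / latency_s
print(round(headroom))   # 2597 — matches the "2,597 times faster" figure
```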
Insurance Applications
With the performance barrier removed, FHE enables a new category of insurance applications that were previously impossible.
Encrypted Policyholder Verification
Currently, when a carrier verifies a policyholder's identity during claims intake, the policyholder's PII is decrypted at the verification endpoint. If that endpoint is compromised, the PII is exposed. With FHE, the verification runs on encrypted data. The carrier's system compares encrypted credentials against encrypted records. The match/no-match result is produced without any endpoint possessing plaintext PII. A breach of the verification endpoint yields ciphertext.
Encrypted Fraud Detection
Insurance fraud detection models currently operate on plaintext claims data. The model ingests claimant information, claim history, medical records, repair estimates — all decrypted for analysis. With FHE, the fraud detection model runs on encrypted claims data. The model evaluates encrypted inputs and produces an encrypted fraud score that is decrypted only by the authorized claims handler. No intermediate system sees the plaintext data. No database of claims history exists in decrypted form.
Encrypted Multi-Carrier Data Sharing
Carriers need to share claims data for fraud ring detection and aggregate loss modeling, but data sharing between competitors raises antitrust and privacy concerns. With FHE, carriers can submit encrypted claims data to a shared analytics platform. The platform performs cross-carrier analysis — identifying patterns, correlating claim histories, detecting fraud rings — without any carrier seeing another carrier's data. The analysis results are encrypted per carrier. Each carrier sees only the results relevant to their book. This enables industry-level fraud detection without industry-level data exposure.
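The scoping property that makes this viable — cross-carrier correlation without cross-carrier visibility — can be sketched as follows. Everything here is a toy stand-in: the "encryption" is a per-carrier key tag, the fraud-ring signal is a trivial value match, and none of these names correspond to the H33 platform.

```python
# Toy sketch of multi-carrier analysis with per-carrier result scoping.
# Each integer stands in for an encrypted claim record; in the real system
# the platform would see only ciphertext throughout.
def cross_carrier_analysis(submissions: dict[str, list[int]]) -> dict[str, str]:
    all_claims = [(c, v) for c, claims in submissions.items() for v in claims]
    flagged: dict[str, list[int]] = {}
    for carrier, value in all_claims:
        # Placeholder correlation: a claim value seen under more than one
        # carrier stands in for a cross-carrier fraud-ring pattern.
        carriers_seen = {c for c, v in all_claims if v == value}
        if len(carriers_seen) > 1:
            flagged.setdefault(carrier, []).append(value)
    # Each carrier receives only results scoped to its own book,
    # "encrypted" under its own key (tag placeholder).
    return {c: f"enc[{c}]:{sorted(set(v))}" for c, v in flagged.items()}
```

With `{"A": [101, 202], "B": [202, 303]}` as input, both carriers are alerted to the shared claim 202, but neither output contains the other carrier's unshared records — the scoping behaviour the text describes.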
Encrypted Underwriting Risk Assessment
When a broker submits a new business application, the underwriter evaluates the risk using proprietary models. The policyholder's data is processed by the underwriter's systems in plaintext. With FHE, the underwriter's risk model can run on encrypted policyholder data. The policyholder submits encrypted information. The underwriter's model produces an encrypted risk score. The underwriter sees the score and the pricing recommendation without seeing the underlying data. This is particularly powerful for sensitive data categories: employee health information for group policies, detailed financial records for D&O policies, or trade secrets for intellectual property policies.
Encrypted Claims Adjudication
Claims adjudication involves the most sensitive data in the insurance lifecycle — medical records, financial records, incident reports, law enforcement communications. Currently, this data flows through multiple parties (carrier, TPA, adjuster, defense counsel, subrogation counsel) in plaintext. Each handoff is an exposure point. With FHE, claims documents can be processed by each party on encrypted data. The adjuster evaluates encrypted medical records against encrypted policy terms. Defense counsel reviews encrypted incident reports. The data is never decrypted outside the authorized party's secure environment.
The Data-in-Use Problem in Insurance
The insurance industry has invested heavily in data-at-rest encryption and data-in-transit encryption. Databases are encrypted. Network connections use TLS. Data centers have physical security. But the data-in-use problem remains unsolved across the industry. Every time a claims adjuster opens a file, the data is decrypted. Every time a fraud model processes a claim, the data is decrypted. Every time an underwriter evaluates a submission, the data is decrypted. The decryption creates the vulnerability. The vulnerability creates the claim exposure. The claim exposure drives the premium.
FHE at 190 billion operations per day eliminates the data-in-use problem entirely. The data is never decrypted for processing. The processing happens on ciphertext. The attack surface for data exfiltration collapses because there is no plaintext to exfiltrate. A compromised claims processing server yields encrypted records that cannot be decrypted without keys that never existed on the server.
For carriers evaluating their own cyber risk — and for carriers that purchase reinsurance based on their data protection posture — FHE adoption changes the risk profile fundamentally. The reinsurer evaluating a carrier that processes all claims data on FHE-encrypted systems sees a different risk than one evaluating a carrier that decrypts data for processing. The expected loss from a carrier-level data breach drops by the same factor identified in the breach yield analysis: approximately 18x.
The Regulatory Tailwind
Insurance regulators are beginning to recognize the implications of encrypted computation. The NAIC Cybersecurity Working Group has identified data-in-use protection as a future regulatory focus area. State insurance departments in New York, California, and Colorado have issued guidance that explicitly encourages adoption of advanced encryption technologies for policyholder data protection. The EU's DORA regulation (Digital Operational Resilience Act) creates requirements for financial entities — including insurers — that align with encrypted processing capabilities.
The HIPAA safe harbor provision is particularly relevant for health insurers and healthcare policyholders. Under 45 CFR 164.402, a breach of encrypted data does not require notification if the encryption meets specified standards. This safe harbor applies to data at rest and in transit. With FHE, the safe harbor extends to data in use — because the data was never decrypted. A health insurer that processes claims on FHE-encrypted data qualifies for safe harbor protection across the entire data lifecycle, not just storage and transmission.
The regulatory trajectory is clear: encrypted processing will move from optional best practice to expected standard of care. The carriers and policyholders that adopt now will be ahead of the regulatory curve. The carriers that wait will face the same scramble that accompanied MFA adoption when carriers mandated it in 2023–2024.
What 190 Billion Means for HATS
The HATS attestation system benefits directly from this performance capability. Every attestation in the HATS framework is signed with post-quantum cryptographic algorithms. The verification of those attestations involves cryptographic operations that benefit from the same optimized engine architecture. At 190 billion operations per day, the HATS system can process attestations for millions of policyholders simultaneously without performance constraints.
More importantly, the FHE capability enables encrypted attestation processing. The control state data derived from policyholder security tools can be encrypted before analysis. The risk assessment computed from that data can run on encrypted inputs. The carrier sees the risk score and the quote readiness signal without seeing the raw control data. This preserves policyholder privacy while providing the carrier with the verified risk signal they need for underwriting.
The Bottom Line
190 billion FHE operations per day means the performance excuse is over. Every insurance application that currently processes plaintext data can now process encrypted data at production speed. The question is no longer "can we do this?" It is "why are we still decrypting?" The carriers that answer that question first will build the most defensible, the most private, and the most profitable books of business in the market. The technology is ready. The actuarial models need to catch up.
Explore the technology: Benchmarks | FHE Overview | Pricing | Quantum Readiness Assessment