FHE · 22 min read

What Is Fully Homomorphic Encryption (FHE)?
The Complete Guide

FHE lets you compute on encrypted data without ever decrypting it. From a 30-minute-per-gate curiosity in 2009 to 1.2 million authentications per second today, this is the full story—the math, the schemes, the noise problem, and how H33 ships it in production.

~50µs per auth · 1.2M/s throughput · 128-bit security · 32 users per batch

Imagine handing a locked safe to a stranger, having them rearrange everything inside without opening it, and receiving it back with the contents correctly reorganized. That is Fully Homomorphic Encryption—the ability to compute on encrypted data without ever decrypting it. The server that processes your data never sees it. The cloud that stores your records cannot read them. The result, when decrypted by the key holder, is mathematically identical to having performed the computation on plaintext.

For decades, this was a theoretical curiosity—a construct that cryptographers proved possible but could not build efficiently. That changed in 2009. Today, FHE runs in production at H33, processing 1.2 million biometric authentications per second on a single server. This guide covers everything: the history, the math, the four major FHE scheme families, the noise problem that makes FHE hard, the engineering that makes it fast, and why FHE is inherently post-quantum secure.

Key Insight

FHE is the only encryption technology that protects data during computation, not just at rest or in transit. Traditional encryption requires decryption before processing—creating a window where data is exposed in memory. FHE eliminates that window entirely. The plaintext never exists on the server.

The Privacy Problem FHE Solves

Every conventional data-processing pipeline has the same fundamental flaw: to compute on data, you must first decrypt it. This creates an inescapable exposure window.

FHE solves this by making decryption unnecessary for computation. The server operates exclusively on ciphertext. The result is ciphertext. Only the key holder can decrypt the final answer. The server learns nothing—not the inputs, not the intermediate values, not the output.

A Brief History of FHE

The idea of computing on encrypted data is older than most people realize. The journey from theoretical possibility to production deployment spans nearly five decades.

1978
Rivest, Adleman, and Dertouzos pose the question: can we compute on encrypted data? They call it a "privacy homomorphism." The concept exists, but no one can build a fully homomorphic scheme.
1999–2005
Partially homomorphic schemes emerge. RSA supports multiplicative homomorphism, Paillier (1999) supports additive homomorphism, and ElGamal supports multiplicative homomorphism. None supports both addition and multiplication, and both are needed for arbitrary computation.
2009
Craig Gentry's breakthrough. His PhD thesis at Stanford constructs the first-ever fully homomorphic encryption scheme using ideal lattices. It supports unlimited additions AND multiplications. The catch: each Boolean gate takes ~30 minutes. Completely impractical—but it proves FHE is possible.
2011–2012
2nd generation: BGV and BFV. Brakerski-Gentry-Vaikuntanathan (BGV, 2011) and Brakerski/Fan-Vercauteren (BFV, 2012) replace ideal lattices with Ring-LWE. Orders of magnitude faster. Leveled FHE becomes practical for bounded-depth circuits.
2017
3rd generation: CKKS. Cheon-Kim-Kim-Song introduces approximate arithmetic FHE, enabling encrypted floating-point operations. Opens the door to encrypted machine learning inference.
2016–2020
TFHE and programmable bootstrapping. Chillotti et al. develop TFHE with bootstrapping in under 10ms per gate. Each gate evaluation refreshes noise, enabling arbitrary-depth computation without pre-planning circuit depth.
2020–2024
Hardware acceleration era. Intel releases HEXL (an AVX-512 acceleration library for FHE). GPU implementations achieve 100x speedups for bootstrapping. DARPA launches the DPRIVE program targeting ASIC-accelerated FHE.
2025–2026
Production deployment. H33 achieves 1.2M FHE-encrypted biometric auths/sec on Graviton4 (ARM). ~50µs per authentication. FHE is no longer academic—it is shipping in production at scale.

The Homomorphic Property: What It Actually Means

The word "homomorphic" comes from Greek: homos (same) + morphe (form). A homomorphism preserves structure across a transformation. In FHE, the structure being preserved is arithmetic.

The Homomorphic Property

Enc(a) + Enc(b) = Enc(a + b)
Enc(a) × Enc(b) = Enc(a × b)
  • Enc(x): encryption of value x under the public key
  • +: homomorphic addition (operates on ciphertexts)
  • ×: homomorphic multiplication (operates on ciphertexts)

With both addition and multiplication on encrypted values, you can evaluate any computable function. Addition and multiplication form a complete set of arithmetic operations—any polynomial can be expressed with them, and any Boolean circuit can be expressed as arithmetic over binary values (where AND = multiplication, XOR = addition mod 2). This is why "fully" homomorphic encryption is so powerful: it isn't limited to a specific class of computations.
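The Boolean-to-arithmetic claim can be checked exhaustively. This is a minimal Rust sketch of the identities themselves (the function names are illustrative, not from any FHE library); it verifies that XOR is addition mod 2 and AND is multiplication mod 2, and that a NAND built from them is universal:

```rust
// Model bits as integers mod 2: XOR is addition, AND is multiplication.
fn xor_as_add(a: u8, b: u8) -> u8 { (a + b) % 2 }
fn and_as_mul(a: u8, b: u8) -> u8 { (a * b) % 2 }

fn main() {
    for a in 0..2u8 {
        for b in 0..2u8 {
            assert_eq!(xor_as_add(a, b), a ^ b);
            assert_eq!(and_as_mul(a, b), a & b);
        }
    }
    // NAND = 1 + a*b (mod 2); NAND alone is universal, so addition and
    // multiplication on encrypted bits suffice for any Boolean circuit.
    let nand = |a: u8, b: u8| (1 + and_as_mul(a, b)) % 2;
    assert_eq!(nand(1, 1), 0);
    assert_eq!(nand(0, 1), 1);
    println!("XOR = add mod 2, AND = mul mod 2, verified for all inputs");
}
```

Because NAND is universal, a scheme that provides encrypted addition and multiplication can in principle evaluate any computable function gate by gate.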

There are weaker variants that preceded FHE: partially homomorphic encryption (one operation only, like RSA or Paillier), somewhat homomorphic encryption (both operations, but only for a limited number of multiplications), and leveled homomorphic encryption (both operations up to a circuit depth fixed by the parameters). Only fully homomorphic encryption supports unlimited operations of both kinds.

The Ring-LWE Foundation

All modern FHE schemes (BFV, BGV, CKKS, and partially TFHE) are built on the Ring Learning With Errors (Ring-LWE) problem. Understanding Ring-LWE is essential to understanding why FHE is both secure and post-quantum resistant.

The Core Idea

Ring-LWE works in polynomial rings. Instead of working with individual numbers, we work with polynomials of degree at most N−1 with integer coefficients, modulo both a polynomial (typically x^N + 1) and an integer modulus Q.

Ring-LWE in Plain English

Imagine you have a secret polynomial s(x). You publish a(x) · s(x) + e(x) mod Q, where a(x) is random and e(x) is a "small" error polynomial (each coefficient is tiny compared to Q). The Ring-LWE assumption says: given a(x) and a(x) · s(x) + e(x), it is computationally infeasible to recover s(x). This holds even against quantum computers—there is no known quantum algorithm that efficiently solves lattice problems.

The mathematical setting is the ring R_Q = Z_Q[x] / (x^N + 1), where N is a power of 2 (typically 1024, 2048, 4096, or 8192) and Q is a large modulus. Every element in this ring is a polynomial of degree at most N−1 with coefficients in {0, 1, ..., Q−1}. Polynomial multiplication in this ring wraps around at x^N + 1, which gives it a special structure that enables fast computation via the Number Theoretic Transform (NTT)—the integer analog of the FFT.
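The wraparound rule x^N = −1 can be made concrete with a schoolbook (O(N²)) multiplication in Z_Q[x]/(x^N + 1); real libraries use the NTT for O(N log N), but the negacyclic reduction is identical. A small sketch with assumed toy parameters (N = 4, Q = 17):

```rust
// Schoolbook multiplication in Z_Q[x]/(x^N + 1): terms of degree >= N wrap
// around with a sign flip, because x^N = -1 in this ring. Coefficients are
// assumed < q and q < 2^32, so u64 products cannot overflow.
fn negacyclic_mul(a: &[u64], b: &[u64], q: u64) -> Vec<u64> {
    let n = a.len();
    let mut out = vec![0u64; n];
    for i in 0..n {
        for j in 0..n {
            let prod = (a[i] * b[j]) % q;
            if i + j < n {
                out[i + j] = (out[i + j] + prod) % q;            // ordinary term
            } else {
                out[i + j - n] = (out[i + j - n] + q - prod) % q; // x^N = -1
            }
        }
    }
    out
}

fn main() {
    let q = 17u64;
    // (x^3) * (x) = x^4 = -1 mod (x^4 + 1): the constant slot becomes q - 1.
    let r = negacyclic_mul(&[0, 0, 0, 1], &[0, 1, 0, 0], q);
    assert_eq!(r, vec![16, 0, 0, 0]);
    println!("{:?}", r);
}
```

The sign flip on wraparound is exactly the structure the NTT exploits to multiply N-coefficient polynomials in O(N log N).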

Why Ring-LWE Makes FHE Work

Ring-LWE provides two critical properties for FHE:

  1. Semantic security: Ciphertexts are computationally indistinguishable from random, even to quantum adversaries. The error term "masks" the plaintext.
  2. Homomorphic structure: The polynomial ring supports addition and multiplication. Adding two ciphertexts adds their underlying plaintexts (plus accumulated error). Multiplying two ciphertexts multiplies their plaintexts (but the error grows much faster).

The tension between these properties—the error that provides security also limits computation depth—is the central challenge of FHE engineering. This is the noise problem.

The Noise Problem

Every FHE ciphertext carries noise. This noise is essential for security—without it, the encryption would be trivially breakable. But noise accumulates with every homomorphic operation, and if it exceeds a threshold, decryption produces garbage.

The Fundamental Tradeoff

Noise grows additively with homomorphic addition and multiplicatively with homomorphic multiplication. After enough multiplications, the noise overwhelms the plaintext and decryption fails. This is why FHE is hard: every engineering decision is ultimately about managing noise growth.
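The asymmetry between addition and multiplication is easy to see with a toy noise model (all numbers here are assumed for illustration, not H33's actual noise analysis): track noise in bits, let adding 2^k ciphertexts cost about k extra bits, and let each multiplication roughly double the noise bits.

```rust
// Toy noise model (assumed figures, not a real noise analysis):
// additions grow noise slowly, multiplications double it per level.
fn toy_mult_depth(budget_bits: f64, fresh_bits: f64) -> u32 {
    let mut noise = fresh_bits;
    let mut depth = 0;
    while noise * 2.0 < budget_bits {
        noise *= 2.0; // one multiplication level: noise bits ~double
        depth += 1;
    }
    depth
}

fn main() {
    let budget = 40.0; // headroom between Q and t, in bits (assumed)
    let fresh = 5.0;   // noise of a fresh ciphertext, in bits (assumed)

    // 1,024 additions cost only ~log2(1024) = 10 extra bits: cheap.
    assert!(fresh + (1024.0_f64).log2() < budget);

    // The same budget survives only a couple of multiplication levels.
    let depth = toy_mult_depth(budget, fresh);
    assert_eq!(depth, 2);
    println!("toy multiplication depth: {depth}");
}
```

In this toy model a 40-bit budget absorbs over a thousand additions but only two multiplication levels, which is why parameter selection is driven almost entirely by multiplication depth.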

There are three main strategies for managing noise:

1. Leveled FHE (Set a Noise Budget)

Choose parameters (N, Q) large enough to support a specific multiplication depth L. Larger Q gives more noise headroom but larger ciphertexts and slower operations. This is how most practical BFV/BGV deployments work—you know your circuit depth in advance and set parameters accordingly.

2. Modulus Switching

After a multiplication, scale the ciphertext down to a smaller modulus Q'. This reduces the noise proportionally. Think of it as "zooming out" on the noise: the absolute noise stays roughly the same, but Q shrinks, so you can't do this forever. BGV uses modulus switching as its primary noise management tool, consuming one modulus level per multiplication.
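The "zooming out" arithmetic is just a rounded rescaling of every coefficient from Q to Q'. A minimal sketch with assumed power-of-two moduli (real schemes use prime moduli, but the scaling behaves the same way):

```rust
// Modulus-switching sketch: rescale a value from modulus Q to smaller Q',
// rounding to the nearest integer. The noise term shrinks by ~Q'/Q, which
// is why BGV spends one modulus level per multiplication.
fn mod_switch(c: u64, q: u64, q_new: u64) -> u64 {
    ((c as u128 * q_new as u128 + (q / 2) as u128) / q as u128) as u64
}

fn main() {
    let (q, q_new) = (1u64 << 40, 1u64 << 30); // zoom out by 2^10
    let message_part = 1u64 << 35;             // encodes the plaintext
    let noise = 12_345u64;                     // small relative to Q
    let c = message_part + noise;

    let c2 = mod_switch(c, q, q_new);
    let new_noise = c2 - (message_part >> 10); // message scaled by Q'/Q
    assert!(new_noise < 32);                   // noise fell from 12,345 to ~12
    println!("noise before: {noise}, after: {new_noise}");
}
```

The message and the noise both scale by Q'/Q, so relative precision is preserved while the absolute noise drops; the cost is that Q' leaves less room for the next multiplication.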

3. Bootstrapping

The nuclear option: homomorphically evaluate the decryption circuit on the noisy ciphertext, producing a fresh ciphertext with reset noise. Bootstrapping is what makes FHE "fully" homomorphic (unlimited depth). It is also the most expensive operation—historically taking seconds to minutes. TFHE achieves the fastest bootstrapping (~10ms per gate), making it practical for certain use cases.

Noise Management Strategies Compared

Leveled FHE

Set parameters for max depth. No bootstrapping needed. Fastest per-operation. Best when circuit depth is known.

Modulus Switching

Scale down after multiplications. Extends effective depth. Moderate overhead. Used in BGV and BFV.

Bootstrapping

Reset noise completely. Enables unlimited depth. Expensive (~10ms in TFHE, seconds in BFV/BGV). Required for "fully" homomorphic.

FHE Schemes Compared: BFV, BGV, CKKS, TFHE

Four FHE scheme families dominate the landscape. Each is optimized for different data types and computation patterns. Choosing the right scheme is the most consequential architectural decision in any FHE deployment.

Scheme | Data Type | Noise Mgmt | Best For | Batching | Bootstrap Speed
BFV | Exact integers | Scale-invariant | Integer arithmetic, matching, database queries | N slots (SIMD) | Seconds
BGV | Exact integers | Modulus switching | Known-depth circuits, modular arithmetic | N slots (SIMD) | Seconds
CKKS | Approximate reals | Rescaling | ML inference, statistics, floating-point workloads | N/2 slots (SIMD) | Seconds
TFHE | Booleans / small integers | Per-gate bootstrap | Boolean circuits, comparisons, arbitrary programs | Limited | ~10ms/gate

BFV (Brakerski/Fan-Vercauteren)

BFV encrypts integer vectors and is scale-invariant—the noise management does not depend on the computation being performed. This makes it simpler to use: you pick parameters for a maximum multiplication depth, and all operations within that budget work correctly. BFV supports SIMD batching: a single ciphertext can encode N independent integers (one per polynomial slot), and homomorphic operations act on all N slots in parallel.

H33 uses BFV for biometric authentication because biometric matching is fundamentally an integer inner-product operation. With N=4096, each ciphertext holds 32 independent 128-dimensional biometric vectors, and the inner product is computed with a single multiply-accumulate followed by a Galois rotation and accumulation.
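The rotate-and-accumulate inner product can be modeled in plaintext. The sketch below treats a plain Vec as a stand-in for the 4,096 BFV slots (a simplification: real BFV Galois rotations act within rows of a 2×(N/2) slot matrix, and all values would be ciphertexts): one slot-wise multiply, then log2(128) = 7 rotate-and-add steps fold each user's 128-dimensional block into a score.

```rust
// Plaintext model of BFV's batched inner product: slot-wise multiply, then
// 7 rotate-and-add folds. Real BFV does this with one homomorphic multiply
// and 7 Galois rotations, operating on ciphertexts throughout.
const DIMS: usize = 128;
const USERS: usize = 32;
const SLOTS: usize = DIMS * USERS; // 4,096
const T: u64 = 65_537;             // plaintext modulus

fn rotate(v: &[u64], k: usize) -> Vec<u64> {
    let n = v.len();
    (0..n).map(|i| v[(i + k) % n]).collect()
}

fn main() {
    // Toy data: every template and probe coordinate is 1, so each user's
    // inner product should come out to exactly DIMS = 128.
    let enrolled = vec![1u64; SLOTS];
    let probe = vec![1u64; SLOTS];

    // Slot-wise multiply (one homomorphic mul in BFV)
    let mut acc: Vec<u64> =
        enrolled.iter().zip(&probe).map(|(a, b)| a * b % T).collect();

    // Fold: after the k-th round, slot i holds the sum of 2^k products.
    let mut step = 1;
    while step < DIMS {
        let rot = rotate(&acc, step);
        for (a, r) in acc.iter_mut().zip(&rot) { *a = (*a + r) % T; }
        step *= 2;
    }

    // The first slot of each user's block now holds that user's score.
    for u in 0..USERS {
        assert_eq!(acc[u * DIMS], DIMS as u64);
    }
    println!("all {} users scored in one SIMD pass", USERS);
}
```

Note that the multiply and every rotation touch all 4,096 slots at once, which is why scoring 32 users costs the same as scoring one.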

BGV (Brakerski-Gentry-Vaikuntanathan)

BGV is closely related to BFV but uses a different noise management strategy: modulus switching. After each multiplication, the ciphertext modulus is reduced, which proportionally reduces the noise. BGV tends to be slightly more efficient than BFV for circuits where the multiplication depth is known in advance, because the modulus ladder can be precisely calibrated. The tradeoff is that BGV requires more careful parameter planning.

CKKS (Cheon-Kim-Kim-Song)

CKKS is the only major FHE scheme that supports approximate arithmetic on real numbers. Instead of treating noise as an enemy to be eliminated, CKKS incorporates it into the computation as a controlled precision loss—similar to floating-point rounding. This is a paradigm shift: CKKS accepts that results are approximate (to, say, 30 bits of precision) in exchange for much more efficient encrypted floating-point operations.

CKKS is the scheme of choice for encrypted machine learning inference, encrypted statistics, and any workload involving real-valued data. The catch: exact integer computation (needed for things like threshold comparisons) requires extra care in CKKS.

TFHE (Torus FHE)

TFHE takes a fundamentally different approach: instead of batching many values into one ciphertext and computing in parallel, TFHE encrypts individual bits (or small integers) and evaluates Boolean gates one at a time. The key innovation is programmable bootstrapping—each gate evaluation also refreshes the noise, enabling arbitrary computation depth without pre-planning.

TFHE's per-gate bootstrapping takes ~10ms (compared to seconds for BFV/BGV bootstrapping), making it practical for gate-by-gate evaluation. The tradeoff: TFHE does not support SIMD batching, so throughput for parallelizable workloads is lower than BFV/BGV/CKKS.

Choosing a Scheme

Integer math with known depth? Use BFV or BGV. Floating-point / ML inference? Use CKKS. Arbitrary Boolean circuits or unknown depth? Use TFHE. Biometric matching? BFV—integer inner products at fixed depth are exactly what it optimizes for.

Performance Reality: From Impractical to Production

The most common objection to FHE is performance. That objection was valid in 2009. It is not valid in 2026. The performance improvement over the past 17 years has been staggering—roughly 11 orders of magnitude.

FHE Performance Evolution

Gentry 2009 (1 Boolean gate): ~30 min
HElib 2013 (AES block): ~4 min
SEAL 2018 (multiply): ~50ms
Lattigo 2021 (BFV multiply): ~5ms
H33 2026 (full 32-user biometric batch): ~1,375µs
H33 2026 (per authentication): ~50µs

The improvement comes from multiple compounding factors: better schemes (BFV vs Gentry's ideal lattices), algorithmic improvements (NTT-based polynomial multiplication, SIMD batching), hardware acceleration (AVX-512, ARM NEON), and engineering optimizations (Montgomery arithmetic, pre-NTT keys, fused operations).

FHE is still slower than plaintext computation—that is inherent to the mathematics. A BFV homomorphic multiply on a 4096-coefficient polynomial is roughly 1000x slower than the equivalent plaintext multiply. But when the alternative is "expose plaintext to the server," the question changes from "is FHE fast enough?" to "is the privacy worth the overhead?" For biometric authentication at 50µs per auth, the answer is unambiguously yes.

Real-World Applications

FHE has moved beyond academic papers into production systems. Here are the application domains where FHE delivers value that no other technology can match.

Encrypted Biometric Matching

Templates are encrypted at enrollment and never decrypted. The server computes similarity scores entirely in ciphertext. Even if the database is breached, the adversary gets lattice-encrypted ciphertext—useless without the secret key.

Private ML Inference

A user encrypts their input (medical image, financial data) and sends ciphertext to the model server. The server runs inference on encrypted data using CKKS. The user decrypts the result. The model owner never sees the input; the user never sees the model weights.

Encrypted Database Queries

Query encrypted records without decrypting the database. Enables encrypted search, encrypted aggregation, and encrypted joins. Critical for regulated data (HIPAA, GDPR) where the database operator must not access plaintext.

Private Set Intersection

Two parties determine the overlap of their datasets without revealing non-overlapping elements. Used in ad measurement, contact tracing, and sanctions screening. FHE-based PSI avoids the communication overhead of MPC approaches.

Healthcare Analytics

Hospitals run analytics on encrypted patient records, enabling multi-institution research without HIPAA-violating data sharing. Genomic analysis on encrypted DNA sequences. Drug interaction screening across encrypted prescription databases.

Financial Computation

Credit scoring on encrypted financial data. Portfolio risk analysis without exposing holdings. Anti-money-laundering screening across encrypted transaction logs. The bank computes, the customer's data stays private.

FHE vs. Alternatives: Security Model Comparison

FHE is not the only privacy-preserving computation technology. Trusted Execution Environments (TEEs), Multi-Party Computation (MPC), and Zero-Knowledge Proofs (ZKPs) each address overlapping but distinct use cases. The security models differ in fundamental ways.

Property | FHE | TEE / SGX | MPC | ZKP
Trust assumption | Math only | Hardware vendor | Honest majority | Math only
Data exposure | Never decrypted | Plaintext in enclave | Split across parties | Prover keeps data
Quantum resistant | Yes (lattice-based) | No (depends on crypto) | Depends on primitives | Depends on primitives
Side-channel risk | None (no decryption) | High (Spectre, Foreshadow) | Network timing | None
Number of parties | 1 (single server) | 1 (single server) | 2+ required | 1 prover + 1 verifier
Computation type | Arbitrary (with scheme choice) | Arbitrary (native speed) | Arbitrary (high communication) | Proving statements (not general compute)
Performance overhead | 100–10,000x (improving) | ~1x (near native) | 10–1,000x + network | Proof generation expensive
Hardware attacks | Immune | Vulnerable (SGX broken repeatedly) | Immune | Immune
TEE Warning

Intel SGX has been broken by Spectre, Meltdown, Foreshadow, Plundervolt, LVI, AEPIC Leak, and Downfall—each extracting plaintext data from "secure" enclaves. TEEs provide performance but not cryptographic security. If your threat model includes nation-state adversaries or must survive hardware vulnerabilities, FHE is the only option that provides mathematical guarantees independent of hardware trust.

H33's BFV Implementation: Production Parameters

H33 uses BFV for its production biometric authentication pipeline. The parameter choices are the result of extensive benchmarking across thousands of configurations, balancing security, noise budget, and throughput.

H33 BFV Production Parameters

Lattice Parameters

  • N = 4,096 (polynomial degree, ring dimension)
  • Q = 56-bit (single ciphertext modulus)
  • t = 65,537 (plaintext modulus, 17-bit prime)
  • Security: 128-bit (post-quantum)

Performance

  • Batch latency: ~1,375µs (32 users)
  • Per-auth latency: ~50µs
  • Throughput: 1.2M auth/sec (96 workers)
  • Hardware: c8g.metal-48xl (Graviton4)

Why These Parameters

N = 4,096 is the minimum ring dimension that provides 128-bit post-quantum security with a single 56-bit modulus. Larger N (8192, 16384) would provide more noise headroom for deeper circuits, but biometric matching only needs multiplication depth 1 (a single inner product), so the extra capacity would be wasted. Smaller N saves memory and makes NTT faster (O(N log N) per polynomial multiply).

Q = 56-bit single modulus means H33 avoids the complexity of Residue Number System (RNS) decomposition. Most FHE libraries use multi-limb moduli (several 60-bit primes combined via CRT) to support deeper circuits. H33's single-modulus approach eliminates CRT conversion overhead and simplifies the entire pipeline. This is only possible because biometric matching has a shallow circuit.

t = 65,537 is a 17-bit prime that satisfies the critical SIMD batching condition: t must be congruent to 1 mod 2N. Since 65,537 ≡ 1 (mod 8,192), BFV's batching encoder can pack N independent plaintext integers into one ciphertext, enabling SIMD parallelism across all 4,096 polynomial slots.
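The batching condition is easy to verify directly. A short sketch that checks H33's stated parameters (the trial-division primality test is for illustration; toy sizes only):

```rust
// Verify the SIMD batching condition t ≡ 1 (mod 2N) for N = 4096, t = 65537.
fn is_prime(n: u64) -> bool {
    if n < 2 { return false; }
    let mut d = 2;
    while d * d <= n {
        if n % d == 0 { return false; }
        d += 1;
    }
    true
}

fn main() {
    let (n, t) = (4096u64, 65_537u64);
    assert!(is_prime(t));         // t is prime
    assert_eq!(t % (2 * n), 1);   // t ≡ 1 (mod 8192): N CRT slots exist
    assert_eq!(t, (1 << 16) + 1); // 65,537 is the Fermat prime 2^16 + 1
    println!("t = {t} supports full batching at N = {n}");
}
```

Because 65,537 = 2^16 + 1, the condition t ≡ 1 (mod 2N) holds for every power-of-two N up to 32,768, which makes it a popular plaintext modulus across BFV deployments.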

SIMD Batching: 32 Users per Ciphertext

SIMD (Single Instruction, Multiple Data) batching is the single most important performance optimization in practical BFV/BGV deployments. It turns a single homomorphic operation into N parallel operations for free—the ciphertext doesn't get bigger, the computation doesn't get slower, but you process N independent values simultaneously.

How SIMD Batching Works

BFV's plaintext space is the polynomial ring Zt[x]/(xN+1). When t ≡ 1 (mod 2N), this ring factors into N independent slots via the Chinese Remainder Theorem (CRT). Each slot holds an independent integer mod t. A single homomorphic addition or multiplication operates on ALL N slots simultaneously—true SIMD parallelism at the mathematical level.
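The CRT slot structure has a familiar integer analog. In the sketch below, one number mod 15 carries two independent slots (its residues mod 3 and mod 5), and a single multiplication mod 15 multiplies both slots at once; BFV does the same thing with N polynomial slots instead of two integer ones (toy moduli, brute-force CRT solve for brevity):

```rust
// Integer CRT model of slot packing: x mod 15 encodes the slot pair
// (x mod 3, x mod 5). One multiplication mod 15 acts on both slots.
fn crt_pack(a: u64, b: u64) -> u64 {
    // Find x in [0, 15) with x ≡ a (mod 3) and x ≡ b (mod 5) by search.
    (0..15).find(|x| x % 3 == a % 3 && x % 5 == b % 5).unwrap()
}

fn main() {
    let x = crt_pack(2, 3); // slots (2, 3)
    let y = crt_pack(1, 4); // slots (1, 4)
    let z = (x * y) % 15;   // ONE multiplication...
    assert_eq!(z % 3, (2 * 1) % 3); // ...multiplied slot 1: 2*1 ≡ 2 (mod 3)
    assert_eq!(z % 5, (3 * 4) % 5); // ...and slot 2: 3*4 = 12 ≡ 2 (mod 5)
    println!("packed mult ok: z = {z}");
}
```

The slots never interact: arithmetic on the packed value is exactly component-wise arithmetic on the residues, which is the mathematical heart of SIMD batching.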

H33 uses this to batch biometric templates. Each user's template is a 128-dimensional integer vector. With N = 4,096 polynomial slots, the batch fills the ciphertext exactly: 32 users × 128 dimensions = 4,096 slots, one contiguous block of 128 slots per user.

The batch verification is constant time—it takes the same ~1,375µs whether you're verifying 1 user or 32 users, because the SIMD operations touch all slots regardless. This is both a performance feature (amortized cost) and a security feature (no timing side-channels that leak which users are being verified).

Code Example: Encrypt, Compute, Decrypt

Here is a simplified view of the H33 FHE pipeline, showing the encrypt-compute-decrypt flow for biometric verification. The actual production code includes NTT-domain optimizations and Montgomery arithmetic, but the logical flow is the same.

Rust fhe_biometric_pipeline.rs
// === KEYGEN (one-time setup) ===
let params = BfvParams {
    n: 4096,                 // Ring dimension
    q: Q_56BIT_PRIME,        // 56-bit NTT-friendly ciphertext modulus (named constant)
    t: 65_537,               // Plaintext modulus (SIMD-compatible: t ≡ 1 mod 2N)
};
let (secret_key, public_key) = bfv_keygen(&params);

// === ENROLLMENT (client-side) ===
// Biometric template: 128 integers per user, 32 users batched per ciphertext
let templates_32_users: Vec<Vec<u64>> = face_scans
    .iter()
    .map(|scan| extract_biometric(scan))                  // 128 dims each
    .collect();
let batched = simd_encode(&templates_32_users, &params);  // Pack 32 × 128 = 4,096 slots
let encrypted = bfv_encrypt(&batched, &public_key);       // Lattice encryption
store_enrolled(user_id, &encrypted);  // Server stores ciphertext only

// === VERIFICATION (server-side, on encrypted data) ===
let probe_ct = receive_encrypted_probe(request);
let enrolled_ct = load_enrolled(user_id);  // Already in NTT form

// Inner product: multiply + accumulate across 128 dimensions
// This operates on ALL 32 users in the batch simultaneously (SIMD)
let score_ct = fhe_inner_product(&probe_ct, &enrolled_ct);

// === DECRYPTION (client-side) ===
let scores = bfv_decrypt(&score_ct, &secret_key);
let match_result = scores[user_slot] >= threshold;
// Server never saw: template, probe, score, or result

The critical property: between enrollment and decryption, no plaintext exists anywhere in the system. The server performs the inner product entirely in ciphertext. Even the match score is encrypted—the server does not know whether the authentication succeeded or failed. Only the client, holding the secret key, learns the result.

Production Benchmark: H33 on Graviton4

These benchmarks were measured on a production c8g.metal-48xl instance (AWS Graviton4, 192 vCPUs, 377 GiB RAM) on February 14, 2026. All numbers represent sustained throughput under load, not burst performance.

H33 FHE Production Benchmarks (Feb 2026)

FHE Batch (32 users, encrypt+compute+decrypt): 1,375 µs
Per-Auth Latency (amortized): ~50 µs
ZKP STARK Lookup Verification: 0.067 µs
Dilithium Attestation (sign + verify): ~240 µs
Full Stack (FHE + ZKP + Attestation): ~1,615 µs
Sustained Throughput (96 workers): 1.2M auth/sec
FHE-Only Throughput Ceiling: 1.29M auth/sec

Key engineering decisions enable this performance: a single 56-bit modulus (no RNS/CRT decomposition), SIMD batching of 32 users per ciphertext, enrolled ciphertexts stored pre-transformed in NTT form, Montgomery modular arithmetic, and fused operations on Graviton4's NEON units.

Why FHE Is Post-Quantum Secure

FHE's security rests on the hardness of lattice problems—specifically Ring-LWE. This is not just incidentally quantum-resistant; it is the same mathematical foundation used by the NIST post-quantum standards (ML-KEM/Kyber and ML-DSA/Dilithium).

Lattice Problems and Quantum Computers

Shor's algorithm breaks RSA (integer factoring) and ECDSA (discrete logarithm) because these problems have efficient quantum solutions. Lattice problems—including Ring-LWE, Module-LWE, and the Shortest Vector Problem (SVP)—have no known efficient quantum algorithm. The best known quantum speedup for SVP is at most a polynomial improvement (not the exponential speedup Shor gives for factoring). FHE ciphertexts encrypted today will remain secure against future quantum computers.

This is particularly important in the context of Harvest Now, Decrypt Later (HNDL) attacks, where adversaries intercept and store encrypted data today for future quantum decryption. Data protected by FHE is immune to HNDL by construction: the ciphertexts an adversary harvests are lattice-based, and lattice problems have no known efficient quantum algorithm, so a future quantum computer gains nothing from the stored data.

For biometric data—which has an infinite security shelf-life because it cannot be rotated—this distinction is existential. FHE-encrypted biometric templates are the only form factor that survives both current breaches and future quantum attacks.

The Future of FHE

FHE is advancing on multiple fronts simultaneously. The next five years will likely see performance improvements comparable to the last decade, driven by hardware acceleration and compiler tooling.

Hardware Acceleration

Several hardware acceleration programs are in progress: Intel's HEXL library exploits AVX-512, GPU implementations already deliver ~100x bootstrapping speedups, and DARPA's DPRIVE program targets ASICs aiming at roughly 10,000x acceleration for target workloads.

Compiler and Tooling Improvements

FHE compilers that automatically convert high-level programs into optimized FHE circuits are maturing rapidly. Tools like Google's FHE transpiler, Microsoft SEAL, and Zama's Concrete automate parameter selection, circuit optimization, and noise budget management—reducing the expertise barrier for FHE adoption.

Standardization

The HomomorphicEncryption.org consortium (including Microsoft, Google, Intel, Samsung, and Duality) is working on an FHE API standard and interoperability specification. This will enable encrypted data to be processed by different FHE implementations without re-encryption—a prerequisite for an FHE ecosystem.

Initiative | Focus | Timeline | Impact
DARPA DPRIVE | ASIC hardware | 2026–2028 | 10,000x acceleration for target workloads
HE.org Standard | API interoperability | 2026–2027 | Cross-vendor FHE ecosystem
FHE Compilers | Automatic optimization | Ongoing | Reduce expertise barrier by 10x
GPU/TPU FHE | ML inference acceleration | 2025–2027 | Encrypted ML inference at practical latency
NIST PQC | Lattice crypto standards | Complete | Validates lattice-based security (same foundation as FHE)

Common Misconceptions

FHE discourse is plagued by outdated information and misunderstandings. Let's address the most common ones.

"FHE is too slow for production"

This was true in 2015. It is not true in 2026. H33 processes 1.2 million FHE-encrypted authentications per second on a single server. The latency per auth (~50µs) is faster than a typical database query. The key insight: you don't need to make FHE fast for all possible computations—you need to make it fast for your specific computation. Biometric inner products are shallow circuits (multiplication depth 1), which is exactly what BFV optimizes for.

"FHE ciphertexts are enormous"

A single BFV ciphertext at N=4096, Q=56-bit is roughly 64KB. That's large compared to AES ciphertext, but with SIMD batching, that 64KB encodes 32 independent biometric templates—about 2KB per user. The effective expansion ratio is modest, and network bandwidth is rarely the bottleneck for authentication workloads.
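The 64KB figure is simple arithmetic, assuming each coefficient is stored as a full 64-bit word (a common in-memory layout; serialized forms can be tighter):

```rust
// Back-of-envelope for the 64KB claim: a BFV ciphertext is two ring
// elements of N = 4,096 coefficients, 8 bytes per coefficient here.
fn ciphertext_bytes(n: u64) -> u64 {
    2 * n * 8
}

fn main() {
    let bytes = ciphertext_bytes(4096);
    assert_eq!(bytes, 65_536);     // 64 KB per ciphertext
    assert_eq!(bytes / 32, 2_048); // ~2 KB per batched user
    println!("{} KB total, {} KB per user", bytes / 1024, bytes / 32 / 1024);
}
```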

"Bootstrapping makes FHE impractical"

Bootstrapping is expensive, but most practical FHE deployments don't use it. Leveled FHE (setting parameters for a known circuit depth) avoids bootstrapping entirely. H33's biometric pipeline uses leveled BFV with depth 1—no bootstrapping, no modulus switching, just a single inner product. Bootstrapping matters for deep circuits (e.g., encrypted neural network training), but shallow circuits dominate production FHE today.

"FHE is just for academics"

FHE is in production at H33 (biometric authentication), Apple (on-device ML), Google (encrypted advertising analytics), and multiple healthcare and financial institutions. The tooling and performance have crossed the production threshold for targeted use cases. General-purpose FHE for arbitrary programs remains a research challenge, but domain-specific FHE deployments are shipping.


FHE represents a fundamental shift in how we think about data security. For the first time, encryption does not have to be removed for data to be useful. The data stays encrypted. The computation happens on ciphertext. The result is encrypted. The server learns nothing.

Combined with post-quantum lattice-based security, SIMD batching, and modern engineering optimizations, FHE has moved from theoretical breakthrough to production infrastructure. At H33, it is the foundation of every authentication—1.2 million per second, each one computed on data the server has never seen and never will.


H33 provides post-quantum authentication infrastructure built on FHE biometric processing (BFV lattice-based), STARK zero-knowledge proofs, and ML-DSA digital signatures—all in a single API call at ~50µs per authentication. Every component in the stack is post-quantum secure by construction.

Build With Post-Quantum FHE Security

Enterprise-grade FHE biometrics, ZKP attestation, and post-quantum cryptography. One API call. ~50µs per authentication.

Get Free API Key → Read the Docs
Free tier · 10,000 API calls/month · No credit card required