
Fully Homomorphic Encryption API — 4 Engines, One Call

Compute on encrypted data at microsecond latency. Four FHE engines — BFV, CKKS, BFV-32, and FHE-IQ — for biometrics, search, ML inference, and general encrypted computation. SIMD batching packs 32 users per ciphertext.

Get Free API Key · See Benchmarks
939µs per 32-user batch · 32 users per ciphertext · 4 FHE engines · N=4096 to N=16384

Choose the Right FHE Engine for Your Workload

Each engine is optimized for different security margins, arithmetic types, and performance profiles.

| Property | H33-128 (BFV) | H33-256 (BFV) | H33-CKKS | H33-BFV32 | H33-FHE-IQ |
|---|---|---|---|---|---|
| Polynomial Degree (N) | 4,096 | 8,192 | 8,192–16,384 | 2,048 | Auto-selected |
| Arithmetic Type | Exact integer | Exact integer | Approximate (float) | Exact integer | Adaptive |
| Security Level | 128-bit | 256-bit | 128–256 bit | 80-bit | 128–256 bit |
| Best For | Biometrics, auth | High-assurance | ML inference | Fast lightweight auth | Unknown workloads |
| Batch Size | 32 users/CT | 64 users/CT | Varies by precision | 16 users/CT | Auto-optimized |
| Batch Latency | ~939µs | ~3.2ms | ~5–12ms | ~420µs | Varies |
| Plaintext Modulus (t) | 65,537 | 65,537 | N/A (float) | 65,537 | Adaptive |
| PQ-Secure | Yes (lattice) | Yes (lattice) | Yes (lattice) | Yes (lattice) | Yes (lattice) |
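The selection logic implied by the table can be sketched as a small helper. This is an illustrative sketch, not part of the H33 SDK: the `chooseEngine` function and the latency/security constants are representative values taken from the table above (CKKS is pinned to its 128-bit lower bound for simplicity).

```javascript
// Hypothetical helper (not an H33 SDK function): pick the fastest engine
// that satisfies the workload's arithmetic type and security floor.
// Latency and security figures are representative values from the table.
const ENGINES = {
  'h33-128':   { arithmetic: 'integer', securityBits: 128, batchLatencyUs: 939 },
  'h33-256':   { arithmetic: 'integer', securityBits: 256, batchLatencyUs: 3200 },
  'h33-ckks':  { arithmetic: 'float',   securityBits: 128, batchLatencyUs: 5000 },
  'h33-bfv32': { arithmetic: 'integer', securityBits: 80,  batchLatencyUs: 420 },
};

function chooseEngine({ arithmetic, minSecurityBits }) {
  // Filter by arithmetic type and security floor, then take the fastest match.
  const candidates = Object.entries(ENGINES)
    .filter(([, e]) => e.arithmetic === arithmetic && e.securityBits >= minSecurityBits)
    .sort(([, a], [, b]) => a.batchLatencyUs - b.batchLatencyUs);
  // No exact match: defer to the auto-selecting engine.
  return candidates.length > 0 ? candidates[0][0] : 'h33-fhe-iq';
}

console.log(chooseEngine({ arithmetic: 'integer', minSecurityBits: 128 })); // 'h33-128'
```

In practice this is the decision H33-FHE-IQ automates; the sketch only makes the trade-off between latency and security margin explicit.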
Use Cases

What You Can Build With Encrypted Computation

The server computes on your data without ever decrypting it. Three primary application domains.


Encrypted Biometrics

Enroll and verify biometric templates entirely in FHE ciphertext space. The server performs inner-product matching on encrypted vectors. Biometrics are never decrypted — not during enrollment, not during verification. 32 users matched per ciphertext in under 1ms.


Encrypted Search & Matching

Run keyword matching and similarity searches on encrypted datasets. The query is encrypted, the database is encrypted, and the results are encrypted. Only the querying client can decrypt the results. No data exposure at the search server.


Encrypted ML Inference

Run classification and regression models on encrypted inputs using H33-CKKS. The model processes encrypted feature vectors and returns encrypted predictions. The model provider never sees the input data; the data provider never sees the model weights.

SIMD Batching

32 Users Per Ciphertext — 128x Storage Reduction

CRT-based SIMD packing exploits the polynomial structure of BFV to process multiple users in parallel within a single ciphertext.

Polynomial Degree: 4,096 slots

H33-128 uses N=4096, providing 4,096 plaintext slots per ciphertext. With 128 biometric dimensions per user, this packs exactly 32 users (4096 ÷ 128 = 32). One FHE operation processes all 32 simultaneously.
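The slot arithmetic above can be checked directly. The constants below come from the page (N=4096, 128 dimensions per user); the variable names are illustrative only:

```javascript
// SIMD packing arithmetic for H33-128 (BFV, N=4096).
const N = 4096;            // polynomial degree = plaintext slots per ciphertext
const DIMS_PER_USER = 128; // biometric vector dimensions per user

// Each user occupies 128 consecutive slots, so the batch size is N / 128.
const usersPerCiphertext = Math.floor(N / DIMS_PER_USER);
console.log(usersPerCiphertext); // 32
```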

Per-User Storage: ~256 KB

Without SIMD batching, each encrypted biometric template requires ~32MB. With 32 users per ciphertext, per-user storage drops to ~256KB — a 128x reduction. This makes FHE-encrypted biometric databases practical at scale.

Constant Time: 1–32 Users, Same Cost

Whether you batch 1 user or 32, the FHE computation cost is identical (~939µs). Batching amortizes the expensive NTT and key-switching operations across all users in the ciphertext. Maximum efficiency at full batch.
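At full batch, the amortized per-user cost falls out of the constant batch latency; the figures below are the ones quoted on this page:

```javascript
// Amortized per-user latency at full batch (figures from this page).
const BATCH_LATENCY_US = 939; // constant whether 1 or 32 users are batched
const BATCH_SIZE = 32;

const perUserLatencyUs = Math.round(BATCH_LATENCY_US / BATCH_SIZE);
console.log(perUserLatencyUs); // 29 — i.e. ~29µs per user
```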

CRT Packing: Chinese Remainder Theorem

SIMD batching works because t=65537 satisfies t ≡ 1 (mod 2N). This enables CRT decomposition of the plaintext space into N independent slots. Each slot holds one coefficient of one user’s biometric vector. The math is exact — no approximation error.
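The batching condition t ≡ 1 (mod 2N) can be verified with one line of arithmetic. A minimal check, using the parameters stated above:

```javascript
// Batching requires t ≡ 1 (mod 2N): then X^N + 1 splits into N linear
// factors mod t, giving N independent CRT slots in the plaintext space.
const t = 65537;
const N = 4096;

const supportsBatching = t % (2 * N) === 1;
console.log(supportsBatching); // true: 65536 is a multiple of 8192
```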

Developer Experience

Encrypt, Compute, Decrypt — One API Flow

Client-side encryption, server-side computation on ciphertexts, client-side decryption.

H33 FHE API — Encrypt → Compute → Decrypt
// 1. Generate FHE keys (client-side)
const { publicKey, secretKey } = await h33.fhe.generateKeys({
  engine: 'h33-128',             // BFV, N=4096, t=65537
  batchSize: 32                  // 32 users per ciphertext
});

// 2. Encrypt biometric templates (client-side)
const ciphertext = await h33.fhe.encrypt({
  data: biometricVectors,          // 32 x 128-dim vectors
  publicKey: publicKey
});

// 3. Compute on encrypted data (server-side)
const result = await h33.fhe.compute({
  operation: 'inner_product',     // Biometric matching
  ciphertext: ciphertext,
  enrolledTemplate: enrolledCT    // Pre-encrypted template
});
// Server NEVER sees plaintext — computation on ciphertexts only

// 4. Decrypt result (client-side)
const scores = await h33.fhe.decrypt({
  ciphertext: result,
  secretKey: secretKey
});
// scores → [0.97, 0.12, 0.89, ...] (32 match scores)
// ~939µs total for 32-user batch
FAQ

Frequently Asked Questions

What is an FHE API?
An FHE (Fully Homomorphic Encryption) API allows you to perform computations on encrypted data without decrypting it. You encrypt data client-side, send ciphertexts to the server, the server computes on the ciphertexts, and returns encrypted results that only you can decrypt. The server never sees plaintext data at any point. H33’s FHE API provides this capability with four different encryption engines optimized for different workloads.
Which FHE engine should I use?
H33 offers four engines: H33-128 (BFV) is the default for biometric matching and exact integer arithmetic at N=4096. H33-256 (BFV) uses N=8192 for higher security margins. H33-CKKS is designed for approximate arithmetic like ML inference and floating-point operations. H33-BFV32 is a lightweight engine at N=2048 for fast, low-security-margin authentication. H33-FHE-IQ automatically selects the optimal engine based on your workload parameters.
How fast is H33’s FHE?
H33-128 (BFV, N=4096) completes a 32-user biometric batch in approximately 939 microseconds — under 1 millisecond. This includes encryption, FHE inner product computation, and partial decryption. Per-user latency is approximately 29 microseconds. These benchmarks were measured on AWS Graviton4 with Montgomery NTT, Harvey lazy reduction, and SIMD batching optimizations.
What is SIMD batching and how does it work?
SIMD (Single Instruction, Multiple Data) batching packs multiple users' data into a single ciphertext using the CRT (Chinese Remainder Theorem) structure of BFV encryption. With a polynomial degree of N=4096 and 128 biometric dimensions per user, H33 packs 32 users into one ciphertext. This means one FHE operation processes 32 users simultaneously, reducing per-user compute cost by 32x and per-user storage from ~32MB to ~256KB.
How does H33’s FHE compare to Microsoft SEAL or Zama?
Microsoft SEAL is a library — you build your own FHE application. Zama provides FHE tooling for specific use cases (ML inference via Concrete-ML). H33 is a production-grade FHE API with four optimized engines, integrated ZK proofs, post-quantum signatures, and sub-millisecond latency. H33’s BFV implementation uses Montgomery NTT with Harvey lazy reduction and NEON-accelerated Galois operations — optimizations not available in general-purpose libraries. The result is ~939µs per 32-user batch, compared to typical SEAL benchmarks of 10–50ms for similar operations.
Can I use the FHE API for encrypted search or ML inference?
Yes. H33-CKKS supports approximate arithmetic for ML inference on encrypted data — run classification or regression models where the input data remains encrypted throughout. For encrypted search, H33-128 (BFV) supports encrypted keyword matching and similarity search on encrypted vectors. Both use cases benefit from SIMD batching to process multiple queries per ciphertext.

Compute on Encrypted Data Today

Four FHE engines. Sub-millisecond latency. Your data stays encrypted end to end.
Get Free API Key · Read Documentation
1,000 free FHE operations per month. No credit card required.