Biometrics · 22 min read

Biometric Authentication:
The Complete Implementation Guide for 2026

How biometric matching actually works, why template security is the hardest unsolved problem in identity, and how Fully Homomorphic Encryption eliminates the plaintext template entirely—achieving 1.2 million authentications per second on production hardware with zero plaintext exposure.

~50µs
Per Auth
1.2M
Auth/sec
32
Users/CT
0
Plaintext Templates

Biometric authentication is replacing passwords. Not in some distant future—right now, in 2026, across every industry from banking to border control. Apple Face ID processes over a billion authentications daily. India's Aadhaar system has enrolled 1.4 billion people. The global biometric market has crossed $50 billion and is projected to reach $150 billion by 2032.

But biometric systems carry a risk that no password system ever has: you cannot rotate a compromised fingerprint. When a biometric template leaks, the damage is permanent and irreversible. This single fact changes the entire security architecture.

This guide covers everything an engineering team needs to build a production biometric authentication system in 2026: how matching algorithms work at the mathematical level, what FAR and FRR actually mean with real numbers, why every traditional template storage approach is fundamentally broken, how FHE eliminates the problem architecturally, and how to implement it all at scale.

The Biometric Landscape in 2026

Passwords are dying. Not because they're inconvenient (though they are), but because they've become statistically indefensible. Verizon's 2025 Data Breach Investigations Report found that 81% of hacking-related breaches involved stolen or weak credentials. The average enterprise manages over 25,000 passwords across its workforce. Users reuse passwords across 65% of their accounts. The entire model is broken.

Biometrics solve the core credential problem: the authenticator is inseparable from the person. You cannot share a fingerprint like you share a password. You cannot phish an iris scan. You cannot brute-force a voiceprint. The security model shifts from "something you know" (which can be stolen, guessed, or leaked) to "something you are" (which is cryptographically unique and physically bound).

Biometric Modalities Compared

Each biometric modality has distinct characteristics that determine its suitability for different use cases.

Modality | Typical FAR | Typical FRR | Spoofing Difficulty | User Acceptance | Primary Use Case
Fingerprint | 0.001% | 0.1% | Medium | High | Mobile devices, access control
Face (3D structured light) | 0.0001% | 0.3% | High | High | Device unlock, payments
Iris | 0.00001% | 0.5% | Very High | Medium | Border control, high-security
Voice | 0.1% | 2–5% | Low | High | Call centers, hands-free auth
Behavioral (keystroke, gait) | 1–3% | 5–10% | Very High | High (passive) | Continuous authentication
Palm vein | 0.00008% | 0.01% | Very High | Medium | Payments, physical access

Fingerprint remains the most widely deployed modality due to decades of sensor maturity and high user acceptance. Face recognition has surged since Apple's Face ID launched in 2017, driving a massive wave of 3D structured-light sensor adoption. Iris recognition offers the highest uniqueness (the probability of two irises matching is approximately 1 in 10^78) but requires specialized hardware and close-range capture.

Voice recognition is the weakest modality in terms of security—modern voice synthesis and deepfake audio have pushed spoofing difficulty down significantly. Behavioral biometrics (keystroke dynamics, mouse movement patterns, gait analysis) are too imprecise for one-shot authentication but excel at continuous authentication, where the system silently monitors patterns throughout a session.

How Biometric Matching Works

Every biometric system follows the same fundamental pipeline: capture, extract, encode, match. The raw biometric signal (a fingerprint image, a face photo, an audio clip) is never stored or compared directly. Instead, the system extracts a compact mathematical representation—a template—and matching is performed on templates.

Feature Extraction

The first step transforms raw sensor data into a structured feature vector. The extraction method depends on the modality: minutiae detection for fingerprints, deep neural network embeddings for face and voice, Gabor-filter encoding for iris, and timing or motion statistics for behavioral signals.

Template Creation

The extracted features are normalized and encoded into a template—a fixed-size numerical representation suitable for comparison. For H33's FHE biometric system, templates are 128-dimensional floating-point vectors, normalized to unit length. This standardization is critical because it enables the matching operation to be expressed as a simple inner product (dot product).

Why 128 Dimensions?

128 dimensions provide an optimal trade-off between discriminative power and computational efficiency. Research shows diminishing returns above 128 dims for most modalities. At 128 dimensions, the template is 1,024 bytes (128 × 8 bytes per float64)—compact enough for efficient FHE encryption while preserving sufficient information for sub-0.001% FAR. With SIMD batching, 32 such templates fit into a single 4096-slot BFV ciphertext.

Matching Algorithms

Three major approaches to biometric matching have evolved over the past three decades:

1. Correlation-Based Matching

The oldest approach: superimpose two biometric samples and measure pixel-level correlation. Used in early fingerprint systems. Largely obsolete—highly sensitive to translation, rotation, and deformation. Requires precise alignment pre-processing that adds latency and error.

2. Minutiae-Based Matching

The standard for fingerprint systems. Compares the spatial arrangement of minutiae points (ridge endings and bifurcations) between two templates. Uses algorithms like the Hough transform or graph matching to find the optimal alignment between minutiae sets. Robust to partial prints and local deformation. This is the algorithm behind most AFIS (Automated Fingerprint Identification Systems) deployed by law enforcement.

3. Deep Learning Embedding Matching

The modern approach, dominant in face and voice recognition. A deep neural network maps raw biometric data to a fixed-dimensional embedding space. Matching reduces to computing the cosine similarity (or equivalently, the inner product of unit-normalized vectors) between two embeddings. This is the approach H33 uses because it maps directly to an FHE-friendly operation.

The Inner Product Match

For two unit-normalized 128-dimensional templates A and B, the match score is their inner product:

Match score | score = Σ(A[i] × B[i]) for i = 0..127
Same person | score ≈ 0.75 – 0.99
Different person | score ≈ -0.1 – 0.3
Typical threshold | 0.55 – 0.65

This inner product is exactly the operation that BFV FHE can compute on encrypted data. The enrolled template and the probe template are both encrypted—the server computes the inner product without ever seeing either vector in plaintext.
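As a plaintext sketch, the match rule reduces to a few lines; the helper names and the 0.6 default threshold are illustrative (the typical range given above is 0.55–0.65):

```javascript
// Unit-normalize a template so the inner product equals cosine similarity.
function normalize(v) {
  const norm = Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return v.map((x) => x / norm);
}

// Match score: inner product of two unit-normalized templates.
function matchScore(a, b) {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

// Accept/reject decision at a policy-chosen threshold.
function isMatch(enrolled, probe, threshold = 0.6) {
  return matchScore(normalize(enrolled), normalize(probe)) >= threshold;
}
```

In plaintext this is 128 multiply-adds; the point of BFV is that this same multiply-and-accumulate structure survives encryption.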

FAR, FRR, and EER Explained

Every biometric system is governed by two opposing error rates: the False Acceptance Rate (FAR), the probability that an impostor is incorrectly accepted, and the False Rejection Rate (FRR), the probability that a legitimate user is incorrectly rejected. The Equal Error Rate (EER) is the operating point where the two rates are equal.

FAR and FRR are inversely related through the matching threshold. Raising the threshold reduces FAR (harder to fool) but increases FRR (more legitimate rejections). Lowering it does the opposite. The threshold is a policy decision, not a technical one—a nuclear facility sets a different threshold than a phone unlock.
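To make the trade-off concrete, here is a small sketch that computes FAR and FRR at two thresholds over synthetic score samples (the scores are illustrative, not measured data):

```javascript
// FAR: fraction of impostor scores at or above the threshold (false accepts).
// FRR: fraction of genuine scores below the threshold (false rejects).
function errorRates(genuineScores, impostorScores, threshold) {
  const far =
    impostorScores.filter((s) => s >= threshold).length / impostorScores.length;
  const frr =
    genuineScores.filter((s) => s < threshold).length / genuineScores.length;
  return { far, frr };
}

const genuine = [0.92, 0.88, 0.75, 0.81, 0.58]; // same-person scores
const impostor = [0.12, 0.31, 0.05, 0.27, 0.62]; // different-person scores

const lenient = errorRates(genuine, impostor, 0.55); // lower threshold: higher FAR
const strict = errorRates(genuine, impostor, 0.70);  // higher threshold: higher FRR
```

Sweeping the threshold over real score distributions traces the full error-trade-off curve; the crossing point of the two rates is the EER.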

System | FAR | FRR | EER | Test Dataset
Apple Face ID | 0.0001% | ~2% | N/A | 1M faces (Apple internal)
NIST FRVT Top-1 (2025) | 0.001% | 0.08% | <0.1% | Visa photos, 12M gallery
Fingerprint (500 dpi sensor) | 0.001% | 0.1% | ~0.2% | FVC 2006
Iris (Daugman IrisCode) | 0.00001% | 0.5% | ~0.08% | NIST IREX
H33 FHE Biometric | <0.001% | <0.3% | <0.15% | 128-dim embeddings, 32-user batch

The Template Security Problem

Here is the fundamental paradox of biometric authentication: the very thing that makes biometrics strong (they are unique, permanent, and inseparable from the person) is exactly what makes their compromise catastrophic.

Critical Warning

You cannot rotate a compromised fingerprint. When a password database leaks, you reset passwords. When an encryption key is compromised, you generate new keys. When a biometric template leaks, the damage is permanent. The user's fingerprint, iris pattern, or facial geometry cannot be changed. Every system that ever uses that biometric for that person is compromised—forever. This is not a theoretical concern. It has already happened at scale.

The Breach Record

The history of biometric data breaches reads like a security horror story:

2015 — OPM Breach
The U.S. Office of Personnel Management breach exposed 5.6 million fingerprint records belonging to federal employees with security clearances. The attacker (attributed to China) obtained background investigation files including biometric data. Those 5.6 million people cannot get new fingerprints. Their biometric identity is permanently compromised.
2019 — BioStar 2
27.8 million records including fingerprint data, facial recognition data, and unencrypted usernames and passwords were found exposed on a publicly accessible database belonging to Suprema, the security company behind the BioStar 2 biometric lock platform used by banks, government agencies, and the UK Metropolitan Police.
2020 — Clearview AI
Clearview AI suffered a breach exposing its entire client list, drawing scrutiny to its database of 3 billion facial images scraped from the internet. While raw images are not biometric templates per se, they can be used to generate templates for any face recognition system.
2021 — MOSIP (India)
Researchers discovered vulnerabilities in the Modular Open Source Identity Platform that could allow extraction of biometric data from Aadhaar-like national ID systems serving hundreds of millions of people.
2023–2025 — BIPA Litigation Wave
Over $3.6 billion in BIPA settlements, including Facebook ($650M), Google ($100M), TikTok ($92M), and BNSF Railway ($228M). Illinois courts ruled that each individual biometric scan constitutes a separate violation, creating existential liability for companies storing unprotected biometric data.

The pattern is unmistakable: every system that stores biometric templates in a form that can be recovered (whether encrypted-at-rest, hashed, or in plaintext) will eventually be breached. The encryption keys get stolen. The database gets exfiltrated. The admin account gets compromised. The only template that cannot be stolen is one that never exists in plaintext.

Why Traditional Protection Fails

Standard approaches to protecting biometric templates (encryption at rest, salted hashing, cancelable biometrics, secure enclaves) all share a fatal flaw: the template must be decrypted, or a matchable equivalent reconstructed, at the moment of comparison. That window of plaintext exposure is exactly what an attacker with sufficient server access exploits.

FHE: The Architectural Solution

Fully Homomorphic Encryption (FHE) solves the biometric template problem at the mathematical level, not the operational level. The plaintext template does not exist on the server—not in RAM, not on disk, not in an enclave. The server performs biometric matching on encrypted data and produces an encrypted result, without ever decrypting anything.

How BFV Works for Biometrics

H33 uses the BFV (Brakerski/Fan-Vercauteren) fully homomorphic encryption scheme, specifically configured for biometric matching:

H33 BFV Configuration

Polynomial degree (N) | 4,096
Ciphertext modulus (Q) | 56-bit single modulus
Plaintext modulus (t) | 65,537
SIMD slots | 4,096
Users per ciphertext | 32 (4,096 slots ÷ 128 dims)
Security level | ≥128-bit (lattice-based, post-quantum)

The BFV scheme is built on the Ring Learning With Errors (RLWE) problem, which is believed to be hard for both classical and quantum computers. This means the encrypted biometric templates are quantum-resistant by construction—Shor's algorithm cannot help an attacker.

The Encrypted Matching Pipeline

The matching operation is an inner product of two 128-dimensional vectors. In the FHE domain, this proceeds as follows:

  1. Enrollment: The user's biometric template (128 floats) is quantized to integers, packed into a BFV plaintext using SIMD batching, encrypted with the system public key, and stored. The plaintext is immediately discarded. The server only holds ciphertext.
  2. Probe encryption: At verification time, the fresh biometric capture is similarly quantized, packed, and encrypted on the client side.
  3. Encrypted inner product: The server multiplies the encrypted probe by the enrolled ciphertext element-wise (using BFV homomorphic multiplication), then sums across the 128 dimensions using Galois rotations—all in the encrypted domain.
  4. Threshold comparison: The encrypted score is compared against the threshold, producing an encrypted boolean. The server learns only the accept/reject decision (via a zero-knowledge proof or partial decryption), never the score or the templates.
Rust encrypted_biometric_verify.rs
// H33 FHE biometric verification — zero plaintext exposure

// 1. Enrollment: encrypt and store (client-side)
let template: Vec<f64> = extract_embedding(&face_image);  // 128-dim
let quantized = quantize_template(&template, scale);
let ct_enrolled = bfv_encrypt(&quantized, &public_key);
store_encrypted_template(user_id, &ct_enrolled);
// Plaintext `template` dropped here — never persisted

// 2. Verification: encrypted matching (server-side)
let ct_probe = receive_encrypted_probe(request);
let ct_enrolled = load_enrolled_ciphertext(user_id);

// Inner product entirely in FHE domain
let ct_product = bfv_multiply(&ct_probe, &ct_enrolled);
let ct_score = fhe_accumulate_slots(&ct_product, 128);

// Server never sees the score or the templates; only ciphertexts
let ct_result = encrypted_threshold_check(&ct_score, threshold);
let attestation = dilithium_sign(&ct_result, &signing_key);
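The `quantize_template` step above is not spelled out; one minimal sketch, in the same JavaScript used for the client-side examples in this guide, is fixed-point scaling. The scale of 181 is an assumption chosen so that 181² = 32,761 stays inside the signed half of the plaintext modulus t = 65,537: for unit-norm templates, Cauchy–Schwarz bounds the quantized inner product by scale², so the score never wraps modulo t.

```javascript
// Fixed-point quantization of a unit-normalized template.
// scale = 181 is an illustrative assumption: 181^2 = 32,761 < 32,768,
// the signed range of plaintext modulus t = 65,537.
function quantizeTemplate(template, scale = 181) {
  return template.map((x) => Math.round(x * scale));
}

// After decryption, two factors of `scale` cancel out of the raw score.
function dequantizeScore(rawScore, scale = 181) {
  return rawScore / (scale * scale);
}
```

A production system would pick the scale jointly with the noise budget and matching-accuracy requirements; this sketch only shows the shape of the transformation.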
Key Insight

The security guarantee is mathematical, not operational. It does not depend on the server being uncompromised, the admin being trustworthy, or the network being secure. Even if an attacker has root access to the server, reads all of RAM, and exfiltrates the entire database, they get BFV ciphertexts. Decrypting those ciphertexts requires solving the RLWE problem—which is computationally infeasible for both classical and quantum computers.

SIMD Batching: 32 Users Per Ciphertext

BFV's SIMD (Single Instruction, Multiple Data) capability packs multiple plaintext values into a single ciphertext. With N=4096 polynomial slots and 128 dimensions per template, H33 packs 32 user templates into one ciphertext. This means a single FHE operation processes 32 users simultaneously at nearly the same cost as processing one.

The CRT (Chinese Remainder Theorem) batching condition requires t ≡ 1 (mod 2N). With t = 65,537 and N = 4,096 this holds (65,537 = 8 × 8,192 + 1), so a primitive 8,192-th root of unity exists modulo t (6,561 = 3^8 is one), enabling full slot packing. The result is a 128x reduction in per-user storage (from ~32 MB to ~256 KB per user) and massive throughput gains from SIMD parallelism.
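The batching arithmetic can be checked directly. The contiguous slot layout below (user i occupies slots i×128 through i×128+127) is one plausible packing, not necessarily H33's exact layout:

```javascript
const N = 4096;   // polynomial degree = number of SIMD slots
const T = 65537;  // plaintext modulus
const DIMS = 128; // template dimensionality

// CRT batching condition: t ≡ 1 (mod 2N)
if (T % (2 * N) !== 1) throw new Error("batching condition violated");

const USERS_PER_CT = N / DIMS; // 32 users per ciphertext

// Pack user templates contiguously into the 4096-slot vector.
function packTemplates(templates) {
  const slots = new Array(N).fill(0);
  templates.forEach((tpl, user) => {
    tpl.forEach((v, d) => {
      slots[user * DIMS + d] = v;
    });
  });
  return slots;
}
```

One homomorphic multiply on two such packed ciphertexts performs all 32 users' element-wise products simultaneously; the per-user sums are then collected with Galois rotations.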

Liveness Detection

A biometric system that accepts a photograph of a face, a silicone fingerprint mold, or a recorded voice clip is not a biometric system—it's a security theater prop. Presentation attack detection (PAD), commonly called liveness detection, is the critical component that distinguishes a live human from a spoof artifact.

Presentation Attack Types

Attack Type | Modality | Sophistication | Detection Difficulty
Printed photo | Face | Low | Easy
Screen replay (2D) | Face | Low | Easy
3D-printed mask | Face | High | Medium
Silicone mold | Fingerprint | Medium | Medium
Gelatin finger | Fingerprint | Low | Easy
High-resolution iris print | Iris | Medium | Medium
Deepfake video | Face | High | Hard
Voice synthesis / deepfake audio | Voice | High | Hard

Passive vs. Active Liveness

Passive liveness analyzes the biometric capture itself for signs of life without requiring the user to perform any action. The system looks for texture analysis (moire patterns from screen replay, paper texture from printed photos), depth cues (structured light or time-of-flight sensors detect flat surfaces), skin reflectance (live skin has different specular properties than silicone or gelatin), and micro-movements (involuntary eye saccades, pulse-driven micro-expressions).

Active liveness instructs the user to perform a specific action: blink, turn their head, smile, or speak a random phrase. This confirms the presence of a responsive human but adds friction to the user experience. Active liveness is harder to spoof (the attacker must respond to a random challenge in real time) but is also more visible to the user as a security step, which can impact adoption.

ISO 30107-3 Certification

The international standard for PAD testing is ISO/IEC 30107-3. It defines two key metrics: APCER (Attack Presentation Classification Error Rate)—the proportion of attack presentations incorrectly classified as genuine, and BPCER (Bona Fide Presentation Classification Error Rate)—the proportion of genuine presentations incorrectly classified as attacks. A production system should target APCER < 1% at BPCER < 5%. Third-party testing labs (iBeta, BixeLab) provide independent ISO 30107-3 Level 1 and Level 2 certification.
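Both metrics are simple ratios over labeled presentation outcomes; a sketch (the field names are illustrative, not part of the standard):

```javascript
// APCER: attack presentations wrongly classified as genuine.
// BPCER: bona fide presentations wrongly classified as attacks.
function padMetrics(results) {
  const attacks = results.filter((r) => r.isAttack);
  const bonaFide = results.filter((r) => !r.isAttack);
  return {
    apcer: attacks.filter((r) => r.classifiedGenuine).length / attacks.length,
    bpcer: bonaFide.filter((r) => !r.classifiedGenuine).length / bonaFide.length,
  };
}
```

Against the target above, a production system needs apcer below 0.01 while keeping bpcer below 0.05.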

The Deepfake Escalation

The most significant threat evolution in biometric security is the rise of deepfake attacks. In 2024–2025, deepfake face generation reached the point where synthetic faces can fool many commercial liveness detection systems. Voice deepfakes are even further ahead—tools like ElevenLabs can clone a voice from 30 seconds of audio with enough fidelity to bypass voice recognition systems.

The defense is layered: passive liveness catches low-effort attacks, active liveness defeats replay and static deepfakes, and multimodal fusion (requiring multiple biometric signals simultaneously) makes it exponentially harder to spoof the complete system. A deepfake video paired with synthetic audio paired with a spoofed fingerprint is orders of magnitude harder to produce than any single modality attack.

Multimodal Fusion

Multimodal biometric systems combine two or more biometric modalities to make a single authentication decision. The security benefits are dramatic: if each modality has an independent FAR of 0.001%, combining two modalities multiplicatively reduces the FAR to 0.000001% (1 in 10^8) under score-level fusion.
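Under the independence assumption stated above, the combined FAR for AND-style fusion is just the product of the per-modality FARs (correlated modalities weaken this guarantee):

```javascript
// FARs expressed as fractions: 0.001% = 1e-5.
// Independence between modalities is an assumption.
function combinedFAR(fars) {
  return fars.reduce((product, far) => product * far, 1);
}

const twoModal = combinedFAR([1e-5, 1e-5]); // ≈ 1e-10, i.e. 0.000001%
```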

Fusion Strategies

Feature-Level Fusion

Concatenates raw feature vectors from multiple modalities into a single vector before matching. Preserves maximum information but requires compatible feature representations and increases dimensionality. Best when modalities share a common embedding space.

Score-Level Fusion

Each modality produces an independent match score. Scores are normalized and combined (weighted sum, SVM, or neural network). Most practical approach—modalities can use different algorithms and hardware. H33 uses this approach.

Decision-Level Fusion

Each modality produces an independent accept/reject decision. Decisions are combined by majority vote, AND logic, or OR logic. Simplest to implement but discards the most information (the continuous match score).

Rank-Level Fusion

Used in identification (1:N) scenarios. Each modality produces a ranked list of candidates. Lists are merged using Borda count, logistic regression, or learning-to-rank algorithms. Common in law enforcement AFIS systems.

FAR Improvement Through Fusion

Configuration | FAR | FRR | Improvement
Face only | 0.001% | 0.3% | Baseline
Fingerprint only | 0.001% | 0.1% | Baseline
Face + Fingerprint (score fusion) | 0.000001% | 0.4% | 1000x FAR reduction
Face + Fingerprint + Voice | ~0.0000001% | 0.8% | 10,000x FAR reduction
Face + Behavioral (continuous) | 0.0001% | 0.5% | 10x FAR reduction

The trade-off is FRR: combining modalities with AND logic means the system rejects if any modality fails, which compounds false rejections. In practice, weighted score-level fusion with learned weights achieves the best balance—the fusion model learns to trust high-confidence modalities more and low-confidence ones less, adapting per-user and per-session.
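A minimal weighted score-level fusion rule looks like the sketch below; the weights and threshold are illustrative, not H33's learned values:

```javascript
// Weighted average of per-modality match scores (each normalized to [0, 1]).
// A higher weight means a more trusted modality.
function fuseScores(scores, weights) {
  const totalWeight = weights.reduce((s, w) => s + w, 0);
  const weightedSum = scores.reduce((s, x, i) => s + x * weights[i], 0);
  return weightedSum / totalWeight;
}

// Trust the high-confidence face score more than the noisier voice score.
const fused = fuseScores([0.91, 0.62], [0.7, 0.3]); // ≈ 0.823
const accepted = fused >= 0.75;
```

In a learned-fusion setup, the weights come from a model trained on labeled genuine/impostor score pairs rather than being fixed by hand.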

Multimodal + FHE

H33's FHE biometric system supports multimodal fusion natively. Each modality's 128-dim embedding is encrypted independently, and the fused score is computed in the encrypted domain. The server performs weighted score fusion on encrypted scores and produces an encrypted accept/reject decision. This means multimodal fusion adds zero additional plaintext exposure—the server sees no scores, no templates, and no intermediate fusion results.

Privacy Regulations

Biometric data occupies the most protected category in virtually every major privacy regulation worldwide. The regulatory landscape is converging on a clear position: biometric data requires explicit consent, heightened security, and severe penalties for mishandling.

Regulation | Jurisdiction | Biometric Classification | Consent Required | Max Penalty
GDPR Article 9 | EU/EEA | "Special category" data | Explicit | 4% global revenue or €20M
BIPA | Illinois, USA | "Biometric identifier" | Written, informed | $5,000/violation (intentional)
CCPA/CPRA | California, USA | "Sensitive personal information" | Opt-out right | $7,500/intentional violation
PIPL | China | "Sensitive personal information" | Separate consent | 5% annual revenue or ¥50M
LGPD | Brazil | "Sensitive personal data" | Specific consent | 2% revenue or R$50M
POPIA | South Africa | "Special personal information" | Explicit | R10M or 10 years imprisonment

BIPA: The $3.6 Billion Warning

Illinois' Biometric Information Privacy Act (BIPA) is the most aggressive biometric privacy law in the world. Enacted in 2008, BIPA requires written informed consent before collecting any biometric identifier, a publicly available retention-and-destruction schedule, a prohibition on selling or otherwise profiting from biometric data, and a reasonable standard of care in storing and protecting it.

In 2023, the Illinois Supreme Court ruled in Cothron v. White Castle that each individual biometric scan constitutes a separate violation. For a company scanning employee fingerprints twice daily for time-clock purposes, that's $10,000 per employee per day in potential statutory damages. White Castle alone faced potential liability exceeding $17 billion. Texas, Washington, and at least 15 other states have enacted or proposed similar laws.

How FHE Satisfies Every Regulation

FHE-based biometric processing provides a uniquely strong compliance posture because the architecture itself enforces the regulatory requirements:

FHE Regulatory Compliance Map

GDPR Art. 9: "Appropriate safeguards" | Template never exists in plaintext
GDPR Art. 25: "Data protection by design" | Privacy is mathematical, not policy
BIPA: "Protect from disclosure" | Disclosure is computationally infeasible
BIPA: "Destroy when purpose fulfilled" | Plaintext never created to destroy
CCPA: "Reasonable security" | 128-bit lattice security (post-quantum)
PIPL: "Necessity and minimum scope" | Server processes data it cannot read

The critical advantage is that FHE compliance does not depend on policy enforcement, employee training, or access control configurations. The server mathematically cannot access the plaintext biometric data, regardless of who has root access, what the retention policy says, or whether the DBA is malicious. This is a fundamentally different compliance posture than "we encrypted it at rest and have access controls."

Implementation Guide

This section provides concrete implementation patterns for integrating H33's FHE biometric authentication into a production system.

Enrollment Flow

Enrollment captures the user's biometric, generates an encrypted template, and stores it. The plaintext template exists only in the client's memory for the duration of feature extraction and quantization.

JavaScript enrollment.js
// Step 1: Capture biometric and extract embedding (client-side)
const capture = await biometricSensor.capture();
const embedding = await extractEmbedding(capture); // 128-dim float array

// Step 2: Enroll via H33 API
const response = await fetch('https://api.h33.ai/v1/biometric/enroll', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    user_id: 'usr_a1b2c3d4',
    template: embedding,        // Encrypted client-side by SDK
    modality: 'face',
    liveness_token: livenessResult.token
  })
});

// Response: { enrolled: true, template_id: "tmpl_...", encryption: "BFV-4096" }
// The plaintext embedding is discarded — only ciphertext stored server-side

Verification Flow

Verification captures a fresh biometric probe, sends it for encrypted matching against the enrolled template, and returns an attestation-signed result.

JavaScript verification.js
// Step 1: Capture and extract (client-side)
const probe = await biometricSensor.capture();
const probeEmbedding = await extractEmbedding(probe);

// Step 2: Verify via H33 API
const result = await fetch('https://api.h33.ai/v1/biometric/verify', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    user_id: 'usr_a1b2c3d4',
    probe: probeEmbedding,      // Encrypted client-side by SDK
    modality: 'face',
    liveness_token: livenessResult.token
  })
});

// Response:
// {
//   match: true,
//   confidence: "high",
//   attestation: "dilithium_sig_...",  // ML-DSA signed result
//   zkp: "stark_proof_...",             // STARK proof of correct computation
//   latency_us: 48,                     // ~50µs end-to-end
//   encryption: "BFV-4096",
//   pq_secure: true
// }

Continuous Authentication

For high-security environments, continuous authentication silently re-verifies the user throughout their session using behavioral biometrics (keystroke dynamics, mouse movement patterns) fused with periodic face re-verification.

JavaScript continuous_auth.js
// Initialize continuous auth session
const session = await h33.continuousAuth.start({
  user_id: 'usr_a1b2c3d4',
  modalities: ['behavioral', 'face'],
  face_interval_sec: 300,        // Re-verify face every 5 min
  behavioral_window_sec: 30,     // Evaluate behavior every 30 sec
  confidence_threshold: 0.7,     // Lower than one-shot (cumulative)
  on_confidence_drop: 'step_up'   // Trigger explicit re-auth
});

// Feed behavioral signals (keystroke timing, mouse dynamics)
document.addEventListener('keydown', (e) => {
  session.recordKeystroke({ key: e.key, timestamp: e.timeStamp });
});

// Session emits events
session.on('confidence_drop', () => {
  showReauthModal();  // "Please look at the camera to continue"
});

session.on('session_locked', () => {
  redirectToLogin();  // Behavioral pattern diverged too far
});

Performance Reality

FHE has a well-earned reputation for being slow. Early FHE implementations were millions of times slower than plaintext computation. The question every engineering team asks is: can FHE biometrics actually work at production scale?

The answer, measured on production hardware, is yes.

H33 Production Benchmarks

Measured on AWS c8g.metal-48xl (Graviton4, 192 vCPUs, 377 GiB RAM) running 96 parallel workers with the system allocator:

Production Pipeline Latency (Single API Call)

FHE Batch (32 users, BFV inner product) | ~1,375 µs
ZKP (STARK lookup proof) | 0.067 µs
Attestation (SHA3 + Dilithium sign+verify) | ~240 µs
Total (32-user batch) | ~1,615 µs
Per-authentication latency | ~50 µs
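The per-authentication figure is simply the batch latency amortized across the 32 users packed into one ciphertext:

```javascript
const batchLatencyUs = 1615; // full-pipeline latency for one 32-user batch
const usersPerBatch = 32;    // templates per BFV ciphertext

const perAuthUs = batchLatencyUs / usersPerBatch; // ≈ 50.5 µs per authentication
```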

Throughput

Metric | Value | Notes
Sustained full-stack throughput | ~1.2M auth/sec | 96 workers, system allocator, full pipeline
FHE-only ceiling | ~1.29M auth/sec | FHE batch only, no ZKP/attestation
Batch size | 32 users/ciphertext | 4,096 slots ÷ 128 dims
Per-user storage | ~256 KB | 128x reduction from 32 MB via SIMD batching
Hardware cost | $1.80–2.30/hr | c8g.metal-48xl spot pricing, us-east-1
Cost per million auths | ~$0.002 | At sustained throughput on spot instances

To put this in perspective: at 1.2 million authentications per second, a single server can handle the combined daily biometric authentication volume of most countries in under a minute. At ~$0.002 per million authentications, FHE biometric matching is cheaper than most traditional plaintext matching services, while providing categorically stronger security.

Why This Performance Is Possible

Achieving ~50-microsecond FHE biometric authentication required a stack of interdependent optimizations, each of which compounds the others: SIMD batching of 32 users per ciphertext, a single 56-bit ciphertext modulus that avoids multi-limb residue arithmetic, NTT kernels that stay cache-resident at N=4,096, and 96-way worker parallelism across the full pipeline.

Implementation Note

These optimizations are tightly coupled to the BFV parameter selection (N=4096, 56-bit single modulus, t=65537). Changing any parameter invalidates many of the optimizations. For example, increasing N to 8192 for higher security would double NTT cost and halve batch density. The parameter set is the result of extensive co-optimization between security requirements, matching accuracy, and performance.

Putting It All Together

Building a production biometric authentication system in 2026 requires getting five things right simultaneously:

Production Checklist

  1. Template security: FHE-encrypted templates with zero plaintext exposure. The template never exists on the server in decryptable form. This is the non-negotiable foundation—everything else is security theater without it.
  2. Liveness detection: ISO 30107-3 certified PAD with both passive and active liveness. Deepfake-resilient detection using depth sensing, texture analysis, and challenge-response. Without liveness, you're authenticating photographs.
  3. Multimodal fusion: At minimum two modalities for high-security scenarios. Score-level fusion with learned weights. Each additional modality multiplicatively reduces FAR while FHE ensures zero additional plaintext exposure.
  4. Regulatory compliance: GDPR Article 9 explicit consent, BIPA-compliant notice and retention schedule, CCPA/CPRA sensitive data handling. FHE provides architectural compliance—but you still need the consent flows and documentation.
  5. Post-quantum security: BFV lattice-based encryption is quantum-resistant by construction. ML-DSA (Dilithium) signatures for attestation. ML-KEM (Kyber) for key exchange. The entire pipeline must be quantum-safe—one classical link breaks the chain.

The traditional approach was to build each of these as a separate system: a biometric SDK from one vendor, an encryption layer from another, a compliance framework from a third, and hope they integrate. The modern approach is a single API call that handles the entire pipeline—FHE encryption, biometric matching, STARK proof of correct computation, and Dilithium-signed attestation—in ~50 microseconds.

Biometric authentication is the future of identity. But only if the templates are protected with the same mathematical rigor we apply to the matching itself. FHE makes that possible. The rest is engineering.


H33 provides post-quantum biometric authentication infrastructure: BFV FHE-encrypted matching, STARK zero-knowledge proofs, and ML-DSA digital signatures—all in a single API call at ~50µs per authentication. 1.2 million authentications per second on production hardware. Zero plaintext templates. Quantum-safe by construction.

Build With Post-Quantum Security

Enterprise-grade FHE biometrics, ZKP attestation, and post-quantum cryptography. One API call. Sub-millisecond latency.

Get Free API Key → Read the Docs
Free tier · 10,000 API calls/month · No credit card required