Connect any biometric model to H33's FHE-encrypted matching pipeline. Your model extracts the embedding — H33 encrypts, stores, and matches it homomorphically. Zero plaintext exposure, ~50µs per authentication.
H33 is an encrypted matching infrastructure layer. It accepts pre-extracted embedding vectors
(&[f32]), encrypts them with BFV fully homomorphic encryption, and computes cosine
similarity entirely in the encrypted domain. H33 never sees, stores, or processes raw biometric data.
Your model is the camera. H33 is the vault. NEC, Cognitec, ArcFace, SpeechBrain — they produce float vectors. H33 encrypts those vectors and matches them without ever decrypting.
H33 provides pre-built adapters for these models. Any model producing a float vector can be used via GenericAdapter or the raw enroll()/verify() API.
| Adapter | Model | Modality | Dim | Normalization | Input Format |
|---|---|---|---|---|---|
| ArcFaceAdapter | InsightFace / ArcFace | Face | 512 | L2-normalized (validates [0.9, 1.1]) | float32 array |
| SpeechBrainAdapter | ECAPA-TDNN | Voice | 192 | Auto L2-normalize | float32 array |
| SourceAFISAdapter | SourceAFIS | Fingerprint | 256 | Auto L2-normalize | Spatial-binned float32 |
| GenericAdapter | Any model | Any | Configurable | Auto L2-normalize | float32 array |
SourceAFIS does NOT output float vectors. It outputs CBOR-encoded minutiae templates. Client-side code must convert minutiae to a 256-D spatial-binned vector before sending to H33. See the SourceAFIS section below for conversion code.
```python
# pip install insightface opencv-python requests
from insightface.app import FaceAnalysis
import cv2, requests

# 1. Extract 512-D embedding
app = FaceAnalysis(name='buffalo_l')
app.prepare(ctx_id=0)
faces = app.get(cv2.imread("photo.jpg"))
embedding = faces[0].normed_embedding.tolist()  # 512-D, L2-normalized

# 2. Enroll with H33 (embedding encrypted server-side via BFV FHE)
resp = requests.post("https://api.h33.ai/v1/enroll", json={
    "user_id": "user-123",
    "embedding": embedding,
    "biometric_type": "facial"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})
print(resp.json())  # {"enrolled": true, "user_id": "user-123"}
```
```python
# Extract fresh embedding from live capture
faces = app.get(cv2.imread("live_capture.jpg"))
probe = faces[0].normed_embedding.tolist()  # L2-normalized probe

# Verify against enrolled template (FHE cosine similarity)
resp = requests.post("https://api.h33.ai/v1/verify", json={
    "user_id": "user-123",
    "embedding": probe,
    "biometric_type": "facial"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})
result = resp.json()
# {"match": true, "similarity": 0.94, "proof": "0xabc...", "attestation": "dilithium:..."}
```
```rust
use h33::biometric_auth::{BiometricAuthSystem, BiometricAuthConfig, ArcFaceAdapter};

let system = BiometricAuthSystem::new(BiometricAuthConfig::default())?;
let adapter = ArcFaceAdapter;

// Adapter validates: 512-D, finite, non-zero, L2 norm in [0.9, 1.1]
let result = system.enroll_with_adapter("user-123", &embedding, &adapter)?;
println!("Enrolled: {}", result.user_id);

// Verify: returns match + ZK proof + Dilithium attestation
let verify = system.verify_with_adapter("user-123", &probe, &adapter)?;
println!("Match: {} (similarity: {:.4})", verify.matched, verify.similarity);
```
```python
# pip install speechbrain torchaudio requests
from speechbrain.pretrained import EncoderClassifier
import requests

# 1. Extract 192-D speaker embedding
classifier = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb"
)
signal = classifier.load_audio("voice.wav")
embedding = classifier.encode_batch(signal).squeeze().tolist()  # 192-D

# 2. Enroll (H33 auto-normalizes via SpeechBrainAdapter)
resp = requests.post("https://api.h33.ai/v1/enroll", json={
    "user_id": "user-456",
    "embedding": embedding,
    "biometric_type": "voice"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})
print(resp.json())  # {"enrolled": true, "user_id": "user-456"}
```
```rust
use h33::biometric_auth::{BiometricAuthSystem, BiometricAuthConfig, SpeechBrainAdapter};

let adapter = SpeechBrainAdapter;
// Adapter validates: 192-D, finite, non-zero, then applies L2-normalize
let result = system.verify_with_adapter("user-456", &voice_embedding, &adapter)?;
```
SourceAFIS outputs CBOR minutiae templates, not float vectors. You must convert minutiae to a 256-D spatial-binned vector before sending to H33.
```python
import numpy as np
import sourceafis, requests

# 1. Extract minutiae from fingerprint image
template = sourceafis.extract(fingerprint_image)
minutiae = template.minutiae  # list of (x, y, direction)
W, H = template.width, template.height

# 2. Spatial binning: 16x16 grid → 256-D vector
grid = np.zeros(256, dtype=np.float32)
for m in minutiae:
    cell = int(m.x / W * 16) * 16 + int(m.y / H * 16)
    grid[min(cell, 255)] += 1

# 3. L2-normalize
grid = grid / np.linalg.norm(grid)
embedding = grid.tolist()

# 4. Enroll with H33
resp = requests.post("https://api.h33.ai/v1/enroll", json={
    "user_id": "user-789",
    "embedding": embedding,
    "biometric_type": "fingerprint"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})
```
```rust
use h33::biometric_auth::{BiometricAuthSystem, BiometricAuthConfig, SourceAFISAdapter};

let adapter = SourceAFISAdapter;
// Adapter validates: 256-D, finite, non-zero, then applies L2-normalize
let result = system.enroll_with_adapter("user-789", &fingerprint_vector, &adapter)?;
```
For models not explicitly supported (NEC NeoFace, Cognitec FaceVACS, custom iris encoders, etc.),
use GenericAdapter to specify the expected dimension and biometric type.
```rust
use h33::biometric_auth::{GenericAdapter, BiometricType};

// NEC NeoFace outputs 2048-D facial embeddings
let nec_adapter = GenericAdapter::new(
    BiometricType::Facial,
    2048,
    "NEC-NeoFace"
);

// Custom iris encoder, 1024-D
let iris_adapter = GenericAdapter::new(
    BiometricType::Iris,
    1024,
    "CustomIris"
);

// GenericAdapter validates: correct dim, finite, non-zero, L2-normalizes
let result = system.enroll_with_adapter("user-000", &nec_embedding, &nec_adapter)?;
```
```typescript
import { H33Client } from 'h33-node';

const h33 = new H33Client({ apiKey: process.env.H33_API_KEY });

// Embedding from your model (e.g., TensorFlow.js face-api)
const embedding = await yourModel.extractEmbedding(imageBuffer);

// Enroll
const enrollment = await h33.biometric.enroll({
  userId: 'user-123',
  embedding: Array.from(embedding),
  biometricType: 'facial'
});

// Verify
const result = await h33.biometric.verify({
  userId: 'user-123',
  embedding: Array.from(probeEmbedding),
  biometricType: 'facial'
});
console.log(result.match, result.similarity, result.proof);
```
| Method | Description | Returns |
|---|---|---|
| enroll(user_id, embedding) | Encrypt and store a biometric template. BFV FHE encryption with SIMD batching (32 users/ciphertext). | EnrollmentResult |
| verify(user_id, embedding) | Match against enrolled template via homomorphic cosine similarity. Returns match + ZK proof + attestation. | VerificationResult |
| enroll_with_adapter(user_id, embedding, adapter) | Validate/normalize via adapter, then enroll. Catches dimension mismatches, NaN, zero vectors before encryption. | EnrollmentResult |
| verify_with_adapter(user_id, embedding, adapter) | Validate/normalize via adapter, then verify. Same validation as enroll_with_adapter. | VerificationResult |
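For intuition about what `verify()` computes homomorphically: because both the enrolled template and the probe are L2-normalized, cosine similarity reduces to a plain dot product, which is then compared against the configured threshold (0.7 by default, per `BiometricAuthConfig::default()`). The sketch below is a plaintext reference only; the real computation runs over BFV ciphertexts and never sees these values in the clear.

```python
def cosine_match(enrolled, probe, threshold=0.7):
    """Plaintext reference for the FHE match decision.

    Assumes both vectors are already L2-normalized, so cosine
    similarity is simply their dot product.
    """
    similarity = sum(a * b for a, b in zip(enrolled, probe))
    return similarity >= threshold, similarity

# Identical unit vectors: similarity ~1.0, match.
matched, sim = cosine_match([0.6, 0.8], [0.6, 0.8])

# Orthogonal unit vectors: similarity 0.0, no match.
rejected, zero_sim = cosine_match([1.0, 0.0], [0.0, 1.0])
```

This is also why the adapters insist on L2-normalized input: without unit norm, the dot product is not a cosine similarity and the threshold is meaningless.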
| Method | Description | Returns |
|---|---|---|
| create_liveness_session() | Create a challenge-response liveness session. Returns challenges (blink, head turn, speech) with time limits. | AntiSpoofingSession |
| verify_with_liveness(user_id, embedding, capture) | Liveness check first, then FHE verification. Rejects photo attacks, replays, deepfakes before spending FHE cycles. | LivenessVerificationResult |
```rust
use h33::biometric_auth::{BiometricAuthConfig, BiometricAuthSystem};

// Default: FHE Standard mode, 0.7 threshold, anti-spoofing OFF
let config = BiometricAuthConfig::default();

// Fast: FHE Turbo mode (development/testing)
let config = BiometricAuthConfig::fast();

// Post-quantum: FHE Precision mode, 0.75 threshold
let config = BiometricAuthConfig::post_quantum();

// Custom with anti-spoofing enabled
let config = BiometricAuthConfig {
    anti_spoofing_enabled: true,
    anti_spoofing_risk_level: RiskLevel::High,
    ..BiometricAuthConfig::default()
};

let system = BiometricAuthSystem::new(config)?;
```
H33's adapter layer catches common integration errors before they reach the FHE pipeline:
| Error | Cause | Fix |
|---|---|---|
| expected 512-D, got 256-D | Wrong model output dimension for chosen adapter | Check model output shape; use correct adapter |
| non-finite value at index 42 | NaN or Infinity in embedding (bad input image, failed inference) | Validate model output; check for null face detections |
| zero vector (norm² = 0.00e+0) | All-zero embedding (no face detected, silent audio) | Ensure face/voice is present in input |
| L2 norm 113.42 outside [0.90, 1.10] | ArcFace embedding not L2-normalized (raw output without post-processing) | L2-normalize client-side: v / np.linalg.norm(v) |
```python
import numpy as np

# Defensive embedding preparation
def prepare_embedding(raw_embedding, expected_dim):
    v = np.array(raw_embedding, dtype=np.float32)
    # Check dimension
    assert v.shape == (expected_dim,), f"Expected {expected_dim}-D, got {v.shape}"
    # Check for NaN/Inf
    assert np.all(np.isfinite(v)), "Embedding contains NaN or Inf"
    # Check non-zero
    norm = np.linalg.norm(v)
    assert norm > 1e-6, "Zero vector — no biometric detected"
    # L2-normalize
    return (v / norm).tolist()
```
**Validate before encrypting.** Use `prepare_embedding()` or the adapter's `validate_and_normalize()` to catch bad data early. FHE encryption is expensive; don't waste cycles on garbage input.

**Block spoofing before matching.** Set `anti_spoofing_enabled: true` in `BiometricAuthConfig`. This adds liveness detection (blink, movement, challenge-response) before FHE matching, blocking photo attacks, replays, and deepfakes.

H33's anti-spoofing pipeline detects 21 attack types across face and voice modalities, including photo attacks, replay attacks, deepfakes, screen captures, and synthetic voice. It runs before FHE matching: if liveness fails, no FHE cycles are spent.
```rust
use h33::biometric_auth::{
    BiometricAuthSystem, BiometricAuthConfig, BiometricCapture, FaceFrame, RiskLevel,
};

// Enable anti-spoofing
let config = BiometricAuthConfig {
    anti_spoofing_enabled: true,
    anti_spoofing_risk_level: RiskLevel::High,
    ..BiometricAuthConfig::default()
};
let system = BiometricAuthSystem::new(config)?;

// 1. Create liveness session (returns challenges)
let session = system.create_liveness_session()?;

// 2. Collect biometric capture with face frames
let capture = BiometricCapture {
    face_frames: Some(vec![frame1, frame2, frame3]),
    voice_segments: None,
    challenge_results: Some(challenge_responses),
};

// 3. Verify with liveness (anti-spoof first, then FHE match)
let result = system.verify_with_liveness("user-123", &embedding, &capture)?;

if result.liveness_passed {
    // FHE verification ran
    if let Some(verify) = result.verification_result {
        println!("Match: {}, Similarity: {:.4}", verify.matched, verify.similarity);
    }
} else {
    // Spoofing detected — FHE verification was NOT run
    println!("Liveness failed: {:?}", result.liveness_result);
}
```
H33's FHE-based biometric architecture satisfies the strictest biometric privacy regulations. Raw biometric data never reaches H33 servers — only encrypted ciphertexts are stored and processed.
The `unenroll()` API enables the right to deletion; H33, acting as a data processor, stores only BFV ciphertexts.

Get an API key and integrate H33's FHE biometric pipeline in under 10 minutes.
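A right-to-deletion flow could look like the sketch below. The `/v1/unenroll` REST path and its payload are assumptions made by analogy with the `/v1/enroll` and `/v1/verify` examples above; consult the API reference for the exact contract.

```python
def build_unenroll_request(user_id: str, api_key: str):
    """Construct the (url, body, headers) for a deletion call.

    NOTE: the /v1/unenroll endpoint is a hypothetical mirror of
    /v1/enroll; only the encrypted BFV ciphertext is deleted
    server-side -- no plaintext template ever existed.
    """
    url = "https://api.h33.ai/v1/unenroll"
    body = {"user_id": user_id}
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, body, headers

# Usage (performs the actual deletion):
# import requests
# url, body, headers = build_unenroll_request("user-123", "YOUR_API_KEY")
# requests.post(url, json=body, headers=headers).raise_for_status()
```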