
Biometric SDK Integration Guide

Connect any biometric model to H33's FHE-encrypted matching pipeline. Your model extracts the embedding — H33 encrypts, stores, and matches it homomorphically. Zero plaintext exposure, ~50µs per authentication.

~50µs per auth · 1.2M auth/sec · 4 modalities · post-quantum secure

Architecture: You Extract, We Encrypt

H33 is an encrypted matching infrastructure layer. It accepts pre-extracted embedding vectors (&[f32]), encrypts them with BFV fully homomorphic encryption, and computes cosine similarity entirely in the encrypted domain. H33 never sees, stores, or processes raw biometric data.

Key Concept

Your model is the camera. H33 is the vault. NEC, Cognitec, ArcFace, SpeechBrain — they produce float vectors. H33 encrypts those vectors and matches them without ever decrypting.
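Conceptually, the match H33 computes under FHE is the same cosine similarity you can compute in plaintext; because adapters L2-normalize inputs, it reduces to a dot product against the stored template. A plaintext sketch (0.7 is the default match threshold):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Adapters L2-normalize inputs, so cosine similarity is just a dot product
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

enrolled = np.array([0.6, 0.8, 0.0], dtype=np.float32)  # stored template
probe = np.array([0.6, 0.78, 0.05], dtype=np.float32)   # live capture

sim = cosine_similarity(enrolled, probe)
print(sim >= 0.7)  # True: above the default 0.7 match threshold
```

H33 performs this computation on BFV ciphertexts, so neither vector is ever decrypted server-side.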

Step 1: Feature Extraction (Your Side)
Run your biometric model (ArcFace, SpeechBrain, SourceAFIS, NEC, etc.) to produce a float embedding vector from raw image/audio/fingerprint data. (Client-side, your model)

Step 2: Adapter Validation (H33 SDK)
Pre-built adapters validate dimension, check for NaN/Inf, verify L2 normalization, and normalize if needed, catching bad input before encryption. (BiometricAdapter, optional)

Step 3: FHE Encryption + Matching (H33 Core)
BFV encrypts the embedding into a lattice-based ciphertext; cosine similarity is computed homomorphically, packing 32 users per ciphertext via SIMD batching. (~1,375µs per 32-user batch, post-quantum)

Step 4: ZK Proof + Attestation (H33 Core)
A STARK lookup proof (0.067µs) plus a Dilithium signature (~240µs) produce a verifiable, post-quantum attestation of the match result. (SHA3-256, ML-DSA)
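The headline per-auth figure follows from Step 3's SIMD amortization: one homomorphic batch covers 32 users, so the per-user cost is roughly 43µs, consistent with the ~50µs quoted above.

```python
batch_latency_us = 1375    # one homomorphic cosine-similarity batch (Step 3)
users_per_ciphertext = 32  # SIMD batching factor

per_user_us = batch_latency_us / users_per_ciphertext
print(round(per_user_us, 1))  # 43.0 -> amortized microseconds per authentication
```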

Supported Models & Embedding Formats

H33 provides pre-built adapters for these models. Any model producing a float vector can be used via GenericAdapter or the raw enroll()/verify() API.

| Adapter | Model | Modality | Dim | Normalization | Input Format |
|---|---|---|---|---|---|
| ArcFaceAdapter | InsightFace / ArcFace | Face | 512 | L2-normalized (validates [0.9, 1.1]) | float32 array |
| SpeechBrainAdapter | ECAPA-TDNN | Voice | 192 | Auto L2-normalize | float32 array |
| SourceAFISAdapter | SourceAFIS | Fingerprint | 256 | Auto L2-normalize | Spatial-binned float32 |
| GenericAdapter | Any model | Any | Configurable | Auto L2-normalize | float32 array |
Important

SourceAFIS does NOT output float vectors. It outputs CBOR-encoded minutiae templates. Client-side code must convert minutiae to a 256-D spatial-binned vector before sending to H33. See the SourceAFIS section below for conversion code.
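Client code can mirror the dimensions in the table above to fail fast before spending a network round-trip. A minimal sketch (GenericAdapter dimensions are whatever you configure, so only the fixed built-in types are checked):

```python
# Expected embedding dimension per built-in adapter (restating the table above)
ADAPTER_DIMS = {"facial": 512, "voice": 192, "fingerprint": 256}

def check_dim(biometric_type: str, embedding: list) -> None:
    # GenericAdapter dimensions are configurable, so unknown types are skipped
    expected = ADAPTER_DIMS.get(biometric_type)
    if expected is not None and len(embedding) != expected:
        raise ValueError(
            f"{biometric_type}: expected {expected}-D, got {len(embedding)}-D"
        )

check_dim("voice", [0.0] * 192)  # passes silently; a 256-D vector would raise
```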


Quick Start: ArcFace (Face, 512-D)

Python: Extract + Enroll

Python arcface_enroll.py
# pip install insightface opencv-python requests
from insightface.app import FaceAnalysis
import cv2, requests

# 1. Extract 512-D embedding
app = FaceAnalysis(name='buffalo_l')
app.prepare(ctx_id=0)
faces = app.get(cv2.imread("photo.jpg"))
embedding = faces[0].normed_embedding.tolist()  # 512-D, L2-normalized

# 2. Enroll with H33 (embedding encrypted server-side via BFV FHE)
resp = requests.post("https://api.h33.ai/v1/enroll", json={
    "user_id": "user-123",
    "embedding": embedding,
    "biometric_type": "facial"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})

print(resp.json())  # {"enrolled": true, "user_id": "user-123"}

Python: Verify

Python arcface_verify.py
# Extract fresh embedding from live capture
faces = app.get(cv2.imread("live_capture.jpg"))
probe = faces[0].normed_embedding.tolist()

# Verify against enrolled template (FHE cosine similarity)
resp = requests.post("https://api.h33.ai/v1/verify", json={
    "user_id": "user-123",
    "embedding": probe,
    "biometric_type": "facial"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})

result = resp.json()
# {"match": true, "similarity": 0.94, "proof": "0xabc...", "attestation": "dilithium:..."}

Rust: Enroll with Adapter

Rust src/main.rs
use h33::biometric_auth::{BiometricAuthSystem, BiometricAuthConfig, ArcFaceAdapter};

let system = BiometricAuthSystem::new(BiometricAuthConfig::default())?;
let adapter = ArcFaceAdapter;

// Adapter validates: 512-D, finite, non-zero, L2 norm in [0.9, 1.1]
let result = system.enroll_with_adapter("user-123", &embedding, &adapter)?;
println!("Enrolled: {}", result.user_id);

// Verify: returns match + ZK proof + Dilithium attestation
let verify = system.verify_with_adapter("user-123", &probe, &adapter)?;
println!("Match: {} (similarity: {:.4})", verify.matched, verify.similarity);

Quick Start: SpeechBrain (Voice, 192-D)

Python speechbrain_enroll.py
# pip install speechbrain torchaudio requests
from speechbrain.pretrained import EncoderClassifier  # speechbrain >= 1.0: use speechbrain.inference
import requests

# 1. Extract 192-D speaker embedding
classifier = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb"
)
signal = classifier.load_audio("voice.wav")
embedding = classifier.encode_batch(signal).squeeze().tolist()  # 192-D

# 2. Enroll (H33 auto-normalizes via SpeechBrainAdapter)
resp = requests.post("https://api.h33.ai/v1/enroll", json={
    "user_id": "user-456",
    "embedding": embedding,
    "biometric_type": "voice"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})

print(resp.json())  # {"enrolled": true, "user_id": "user-456"}
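Verification mirrors the ArcFace flow: extract a fresh 192-D embedding from a live recording and POST it to the same /v1/verify endpoint with biometric_type "voice". A minimal payload builder, sketched here without the network call (the 192-D check mirrors SpeechBrainAdapter's validation):

```python
def build_voice_verify_payload(user_id: str, embedding: list) -> dict:
    # SpeechBrainAdapter expects a 192-D ECAPA-TDNN embedding; fail fast client-side
    if len(embedding) != 192:
        raise ValueError(f"expected 192-D, got {len(embedding)}-D")
    return {
        "user_id": user_id,
        "embedding": embedding,
        "biometric_type": "voice",
    }

payload = build_voice_verify_payload("user-456", [0.07] * 192)
# requests.post("https://api.h33.ai/v1/verify", json=payload,
#               headers={"Authorization": "Bearer YOUR_API_KEY"})
```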

Rust: Voice Verification

Rust src/voice_verify.rs
use h33::biometric_auth::{BiometricAuthSystem, BiometricAuthConfig, SpeechBrainAdapter};

let adapter = SpeechBrainAdapter;
// Adapter validates: 192-D, finite, non-zero, then applies L2-normalize
let result = system.verify_with_adapter("user-456", &voice_embedding, &adapter)?;

Quick Start: SourceAFIS (Fingerprint, 256-D)

Client-Side Conversion Required

SourceAFIS outputs CBOR minutiae templates, not float vectors. You must convert minutiae to a 256-D spatial-binned vector before sending to H33.

Python sourceafis_convert.py
import numpy as np
import sourceafis, requests

# 1. Extract minutiae from fingerprint image
template = sourceafis.extract(fingerprint_image)
minutiae = template.minutiae  # list of (x, y, direction)
W, H = template.width, template.height

# 2. Spatial binning: 16x16 grid → 256-D vector
grid = np.zeros(256, dtype=np.float32)
for m in minutiae:
    xi = min(int(m.x / W * 16), 15)  # clamp edge coordinates into [0, 15]
    yi = min(int(m.y / H * 16), 15)
    grid[xi * 16 + yi] += 1

# 3. L2-normalize (an empty template would divide by zero)
norm = np.linalg.norm(grid)
assert norm > 0, "no minutiae extracted"
embedding = (grid / norm).tolist()

# 4. Enroll with H33
resp = requests.post("https://api.h33.ai/v1/enroll", json={
    "user_id": "user-789",
    "embedding": embedding,
    "biometric_type": "fingerprint"
}, headers={"Authorization": "Bearer YOUR_API_KEY"})
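To sanity-check the conversion without a real fingerprint capture, the same 16x16 binning can be packaged as a function and exercised on synthetic (x, y) minutiae; bin indices are clamped so edge coordinates stay inside the grid:

```python
import numpy as np

def minutiae_to_vector(minutiae, width, height, grid_size=16):
    """Spatial-bin (x, y) minutiae into an L2-normalized grid_size**2 vector."""
    grid = np.zeros(grid_size * grid_size, dtype=np.float32)
    for x, y in minutiae:
        xi = min(int(x / width * grid_size), grid_size - 1)
        yi = min(int(y / height * grid_size), grid_size - 1)
        grid[xi * grid_size + yi] += 1
    norm = np.linalg.norm(grid)
    return grid / norm if norm > 0 else grid

# Three synthetic minutiae on a 320x480 capture
vec = minutiae_to_vector([(10, 10), (300, 400), (319, 479)], 320, 480)
print(len(vec), round(float(np.linalg.norm(vec)), 4))  # 256 1.0
```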

Rust: Fingerprint with Adapter

Rust src/fingerprint.rs
use h33::biometric_auth::{BiometricAuthSystem, BiometricAuthConfig, SourceAFISAdapter};

let adapter = SourceAFISAdapter;
// Adapter validates: 256-D, finite, non-zero, then applies L2-normalize
let result = system.enroll_with_adapter("user-789", &fingerprint_vector, &adapter)?;

Generic Adapter: Any Model, Any Dimension

For models not explicitly supported (NEC NeoFace, Cognitec FaceVACS, custom iris encoders, etc.), use GenericAdapter to specify the expected dimension and biometric type.

Rust src/custom_model.rs
use h33::biometric_auth::{GenericAdapter, BiometricType};

// NEC NeoFace outputs 2048-D facial embeddings
let nec_adapter = GenericAdapter::new(
    BiometricType::Facial,
    2048,
    "NEC-NeoFace"
);

// Custom iris encoder, 1024-D
let iris_adapter = GenericAdapter::new(
    BiometricType::Iris,
    1024,
    "CustomIris"
);

// GenericAdapter validates: correct dim, finite, non-zero, L2-normalizes
let result = system.enroll_with_adapter("user-000", &nec_embedding, &nec_adapter)?;

Node.js SDK

JavaScript enroll.js
import { H33Client } from 'h33-node';

const h33 = new H33Client({ apiKey: process.env.H33_API_KEY });

// Embedding from your model (e.g., TensorFlow.js face-api)
const embedding = await yourModel.extractEmbedding(imageBuffer);

// Enroll
const enrollment = await h33.biometric.enroll({
  userId: 'user-123',
  embedding: Array.from(embedding),
  biometricType: 'facial'
});

// Verify
const result = await h33.biometric.verify({
  userId: 'user-123',
  embedding: Array.from(probeEmbedding),
  biometricType: 'facial'
});

console.log(result.match, result.similarity, result.proof);

Rust API Reference

Core Methods

| Method | Description | Returns |
|---|---|---|
| enroll(user_id, embedding) | Encrypt and store a biometric template. BFV FHE encryption with SIMD batching (32 users/ciphertext). | EnrollmentResult |
| verify(user_id, embedding) | Match against the enrolled template via homomorphic cosine similarity. Returns match + ZK proof + attestation. | VerificationResult |
| enroll_with_adapter(user_id, embedding, adapter) | Validate/normalize via adapter, then enroll. Catches dimension mismatches, NaN, and zero vectors before encryption. | EnrollmentResult |
| verify_with_adapter(user_id, embedding, adapter) | Validate/normalize via adapter, then verify. Same validation as enroll_with_adapter. | VerificationResult |

Anti-Spoofing Methods (Opt-In)

| Method | Description | Returns |
|---|---|---|
| create_liveness_session() | Create a challenge-response liveness session. Returns challenges (blink, head turn, speech) with time limits. | AntiSpoofingSession |
| verify_with_liveness(user_id, embedding, capture) | Liveness check first, then FHE verification. Rejects photo attacks, replays, and deepfakes before spending FHE cycles. | LivenessVerificationResult |

Configuration

Rust config.rs
use h33::biometric_auth::{BiometricAuthConfig, BiometricAuthSystem, RiskLevel};

// Default: FHE Standard mode, 0.7 threshold, anti-spoofing OFF
let config = BiometricAuthConfig::default();

// Fast: FHE Turbo mode (development/testing)
let config = BiometricAuthConfig::fast();

// Post-quantum: FHE Precision mode, 0.75 threshold
let config = BiometricAuthConfig::post_quantum();

// Custom with anti-spoofing enabled
let config = BiometricAuthConfig {
    anti_spoofing_enabled: true,
    anti_spoofing_risk_level: RiskLevel::High,
    ..BiometricAuthConfig::default()
};

let system = BiometricAuthSystem::new(config)?;

Error Handling

H33's adapter layer catches common integration errors before they reach the FHE pipeline:

| Error | Cause | Fix |
|---|---|---|
| expected 512-D, got 256-D | Wrong model output dimension for the chosen adapter | Check model output shape; use the correct adapter |
| non-finite value at index 42 | NaN or Infinity in embedding (bad input image, failed inference) | Validate model output; check for null face detections |
| zero vector (norm² = 0.00e+0) | All-zero embedding (no face detected, silent audio) | Ensure a face/voice is present in the input |
| L2 norm 113.42 outside [0.90, 1.10] | ArcFace embedding not L2-normalized (raw output without post-processing) | L2-normalize client-side: v / np.linalg.norm(v) |
Python error_handling.py
import numpy as np

# Defensive embedding preparation
def prepare_embedding(raw_embedding, expected_dim):
    v = np.array(raw_embedding, dtype=np.float32)

    # Check dimension
    assert v.shape == (expected_dim,), f"Expected {expected_dim}-D, got {v.shape}"

    # Check for NaN/Inf
    assert np.all(np.isfinite(v)), "Embedding contains NaN or Inf"

    # Check non-zero
    norm = np.linalg.norm(v)
    assert norm > 1e-6, "Zero vector — no biometric detected"

    # L2-normalize
    return (v / norm).tolist()
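The last row of the error table above is the most common integration mistake: raw ArcFace outputs carry norms around 100, far outside the adapter's [0.9, 1.1] window, and a single client-side division fixes it. A quick illustration with a synthetic raw-scale vector:

```python
import numpy as np

# Synthetic stand-in for a raw (un-normalized) 512-D ArcFace output
raw = np.random.default_rng(0).normal(0.0, 5.0, 512).astype(np.float32)
print(float(np.linalg.norm(raw)) > 1.1)   # True: rejected by the [0.9, 1.1] check

normalized = raw / np.linalg.norm(raw)
print(round(float(np.linalg.norm(normalized)), 4))  # 1.0: passes adapter validation
```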


Anti-Spoofing Integration

H33's anti-spoofing pipeline detects 21 attack types across face and voice modalities, including photo attacks, replay attacks, deepfakes, screen captures, and synthetic voice. It runs before FHE matching — if liveness fails, no FHE cycles are spent.

Rust liveness_verify.rs
use h33::biometric_auth::{
    BiometricAuthSystem, BiometricAuthConfig,
    BiometricCapture, FaceFrame, RiskLevel,
};

// Enable anti-spoofing
let config = BiometricAuthConfig {
    anti_spoofing_enabled: true,
    anti_spoofing_risk_level: RiskLevel::High,
    ..BiometricAuthConfig::default()
};
let system = BiometricAuthSystem::new(config)?;

// 1. Create liveness session (returns challenges)
let session = system.create_liveness_session()?;

// 2. Collect biometric capture with face frames
let capture = BiometricCapture {
    face_frames: Some(vec![frame1, frame2, frame3]),
    voice_segments: None,
    challenge_results: Some(challenge_responses),
};

// 3. Verify with liveness (anti-spoof first, then FHE match)
let result = system.verify_with_liveness("user-123", &embedding, &capture)?;

if result.liveness_passed {
    // FHE verification ran
    if let Some(verify) = result.verification_result {
        println!("Match: {}, Similarity: {:.4}", verify.matched, verify.similarity);
    }
} else {
    // Spoofing detected — FHE verification was NOT run
    println!("Liveness failed: {:?}", result.liveness_result);
}

Compliance

H33's FHE-based biometric architecture satisfies the strictest biometric privacy regulations. Raw biometric data never reaches H33 servers — only encrypted ciphertexts are stored and processed.


Start Encrypting Biometrics

Get an API key and integrate H33's FHE biometric pipeline in under 10 minutes.
