Biometrics Security · 22 min read

Liveness Detection:
Preventing Biometric Spoofing Attacks

Presentation attacks range from printed photos to real-time deepfake injection. This guide covers every layer of defense—passive texture analysis, active challenge-response, ISO 30107-3 certification levels, anti-injection techniques—and explains why FHE biometrics provide the cryptographic guarantee that liveness alone cannot.

ISO 30107
PAD Standard
6 Types
Attack Vectors
<0.5%
Target APCER
FHE
Cryptographic Backstop

Every biometric system makes an implicit assumption: the biometric sample it receives comes from a living, physically present human being. Liveness detection is the technology that validates this assumption. Without it, biometric authentication is trivially defeated by anyone with a printer, a smartphone screen, or increasingly, a generative AI model.

The stakes are not theoretical. In 2017, researchers at Bkav Corporation bypassed Apple's Face ID using a 3D-printed mask combined with 2D images of the eye region. In 2019, tests by AI firm Kneron demonstrated that silicone masks could fool face recognition systems at airport kiosks and payment terminals across Asia. And with the explosion of deepfake generation tools since 2023, the barrier to creating convincing synthetic face imagery has dropped to essentially zero.

This guide covers the complete landscape of presentation attack detection (PAD): the taxonomy of attacks, the detection techniques that counter them, the standards that certify them, and the architectural truth that no amount of liveness detection can fully replace cryptographic protection of biometric templates.

Key Insight

Liveness detection and encrypted biometric processing are complementary defenses, not alternatives. Liveness prevents an attacker from impersonating a user at the point of capture. FHE-based biometric matching prevents an attacker from exploiting stolen templates even if they bypass every other control. A mature biometric system deploys both.

Why Liveness Detection Matters

Biometric systems are deployed in contexts where the consequences of spoofing are severe: border control, financial services, healthcare identity, government benefits, and device unlock. A spoofed fingerprint that unlocks a phone is an inconvenience. A spoofed face that clears a bank's KYC check is fraud. A spoofed biometric that passes an immigration gate is a national security incident.

The fundamental vulnerability is that biometric traits are not secret. Your face is visible in every photo you have ever posted. Your fingerprints are on every surface you have ever touched. Your voice is recorded in every phone call. Unlike passwords, biometric traits cannot be changed if compromised. This means the authentication system cannot simply rely on the biometric data being hard to obtain—it must verify that the data is being presented by the actual living person, in real time.

The Presentation Attack Taxonomy (ISO 30107-1)

ISO 30107-1 defines the formal taxonomy for biometric presentation attacks. A Presentation Attack Instrument (PAI) is any artefact or technique used to interfere with the biometric capture subsystem. PAIs are classified by their species (the type of artefact) and the level of effort required to create them.

The standard distinguishes between two fundamental categories: impersonation attacks, in which the attacker presents a PAI in order to be recognized as an enrolled target, and concealment (obfuscation) attacks, in which the attacker tries to avoid being recognized at all.

For authentication systems, impersonation is the primary threat. The following table classifies the major attack vectors by type, sophistication, and the liveness technique required to detect them.

Attack Types: From Printed Photos to Deepfake Injection

Attack Type | PAI Species | Sophistication | Cost | Detection Technique
2D Print Attack | Printed photo (paper/card) | Low | <$1 | Texture analysis, depth estimation, moiré detection
Screen Replay | Photo/video on phone or tablet | Low | <$5 | Moiré pattern detection, screen bezel detection, reflection analysis
3D Rigid Mask | Resin, plaster, or 3D-printed mask | Medium | $50–$500 | Texture/material analysis, skin reflectance, micro-expression detection
Silicone/Latex Mask | Custom-molded flexible mask | High | $2K–$10K | Sub-surface scattering analysis, pulse detection, thermal imaging
Deepfake Injection | Synthetic video injected into camera pipeline | High | $0–$100 | Cryptographic camera attestation, frame integrity, injection detection
Puppet/Animatronic | Mechanical face with articulated features | Very High | $10K+ | Micro-movement analysis, physiological signal detection, multi-modal fusion
Critical Threat: Deepfake Injection

Deepfake injection attacks are the fastest-growing attack vector because they bypass the camera entirely. The attacker does not present a physical artefact to the sensor. Instead, they inject a synthetic video stream into the device's camera pipeline using virtual camera software, rooted devices, or API hooking. This means traditional liveness checks that analyze optical properties of the captured scene (texture, depth, moiré) are completely ineffective. Defending against injection requires a fundamentally different approach: verifying that the video frames originate from a genuine, unmodified camera sensor.

Passive Liveness Detection

Passive liveness operates transparently—the user simply looks at the camera, and the system analyzes the captured frames without requiring any deliberate action. This is the preferred approach for user experience because it adds zero friction. The challenge is achieving high accuracy against sophisticated attacks using only the visual information available in a standard 2D camera feed.

Texture Analysis

The most fundamental passive technique exploits the fact that photographs, screens, and masks have different surface textures than living skin. Algorithms analyze micro-texture patterns using Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), or learned CNN features to distinguish organic skin from printed paper, LCD pixels, or synthetic materials.

A printed photo on paper exhibits dot patterns from the printing process. A screen replay shows pixel grid structure. A 3D-printed mask has layer lines. These artefacts are often invisible to the human eye but detectable by texture classification models trained on large PAI datasets.

Python texture_analysis.py
# Simplified LBP-based texture analysis for PAD
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def compute_lbp_histogram(face_roi, radius=3, n_points=24):
    """Extract LBP texture features from a face region."""
    gray = cv2.cvtColor(face_roi, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, n_points, radius, method='uniform')
    hist, _ = np.histogram(lbp, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return hist  # Feed to SVM or neural classifier

# Modern approaches: Replace LBP with learned CNN features
# ResNet-18 fine-tuned on OULU-NPU / SiW / CASIA-SURF datasets
# Achieves ACER < 1% on intra-dataset, ~5-15% cross-dataset

Moiré Pattern Detection

When a camera photographs a screen, the interaction between the camera's sensor grid and the screen's pixel grid produces moiré interference patterns—characteristic rippled bands that do not appear when imaging real faces. Moiré detection algorithms operate in the frequency domain, applying Fourier or wavelet transforms to identify the periodic frequency signatures unique to screen replay attacks.

This technique is highly effective against screen replay but does not generalize to print attacks (which produce different frequency artefacts) or 3D masks (which produce none). It is typically deployed as one signal in a multi-classifier ensemble.
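A minimal sketch of how the frequency-domain signal can be computed: screen replays leave sharp, periodic peaks in the 2D Fourier spectrum that natural skin texture lacks. The function, the DC-mask radius, and the synthetic "screen grid" and "skin texture" stand-ins below are all illustrative assumptions, not a production detector.

```python
# Illustrative moiré cue: a screen's pixel grid produces strong periodic
# peaks away from the spectrum's centre; organic texture does not.
# The dc_radius and the peak-to-mean statistic are illustrative choices.
import numpy as np

def moire_score(gray_frame: np.ndarray, dc_radius: int = 8) -> float:
    """Ratio of the strongest off-centre spectral peak to the mean
    off-centre magnitude. High values suggest periodic interference."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # Exclude the DC region: natural images concentrate energy there
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 > dc_radius ** 2
    off_center = spectrum[mask]
    return float(off_center.max() / (off_center.mean() + 1e-9))

x = np.arange(128) / 128.0
grid = np.tile(np.sin(2 * np.pi * 32 * x), (128, 1))      # screen-grid stand-in
noise = np.random.default_rng(0).normal(size=(128, 128))  # skin-texture stand-in
```

In practice this single statistic would be one feature among many in an ensemble classifier, since print attacks and 3D masks produce no such peaks.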

Depth Estimation

A real face is a 3D surface. A printed photo is flat. A screen is flat. Depth estimation algorithms infer the 3D structure of the presented face from a single 2D image (monocular depth estimation) or from stereo cameras, structured light projectors, or time-of-flight sensors.

Monocular depth estimation uses deep neural networks trained to predict per-pixel depth maps from RGB images. The model learns that real faces exhibit characteristic depth variation (nose protrudes, eye sockets are recessed, cheeks curve) while flat PAIs produce uniform depth maps. This works well against 2D attacks but struggles against 3D masks, which do have genuine geometric variation.
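The flatness check described above can be sketched directly: given a per-pixel depth map (from any of the sensors or networks mentioned), a planar PAI shows almost no depth spread while a real face varies by centimetres. The 20 mm threshold and the synthetic depth maps below are illustrative assumptions.

```python
# Illustrative depth-based PAD check: a photo or screen is near-planar,
# while a real face's depth varies by tens of millimetres (nose vs. cheeks).
# The threshold is illustrative, not a tuned operating point.
import numpy as np

def looks_flat(depth_mm: np.ndarray, min_spread_mm: float = 20.0) -> bool:
    """Flag a presentation as flat if the robust depth spread of the
    face region (5th–95th percentile) falls below a threshold."""
    lo, hi = np.percentile(depth_mm, [5, 95])
    return (hi - lo) < min_spread_mm

rng = np.random.default_rng(1)
# Planar PAI: constant depth plus ~2 mm of sensor noise
flat_map = 400 + rng.normal(0, 2, size=(64, 64))
# Face-like map: smooth ~40 mm bump in the centre (nose/cheek relief)
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
face_map = 400 - 40 * np.exp(-(xx**2 + yy**2) * 4) + rng.normal(0, 2, size=(64, 64))
```

As the text notes, this cue defeats 2D attacks but not 3D masks, which exhibit genuine depth variation.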

Frequency-Domain Analysis

Beyond moiré detection, frequency-domain analysis examines the full spectral characteristics of the captured image. Real faces exhibit certain high-frequency texture details (pores, fine hair, micro-wrinkles) that are lost or distorted in reproductions. Print attacks lose high-frequency detail due to printer resolution limits. Screen replays introduce aliasing and quantization artefacts. Deepfakes often exhibit spectral inconsistencies in areas the generator network struggles with (hairline, ears, teeth boundaries).

Spectral analysis is particularly useful as a complementary signal because it captures different information than spatial-domain texture analysis, improving ensemble robustness.
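One way to sketch this complementary signal: measure what share of spectral energy sits above a cutoff frequency, since reproductions attenuate the high-frequency detail (pores, fine hair) that genuine captures retain. The cutoff value and the noise/blur stand-ins are illustrative assumptions.

```python
# Illustrative spectral cue: prints and screens lose high-frequency detail,
# shifting the energy distribution toward low frequencies.
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of non-DC spectral energy beyond `cutoff` x Nyquist."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)
    spec[h // 2, w // 2] = 0.0  # drop the DC component
    return float(spec[r > cutoff].sum() / (spec.sum() + 1e-12))

rng = np.random.default_rng(2)
sharp = rng.normal(size=(128, 128))  # texture-rich stand-in for live skin
# Box-blur the same texture to mimic a print's lost high-frequency detail
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in range(-3, 4) for dx in range(-3, 4)) / 49.0
```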

Strengths of Passive Liveness

Zero user friction. Works with standard 2D cameras. Fast inference (10–50ms). No user instruction required. Invisible to the user. Scales to high-throughput scenarios (airport gates, payment terminals).

Limitations of Passive Liveness

Struggles against high-quality 3D masks and silicone prosthetics. Vulnerable to adversarial examples. Performance degrades across different cameras, lighting, and environments. Cannot detect injection attacks that bypass the camera sensor entirely.

Active Liveness Detection

Active liveness requires the user to perform specific actions during the capture process. The system issues a challenge and verifies that the user's response matches the expected behavior. Because the challenge is generated dynamically and unpredictably, a static PAI (photo, pre-recorded video) cannot respond correctly.

Challenge-Response Protocols

The most common active liveness challenges include:

  • Blink on prompt: eyelid closure verified within a short response window
  • Head movement: turn left/right or nod, verified via 3D pose estimation
  • Expression change: smile or raise eyebrows, verified via facial action detection
  • Gaze tracking: follow a dot as it moves across the screen
  • Speech challenge: read randomly generated digits aloud

Randomized Prompt Sequencing

The effectiveness of active liveness depends critically on unpredictability. If an attacker knows the system always asks for "blink then turn left," they can prepare a video that performs those actions. Robust active liveness systems use randomized prompt sequencing—selecting from a pool of challenges in random order, with random timing.

Example: Randomized 3-Challenge Protocol

  1. Challenge 1—Selected randomly from {blink, smile, raise eyebrows}. System waits 1.5–3.0 seconds for response. Verifies action occurred within the prompt window.
  2. Challenge 2—Selected randomly from {turn head left, turn head right, nod}. Must differ from Challenge 1 category. Verifies 3D pose change with correct direction and magnitude.
  3. Challenge 3—Selected randomly from {follow the dot, read digits aloud, hold still for 2s}. Verifies temporal correlation between prompt and response with sub-second precision.

With 3 challenge categories of 3 options each, there are 27 possible sequences. An attacker must either respond in real time (defeating the purpose of using a PAI) or pre-record all 27 combinations (detectable via video compression artefacts and timing inconsistencies).
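The randomized 3-challenge protocol above can be sketched as follows. The challenge pools and the 1.5–3.0 s window mirror the example; the verification inputs (`observed_actions`, `response_times_s`) are hypothetical stand-ins for the outputs of real action classifiers.

```python
# Sketch of the randomized challenge protocol: a cryptographic RNG picks
# one challenge per category, so the sequence cannot be pre-recorded.
import secrets

POOLS = [
    ["blink", "smile", "raise_eyebrows"],
    ["turn_left", "turn_right", "nod"],
    ["follow_dot", "read_digits", "hold_still"],
]

def generate_sequence() -> list:
    """One random challenge per category: 3 x 3 x 3 = 27 sequences."""
    return [secrets.choice(pool) for pool in POOLS]

def verify_session(sequence, observed_actions, response_times_s,
                   window_s=(1.5, 3.0)) -> bool:
    """Pass only if every observed action matches its prompt AND lands
    inside the allowed response window (temporal correlation check)."""
    if len(observed_actions) != len(sequence):
        return False
    return all(
        act == expected and window_s[0] <= t <= window_s[1]
        for expected, act, t in zip(sequence, observed_actions, response_times_s)
    )

seq = generate_sequence()
# A pre-recorded video with instant (0.2s) responses fails the timing check
# even if the actions happen to match:
assert not verify_session(seq, list(seq), [0.2, 0.2, 0.2])
```

Note the use of `secrets` rather than `random`: if the challenge sequence were generated with a predictable PRNG, an attacker who recovered the seed could pre-render the correct response video.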

Usability Tradeoff

Active liveness adds 3–8 seconds of user interaction and requires the user to follow instructions. This reduces completion rates, especially for elderly users, users with disabilities, and users in noisy environments where verbal challenges are impractical. Accessibility regulations (WCAG 2.1, ADA) may require alternative verification paths when active liveness challenges are used. Always design fallback flows.

Hybrid Approaches: Passive + Active

The most robust liveness systems combine passive and active techniques in a tiered architecture. Passive liveness runs on every frame with zero friction. If the passive model's confidence is high, the session proceeds without any active challenge. If the passive model detects ambiguity or elevated risk (e.g., unusual texture patterns, borderline depth estimation), the system escalates to an active challenge.

  1. Step 1: Passive analysis runs on every frame. If the score exceeds 0.95 (high confidence), the session passes with 0s added.
  2. Step 2: Active challenge, triggered when the passive score is ambiguous (0.5–0.95).

This hybrid approach achieves the best of both worlds: the majority of legitimate users pass through with zero friction (passive only), while suspicious sessions receive additional scrutiny. The escalation threshold can be tuned based on the security requirements of the application—tighter thresholds for banking, looser thresholds for social media.
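The escalation logic above reduces to a small triage function. The 0.95 and 0.5 thresholds come from the flow described in the text; treating scores below 0.5 as an outright reject, and the `Decision` enum itself, are illustrative assumptions.

```python
# Minimal sketch of tiered passive -> active escalation.
from enum import Enum

class Decision(Enum):
    PASS = "pass"          # passive confidence high: no challenge needed
    ESCALATE = "escalate"  # ambiguous: run an active challenge
    REJECT = "reject"      # low score: treat as a likely attack (assumption)

def triage(passive_score: float,
           pass_threshold: float = 0.95,
           reject_threshold: float = 0.5) -> Decision:
    """Map a passive liveness score to a session decision."""
    if passive_score > pass_threshold:
        return Decision.PASS
    if passive_score >= reject_threshold:
        return Decision.ESCALATE
    return Decision.REJECT
```

The thresholds are exactly the tuning knobs the text describes: a banking deployment would raise `pass_threshold`, a social app might lower it.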

Metrics: APCER vs. BPCER and the DET Curve

Liveness detection performance is measured using two complementary error rates defined in ISO 30107-3:

Metric | Full Name | Definition | Who It Affects
APCER | Attack Presentation Classification Error Rate | Proportion of attack presentations incorrectly classified as bona fide (live) | Security: lower APCER = fewer attacks succeed
BPCER | Bona Fide Presentation Classification Error Rate | Proportion of genuine (live) presentations incorrectly classified as attacks | Usability: lower BPCER = fewer legitimate users rejected
ACER | Average Classification Error Rate | (APCER + BPCER) / 2 | Overall system quality balance

These two metrics have an inherent tradeoff. Tightening the liveness threshold reduces APCER (fewer attacks get through) but increases BPCER (more legitimate users are rejected). The Detection Error Tradeoff (DET) curve plots APCER against BPCER at every threshold, providing a complete picture of system performance.
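The metrics above are straightforward to compute from labelled evaluation data. The sketch below reports APCER per PAI species (as ISO 30107-3 requires) and, as an assumption, uses the worst-species APCER in the ACER average; the tuple-based input format is illustrative.

```python
# Compute APCER (per PAI species), BPCER, and ACER from labelled results.
def pad_metrics(results):
    """`results`: list of (species, classified_live) pairs, where species
    is 'bona_fide' for genuine presentations or a PAI species name."""
    per_species = {}            # species -> (n_attacks, n_classified_live)
    bona_total = bona_rejected = 0
    for species, classified_live in results:
        if species == "bona_fide":
            bona_total += 1
            bona_rejected += not classified_live   # live user rejected
        else:
            n, passed = per_species.get(species, (0, 0))
            per_species[species] = (n + 1, passed + classified_live)
    apcer = {s: passed / n for s, (n, passed) in per_species.items()}
    bpcer = bona_rejected / bona_total
    worst_apcer = max(apcer.values())  # ISO convention: report worst species
    return apcer, bpcer, (worst_apcer + bpcer) / 2
```

Sweeping the classifier threshold and recording (APCER, BPCER) pairs from this function traces out the DET curve described below.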

A good PAD system has a DET curve that hugs the lower-left corner—low APCER and low BPCER simultaneously. The operating point on this curve is chosen based on the application's risk profile: a banking deployment accepts a higher BPCER to push APCER down, while a consumer app tolerates slightly more attack risk to keep false rejections rare.

APCER Is Per-Species

ISO 30107-3 requires that APCER be reported separately for each PAI species. A system that achieves 0.1% APCER against print attacks but 15% APCER against silicone masks has a very different security profile than one with 2% APCER across all species. Always demand per-species breakdowns when evaluating PAD solutions.

ISO 30107-3 Certification and iBeta Testing

ISO 30107-3 defines the testing methodology and reporting requirements for evaluating PAD performance. It specifies how PAI species should be created, how testing should be conducted, and how results should be reported. The standard itself does not define pass/fail thresholds—those are set by accredited testing laboratories.

iBeta Level 1 vs. Level 2

iBeta Quality Assurance is a NIST/NVLAP-accredited testing laboratory (Lab Code 200962-0) that conducts ISO 30107-3 conformant PAD testing. Their testing is considered the industry benchmark for liveness detection certification.

Certification | PAI Species Tested | Pass Criteria | Difficulty
iBeta Level 1 | 2D print photos, screen replay (photos & videos) | 0% APCER across all tested species (BPCER bound set by the test plan) | Baseline: defeats commodity attacks
iBeta Level 2 | Level 1 + 3D-printed masks, paper/resin masks, silicone/latex prosthetics, partial overlays | 0% APCER across all tested species | Advanced: defeats physical artefact attacks

Level 1 testing uses PAIs that can be created from publicly available photos of the target—the attacker needs only a social media photo to create a print or screen replay attack. Level 2 testing assumes the attacker has physical access to the target (or a 3D scan) and can create custom masks. Level 2 certification is significantly harder to achieve and is required for applications where the threat model includes determined adversaries with resources.

What Certification Does Not Cover

It is critical to understand the limitations of current certification:

  • Injection attacks are out of scope: testing presents physical PAIs to a real sensor, not synthetic streams injected behind it.
  • Certification is a point-in-time snapshot: PAI species and tools that emerge after testing are not covered.
  • Results are specific to the tested configuration: a different camera, firmware version, or capture environment can change performance.

Deepfake Injection Attacks

Deepfake injection represents a paradigm shift in presentation attacks because it moves the attack surface from the physical world to the software stack. Instead of fooling a camera with a physical artefact, the attacker replaces the camera feed entirely.

How Injection Works

The attack typically proceeds as follows:

  1. Virtual camera installation—The attacker installs software (OBS Virtual Camera, ManyCam, or custom tools) that creates a virtual camera device on the operating system.
  2. Deepfake generation—Using face-swapping tools (DeepFaceLive, FaceFusion, or custom GAN/diffusion models), the attacker generates a real-time video stream of the target's face.
  3. Camera substitution—The biometric application is tricked into reading from the virtual camera instead of the physical camera. On rooted Android devices, this can be done at the driver level.
  4. Liveness bypass—Because the deepfake model can animate the synthetic face in real time, it can respond to active liveness challenges (blink, turn, smile) just as effectively as a real person.
Attack injection_pipeline.sh
# Typical deepfake injection attack chain (for defensive understanding)
# Commands are illustrative; real tooling varies by platform
# Step 1: Generate deepfake stream from target's photos
$ deepfacelive --source target_photos/ --output virtual_cam

# Step 2: Route through a virtual camera device
$ sudo modprobe v4l2loopback video_nr=2  # Linux: creates /dev/video2
$ obs --startvirtualcam                  # macOS/Windows: OBS virtual camera

# Step 3: Application sees /dev/video2 as "camera"
# All traditional liveness checks pass — the synthetic face
# can blink, turn, smile, follow gaze targets in real time
# Texture/depth analysis sees generated pixels, not a PAI artefact

Anti-Injection Techniques

Defending against injection requires verifying the integrity of the entire capture pipeline, not just the content of the frames.

Cryptographic Camera Attestation

The camera hardware signs each frame (or frame hash) with a device-bound private key stored in a secure enclave (TPM, Secure Element, ARM TrustZone). The server verifies the signature chain traces back to a genuine camera module from a trusted manufacturer. Virtual cameras cannot produce valid signatures.
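The sign-then-verify flow above can be sketched as follows. Real attestation uses an asymmetric device key in a secure element plus a manufacturer certificate chain; the HMAC over a shared key below is a simplified stand-in purely to illustrate binding a tag to frame content and frame index, and all names are hypothetical.

```python
# Simplified per-frame attestation sketch. HMAC stands in for the
# asymmetric signature a real secure-element-bound camera key would produce.
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # stand-in for the sensor's enclave-bound key

def sign_frame(frame_bytes: bytes, frame_index: int) -> bytes:
    """Camera side: bind the tag to the frame hash AND its index, so
    frames cannot be substituted or reordered."""
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, frame_index: int, tag: bytes) -> bool:
    """Server side: recompute and compare in constant time."""
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

frame = b"\x00" * 1024          # stand-in frame payload
tag = sign_frame(frame, 0)
```

A virtual camera has no access to the device key, so any frame it injects fails verification, which is exactly the property the text describes.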

Frame Integrity Verification

Embed cryptographic watermarks or nonces into the camera's ISP (Image Signal Processor) output at the hardware level. The server verifies the watermark is present and valid. Software-injected frames cannot contain the hardware-level watermark.

Device Integrity Checks

Verify the device is not rooted/jailbroken (SafetyNet/Play Integrity on Android, DeviceCheck on iOS). Check for known virtual camera apps. Verify the camera API is returning data from a physical sensor, not a virtual device.

Injection Artefact Detection

Analyze frames for compression artefacts, GAN fingerprints, temporal inconsistencies (frame rate jitter, missing sensor noise patterns), and statistical anomalies that distinguish synthetic frames from genuine camera output.
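One of the temporal cues above can be sketched concretely: a genuine sensor adds fresh photon/read noise to every frame, so consecutive frames of even a perfectly still scene differ slightly, while a looped or synthetic stream can be suspiciously clean. The noise floor and synthetic streams below are illustrative assumptions.

```python
# Illustrative injection cue: flag a stream whose consecutive frames are
# "too identical" to have come from a physical sensor.
import numpy as np

def missing_sensor_noise(frames: np.ndarray, floor: float = 0.5) -> bool:
    """frames: (T, H, W) grayscale stack. Flag if the median absolute
    consecutive-frame difference falls below the sensor-noise floor."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return float(np.median(diffs)) < floor

rng = np.random.default_rng(3)
scene = rng.uniform(0, 255, size=(64, 64))
# Genuine capture: same scene, fresh ~2-grey-level noise per frame
real = np.stack([scene + rng.normal(0, 2, scene.shape) for _ in range(10)])
# Injected loop: frame-perfect repetition, no fresh sensor noise
injected = np.stack([scene] * 10)
```

Like the other passive signals, this is one weak cue in an ensemble: a sophisticated injector can add simulated noise, which is why the text calls hardware-rooted attestation the only durable defense.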

The Arms Race Reality

Anti-injection is an ongoing arms race. Attackers who control the device can potentially intercept and modify any software-level check. Hardware-rooted attestation (where the camera hardware itself produces cryptographic proof) is the only technique with a durable security guarantee, because it requires the attacker to physically modify the device's secure element—a dramatically higher barrier than installing software.

3D Face Mapping Technologies

For applications requiring the highest security level, 3D face mapping provides depth information that makes flat PAIs (print, screen) trivially detectable and significantly raises the bar for 3D mask attacks.

Structured Light

A projector emits a known pattern (dot grid, stripe pattern) onto the face. A camera observes the deformation of the pattern as it wraps around the 3D surface. The deformation is computed into a dense depth map. Apple's TrueDepth camera (used in Face ID) uses a structured light system with ~30,000 infrared dots.

Structured light provides high-resolution depth maps but requires dedicated hardware (IR projector + IR camera) and works best at short range (30–50cm). It is effective against all 2D attacks and most 3D mask attacks, though high-quality silicone masks with correct skin reflectance properties remain challenging.

Time-of-Flight (ToF)

A ToF sensor emits modulated infrared light and measures the phase shift of the reflected signal to compute per-pixel depth. ToF sensors provide lower spatial resolution than structured light but work at greater distances and are less affected by ambient lighting. Many modern smartphones include ToF sensors (Samsung, Sony, Huawei).

Stereo Vision

Two cameras with a known baseline separation capture the scene simultaneously. Depth is computed from the disparity (pixel offset) between the two images using stereo matching algorithms. Stereo vision works with standard RGB cameras (no special hardware) but requires careful calibration and produces less accurate depth maps than structured light or ToF.
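The disparity-to-depth relationship above is Z = f·B / d, with focal length f in pixels, baseline B, and disparity d in pixels. A tiny sketch, with illustrative parameter values:

```python
# Stereo triangulation: depth Z = f * B / d.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres from pixel disparity between the two cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, B = 6 cm, d = 120 px  ->  Z = 0.4 m (face-unlock range)
z = stereo_depth_m(800, 0.06, 120)
```

The inverse relationship also explains the weaker PAI resistance noted in the table below: at a given range, small depth differences (nose vs. cheek) produce sub-pixel disparity changes, so stereo needs careful calibration to separate a real face from a gently curved print.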

Technology | Depth Resolution | Range | Hardware Cost | PAI Resistance
Structured Light | <1mm | 0.2–0.5m | $15–$50 | High
Time-of-Flight | 5–10mm | 0.2–5m | $10–$30 | Medium-High
Stereo Vision | 10–50mm | 0.3–3m | $5–$15 | Medium
Monocular CNN | Relative only | Any | $0 (software) | Low-Medium

Remote vs. On-Device Processing

Where liveness detection runs has significant implications for security, latency, privacy, and cost.

On-Device Processing

Pros: Lower latency (no network round-trip). Works offline. Biometric data never leaves the device. Harder for network-level attackers to intercept raw frames.

Cons: Model must fit on device (constrained compute/memory). Harder to update models without app updates. Attacker who roots the device can tamper with the local model.

Server-Side Processing

Pros: Larger, more accurate models. Easy to update models without client changes. Can apply ensemble methods and cross-session analysis. Centralised fraud analytics.

Cons: Requires network connectivity. Adds 100–500ms latency. Raw biometric frames transit the network (privacy risk unless encrypted). Server becomes a high-value target.

The optimal architecture for high-security applications is a hybrid approach: on-device preprocessing extracts features and performs initial liveness checks, then encrypted features are sent to the server for final verification. This minimizes raw biometric data in transit while leveraging server-side model sophistication.

The Privacy Dimension

Server-side liveness processing means raw face images or video frames must travel over the network and be processed on infrastructure you do not control. Even with TLS, the server operator has access to plaintext biometric data. FHE-based biometric processing eliminates this concern entirely: the server processes encrypted biometric features and never sees the raw template. Liveness detection can run on-device, and only the encrypted feature vector is transmitted for matching.

Commercial Liveness SDK Comparison

The following table compares the capabilities of major commercial liveness detection SDKs available as of early 2026. Performance numbers are based on publicly available documentation and third-party testing where available.

Vendor / SDK | Passive | Active | Injection Detection | iBeta L1 | iBeta L2 | On-Device
FaceTec | Yes | Yes (3D) | Yes | Pass | Pass | Yes
iProov (GPA) | Yes | Yes (Flashmark) | Yes | Pass | Pass | Hybrid
Jumio | Yes | Yes | Partial | Pass | Pass | Hybrid
Onfido (Motion) | Yes | Yes | Partial | Pass | Pending | Hybrid
AWS Rekognition | Yes | Basic | No | N/A | N/A | Server
Apple Face ID | Yes (3D) | Attention | Hardware | N/A | N/A | Yes

A few observations from this landscape. FaceTec and iProov lead in certification coverage and injection defense. Apple's Face ID achieves exceptional security through hardware-level 3D sensing and secure enclave processing but is limited to Apple devices. Cloud-only solutions like AWS Rekognition provide convenience but lack injection detection—a critical gap for high-security applications. None of these SDKs address the fundamental problem of template security: if the biometric template stored on the server is compromised, liveness detection at the capture point is irrelevant.

Liveness + FHE: The Defense-in-Depth Architecture

Liveness detection answers one question: "Is a real person present at the sensor right now?" It does not answer: "Is the stored biometric template safe from exfiltration, insider threat, or quantum-era decryption?"

This is the critical gap. A system with perfect liveness detection and plaintext template storage is still catastrophically vulnerable to database breaches. The 2015 OPM breach exposed 5.6 million fingerprint records. The 2019 BioStar 2 breach exposed 27.8 million biometric records. In both cases, liveness detection was irrelevant—the attacker went around it by stealing the templates directly.

FHE-based biometric matching closes this gap by ensuring that plaintext biometric templates never exist on the server. The template is encrypted on the client device at enrollment and remains encrypted through storage, matching, and disposal. The matching computation (inner product, distance calculation) is performed entirely in the encrypted domain.

H33 Defense-in-Depth Architecture

Layer 1: Liveness (Point of Capture)

  • Passive texture + frequency analysis
  • Active challenge-response (if needed)
  • Device integrity / injection detection
  • Prevents: Photo, video, mask, injection attacks

Layer 2: FHE (Template Protection)

  • BFV lattice-based encryption (N=4096)
  • Matching on encrypted data (~1ms)
  • Server never sees plaintext templates
  • Prevents: DB breach, insider threat, HNDL, quantum decryption

The two layers are complementary. Liveness protects the capture moment. FHE protects the stored template. Together, they cover the entire attack surface: an attacker who bypasses liveness (somehow presenting a synthetic face that passes all checks) still cannot extract useful biometric data from the server because only encrypted templates exist. An attacker who breaches the server database gets ciphertext that is computationally infeasible to decrypt, even with a quantum computer (BFV is lattice-based, not vulnerable to Shor's algorithm).

Rust liveness_fhe_pipeline.rs
// H33 defense-in-depth: liveness + FHE biometric verification

// Step 1: On-device liveness check (runs on client)
let liveness_result = passive_liveness_check(&camera_frames);
if !liveness_result.is_live {
    return Err("Liveness check failed");
}

// Step 2: Extract biometric template on-device
let template: [f64; 128] = extract_face_embedding(&camera_frames);

// Step 3: Encrypt template on-device (plaintext never leaves device)
let encrypted_probe = bfv_encrypt(&template, &public_key);

// Step 4: Send encrypted probe to server
// Server performs FHE inner product against encrypted enrolled template
// Server NEVER sees plaintext — not the probe, not the enrolled template
let encrypted_score = fhe_inner_product(&encrypted_probe, &enrolled_ct);

// Step 5: Dilithium attestation (post-quantum signature)
let attestation = dilithium_sign(&batch_digest, &signing_key);

// Total pipeline: ~50µs per authentication (32-user batch)

Implementation Checklist

For teams deploying liveness detection in production, the following checklist covers the essential decisions and verification steps.

Pre-Deployment Checklist

  1. Define your threat model. Which PAI species are realistic threats for your application? A consumer app faces different threats than a border control system. This determines whether you need Level 1 or Level 2 PAD.
  2. Choose passive, active, or hybrid. For frictionless UX with adequate security, start with passive. For high-security applications, deploy hybrid with active escalation. Document the expected BPCER impact on user completion rates.
  3. Require iBeta certification. If your vendor claims liveness detection, ask for iBeta Level 1 certification at minimum. For financial services, government, or healthcare, require Level 2. Demand per-species APCER breakdowns.
  4. Address injection attacks separately. iBeta certification does not cover injection. Verify your solution includes device integrity checks, virtual camera detection, and ideally hardware-rooted camera attestation.
  5. Test across demographics. Validate APCER and BPCER across skin tones (Fitzpatrick I–VI), age groups, and lighting conditions. Bias in liveness detection is a legal and ethical risk.
  6. Design for accessibility. Active challenges must have alternatives for users who cannot perform physical actions (head turn, blink) due to disabilities. WCAG 2.1 compliance is not optional.
  7. Plan for model updates. Liveness models degrade as attackers adapt. Ensure your architecture supports model updates without full app redeployment (server-side models, or OTA model delivery for on-device).
  8. Encrypt stored templates. Liveness protects the capture moment. It does nothing for stored templates. If you are storing biometric templates, encrypt them with FHE (for active matching) or strong at-rest encryption with key management (for archival). Better: use FHE so plaintext never exists.
  9. Log and monitor. Track liveness rejection rates, APCER trends over time, device/browser distributions, and geographic anomalies. A spike in rejections from a specific device model or region may indicate a new attack tool in circulation.
  10. Conduct regular red-team testing. Hire penetration testers with PAD expertise to attack your system with current-generation PAIs and injection tools. Test at least annually. Update your threat model based on findings.

The Bottom Line

Liveness detection is a necessary but insufficient defense for biometric systems. It protects the point of capture by verifying that a real, living person is presenting their biometric to the sensor. It does not protect the stored template, the matching computation, or the communication channel.

The threat landscape is evolving rapidly. Commodity attacks (print, screen replay) are defeated by any certified passive liveness system. 3D mask attacks require Level 2 PAD with depth sensing or advanced material analysis. Deepfake injection attacks require fundamentally different defenses: device integrity verification, camera attestation, and synthetic content detection. Each layer of the threat model requires a corresponding layer of defense.

But here is the architectural truth that the liveness detection industry rarely acknowledges: no amount of liveness sophistication protects a biometric template that is stored in plaintext on a server. If the database is breached, the templates are compromised forever—biometric data cannot be rotated like passwords. If the encrypted templates are harvested for future quantum decryption, liveness at the capture point is irrelevant.

This is why H33's architecture treats liveness as a defense-in-depth layer and FHE as the cryptographic guarantee. Liveness raises the cost and difficulty of impersonation at the sensor. FHE ensures that even a total server compromise yields nothing but ciphertext—ciphertext built on lattice-based cryptography that is resistant to both classical and quantum attacks. The plaintext biometric template never exists on the server, which means there is nothing to steal, nothing to harvest, and nothing to decrypt.

Deploy liveness detection for what it does well: stopping presentation attacks at the point of capture. But do not mistake it for template security. For that, you need cryptography.


H33 provides post-quantum biometric authentication with FHE template matching (BFV lattice-based, ~50µs per auth), ML-DSA attestation, and ML-KEM key exchange. Biometric templates are encrypted on-device and never decrypted on the server. Liveness integration via standard SDKs, with the cryptographic layer ensuring template security regardless of the liveness implementation chosen.

Build With Post-Quantum Security

Enterprise-grade FHE biometrics, ZKP verification, and post-quantum cryptography. One API call. Sub-millisecond latency. Templates never decrypted.
