Zero-knowledge proofs are powerful, but generating them one at a time doesn't scale. H33's batch ZKP system aggregates multiple proofs into unified verification, dramatically improving throughput while maintaining full cryptographic guarantees. In production on Graviton4 hardware, this batch architecture is a core reason H33 sustains 2,172,518 authentications per second with an average latency of just ~42µs per auth (see our full benchmark results).
Batch vs. Sequential
Sequential proof generation processes proofs one at a time. Batch aggregation combines multiple proofs into a single verification operation, sharing computational work across all proofs in the batch. At H33's production scale, this is the difference between verifying 32 individual STARK lookups (~2.7µs total) and performing a single in-process DashMap lookup at 0.085µs—a 31x reduction in per-user ZKP overhead.
Why Individual Proof Verification Bottlenecks
A standard ZK proof verification involves elliptic-curve pairings or hash-chain traversals that are computationally expensive. When each authentication event generates its own proof, the verifier must repeat the same setup work—loading verification keys, initializing hash contexts, and performing modular arithmetic—for every single user. At 1,000 requests per second this overhead is negligible. At 1.5 million per second, it is the entire bottleneck.
The core insight behind H33's approach is that these repeated setup costs can be amortized. If 32 users authenticate within the same time window, the verifier loads the verification key once, initializes the hash context once, and checks all 32 proofs against a shared accumulator. The marginal cost of adding one more proof to an existing batch is a fraction of verifying that proof independently.
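The amortization argument can be sketched with a toy cost model. The numbers below are illustrative assumptions, not measured H33 figures; the point is the shape of the two curves:

```typescript
// Back-of-envelope cost model for amortized verification.
// `setupUs` (key load + hash-context init) and `perProofUs` are
// illustrative assumptions, not H33 measurements.
const setupUs = 50;   // one-time setup cost per verification call, in µs
const perProofUs = 2; // marginal cost of checking one proof, in µs

// Sequential: every proof pays the setup cost again.
function sequentialCostUs(n: number): number {
  return n * (setupUs + perProofUs);
}

// Batched: setup is paid once, shared across the whole batch.
function batchedCostUs(n: number): number {
  return setupUs + n * perProofUs;
}

console.log(sequentialCostUs(32)); // 1664
console.log(batchedCostUs(32));    // 114
```

With these assumed constants, batching a 32-proof window is roughly 15x cheaper; the ratio grows as setup cost dominates marginal cost.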
How Proof Aggregation Works
Instead of verifying 100 separate proofs individually, batch ZKP uses cryptographic techniques to verify them together:
- Proof batching: Multiple proofs are collected into a batch window (H33 uses 32 users per ciphertext batch, aligned with the BFV SIMD slot layout)
- Shared computation: Common verification steps—key loading, NTT domain transforms, hash context initialization—are performed once
- Aggregated verification: A single check validates the entire batch via an accumulated commitment
- Individual results: Per-proof pass/fail status is still available through the accumulator's internal indices
H33 aligns its ZKP batch size with its FHE batch size. BFV with N=4096 and 128-dimensional biometric vectors packs exactly 32 users per ciphertext. By batching ZK proofs in groups of 32 as well, the entire pipeline—FHE computation, ZKP verification, and Dilithium attestation—operates on the same batch boundary, eliminating cross-stage synchronization overhead.
```javascript
// Sequential verification (slow)
for (const proof of hundredProofs) {
  await h33.proof.verify(proof); // Each takes time
}

// Batch verification (fast)
const results = await h33.proof.batchVerify({
  proofs: hundredProofs,
  aggregation: 'recursive' // Use recursive SNARK composition
});
// Dramatically faster than sequential
```
Aggregation Strategies
H33 supports multiple aggregation strategies optimized for different use cases:
Parallel verification: Independent proofs verified simultaneously across CPU cores. Best for heterogeneous proof types. On a 96-vCPU Graviton4 instance, this distributes verification work via Rayon's work-stealing scheduler, keeping all cores saturated.
Recursive composition: Proofs combined into a single recursive SNARK. Best for homogeneous proofs where you want a single verification. This is particularly valuable for on-chain submission where gas costs scale per verification call.
Merkle aggregation: Proofs organized into a Merkle tree for efficient partial verification. Best when you may need to verify subsets. H33 uses SHA3-256 for the Merkle tree hash function, maintaining post-quantum security throughout the aggregation layer.
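A minimal sketch of the Merkle aggregation idea, using SHA3-256 as the section describes. The leaf encoding and tree construction here are assumptions for illustration; H33's internal layout may differ:

```typescript
import { createHash } from "node:crypto";

// Minimal Merkle-aggregation sketch over proof bytes, using SHA3-256.
const sha3 = (data: Buffer): Buffer =>
  createHash("sha3-256").update(data).digest();

// Build the tree bottom-up; each parent commits to both children.
function merkleRoot(leaves: Buffer[]): Buffer {
  let level = leaves.map(sha3);
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate last node on odd levels
      next.push(sha3(Buffer.concat([level[i], right])));
    }
    level = next;
  }
  return level[0];
}

const proofs = Array.from({ length: 32 }, (_, i) => Buffer.from(`proof-${i}`));
const root = merkleRoot(proofs);
console.log(root.toString("hex").length); // 64 hex chars = 256-bit commitment
```

The root is a compact commitment to all 32 proofs; verifying a subset only requires the sibling hashes along each leaf's path, not the full batch.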
The Production Pipeline: FHE + ZKP + Attestation
Batch ZKP does not operate in isolation. In H33's production stack, it is the middle stage of a three-stage pipeline that processes 32 users per API call:
| Stage | Operation | Latency | PQ-Secure |
|---|---|---|---|
| 1. FHE Batch | BFV inner product (32 users/CT) | ~1,109µs | Yes (lattice) |
| 2. ZKP Verify | In-process DashMap lookup | 0.085µs | Yes (SHA3-256) |
| 3. Attestation | SHA3 digest + Dilithium sign+verify | ~244µs | Yes (ML-DSA) |
| Total (32 users) | ~1,356µs | ||
| Per auth | ~42µs |
The ZKP stage is the fastest component because batch caching has already reduced per-lookup cost to near-zero. When a 32-user FHE batch completes, the ZKP layer checks all 32 proof commitments against the in-process DashMap in a single pass, then hands the verified batch to Dilithium for a single attestation signature covering all 32 results. This batched attestation step alone provides a 31x speedup over signing each user's result individually.
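The batched attestation step can be sketched as follows. The Dilithium call itself requires a PQ library and is shown only as a commented-out placeholder (`dilithiumSign` is hypothetical); the point is that one SHA3-256 digest, and therefore one signature, covers all 32 results:

```typescript
import { createHash } from "node:crypto";

// Sketch of batched attestation: hash all 32 per-user results into a
// single SHA3-256 digest, then sign that digest once.
interface AuthResult { userId: number; match: boolean; }

function batchDigest(results: AuthResult[]): Buffer {
  const h = createHash("sha3-256");
  for (const r of results) {
    // Canonical per-result encoding (assumed for illustration).
    h.update(`${r.userId}:${r.match ? 1 : 0}\n`);
  }
  return h.digest();
}

const results: AuthResult[] = Array.from({ length: 32 }, (_, i) => ({
  userId: i,
  match: i % 7 !== 0, // dummy data
}));

const digest = batchDigest(results);
// One ML-DSA signature instead of 32 (hypothetical PQ library call):
// const sig = dilithiumSign(secretKey, digest);
console.log(digest.length); // 32 bytes
```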
Use Cases
Blockchain rollups: Aggregate thousands of transaction proofs into a single on-chain verification, reducing gas costs by orders of magnitude.
Audit logging: Batch verify a day's worth of authentication proofs in a single operation for compliance review. H33's Merkle aggregation strategy produces a compact proof tree that auditors can selectively verify without re-processing the entire batch.
Periodic verification: Collect proofs over a time window and verify them together during low-load periods.
Biometric authentication at scale: When performing FHE-encrypted biometric matching, each user's match result generates a ZKP attesting that the computation was performed correctly on encrypted data. Without batching, 1.5 million individual STARK proofs per second would be computationally infeasible. Batching reduces this to cached lookups.
Implementation Patterns
```javascript
// Pattern 1: Time-windowed batching
const batcher = h33.createProofBatcher({
  maxBatchSize: 32, // Aligned with FHE SIMD batch size
  maxWaitMs: 50,
  aggregation: 'parallel'
});

// Proofs are automatically batched
app.post('/verify', async (req, res) => {
  const result = await batcher.add(req.body.proof);
  res.json(result);
});

// Pattern 2: Explicit batch submission
const auditProofs = await collectDayOfProofs();
const batchResult = await h33.proof.batchVerify({
  proofs: auditProofs,
  aggregation: 'merkle',
  returnTree: true // Get Merkle tree for selective re-verification
});

// Pattern 3: Recursive aggregation for on-chain
const recursiveProof = await h33.proof.aggregate({
  proofs: transactionProofs,
  strategy: 'recursive',
  outputFormat: 'solidity' // Ready for on-chain verification
});
```
Security Considerations
Batch verification maintains full security. Every proof in the batch is individually validated—aggregation never obscures a forgery:
- No false positives: Invalid proofs never pass batch verification. The accumulator construction is binding; a single invalid contribution causes the corresponding index to fail.
- Isolation: One bad proof doesn't invalidate the batch—individual results are returned with per-index pass/fail status.
- Deterministic: Same inputs always produce same verification results, enabling reproducible audits.
- Auditable: Batch verification is fully deterministic and can be replayed. The Merkle root serves as a compact commitment to the entire batch for later re-verification.
- Post-quantum secure: H33's ZKP cache uses SHA3-256 commitments and Dilithium (ML-DSA) attestation signatures, both of which are NIST-standardized post-quantum primitives. No elliptic-curve assumptions are required in the verification path.
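The isolation and determinism properties above can be illustrated with a toy batch verifier. Commitments here are plain SHA3-256 hashes rather than H33's real accumulator construction, but the per-index pass/fail behavior is the same idea:

```typescript
import { createHash } from "node:crypto";

// Toy batch verifier: one tampered entry fails at its own index
// without invalidating its neighbors.
const commit = (s: string): string =>
  createHash("sha3-256").update(s).digest("hex");

function batchVerify(claims: string[], commitments: string[]): boolean[] {
  return claims.map((claim, i) => commit(claim) === commitments[i]);
}

const claims = ["alice", "bob", "carol"];
const commitments = claims.map(commit);
commitments[1] = commit("mallory"); // tamper with one entry

console.log(batchVerify(claims, commitments)); // [ true, false, true ]
```

Because the check is a pure function of its inputs, replaying the same batch always reproduces the same per-index results, which is what makes audit replays possible.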
Performance Characteristics
Batch ZKP efficiency improves with batch size up to hardware limits:
- Small batches (10-50): Good efficiency gains from shared setup
- Medium batches (50-200): Optimal efficiency for most applications
- Large batches (200+): Memory becomes the bottleneck; consider splitting
H33's production batch size of 32 is not arbitrary—it is dictated by the BFV SIMD slot geometry. With N=4096 polynomial degree and 128-dimensional biometric vectors, exactly 32 user templates fit per ciphertext (4096 / 128 = 32). Aligning ZKP batch boundaries to this number means the FHE and ZKP stages always process the same work unit, with zero padding waste and zero inter-stage buffering.
The exact performance depends on proof type, hardware, and aggregation strategy. H33 automatically selects optimal parameters based on your configuration. On Graviton4 (c8g.metal-48xl, 192 vCPUs), 96 parallel workers each process batches of 32 users, yielding the sustained 2.17M auth/sec throughput measured in production benchmarks.
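The slot geometry and throughput figures above can be checked with back-of-envelope arithmetic. The result is an idealized upper bound, slightly above the 2.17M auth/sec measured in production, as expected once scheduling and memory-bandwidth effects are accounted for:

```typescript
// Back-of-envelope check of the batch-size and throughput figures.
const polyDegree = 4096; // BFV ring dimension N
const vectorDim = 128;   // biometric template dimension
const usersPerCiphertext = polyDegree / vectorDim;
console.log(usersPerCiphertext); // 32

// Per-worker throughput from the ~1,356µs per-batch pipeline latency,
// scaled to 96 workers. This is an ideal bound, not measured throughput.
const batchLatencyUs = 1356;
const workers = 96;
const authPerSec = (usersPerCiphertext / batchLatencyUs) * 1e6 * workers;
console.log(Math.round(authPerSec)); // ~2.27M ideal vs 2.17M measured
```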
Enable Batch ZKP Processing
Aggregate proofs for enterprise-scale throughput. Get started with 1,000 free auths.
Get Free API Key