ZK Proofs · 5 min read

Batch ZKP:
Aggregating Proofs for Massive Throughput

H33's batch ZK proof processing aggregates hundreds of proofs efficiently, enabling enterprise-scale privacy-preserving authentication.

~42µs Auth Latency · 2.17M/s Throughput · 128-bit Security · Zero Plaintext

Zero-knowledge proofs are powerful, but generating them one at a time doesn't scale. H33's batch ZKP system aggregates multiple proofs into unified verification, dramatically improving throughput while maintaining full cryptographic guarantees. In production on Graviton4 hardware, this batch architecture is a core reason H33 sustains 2,172,518 authentications per second with an average latency of just ~42µs per auth (see our full benchmark results).

Batch vs. Sequential

Sequential proof generation processes proofs one at a time. Batch aggregation combines multiple proofs into a single verification operation, sharing computational work across all proofs in the batch. At H33's production scale, this is the difference between verifying 32 individual STARK lookups (~2.7µs total) and performing a single in-process DashMap lookup at 0.085µs—a 31x reduction in per-user ZKP overhead.

Why Individual Proof Verification Bottlenecks

A standard ZK proof verification involves elliptic-curve pairings or hash-chain traversals that are computationally expensive. When each authentication event generates its own proof, the verifier must repeat the same setup work—loading verification keys, initializing hash contexts, and performing modular arithmetic—for every single user. At 1,000 requests per second this overhead is negligible. At 1.5 million per second, it is the entire bottleneck.

The core insight behind H33's approach is that these repeated setup costs can be amortized. If 32 users authenticate within the same time window, the verifier loads the verification key once, initializes the hash context once, and checks all 32 proofs against a shared accumulator. The marginal cost of adding one more proof to an existing batch is a fraction of verifying that proof independently.
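The amortization argument above can be put in a back-of-the-envelope cost model. The setup and per-proof figures here are illustrative placeholders, not H33 measurements; the point is only that setup cost is paid once per batch rather than once per proof.

```javascript
// Illustrative cost model for amortized verification (hypothetical numbers).
// Sequential: setup work (key load, hash-context init) repeats per proof.
// Batched: setup is paid once and shared across the whole batch.
function sequentialCostUs(n, setupUs, perProofUs) {
  return n * (setupUs + perProofUs); // setup repeated for every proof
}

function batchedCostUs(n, setupUs, perProofUs) {
  return setupUs + n * perProofUs; // setup amortized across the batch
}

// Assumed costs: 50µs of setup, 5µs of per-proof arithmetic, batch of 32.
const n = 32;
console.log(sequentialCostUs(n, 50, 5)); // 32 * 55  = 1760 (µs)
console.log(batchedCostUs(n, 50, 5));    // 50 + 160 =  210 (µs)
```

Under these assumed numbers the marginal cost of the 32nd proof in a batch is 5µs, versus 55µs for verifying it on its own.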

How Proof Aggregation Works

Instead of verifying 100 separate proofs individually, batch ZKP uses cryptographic techniques to verify them together:

Key Insight

H33 aligns its ZKP batch size with its FHE batch size. BFV with N=4096 and 128-dimensional biometric vectors packs exactly 32 users per ciphertext. By batching ZK proofs in groups of 32 as well, the entire pipeline—FHE computation, ZKP verification, and Dilithium attestation—operates on the same batch boundary, eliminating cross-stage synchronization overhead.
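The slot arithmetic behind that batch boundary is straightforward; the constants below come directly from the parameters stated above.

```javascript
// The 32-user batch boundary falls out of the BFV slot geometry described
// above: N = 4096 SIMD slots, 128 slots per biometric template.
const polyDegreeN = 4096; // BFV ring dimension / SIMD slot count
const templateDim = 128;  // biometric vector dimension
const usersPerCiphertext = polyDegreeN / templateDim;
console.log(usersPerCiphertext); // 32, shared by the FHE, ZKP, and attestation stages
```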

// Sequential verification (slow)
for (const proof of hundredProofs) {
  await h33.proof.verify(proof);  // Each takes time
}

// Batch verification (fast)
const results = await h33.proof.batchVerify({
  proofs: hundredProofs,
  aggregation: 'recursive'  // Use recursive SNARK composition
});
// Dramatically faster than sequential

Aggregation Strategies

H33 supports multiple aggregation strategies optimized for different use cases:

Parallel verification: Independent proofs verified simultaneously across CPU cores. Best for heterogeneous proof types. On a 96-vCPU Graviton4 instance, this distributes verification work via Rayon's work-stealing scheduler, keeping all cores saturated.

Recursive composition: Proofs combined into a single recursive SNARK. Best for homogeneous proofs where you want a single verification. This is particularly valuable for on-chain submission where gas costs scale per verification call.

Merkle aggregation: Proofs organized into a Merkle tree for efficient partial verification. Best when you may need to verify subsets. H33 uses SHA3-256 for the Merkle tree hash function, maintaining post-quantum security throughout the aggregation layer.

The Production Pipeline: FHE + ZKP + Attestation

Batch ZKP does not operate in isolation. In H33's production stack, it is the middle stage of a three-stage pipeline that processes 32 users per API call:

Stage              Operation                              Latency     PQ-Secure
1. FHE Batch       BFV inner product (32 users/CT)        ~1,109µs    Yes (lattice)
2. ZKP Verify      In-process DashMap lookup              0.085µs     Yes (SHA3-256)
3. Attestation     SHA3 digest + Dilithium sign+verify    ~244µs      Yes (ML-DSA)
Total (32 users)                                          ~1,356µs
Per auth                                                  ~42µs

The ZKP stage is the fastest component because batch caching has already reduced per-lookup cost to near-zero. When a 32-user FHE batch completes, the ZKP layer checks all 32 proof commitments against the in-process DashMap in a single pass, then hands the verified batch to Dilithium for a single attestation signature covering all 32 results. This batched attestation step alone provides a 31x speedup over signing each user's result individually.
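The per-auth figure can be reconstructed from the stage latencies in the table. The sum below lands slightly under the table's ~1,356µs total, presumably because the published stage latencies are rounded.

```javascript
// Reconstructing the ~42µs per-auth figure from the pipeline table.
const fheBatchUs = 1109;    // BFV inner product, 32 users per ciphertext
const zkpLookupUs = 0.085;  // single-pass DashMap check for the whole batch
const attestationUs = 244;  // SHA3 digest + Dilithium sign+verify
const usersPerBatch = 32;

const totalUs = fheBatchUs + zkpLookupUs + attestationUs; // ≈ 1353µs
const perAuthUs = totalUs / usersPerBatch;                // ≈ 42.3µs
console.log(totalUs.toFixed(0), perAuthUs.toFixed(1));
```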

Use Cases

Blockchain rollups: Aggregate thousands of transaction proofs into a single on-chain verification, reducing gas costs by orders of magnitude.

Audit logging: Batch verify a day's worth of authentication proofs in a single operation for compliance review. H33's Merkle aggregation strategy produces a compact proof tree that auditors can selectively verify without re-processing the entire batch.

Periodic verification: Collect proofs over a time window and verify them together during low-load periods.

Biometric authentication at scale: When performing FHE-encrypted biometric matching, each user's match result generates a ZKP attesting that the computation was performed correctly on encrypted data. Without batching, 1.5 million individual STARK proofs per second would be computationally infeasible. Batching reduces this to cached lookups.

Implementation Patterns

// Pattern 1: Time-windowed batching
const batcher = h33.createProofBatcher({
  maxBatchSize: 32,       // Aligned with FHE SIMD batch size
  maxWaitMs: 50,
  aggregation: 'parallel'
});

// Proofs are automatically batched
app.post('/verify', async (req, res) => {
  const result = await batcher.add(req.body.proof);
  res.json(result);
});

// Pattern 2: Explicit batch submission
const auditProofs = await collectDayOfProofs();
const batchResult = await h33.proof.batchVerify({
  proofs: auditProofs,
  aggregation: 'merkle',
  returnTree: true  // Get Merkle tree for selective re-verification
});

// Pattern 3: Recursive aggregation for on-chain
const recursiveProof = await h33.proof.aggregate({
  proofs: transactionProofs,
  strategy: 'recursive',
  outputFormat: 'solidity'  // Ready for on-chain verification
});

Security Considerations

Batch verification maintains full security. Every proof in the batch is individually validated; aggregation never obscures a forgery.

Performance Characteristics

Batch ZKP efficiency improves with batch size, up to hardware limits.

Key Insight

H33's production batch size of 32 is not arbitrary—it is dictated by the BFV SIMD slot geometry. With N=4096 polynomial degree and 128-dimensional biometric vectors, exactly 32 user templates fit per ciphertext (4096 / 128 = 32). Aligning ZKP batch boundaries to this number means the FHE and ZKP stages always process the same work unit, with zero padding waste and zero inter-stage buffering.

The exact performance depends on proof type, hardware, and aggregation strategy. H33 automatically selects optimal parameters based on your configuration. On Graviton4 (c8g.metal-48xl, 192 vCPUs), 96 parallel workers each process batches of 32 users, yielding the sustained 2.17M auth/sec throughput measured in production benchmarks.
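An idealized throughput model built from those same figures lands close to the measured number. This is an inference from the article's published latencies, not an H33 formula; the model sits slightly above the measured 2.17M/s, with the gap plausibly explained by scheduling and I/O overhead.

```javascript
// Idealized throughput: 96 workers, each finishing a 32-user batch
// every ~1,356µs (figures from the pipeline table above).
const workers = 96;
const usersPerBatch = 32;
const batchLatencyUs = 1356;

const perWorkerAuthsPerSec = usersPerBatch / (batchLatencyUs / 1e6); // ≈ 23,600
const modelThroughput = workers * perWorkerAuthsPerSec;              // ≈ 2.27M/s
console.log(Math.round(modelThroughput));
```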

Enable Batch ZKP Processing

Aggregate proofs for enterprise-scale throughput. Get started with 1,000 free auths.
