What Re-Randomization Means
A zero-knowledge proof says: "this statement is true, and I can prove it without revealing why." Re-randomization adds a second guarantee: "and you can't tell which prover generated this proof by looking at the bytes."
Without re-randomization, a server that generates a proof can fingerprint it. If the server later sees the same bytes submitted to a verification endpoint, it knows which client submitted it. The proof reveals nothing about the statement, but the bytes reveal who submitted it. That's a metadata leak. In enterprise environments — procurement, healthcare, legal — metadata leaks can be as damaging as data leaks.
Re-randomization lets the client transform the proof after receiving it. Same logical guarantee. Different bytes. The server that generated the original proof sees a blinded version that it cannot correlate to its output.
Why STARKs Are Harder to Re-Randomize Than SNARKs
Groth16 re-randomization (Groth & Maller, CRYPTO 2017) is elegant because the proof consists of three elliptic curve points (A, B, C) connected by a single pairing equation:
e(A, B) = e(α, β) · e(L, γ) · e(C, δ)
where L is the linear combination of the public inputs and α, β, γ, δ are fixed elements from the trusted setup.
You can multiply A and C by random scalars, adjust B accordingly, and the equation still holds. The re-randomized proof (A', B', C') is a valid proof for the same statement with completely different bytes. The algebraic structure of elliptic curve groups makes this possible.
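Concretely, one standard way to write this blinding (following the Groth-Maller style; r and s are fresh random scalars, and the group operation is written additively) is:

```latex
% Fresh random scalars r, s blind the proof (A, B, C):
A' = \tfrac{1}{r}\,A, \qquad B' = r\,B + r s\,\delta, \qquad C' = C + s\,A

% The pairing equation is preserved:
e(A', B') = e(A, B) \cdot e(sA, \delta)
          = e(\alpha, \beta) \cdot e(L, \gamma) \cdot e(C, \delta) \cdot e(sA, \delta)
          = e(\alpha, \beta) \cdot e(L, \gamma) \cdot e(C', \delta)
```

Because r and s are uniformly random, (A', B', C') is distributed like a fresh proof and carries no trace of (A, B, C).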
STARKs have no such structure. A STARK proof consists of:
1. Merkle tree commitments — SHA3-256 hashes of polynomial evaluations. There is no group operation on SHA3-256 outputs. Multiplying a hash by a scalar produces an invalid hash.
2. FRI layers — Polynomial folding via hash commitments. Each layer's commitment depends on the previous layer's exact bytes. Changing any byte invalidates the entire FRI chain.
3. Query responses — Field element evaluations at challenged positions. These are deterministic given the committed polynomial. You can't change them without breaking the opening proof.
Every component of a STARK proof is either a hash (no algebraic structure) or a deterministic evaluation (changing it breaks the proof). There is no mathematical operation that transforms a valid STARK proof into a different-looking valid STARK proof for the same statement.
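A toy illustration of the missing structure, using standard-library Python (the byte strings are arbitrary stand-ins for committed polynomial evaluations): with SHA3, no operation on two hashes corresponds to any operation on the underlying inputs.

```python
import hashlib

# Two committed values and their SHA3-256 digests.
a = b"polynomial-eval-a"
b = b"polynomial-eval-b"
ha = hashlib.sha3_256(a).digest()
hb = hashlib.sha3_256(b).digest()

# The hash of a combination of the inputs...
h_xor_inputs = hashlib.sha3_256(bytes(x ^ y for x, y in zip(a, b))).digest()
# ...bears no relation to the same combination of the hashes. SHA3 is not
# homomorphic, so a committed value cannot be "adjusted" in place: any change
# forces recomputation, which changes every dependent commitment.
xor_of_hashes = bytes(x ^ y for x, y in zip(ha, hb))
```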
This is why, as of this writing, no re-randomization scheme for STARK proofs exists in the literature. The problem was considered structurally impossible for hash-based proof systems.
Our Construction
We solve the problem by re-randomizing the presentation of the proof rather than the proof itself. The key insight: you don't need to transform the STARK's mathematical structure if you can wrap it in a commitment that provides the same privacy guarantees.
Setup. The server generates a STARK proof P for statement S with public inputs X.
P ← STARK.Prove(S, X, witness)
Blind. The client generates an ephemeral random blinding factor r (32 bytes). It derives an encryption key and encrypts the proof:
k ← SHA3-256("H33_BLIND_KEY_v1" ‖ r)
nonce ← SHA3-256("H33_BLIND_NONCE_v1" ‖ r ‖ SHA3-256(P))[0..12]
E ← AES-256-GCM(k, nonce, P)
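The two derivations above can be sketched with Python's standard-library hashlib (the production module is Rust; the function name and return shape here are illustrative):

```python
import hashlib

def derive_key_and_nonce(r: bytes, proof: bytes) -> tuple[bytes, bytes]:
    """Derive the AES-256 key and 12-byte GCM nonce from blinding factor r."""
    # k = SHA3-256("H33_BLIND_KEY_v1" || r)
    k = hashlib.sha3_256(b"H33_BLIND_KEY_v1" + r).digest()
    # nonce = SHA3-256("H33_BLIND_NONCE_v1" || r || SHA3-256(P))[0..12]
    proof_hash = hashlib.sha3_256(proof).digest()
    nonce = hashlib.sha3_256(b"H33_BLIND_NONCE_v1" + r + proof_hash).digest()[:12]
    return k, nonce
```

Binding the nonce to both r and SHA3-256(P) makes the derivation deterministic per (factor, proof) pair while guaranteeing a fresh nonce whenever either changes, which is what AES-GCM requires to avoid key/nonce reuse.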
Commit. The client computes two commitments:
proof_hash ← SHA3-256(P)
blinding_commitment ← SHA3-256("H33_BLIND_COMMIT_v1" ‖ r ‖ proof_hash)
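The two commitments are plain SHA3 computations; a minimal stdlib sketch (function name illustrative):

```python
import hashlib

def commitments(r: bytes, proof: bytes) -> tuple[bytes, bytes]:
    # proof_hash depends only on the proof, never on the blinding factor.
    proof_hash = hashlib.sha3_256(proof).digest()
    # blinding_commitment binds this specific r to this specific proof.
    blinding_commitment = hashlib.sha3_256(
        b"H33_BLIND_COMMIT_v1" + r + proof_hash
    ).digest()
    return proof_hash, blinding_commitment
```

The asymmetry is deliberate: proof_hash is stable across blindings (enabling replay detection), while blinding_commitment changes with every r (tying each submission to one factor).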
Submit. The client sends (E, nonce, blinding_commitment, proof_hash, X) to the verifier.
Structural Verify. The verifier confirms that proof_hash is non-zero, E is non-empty, and the blinding commitment is well-formed. No decryption is required.
Full Verify. When the client reveals r, the verifier derives k, decrypts P, checks SHA3-256(P) = proof_hash, checks the blinding commitment, then runs STARK.Verify(P, X).
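Putting the steps together, a runnable end-to-end sketch. Caveat: the production implementation uses AES-256-GCM; to keep this sketch standard-library-only, a SHA3-based keystream with a 16-byte SHA3 MAC stands in for the AEAD. Field names and function signatures are illustrative, not the real API.

```python
import hashlib

def _sha3(*parts: bytes) -> bytes:
    h = hashlib.sha3_256()
    for p in parts:
        h.update(p)
    return h.digest()

def _seal(k: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Stand-in AEAD: SHA3 counter-mode keystream + 16-byte SHA3 tag.
    # (The real scheme uses AES-256-GCM; this keeps the sketch stdlib-only.)
    stream = b""
    ctr = 0
    while len(stream) < len(plaintext):
        stream += _sha3(k, nonce, ctr.to_bytes(8, "big"))
        ctr += 1
    ct = bytes(a ^ b for a, b in zip(plaintext, stream))
    return ct + _sha3(b"MAC", k, nonce, ct)[:16]

def _open(k: bytes, nonce: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-16], blob[-16:]
    if _sha3(b"MAC", k, nonce, ct)[:16] != tag:
        raise ValueError("authentication failed")  # GCM tag-mismatch analogue
    stream = b""
    ctr = 0
    while len(stream) < len(ct):
        stream += _sha3(k, nonce, ctr.to_bytes(8, "big"))
        ctr += 1
    return bytes(a ^ b for a, b in zip(ct, stream))

def blind(proof: bytes, r: bytes) -> dict:
    """Client side: Blind + Commit. Returns the submission tuple."""
    proof_hash = _sha3(proof)
    k = _sha3(b"H33_BLIND_KEY_v1", r)
    nonce = _sha3(b"H33_BLIND_NONCE_v1", r, proof_hash)[:12]
    return {
        "E": _seal(k, nonce, proof),
        "nonce": nonce,
        "proof_hash": proof_hash,
        "blinding_commitment": _sha3(b"H33_BLIND_COMMIT_v1", r, proof_hash),
    }

def verify_structure(sub: dict) -> bool:
    """Phase 1: well-formedness only; no decryption."""
    return (sub["proof_hash"] != bytes(32)
            and len(sub["E"]) > 16
            and len(sub["blinding_commitment"]) == 32)

def decrypt_and_verify(sub: dict, r: bytes) -> bytes:
    """Phase 2: decrypt with the revealed r and check both commitments.
    Returns P so the caller can run STARK.Verify(P, X)."""
    k = _sha3(b"H33_BLIND_KEY_v1", r)
    proof = _open(k, sub["nonce"], sub["E"])
    if _sha3(proof) != sub["proof_hash"]:
        raise ValueError("proof hash mismatch")
    if _sha3(b"H33_BLIND_COMMIT_v1", r, sub["proof_hash"]) != sub["blinding_commitment"]:
        raise ValueError("blinding commitment mismatch")
    return proof
```

Note that decrypt_and_verify stops at the commitment checks; the final STARK.Verify call is unchanged from ordinary verification.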
Security Properties
Unlinkability. Two blindings of the same proof P with different factors r1 and r2 produce ciphertexts E1 and E2 that are computationally indistinguishable under AES-256-GCM (up to ciphertext length). A party holding only the ciphertexts cannot tell whether E1, E2, or neither contains a given proof. The cleartext proof_hash, however, is stable by design, so anyone who already knows P (including the generating server) can match submissions through it; see the Limitations section for omitting proof_hash when that linkage is unacceptable.
Soundness. The client cannot forge a valid blinded proof without possessing a valid STARK proof. AES-256-GCM is an authenticated encryption scheme — decryption fails if any byte of the ciphertext is modified. The proof_hash commitment binds the blinding to a specific proof. Substituting a different proof produces a different hash, which doesn't match the commitment.
Non-malleability. The AES-GCM authentication tag ensures the blinded proof cannot be modified without detection. Unlike malleable encryption schemes, GCM's polynomial MAC rejects any ciphertext tampering with overwhelming probability.
Replay detection. The proof_hash is independent of the blinding factor. If the same underlying proof is submitted twice (with different blindings), the proof_hash matches in both submissions. The verifier detects replays without needing to decrypt.
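A quick way to see why replay detection needs no decryption: proof_hash involves only P, while the key and nonce involve r. A minimal stdlib check (byte strings arbitrary):

```python
import hashlib

proof = b"example-stark-proof"
r1, r2 = b"\x01" * 32, b"\x02" * 32

# The submitted proof_hash contains no blinding factor at all:
proof_hash = hashlib.sha3_256(proof).digest()

# The encryption key (and hence the ciphertext) changes with every factor:
k1 = hashlib.sha3_256(b"H33_BLIND_KEY_v1" + r1).digest()
k2 = hashlib.sha3_256(b"H33_BLIND_KEY_v1" + r2).digest()
```

Two submissions of the same proof therefore carry identical proof_hash values but unrelated ciphertexts, so the verifier flags a replay by comparing 32-byte hashes.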
Two-phase verification. The verifier gets a meaningful answer at each phase. Phase 1 (structural) confirms the submission is well-formed and committed to a specific proof, without seeing it. Phase 2 (full) decrypts and runs STARK verification. This is analogous to the distinction between "the envelope is sealed" and "the document inside is genuine." Both phases provide value independently.
Post-quantum security. The entire scheme uses SHA3-256 (hash-based, quantum-resistant) and AES-256-GCM (symmetric, quantum-resistant at 128-bit security under Grover). No elliptic curves. No lattice assumptions. No structures that Shor's algorithm can exploit.
How This Differs from "Just Encrypting the Proof"
A natural question: isn't this just AES-encrypting the proof? The answer is no, and the distinction is what makes this a re-randomization scheme rather than a confidentiality wrapper.
| Property | Simple Encryption | Our Re-Randomization |
|---|---|---|
| Structural verification without key | No | Yes — proof_hash confirms existence |
| Replay detection across blindings | No | Yes — same proof_hash for same proof |
| Binding commitment | No | Yes — blinding_commitment ties r to P |
| Two-phase verification | No — all or nothing | Yes — structural then full |
| Deterministic with same factor | Depends on mode | Yes — same r + same P = same output |
| Verifier learns a proof is committed pre-decrypt | No | Yes — non-zero proof_hash binds to one |
The blinding commitment is the key differentiator. It creates a cryptographic bond between the blinding factor and the specific proof, allowing the verifier to reason about the proof's existence and uniqueness without seeing it. Simple encryption doesn't provide this.
Comparison with Groth-Maller
| Property | Groth-Maller (Groth16) | H33 (STARK) |
|---|---|---|
| Proof system | Groth16 (SNARK) | STARK (FRI + Merkle) |
| Re-randomization method | Algebraic (scalar multiplication) | Commitment wrapper (AES-GCM + SHA3) |
| Trusted setup required | Yes | No |
| Post-quantum secure | No (elliptic curves) | Yes (hash + symmetric only) |
| Proof size change after blinding | Same (3 curve points) | +~92 bytes (nonce, GCM tag, commitment, proof hash) |
| Verification after unblinding | Standard Groth16 verify | Standard STARK verify |
| Structural verify without unblinding | No | Yes |
| Replay detection | No (re-randomized proofs look independent) | Yes (proof_hash is stable) |
The trade-off is explicit: Groth-Maller produces a re-randomized proof that is itself directly verifiable (no unblinding step). Our construction requires revealing the blinding factor for full verification. However, our construction provides structural verification without unblinding, replay detection, and post-quantum security — none of which Groth-Maller offers.
Implementation
The construction is implemented in Rust as part of H33's STARK engine. The core module is approximately 280 lines of code with no external dependencies beyond sha3, aes-gcm, and rand.
The API is two structs:

ProofBlinder — generates the blinding factor, encrypts the proof, and computes the commitments. One method: blind(raw_proof, public_inputs) → BlindedProof.

BlindedProofVerifier — structural verification (verify_structure), full verification with a revealed factor (decrypt_and_verify), replay detection (is_same_proof), and known-hash matching (matches_known_hash).
The test suite covers 9 properties: round-trip integrity, different-blinding-different-bytes, server-cannot-correlate, wrong-factor-rejection, tampered-ciphertext-detection, replay-detection, deterministic-with-same-factor, empty-proof-rejection, and known-hash-matching.
Applications
Enterprise procurement. A vendor generates a HICS attestation (STARK-proven code quality score). The buyer verifies it through a procurement platform. Without re-randomization, the procurement platform can correlate the vendor's submissions across multiple buyers. With re-randomization, each buyer sees a different blinded proof for the same score.
Healthcare audit trails. A hospital generates STARK proofs for patient record access events. A compliance auditor verifies the audit trail. Without re-randomization, the system that generated the proofs can track which auditor verified which event. With re-randomization, the auditor's verification cannot be linked back to the proof's generation.
Financial compliance. A bank generates transaction monitoring proofs. The regulator verifies them. Re-randomization ensures the bank cannot determine which specific proofs the regulator chose to verify, preventing selective compliance behavior.
Limitations
Unblinding is required for full verification. Unlike Groth-Maller, where the re-randomized proof is directly verifiable, our construction requires the client to reveal the blinding factor for the verifier to run full STARK verification. This is a fundamental trade-off of the commitment-wrapper approach.
Proof size increases. The blinded proof is larger than the raw proof by 28 bytes of AEAD overhead (12-byte nonce + 16-byte GCM tag) plus the 32-byte blinding commitment and the 32-byte proof hash, roughly 92 bytes in total. For typical STARK proofs (10-100 KB), this overhead is negligible.
The proof_hash enables partial linkability. Two submissions of the same underlying proof can be linked via the proof_hash, even with different blindings. This is intentional (for replay detection) but limits full unlinkability. If full unlinkability is required, the proof_hash can be omitted at the cost of losing replay detection.
Conclusion
Re-randomization of STARK proofs was considered structurally impossible because hash-based proof systems lack the algebraic structure that enables Groth-Maller style blinding. Our commitment-wrapper construction sidesteps this limitation by re-randomizing the presentation rather than the mathematics.
The result is a practical, post-quantum, production-deployed re-randomization scheme for STARK proofs that provides unlinkability, non-malleability, two-phase verification, and replay detection. The construction uses only standard primitives (SHA3-256, AES-256-GCM) and is implemented in 280 lines of Rust with no exotic dependencies.
To our knowledge, this is the first re-randomization scheme for hash-based zero-knowledge proof systems.
The re-randomization module is part of H33's STARK engine, deployed in production. It composes with epoch-evolved nullifiers and recursive accumulation for enterprise audit trail infrastructure.
H33 is a post-quantum Privacy-as-a-Service platform. The algorithm is the authority.