PQC · 12 min read

Why Quantum Security Budgets Are Brilliant

Security is not binary. It is a budget — and the question is how many independent mathematical bets an attacker must win simultaneously.

QSB 3 (independent assumptions) · 74 B (distilled footprint) · 391 µs (3-key batch, 32 users) · FIPS 204 · 205 · 206

The Paradigm Shift: From Binary to Budget

The post-quantum cryptography conversation has been dominated by a binary question: is this algorithm quantum-safe or not? That framing is seductive because it is simple. It is also wrong.

Every cryptographic algorithm rests on one or more mathematical hardness assumptions. RSA rests on the difficulty of factoring large integers. Elliptic curve cryptography rests on the discrete logarithm problem over elliptic curves. When Shor's algorithm eventually runs on a sufficiently large quantum computer, both assumptions collapse simultaneously — because both reduce to problems that quantum computers solve efficiently. The question was never whether your algorithm is quantum-safe. The question was always: how many independent mathematical bets are you making, and how many of them must an attacker win at the same time?

This is the Quantum Security Budget. A QSB is not a score, not a rating, not a compliance checkbox. It is a count: how many independent hardness assumptions must be broken simultaneously for a forgery to succeed. A system with a QSB of 1 has placed one mathematical bet. A system with a QSB of 3 has placed three independent mathematical bets, and all three must fail before the system is compromised. The higher the QSB, the more resilient the system is against the kind of failure that actually kills cryptographic schemes: an unexpected breakthrough that renders one assumption worthless overnight.
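The counting rule can be made concrete in a few lines. This is an illustrative sketch, not an H33 API; the scheme and assumption labels are ours:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scheme:
    name: str
    assumption: str  # the underlying mathematical hardness assumption

def qsb(bundle: list[Scheme]) -> int:
    """Count independent assumptions: two schemes resting on the same
    assumption contribute one bet, not two."""
    return len({s.assumption for s in bundle})

h33_bundle = [
    Scheme("ML-DSA-65", "MLWE/MSIS module lattices"),
    Scheme("FALCON-512", "NTRU-SIS lattices"),
    Scheme("SLH-DSA-SHA2-128f", "SHA-256 pre-image resistance"),
]
print(qsb(h33_bundle))  # 3
```

Note that the count is over assumptions, not algorithms: two lattice schemes sharing MLWE would still yield a QSB of 1.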

Most deployed post-quantum systems today have a QSB of 1. They picked one NIST-approved algorithm, implemented it correctly, and shipped. That is rational if you trust that particular algorithm will survive indefinitely. It is a catastrophic gamble if you don't.

The Problem with Single-Algorithm PQC

The NIST post-quantum standardization process began in 2017 with 69 candidate submissions. By the time NIST announced its final selections in 2022 and began publishing FIPS drafts, the field had been culled dramatically. Rainbow fell to a key recovery attack. SIKE fell to a polynomial-time attack that reduced its security from exponential to trivial. GeMSS was eliminated. These were not gradual erosion stories where security margins shrank over decades. They were sudden total breaks. One publication, and the scheme was dead.

The schemes that survived — ML-KEM (Kyber), ML-DSA (Dilithium), FALCON, SLH-DSA (SPHINCS+) — survived because nobody has found the breakthrough attack against them yet. Yet is the operative word. The cryptanalytic record behind these schemes is short. Lattice-based cryptography, the foundation for ML-DSA and FALCON, has been studied seriously for about two decades. Hash-based signatures have a longer pedigree, but the specific construction in SLH-DSA is relatively new. Nobody can guarantee that any one of these schemes will survive the next twenty years of cryptanalysis, let alone the next fifty.

When you deploy a single-algorithm PQC system, you are making a single bet. If that bet pays off, you are fine. If it doesn't — if a paper drops on the IACR ePrint archive next Tuesday that reduces your scheme's security from 2^128 to 2^40 — then every signature you ever produced is forgeable, every attestation you ever issued is worthless, and every system that depended on that algorithm needs an emergency migration. There is no graceful degradation. There is no fallback. The single bet either holds or it doesn't.

A QSB of 1 means you are exactly one ePrint posting away from a total break.

Three Independent Mathematical Bets

H33 takes a different approach. Every attestation — every single substrate minted, every single signing event — carries three post-quantum signatures from three independent mathematical families. All three must verify. A forgery requires breaking all three. The three families are:

ML-DSA-65 (Dilithium) — MLWE lattices. The security of ML-DSA rests on the Module Learning With Errors (MLWE) assumption and the Module Short Integer Solution (MSIS) assumption over structured lattices. The hard problem: given a system of noisy linear equations over a module lattice, recover the secret vector. This problem generalizes classical Learning With Errors, which has been studied intensively since Regev's foundational work in 2005. The best known attacks are BKZ-style lattice reduction algorithms whose cost scales exponentially with the lattice dimension. ML-DSA-65 targets NIST security category 3, approximately 2^192 classical operations. It is standardized under NIST FIPS 204.

FALCON-512 — NTRU lattices. FALCON's security rests on the Short Integer Solution (SIS) assumption over NTRU lattices, a specific algebraic structure first proposed by Hoffstein, Pipher, and Silverman in 1996. The NTRU lattice has a substantially different mathematical flavor from the module lattices underlying ML-DSA. The best attacks require either direct lattice reduction or algebraic attacks that exploit NTRU structure — attacks that have been refined over nearly three decades but have never broken the parameter sets NIST accepted. FALCON-512 targets NIST security category 1, approximately 2^128 classical operations. It is covered by draft NIST FIPS 206.

SLH-DSA-SHA2-128f-simple (SPHINCS+) — stateless hash functions. SLH-DSA's security rests entirely on the pre-image resistance of SHA-256. There are no lattice problems, no algebraic structures, no trapdoor assumptions. The entire security argument reduces to a single question: given a SHA-256 output, can an adversary find an input that hashes to it? If SHA-256 is a good random oracle, SLH-DSA signatures cannot be forged. If it is not, they can. The current best attacks require approximately 2^128 operations classically and approximately 2^85 under Grover's algorithm. SLH-DSA-SHA2-128f targets NIST security category 1. It is standardized under NIST FIPS 205.

Three families. Three mathematical foundations. Module lattices, NTRU lattices, hash pre-image resistance. The research communities that attack these three problems are largely distinct. The techniques that work against one usually do not transfer to the others. A breakthrough in attacking module lattices does not typically translate into a breakthrough against NTRU lattices or hash functions. When we sign under all three families and require all three to verify, we are betting that a single adversary cannot simultaneously break all three problems.

That bet has a name: QSB 3.
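The all-three-must-verify rule is a logical AND over independent checks. A minimal sketch, with toy keyed-digest verifiers standing in for the real FIPS implementations:

```python
import hashlib
from typing import Callable, Mapping

def verify_bundle(message: bytes,
                  sigs: Mapping[str, bytes],
                  verifiers: Mapping[str, Callable[[bytes, bytes], bool]]) -> bool:
    """Accept only if every family verifies; a forger must defeat all three."""
    return all(v(message, sigs[f]) for f, v in verifiers.items())

# Toy stand-ins: a "signature" here is just a keyed SHA-256 digest per family.
def toy_verifier(key: bytes) -> Callable[[bytes, bytes], bool]:
    return lambda msg, sig: hashlib.sha256(key + msg).digest() == sig

keys = {f: f.encode() for f in ("ML-DSA-65", "FALCON-512", "SLH-DSA-128f")}
msg = b"attestation"
sigs = {f: hashlib.sha256(k + msg).digest() for f, k in keys.items()}
verifiers = {f: toy_verifier(k) for f, k in keys.items()}
print(verify_bundle(msg, sigs, verifiers))  # True
```

Tampering with any single family's signature makes the whole bundle reject, which is exactly the property the QSB-3 design relies on.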

Why Independence Matters More Than Level

Here is the part that trips people up. The H33 three-key bundle's effective NIST security level is Level 1, not Level 3 or Level 5, because the brute-force security of a composed scheme is bounded by its weakest component:

effective level = min(ML-DSA-65 at Level 3, FALCON-512 at Level 1, SLH-DSA-SHA2-128f at Level 1) = Level 1

The floor is 128 bits. We do not inflate this to Level 3 in our marketing because doing so would be inaccurate and would undermine the independence claim that actually matters. But here is why a QSB of 3 at Level 1 is stronger than a QSB of 1 at Level 5 against the threats that actually kill cryptographic schemes.

Imagine a cryptanalyst announces a breakthrough against MLWE at Level 3 parameters. The attack reduces ML-DSA-65 from 2^192 cost to 2^80. A deployment that uses only ML-DSA-65 is catastrophically broken — every signature ever produced is now forgeable. But the H33 substrate's other two components are unaffected. FALCON-512 (NTRU-SIS, not MLWE) remains at 2^128. SLH-DSA (hash pre-image, not any lattice problem) remains at 2^128. The substrate's effective security degrades from "three independent assumptions at Level 1" to "two independent assumptions at Level 1." It survives.

Now imagine the breakthrough is against NTRU-SIS instead. FALCON collapses. ML-DSA is unaffected. SLH-DSA is unaffected. The substrate survives. Now imagine a breakthrough against SHA-256 pre-image resistance. SLH-DSA collapses. ML-DSA and FALCON are unaffected. The substrate survives.

In every one of these scenarios, a single-algorithm deployment — even at NIST Level 5 — would be a total break if it happened to be the algorithm that fell. The QSB-3 bundle survives by design. The independence of assumptions, not the NIST level of any individual component, is the property that provides resilience against cryptanalytic surprise.

Put differently: a NIST Level 5 single-algorithm deployment is stronger against brute-force attacks. A QSB-3 Level 1 bundle is stronger against mathematical breakthroughs. Brute-force attacks against 128-bit security require resources that probably don't exist on Earth in 2026. Mathematical breakthroughs require one good idea and one tenured professor. We design against the more likely failure mode.
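The scenario analysis above can be replayed mechanically. A small sketch (assumption labels are illustrative) that knocks out one assumption at a time and counts what survives:

```python
# Map each deployed family to its underlying hardness assumption.
FAMILIES = {
    "ML-DSA-65": "MLWE module lattices",
    "FALCON-512": "NTRU-SIS lattices",
    "SLH-DSA-SHA2-128f": "SHA-256 pre-image resistance",
}

def surviving_qsb(deployed: dict, broken_assumption: str) -> int:
    """How many independent bets remain after one assumption collapses."""
    return len({a for a in deployed.values() if a != broken_assumption})

# Any single breakthrough leaves the QSB-3 bundle with two intact assumptions.
for assumption in set(FAMILIES.values()):
    print(assumption, "->", surviving_qsb(FAMILIES, assumption))  # 2 each time

# A single-algorithm deployment hit by the same breakthrough is a total break.
single = {"ML-DSA-65": "MLWE module lattices"}
print(surviving_qsb(single, "MLWE module lattices"))  # 0
```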

H33-74: Distilling Three Signatures to 74 Bytes

The obvious objection to a three-family signing scheme is size. The raw signature bundle from ML-DSA-65 (3,309 bytes), FALCON-512 (~666 bytes), and SLH-DSA-SHA2-128f (17,088 bytes) totals approximately 21 kilobytes. That is large. It is too large to put on a blockchain. It is too large for many embedded systems. It is too large for most practical deployment patterns that care about persistent storage.

H33-74 solves this through distillation — not compression. The distinction matters. Compression implies a reversible encoding: you take the original data, make it smaller, and can reconstruct the original from the compressed form. H33-74 does not do that. It distills the three-family signature bundle into a new 74-byte cryptographic object (32 bytes on-chain, 42 bytes in Cachee) that serves as a persistent commitment to the full bundle. The full 21 KB bundle lives in an off-chain signature store and is fetched on demand by verifiers who need to perform full three-family cryptographic verification.

The 74-byte primitive carries enough information to establish that a specific signing event occurred, that a specific message was signed, and that the signer committed to the three-family bundle at a specific time. It does not carry enough information to independently verify the three raw signatures — that requires fetching the full bundle. But the 74 bytes is what you pay to the blockchain, and 74 bytes is what you store persistently. The 21 KB is what you pay to the off-chain signature store, which is cheap commodity storage.

This two-tier architecture — 74 bytes persistent, 21 KB ephemeral — is what makes a QSB of 3 practical at scale. Without it, three-family signing would be an academic exercise: mathematically interesting but operationally infeasible for any workload that cares about on-chain footprint or storage density. With H33-74, three-family signing fits in fewer bytes than a single tweet. Patent pending — 6 patents, 250+ claims.
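The one-way nature of distillation can be illustrated with a hash-based commitment. The actual H33-74 byte layout is not public, so the 42-byte record below (timestamp, algorithm-flags byte, version, message digest) is a guessed illustration of the shape, not the spec:

```python
import hashlib
import struct

def distill(bundle: bytes, message: bytes, timestamp_us: int) -> tuple[bytes, bytes]:
    """One-way commitment: a 32-byte digest (the on-chain part) plus an
    illustrative 42-byte metadata record (the Cachee part). The original
    ~21 KB bundle cannot be reconstructed from either output."""
    onchain = hashlib.sha256(
        bundle + message + struct.pack("<Q", timestamp_us)
    ).digest()  # 32 bytes
    # Hypothetical 42-byte record: 8-byte timestamp, 1-byte algorithm flags
    # (three families set), 1-byte version, 32-byte message digest.
    meta = struct.pack("<QBB", timestamp_us, 0b00000111, 1) \
        + hashlib.sha256(message).digest()
    return onchain, meta

onchain, meta = distill(b"\x00" * 21_063, b"signed message", 1_700_000_000_000_000)
print(len(onchain), len(meta), len(onchain) + len(meta))  # 32 42 74
```

The point of the sketch is the asymmetry: verifying the commitment against a fetched bundle is cheap, while recovering the bundle from 74 bytes is impossible by construction.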

The NIST Standards Behind the Three Families

Each of the three families in the H33 QSB-3 bundle is standardized (or in final draft) under a NIST FIPS standard. This is not incidental. Standardization means known-answer-test (KAT) vectors, formal security proofs reviewed by the global cryptographic community, and a concrete set of parameters that have survived years of public scrutiny.

FIPS 204 — ML-DSA (Module-Lattice-Based Digital Signature Algorithm). Published August 2024. Standardizes three parameter sets: ML-DSA-44 (Level 2), ML-DSA-65 (Level 3), and ML-DSA-87 (Level 5). H33 uses ML-DSA-65 in production. The underlying construction is based on the Fiat-Shamir with Aborts paradigm applied to module lattices. Signature size: 3,309 bytes. Public key size: 1,952 bytes.

FIPS 205 — SLH-DSA (Stateless Hash-Based Digital Signature Algorithm). Published August 2024. Standardizes twelve parameter sets across two hash families (SHA2 and SHAKE), three security levels, and two speed/size tradeoffs (f for fast, s for small). H33 uses SLH-DSA-SHA2-128f-simple — the fastest variant at NIST Level 1. Signature size: 17,088 bytes. Public key size: 32 bytes. The entire security argument rests on the random oracle model for SHA-256.

Draft FIPS 206 — FN-DSA (FFT over NTRU-Lattice-Based Digital Signature Algorithm). Expected final publication in 2025. Standardizes FALCON-512 (Level 1) and FALCON-1024 (Level 5). H33 uses FALCON-512 in production. The construction uses fast Fourier sampling over NTRU lattices, producing compact signatures (~666 bytes) with relatively fast verification. FALCON's mathematical heritage traces back to the original NTRU proposal in 1996 — thirty years of cryptanalysis with no successful break at standardized parameters.

All three standards mandate specific KAT vectors, which H33 validates in CI on every build. If a KAT vector fails, the build fails. There is no path to production that does not pass all three families' known-answer tests.
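A CI gate of the kind described can be sketched as follows. The family table and its `sign`/`kat_vectors` fields are hypothetical stand-ins for the real FIPS KAT files, and the toy signer is a keyed digest, not a FIPS implementation:

```python
import hashlib

def run_kats(families: dict) -> None:
    """Replay every family's known-answer tests; any mismatch fails the build."""
    for name, fam in families.items():
        for vec in fam["kat_vectors"]:
            got = fam["sign"](vec["seed"], vec["message"])
            if got != vec["expected"]:
                raise RuntimeError(f"KAT failure in {name}: build rejected")

# Toy family whose "signature" is a keyed SHA-256 digest.
toy = {
    "toy-family": {
        "sign": lambda seed, msg: hashlib.sha256(seed + msg).digest(),
        "kat_vectors": [{
            "seed": b"\x00" * 32,
            "message": b"kat",
            "expected": hashlib.sha256(b"\x00" * 32 + b"kat").digest(),
        }],
    }
}
run_kats(toy)  # passes silently; a mismatch would raise and fail the build
```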

Production Performance: Three Keys Do Not Mean Three Times the Cost

The most common objection we hear from engineering teams evaluating multi-family PQC is performance: "three signature families means three times the signing cost and three times the verification cost, right?" Wrong. Here is what actually happens in production.

H33 batches 32 users per attestation cycle. The full pipeline — FHE batch encryption, three-key signing, ZKP cached verification — completes in 1,345 microseconds per batch of 32 users. The three-key attestation component (SHA3 hash, Dilithium sign and verify, FALCON sign and verify, SPHINCS+ sign and verify) takes 391 microseconds of that total. Per user, the three-key attestation overhead is approximately 12 microseconds.

Production throughput on Graviton4 hardware (c8g.metal-48xl, 192 vCPUs, 371 GiB RAM): 2,293,766 authentications per second with full three-key attestation. That is not a microbenchmark of the signing primitive in isolation. That is the end-to-end pipeline: FHE batch at 943 microseconds, three-key attestation at 391 microseconds, ZKP cached lookup at 0.358 microseconds.

The reason three families do not cost three times as much is batching. When you amortize one three-family signing event across 32 users, the per-user cost of the second and third signature families is marginal. SLH-DSA dominates the signing cost (~5ms per event), but that 5ms is shared across 32 users and overlapped with the FHE pipeline. The verification side is even cheaper: all three verifications can be run in parallel across cores, completing in roughly the time of the slowest single verification (SPHINCS+ at ~300 microseconds).

A QSB of 1 would shave 391 microseconds off the batch. A QSB of 3 adds 391 microseconds and survives an arbitrary cryptanalytic breakthrough against any single family. That is 12 microseconds of insurance per user per attestation. We consider it the best 12 microseconds an enterprise can spend.
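The arithmetic behind those figures fits in a screenful, using only the numbers quoted above:

```python
# Back-of-envelope check of the amortization claims.
batch_attestation_us = 391   # three-key attestation per 32-user batch
batch_size = 32
per_user_us = batch_attestation_us / batch_size
print(round(per_user_us, 1))  # 12.2 microseconds of insurance per user

pipeline_us = 1345           # full pipeline per batch (FHE + 3-key + cached ZKP)
attestation_share = batch_attestation_us / pipeline_us
print(round(attestation_share * 100))  # 29 (percent of the batch budget)
```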

Enterprise QSB: How Organizations Should Think About Quantum Migration

Most enterprise quantum migration plans follow a simple pattern: identify which algorithms need replacing, pick a NIST-approved replacement, schedule the migration, execute. This is sensible operational planning, but it implicitly assumes a QSB of 1 is sufficient. For many workloads, it is. For workloads where the consequences of a cryptographic break are severe — financial settlement, healthcare records, government communications, legal evidence, regulatory compliance — a QSB of 1 is a bet that the specific algorithm you chose will never be broken.

Here is how we recommend enterprises think about QSB as a planning framework:

Classify data by break consequence, not by sensitivity. The question is not "how sensitive is this data?" but "what happens if the cryptographic protection on this data is broken in 5, 10, or 20 years?" Data with high break consequences — data that is still damaging to expose decades from now — should carry a higher QSB. Medical records, estate documents, biometric templates, financial audit trails, and compliance evidence all have multi-decade break horizons. A QSB of 3 is appropriate.

Treat QSB as a risk parameter, not a feature. A QSB of 1 is not "basic" and a QSB of 3 is not "premium." They are different risk profiles. A QSB of 1 is sufficient when the break consequence is bounded (session tokens, short-lived credentials, ephemeral communications). A QSB of 3 is appropriate when the break consequence is unbounded (long-lived attestations, permanent records, legal evidence).

Plan for assumption failure, not algorithm failure. Most migration plans focus on replacing specific algorithms. A QSB framework focuses on the underlying mathematical assumption. If your entire PQC stack rests on MLWE lattices (ML-KEM for key exchange, ML-DSA for signatures), you have a QSB of 1 even though you are using two different algorithms — because both algorithms fail if MLWE breaks. Diversifying across mathematical assumptions, not just across algorithms, is the point of QSB.

Budget for distillation, not raw signatures. The raw size of a three-family signature bundle (21 KB) is impractical for many workloads. But the distilled footprint (74 bytes via H33-74) is smaller than most single-algorithm PQC signatures. When evaluating multi-family approaches, compare the persistent footprint, not the ephemeral bundle size. The ephemeral bundle lives in cheap storage. The persistent commitment is what lives on-chain or in your audit trail forever.

Do not wait for uniform NIST levels. Some organizations are deferring multi-family adoption until all three families can be deployed at a uniform NIST Level 3 or Level 5, which would require FALCON-1024 and SLH-DSA-SHA2-192f (pending Graviton4 benchmarking). This is a mistake. The security benefit of independence is available today at Level 1. Waiting for uniform Level 3 means operating at QSB 1 for months or years while Level 1 QSB 3 is already shipping. Independence now is better than uniformity later.
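The "plan for assumption failure" recommendation above reduces to counting distinct assumptions, not distinct algorithms, across a deployed stack. A minimal audit sketch with an illustrative algorithm-to-assumption mapping:

```python
# Illustrative mapping; extend with whatever your stack actually deploys.
ASSUMPTION = {
    "ML-KEM-768": "MLWE module lattices",
    "ML-DSA-65": "MLWE module lattices",
    "FALCON-512": "NTRU-SIS lattices",
    "SLH-DSA-SHA2-128f": "SHA-256 pre-image resistance",
}

def stack_qsb(deployed: list[str]) -> int:
    """QSB of a stack: the number of distinct assumptions it rests on."""
    return len({ASSUMPTION[a] for a in deployed})

# Two algorithms, one bet: both fall together if MLWE breaks.
print(stack_qsb(["ML-KEM-768", "ML-DSA-65"]))  # 1
# Three algorithms, three bets.
print(stack_qsb(["ML-DSA-65", "FALCON-512", "SLH-DSA-SHA2-128f"]))  # 3
```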

Frequently Asked Questions

What exactly is a Quantum Security Budget?

A Quantum Security Budget (QSB) is a count of independent mathematical hardness assumptions that must be broken simultaneously for a cryptographic forgery to succeed. A QSB of 1 means one assumption. A QSB of 3 means three independent assumptions. The higher the number, the more resilient the system is against the sudden collapse of any single mathematical foundation. It shifts the question from "is this algorithm quantum-safe?" to "how many independent bets have you placed?"

Why does H33 use three families instead of two or four?

Three is the current practical maximum for NIST-standardized post-quantum signature families with genuinely independent mathematical foundations. NIST standardized three categories of signature schemes: lattice-based (two variants: MLWE via ML-DSA and NTRU via FALCON) and hash-based (SLH-DSA). Using both lattice variants plus the hash-based scheme gives three independent assumptions. Adding a fourth family would require a NIST-standardized scheme from a fourth mathematical foundation (code-based, isogeny-based, multivariate), and no such scheme has been standardized yet. When NIST standardizes a fourth family, H33 can extend the QSB to 4 via the reserved bits in the algorithm flags byte.

Doesn't the bundle's security just equal its weakest component?

Under brute-force assumptions, yes — the bundle's NIST level is bounded at Level 1 by FALCON-512 and SLH-DSA-SHA2-128f. But the QSB framework is not about brute-force resistance. It is about resilience to mathematical breakthroughs. A QSB of 3 at Level 1 survives the complete collapse of any single family, which a single Level 5 algorithm cannot do. The bundle's brute-force floor is Level 1 (128 bits). The bundle's cryptanalytic resilience is three independent 128-bit assumptions, which is a fundamentally different security posture than one 256-bit assumption.

What is the difference between H33-74 distillation and compression?

Compression implies reversibility: the original data can be reconstructed from the compressed form. Distillation produces a new cryptographic object — a 74-byte commitment — from which the original 21 KB signature bundle cannot be recovered. The 74 bytes is a cryptographic commitment to the three-family signing event, not a smaller encoding of it. The original bundle lives off-chain in a signature store and is fetched on demand for full verification. Distillation is to compression what a hash is to a zip file: it produces evidence of the original, not a smaller copy of it.

Is the performance overhead of three families acceptable for real-time systems?

Yes. H33 processes 2,293,766 full three-key authentications per second in production on Graviton4 hardware. The three-key attestation adds 391 microseconds per batch of 32 users, or approximately 12 microseconds per user. For comparison, a typical network round-trip is 1,000-50,000 microseconds. The QSB-3 overhead is smaller than the noise floor of most real-time systems. Verification is similarly fast: all three signatures verify in parallel, completing in roughly 300 microseconds for the full bundle.

Which organizations should adopt a QSB greater than 1?

Any organization where the consequence of a cryptographic break extends beyond the useful lifetime of the data being protected. Healthcare (medical records have multi-decade relevance), financial services (settlement finality, audit trails), government (classified communications, identity documents), legal (evidentiary records, notarization), and critical infrastructure (SCADA signing, firmware attestation) are all domains where a QSB of 3 is appropriate. If a break in 2036 would be damaging, the cryptographic protection you deploy in 2026 should survive a decade of unknown mathematical progress. A QSB of 1 bets it will. A QSB of 3 insures against the possibility it won't.

Build With QSB 3

Every H33 API call returns a three-family substrate attestation. 74 bytes. Three independent hardness assumptions. Patent pending.
