Every digital system you rely on—online banking, encrypted messaging, VPN tunnels, TLS certificates, software updates, healthcare records—is protected by public-key cryptography built on two mathematical assumptions: that factoring large integers is hard and that computing discrete logarithms is hard. These assumptions have held for decades. Quantum computers will shatter both of them.
Post-quantum cryptography (PQC) is the field of cryptographic algorithms designed to resist attacks from both classical and quantum computers. Unlike quantum cryptography (which uses quantum mechanics to transmit keys), PQC algorithms run on ordinary hardware—your existing servers, laptops, and phones—but are built on mathematical problems that remain hard even for quantum processors. NIST finalized the first three PQC standards in August 2024. The migration clock is already running.
This guide covers everything a security engineer, architect, or technical leader needs to know: what quantum computers actually break, what they do not break, the four algorithm families, deep dives on each NIST standard, key-size and performance comparisons, regulatory deadlines, the hybrid approach, H33's production PQ stack, and a concrete migration roadmap.
What Quantum Computers Break
The threat is not that quantum computers are "faster" in some general sense. The threat is specific: a sufficiently large, error-corrected quantum computer running Shor's algorithm can efficiently solve the integer factorization problem and the discrete logarithm problem. These are the two mathematical foundations underlying virtually all public-key cryptography deployed today.
Shor's Algorithm: The Core Threat
Peter Shor published his algorithm in 1994. It uses quantum Fourier transforms to find the period of a modular exponential function, which directly yields the prime factors of a composite integer (breaking RSA) or the discrete logarithm (breaking Diffie-Hellman and ECC). On a classical computer, the best known factoring algorithm (General Number Field Sieve) runs in sub-exponential time. Shor's algorithm runs in polynomial time—an exponential speedup.
RSA (all key sizes) — integer factorization. ECDSA / ECDH / Ed25519 / X25519 — elliptic curve discrete log. Diffie-Hellman / ElGamal / DSA — finite field discrete log. Every TLS handshake, every SSH session, every certificate chain, every code-signing key that uses these algorithms will be retrospectively compromised once a cryptographically relevant quantum computer (CRQC) exists.
| Algorithm | Foundation | Broken By | Est. Logical Qubits |
|---|---|---|---|
| RSA-2048 | Integer factorization | Shor's | 1,730–6,190 |
| RSA-4096 | Integer factorization | Shor's | ~12,000 |
| ECDSA P-256 | Elliptic curve discrete log | Shor's | 2,330–2,619 |
| ECDH / X25519 | Elliptic curve discrete log | Shor's | ~2,330 |
| Diffie-Hellman 2048 | Finite field discrete log | Shor's | Similar to RSA |
| Ed25519 | Elliptic curve discrete log | Shor's | ~2,330 |
The logical qubit estimates above represent how many error-corrected qubits are needed. Physical qubit requirements depend on error rates and error-correction overhead. Craig Gidney's May 2025 paper reduced the estimated physical qubit count for RSA-2048 from ~20 million to under 1 million, narrowing the gap between current hardware and a CRQC to roughly three orders of magnitude.
What Quantum Computers Do NOT Break
One of the most common misconceptions is that quantum computers "break all encryption." They do not. The second relevant quantum algorithm is Grover's algorithm, which provides a quadratic speedup for unstructured search. Its impact on symmetric cryptography and hash functions is real but manageable.
Grover's algorithm halves the effective security level of symmetric ciphers and hash functions. AES-256 drops from 256-bit security to ~128-bit equivalent. AES-128 drops to ~64-bit equivalent (insufficient). SHA-256 collision resistance drops from 128-bit to ~85-bit. The fix is straightforward: use larger key sizes. AES-256 and SHA-384/SHA-512 remain secure against quantum adversaries with comfortable margins.
| Algorithm | Type | Classical Security | Post-Quantum Security | Verdict |
|---|---|---|---|---|
| AES-256 | Symmetric cipher | 256-bit | ~128-bit | Safe |
| AES-128 | Symmetric cipher | 128-bit | ~64-bit | Insufficient |
| ChaCha20-Poly1305 | AEAD cipher | 256-bit | ~128-bit | Safe |
| SHA-256 | Hash function | 128-bit collision | ~85-bit collision | Acceptable |
| SHA-384 / SHA-512 | Hash function | 192 / 256-bit | ~128 / 170-bit | Safe |
| SHA-3 (Keccak) | Hash function | Up to 256-bit | Up to ~128-bit | Safe |
| HMAC-SHA-256 | MAC | 256-bit | ~128-bit | Safe |
The practical takeaway: symmetric encryption and hashing survive the quantum era with minor upgrades (move from AES-128 to AES-256, prefer SHA-384+ where collision resistance matters). The crisis is entirely in public-key cryptography—key exchange, digital signatures, and certificate chains.
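The security levels in the table reduce to two exponents. A quick Python sanity check (query-complexity estimates only; real attack costs also include circuit depth and error correction, which make Grover even less practical than these numbers suggest):

```python
# Query-complexity arithmetic behind the symmetric-crypto verdicts:
# Grover searches 2^n keys in ~2^(n/2) queries; Brassard-Hoyer-Tapp
# finds hash collisions in ~2^(n/3) queries (classical birthday: 2^(n/2)).

def grover_keysearch_bits(key_bits: int) -> float:
    return key_bits / 2

def bht_collision_bits(hash_bits: int) -> float:
    return hash_bits / 3

print(f"AES-128 -> ~{grover_keysearch_bits(128):.0f}-bit post-quantum")  # ~64-bit
print(f"AES-256 -> ~{grover_keysearch_bits(256):.0f}-bit post-quantum")  # ~128-bit
print(f"SHA-256 collisions -> ~{bht_collision_bits(256):.0f}-bit")       # ~85-bit
```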
NIST PQC Standardization: From Call to FIPS
The global effort to standardize quantum-resistant algorithms has been led by the National Institute of Standards and Technology (NIST). The process took eight years, involved submissions from research teams worldwide, and withstood multiple rounds of cryptanalysis by the global academic community.
The eight-year process was not bureaucratic delay. It was necessary. One of the Round 3 candidates—SIKE (an isogeny-based scheme)—was completely broken by a classical attack in 2022 after years of analysis. Rigorous, multi-round cryptanalysis is not optional for standards that will protect the world's infrastructure for decades.
The Four Families of Post-Quantum Algorithms
Post-quantum cryptography is not a single approach. It encompasses four major mathematical families, each with different security assumptions, performance profiles, and tradeoffs. Understanding these families is essential for evaluating which algorithms fit your threat model.
Lattice-Based
Hard problem: Learning With Errors (LWE) and Module-LWE. Finding a short vector in a high-dimensional lattice is believed hard for both classical and quantum computers. Underlies ML-KEM (FIPS 203) and ML-DSA (FIPS 204). Offers the best balance of security, performance, and key size. Dominates the NIST selections.
Hash-Based
Hard problem: Security of the underlying hash function (SHA-256 / SHA-3 / SHAKE). The most conservative family—security proof reduces directly to the hash function's collision and preimage resistance. Underlies SLH-DSA (FIPS 205). Signatures only (no key exchange). Larger signatures but minimal cryptographic assumptions.
Code-Based
Hard problem: Decoding a random linear code (equivalent to generic syndrome decoding). The McEliece cryptosystem has survived over 40 years of cryptanalysis since 1978. HQC (Hamming Quasi-Cyclic) selected by NIST as a backup KEM. Drawback: very large public keys (hundreds of kilobytes for McEliece).
Multivariate
Hard problem: Solving systems of multivariate quadratic polynomial equations over finite fields (MQ problem). Compact signatures but large public keys. Rainbow, a leading multivariate scheme, was broken in 2022. The family remains active in research but has no NIST finalists remaining in the current round.
A fifth family—isogeny-based cryptography, built on maps between elliptic curves—was a strong NIST candidate (SIKE). In July 2022, Wouter Castryck and Thomas Decru published a classical polynomial-time attack that completely broke SIKE on a single laptop in under an hour. The lesson: any PQC scheme can fall to future cryptanalysis. This is why NIST selected algorithms from multiple families and why the hybrid approach (PQC + classical) is recommended during the transition period.
FIPS 203: ML-KEM (Kyber) — Post-Quantum Key Encapsulation
ML-KEM (Module Lattice-Based Key Encapsulation Mechanism), derived from the CRYSTALS-Kyber submission, is the standard for quantum-resistant key exchange. It replaces RSA key exchange and ECDH in TLS handshakes, VPNs, and any protocol that needs two parties to agree on a shared secret.
How ML-KEM Works
ML-KEM is a key encapsulation mechanism (KEM), not a traditional key exchange. The distinction matters:
- KeyGen: The receiver generates a public/private key pair. The public key is a matrix `A` sampled from a seed, plus a vector `t = A*s + e` where `s` is a short secret vector and `e` is a small error vector. The secret key is `s`.
- Encapsulate: The sender generates a random message `m`, computes a ciphertext that "wraps" `m` using the public key with fresh randomness, and derives the shared secret `K` from `m`.
- Decapsulate: The receiver uses the private key to unwrap the ciphertext, recovers `m` (or detects tampering), and derives the same shared secret `K`.
The security rests on the Module Learning With Errors (MLWE) problem: given A and t = A*s + e, recovering s requires finding a short vector in a lattice—a problem for which no efficient quantum algorithm is known.
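To make the `t = A*s + e` structure concrete, here is a toy, deliberately insecure Regev-style LWE encryption of a single bit in pure Python. The parameters (`n = 8`, `m = 16`) are illustrative only; real ML-KEM works over module lattices of degree-256 polynomials with NTT arithmetic and the Fujisaki-Okamoto transform, none of which this sketch attempts:

```python
import secrets

# Toy parameters -- illustrative only, NOT secure.
q, n, m = 3329, 8, 16   # modulus (borrowed from Kyber), secret dimension, samples

def small() -> int:
    """A 'small' coefficient in {-1, 0, 1} (secret/error distribution)."""
    return secrets.randbelow(3) - 1

def keygen():
    A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m)]
    s = [small() for _ in range(n)]                                # short secret
    t = [(sum(A[i][j] * s[j] for j in range(n)) + small()) % q     # t = A*s + e
         for i in range(m)]
    return (A, t), s           # public key (A, t), secret key s

def encrypt(pk, bit: int):
    A, t = pk
    r = [secrets.randbelow(2) for _ in range(m)]                   # fresh randomness
    u = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(r[i] * t[i] for i in range(m)) + bit * (q // 2)) % q  # bit in the high bit
    return u, v

def decrypt(sk, ct) -> int:
    u, v = ct
    d = (v - sum(u[j] * sk[j] for j in range(n))) % q  # = r.e + bit*q/2, |r.e| <= 16
    return 1 if q // 4 < d < 3 * q // 4 else 0         # round away the small error

pk, sk = keygen()
print(decrypt(sk, encrypt(pk, 0)), decrypt(sk, encrypt(pk, 1)))  # -> 0 1
```

Decryption works because the accumulated error `r.e` is at most 16 in absolute value, far below the `q/4 = 832` rounding threshold; an attacker who sees only `(A, t)` faces exactly the LWE problem described above.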
| Parameter Set | NIST Level | Public Key | Ciphertext | Shared Secret | Classical Equivalent |
|---|---|---|---|---|---|
| ML-KEM-512 | 1 | 800 B | 768 B | 32 B | AES-128 |
| ML-KEM-768 | 3 | 1,184 B | 1,088 B | 32 B | AES-192 |
| ML-KEM-1024 | 5 | 1,568 B | 1,568 B | 32 B | AES-256 |
ML-KEM-768 is the recommended default for most applications, providing NIST Level 3 security (equivalent to AES-192 against both classical and quantum attacks). Key sizes are larger than ECDH (32 bytes for X25519) but still practical for network protocols—a hybrid TLS handshake adds roughly 2 KB (one public key plus one ciphertext).
ML-KEM Performance
ML-KEM-768 Benchmark (Typical Server Hardware)
ML-KEM is not only quantum-safe—it is faster than both RSA and ECDH for key exchange. The NTT-based polynomial arithmetic at its core is highly parallelizable and cache-friendly. There is no performance excuse for delaying ML-KEM adoption.
FIPS 204: ML-DSA (Dilithium) — Post-Quantum Digital Signatures
ML-DSA (Module Lattice-Based Digital Signature Algorithm), derived from CRYSTALS-Dilithium, is the standard for quantum-resistant digital signatures. It replaces RSA signatures, ECDSA, and EdDSA for authenticating identities, signing documents, verifying software updates, and securing certificate chains.
How ML-DSA Works
ML-DSA uses a "Fiat-Shamir with Aborts" paradigm:
- KeyGen: Generate a matrix `A` from a seed, secret vectors `s1` and `s2` with small coefficients, and compute `t = A*s1 + s2`. The public key is `(seed, t)`. The secret key is `(s1, s2)`.
- Sign: Sample a random masking vector `y`, compute `w = A*y`, hash the message with `w` to get a challenge `c`, and compute `z = y + c*s1`. If `z` is "too large" (would leak information about `s1`), abort and retry with fresh randomness. This rejection sampling ensures signatures reveal nothing about the secret key.
- Verify: Recompute `w'` from the signature and public key, and check that the hash matches. Verification runs in constant time.
The "aborts" mechanism is what makes ML-DSA secure: by rejecting candidate signatures that would leak information, the scheme guarantees that the distribution of published signatures is independent of the secret key. On average, signing requires roughly four to five iterations, depending on the parameter set.
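The rejection-sampling idea can be shown in one dimension. In this sketch, `GAMMA` and `BETA` are toy bounds chosen for illustration (not the FIPS 204 parameters), and `c_s1` stands in for the secret-dependent term `c*s1`:

```python
import secrets

# Toy 1-D bounds -- illustrative, not the FIPS 204 parameters
GAMMA = 2**17   # mask y drawn uniformly from [-(GAMMA-1), GAMMA-1]
BETA = 2**10    # public bound on |c*s1|

def masked_response(c_s1: int):
    """Return z = y + c*s1, aborting whenever z could leak c*s1."""
    assert abs(c_s1) <= BETA
    attempts = 0
    while True:
        attempts += 1
        y = secrets.randbelow(2 * GAMMA - 1) - (GAMMA - 1)   # fresh mask per try
        z = y + c_s1
        # Accept only z values reachable for EVERY |c*s1| <= BETA, so the
        # accepted distribution is uniform and independent of the secret.
        if abs(z) <= GAMMA - BETA - 1:
            return z, attempts

z, attempts = masked_response(c_s1=777)
print(z, attempts)
```

In real ML-DSA, `z` is a vector and every coefficient must pass the bound simultaneously, so the per-coefficient acceptance probabilities multiply into the handful of expected signing iterations.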
| Parameter Set | NIST Level | Public Key | Signature | Secret Key | Classical Equivalent |
|---|---|---|---|---|---|
| ML-DSA-44 | 2 | 1,312 B | 2,420 B | 2,560 B | SHA-256 collision |
| ML-DSA-65 | 3 | 1,952 B | 3,309 B | 4,032 B | AES-192 |
| ML-DSA-87 | 5 | 2,592 B | 4,627 B | 4,896 B | AES-256 |
ML-DSA-65 (NIST Level 3) is recommended for most applications. Signature sizes are larger than ECDSA (64 bytes) or Ed25519 (64 bytes), but 3.3 KB is still practical for authentication tokens, API responses, and certificate chains.
ML-DSA Performance
H33's production stack uses ML-DSA (Dilithium) for attestation signatures on every authentication. Measured on Graviton4 (c8g.metal-48xl):
ML-DSA-65 Production Performance (Graviton4)
By batching 32 users per attestation call, H33 amortizes the Dilithium overhead to ~7.5 microseconds per authentication. This is faster than classical ECDSA while providing post-quantum security.
FIPS 205: SLH-DSA (SPHINCS+) — Hash-Based Backup
SLH-DSA (Stateless Hash-Based Digital Signature Algorithm), derived from SPHINCS+, provides a fundamentally different security guarantee from the lattice-based schemes. Its security relies solely on the properties of hash functions—no lattice assumptions, no number theory, no algebraic structure to potentially exploit.
Why SLH-DSA Matters
SLH-DSA exists as algorithmic insurance. If a breakthrough in lattice cryptanalysis weakens or breaks ML-KEM and ML-DSA (which would compromise both FIPS 203 and 204 simultaneously), SLH-DSA remains secure because it depends on an entirely different mathematical foundation. Any attack on SLH-DSA would require breaking SHA-256 or SHA-3—a breakthrough that would undermine virtually all of modern cryptography.
NIST explicitly selected SLH-DSA as a diversification measure. Both ML-KEM and ML-DSA are lattice-based, meaning a single mathematical breakthrough could compromise both. SLH-DSA ensures that at least one NIST-standardized signature scheme would survive such a scenario. For high-assurance environments (government, critical infrastructure), deploying SLH-DSA alongside ML-DSA provides two independent lines of defense.
SLH-DSA Tradeoffs
The conservative security of hash-based signatures comes at a cost: larger signatures and slower signing.
| Parameter Set | NIST Level | Public Key | Signature | Sign Time | Verify Time |
|---|---|---|---|---|---|
| SLH-DSA-128s (small) | 1 | 32 B | 7,856 B | ~60 ms | ~3 ms |
| SLH-DSA-128f (fast) | 1 | 32 B | 17,088 B | ~8 ms | ~0.5 ms |
| SLH-DSA-192s | 3 | 48 B | 16,224 B | ~100 ms | ~5 ms |
| SLH-DSA-256f | 5 | 64 B | 49,856 B | ~20 ms | ~1 ms |
SLH-DSA signatures range from ~8 KB to ~50 KB—10x to 100x larger than ML-DSA. Signing is milliseconds rather than microseconds. For real-time authentication at scale, ML-DSA is the clear choice. SLH-DSA is best suited for use cases where signing is infrequent and verification latency is less critical: firmware updates, code signing, root certificate authorities, and long-lived document signatures.
Key Size Comparison: Classical vs. Post-Quantum
The most visible practical impact of PQC migration is increased key and signature sizes. This table provides a direct comparison across the most commonly used algorithms.
| Algorithm | Type | Public Key | Private Key | Signature / CT | PQ-Secure |
|---|---|---|---|---|---|
| RSA-2048 | Signature / KEM | 256 B | ~1,200 B | 256 B | No |
| ECDSA P-256 | Signature | 64 B | 32 B | 64 B | No |
| X25519 | Key exchange | 32 B | 32 B | 32 B | No |
| ML-KEM-768 | Key encapsulation | 1,184 B | 2,400 B | 1,088 B | Yes |
| ML-DSA-65 | Signature | 1,952 B | 4,032 B | 3,309 B | Yes |
| SLH-DSA-128f | Signature | 32 B | 64 B | 17,088 B | Yes |
Key sizes increase by roughly 5–50x compared to ECC. ML-KEM-768 public keys are 1,184 bytes versus 32 bytes for X25519. ML-DSA-65 signatures are 3,309 bytes versus 64 bytes for Ed25519. For most network protocols, this is a minor bandwidth increase. For constrained devices (IoT, embedded), it requires careful engineering.
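The wire impact is easy to estimate from the table. Assuming a hybrid TLS 1.3 exchange in which the client sends its X25519 share plus an ML-KEM-768 public key and the server answers with its X25519 share plus an ML-KEM-768 ciphertext, the arithmetic is:

```python
# Wire-size arithmetic from the table above.
X25519_SHARE = 32
MLKEM768_PK, MLKEM768_CT = 1184, 1088

classical_only = 2 * X25519_SHARE  # client + server key shares
hybrid = (X25519_SHARE + MLKEM768_PK) + (X25519_SHARE + MLKEM768_CT)
print(hybrid - classical_only)  # -> 2272 extra bytes, about 2.2 KB per handshake
```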
CNSA 2.0: The Migration Deadlines
The NSA's Commercial National Security Algorithm Suite 2.0, published in September 2022, establishes mandatory migration deadlines for all National Security Systems (NSS). These deadlines cascade into the broader federal supply chain and increasingly influence private-sector procurement requirements.
| Category | Support & Prefer By | Exclusive Use By |
|---|---|---|
| Software & firmware signing | 2025 | 2030 |
| Web browsers, servers, cloud | 2025 | 2033 |
| Traditional networking (VPNs, routers) | 2026 | 2030 |
| Operating systems | 2027 | 2033 |
| Constrained devices, large PKI | 2030 | 2033 |
NIST IR 8547 (November 2024) reinforces these deadlines at the federal level: all classical public-key cryptography (RSA, ECDSA, ECDH) will be deprecated after 2030 and disallowed in federal systems after 2035.
If you sell to the U.S. government, CNSA 2.0 compliance is not optional. If you handle healthcare data (HIPAA), financial data (SOX/PCI), or operate critical infrastructure, regulatory pressure to adopt PQC is already building. Starting January 1, 2027, all new National Security System equipment acquisitions must be CNSA 2.0-compliant by default. The procurement pipeline means you need PQC support today to be certified by then.
The Hybrid Approach: Classical + PQ for Defense in Depth
During the transition period, the recommended practice is hybrid cryptography—combining a classical algorithm (X25519/ECDH for key exchange, ECDSA/Ed25519 for signatures) with a PQC algorithm (ML-KEM for key exchange, ML-DSA for signatures) such that the system remains secure if either algorithm is broken.
Why Hybrid?
- PQC algorithms are newer. While they have undergone rigorous cryptanalysis, they lack the decades of real-world deployment that RSA and ECC have. A surprise classical attack (like the one that broke SIKE) remains possible, however unlikely.
- Classical algorithms are quantum-vulnerable. Deploying only classical crypto leaves you exposed to Harvest Now, Decrypt Later (HNDL) attacks and future CRQCs.
- Hybrid gives you both guarantees simultaneously. If the PQC algorithm falls to classical cryptanalysis, the classical component still protects you. If a CRQC arrives, the PQC component still protects you.
```
// Hybrid key exchange: X25519 + ML-KEM-768
// Session key is secure if EITHER algorithm holds
let x25519_shared = x25519_dh(my_ecdh_sk, peer_ecdh_pk);
let (mlkem_shared, ct) = ml_kem_768_encapsulate(peer_mlkem_pk);

// Combine both shared secrets into one session key
let session_key = hkdf_sha384(
    x25519_shared || mlkem_shared,
    "hybrid-tls-session-key"
);

// Even if ML-KEM is broken classically -> X25519 protects
// Even if X25519 is broken by Shor's -> ML-KEM protects
```
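The key-combining step above (HKDF over SHA-384) can be written as runnable Python using only the standard library. This is a minimal RFC 5869 sketch; the placeholder byte strings stand in for the two real shared secrets, which would come from actual X25519 and ML-KEM-768 operations:

```python
import hashlib
import hmac

def hkdf_sha384(ikm: bytes, info: bytes, length: int = 48) -> bytes:
    """Minimal HKDF (RFC 5869) over SHA-384: extract, then expand."""
    prk = hmac.new(b"\x00" * 48, ikm, hashlib.sha384).digest()   # extract (zero salt)
    okm, block = b"", b""
    for i in range(-(-length // 48)):                            # ceil(length / 48) blocks
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha384).digest()
        okm += block
    return okm[:length]

# Placeholder byte strings stand in for the two real shared secrets.
x25519_shared = bytes(32)   # from the classical ECDH exchange
mlkem_shared = bytes(32)    # from ML-KEM-768 decapsulation
session_key = hkdf_sha384(x25519_shared + mlkem_shared, b"hybrid-tls-session-key", 32)
assert len(session_key) == 32
```

Because both secrets are concatenated into the HKDF input, the derived session key remains unpredictable as long as either input secret does.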
Chrome, Firefox, and Cloudflare have already deployed hybrid ML-KEM + X25519 key exchange in TLS 1.3. If you are browsing the web on a modern browser, you may already be using post-quantum key exchange without knowing it.
H33's Post-Quantum Stack: Production Numbers
H33's authentication infrastructure is fully post-quantum by construction. Every component in the single-API-call pipeline uses quantum-resistant algorithms.
H33 Production Pipeline (Single API Call)
The key architectural decisions:
- Dilithium (ML-DSA) for attestation: Every authentication batch is signed with a Dilithium3 key. The ~240 microsecond sign+verify cost is amortized across 32 users per batch. This provides non-repudiable, quantum-resistant proof of authentication.
- Kyber (ML-KEM) for key exchange: All API communications use ML-KEM hybrid key exchange. Session keys are quantum-safe from the moment of establishment.
- BFV lattice-based FHE for biometrics: The biometric matching itself runs in the FHE domain (lattice-based encryption). Plaintext biometric templates never exist on the server. This protects against both real-time compromise and HNDL harvesting.
- Sustained throughput: ~1.2 million authentications per second on a single Graviton4 instance (c8g.metal-48xl, 192 vCPUs).
```
// Post-quantum attestation -- Dilithium sign + verify per batch
let batch_digest = sha3_256(&batch_results);

// Sign with ML-DSA-65 (Dilithium3)
let signature = dilithium_sign(&signing_key, &batch_digest);           // ~120 us sign time

// Verify (any party can verify with the public key)
let valid = dilithium_verify(&verify_key, &batch_digest, &signature);  // ~120 us verify time

// Total: ~240 us for 32-user batch attestation
assert!(valid, "Attestation signature invalid");
```
Migration Roadmap for Organizations
PQC migration is not a weekend project. It requires systematic inventory, prioritization, testing, and phased deployment. Here is a concrete roadmap based on best practices from NIST, CNSA 2.0, and real-world enterprise migrations.
Phase 1: Inventory and Assessment (Months 1–3)
- Cryptographic inventory: Identify every system that uses public-key cryptography. TLS endpoints, VPN gateways, certificate authorities, code-signing keys, SSH keys, API authentication, database encryption, key management systems.
- Data classification: Apply Mosca's inequality (`X + Y > Z`, where `X` is how long the data must stay confidential, `Y` is how long migration will take, and `Z` is the time until a CRQC arrives) to each data category. Prioritize data with long confidentiality requirements.
- Dependency mapping: Identify which cryptographic libraries, HSMs, and cloud KMS services your systems depend on. Check their PQC readiness.
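The Mosca check from the data-classification step is a one-line predicate worth encoding per data category (the 15-year CRQC horizon below is a hypothetical input, not a prediction):

```python
def mosca_at_risk(x_shelf_life: float, y_migration: float, z_crqc_horizon: float) -> bool:
    """Mosca's inequality: data is already at risk when X + Y > Z.

    X = years the data must remain confidential
    Y = years your migration will take
    Z = years until a cryptographically relevant quantum computer
    """
    return x_shelf_life + y_migration > z_crqc_horizon

# Patient records with a 25-year retention requirement and a 5-year migration,
# against a hypothetical 15-year CRQC horizon: already exposed to HNDL today.
print(mosca_at_risk(25, 5, 15))   # -> True
print(mosca_at_risk(1, 2, 15))    # -> False
```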
Phase 2: Hybrid Deployment for Data in Transit (Months 3–9)
- TLS key exchange: Enable ML-KEM + X25519 hybrid key exchange on all public-facing endpoints. Most modern TLS libraries (OpenSSL 3.x, BoringSSL, rustls) already support this.
- VPN migration: Upgrade VPN concentrators to support ML-KEM key exchange. WireGuard and IPSec implementations with PQC support are available.
- API authentication: Begin issuing ML-DSA signatures alongside existing ECDSA tokens. Accept both during the transition.
Phase 3: Signature and Certificate Migration (Months 9–18)
- Internal PKI: Issue new root and intermediate CA certificates with ML-DSA keys. Cross-sign with existing RSA/ECDSA CAs for backward compatibility.
- Code signing: Transition software signing to ML-DSA. Dual-sign packages for compatibility with older verification toolchains.
- SSH keys: Replace Ed25519 SSH keys with ML-DSA keys where supported.
Phase 4: Data at Rest and Full Cutover (Months 18–36)
- Re-encrypt sensitive data: For long-term storage protected by RSA/ECDH envelope encryption, re-encrypt with ML-KEM-based key wrapping.
- Deprecate classical-only endpoints: Reject connections that do not offer PQC key exchange.
- Continuous monitoring: Establish ongoing cryptographic agility—the ability to swap algorithms if new vulnerabilities emerge.
The specific algorithms will evolve. FIPS 206 (FN-DSA / FALCON) and HQC are still in the pipeline. Future breakthroughs may require algorithm changes. The most valuable outcome of PQC migration is not deploying any single algorithm—it is building cryptographic agility into your infrastructure so that algorithm swaps can be performed without re-architecting your systems.
Common Misconceptions
PQC discussions are plagued by misunderstandings that lead to either complacency or panic. Here are the most common ones, corrected.
"Quantum computers break everything"
False. Quantum computers break public-key crypto based on factoring and discrete log (RSA, ECC, DH). Symmetric ciphers (AES-256) and hash functions (SHA-3) are safe with minor key-size increases. PQC replaces only the public-key component.
"We have plenty of time"
Dangerous assumption. Even if CRQCs are 10–15 years away, Mosca's inequality shows that data with long shelf lives is already at risk from HNDL attacks. Migration itself takes 3–7 years for large organizations. The math says act now.
"Just increase RSA key sizes"
Does not work. Shor's algorithm runs in polynomial time, so increasing the key size raises the quantum attack cost only polynomially—doubling the modulus makes a rough O(n³) Shor circuit about 8x more expensive, while classical users pay the slowdown too. To resist Shor's algorithm, you would need RSA keys of astronomical size—far beyond practical use. Fundamentally different math is required.
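The scaling gap is easy to quantify. This sketch uses the standard heuristic GNFS running-time formula and a rough O(n³) gate-count model for Shor's algorithm (a simplifying assumption that ignores constants and error-correction overhead; it is meant only to show how the two costs scale with key size):

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """log2 of the heuristic GNFS running time L_N[1/3, (64/9)^(1/3)]."""
    ln_n = modulus_bits * math.log(2)
    return ((64 / 9) ** (1 / 3)
            * ln_n ** (1 / 3)
            * math.log(ln_n) ** (2 / 3)) / math.log(2)

def shor_log2_gates(modulus_bits: int) -> float:
    """log2 of a rough O(n^3) gate-count model for Shor's algorithm."""
    return 3 * math.log2(modulus_bits)

for bits in (2048, 4096, 8192):
    print(f"RSA-{bits}: classical ~2^{gnfs_bits(bits):.0f}, "
          f"quantum ~2^{shor_log2_gates(bits):.0f} gates")
```

Doubling the modulus adds roughly 40 bits of classical hardness but only about 3 bits of quantum hardness, which is why no feasible RSA key size outruns Shor's algorithm.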
"PQC is too slow for production"
Outdated. ML-KEM key exchange is faster than both RSA and ECDH. A full ML-DSA sign-plus-verify cycle takes ~240 microseconds—faster than a single RSA-2048 signing operation. H33 runs 1.2 million PQ-authenticated calls per second on a single server. Performance is not a blocker.
"Nobody is attacking yet"
Wrong. Harvest Now, Decrypt Later attacks are already underway. Intelligence agencies from multiple nations are collecting encrypted traffic today for future quantum decryption. The attack that matters is passive and invisible—you will not know until the data is exposed decades from now.
"We'll wait for quantum computers to exist"
By then it is too late. Data harvested before your migration is permanently compromised. The entire point of PQC is to protect data before CRQCs arrive. Retroactive protection is impossible for data already in transit.
The Bottom Line
Post-quantum cryptography is not a research curiosity. It is a finalized set of federal standards (FIPS 203, 204, 205) with hard migration deadlines (CNSA 2.0, NIST IR 8547). The algorithms are fast—ML-KEM outperforms ECDH, and ML-DSA outperforms RSA. The threat is real—Shor's algorithm will break every RSA and ECC key ever generated, and adversaries are already harvesting encrypted data for future decryption.
The four families of PQC (lattice, hash, code, multivariate) provide defense in depth against both quantum and classical attacks. The hybrid approach ensures security even if one algorithm falls. And the migration itself—while substantial—is well-defined, with production-grade implementations available today in every major TLS library, browser, and cloud provider.
The question is no longer whether to migrate to post-quantum cryptography. The question is whether your organization will complete the migration before the data you are protecting today becomes worthless.
H33 provides post-quantum authentication infrastructure with BFV lattice-based FHE biometric processing, ML-DSA digital signatures (~240 microseconds), and ML-KEM key exchange—all in a single API call at ~50 microseconds per authentication. Every component in the stack is quantum-resistant by construction, not by policy. 1.2 million auths/sec sustained on Graviton4.