In 1994, mathematician Peter Shor, then at AT&T Bell Labs, published a quantum algorithm that would eventually threaten the foundation of internet security. Shor's algorithm can efficiently solve two mathematical problems that underpin virtually all public-key cryptography deployed today: integer factorization and the discrete logarithm problem. For classical computers, both problems are believed to require exponential time. For a sufficiently large quantum computer running Shor's algorithm, they require only polynomial time.
This is not a marginal speedup. It is the difference between a computation that would take longer than the age of the universe and one that finishes in hours. Every RSA key, every Diffie-Hellman exchange, every ECDSA signature, and every Ed25519 authentication token on the internet is built on the assumption that these problems are hard. Shor's algorithm says they are not—if you have the right machine.
This article explains exactly how Shor's algorithm works, step by step; the real qubit requirements to break production cryptographic keys; the error correction overhead that dominates the engineering challenge; the latest optimizations that have dramatically reduced resource estimates; when experts believe quantum computers will be large enough; what Shor's algorithm does NOT break; and how post-quantum cryptography renders it irrelevant.
Shor's algorithm provides an exponential speedup over the best known classical algorithms for factoring integers and computing discrete logarithms. This breaks RSA, Diffie-Hellman, ECDSA, EdDSA, ElGamal, and every other cryptosystem whose security relies on the hardness of these problems. The threat is not speculative—the mathematics is proven. Only the engineering timeline remains uncertain.
What Shor's Algorithm Does
Shor's algorithm solves two specific mathematical problems in polynomial time on a quantum computer:
- Integer factorization: Given a composite integer N = p × q (the product of two large primes), find p and q. This breaks RSA at all key sizes.
- Discrete logarithm: Given g, h, and a prime p where h = g^x mod p, find x. This breaks Diffie-Hellman key exchange and ElGamal encryption. A variant of the algorithm solves the elliptic curve discrete logarithm problem (ECDLP), which breaks ECDSA, EdDSA, ECDH, and all curve-based cryptography.
The factoring and discrete-log variants share the same core quantum subroutine: quantum period-finding. The insight that makes Shor's algorithm revolutionary is that finding the period of a modular exponentiation function—a problem that appears random and structureless to a classical computer—can be solved efficiently by exploiting quantum superposition and interference.
Complexity Comparison
To appreciate the scale of the speedup, compare the asymptotic complexities for factoring an n-bit integer:
| Algorithm | Type | Complexity | RSA-2048 Estimate |
|---|---|---|---|
| Trial division | Classical | O(2^(n/2)) | Infeasible |
| General number field sieve | Classical | O(exp(1.9 n^(1/3) (ln n)^(2/3))) | ~10^30 years |
| Shor's algorithm | Quantum | O(n^3) | Hours |
The general number field sieve (GNFS) is the fastest known classical factoring algorithm. For RSA-2048 (a 2048-bit modulus), GNFS would require on the order of 10^30 operations—far beyond the capability of any classical computer, past, present, or future. Shor's algorithm reduces this to roughly 2048^3 ≈ 8.6 × 10^9 quantum operations, which a sufficiently large and reliable quantum computer could complete in hours.
How Shor's Algorithm Works: Step by Step
Shor's algorithm consists of a classical reduction (converting factoring into period-finding) and a quantum subroutine (finding the period using quantum Fourier transform). Here is the complete procedure for factoring a composite integer N.
Classical Reduction: Factoring to Period-Finding
The classical part of Shor's algorithm relies on a well-known number theory result: if you can find the period of a specific modular function, you can extract factors of N using only classical arithmetic.
The Classical Steps
Check for trivial factors
Test whether N is even, a perfect power, or divisible by small primes. If so, factor directly. This eliminates degenerate cases before invoking the quantum computer.
Choose a random base
Pick a random integer a where 1 < a < N. Compute gcd(a, N). If gcd(a, N) > 1, you've already found a factor—output it and stop. Otherwise, a and N are coprime, and we proceed to the quantum step.
Find the period r (quantum step)
Use the quantum subroutine to find the period r of the function f(x) = a^x mod N. That is, find the smallest positive integer r such that a^r ≡ 1 (mod N). This is the hard step classically—and the step where the quantum computer provides an exponential speedup.
Check that r is useful
If r is odd, or if a^(r/2) ≡ −1 (mod N), the period is not useful. Go back to step 2 and choose a different random base a. The probability of failure at this step is at most 1/2, so a few iterations almost certainly produce a useful r.
Extract factors via GCD
If r is even and a^(r/2) ≢ −1 (mod N), then compute gcd(a^(r/2) − 1, N) and gcd(a^(r/2) + 1, N). At least one of these is a non-trivial factor of N. RSA is broken.
The mathematical reason this works: if a^r ≡ 1 (mod N), then a^r − 1 ≡ 0 (mod N), which factors as (a^(r/2) − 1)(a^(r/2) + 1) ≡ 0 (mod N). If neither factor is itself divisible by N, then each shares a non-trivial common factor with N. The GCD computation finds those factors in O(n^2) time on a classical computer.
The entire classical wrapper is efficient. The hard part—and the only part that requires a quantum computer—is step 3: finding the period r.
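The classical wrapper can be run end to end on small numbers. The sketch below (illustrative helper names; the period is brute-forced where a real attack would invoke the quantum subroutine) factors N = 15 with base a = 7:

```rust
fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

// Modular exponentiation by repeated squaring: b^e mod m.
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1u64;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

fn main() {
    let (n, a) = (15u64, 7u64);
    assert_eq!(gcd(a, n), 1); // step 2: a and N are coprime, no free factor

    // Step 3 stand-in: brute-force the period of f(x) = a^x mod N.
    // (This is the only step that needs a quantum computer at scale.)
    let r = (1u64..).find(|&x| pow_mod(a, x, n) == 1).unwrap();
    assert_eq!(r, 4); // 7, 4, 13, 1, 7, ... repeats every 4

    // Step 4: r is even and a^(r/2) = 4 is not ≡ −1 (mod 15).
    let h = pow_mod(a, r / 2, n);
    assert!(r % 2 == 0 && h != n - 1);

    // Step 5: extract factors via GCD.
    let (p, q) = (gcd(h - 1, n), gcd(h + 1, n));
    println!("{} = {} x {}", n, p, q); // prints "15 = 3 x 5"
}
```

With 2048-bit moduli the wrapper is identical; only the period-finding step changes machinery.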
The Quantum Subroutine: Period-Finding via QFT
The quantum period-finding subroutine is the heart of Shor's algorithm. It uses two quantum registers, a sequence of modular exponentiations, and the Quantum Fourier Transform (QFT) to extract the period r.
Quantum Period-Finding Steps
Initialize two quantum registers
Create a "control" register of approximately 2n qubits (where n is the bit-length of N) initialized to |0⟩, and a "target" register of n qubits also initialized to |0⟩. The control register must be large enough to resolve the period with high probability.
Create uniform superposition
Apply Hadamard gates to every qubit in the control register, putting it into an equal superposition of all values from 0 to 2^(2n) − 1. The state is now a uniform superposition over all possible exponents x.
Compute modular exponentiation
Apply the unitary transformation that maps |x⟩|0⟩ → |x⟩|a^x mod N⟩. This is computed via repeated squaring as a reversible quantum circuit. After this step, the two registers are entangled: each value of x in the control register is paired with the corresponding a^x mod N in the target register.
Apply the Quantum Fourier Transform
Apply QFT to the control register. The QFT converts the periodic structure in the amplitudes into sharp peaks at multiples of 2^(2n)/r in the frequency domain. This is the key step: QFT extracts the hidden periodicity that is invisible to classical measurement.
Measure the control register
Measurement collapses the control register to a value close to some multiple of 2^(2n)/r. Use the continued fractions algorithm (a classical computation) to extract r from this measurement. Repeat a few times if the first measurement doesn't yield r directly.
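The classical post-processing in the final step can be made concrete. This sketch (toy parameters; function names are illustrative) recovers r = 4 for N = 15, a = 7 from a simulated measurement m = 192 against a 2^(2n) = 256-point control register, by testing the denominators of the continued-fraction convergents of m/2^(2n):

```rust
// Modular exponentiation by repeated squaring: b^e mod m.
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1u64;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

/// Denominators q_k of the continued-fraction convergents of num/den,
/// via the recurrence q_k = a_k * q_(k-1) + q_(k-2).
fn convergent_denominators(mut num: u64, mut den: u64) -> Vec<u64> {
    let (mut q_prev2, mut q_prev1) = (1u64, 0u64);
    let mut out = Vec::new();
    while den != 0 {
        let a = num / den;
        let q_k = a * q_prev1 + q_prev2;
        out.push(q_k);
        q_prev2 = q_prev1;
        q_prev1 = q_k;
        let rem = num % den;
        num = den;
        den = rem;
    }
    out
}

fn main() {
    let (n, a) = (15u64, 7u64);
    let big_q = 256u64; // 2^(2n) for n = 4 bits
    let m = 192u64;     // simulated measurement near k * 2^(2n) / r, with k = 3

    // The period is a convergent denominator d of m / 2^(2n) with a^d ≡ 1 (mod N).
    let r = convergent_denominators(m, big_q)
        .into_iter()
        .find(|&d| d > 1 && pow_mod(a, d, n) == 1)
        .unwrap();
    assert_eq!(r, 4);
    println!("recovered period r = {}", r);
}
```

Here 192/256 = 3/4, and the convergent denominators are 1, 1, 4; the first denominator that satisfies a^d ≡ 1 (mod N) is the period.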
Why the Quantum Fourier Transform Works
The QFT is the quantum analogue of the classical discrete Fourier transform. It maps a quantum state with hidden periodicity in the time/exponent domain into a state with sharp peaks in the frequency domain.
After the modular exponentiation step, the amplitudes of the control register have a periodic structure with period r (because f(x) = a^x mod N repeats every r values of x). However, this periodicity is "hidden"—if you simply measured the control register at this point, you'd get a random value with no useful structure. The QFT transforms the state so that the amplitudes are concentrated at multiples of 2^(2n)/r. When you measure, you are overwhelmingly likely to get a value near one of these multiples, from which r can be extracted classically.
The QFT itself requires only O(n^2) quantum gates—a polynomial number—making it efficient. The overall gate count for Shor's algorithm is dominated by the modular exponentiation circuit, which requires O(n^3) gates in the straightforward implementation.
The quantum speedup in Shor's algorithm comes from a single capability: interference. The QFT causes amplitudes corresponding to the correct period to constructively interfere (adding up) while incorrect values destructively interfere (canceling out). A classical computer cannot simulate this interference efficiently because it would need to track 2^(2n) complex amplitudes simultaneously—an exponential cost.
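This interference can be checked with a small classical simulation (at exponential cost, which is exactly the point). The sketch below (illustrative names and toy parameters) builds the comb of exponents x with 7^x ≡ 1 (mod 15), applies a 256-point discrete Fourier transform, and confirms that essentially all probability lands on multiples of 2^(2n)/r = 64:

```rust
use std::f64::consts::PI;

/// Probability mass that a Q-point DFT places on multiples of Q/r,
/// starting from the uniform comb of exponents x with a^x ≡ 1 (mod n).
fn peak_mass(a: u64, n: u64, q: usize) -> f64 {
    // Build the comb: equal amplitude on every x with a^x mod n == 1.
    let mut amp = vec![0.0f64; q];
    let mut cur = 1u64;
    let mut count = 0usize;
    for x in 0..q {
        if cur == 1 {
            amp[x] = 1.0;
            count += 1;
        }
        cur = cur * a % n;
    }
    let norm = (count as f64).sqrt();
    for v in amp.iter_mut() {
        *v /= norm;
    }

    // Multiplicative order r of a mod n (the hidden period).
    let mut r = 1usize;
    let mut c = a % n;
    while c != 1 {
        c = c * a % n;
        r += 1;
    }

    // Q-point DFT; accumulate probability at frequencies k = multiples of Q/r.
    let mut mass = 0.0f64;
    for k in 0..q {
        let (mut re, mut im) = (0.0f64, 0.0f64);
        for x in 0..q {
            let theta = -2.0 * PI * (k as f64) * (x as f64) / (q as f64);
            re += amp[x] * theta.cos();
            im += amp[x] * theta.sin();
        }
        if k % (q / r) == 0 {
            mass += (re * re + im * im) / q as f64;
        }
    }
    mass
}

fn main() {
    // N = 15, a = 7, r = 4, Q = 2^(2n) = 256 for n = 4 bits:
    // the peaks sit at k ∈ {0, 64, 128, 192} and carry all the probability.
    let mass = peak_mass(7, 15, 256);
    println!("probability at multiples of Q/r: {:.6}", mass);
    assert!(mass > 0.999);
}
```

The simulation costs O(Q^2) classical operations; a quantum computer gets the same frequency peaks from O(n^2) QFT gates.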
The Discrete Logarithm Variant
For the discrete logarithm problem (breaking Diffie-Hellman and ECC), Shor's algorithm uses a similar approach but with two period-finding registers. Given g and h = g^x mod p, the algorithm finds the period of the two-variable function f(a, b) = g^a · h^b mod p. The period structure reveals x (the discrete logarithm) via the QFT applied to both registers simultaneously.
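On toy numbers, the two-variable period structure can be checked directly. In this sketch (illustrative names; the relation is found by classical scanning where the quantum algorithm would use the QFT), p = 23, g = 5 has order 22, and the secret exponent x = 6 is recovered from a pair (a, b) with f(a, b) = 1:

```rust
// Modular exponentiation by repeated squaring: b^e mod m.
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1u64;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

fn main() {
    let (p, g, x) = (23u64, 5u64, 6u64);
    let h = pow_mod(g, x, p); // public value h = g^x mod p
    assert_eq!(h, 8);

    // f(a, b) = g^a * h^b = g^(a + x*b) mod p, so f(a, b) = 1 exactly when
    // a + x*b ≡ 0 (mod ord(g)). Here ord(g) = 22 (5 is a primitive root
    // mod 23). Scan b = 1 for the matching a; the QFT finds this relation
    // without scanning.
    let ord = 22u64;
    let a_hit = (0..ord)
        .find(|&a| pow_mod(g, a, p) * h % p == 1)
        .unwrap();

    // With b = 1: a_hit + x ≡ 0 (mod 22), so x = (22 - a_hit) mod 22.
    let recovered = (ord - a_hit) % ord;
    assert_eq!(recovered, x);
    println!("recovered discrete log x = {}", recovered);
}
```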
The elliptic curve variant works on the group operation of points on an elliptic curve rather than modular arithmetic, but the underlying quantum period-finding structure is identical. This is why all curve-based cryptography—ECDSA, EdDSA, ECDH, X25519—is equally vulnerable.
Qubit Requirements: Theory vs. Reality
Understanding the qubit requirements to run Shor's algorithm against production cryptographic keys is essential for assessing the timeline. There is a crucial distinction between logical qubits (ideal, error-free qubits that the algorithm operates on) and physical qubits (real, noisy hardware qubits that must be combined using error correction to simulate logical qubits).
Logical Qubit Requirements
The theoretical minimum for Shor's algorithm on an n-bit integer is 2n + 3 logical qubits (Beauregard's construction): n qubits hold the target register for the modular arithmetic, n + 2 serve as ancillary workspace, and a single recycled control qubit provides the 2n bits of QFT precision through sequential measurement and reuse. For RSA-2048 (n = 2048), this gives a theoretical minimum of 4,099 logical qubits.
In practice, different circuit implementations trade qubits for gates (and vice versa):
| Target | Logical Qubits | Toffoli Gates | Source |
|---|---|---|---|
| RSA-2048 (textbook) | 4,099 | ~2.6 × 10^12 | Beauregard 2003 |
| RSA-2048 (Gidney-Ekerå 2021) | 2n + 1 = 4,097 | ~2.7 × 10^10 | Gidney-Ekerå 2021 |
| RSA-3072 | ~6,145 | ~9.2 × 10^10 | Scaling from above |
| ECDSA P-256 | 2,330–2,619 | ~1.3 × 10^11 | Roetteler et al. 2017 |
| ECDSA P-384 | ~3,901 | ~4.4 × 10^11 | Roetteler et al. 2017 |
Physical Qubit Requirements: The Error Correction Wall
Logical qubits do not exist in isolation. Real quantum hardware is noisy: every gate operation, every measurement, and even idle waiting introduces errors. To run a computation as long as Shor's algorithm on RSA-2048 (which requires billions of sequential gate operations), you need quantum error correction—and error correction demands a massive overhead in physical qubits.
The dominant approach is the surface code, which encodes one logical qubit into a 2D grid of physical qubits. The number of physical qubits per logical qubit depends on the code distance d, which in turn depends on the physical error rate and the depth of the circuit:
A single logical qubit in the surface code requires approximately 2d^2 physical qubits, where d is the code distance. For RSA-2048 with current error rates (~10^−3 per gate), d ≈ 27 is typical, giving ~1,458 physical qubits per logical qubit. Multiply by 4,099 logical qubits and you get ~6 million physical qubits just for the data qubits—before accounting for magic state distillation factories, which can double or triple the total.
The additional overhead comes from magic state distillation. Surface codes can perform Clifford gates (X, Z, H, CNOT) fault-tolerantly and efficiently, but the T gate (required for universal quantum computation) cannot be performed directly. Instead, you must prepare special "magic states" through a resource-intensive distillation process that consumes additional physical qubits. Since Shor's algorithm requires billions of T gates, the distillation factories dominate the physical qubit budget in many designs.
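The arithmetic behind these estimates is easy to reproduce. A back-of-envelope sketch (data qubits only, ignoring the distillation overhead just described) using the 2n + 3 logical-qubit count and the ~2d^2 surface-code patch size:

```rust
/// Physical data qubits for Shor on an n-bit modulus: 2n + 3 logical
/// qubits, each encoded in a surface-code patch of ~2d^2 physical qubits.
fn physical_data_qubits(n: u64, d: u64) -> u64 {
    let logical = 2 * n + 3;       // logical qubits for the algorithm
    let per_logical = 2 * d * d;   // surface-code patch at code distance d
    logical * per_logical
}

fn main() {
    // RSA-2048 at code distance d = 27 (typical for ~1e-3 error rates).
    assert_eq!(2 * 2048 + 3, 4099); // logical qubits
    assert_eq!(2 * 27 * 27, 1458);  // physical qubits per logical qubit
    let total = physical_data_qubits(2048, 27);
    println!("~{:.1} million physical data qubits", total as f64 / 1e6);
    assert_eq!(total, 5_976_342);   // the "~6 million" data-qubit figure
}
```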
Gidney-Ekerå Optimizations: From 20M to Under 1M Qubits
The story of Shor's algorithm resource estimates is one of steady optimization. Early estimates put the physical qubit count at hundreds of millions. Two landmark papers dramatically reduced this.
Gidney-Ekerå (2021): Windowed Arithmetic
Craig Gidney and Martin Ekerå's 2021 paper "How to factor 2048-bit RSA integers in 8 hours using 20 million noisy qubits" introduced several key optimizations:
- Windowed arithmetic: Instead of performing modular exponentiation one bit at a time, process multiple bits simultaneously using precomputed lookup tables. This reduces the Toffoli count by approximately 5x.
- 2n + 1 logical qubits: By using a semi-classical QFT (measuring and reusing qubits sequentially instead of maintaining the full 2n-qubit control register), the algorithm needs only 2n + 1 logical qubits for an n-bit number.
- Optimized distillation: More efficient magic state distillation protocols reduced the factory overhead.
Their headline result: RSA-2048 can be factored in 8 hours using 20 million noisy qubits at a physical error rate of 10^−3. This was the benchmark that the community used for several years.
Gidney (May 2025): Under 1 Million Qubits
Craig Gidney's May 2025 paper further revolutionized the estimates with three innovations:
- Approximate residue arithmetic: Instead of exact modular arithmetic, use approximate representations that tolerate small errors, dramatically reducing the circuit depth.
- Yoked surface codes: A new technique for compactly storing idle logical qubits by "yoking" multiple logical qubits into shared surface-code patches, reducing the physical qubit overhead for qubits that are not actively being operated on.
- Magic state cultivation: Replaces the expensive magic state distillation factories with a more efficient "cultivation" approach that grows high-quality magic states using fewer physical qubits.
The result: a 20x reduction in physical qubits, from ~20 million to under 1 million physical qubits for RSA-2048. The Toffoli count was reduced by over 100x.
| Paper | Physical Qubits | Runtime | Error Rate |
|---|---|---|---|
| Gidney-Ekerå 2021 | ~20,000,000 | 8 hours | 10^−3 |
| Gidney 2025 | <1,000,000 | ~hours | 10^−3 |
| Improvement | 20x fewer | Similar | Same |
This is significant because it narrows the gap between current hardware and a cryptographically relevant quantum computer from four orders of magnitude to three. Current machines have ~1,000 physical qubits. Sub-million is no longer an astronomical target.
Timeline Estimates: When Will QCs Be Large Enough?
Estimating when a cryptographically relevant quantum computer (CRQC) will exist is inherently uncertain, but the expert consensus is converging—and the timeline is accelerating.
Global Risk Institute Survey (December 2024)
Dr. Michele Mosca surveyed 32 global quantum computing experts. Their probability estimates for a CRQC capable of breaking RSA-2048:
| Timeframe | Probability | Year |
|---|---|---|
| Within 5 years | 5–14% | ~2029 |
| Within 10 years | 19–34% | ~2034 |
| Within 15 years | ~50% | ~2039 |
| Within 20 years | ~79% | ~2044 |
Nearly one-third of experts (10 of 32) assigned a 50%+ probability of CRQC within 10 years. Germany's BSI estimates 10–16 years. And these estimates predate the Gidney 2025 paper, which reduced the hardware target by 20x.
The Hardware Trajectory
The trajectory is clear. Qubit counts are roughly doubling every 1–2 years. Error rates are improving. And crucially, the algorithmic resource requirements are falling faster than the hardware is scaling up—the Gidney 2025 paper alone closed 20x of the gap from the software side.
All public timeline estimates are based on publicly known hardware. Nation-state quantum computing programs (NSA, Chinese military programs, others) are classified. It is possible that non-public systems are significantly ahead of commercial hardware. This uncertainty is itself a reason to migrate to post-quantum cryptography now rather than waiting for a public demonstration.
What Shor's Algorithm Does NOT Break
Shor's algorithm is devastating to public-key cryptography, but it is not a universal threat to all cryptography. Understanding what remains secure is as important as understanding what breaks.
Symmetric Ciphers: Safe
Shor's algorithm provides no speedup against symmetric encryption. AES-128, AES-256, ChaCha20, and all other symmetric ciphers are completely unaffected. The best known quantum attack against symmetric ciphers is Grover's algorithm, which provides only a quadratic speedup (not exponential like Shor's). This effectively halves the key length: AES-256 provides ~128-bit security against a quantum adversary, which is still entirely secure.
Hash Functions: Safe
SHA-2 (SHA-256, SHA-384, SHA-512), SHA-3, BLAKE2, BLAKE3, and all standard hash functions are quantum-resistant. Grover's algorithm can speed up preimage search quadratically (so SHA-256 provides ~128-bit quantum security), but this is not practically threatening. The collision-finding speedup from quantum algorithms (BHT algorithm) is even more modest: a cube-root improvement at best.
Lattice-Based Cryptography: Safe
The hardness problems underlying lattice-based cryptography—Learning With Errors (LWE), Ring-LWE, Module-LWE, and the Shortest Vector Problem (SVP)—have no known polynomial-time quantum algorithm. This is why NIST selected lattice-based algorithms as the primary post-quantum standards:
- ML-KEM / Kyber (FIPS 203): Lattice-based key encapsulation. Quantum-safe.
- ML-DSA / Dilithium (FIPS 204): Lattice-based digital signatures. Quantum-safe.
- FN-DSA / FALCON (FIPS 206, draft): NTRU lattice-based signatures. Quantum-safe.
Hash-Based Signatures: Safe
SLH-DSA / SPHINCS+ (FIPS 205) derives its security entirely from the properties of hash functions. Since hash functions are quantum-resistant, hash-based signatures are quantum-resistant. They have the advantage of relying on the most conservative security assumptions of any PQC scheme.
Code-Based Cryptography: Safe
HQC (selected by NIST as a backup KEM) is based on the hardness of decoding random linear codes, another problem with no known efficient quantum solution. Code-based cryptography dates back to McEliece (1978) and has withstood decades of cryptanalysis.
| Category | Examples | Shor's Impact | Grover's Impact | Status |
|---|---|---|---|---|
| Integer factorization | RSA | Broken (exponential speedup) | N/A | Migrate NOW |
| Discrete log (finite field) | DH, ElGamal, DSA | Broken (exponential speedup) | N/A | Migrate NOW |
| Discrete log (elliptic curve) | ECDSA, EdDSA, ECDH, X25519 | Broken (exponential speedup) | N/A | Migrate NOW |
| Symmetric ciphers | AES-256, ChaCha20 | No effect | Key halving (256→128) | Secure |
| Hash functions | SHA-256, SHA-3, BLAKE2 | No effect | Output halving (256→128) | Secure |
| Lattice-based | ML-KEM, ML-DSA, FN-DSA | No effect | Negligible | Secure (PQC) |
| Hash-based sigs | SLH-DSA (SPHINCS+) | No effect | Negligible | Secure (PQC) |
| Code-based | HQC, Classic McEliece | No effect | Negligible | Secure (PQC) |
Grover's Algorithm: The Other Quantum Threat
Shor's algorithm is often discussed alongside Grover's algorithm (1996), but the two are fundamentally different in both mechanism and impact. Understanding the distinction is critical for making sound cryptographic decisions.
Grover's: Quadratic Speedup
Grover's algorithm provides a quadratic speedup for unstructured search. If a classical computer needs N operations to search a space of N items, Grover's algorithm needs only √N quantum operations. Applied to cryptography:
- Brute-force key search on AES-256: Classical cost = 2^256 operations. Grover's cost = 2^128 operations. AES-256 retains 128-bit security—still completely impractical to break.
- Preimage search on SHA-256: Classical cost = 2^256. Grover's cost = 2^128. SHA-256 retains 128-bit security.
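The quadratic scaling is easy to see in a toy simulation. The sketch below (illustrative; a 16-item search space standing in for a keyspace) runs Grover's sign-flip oracle and inversion-about-the-mean diffusion for ⌊(π/4)√N⌋ = 3 iterations and checks that the marked item dominates:

```rust
/// Success probability of Grover search: real amplitudes over n items,
/// `iters` rounds of oracle (sign flip) + diffusion (invert about mean).
fn grover_success_prob(n: usize, marked: usize, iters: usize) -> f64 {
    let mut amp = vec![1.0f64 / (n as f64).sqrt(); n];
    for _ in 0..iters {
        amp[marked] = -amp[marked]; // oracle: flip the marked amplitude
        let mean = amp.iter().sum::<f64>() / n as f64;
        for a in amp.iter_mut() {
            *a = 2.0 * mean - *a;   // diffusion: inversion about the mean
        }
    }
    amp[marked] * amp[marked]
}

fn main() {
    let n = 16usize;
    let iters = ((std::f64::consts::PI / 4.0) * (n as f64).sqrt()) as usize; // 3
    let p = grover_success_prob(n, 11, iters);
    println!("P(marked) after {} iterations: {:.3}", iters, p);
    assert!(p > 0.9); // ~0.96: ~sqrt(N) iterations, where classical search needs ~N/2
}
```

Scaling the same loop to a 2^256 keyspace still needs ~2^128 iterations, which is why Grover is managed by key sizes rather than algorithm replacement.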
Shor's vs. Grover's: The Key Difference
| Property | Shor's Algorithm | Grover's Algorithm |
|---|---|---|
| Speedup type | Exponential | Quadratic |
| Target problems | Factoring, discrete log (structured) | Unstructured search (any NP problem) |
| What it breaks | RSA, DH, ECC completely | Halves symmetric key / hash strength |
| Mitigation | Replace algorithms entirely (PQC) | Double key sizes (AES-128 → AES-256) |
| Practical impact | Catastrophic for public-key crypto | Manageable (just use AES-256) |
Grover's algorithm is a real quantum threat but a manageable one. The standard mitigation is simple: use AES-256 instead of AES-128, and use SHA-384 or SHA-512 if you want 192+ bits of quantum security. No algorithmic migration is required.
Shor's algorithm is an existential threat because doubling key sizes does not help. RSA-4096 is just as vulnerable as RSA-2048—Shor's algorithm factors both in polynomial time. The only mitigation is to replace the entire cryptographic foundation with algorithms based on problems Shor's cannot solve.
Impact: What Breaks in Practice
When a CRQC running Shor's algorithm arrives, the cascading impact across internet infrastructure is severe:
TLS / HTTPS
Every HTTPS connection uses public-key cryptography for the handshake (typically ECDHE key exchange + ECDSA or RSA certificate verification). Shor's algorithm breaks both the key exchange (exposing session keys) and the certificate chain (enabling impersonation). Every TLS session ever recorded with RSA or ECDH key exchange becomes decryptable retroactively. This is the foundation of Harvest Now, Decrypt Later attacks.
Digital Certificates and PKI
The entire Web PKI (the certificate authority system that authenticates websites) relies on RSA or ECDSA signatures. A CRQC could forge certificates for any domain, enabling undetectable man-in-the-middle attacks against any website.
VPNs and Secure Communications
IPsec and WireGuard VPNs use Diffie-Hellman or ECDH for key exchange. SSH uses RSA or EdDSA for authentication. Signal Protocol uses X25519 for key agreement. All are broken by Shor's algorithm.
Cryptocurrency
Bitcoin uses ECDSA (secp256k1) for transaction signing. Ethereum uses ECDSA (secp256k1) for accounts. A CRQC could forge transactions, steal funds from any address whose public key has been revealed (which happens whenever a transaction is sent from that address). Approximately 25% of all Bitcoin is in addresses with exposed public keys.
Code Signing and Software Supply Chain
Operating system updates, package managers, firmware updates, and application stores all rely on RSA or ECDSA signatures to verify code authenticity. A CRQC could sign malicious code that appears legitimate to every verification system.
The most dangerous aspect of Shor's algorithm is not the future attacks it enables but the retroactive decryption it allows. Every encrypted communication that was ever intercepted and stored—and nation-states have been collecting encrypted traffic at scale for years—becomes readable. This data includes diplomatic communications, trade secrets, medical records, and biometric templates. Once decrypted, these secrets cannot be "re-encrypted" or taken back.
Post-Quantum Cryptography: The Defense
Post-quantum cryptographic algorithms are built on mathematical problems that Shor's algorithm cannot solve. NIST finalized the first three standards in August 2024, providing drop-in replacements for every vulnerable primitive:
Replacing RSA/ECDH Key Exchange
ML-KEM / Kyber (FIPS 203) provides post-quantum key encapsulation based on the Module-LWE problem. It replaces RSA key transport and ECDH key agreement in TLS, VPNs, and any protocol that establishes shared secrets. Hybrid deployments (ML-KEM + X25519) are already live in Chrome, Firefox, and Cloudflare.
Replacing ECDSA/RSA Signatures
ML-DSA / Dilithium (FIPS 204) provides post-quantum digital signatures based on Module-LWE. It replaces RSA, ECDSA, and EdDSA for authentication tokens, certificate signing, and code signing. H33's production stack uses Dilithium for all attestation operations at ~240 microseconds per sign+verify.
FHE: Lattice-Based by Construction
Fully Homomorphic Encryption based on the BFV scheme uses Ring-LWE as its hardness assumption—the same lattice foundation as ML-KEM and ML-DSA. This means FHE-encrypted data is quantum-resistant by construction. An adversary who intercepts BFV ciphertexts today cannot decrypt them with Shor's algorithm or any known quantum algorithm.
This is particularly important for biometric data, which cannot be rotated. FHE allows biometric matching to occur entirely on encrypted data: the plaintext template never exists on any server, in transit, or in storage. Even if a quantum adversary decrypts the transport layer, the biometric templates remain protected by lattice-based encryption.
```rust
// H33 post-quantum authentication — immune to Shor's algorithm
// Every component uses lattice-based or hash-based cryptography

// 1. FHE biometric verification (BFV / Ring-LWE)
let encrypted_probe = bfv_encrypt(&probe, &pk);              // Lattice-based
let score = fhe_inner_product(&encrypted_probe, &enrolled);  // Never decrypted

// 2. ZKP verification (STARK / SHA3-256)
let proof = stark_prove(&score, &witness);  // Hash-based
stark_verify(&proof, &public_inputs)?;      // No ECC involved

// 3. Attestation (Dilithium / ML-DSA / FIPS 204)
let attestation = dilithium_sign(&digest, &sk);  // Lattice-based
dilithium_verify(&attestation, &pk)?;            // ~240µs total

// Total per-auth latency: ~50µs (batched, 32 users per ciphertext)
// Shor's algorithm: irrelevant. No RSA, no ECC, no DH anywhere.
```
H33's Response: Quantum-Safe by Construction
H33's authentication stack was designed from the ground up to be immune to Shor's algorithm. There is no RSA, no elliptic curve cryptography, and no Diffie-Hellman anywhere in the pipeline. Every cryptographic operation uses either lattice-based or hash-based primitives:
| Component | Algorithm | Hardness Basis | Shor's Impact |
|---|---|---|---|
| FHE Biometrics | BFV (N=4096, 56-bit Q) | Ring-LWE | Immune |
| ZKP Verification | STARK Lookup | SHA3-256 (hash) | Immune |
| Digital Signatures | Dilithium (ML-DSA) | Module-LWE | Immune |
| Key Exchange | Kyber (ML-KEM) | Module-LWE | Immune |
| OCR Encryption | Kyber + AES-256-GCM + Dilithium | Lattice + Symmetric | Immune |
The entire pipeline—from biometric enrollment through encrypted matching, zero-knowledge proof generation, and attestation signing—completes in ~50 microseconds per authentication at a sustained rate of 1.2 million authentications per second on production hardware. Post-quantum security does not mean slow. It means different math.
NIST IR 8547 (November 2024) establishes the federal timeline: all classical public-key cryptography (RSA, ECDSA) will be deprecated after 2030 and disallowed after 2035. NSA's CNSA 2.0 sets even earlier deadlines for national security systems. The question is not whether to migrate to post-quantum cryptography, but whether you migrate on your own timeline or on an adversary's.
The Bottom Line
Shor's algorithm is a proven mathematical result. It factors integers and computes discrete logarithms in polynomial time on a quantum computer. When a sufficiently large quantum computer is built—and expert consensus places this within 10–20 years, with resource estimates dropping rapidly—every RSA key, every Diffie-Hellman exchange, and every elliptic curve signature on the internet will be broken. Retroactively.
What Shor's algorithm does NOT break is equally important: symmetric ciphers (AES-256), hash functions (SHA-3), and lattice-based cryptography (ML-KEM, ML-DSA, BFV FHE) are all immune. Post-quantum standards exist, are finalized, and are deployable today.
The mathematics has been settled since 1994. The engineering timeline is the only remaining question. Given that adversaries are already harvesting encrypted data for future quantum decryption, the rational response is not to wait for certainty but to deploy quantum-resistant cryptography now—while the data you're protecting still has value.
H33 provides post-quantum authentication infrastructure with FHE biometric processing (BFV lattice-based), ML-DSA digital signatures, and ML-KEM key exchange—all in a single API call at sub-millisecond latency. Every component in the stack is immune to Shor's algorithm by construction, not by policy.