How to Make Solana Faster After Post-Quantum — Not 90% Slower
Project Eleven's test proved the problem is real. A 90% throughput loss is not acceptable. But post-quantum doesn't have to mean slower. Here is a concrete architecture that would make Solana faster with post-quantum cryptography than it is today with Ed25519 — and the math to prove it.
Where the 90% Actually Comes From
To fix the problem you have to understand exactly where the throughput disappears. It is not one bottleneck. It is five, and they compound.
Vote transactions. Solana produces a slot every 400 milliseconds. Every validator submits a vote for every slot. With roughly 1,500 active validators, that is 3,750 vote signatures per second just for consensus. Each vote transaction currently carries a 64-byte Ed25519 signature. Replace that with a 3,309-byte Dilithium signature and vote signatures alone consume roughly 12.4 megabytes per second of bandwidth, up from about 0.24 megabytes per second today — before a single user transaction is processed.
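The vote-bandwidth arithmetic is easy to check. A short sketch using the article's round numbers (1,500 validators, 400 ms slots, one vote per validator per slot):

```python
# Back-of-envelope vote-bandwidth math. Validator count and slot time
# are the article's round numbers, not live network measurements.
VALIDATORS = 1_500
SLOT_SECONDS = 0.4
ED25519_SIG = 64        # bytes per Ed25519 signature
DILITHIUM_SIG = 3_309   # bytes per ML-DSA-65 (Dilithium) signature

votes_per_second = VALIDATORS / SLOT_SECONDS      # one vote per validator per slot
ed25519_bw = votes_per_second * ED25519_SIG       # signature bytes/s today
dilithium_bw = votes_per_second * DILITHIUM_SIG   # signature bytes/s post-quantum

print(f"{votes_per_second:.0f} votes/s")
print(f"Ed25519 vote signatures:   {ed25519_bw / 1e6:.2f} MB/s")
print(f"Dilithium vote signatures: {dilithium_bw / 1e6:.2f} MB/s")
```

This is signature bytes only; vote transactions carry additional metadata on top of the signature, so the real wire cost is somewhat higher in both cases.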
Turbine block propagation. Solana splits blocks into shreds and propagates them through a fanout tree called Turbine. Larger transactions mean fewer transactions per shred. Fewer transactions per shred means more shreds per block. More shreds means more fanout rounds. More fanout rounds means higher propagation latency. The roughly 52x signature size increase (64 bytes to 3,309) cascades through Turbine as a multiplicative latency penalty at every tree level.
Banking stage verification. The leader's banking stage verifies every incoming transaction's signature before including it in a block. Ed25519 verification takes roughly 60 microseconds on Solana's hardware. Dilithium verification takes roughly 57 microseconds — actually comparable. But the banking stage also deserializes the transaction, and a transaction that is 40x larger takes proportionally longer to deserialize, hash, and index. The verification itself is not the bottleneck. The data pipeline around it is.
State size growth. Every signature is stored permanently in the ledger. At current Solana throughput of 3,000-4,000 transactions per second, Ed25519 adds roughly 250 kilobytes per second of signature data to the ledger. Dilithium at the same throughput adds 10 megabytes per second. Over a year, that is an additional 315 terabytes of signature data. Validator storage costs become untenable.
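At the lower bound of that throughput range, the ledger-growth numbers work out as follows (one signature per transaction is a simplifying assumption; many Solana transactions carry more than one):

```python
# Signature-data growth in the ledger at the article's assumed throughput.
TPS = 3_000               # lower bound of the article's 3,000-4,000 range
ED25519_SIG = 64          # bytes
DILITHIUM_SIG = 3_309     # bytes (ML-DSA-65)
SECONDS_PER_YEAR = 365 * 24 * 3_600

ed_rate = TPS * ED25519_SIG       # ~192 KB/s of signatures today
pq_rate = TPS * DILITHIUM_SIG     # ~9.9 MB/s with Dilithium
per_year_tb = pq_rate * SECONDS_PER_YEAR / 1e12

print(f"Ed25519:   {ed_rate / 1e3:.0f} KB/s of signature data")
print(f"Dilithium: {pq_rate / 1e6:.1f} MB/s, ~{per_year_tb:.0f} TB per year")
```

At 4,000 TPS the annual figure is proportionally larger, which is where the article's ~315 TB lands.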
GPU verification pipeline. Solana uses GPU-accelerated Ed25519 signature verification via the sigverify stage. The GPU implementation is specifically optimized for Ed25519's curve arithmetic. There is no GPU-accelerated Dilithium verifier in production. Falling back to CPU verification for post-quantum signatures while keeping GPU for Ed25519 means post-quantum transactions are verified orders of magnitude slower than classical ones.
Solution 1: Cache-Once-Verify-Many
This is the biggest win and the one nobody in the Solana ecosystem is discussing.
In Solana's architecture, every validator independently verifies every transaction in every block. A block with 1,000 transactions gets its 1,000 signatures verified by 1,500 validators. That is 1.5 million signature verifications for the same 1,000 signatures. Every one of those 1.5 million verifications produces the same boolean result: valid or invalid.
This is pure waste. Verify once. Cache the result. Distribute the cached proof.
The leader verifies the signatures, produces a STARK proof that the verification was performed correctly, and distributes the proof alongside the block. Every other validator checks the STARK proof (sub-microsecond) instead of re-running 1,000 Dilithium verifications. The verification cost shifts from O(validators × transactions) to O(transactions) — a fundamentally different scaling curve.
This is exactly what H33's pipeline does. The first verification generates and caches a STARK proof. Every subsequent check is a DashMap lookup at 0.059 microseconds. The 402,014x speedup we measured in our v11 benchmark is this exact technique applied to biometric verification. It works identically for transaction signature verification.
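The article does not publish H33's pipeline code, but the cache-once-verify-many pattern can be sketched schematically in a few lines. Here a plain Python dict stands in for the concurrent DashMap, and a hash comparison stands in for a real Dilithium verification — the names and the "valid signature" rule are illustrative, not the production scheme:

```python
import hashlib

_cache: dict[bytes, bool] = {}   # stand-in for the concurrent DashMap
calls = {"slow": 0}              # counts how often the expensive path runs

def slow_verify(pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    """Stand-in for an expensive Dilithium verification. Illustrative
    rule only: a 'valid' signature is SHA3-256(pubkey || msg)."""
    calls["slow"] += 1
    return sig == hashlib.sha3_256(pubkey + msg).digest()

def cached_verify(pubkey: bytes, msg: bytes, sig: bytes) -> bool:
    # Key the cache on the full (pubkey, msg, sig) triple so a forged
    # signature can never alias a previously cached valid one.
    key = hashlib.sha3_256(pubkey + msg + sig).digest()
    if key not in _cache:
        _cache[key] = slow_verify(pubkey, msg, sig)
    return _cache[key]

pk, m = b"validator-key", b"vote: slot 12345"
sig = hashlib.sha3_256(pk + m).digest()

# 1,500 validators re-checking the same signature: one slow verification total.
results = [cached_verify(pk, m, sig) for _ in range(1_500)]
print(all(results), calls["slow"])  # expect: True 1
```

In the real design the cached artifact is a STARK proof that other validators can check without trusting the leader; the dict lookup here models only the cost profile, not the trust model.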
Solution 2: Three-Layer Signature Architecture
Project Eleven tested one algorithm for everything. That is the wrong approach. Different parts of the protocol have different requirements:
| Layer | Requirement | Algorithm | Size | Why |
|---|---|---|---|---|
| Consensus (votes) | Small, fast, high volume | Falcon-512 | 690 bytes | 4.8x smaller than Dilithium, fast verify, acceptable for ephemeral vote messages |
| Transactions (user) | NIST standard, auditable | Dilithium ML-DSA-65 | 3,309 bytes | NIST FIPS 204 standard, required for compliance, batch-verifiable |
| Settlement (finality) | Maximum security, infrequent | SPHINCS+ (SLH-DSA) | 7,856 bytes | Hash-based (zero lattice risk), used only for epoch boundaries and major state transitions |
Consensus votes happen 3,750 times per second and are ephemeral — they do not need to be stored forever or survive a lattice-basis breakthrough in 2040. Falcon-512 at 690 bytes is 4.8x smaller than Dilithium and is NIST-selected for standardization as FN-DSA. The vote signature overhead goes from roughly 52x Ed25519's size to 10.8x. Still larger than Ed25519, but manageable with Turbine optimization.
User transactions use Dilithium because it is the NIST standard that auditors and regulators will accept. But they are batch-verified — the leader collects a block of transactions, verifies all signatures, produces a single STARK proof covering the entire block, and distributes the proof. Individual validators never run individual Dilithium verifications.
Settlement signatures use SPHINCS+ at epoch boundaries. SPHINCS+ is hash-based — it relies on the security of SHA3, not lattice problems. If a breakthrough in lattice cryptography compromises Dilithium and Falcon simultaneously, SPHINCS+ remains secure because it uses a completely different mathematical foundation. Epoch boundaries happen every ~2 days on Solana. The 7,856-byte signature cost is negligible at that frequency.
Three algorithms from three mathematical families. If any one is broken, the other two still protect the network. This is the same three-key principle H33 uses in production: Dilithium + Falcon + SPHINCS+ nested signatures from three independent hardness assumptions.
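The layer-to-algorithm mapping is simple enough to express as a lookup table. Sizes are the article's figures; the `Layer` names are illustrative, not Solana protocol identifiers:

```python
from enum import Enum

class Layer(Enum):
    CONSENSUS = "consensus"      # votes: high volume, ephemeral
    TRANSACTION = "transaction"  # user txs: NIST standard, batch-verified
    SETTLEMENT = "settlement"    # epoch boundaries: hash-based fallback

# (scheme name, signature size in bytes) per layer, per the table above.
SCHEMES = {
    Layer.CONSENSUS:   ("Falcon-512", 690),
    Layer.TRANSACTION: ("ML-DSA-65 (Dilithium)", 3_309),
    Layer.SETTLEMENT:  ("SLH-DSA (SPHINCS+)", 7_856),
}

ED25519_SIG = 64
for layer, (name, size) in SCHEMES.items():
    ratio = size / ED25519_SIG
    print(f"{layer.value:11s} {name:22s} {size:5d} B  ({ratio:.1f}x Ed25519)")
```

The point of the table form: each scheme's cost is paid only at the frequency of its layer, so the largest signature (SPHINCS+) sits where volume is lowest.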
Solution 3: Shared NTT Infrastructure
This is the optimization that most people miss because it requires understanding both Solana's internals and Dilithium's internals.
Solana already uses polynomial arithmetic. The Turbine block propagation system uses Reed-Solomon erasure coding to generate repair shreds. Reed-Solomon encoding involves polynomial evaluation and interpolation — the same class of operations that NTT (Number Theoretic Transform) accelerates.
Dilithium's internal operations — key generation, signing, and verification — also rely heavily on NTT for polynomial multiplication in the ring Z_q[X]/(X^256 + 1).
A shared NTT engine that serves both Reed-Solomon encoding and Dilithium operations means the CPU's L1 cache is already warm with NTT twiddle factors when post-quantum operations begin. The twiddle factors are different (different moduli), but the cache line access patterns are similar enough that the CPU's prefetcher handles the transition efficiently.
In our production system, the same Montgomery NTT engine serves both BFV FHE polynomial multiplication and Dilithium signing. The result: Dilithium sign+verify takes 189 microseconds per batch of 32, including the NTT cost. The NTT infrastructure is not additional overhead — it is shared infrastructure that makes every polynomial-based operation faster.
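The production engine described above is not public, but the underlying algebra — negacyclic polynomial multiplication via the NTT, using Dilithium's modulus q = 8380417 and its standard root of unity ζ = 1753 — can be demonstrated with a naive O(N²) reference at a reduced ring size:

```python
import random

Q = 8380417   # Dilithium's prime modulus
N = 16        # reduced ring degree for the demo (Dilithium uses N = 256)
# Dilithium's zeta = 1753 is a primitive 512th root of unity mod Q;
# raising it to 512/(2N) yields a primitive 2N-th root PSI with PSI^N = -1.
PSI = pow(1753, 512 // (2 * N), Q)

def ntt(a):
    """Naive negacyclic forward transform: evaluate a(X) at odd powers of PSI,
    i.e. at the roots of X^N + 1 mod Q."""
    return [sum(a[j] * pow(PSI, (2 * i + 1) * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(A):
    """Inverse transform (interpolation); written O(N^2) for clarity."""
    n_inv = pow(N, Q - 2, Q)
    return [n_inv * sum(A[i] * pow(PSI, -(2 * i + 1) * j, Q) for i in range(N)) % Q
            for j in range(N)]

def negacyclic_mul(a, b):
    """Schoolbook multiplication in Z_q[X]/(X^N + 1), as a correctness oracle."""
    c = [0] * (2 * N)
    for i in range(N):
        for j in range(N):
            c[i + j] += a[i] * b[j]
    return [(c[k] - c[k + N]) % Q for k in range(N)]

random.seed(7)
a = [random.randrange(Q) for _ in range(N)]
b = [random.randrange(Q) for _ in range(N)]
via_ntt = intt([x * y % Q for x, y in zip(ntt(a), ntt(b))])
assert via_ntt == negacyclic_mul(a, b)
print("NTT pointwise product matches schoolbook negacyclic multiplication")
```

A real implementation uses an iterative butterfly NTT in O(N log N) with precomputed twiddle factors — and that precomputed twiddle-factor table is exactly the data that could sit warm in cache between Reed-Solomon and Dilithium work.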
The Economic Argument
Technical feasibility is not enough. Validators have to be willing to upgrade. Solana validators earn revenue from transaction fees and inflationary rewards. A 90% throughput reduction means 90% fewer transactions means 90% less fee revenue. No rational validator will adopt a post-quantum upgrade that destroys their economics.
The cache-once-verify-many architecture changes this equation. Under the current architecture, every validator spends CPU cycles verifying every signature in every block. This is the single largest computational cost for validators. If cached verification eliminates 99.9% of that cost, validators actually spend less compute per block after the post-quantum upgrade than before it. Their operating costs decrease. Their margins improve.
Post-quantum becomes a cost reduction, not a cost increase. That is how you get voluntary adoption.
The Harvest-Now-Decrypt-Later Clock
Project Eleven offered a $1 million bounty to break a single ECC key with a quantum computer. Nobody claimed it. Some took this as evidence that quantum computers are not a threat.
That misses the point entirely. The question is not whether quantum computers can break Ed25519 today. The question is whether data signed with Ed25519 today will be breakable in 10 years.
Every Solana transaction signed with Ed25519 is recorded permanently in the ledger. State-sponsored adversaries are already harvesting encrypted communications and signed transactions for future decryption. When a sufficiently powerful quantum computer comes online — whether that is 2030 or 2035 — every historical Ed25519 signature becomes forgeable. An attacker could construct fraudulent historical transactions that appear to have been validly signed.
For financial transactions, this is an existential risk. For DeFi protocols holding billions in TVL, a retroactive signature forgery could unwind entire chains of ownership. The time to migrate is not when quantum computers arrive. The time to migrate is before the data you are signing today becomes vulnerable.
The $1 million bounty asks the wrong question. The right question is: what is the cost of NOT migrating, compounded over every transaction signed with a soon-to-be-broken algorithm?
The Path Forward
Solana's 90% slowdown is a real result from a real test on a real network. It proves that the naive approach — replace Ed25519 with Dilithium everywhere — does not work. But it does not prove that post-quantum is impossible for Solana. It proves that post-quantum requires architectural thinking, not drop-in replacement.
Cache-once-verify-many eliminates 99.9% of redundant verification cost. Three-layer signatures match the right algorithm to the right use case. Shared NTT infrastructure amortizes the polynomial arithmetic cost. Batch verification via STARK proofs reduces per-validator overhead from O(transactions) to O(1).
The result is a Solana that is post-quantum secure, economically viable for validators, and potentially faster than the classical network it replaces. Not 90% slower. Faster.
The technology exists. We built it. It runs 2,209,429 operations per second with Dilithium, STARK proofs, and fully homomorphic encryption in the same pipeline. The question is not whether it can be done. The question is who builds it first.
See it run
Dilithium + STARK + FHE at 2.2M ops/sec. The post-quantum pipeline that doesn't slow down.
See the Benchmarks →