Six Vendors, Six Integrations, Six Attack Surfaces
The post-quantum transition is coming. NIST finalized the standards. The deadline pressure is real. And the default response from every enterprise security team is the same: find a vendor for encryption, find a vendor for signatures, find a vendor for key management, find a vendor for biometrics, find a vendor for fraud detection, find a vendor for zero-knowledge proofs. Assemble the stack. Ship it. Move on.
This approach will make you less safe than doing nothing.
The Assembled Stack: How It Actually Works
Let's walk through what a typical enterprise post-quantum migration looks like when you assemble it from parts.
You need post-quantum encryption. You evaluate Zama, Microsoft SEAL, OpenFHE, or IBM HElib. Each is a library, not a product. Each requires your team to understand lattice-based cryptography, BFV or CKKS parameter selection, noise budget management, and polynomial ring arithmetic. You hire two cryptographic engineers. Timeline: three months to prototype, six months to production. Cost: $400K-$600K in salary alone.
You need post-quantum signatures. You evaluate liboqs from the Open Quantum Safe project. It provides Dilithium, Falcon, and SPHINCS+ as standalone C libraries. Your team integrates them into your authentication service, your API gateway, your document signing pipeline. Each integration point is a separate code path. Each needs its own key management. Timeline: two months. Cost: $150K in engineering time.
You need post-quantum key management. No vendor provides this as a turnkey service. AWS KMS supports classical RSA and ECC. Azure Key Vault supports classical RSA and ECC. Google Cloud KMS supports classical RSA and ECC. For post-quantum key management, you build it yourself. You implement key generation, rotation, distribution, revocation, and threshold splitting for Dilithium and Kyber keys. Timeline: four months. Cost: $300K.
You need encrypted biometrics. No vendor provides FHE-encrypted biometric matching. Period. The research papers exist. The production implementations do not. Your team either builds BFV biometric matching from scratch or stores biometric templates in plaintext and hopes for the best. Most choose the latter. Timeline if you build it: twelve months minimum. Cost: $800K-$1.2M.
You need fraud detection that works on encrypted data. No vendor provides this. Your fraud detection system currently operates on plaintext transaction data. Post-quantum fraud detection on encrypted data requires homomorphic computation on encrypted feature vectors. This is an active research area. Timeline: unknown. Cost: unknown.
You need zero-knowledge proofs for compliance attestation. You evaluate StarkWare, Polygon, or Risc0. Each is built for blockchain verification, not enterprise compliance. None use post-quantum hash functions by default. None integrate with your FHE pipeline. None produce Dilithium-signed attestations. Timeline to adapt: six months. Cost: $250K.
The Integration Tax
The cost above is just the build. The ongoing integration tax is where the real damage happens.
Version Mismatch
Your FHE library releases a new version that changes the ciphertext format. Your biometric matching service breaks because it expects the old format. Your key management system generated keys for the old parameter set. Your fraud detection pipeline trained on features extracted from the old ciphertext structure. One library upgrade cascades into four service outages.
This is not hypothetical. OpenSSL version mismatches have arguably caused more production downtime than any attacker. Post-quantum libraries are younger, less stable, and iterate faster. The version mismatch surface area is enormous.
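One mitigation in an assembled stack is to tag every ciphertext with the producing library's version and a fingerprint of its parameter set, so consumers fail fast instead of mis-parsing. The sketch below is illustrative only; the envelope format, field names, and `fhe-lib` version string are assumptions, not any vendor's actual wire format.

```python
import hashlib
import json

# Hypothetical envelope: each ciphertext carries the producing library's
# version and a fingerprint of the FHE parameters it was encrypted under,
# so a downstream service can reject incompatible blobs immediately.
def params_fingerprint(params: dict) -> str:
    canonical = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

def wrap(ciphertext: bytes, lib_version: str, params: dict) -> dict:
    return {
        "lib_version": lib_version,
        "params_fp": params_fingerprint(params),
        "payload": ciphertext.hex(),
    }

def unwrap(envelope: dict, expected_version: str, local_params: dict) -> bytes:
    # Fail fast on either a library-version or a parameter-set mismatch.
    if envelope["lib_version"] != expected_version:
        raise ValueError(f"ciphertext produced by {envelope['lib_version']}, "
                         f"this service runs {expected_version}")
    if envelope["params_fp"] != params_fingerprint(local_params):
        raise ValueError("FHE parameter fingerprint mismatch")
    return bytes.fromhex(envelope["payload"])

params_v1 = {"scheme": "BFV", "N": 4096, "t": 65537}
env = wrap(b"\x01\x02", "fhe-lib 2.3.0", params_v1)
unwrap(env, "fhe-lib 2.3.0", params_v1)      # accepted
# unwrap(env, "fhe-lib 2.4.0", params_v1)    # raises ValueError
```

This turns a silent four-service cascade into one loud error at the first boundary, but note that every vendor pair must agree on the envelope format, which is itself another integration contract to maintain.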
Parameter Inconsistency
BFV encryption requires choosing a polynomial degree N, a ciphertext modulus Q, and a plaintext modulus t. These parameters must be consistent across every system that touches the ciphertext. Your encryption service uses N=4096. Your biometric matching service was configured with N=8192 because the engineer who set it up read a different paper. The ciphertexts are incompatible. Nothing works. Nobody knows why for three weeks.
In an assembled stack, parameter consistency is a human process. In an integrated stack, it is a compile-time guarantee.
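The closest a loosely coupled stack gets to that guarantee is a single shared parameter record that every service imports, with validity checks run at definition time. A minimal sketch, assuming hypothetical module and service names; the concrete values match the BFV constraints described above (power-of-two N, plaintext modulus t ≡ 1 mod 2N for batching):

```python
from dataclasses import dataclass

# Hypothetical shared parameter module: every service imports this one frozen
# record instead of configuring N, Q, and t independently, so divergent
# configurations cannot exist by construction.
@dataclass(frozen=True)
class BFVParams:
    N: int          # polynomial degree (ring dimension)
    log2_Q: int     # approximate bits in the ciphertext modulus
    t: int          # plaintext modulus

    def __post_init__(self):
        # Sanity checks run once, at definition time, for every consumer.
        assert self.N & (self.N - 1) == 0, "N must be a power of two"
        assert self.t % (2 * self.N) == 1, "t must be 1 mod 2N for batching"

# The single source of truth: 65537 = 1 mod 8192, so batching works at N=4096.
SHARED_PARAMS = BFVParams(N=4096, log2_Q=109, t=65537)

# encryption_service.py, biometric_service.py, fraud_service.py all do:
#   from shared_params import SHARED_PARAMS
# An engineer who "read a different paper" cannot silently pick N=8192.
```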
Key Distribution Across Boundaries
Your FHE encryption keys need to be available to the encryption service, the biometric matching service, the fraud detection service, and the compliance attestation service. Each runs in a different container, possibly in a different region, maintained by a different team. The keys must be distributed securely. The keys must be rotated simultaneously. The keys must be revoked atomically.
In an assembled stack, you build a key distribution service. That service becomes the highest-value target in your infrastructure. If an attacker compromises it, they have every key for every service. You have centralized the one thing that should never be centralized.
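Threshold splitting is the standard answer to that centralization risk: no single service ever holds a whole key. Below is a minimal Shamir secret sharing sketch (3-of-5, the configuration the comparison table below mentions). It shows only the shape of the idea; a production system would need verifiable sharing, constant-time arithmetic, and hardware-backed share storage.

```python
import secrets

# Minimal 3-of-5 Shamir secret sharing over a prime field. Any 3 shares
# reconstruct the key; 2 or fewer reveal nothing about it.
PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte key

def split(secret: int, k: int = 3, n: int = 5):
    # Random degree-(k-1) polynomial with the secret as the constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

key = secrets.randbelow(PRIME)
shares = split(key)                 # distribute one share per trust domain
assert combine(shares[:3]) == key   # any 3 of 5 reconstruct
assert combine(shares[2:]) == key   # a different 3 also work
```

Even with threshold splitting, the assembled stack still has to agree on where reconstruction happens, and that reconstruction point inherits the same highest-value-target problem.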
Audit Trail Fragmentation
Your compliance team needs to prove that a specific piece of data was encrypted, processed, and never exposed in plaintext. The encryption event was logged by Vendor A's system. The processing event was logged by Vendor B's system. The access control event was logged by Vendor C's system. Three different log formats. Three different timestamp sources. Three different integrity guarantees. No unified cryptographic proof that the entire chain was executed correctly.
An auditor asks: "Prove that this SSN was never decrypted during processing." You cannot. You have three separate logs that each tell part of the story, but no single proof that covers the complete data lifecycle.
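What "a single proof that covers the complete data lifecycle" minimally requires is that every event commit to the one before it, so a single verification pass checks the whole chain. The sketch below shows that structure with a plain hash chain; this provides tamper evidence only, not the proof-of-correct-execution a STARK-based attestation would add, and the event names are illustrative.

```python
import hashlib
import json
import time

# Unified, hash-chained audit trail: each entry commits to the previous one,
# so one verification pass proves the recorded lifecycle is intact and
# unmodified from genesis to head.
class AuditChain:
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # genesis value

    def log(self, event: str, record_id: str):
        entry = {
            "prev": self.head,
            "ts": time.time(),
            "event": event,
            "record_id": record_id,
        }
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return prev == self.head

chain = AuditChain()
chain.log("encrypted", "record-42")
chain.log("processed_under_fhe", "record-42")
chain.log("attestation_signed", "record-42")
assert chain.verify()

chain.entries[1]["event"] = "decrypted"  # any tampering breaks the chain
assert not chain.verify()
```

With three vendors each writing their own log, there is no shared `prev` pointer to chain against, which is exactly why the fragmented trail cannot answer the auditor's question.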
The Attack Surface Problem
Every integration boundary is an attack surface. This is not a metaphor. It is a precise technical statement.
When System A sends encrypted data to System B, the data must cross a network boundary. That crossing requires serialization, transport, and deserialization. Each step is an opportunity for:
- Plaintext leakage in transit — if the transport encryption is classical (TLS 1.3 with ECDHE), it is vulnerable to harvest-now-decrypt-later attacks. A quantum-capable adversary records the ciphertext today and decrypts it in five years.
- Deserialization vulnerabilities — the receiving system must parse the ciphertext. Malformed input can trigger buffer overflows, type confusion, or denial of service. Every integration boundary is a parsing boundary.
- Key exposure at handoff — if System A and System B use different key management, the key must be shared or translated at the boundary. This is where keys get logged, cached, or leaked to monitoring systems.
- Authentication bypass between services — internal service-to-service authentication is almost always weaker than external authentication. If an attacker compromises one service, they can often call other internal services without additional authentication.
In a six-vendor stack, you have at minimum five integration boundaries. Five places where data crosses trust domains. Five places where serialization vulnerabilities exist. Five places where key material might be exposed. Five places where an attacker who compromises one vendor's component can pivot to the next.
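Each of those five boundaries needs a defensive parser that validates size, shape, and types before touching the payload. A sketch of one such guard follows, reusing a hypothetical JSON envelope with illustrative field names; the point is that an assembled stack needs five of these, kept mutually consistent, while a single-process stack needs none.

```python
import json

MAX_ENVELOPE_BYTES = 1 << 20  # refuse oversized payloads before parsing

REQUIRED = {"lib_version": str, "params_fp": str, "payload": str}

# Defensive parser for one integration boundary: check size, field set,
# and field types before interpreting anything the peer sent.
def parse_envelope(raw: bytes) -> dict:
    if len(raw) > MAX_ENVELOPE_BYTES:
        raise ValueError("envelope exceeds size limit")
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"malformed envelope: {e}") from None
    if not isinstance(obj, dict) or set(obj) != set(REQUIRED):
        raise ValueError("unexpected envelope fields")
    for field, typ in REQUIRED.items():
        if not isinstance(obj[field], typ):
            raise ValueError(f"field {field!r} has wrong type")
    try:
        bytes.fromhex(obj["payload"])  # ciphertext must be valid hex
    except ValueError:
        raise ValueError("payload is not valid hex") from None
    return obj

good = b'{"lib_version": "2.3.0", "params_fp": "ab12", "payload": "0102"}'
parse_envelope(good)                  # accepted
# parse_envelope(b'{"payload": 1}')   # rejected: unexpected fields
```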
The Supply Chain Risk
Six vendors means six supply chains. Six build pipelines. Six sets of dependencies. Six npm/cargo/pip ecosystems. Six opportunities for a supply chain attack.
SolarWinds was one vendor. The blast radius affected 18,000 organizations. In a six-vendor post-quantum stack, a supply chain compromise in any single vendor potentially exposes your entire cryptographic infrastructure. The encryption vendor gets compromised? Your ciphertexts are now suspect. The signature vendor gets compromised? Your attestations are now suspect. The key management vendor gets compromised? Everything is suspect.
The probability of at least one supply chain compromise across six vendors over a three-year period is not small. With six dependency trees rooted in the open-source cryptography ecosystem, where dependencies run deep and maintainers are few, it approaches certainty.
The Upgrade Coordination Problem
NIST published ML-KEM and ML-DSA as final standards in 2024. They are already considering parameter updates. When the next revision arrives, every vendor in your stack needs to update. They will not update simultaneously. They will not test against each other. They will not coordinate their release cycles.
You will be left with a window — weeks to months — where some of your components support the new parameters and some do not. During that window, your stack is partially upgraded, partially legacy, and fully vulnerable to any attack that exploits the inconsistency. This is not a one-time event. It will happen with every standards revision for the next decade.
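The coordination problem can be made concrete: each component advertises the parameter-suite revisions it supports, and the stack can only operate on the intersection. The component and suite names below are illustrative, not real vendor identifiers; the snapshot models the mid-upgrade window described above.

```python
# Mid-upgrade snapshot: three vendors have shipped the revised suite,
# three have not. The stack can only run on what everyone supports.
SUPPORTED_SUITES = {
    "encryption":  {"pq-2024", "pq-2024r2"},  # vendor shipped the revision
    "signatures":  {"pq-2024"},               # vendor has not
    "key_mgmt":    {"pq-2024", "pq-2024r2"},
    "biometrics":  {"pq-2024"},
    "fraud":       {"pq-2024"},
    "attestation": {"pq-2024", "pq-2024r2"},
}

def negotiable(components: dict) -> set:
    # The stack-wide usable suites are the intersection across all components.
    return set.intersection(*components.values())

common = negotiable(SUPPORTED_SUITES)
print(common)  # {'pq-2024'}: the whole stack stays pinned to the legacy suite

laggards = [name for name, suites in SUPPORTED_SUITES.items()
            if "pq-2024r2" not in suites]
print(laggards)  # the vendors whose release cycles block the upgrade
```

If any component drops the legacy suite before the last laggard adds the new one, the intersection is empty and the stack stops interoperating entirely.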
What Integration Actually Costs
| Cost Category | Assembled (6 Vendors) | Integrated (H33) |
|---|---|---|
| Cryptographic engineering hires | 4-6 engineers ($1.2M+/yr) | 0 |
| Integration engineering | 12-18 months | Minutes to days |
| Key management build | Custom ($300K+) | Built-in (3-of-5 threshold) |
| Vendor coordination | Ongoing, per upgrade | Single upgrade path |
| Audit trail | Fragmented, 3+ systems | STARK-attested, unified |
| Integration boundaries | 5+ network crossings | 0 (single process) |
| Supply chain vendors | 6 dependency trees | 1 |
| Parameter consistency | Manual, error-prone | Compile-time enforced |
| Time to full post-quantum | 2-3 years | Minutes |
| Total estimated cost (3yr) | $3M-$5M | API subscription |
The Alternative
The alternative is not "find a better set of vendors." The alternative is to stop assembling from parts entirely.
Fully homomorphic encryption, zero-knowledge proofs, post-quantum signatures, biometric matching, fraud detection, and key management are not six problems. They are one problem: how do you process sensitive data without exposing it, prove the processing was correct, and ensure the entire chain is quantum-resistant?
When these capabilities are built as one system, the integration boundaries disappear. The key management is shared because there is one key management system. The parameters are consistent because there is one parameter set. The audit trail is unified because there is one audit trail. The upgrade is atomic because there is one codebase.
The attack surface reduces to one API endpoint. One TLS connection. One authentication context. One set of dependencies. One supply chain to audit. One vendor to evaluate.
The Real Risk Calculation
CISOs evaluate risk as probability times impact. The probability of a quantum-capable adversary breaking your classical encryption in the next five years is debatable. The probability of a classical adversary exploiting an integration boundary in your six-vendor post-quantum stack in the next twelve months is high.
The assembled approach reduces one risk (quantum) while dramatically increasing another (integration complexity). The integrated approach reduces both simultaneously.
The question is not whether you need post-quantum encryption. You do. The question is whether the way you get there makes you safer or introduces new ways to fail. Six vendors, six integrations, six attack surfaces is six times the opportunity for something to go wrong.
One stack, one call, one attack surface. That is the only math that works.
See the integrated stack
FHE + ZK-STARKs + Dilithium + biometrics + fraud detection. One API call. 35.25 microseconds.
Try the Live Demo →