Post-Quantum Through an API: Why the Infrastructure That Wins Is the One You Never Install
NIST finalized the post-quantum standards in August 2024. Every company that stores encrypted data — which is every company — needs to migrate. The question is not whether. The question is how. And the answer determines whether the transition takes an afternoon or consumes your engineering org for the next three years.
The Mandate Is Not Theoretical
NIST published FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), and FIPS 205 (SLH-DSA) as final standards. NSA's CNSA 2.0 suite requires post-quantum algorithms for all national security systems by 2030, with soft deadlines beginning in 2025. The White House issued NSM-10 directing federal agencies to inventory all cryptographic systems and begin migration.
This is not a five-year runway. Harvest-now-decrypt-later attacks are already underway. Every TLS session, every encrypted database field, every signed document produced today with RSA or ECDSA is being captured by adversaries who know they will be able to decrypt it once a sufficiently powerful quantum computer comes online. Data encrypted with classical algorithms is not safe until that computer exists; it is merely waiting to be read.
The compliance calendars are tightening. PCI DSS 4.0 references cryptographic agility. SOC 2 auditors are beginning to ask about post-quantum readiness. Cyber insurance carriers are adding quantum-risk exclusions. The companies that migrate last will pay the most — in breach liability, in insurance premiums, and in emergency engineering costs when the deadline becomes a wall.
The Four Paths to Post-Quantum
Every organization facing this transition has exactly four options. Three of them are painful. One of them is an API call.
Path 1: Build It Yourself
Hire three to five cryptographic engineers. Each one costs $350,000 to $500,000 per year. They will spend six months selecting parameter sets, nine months implementing and testing, and another six months integrating with your existing infrastructure. You will need to build key management, rotation policies, hardware security module integrations, and monitoring. You will need to audit it. You will need to maintain it forever.
Timeline: 18-24 months. Cost: $3-5 million. Ongoing maintenance: 2-3 FTEs permanently.
This path makes sense if you are a defense contractor, a national intelligence agency, or if cryptography is literally your product. For everyone else, it is a catastrophic misallocation of engineering resources. You do not build your own TLS library. You do not write your own database engine. You should not be building your own post-quantum cryptographic infrastructure.
Path 2: Vendor Appliance
Buy a hardware security module or an on-premises appliance from a legacy vendor. Install it in your data center. Configure it. Integrate it with every application that touches cryptographic operations. Wait for firmware updates when NIST publishes parameter changes. Pay for the hardware, the licensing, the maintenance contract, and the team to manage it.
Timeline: 6-12 months. Cost: $500K-2M upfront plus annual licensing. Scaling: buy more hardware.
The appliance model made sense in the 1990s when cryptographic operations were computationally expensive and secrets needed to stay on dedicated hardware. In 2026, it is an anachronism. Your applications are in Kubernetes. Your data is in managed databases across three cloud regions. An appliance in a rack in Virginia does not protect the data flowing through your Lambda functions in Frankfurt.
Path 3: Open-Source Library
Pull liboqs or pqcrypto into your codebase. Write the integration layer. Handle key generation, serialization, storage, rotation. Build the enrollment flows. Build the verification flows. Write the migration scripts. Hope you got the parameter sets right. Hope your constant-time implementation is actually constant-time. Hope your random number generation is cryptographically secure on every platform you deploy to.
Timeline: 3-9 months. Cost: 2-4 engineers for months. Risk: high — you own the correctness of the implementation.
This is the trap that smart engineering teams fall into. The library is free. The integration is not. The library handles the math. It does not handle key management, access control, audit logging, algorithm agility, performance tuning, certificate rotation, or any of the operational concerns that determine whether cryptography actually protects anything in production. The algorithm is 10% of the problem. The infrastructure is the other 90%.
Path 4: An API
Send your data to an endpoint. Get back encrypted ciphertext, quantum-resistant signatures, or zero-knowledge proofs. The endpoint handles key generation, parameter selection, algorithm agility, hardware acceleration, and audit logging. You integrate once. You call it forever. When NIST updates the standards, the endpoint updates. When faster implementations emerge, the endpoint adopts them. Your code does not change.
Timeline: hours. Cost: usage-based. Risk: the vendor owns the correctness, the performance, and the compliance.
Why API-Delivered PQ Is Architecturally Superior
1. Algorithm Agility Without Code Changes
NIST is not done. HQC was selected as a backup KEM in March 2025. Parameter sets will be revised. New attacks will require adjustments. The post-quantum landscape will continue to evolve for the next decade.
If your cryptography is a compiled library linked into your binary, every parameter change requires a code change, a build, a test cycle, and a deployment. If your cryptography is an API, the provider updates the implementation behind a stable interface. Your code sends plaintext, receives ciphertext. The algorithm behind that transformation can change from ML-KEM-768 to ML-KEM-1024 to whatever NIST standardizes next without touching your deployment pipeline.
This is not a convenience. This is a survival mechanism. The organizations that hardcoded RSA-2048 into their infrastructure in 2010 are the ones spending millions on migration today. The organizations that abstracted their cryptography behind a service boundary will migrate by changing a configuration flag.
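The service-boundary pattern can be sketched in a few lines. This assumes the `tier` field from the integration example later in this post doubles as a server-side algorithm-profile selector — an illustration of the pattern, not documented API behavior. The application code names no algorithm; the operator changes `PQ_TIER` in configuration, and the deployment pipeline never runs.

```javascript
// Build the encrypt request. Algorithm selection lives in configuration,
// not code: flipping PQ_TIER changes the profile without a rebuild.
// Treating `tier` as the profile selector is an assumption for illustration.
function buildEncryptRequest(plaintext, tier = process.env.PQ_TIER || 'biometric_fast') {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.H33_API_KEY
    },
    body: JSON.stringify({ data: plaintext, tier })
  };
}

// Usage:
// const res = await fetch('https://app2.h33.ai/api/v1/encrypt',
//                         buildEncryptRequest(customerRecord.ssn));
```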
2. Unified Security Stack
Post-quantum migration is not just about key encapsulation. A complete post-quantum posture requires:
- Key encapsulation (ML-KEM / Kyber) — for establishing shared secrets
- Digital signatures (ML-DSA / Dilithium) — for authentication and non-repudiation
- Hash-based signatures (SLH-DSA / SPHINCS+) — for long-lived signing keys
- Fully homomorphic encryption (FHE) — for computing on encrypted data
- Zero-knowledge proofs (ZK-STARKs) — for verification without disclosure
- Biometric authentication — with encrypted template storage
With the library approach, each of these is a separate dependency, a separate integration, a separate upgrade cycle, and a separate attack surface. With an API, they are endpoints on the same service. One authentication token. One audit trail. One provider to evaluate, contract with, and hold accountable.
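What "one authentication token, one provider" looks like in application code can be sketched as a thin client. The encrypt and biometric-enrollment paths match the examples later in this post; the constructor options and injectable `fetchImpl` are illustrative scaffolding, not a published SDK, and further primitives (signatures, ZK proofs) would simply be more methods on the same class.

```javascript
// Minimal sketch: one client, one API key, multiple PQ primitives.
class PQClient {
  constructor(apiKey, { baseUrl = 'https://app2.h33.ai', fetchImpl = fetch } = {}) {
    this.apiKey = apiKey;
    this.baseUrl = baseUrl;
    this.fetchImpl = fetchImpl; // injectable for testing
  }

  async call(path, payload) {
    const response = await this.fetchImpl(`${this.baseUrl}${path}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': this.apiKey },
      body: JSON.stringify(payload)
    });
    if (!response.ok) throw new Error(`${path} failed: ${response.status}`);
    return response.json();
  }

  encrypt(data, tier = 'biometric_fast') {
    return this.call('/api/v1/encrypt', { data, tier });
  }

  enrollBiometric(user_id, embedding) {
    return this.call('/api/v2/biometric/enroll', { user_id, embedding, product: 'h33' });
  }
}
```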
3. Performance You Cannot Build
Post-quantum algorithms are heavier than their classical predecessors, most visibly in size. An ML-KEM-768 public key is 1,184 bytes and a ciphertext is 1,088 bytes, versus 32 bytes each for X25519. An ML-DSA-44 (Dilithium) signature is 2,420 bytes, more than 30x Ed25519's 64 bytes. FHE operations involve polynomial arithmetic over large coefficient rings and dwarf both.
Making these algorithms fast enough for production requires deep optimization work that is economically irrational for any single company to undertake:
- Montgomery NTT with Harvey lazy reduction — eliminating division from the Number Theoretic Transform hot path
- SIMD batching — packing 32 users into a single 4096-slot ciphertext for amortized FHE cost
- NTT-domain persistence — keeping encrypted templates in transform domain to skip forward transforms during verification
- Batch attestation — amortizing one Dilithium signature across 32 users instead of signing individually
- In-process cached ZK proofs — sub-microsecond verification through DashMap-cached STARK lookups
These optimizations took thousands of engineering hours. They produce 38.5 microseconds per authentication on production hardware. No team building post-quantum as a side project will match this. No open-source library ships with this level of tuning. This performance exists behind an API endpoint, accessible with a single HTTP call.
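The arithmetic behind batch attestation is worth making explicit. The 32-user batch size comes from the list above; the 320µs signing cost below is an assumed round number for illustration, not a measured figure.

```javascript
// Hypothetical amortization arithmetic: one signature spread across a batch.
// signCostUs is an assumed illustrative cost, not a benchmark.
function amortizedCostUs(signCostUs, batchSize) {
  return signCostUs / batchSize;
}

// One Dilithium signature over a 32-user batch instead of 32 signatures:
// amortizedCostUs(320, 32) -> 10 µs of signing cost per user
```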
4. Zero Key Exposure
The hardest part of any cryptographic system is key management. Where are keys generated? Where are they stored? Who has access? How are they rotated? What happens when a key is compromised?
With an installed library, your application code generates and stores keys. Your database holds key material. Your deployment pipeline has access to secrets. Every developer who can read the production config can read the keys. Every backup that includes the database includes the keys. Every log aggregator that captures request bodies might capture key material.
With an API, keys never leave the provider's hardened infrastructure. Your application sends plaintext over TLS. The API returns ciphertext. At no point does your application, your database, your logs, or your backups contain any cryptographic key material. The attack surface for key compromise reduces from "every system that touches your code" to "the API provider's key management infrastructure."
5. Compliance as a Side Effect
Every SOC 2 auditor, every PCI assessment, every HIPAA evaluation will eventually ask the same question: "How are you handling the post-quantum transition?"
If you built it yourself, you need to demonstrate the correctness of your implementation, the security of your key management, the completeness of your algorithm coverage, and the existence of your migration plan for future standard changes. That is months of documentation and testing.
If you use an API, you point to the provider's SOC 2 report, their FIPS validation, their penetration test results, and their algorithm agility guarantees. The compliance burden shifts from "prove your cryptography is correct" to "prove you call a compliant API correctly." The latter is a few pages. The former is a few hundred.
The Real Cost Comparison
| Factor | Build / Library | API |
|---|---|---|
| Time to production | 3-24 months | Hours to days |
| Cryptographic engineers | 2-5 FTEs ($350-500K each) | 0 |
| Key management | You build + maintain | Included |
| Algorithm updates | Code change + deploy | Automatic |
| Performance tuning | Months of optimization | Pre-optimized (38.5µs/auth) |
| Audit trail | You build | Included (STARK-attested) |
| Compliance evidence | You produce | Provider's SOC 2 / FIPS |
| FHE support | Separate library + integration | Same API |
| ZK proof support | Separate library + integration | Same API |
| Biometric PQ auth | Does not exist as a library | Same API |
What Integration Actually Looks Like
This is not a theoretical argument. This is what the code looks like when you integrate post-quantum encryption through an API versus doing it yourself.
Encrypting a field with an API:
```javascript
const response = await fetch('https://app2.h33.ai/api/v1/encrypt', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': process.env.H33_API_KEY
  },
  body: JSON.stringify({
    data: customerRecord.ssn,
    tier: 'biometric_fast'
  })
});

const { ciphertext } = await response.json();
// Store ciphertext. The SSN never touches your database in plaintext.
```
Enrolling a biometric under FHE:
```javascript
const response = await fetch('https://app2.h33.ai/api/v2/biometric/enroll', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    user_id: user.id,
    embedding: faceEmbedding, // 512-dimensional float vector
    product: 'h33'
  })
});

// Returns: commitment_hash, fhe_encrypted status, latency_us
// The biometric template is encrypted with BFV FHE. You never see the key.
```
That is the entire integration. No Cargo.toml dependencies. No CMake builds. No linking against libssl. No NTT parameter tuning. No Montgomery form debugging. No key serialization format decisions. You send data. You get back encrypted data. The post-quantum transition is done.
The Objections
"We cannot send sensitive data to a third party"
You already do. Your data flows through AWS KMS, Stripe, Twilio, Auth0, and a dozen other services. The question is not whether you trust third parties with sensitive operations — you already do for payments, authentication, and key management. The question is whether the third party handling your cryptographic operations is more qualified than your team to get the cryptography right. If you are not a cryptography company, the answer is yes.
"What about latency?"
H33 processes a full post-quantum authentication — FHE-encrypted biometric verification, Dilithium signature attestation, and STARK proof verification — in 38.5 microseconds. That is 0.0385 milliseconds. Your database query takes longer. Your JSON serialization takes longer. The network round trip to the API adds milliseconds, but the cryptographic operation itself is faster than anything you would build internally.
"What if the API goes down?"
The same thing that happens when your database goes down, when AWS goes down, or when any critical infrastructure dependency experiences an outage. You design for it. Circuit breakers. Graceful degradation. Cached results. Retry with exponential backoff. These are standard patterns for every API dependency your application already has. Post-quantum cryptography is not architecturally different from any other managed service.
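None of these patterns require special cryptographic machinery. A minimal retry-with-exponential-backoff wrapper, in the same JavaScript as the integration examples above (the function and its parameters are a generic sketch, not part of any SDK):

```javascript
// Retry a failing async call with exponential backoff plus jitter.
// fetchFn is any async function, e.g. () => fetch(url, opts).
async function withRetry(fetchFn, { maxRetries = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fetchFn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Backoff doubles each attempt: 100ms, 200ms, 400ms, ... plus jitter.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 50;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Wrap the cryptographic call exactly as you would wrap a call to Stripe or Twilio; the failure modes are identical.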
"We need to own the cryptography for regulatory reasons"
No regulation requires you to implement cryptography yourself. Regulations require you to use approved algorithms, manage keys securely, maintain audit trails, and demonstrate compliance. An API that uses NIST-approved algorithms, manages keys in hardened infrastructure, provides STARK-attested audit trails, and holds SOC 2 certification satisfies every regulatory requirement more completely than a homegrown implementation ever could.
The Timeline Is Not What You Think
Most companies believe they have five to ten years before quantum computers can break RSA-2048 and ECDSA. They might be right about the timeline for fault-tolerant quantum computers. They are wrong about the timeline for compliance mandates, insurance requirements, and customer expectations.
The companies that move first will have the lowest migration costs, the cleanest compliance postures, and the strongest negotiating position with cyber insurance carriers. The companies that wait will face emergency migrations at premium engineering rates, retroactive compliance penalties, and insurance policy exclusions that leave them exposed.
The post-quantum transition is not a technology problem. It is an infrastructure decision. The technology exists. The standards are published. The APIs are live. The only variable is whether you treat cryptographic migration as a multi-year engineering project or as an API integration that ships this week.
Start the transition today
One API. ML-KEM, Dilithium, FHE, ZK-STARKs, biometric auth. 2.2M auth/sec. 38.5µs per authentication.
Get Your API Key →