In August 2024, NIST published the first three post-quantum cryptography standards: FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), and FIPS 205 (SLH-DSA). The natural instinct is to treat this as a one-time migration: swap RSA for Dilithium, swap ECDH for Kyber, and move on. That instinct is wrong.
The post-quantum transition is not the last cryptographic migration your systems will face. It is the most urgent one, but history shows that cryptographic algorithms have a finite lifespan. New attacks emerge. Standards evolve. Entire algorithm families collapse overnight—as SIKE proved in 2022. Organizations that hard-wire their cryptographic choices into every layer of their stack will face the same painful, multi-year rip-and-replace cycle every time the landscape shifts.
Crypto agility is the architectural discipline of designing systems so that cryptographic algorithms can be swapped, upgraded, or deprecated without rewriting application logic, redeploying infrastructure, or breaking backward compatibility. It is not a product. It is not a library. It is a design philosophy that touches every layer of your stack—from TLS configuration to database schemas to API contracts.
Crypto agility does not mean supporting every algorithm simultaneously. It means your system is architecturally capable of transitioning to a new algorithm with configuration changes and key rotation—not code rewrites and database migrations.
Why Crypto Agility Matters Now
Several converging forces make crypto agility a present-tense requirement, not a future aspiration.
The SIKE Lesson: Standards Can Fail
SIKE (Supersingular Isogeny Key Encapsulation) was a NIST PQC Round 4 candidate—one of the most promising alternatives to lattice-based schemes. It offered the smallest key sizes of any post-quantum candidate, making it attractive for constrained environments. NIST was seriously evaluating it for standardization.
In July 2022, Wouter Castryck and Thomas Decru published a devastating attack that broke SIKE in under an hour on a single-core classical computer. Not a quantum computer: a laptop. The attack exploited the auxiliary torsion-point information that SIKE's protocol revealed, applying a theorem of Kani on isogenies between abelian surfaces to recover the secret key.
SIKE went from "promising NIST candidate" to "completely broken" in a single paper. Any organization that had prematurely deployed SIKE would have faced an emergency migration with zero lead time.
SIKE was broken by a classical attack, not a quantum one. The algorithm that was supposed to protect against future threats couldn't survive present-day cryptanalysis. This is the strongest possible argument for crypto agility: even algorithms designed by the best researchers, vetted through multi-year standardization processes, can fail catastrophically and without warning.
NIST Itself Plans for Algorithm Changes
NIST's own guidance acknowledges that the current PQC standards may not be permanent. NIST IR 8547 explicitly recommends that organizations design their migration plans with future algorithm transitions in mind. The selection of HQC (Hamming Quasi-Cyclic) as an additional code-based KEM standard (expected ~2027) demonstrates that the algorithm portfolio is still evolving.
FIPS 206 (FN-DSA, based on FALCON) is expected as a draft in 2026, adding a fourth standardized post-quantum signature scheme. Organizations deploying PQC today must be prepared to incorporate these new standards as they arrive.
Regulatory Pressure Is Accelerating
The regulatory landscape demands not just migration but agility:
- NIST IR 8547 (Nov 2024): Deprecates RSA, ECDSA, and ECDH after 2030; disallows them in federal systems after 2035. Explicitly recommends crypto-agile architectures.
- CNSA 2.0 (Sep 2022): NSA mandates PQC for all national security systems, with hard deadlines starting 2025.
- EO 14144 (Jan 2025): Requires federal agencies to implement PQC "as soon as practicable."
- OMB M-23-02: Requires federal agencies to maintain cryptographic inventories and produce migration plans.
- PCI DSS 4.0: Mandates strong cryptography with documented key management—implying algorithm lifecycle awareness.
These regulations share a common theme: organizations must know what algorithms they use, plan for transitions, and execute migrations on defined timelines. Crypto agility is the only architecture that makes this feasible at enterprise scale.
Four More Drivers for Crypto Agility
- New classical attacks: Cryptanalytic breakthroughs happen regularly. AES has withstood decades of scrutiny; RC4, MD5, and SHA-1 did not. Your architecture must tolerate algorithm-level failures.
- Performance improvements: Future algorithms may offer better latency, smaller keys, or lower bandwidth. You should be able to adopt them without re-engineering your stack.
- Compliance drift: Different jurisdictions may mandate different algorithms. Your system must serve customers across regulatory boundaries.
- Supply chain diversity: Reliance on a single algorithm family (e.g., lattice-based only) concentrates risk. A breakthrough in lattice reduction would compromise ML-KEM, ML-DSA, and FN-DSA simultaneously.
Crypto-Agility Architecture: The Three Pillars
Crypto agility is built on three architectural pillars: abstraction, negotiation, and inventory. All three must be present. Missing any one of them degrades the entire system's ability to transition.
Pillar 1: Abstraction Layers
The foundational requirement is separating cryptographic operations from application logic through well-defined interfaces. Application code should never reference a specific algorithm by name. Instead, it calls an abstract interface, and the concrete algorithm is selected at runtime based on configuration.
```typescript
// Abstract crypto interface -- application code depends ONLY on this
interface CryptoProvider {
  sign(data: Buffer, key: SigningKey): Promise<SignedPayload>;
  verify(payload: SignedPayload, key: VerifyKey): Promise<boolean>;
  encapsulate(recipientKey: PublicKey): Promise<KEMResult>;
  decapsulate(ciphertext: Buffer, key: PrivateKey): Promise<Buffer>;
  algorithmId(): string;
}

// SignedPayload carries algorithm metadata -- critical for agility
interface SignedPayload {
  algorithm: string;   // e.g., "ml-dsa-65", "ecdsa-p256"
  version: number;     // schema version for this algorithm
  keyId: string;       // identifies which key was used
  signature: Buffer;   // the actual signature bytes
  data: Buffer;        // the signed data
}

// Concrete implementations -- swappable via config
class MLDSAProvider implements CryptoProvider {
  algorithmId() { return "ml-dsa-65"; }
  async sign(data, key) {
    const sig = await dilithiumSign(data, key);
    return {
      algorithm: this.algorithmId(),
      version: 1,
      keyId: key.id,
      signature: sig,
      data,
    };
  }
  // ... verify, encapsulate, decapsulate
}

class ECDSAProvider implements CryptoProvider {
  algorithmId() { return "ecdsa-p256"; }
  // ... same interface, different implementation
}
```
The key insight: the SignedPayload includes the algorithm identifier alongside the signature. This means any verifier can determine which algorithm was used and select the correct verification logic at runtime, even years after the signature was created.
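To make that concrete, here is a minimal sketch of the verification side. The registry lookup is the point; the two verifier stubs stand in for real `CryptoProvider` implementations and always return true, which a real system obviously would not:

```typescript
// Maps algorithm IDs to verifier functions. The stubs below are
// placeholders for real ML-DSA / ECDSA verification calls.
type Verifier = (data: Uint8Array, signature: Uint8Array) => boolean;

const verifiers: Record<string, Verifier> = {
  "ml-dsa-65": (_d, _s) => true,  // stub: call real ML-DSA verify here
  "ecdsa-p256": (_d, _s) => true, // stub: call real ECDSA verify here
};

interface SignedPayload {
  algorithm: string;
  keyId: string;
  signature: Uint8Array;
  data: Uint8Array;
}

// The payload itself says which algorithm to use -- no guessing,
// even for signatures created years before the current primary.
function verifyPayload(p: SignedPayload): boolean {
  const verify = verifiers[p.algorithm];
  if (!verify) throw new Error(`unknown algorithm: ${p.algorithm}`);
  return verify(p.data, p.signature);
}
```

Adding a new algorithm means registering one more entry in the map; nothing that calls `verifyPayload` changes.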
Pillar 2: Algorithm Negotiation
Negotiation is the mechanism by which two parties agree on which algorithm to use for a given operation. TLS already does this via cipher suites. Your application-layer protocols need the same capability.
```yaml
# Configuration-driven algorithm selection
# Change this file -- no code changes required
cryptography:
  signing:
    primary: ml-dsa-65           # FIPS 204, post-quantum
    accepted:                    # algorithms we can still verify
      - ml-dsa-65
      - ml-dsa-87
      - ecdsa-p256               # legacy, being phased out
      - ed25519                  # legacy, being phased out
    deprecated:                  # warn on use, will be removed
      - rsa-pkcs1-v15
      - rsa-pss-2048
    deprecation_date: "2026-06-01"
  key_exchange:
    primary: ml-kem-768          # FIPS 203, post-quantum
    hybrid: true                 # combine with classical
    hybrid_classical: x25519     # classical component of hybrid
    accepted:
      - ml-kem-768
      - ml-kem-1024
      - x25519                   # classical fallback
  hashing:
    primary: sha3-256
    accepted: [sha3-256, sha-256, blake3]
  fhe:
    primary: bfv-4096            # BFV with N=4096
    accepted: [bfv-4096, ckks-4096, tfhe-630]
```
This configuration separates the primary algorithm (used for new operations) from the accepted list (used for verification/decryption of existing data) and the deprecated list (logged, alerted, scheduled for removal). Transitioning to a new algorithm means updating configuration—not deploying new code.
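A sketch of how the crypto layer might consume such a config. The names (`SigningConfig`, `classify`) are illustrative, not a real library; the point is that the primary/accepted/deprecated decision is pure data-driven logic:

```typescript
type Policy = "primary" | "accepted" | "deprecated" | "rejected";

interface SigningConfig {
  primary: string;
  accepted: string[];
  deprecated: string[];
}

// New operations use `primary`; verification tolerates `accepted`;
// `deprecated` still works but should be logged; anything else is refused.
function classify(algo: string, cfg: SigningConfig): Policy {
  if (algo === cfg.primary) return "primary";
  if (cfg.deprecated.includes(algo)) return "deprecated";
  if (cfg.accepted.includes(algo)) return "accepted";
  return "rejected";
}

// Mirrors the YAML above (abbreviated)
const cfg: SigningConfig = {
  primary: "ml-dsa-65",
  accepted: ["ml-dsa-65", "ml-dsa-87", "ecdsa-p256"],
  deprecated: ["rsa-pkcs1-v15"],
};
```

Migrating to a new primary is then a one-line change to `cfg`, which is exactly the property the configuration file is designed to deliver.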
Pillar 3: Cryptographic Inventory (CBOM)
You cannot migrate what you cannot find. A Cryptographic Bill of Materials (CBOM) is a comprehensive inventory of every cryptographic algorithm, key, certificate, and protocol in use across your organization.
Most enterprises cannot answer basic questions: "How many RSA-2048 keys are in production?" or "Which services still use SHA-1?" Without answers, migration planning is guesswork. OMB M-23-02 legally requires federal agencies to maintain cryptographic inventories. The private sector should follow suit.
A CBOM should catalog:
- Algorithms in use: Every signing, encryption, hashing, and key exchange algorithm, with version and parameter details
- Key inventory: Every cryptographic key, its algorithm, creation date, expiration, rotation schedule, and owning service
- Certificate chains: Every certificate, its signing algorithm, issuing CA, and expiration date
- Protocol configurations: TLS versions, cipher suites, VPN settings, SSH configurations
- Dependencies: Third-party libraries, HSM firmware, cloud KMS configurations that embed cryptographic choices
- Data at rest: What algorithm protects stored data, and what key was used
| CBOM Field | Example | Why It Matters |
|---|---|---|
| Service | auth-api | Identify which team owns the migration |
| Algorithm | RSA-2048-PKCS1 | Quantum-vulnerable? Classical risk? |
| Usage | JWT signing | Determines migration urgency |
| Key ID | k-a7f3e9b2 | Track rotation and retirement |
| Created | 2023-03-15 | Age indicates rotation urgency |
| PQ Status | Vulnerable | Priority for migration |
| Data Shelf-Life | 10+ years | Mosca inequality input |
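The last row references Mosca's inequality: if the data's shelf-life (x) plus your migration time (y) exceeds the time until a cryptographically relevant quantum computer exists (z), you are already late. A quick sketch; the example years are placeholders, not predictions:

```typescript
// Mosca's inequality: act now if x + y > z, where
//   x = years the data must stay confidential (shelf-life)
//   y = years your migration will take
//   z = years until a cryptographically relevant quantum computer
function mustActNow(
  shelfLifeYears: number,
  migrationYears: number,
  yearsToCRQC: number,
): boolean {
  return shelfLifeYears + migrationYears > yearsToCRQC;
}

// CBOM row above: 10+ year shelf-life, a 5-year migration,
// against a hypothetical 12-year CRQC horizon: 15 > 12, so act now.
const urgent = mustActNow(10, 5, 12);
```

Running this per CBOM entry turns the inventory into a prioritized migration queue.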
The Hybrid Approach: Bridging Two Eras
Crypto agility during the PQC transition requires a hybrid approach—using both classical and post-quantum algorithms simultaneously. This is not a permanent architecture; it is a bridge that ensures security during the transition period when not all parties support PQC.
Why Hybrid, Not Direct Replacement
Direct replacement (swapping RSA for ML-DSA everywhere at once) is risky for three reasons:
- PQC algorithms are newer and less battle-tested. While ML-KEM and ML-DSA have undergone extensive cryptanalysis, they lack the 25+ years of real-world deployment history that RSA has. A hybrid approach means that even if a PQC algorithm is broken (like SIKE was), the classical component still protects you.
- Interoperability. Not all counterparties, clients, or systems support PQC yet. Hybrid allows you to negotiate down to classical-only with legacy partners while using PQC with capable ones.
- Compliance overlap. Some regulations still require classical algorithms (e.g., RSA-3072 or ECDSA P-384) while others now mandate PQC. Hybrid satisfies both simultaneously.
Hybrid Key Exchange in Practice
The most common hybrid pattern combines X25519 (classical ECDH) with ML-KEM-768 (post-quantum). The shared secret is the concatenation (or KDF combination) of both key exchanges. If either algorithm holds, the combined key is secure:
```rust
/// Hybrid key exchange: X25519 + ML-KEM-768
/// Security holds if EITHER algorithm is secure
pub fn hybrid_encapsulate(
    classical_pk: &X25519PublicKey,
    pq_pk: &MlKemPublicKey,
) -> (HybridCiphertext, SharedSecret) {
    // Classical component
    let (classical_ct, classical_ss) = x25519_encapsulate(classical_pk);

    // Post-quantum component
    let (pq_ct, pq_ss) = ml_kem_768_encapsulate(pq_pk);

    // Combine shared secrets via HKDF.
    // Even if one algorithm is broken, the other protects the key.
    let combined = hkdf_sha3_256(
        &[classical_ss.as_bytes(), pq_ss.as_bytes()].concat(),
        b"hybrid-kem-x25519-mlkem768-v1", // domain separator
    );

    let ct = HybridCiphertext {
        algorithm: "hybrid-x25519-mlkem768",
        version: 1,
        classical: classical_ct,
        post_quantum: pq_ct,
    };

    (ct, combined)
}
```
Chrome, Firefox, and Cloudflare have already deployed hybrid key exchange (X25519Kyber768Draft) for TLS 1.3. The performance overhead is minimal: ML-KEM-768 adds approximately 1 KB to the TLS handshake and <1ms to connection setup. The security gain is immediate protection against harvest-now-decrypt-later attacks.
Case Study: How TLS Evolved Through Crypto Agility
TLS is the best real-world example of crypto agility in action. Over 25 years, it has transitioned through multiple algorithm generations without breaking the internet—precisely because it was designed with negotiation and abstraction baked in.
The lesson: TLS's cipher suite negotiation mechanism made it possible to transition from RC4 to AES to ChaCha20 to post-quantum hybrids without breaking a single website. Each transition took years, involved backward compatibility periods, and required no changes to application code. This is crypto agility working at internet scale.
Contrast this with SSH, where algorithm changes historically required manual configuration edits on every server, or with custom application protocols that hard-coded RSA key sizes. Those systems face painful, error-prone migrations every time the landscape shifts.
NIST IR 8547: The Migration Roadmap
NIST Internal Report 8547, published November 2024, provides the most authoritative guidance on PQC migration timelines. Its recommendations implicitly require crypto-agile architectures.
| Algorithm | Status After 2030 | Status After 2035 |
|---|---|---|
| RSA (all key sizes) | Deprecated | Disallowed |
| ECDSA / ECDH (all curves) | Deprecated | Disallowed |
| EdDSA (Ed25519, Ed448) | Deprecated | Disallowed |
| Diffie-Hellman (finite field) | Deprecated | Disallowed |
| ML-KEM (FIPS 203) | Approved | Required |
| ML-DSA (FIPS 204) | Approved | Required |
| SLH-DSA (FIPS 205) | Approved | Approved |
| AES-256 | Approved | Approved |
| SHA-3, SHA-2 (256+) | Approved | Approved |
The five-year window between "deprecated" (2030) and "disallowed" (2035) is the hybrid transition period. During this window, systems must support both classical and PQC algorithms simultaneously—exactly the kind of multi-algorithm operation that crypto-agile architectures enable. Systems without agility will spend those five years in emergency re-engineering mode.
Enterprise Implementation: Where Crypto Hides
Cryptographic algorithms are embedded throughout the enterprise stack, often in places that aren't immediately obvious. A comprehensive migration requires identifying and addressing every touchpoint.
TLS and HTTPS
TLS is the most visible cryptographic protocol and the easiest to address because TLS 1.3 already supports cipher suite negotiation. The migration path:
- Enable hybrid key exchange (X25519+ML-KEM-768) on all public-facing endpoints
- Update certificate chains to use ML-DSA signatures (requires CA support)
- Configure cipher suite preference to prioritize PQC-hybrid suites
- Monitor negotiation logs for clients still using classical-only suites
- Set a deprecation date for classical-only TLS connections
Certificates and PKI
Certificate infrastructure is one of the hardest areas to migrate because of chain-of-trust dependencies. If your root CA signs with RSA, every certificate in the chain inherits that vulnerability.
- Dual certificates: Issue both classical and PQC certificates for the same identity during transition
- Hybrid certificates: X.509 v3 extensions can carry both RSA and ML-DSA signatures (draft-ounsworth-pq-composite-sigs)
- Short-lived certificates: Reduce the impact of any single algorithm compromise by using 90-day or shorter certificate lifetimes (as Let's Encrypt pioneered)
VPN and Network Layer
IPsec VPNs use IKEv2 for key exchange (typically ECDH) and RSA or ECDSA for authentication; both must transition to PQC. IKEv2 supports algorithm negotiation natively, so for IPsec the migration is primarily a configuration change, provided your VPN vendor supports PQC cipher suites. WireGuard is a harder case: it deliberately fixes its cryptographic suite (the Noise protocol framework with Curve25519), so moving it to PQC requires a protocol revision rather than a configuration change.
Code Signing
Code signing is particularly sensitive because signatures must remain verifiable for the entire lifetime of the software. A firmware update signed with ECDSA today must still be verifiable in 2040. This means code-signing keys should transition to PQC first, using dual signatures (classical + PQC) during the transition period.
Authentication Tokens
JWTs, OAuth tokens, and API keys are typically signed with RSA or ECDSA. Migration to ML-DSA requires:
- Updating the JWT `alg` header to use PQC algorithm identifiers
- Supporting verification of both legacy and PQC-signed tokens during transition
- Updating JWKS (JSON Web Key Set) endpoints to serve PQC public keys
- Coordinating with token consumers (APIs, services) to accept new algorithm identifiers
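A sketch of the transition-period header check. Note that `"ML-DSA-65"` as a JOSE `alg` value is illustrative: the official identifiers for ML-DSA in JOSE/COSE are still being standardized, so the actual string may differ:

```typescript
// Accept both legacy and PQC-signed JWTs during the transition.
const ACCEPTED_ALGS = new Set(["ML-DSA-65", "ES256"]);  // PQC + ECDSA
const DEPRECATED_ALGS = new Set(["RS256"]);             // RSA, phasing out

// Inspect the (base64url-encoded) JWT header and enforce policy.
// Signature verification itself would happen after this check.
function checkJwtHeader(headerB64: string): string {
  const header = JSON.parse(Buffer.from(headerB64, "base64url").toString());
  const alg: string = header.alg;
  if (DEPRECATED_ALGS.has(alg)) {
    console.warn(`deprecated JWT alg in use: ${alg}`); // feed into monitoring
    return alg;
  }
  if (!ACCEPTED_ALGS.has(alg)) throw new Error(`rejected JWT alg: ${alg}`);
  return alg;
}
```

Rejecting unknown `alg` values outright also closes the classic `alg: none` downgrade hole.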
Data at Rest
Symmetric encryption (AES-256) is quantum-resistant, but the key wrapping often uses RSA or ECDH. Encrypted data at rest is vulnerable if the key-wrapping algorithm is broken. Migrate key-wrapping to ML-KEM while leaving the underlying AES encryption intact.
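The re-wrap touches only the wrapped DEK; the bulk AES ciphertext is never decrypted. A sketch with the unwrap/wrap primitives stubbed as identity functions; a real system would call an RSA-OAEP unwrap and an ML-KEM-768 encapsulation (deriving a wrapping key from the KEM shared secret) from an actual FIPS 203 library:

```typescript
interface EncryptedRecord {
  kekAlgorithm: string;   // how the DEK is wrapped
  wrappedDek: Uint8Array; // DEK encrypted under the KEK
  ciphertext: Uint8Array; // AES-256-GCM payload -- never touched here
}

// Placeholder primitives (identity stubs for illustration only).
const rsaOaepUnwrap = (wrapped: Uint8Array): Uint8Array => wrapped;
const mlKem768Wrap = (dek: Uint8Array): Uint8Array => dek;

// Re-wrap: decrypt the DEK under the old KEK, re-encrypt under ML-KEM.
// The record's data is neither decrypted nor re-encrypted.
function rewrapRecord(rec: EncryptedRecord): EncryptedRecord {
  if (rec.kekAlgorithm !== "rsa-oaep") return rec; // already migrated
  const dek = rsaOaepUnwrap(rec.wrappedDek);
  return {
    kekAlgorithm: "ml-kem-768",
    wrappedDek: mlKem768Wrap(dek),
    ciphertext: rec.ciphertext,
  };
}
```

Because only the small wrapped-DEK column changes, this migration can run as a background job over billions of records without touching the stored payloads.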
Common Anti-Patterns: What Not to Do
Crypto agility is as much about avoiding bad patterns as implementing good ones. These anti-patterns are pervasive in production systems and will make future migrations exponentially harder.
Anti-Pattern 1: Hardcoded Algorithms
The most common and most damaging anti-pattern. When algorithm selection is embedded in source code rather than configuration, every migration requires a code change, code review, testing cycle, and deployment.
```typescript
// BAD: Algorithm hardcoded in application logic
import { createSign } from 'crypto';

function signToken(payload: string, key: Buffer): Buffer {
  const signer = createSign('RSA-SHA256'); // Hardcoded!
  signer.update(payload);
  return signer.sign(key);
}

// GOOD: Algorithm from configuration
function signToken(payload: string, key: SigningKey): SignedPayload {
  const provider = getCryptoProvider(config.signing.primary);
  return provider.sign(Buffer.from(payload), key);
}
```
Anti-Pattern 2: No Algorithm Metadata
Storing signatures, ciphertexts, or key material without recording which algorithm produced them. When you encounter a signature from 2023, how do you know whether to verify it with RSA, ECDSA, or ML-DSA? Without metadata, you cannot.
```sql
-- BAD: No algorithm metadata
CREATE TABLE signatures (
    id UUID PRIMARY KEY,
    user_id UUID,
    signature BYTEA  -- What algorithm? What key? Unknown.
);

-- GOOD: Algorithm and key metadata stored with every signature
CREATE TABLE signatures (
    id UUID PRIMARY KEY,
    user_id UUID NOT NULL,
    algorithm VARCHAR(50) NOT NULL,   -- 'ml-dsa-65', 'ecdsa-p256'
    algorithm_version INT NOT NULL,   -- parameter version
    key_id VARCHAR(64) NOT NULL,      -- which key signed this
    signature BYTEA NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT NOW()
);
```
More Anti-Patterns to Avoid
- Tight coupling between crypto and business logic: If your payment processing code directly calls `crypto.createCipheriv('aes-256-gcm', ...)`, every algorithm change requires modifying payment code. Separate concerns.
- Single key per service: If a service has exactly one signing key with no rotation mechanism, you cannot transition to a new algorithm without downtime. Design for multiple active keys from day one.
- No deprecation alerting: If no system monitors which algorithms are in use, deprecated algorithms will persist indefinitely. Instrument your crypto layer to log algorithm usage and alert when deprecated algorithms are still active.
- Assuming library defaults are permanent: Libraries change their defaults between versions. If your code relies on `openssl`'s default signature algorithm, a library update could silently change your cryptographic behavior. Be explicit.
- Ignoring third-party dependencies: Your application may be crypto-agile, but if your payment processor, identity provider, or cloud HSM only supports RSA, you're constrained by their timeline. Map your dependency chain.
Database Schema Design for Crypto Agility
Your database schema must accommodate algorithm transitions without data migration. This means every table that stores cryptographic material needs algorithm metadata columns.
```sql
-- Crypto-agile public key storage
-- Supports multiple active keys per user during transition periods
CREATE TABLE public_keys (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL REFERENCES users(id),
    algorithm VARCHAR(50) NOT NULL,      -- 'ml-dsa-65', 'ecdsa-p256', etc.
    algorithm_params JSONB,              -- algorithm-specific parameters
    key_data BYTEA NOT NULL,
    key_format VARCHAR(20) NOT NULL,     -- 'raw', 'der', 'pem'
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    expires_at TIMESTAMP,
    deprecated_at TIMESTAMP,             -- NULL = active, set = phasing out
    is_primary BOOLEAN NOT NULL DEFAULT false  -- key to use for new operations
);

-- One primary key per user (PostgreSQL: a partial unique index,
-- since UNIQUE table constraints cannot carry a WHERE clause)
CREATE UNIQUE INDEX one_primary_per_user
    ON public_keys (user_id) WHERE is_primary;

-- Index for fast lookup during verification
-- (need to verify with any active key, not just primary; expiry is
-- checked at query time because NOW() cannot appear in an index predicate)
CREATE INDEX idx_active_keys
    ON public_keys (user_id) WHERE deprecated_at IS NULL;

-- Encrypted data table: stores algorithm metadata with ciphertext
CREATE TABLE encrypted_records (
    id UUID PRIMARY KEY,
    record_type VARCHAR(50) NOT NULL,
    encryption_algorithm VARCHAR(50) NOT NULL,  -- 'aes-256-gcm'
    kek_algorithm VARCHAR(50) NOT NULL,         -- 'ml-kem-768', 'rsa-oaep'
    wrapped_dek BYTEA NOT NULL,                 -- encrypted data encryption key
    ciphertext BYTEA NOT NULL,
    iv BYTEA NOT NULL,
    auth_tag BYTEA,
    created_at TIMESTAMP NOT NULL DEFAULT NOW()
);
```
The critical design decisions: (1) multiple active keys per user during transition, (2) a deprecated_at column that allows graceful phase-out rather than hard deletion, (3) the kek_algorithm column on encrypted records that tracks which key-wrapping algorithm protects each record's data encryption key.
API Design for Crypto Agility
APIs must communicate cryptographic capabilities without leaking implementation details. This means supporting algorithm negotiation in request/response headers.
```http
// Client advertises supported algorithms
POST /api/v2/authenticate
Accept-Crypto: ml-dsa-65, ml-dsa-87, ecdsa-p256;q=0.5
Accept-KEM: ml-kem-768, x25519;q=0.3

// Server responds with selected algorithm
200 OK
Content-Crypto: ml-dsa-65
X-Key-Id: k-7f3e9b2a
X-Algorithm-Deprecation: ecdsa-p256; sunset="2026-12-01"

{
  "token": "eyJhbGciOiJNTC1EU0EtNjUiLC...",
  "algorithm": "ml-dsa-65",
  "key_id": "k-7f3e9b2a",
  "deprecation_notice": {
    "ecdsa-p256": {
      "sunset": "2026-12-01",
      "replacement": "ml-dsa-65"
    }
  }
}
```
The Accept-Crypto header (modeled after HTTP content negotiation) allows clients to declare their supported algorithms with preference weights. The server selects the strongest mutually supported algorithm and communicates deprecation timelines in response headers. Clients can programmatically detect upcoming sunsets and alert their operators.
H33's Approach: Algorithm-Agnostic by Design
H33's authentication infrastructure was built with crypto agility as a first-class architectural concern. The API is algorithm-agnostic: the caller specifies what they want to do (authenticate a user, verify a biometric), and the backend selects the optimal algorithm based on configuration and security policy.
Swappable FHE Backend
H33's FHE layer supports multiple encryption schemes behind a unified interface:
| Scheme | Best For | Status | Module |
|---|---|---|---|
| BFV | Exact integer arithmetic (biometric matching) | Production | src/fhe/bfv.rs |
| CKKS | Approximate floating-point (ML inference) | Available | src/fhe/ckks.rs |
| TFHE | Boolean circuits (arbitrary computation) | Planned | src/fhe/tfhe.rs |
The production pipeline uses BFV with N=4096 for biometric authentication today. If a lattice attack reduced BFV security margins, the backend could switch to a higher parameter set (N=8192) or an entirely different scheme (TFHE) via configuration—without any API contract change.
Configurable Signature Schemes
H33's attestation layer currently uses Dilithium (ML-DSA) for post-quantum digital signatures. The signature module is abstracted behind a trait:
```rust
/// Trait for pluggable signature schemes
pub trait SignatureProvider: Send + Sync {
    fn algorithm_id(&self) -> &str;
    fn sign(&self, message: &[u8], sk: &SigningKey) -> SignedAttestation;
    fn verify(&self, attestation: &SignedAttestation, pk: &VerifyKey) -> bool;
    fn key_gen(&self) -> (SigningKey, VerifyKey);
}

/// Current production implementation: Dilithium (ML-DSA-65)
pub struct DilithiumProvider;

impl SignatureProvider for DilithiumProvider {
    fn algorithm_id(&self) -> &str { "ml-dsa-65" }

    fn sign(&self, message: &[u8], sk: &SigningKey) -> SignedAttestation {
        let sig = dilithium_sign(message, sk);
        SignedAttestation {
            algorithm: self.algorithm_id().into(),
            signature: sig,
            timestamp: Utc::now(),
        }
    }
    // ...
}

/// Future: FALCON (FN-DSA) -- smaller signatures, faster verify
pub struct FalconProvider;

/// Future: SLH-DSA (SPHINCS+) -- hash-based, zero lattice assumptions
pub struct SphincsPlusProvider;
```
Switching from Dilithium to FALCON (when FIPS 206 is finalized) requires implementing the FalconProvider struct and updating the configuration. No API changes. No client updates. No database migration. The attestation output includes the algorithm identifier, so verifiers automatically select the correct verification path.
Full Pipeline Agility
H33's single-API-call pipeline—FHE batch verification, ZKP proof, and Dilithium attestation—treats each component as independently swappable:
If a new STARK proof system offers better performance, it can replace the current ZKP implementation. If SLH-DSA (hash-based signatures with zero lattice assumptions) becomes preferred for defense-in-depth, the signature stage can switch independently. The API contract—send biometric template, receive authenticated result—never changes.
Implementation Roadmap: Four Phases
Implementing crypto agility is not a single project. It is a phased transformation that typically spans 12–24 months, depending on organizational complexity.
Phase 1: Discovery and Inventory (Months 1-3)
Phase 1 Deliverables
- Cryptographic Bill of Materials (CBOM): Complete inventory of all algorithms, keys, certificates, and protocols in use
- Risk assessment: Apply Mosca's inequality to each data category. Identify which systems are already past the "act now" threshold
- Dependency map: Identify third-party services, libraries, and HSMs that constrain your algorithm choices
- Stakeholder alignment: Security, engineering, compliance, and executive teams agree on timeline and priorities
Phase 2: Abstraction Layer (Months 3-9)
Phase 2 Deliverables
- Crypto abstraction library: Build or adopt a provider-pattern library that separates algorithm selection from application code
- Schema migration: Add algorithm metadata columns to all tables storing cryptographic material
- API versioning: Add algorithm negotiation headers to all external APIs
- Configuration system: Implement configuration-driven algorithm selection with hot-reload capability
- Monitoring: Instrument all crypto operations to log algorithm usage, latency, and error rates
Phase 3: Hybrid Deployment (Months 9-18)
Phase 3 Deliverables
- PQC algorithm integration: Implement ML-KEM, ML-DSA, and SLH-DSA providers within the abstraction layer
- Hybrid key exchange: Deploy X25519+ML-KEM-768 for all TLS and key exchange operations
- Dual signatures: Issue both classical and PQC signatures during transition. Verify either.
- Key rotation: Generate PQC keys for all users/services. Mark classical keys as deprecated.
- Interoperability testing: Validate with all counterparties, clients, and integrations
Phase 4: Classical Sunset (Months 18-24+)
Phase 4 Deliverables
- Deprecation enforcement: Log and alert on all classical algorithm usage. Set hard sunset dates.
- Classical removal: Remove classical algorithms from the "accepted" list. Reject classical-only connections.
- Re-encryption: For data at rest with classical key wrapping, re-wrap DEKs with PQC KEMs
- Continuous agility: The abstraction layer remains in place permanently. Future algorithm transitions (FIPS 206, HQC, etc.) follow the same pattern: add provider, update config, deprecate predecessor.
The final phase establishes a permanent operational capability. Crypto agility is not a project with an end date—it is an ongoing discipline. Your abstraction layer, monitoring, and configuration infrastructure will be used for every future algorithm transition, whether that is adopting FIPS 206, responding to a newly discovered attack, or meeting a new regulatory mandate.
Monitoring and Operational Readiness
A crypto-agile system without monitoring is a crypto-agile system in name only. You must continuously track algorithm usage to make informed migration decisions and detect anomalies.
What to Monitor
- Algorithm distribution: What percentage of operations use each algorithm? Track the migration curve over time.
- Deprecated algorithm usage: Alert when deprecated algorithms are still active. Escalate as sunset dates approach.
- Key age: Flag keys that have exceeded their rotation interval. Track keys approaching expiration.
- Negotiation failures: When two parties cannot agree on an algorithm, log the mismatch. This identifies clients that need upgrades.
- Performance per algorithm: Track latency and throughput for each algorithm. Detect performance regressions that might indicate implementation bugs or side-channel leakage.
- Error rates: Signature verification failures, decryption errors, and key-exchange failures broken down by algorithm.
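A minimal sketch of the instrumentation behind the first two items: count operations per algorithm and flag deprecated usage. The deprecated list here is hardcoded for illustration; in practice it would come from the same configuration that drives negotiation:

```typescript
const DEPRECATED = new Set(["rsa-pkcs1-v15", "ecdsa-p256"]);

class CryptoUsageMonitor {
  private counts = new Map<string, number>();
  deprecatedSeen: string[] = [];

  // Call once per crypto operation, tagged with the algorithm used.
  record(algorithm: string): void {
    this.counts.set(algorithm, (this.counts.get(algorithm) ?? 0) + 1);
    if (DEPRECATED.has(algorithm)) this.deprecatedSeen.push(algorithm);
  }

  // Share of operations using a given algorithm: the migration curve.
  share(algorithm: string): number {
    const total = Array.from(this.counts.values()).reduce((a, b) => a + b, 0);
    return total === 0 ? 0 : (this.counts.get(algorithm) ?? 0) / total;
  }
}
```

Exporting `share()` per algorithm as a time series gives you the PQC adoption-rate metric in the dashboard below directly.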
Dashboard Metrics
| Metric | Target | Alert Threshold |
|---|---|---|
| PQC adoption rate | 100% by sunset | <80% at T-6 months |
| Deprecated algo usage | 0% | >0% after sunset |
| Key rotation compliance | 100% | <95% |
| Negotiation success rate | >99.9% | <99% |
| Avg signature latency | <1ms | >5ms |
Crypto-Agile API Design: Complete Example
Bringing all the principles together, here is a complete example of a crypto-agile authentication API that supports algorithm negotiation, metadata embedding, and graceful deprecation.
```rust
use std::collections::HashMap;

/// Registry of all available crypto providers
pub struct CryptoRegistry {
    providers: HashMap<String, Box<dyn SignatureProvider>>,
    config: CryptoConfig,
}

impl CryptoRegistry {
    /// Select the best mutually-supported algorithm
    pub fn negotiate(
        &self,
        client_supported: &[String],
    ) -> Result<&dyn SignatureProvider, NegotiationError> {
        // Try primary first
        if client_supported.contains(&self.config.primary) {
            return Ok(self.providers[&self.config.primary].as_ref());
        }
        // Fall back through accepted list (ordered by preference)
        for algo in &self.config.accepted {
            if client_supported.contains(algo) {
                if self.config.deprecated.contains(algo) {
                    log_deprecation_warning(algo);
                }
                return Ok(self.providers[algo].as_ref());
            }
        }
        Err(NegotiationError::NoCommonAlgorithm)
    }

    /// Verify any signed attestation, regardless of algorithm
    pub fn verify_any(
        &self,
        attestation: &SignedAttestation,
        pk: &VerifyKey,
    ) -> Result<bool, VerifyError> {
        // Read algorithm from the attestation itself
        let provider = self.providers
            .get(&attestation.algorithm)
            .ok_or(VerifyError::UnknownAlgorithm)?;
        Ok(provider.verify(attestation, pk))
    }
}
```
This pattern—a registry of providers selected by configuration and negotiation—is the core of crypto agility. New algorithms are added by implementing the trait and registering the provider. Old algorithms are removed by moving them from "accepted" to "deprecated" to "removed" in configuration. Zero code changes to application logic.
The Bottom Line
The post-quantum transition is not a one-time event. It is the first of many cryptographic migrations your systems will face. SIKE's collapse proves that even carefully vetted algorithms can fail without warning. NIST's ongoing standardization of FIPS 206 and HQC proves the algorithm portfolio is still evolving. And the 2030/2035 deadlines in NIST IR 8547 prove the regulatory clock is already ticking.
Crypto agility is the only architecture that survives this environment. It requires upfront investment in abstraction layers, algorithm metadata, negotiation protocols, and monitoring infrastructure. But the alternative—hard-coded algorithms, missing metadata, and no migration path—means repeating a painful, multi-year rip-and-replace cycle every time the landscape shifts. And in a world where algorithms can be broken overnight, "multi-year" may be more time than you have.
Build the abstraction now. Inventory your algorithms. Deploy hybrid PQC. And make sure the next migration—whatever triggers it—is a configuration change, not an emergency.
H33 provides post-quantum authentication infrastructure built on crypto-agile principles. The FHE backend (BFV/CKKS), signature layer (ML-DSA/SLH-DSA), and key exchange (ML-KEM) are all independently swappable via configuration. One API call. ~50µs per authentication. Algorithm-agnostic by design.