HATS: Continuous Post-Quantum Attestation Explained
HATS is a publicly available technical conformance standard for continuous AI trustworthiness; certification under HATS provides independently verifiable evidence that a system satisfies the standard's defined controls. That sentence is the official definition of the H33 AI Trust Standard, and every word in it was chosen deliberately. This article is a deep dive into what HATS is, how it works, why it was designed the way it was, and what it means for organizations that adopt it.
The AI industry has a trust problem. Organizations are deploying AI systems that make decisions affecting people's lives, livelihoods, and liberties, and the people affected by those decisions have no way to verify that the systems are operating correctly, fairly, or safely. Trust is based on the AI vendor's assertions: "our system is accurate," "our system is fair," "our system is secure." But assertions are not evidence. HATS was created to bridge the gap between assertion and evidence by providing a framework for continuous, cryptographically verified, independently auditable proof of AI trustworthiness.
Why Existing Standards Are Not Sufficient
Before examining how HATS works, it is worth understanding why existing standards and frameworks do not adequately address the AI trust problem. There are several frameworks that address aspects of AI governance, including ISO 42001, the NIST AI Risk Management Framework, the EU AI Act, and various industry-specific guidelines. Each of these has value, but each has a fundamental limitation when it comes to providing verifiable evidence of AI trustworthiness.
The first limitation is temporal. Most existing frameworks operate on an annual or periodic assessment cycle. An organization is assessed at a point in time, certified as compliant, and then operates for months or years before the next assessment. During the interval between assessments, the organization's compliance status is assumed rather than verified. An AI system that was operating correctly at assessment time may have drifted, been updated, or been compromised by the time the next assessment occurs. The certification reflects a historical state, not a current one.
The second limitation is evidentiary. Existing frameworks rely on auditor judgment applied to documentation, interviews, and system demonstrations. The evidence is qualitative, not mathematical. An auditor may review a system's documentation and conclude that the controls are adequately designed. But "adequately designed" is a judgment, not a proof. Different auditors may reach different conclusions from the same evidence. The assessment depends on the auditor's expertise, methodology, and even their subjective interpretation of the standard's requirements.
The third limitation is verifiability. When an organization receives a certification under an existing framework, the certification is a statement by the certifying body that the organization met the standard's requirements at the time of assessment. But the underlying evidence is not independently verifiable. A customer, regulator, or insurer who wants to verify the organization's compliance cannot examine the evidence directly. They must trust the certifying body's judgment. The certification is an assertion about an assessment, not a verifiable proof of compliance.
The fourth limitation is scope. Existing AI governance frameworks focus primarily on policies, procedures, and organizational controls. They address questions such as "does the organization have a process for assessing AI fairness?" and "does the organization have governance structures for AI risk management?" These are important questions, but they do not address the more fundamental question: "is the AI system actually operating correctly right now?" HATS addresses this question by defining technical controls that can be continuously verified at the system level, not just the organizational level.
The HATS Architecture
HATS is built on three foundational principles: continuous verification, cryptographic attestation, and independent verifiability. Each principle addresses one or more of the limitations of existing frameworks.
Continuous verification means that every control defined by the standard is checked on an ongoing basis, not at annual or periodic intervals. The frequency of verification varies by control: some controls are verified with every operation, some are verified at regular intervals, and some are verified when specific events occur. But no control is verified only once per assessment cycle. The continuous nature of the verification eliminates the temporal gap between assessment and reality that plagues periodic assessment frameworks.
Cryptographic attestation means that every verification produces a mathematical proof rather than a log entry or a human judgment. Each verification generates an H33-74 attestation: a 74-byte cryptographic receipt that contains a commitment to the verification, a timestamp, and a chain link to the previous attestation. The attestation is signed with three-family post-quantum signatures. The attestation chain forms a tamper-evident record of every verification that has been performed. This eliminates the evidentiary limitation of existing frameworks by replacing qualitative judgments with mathematical proofs.
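The structure described above can be sketched in a few lines. This is a minimal illustration, not the real H33-74 wire format, which the article does not specify: the field layout below (2-byte version, 8-byte timestamp, 32-byte commitment, 32-byte chain link) is an assumption chosen only so the receipt comes to exactly 74 bytes, and the signatures are omitted.

```python
import hashlib
import struct
import time

# Hypothetical 74-byte layout (the real H33-74 format is not public in
# this article): 2-byte version + 8-byte timestamp + 32-byte commitment
# + 32-byte link to the previous attestation = 74 bytes.
ATTESTATION_SIZE = 74

def make_attestation(evidence: bytes, prev_attestation: bytes,
                     version: int = 1) -> bytes:
    """Build one attestation receipt committing to `evidence`."""
    commitment = hashlib.sha256(evidence).digest()          # 32 bytes
    chain_link = hashlib.sha256(prev_attestation).digest()  # 32 bytes
    header = struct.pack(">HQ", version, time.time_ns())    # 2 + 8 bytes
    att = header + commitment + chain_link
    assert len(att) == ATTESTATION_SIZE
    return att

genesis = b"\x00" * ATTESTATION_SIZE
a1 = make_attestation(b"control-check: FHE params OK", genesis)
a2 = make_attestation(b"control-check: model version pinned", a1)
```

Note how the chain link in `a2` is derived from the whole of `a1`, so any change to `a1` after the fact is detectable from `a2` alone.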
Independent verifiability means that any party with access to the public verification key can confirm that each attestation is authentic and that the chain is intact. The organization being assessed does not need to provide access to its systems. The certifying body does not need to be trusted. The evidence is self-proving. Any stakeholder, whether a customer, regulator, insurer, or partner, can independently verify that the standard's controls are being continuously satisfied. This eliminates the verifiability limitation of existing frameworks.
The Control Framework
The HATS standard defines a set of technical controls that collectively constitute a comprehensive measure of AI trustworthiness. These controls span several domains, and each control is defined with sufficient precision that its verification can be automated and attested.
The data protection domain covers controls related to the handling of data by the AI system. This includes controls for data encryption during processing, which verifies that the system uses fully homomorphic encryption to process data without decrypting it. It includes controls for data provenance, which verifies that the system maintains a cryptographic record of data origins and transformations. And it includes controls for data minimization, which verifies that the system accesses only the minimum data necessary for each operation.
The model integrity domain covers controls related to the AI model itself. This includes controls for model versioning, which verifies that the system uses a specific, attested version of the model. It includes controls for model drift detection, which verifies that the system continuously monitors for changes in model behavior that may indicate drift or degradation. And it includes controls for model output attestation, which verifies that every output produced by the model is cryptographically attested.
The operational security domain covers controls related to the security of the AI system's operational environment. This includes controls for authentication, which verifies that access to the system is continuously authenticated using post-quantum cryptographic methods. It includes controls for authorization, which verifies that every action within the system is authorized based on cryptographically verified permissions. And it includes controls for audit trail integrity, which verifies that the system's audit trail is maintained as a tamper-evident attestation chain.
The fairness and transparency domain covers controls related to the AI system's decision-making properties. This includes controls for decision explainability, which verifies that the system can produce explanations for its decisions that are consistent with the decision logic. It includes controls for bias monitoring, which verifies that the system continuously evaluates its outputs for statistical disparities across protected categories. And it includes controls for decision contestability, which verifies that the system provides a mechanism for challenging decisions and that challenges are processed and attested.
Each control in each domain is defined with a specific verification method, a specific verification frequency, and a specific attestation format. This precision is essential for automated verification and cryptographic attestation. A control that says "the system should protect data" cannot be automatically verified. A control that says "every computation on personal data must be performed using FHE with specific parameters, and each computation must produce an H33-74 attestation" can be automatically verified and attested.
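The difference between a vague control and a machine-verifiable one can be made concrete. The schema below is purely illustrative; the field names, the control ID "DP-01", and the frequency values are assumptions, not the official HATS control format.

```python
from dataclasses import dataclass

# Illustrative only: these fields and values are assumptions standing in
# for whatever the actual HATS control schema defines.
@dataclass(frozen=True)
class ControlDefinition:
    control_id: str
    verification_method: str   # what is checked, stated precisely
    frequency: str             # "per-operation" | "interval" | "on-event"
    attestation_format: str    # the receipt each check must emit

fhe_control = ControlDefinition(
    control_id="DP-01",
    verification_method="every computation on personal data uses FHE "
                        "with the parameter set pinned in policy",
    frequency="per-operation",
    attestation_format="H33-74",
)
```

Because every field is concrete, a verification agent can act on this definition without human interpretation: it knows what to check, how often, and what receipt to produce.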
Three-Family Post-Quantum Signatures
The attestations that HATS produces are signed with three-family post-quantum signatures. This is a critical design decision that requires explanation, because it directly affects the long-term trustworthiness of the attestation chain.
Classical digital signatures, such as RSA and ECDSA, are based on mathematical problems, integer factorization and discrete logarithms, that a sufficiently large quantum computer running Shor's algorithm can efficiently solve. When such quantum computers are available, classical signatures will no longer provide security guarantees. Any attestation chain signed with classical signatures will become vulnerable to forgery, which would destroy its evidentiary value.
Post-quantum signatures are based on mathematical problems that are believed to be resistant to quantum attacks. NIST has standardized ML-DSA (FIPS 204, based on module lattice problems) and SLH-DSA (FIPS 205, a stateless hash-based scheme), and has selected FALCON, based on NTRU lattices, for standardization as FN-DSA. Each algorithm rests on a different mathematical hardness assumption.
H33-74 uses three signature families rather than one. Each attestation is signed with signatures from all three families, where each family rests on a distinct mathematical hardness assumption. This means that forging an H33-74 attestation requires simultaneously breaking three distinct mathematical problems: the module lattice problems (MLWE and MLSIS) underlying ML-DSA, the NTRU lattice problem underlying FALCON, and the security of the hash functions underlying the stateless SLH-DSA scheme. If any one of these problems turns out to be easier than currently believed, whether due to advances in mathematics, classical computing, or quantum computing, the attestation remains secure because the other two families provide independent protection.
This three-family approach is not redundancy for its own sake. It is a deliberate response to the uncertainty inherent in post-quantum cryptography. The field is relatively young, and while the standardized algorithms have been extensively analyzed, the possibility that one mathematical family could be broken cannot be excluded. By using three independent families, HATS attestations remain trustworthy even in a scenario where one family is compromised. The probability that all three independent mathematical assumptions will be simultaneously broken is negligible by any reasonable assessment.
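The all-three-must-verify rule is the essential control flow here, and it can be sketched without a PQC library. In the sketch below, the three real schemes are replaced by keyed-hash stand-ins (an assumption made only so the example runs with the standard library); the point illustrated is the verification logic, not the cryptography.

```python
import hmac
import hashlib

# Stand-ins for the three real families; an actual implementation would
# call ML-DSA, FALCON/FN-DSA, and SLH-DSA signing primitives here.
FAMILIES = ("ML-DSA", "FALCON", "SLH-DSA")

def sign_three_family(keys: dict, message: bytes) -> dict:
    """Produce one signature per family over the same message."""
    return {fam: hmac.new(keys[fam], message, hashlib.sha256).digest()
            for fam in FAMILIES}

def verify_three_family(keys: dict, message: bytes, sigs: dict) -> bool:
    # Valid only if EVERY family's signature checks out, so a forger
    # must defeat all three schemes at once.
    return all(
        hmac.compare_digest(
            sigs[fam],
            hmac.new(keys[fam], message, hashlib.sha256).digest())
        for fam in FAMILIES
    )

keys = {fam: fam.encode() + b"-demo-key" for fam in FAMILIES}
msg = b"attestation payload"
sigs = sign_three_family(keys, msg)
```

The `all(...)` in the verifier is the design decision in miniature: a single surviving family is not enough to accept, but a single surviving family is enough to detect a forgery attempt against the other two.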
The Attestation Chain
Individual attestations are valuable, but the real power of the HATS framework comes from the attestation chain. Each attestation contains a chain link: a cryptographic reference to the previous attestation in the sequence. This chain link creates a tamper-evident sequence that has several important properties.
First, the chain is append-only. Once an attestation is added to the chain, it cannot be removed without breaking the chain link in the next attestation. This means that the complete history of verifications is preserved. There is no way to selectively remove attestations that show control failures or gaps.
Second, the chain is tamper-evident. If any attestation in the chain is modified, the chain link in the following attestation will not match, making the modification immediately detectable. This property extends to the entire chain: modifying any attestation invalidates every subsequent attestation, because each one references the previous one.
Third, the chain is verifiable from any point. A verifier can start at any attestation in the chain and verify backward to the genesis attestation, confirming the integrity of the entire chain. This means that verification does not require access to the system that generated the attestations. The chain itself contains all the information needed for verification.
Fourth, the chain provides a temporal record. Because each attestation includes a timestamp and is linked to the previous attestation, the chain provides a precise chronological record of every control verification. This temporal record is particularly valuable for compliance, because it shows not just that controls were in place at a specific point in time, but that they were continuously maintained over the entire period.
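The four properties above all follow from one mechanism: each entry carries a hash of its predecessor. The sketch below is a minimal hash chain under assumed conventions (a 32-byte SHA-256 link prefix and a `b"genesis"` sentinel are illustrative choices, not the H33-74 format), showing that a walk over the chain detects any modification without access to the system that produced it.

```python
import hashlib

def link(att: bytes) -> bytes:
    """32-byte link to a predecessor attestation."""
    return hashlib.sha256(att).digest()

def append(chain: list, payload: bytes) -> None:
    # Each entry = link to previous entry + this entry's payload.
    prev = chain[-1] if chain else b"genesis"
    chain.append(link(prev) + payload)

def verify(chain: list) -> bool:
    """Walk the chain from genesis; fail on any broken link."""
    prev = b"genesis"
    for att in chain:
        if att[:32] != link(prev):
            return False   # tampering or a removed entry detected here
        prev = att
    return True

chain: list = []
for payload in (b"check-1", b"check-2", b"check-3"):
    append(chain, payload)
```

Modifying `chain[1]` leaves entry 1 internally plausible, but entry 2's stored link no longer matches, which is exactly the "modifying any attestation invalidates every subsequent attestation" property.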
For an organization seeking HATS certification, the attestation chain is the certification evidence. The organization does not present documentation to an auditor and hope the auditor is satisfied. The organization presents an attestation chain that mathematically proves that every defined control was continuously verified throughout the certification period. The auditor's role shifts from evaluating evidence to verifying proofs, which is a more objective and reproducible process.
Continuous Versus Periodic: Why the Difference Matters
The shift from periodic to continuous verification is not just a quantitative change in the frequency of assessment. It is a qualitative change in the nature of the assurance provided.
Periodic assessment provides a snapshot. It answers the question: "Were the controls in place at the time of assessment?" This is useful but limited, because controls can be implemented for the assessment and relaxed afterward. The compliance industry has a term for this: "audit preparation." Organizations invest significant effort in preparing for audits, which often means bringing controls into compliance specifically for the audit rather than maintaining them continuously. Periodic assessment incentivizes this behavior because the consequences of non-compliance are concentrated at the assessment point.
Continuous verification provides a timeline. It answers the question: "Were the controls in place at every moment during the period?" This is fundamentally more valuable because it eliminates the incentive for audit preparation. There is no point in implementing controls only for the assessment when the assessment is continuous. Controls must be maintained at all times because the verification is happening at all times.
Continuous verification also provides a different kind of information about control failures. In a periodic assessment, a control failure is binary: the control was either in place or it was not at the time of assessment. In continuous verification, a control failure has duration and context. The attestation chain shows exactly when a control failed, how long the failure lasted, and when the control was restored. This temporal information is valuable for risk assessment, incident response, and regulatory reporting.
For insurers, the continuous nature of HATS attestation is particularly valuable. Instead of pricing risk based on a point-in-time assessment that may not reflect the organization's ongoing security posture, insurers can assess risk based on the continuous attestation record. Organizations that maintain controls consistently demonstrate lower risk than organizations with frequent gaps, even if both organizations pass periodic assessments.
The HATS Certification Process
The HATS certification process is designed to leverage the continuous nature of the standard. An organization that seeks HATS certification deploys the HATS verification infrastructure, which automatically verifies the defined controls and generates H33-74 attestations. The organization operates with the verification infrastructure in place for the certification period, during which the attestation chain accumulates evidence of control operation.
At the end of the certification period, the attestation chain is submitted for certification review. The review process is primarily automated: the attestation chain is verified for integrity, completeness, and compliance with the standard's requirements. Each attestation is verified to confirm that it is authentic, that it is properly chained, and that it represents a valid verification of the relevant control. The review identifies any gaps in the chain, any control failures, and any anomalies in the attestation pattern.
The certification decision is based on the complete attestation record for the certification period. Unlike periodic assessment, where the certification reflects the organization's posture at the time of assessment, HATS certification reflects the organization's posture throughout the entire certification period. This provides a much more accurate and meaningful representation of the organization's actual trustworthiness.
Certification is not binary. The attestation chain may show that the organization maintained all controls continuously, or it may show that certain controls experienced brief gaps that were promptly remediated. The certification report includes the complete attestation record, allowing stakeholders to make informed assessments based on the actual evidence rather than a pass/fail determination.
Implications for the AI Industry
HATS has implications that extend beyond the organizations that adopt it. As a publicly available standard with independently verifiable evidence, HATS has the potential to change how the entire AI industry approaches trust.
For AI vendors, HATS provides a way to differentiate based on verifiable trustworthiness rather than marketing claims. An AI vendor that can present a HATS attestation chain demonstrating continuous compliance with defined controls offers something qualitatively different from a vendor that can only offer assertions of trustworthiness. The attestation chain is objective evidence that anyone can verify. This raises the bar for all vendors in the market, because customers will increasingly expect verifiable evidence rather than assertions.
For AI customers, HATS provides a way to evaluate AI vendors based on objective, verifiable criteria. Instead of relying on vendor demonstrations, reference checks, and questionnaires, customers can examine the attestation chain directly. This reduces information asymmetry between vendors and customers and enables more informed purchasing decisions.
For regulators, HATS provides a compliance framework that is amenable to automated enforcement. Instead of examining documentation and conducting interviews, regulators can verify attestation chains. This makes regulatory oversight more scalable, more objective, and less dependent on the expertise of individual examiners. As AI regulation becomes more stringent, the ability to verify compliance automatically will become increasingly valuable.
For insurers, HATS provides a continuous, verifiable signal of AI system trustworthiness that can be incorporated into underwriting and claims processes. The attestation chain provides the kind of objective, real-time risk information that enables more accurate premium pricing, more efficient claims validation, and more effective portfolio risk management.
The Technical Foundation
HATS is built on H33's production cryptographic infrastructure. The FHE pipeline that enables data processing without exposure operates at production scale. The H33-74 attestation system generates and verifies attestations at microsecond latencies. The three-family post-quantum signature system provides defense in depth against both classical and quantum cryptographic attacks. The attestation chain verification system can process and verify chains of arbitrary length.
This technical foundation is not theoretical. It is operational and has been validated at scale. The FHE pipeline processes over a million authentications per second on production hardware. The attestation system generates H33-74 proofs as part of every operation. The signature verification system has been tested against NIST test vectors and validated against independent implementations. The entire system operates as an integrated pipeline, not as a collection of disconnected components.
For organizations considering HATS adoption, the technical requirements are designed to be achievable. The HATS verification infrastructure integrates with existing systems through APIs and middleware. The attestation generation adds minimal latency to existing operations. The attestation chain can be stored using standard infrastructure. The verification tools are available for any stakeholder who needs to verify the chain.
What Comes Next
HATS is not a static standard. It is designed to evolve as the AI industry matures, as new risks emerge, and as new verification capabilities become available. The control framework will be updated to address new categories of AI risk as they are identified. The cryptographic foundations will be updated as post-quantum standards evolve. The verification methods will be extended to cover new types of AI systems and new deployment architectures.
But the foundational principles of continuous verification, cryptographic attestation, and independent verifiability will remain. These principles are not specific to any particular technology or any particular moment in the evolution of AI. They are the minimum requirements for a trust framework that produces actual evidence rather than assertions. They are the minimum requirements for a certification that means something to anyone who examines it, not just the auditor who issued it.
The AI industry will either develop meaningful, verifiable trust standards or it will face increasingly restrictive regulation imposed by authorities who have lost patience with self-governance. HATS represents the industry's best opportunity to demonstrate that verifiable trust is possible, that continuous compliance is achievable, and that cryptographic proof can replace qualitative judgment as the foundation for AI trustworthiness.
The standard is publicly available. The technology is operational. The certification process is defined. The question for every organization that deploys AI systems is whether they will adopt verifiable trust before their customers, regulators, and insurers require it, or after.
Explore HATS Certification
Schedule a technical demonstration to see how HATS provides continuous, cryptographically verified, independently auditable proof of AI trustworthiness through H33-74 post-quantum attestation.
Schedule a Demo