Open Standard — Published 2026-03-17

H33 AI Trust Standard (HATS) v1.0

The first certification framework for AI trustworthiness backed by cryptographic proof rather than self-attestation. Open, versioned, and submitted to NIST.

  • 3 certification tiers
  • 40+ requirements
  • 8 regulatory frameworks mapped
  • 30-year evidence validity (Tier 3)
Document ID: HATS-1.0-2026
Status: Published
Effective Date: 2026-03-17
Issuing Authority: H33, Inc.
Classification: Public
Version: 1.0
Abstract

The H33 AI Trust Standard (HATS) v1.0 defines a three-tiered certification framework for evaluating and attesting to the trustworthiness of artificial intelligence systems that process sensitive data, make consequential decisions, or operate within regulated industries. HATS establishes verifiable, cryptographically enforceable requirements across three orthogonal trust dimensions: governance proof, data separation, and audit permanence.

Unlike guidance frameworks that describe aspirational properties, HATS is a conformance standard. A system either satisfies the requirements of a given tier or it does not. Compliance is determined by continuous automated monitoring augmented by cryptographic attestation, not by periodic manual review. Evidence of compliance is machine-verifiable by any third party without access to the certifying authority's infrastructure.

HATS is designed for interoperability with the NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 42001:2023, the EU Artificial Intelligence Act (Regulation 2024/1689), SOC 2 Trust Services Criteria, HIPAA, SOX Section 404, GDPR, CCPA/CPRA, FDA 21 CFR Part 11, and FCA UK principles.

Certification Tiers

Three Levels of AI Trustworthiness

Each tier is strictly additive: a higher tier includes all requirements of the lower tiers. Partial compliance results in certification at the highest tier for which all requirements are met.

Tier 1

Governed AI

Continuous, cryptographically attested governance. Every inference is traceable to a policy, every decision is recorded, and violations are detected and remediated within defined time bounds.

  • Every inference governed by signed, versioned policy
  • Decision records with SHA3-256 hashes for every production inference
  • STARK zero-knowledge proofs binding policy, decision, and timestamp
  • Merkle tree compression at 60-second intervals
  • 100% inference monitoring (no sampling)
  • 60-second max detection gap
  • 72-hour policy gap remediation window
  • 7-year minimum evidence retention
Tier 2

Privacy-Protected AI

All Tier 1 requirements plus cryptographic guarantees that sensitive data is not accessible to the AI model in plaintext form.

  • All Tier 1 requirements
  • FHE, proxy redaction, or TEE for all sensitive data
  • BFV N≥4096, Q≥2^56 minimum FHE parameters
  • ML-DSA-65+ data blindness attestation per inference
  • ZKP proof of data separation per inference
  • 99.5% sensitive field detection rate
  • Differential privacy budget enforcement
  • Bypass triggers immediate suspension
  • 10-year minimum evidence retention
Tier 3

Quantum-Secured AI

All Tier 1 and Tier 2 requirements plus post-quantum cryptographic guarantees for all signatures, key exchanges, and evidentiary records with a 30-year validity horizon.

  • All Tier 1 + Tier 2 requirements
  • ML-DSA (FIPS 204) for ALL digital signatures
  • ML-KEM (FIPS 203) for ALL key encapsulation
  • No classical-only cryptographic algorithms
  • 30-year evidentiary validity horizon
  • Behavioral fingerprinting with hourly drift scores
  • Compliance Oracle for agentic workflows
  • On-chain Merkle root anchoring (hourly)
  • 30-year minimum evidence retention
Full Standard Document

Complete Specification

Each section of the HATS v1.0 standard is presented below.

1 Purpose and Scope

1.1 Purpose

HATS defines what it means for an AI system to be certified as trustworthy. It provides a machine-verifiable, cryptographically grounded specification that enables organizations to demonstrate -- and third parties to independently confirm -- that an AI system meets defined trustworthiness requirements on a continuous basis.

HATS certifies trustworthiness across three layers:

  1. Governance Proof. Every inference executed by the AI system is governed by a valid, versioned, cryptographically signed policy, and every decision is recorded with a zero-knowledge proof binding the policy, the decision, and the time of execution.
  2. Data Separation. Sensitive data is cryptographically protected before it reaches the AI model. The system can prove, via zero-knowledge attestation, that no plaintext sensitive data was accessible to the model at inference time.
  3. Audit Permanence. The evidentiary record of the system's behavior is cryptographically signed, Merkle-compressed, and retained for a period sufficient to satisfy legal and regulatory requirements, including under post-quantum cryptographic assumptions where required.

1.2 Scope

This standard applies to any AI system that satisfies one or more of the following conditions:

  • Processes personally identifiable information (PII), protected health information (PHI), financial data, or legally privileged information.
  • Makes or materially contributes to decisions affecting individuals' rights, access to services, employment, credit, insurance, healthcare, or legal standing.
  • Operates in an industry subject to regulatory oversight, including healthcare, financial services, insurance, legal services, government, defense, and critical infrastructure.
  • Is deployed as a component in a multi-agent or agentic AI workflow where intermediate reasoning steps are not directly observable by a human operator.

This standard does not certify model accuracy, output quality, fairness, or bias. HATS certifies the operational trustworthiness of the system in which a model operates -- the governance, privacy, and evidentiary integrity of the system's behavior over time.

1.3 Relationship to Existing Standards

Standard | Relationship to HATS
NIST AI RMF 1.0 | HATS implements the GOVERN, MAP, MEASURE, and MANAGE functions with concrete cryptographic requirements.
ISO/IEC 42001:2023 | HATS provides auditable evidence artifacts that satisfy ISO 42001 Annex A controls.
EU AI Act (2024/1689) | HATS Tier 2+ satisfies transparency, data governance, and record-keeping obligations for high-risk AI systems.
SOC 2 TSC | HATS evidence artifacts map to CC6 (Logical and Physical Access), CC7 (System Operations), CC8 (Change Management), and the Privacy criteria.
HIPAA | HATS Tier 2+ satisfies the technical safeguard requirements of 45 CFR 164.312.

1.4 Normative Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

2 Normative References

  • NIST AI 100-1: Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023
  • NIST FIPS 203: Module-Lattice-Based Key-Encapsulation Mechanism Standard (ML-KEM), August 2024
  • NIST FIPS 204: Module-Lattice-Based Digital Signature Standard (ML-DSA), August 2024
  • NIST FIPS 202: SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions, August 2015
  • ISO/IEC 42001:2023: Information technology -- Artificial intelligence -- Management system
  • Regulation (EU) 2024/1689: Artificial Intelligence Act
  • RFC 2119: Key words for use in RFCs to Indicate Requirement Levels
  • AICPA SOC 2 Trust Services Criteria (2017, updated 2022)
  • Fan, J. and Vercauteren, F.: Somewhat Practical Fully Homomorphic Encryption, IACR ePrint 2012/144
  • Ben-Sasson, E. et al.: Scalable, transparent, and post-quantum secure computational integrity (STARKs), IACR ePrint 2018/046

3 Definitions

Term | Definition
AI System | A machine-based system that infers from input how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (EU AI Act Article 3(1)).
AI Trustworthiness | The property whereby an AI system's governance, data handling, and evidentiary record can be independently verified by a third party using cryptographic proofs.
Behavioral Fingerprint | A statistical characterization of an AI model's behavior (response length, latency, semantic consistency, edge case behavior), hashed using SHA3-256, used as a baseline for drift detection.
Bypass | Any event where plaintext sensitive data reaches the AI model's input layer without required encryption/redaction/enclave processing. Immediate compliance violation.
Certificate | A digitally signed attestation that a specific AI system satisfies the requirements of a specified HATS tier, subject to continuous monitoring.
Compliance Oracle | A cryptographic attestation service evaluating each step in agentic AI workflows against governing policy, producing a ZKP of compliance per step. Required for Tier 3.
Compliance Score | Numerical value (0-100) computed from governance proof completeness, data separation coverage, monitoring continuity, and remediation timeliness.
Data Blindness | The property whereby the AI model never has access to plaintext sensitive data, achieved through FHE, proxy redaction, or TEE processing.
Decision Record | A structured evidence artifact documenting a single AI inference: input hash, output hash, governing policy hash, timestamp, and zero-knowledge proof.
Drift Score | Numerical value (0.0-1.0) computed as a weighted combination of statistical divergences across behavioral fingerprint dimensions.
Evidence Artifact | Any cryptographic proof, signature, hash, Merkle root, or structured data record produced by the monitoring system to attest to a compliance property.
Governance Proof | A zero-knowledge proof demonstrating that a specific AI decision was executed under authority of a specific, versioned, signed policy.
Post-Quantum | Resistant to cryptanalysis by both classical and quantum computers. Limited to NIST-standardized algorithms: ML-DSA (FIPS 204) and ML-KEM (FIPS 203).
Proof Bundle | A structured data object containing all evidence artifacts for a defined time period, formatted per Appendix A schema.
Proof Chain | Ordered sequence of ZKPs for multi-step AI workflows, aggregated into a Merkle tree signed with ML-DSA.
Sensitive Data | SSN, email, phone, DOB, medical codes, financial account numbers, legal privilege markers, biometric identifiers, or operator-designated sensitive fields.

4.1 Tier 1 -- Governed AI

4.1.1 Policy Binding

REQ-1.1

Every AI inference MUST be governed by a valid, versioned, cryptographically signed policy. The policy MUST be signed by an authorized policy administrator using a digital signature algorithm with a minimum security level of 128 bits.

REQ-1.2

The policy version and policy hash MUST be bound to each decision record at inference time. The binding MUST be performed before the inference result is returned to the caller.

REQ-1.3

Policy versions MUST be monotonically increasing. A policy with a lower version number MUST NOT be applied after a policy with a higher version number has been activated.
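As a sketch of how a deployment might enforce REQ-1.3, the guard below rejects activation of any policy whose version does not strictly exceed the currently active one. The `PolicyRegistry` name and in-memory state are illustrative assumptions, not part of the standard; a real registry would also persist versions and verify the administrator's signature (REQ-1.1, REQ-1.4).

```python
class PolicyRegistry:
    """Guard for REQ-1.3: policy versions are monotonically increasing."""

    def __init__(self) -> None:
        self.active_version = 0  # no policy active yet

    def activate(self, version: int) -> None:
        # A lower (or equal) version MUST NOT be applied after a higher one.
        if version <= self.active_version:
            raise ValueError(
                f"policy version {version} does not exceed "
                f"active version {self.active_version}"
            )
        self.active_version = version
```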

REQ-1.4

Policy updates MUST be logged as distinct evidence artifacts, including the old policy hash, the new policy hash, the update timestamp, and the authorizing administrator's signature.

4.1.2 Decision Records

REQ-1.5

A decision record MUST be created for every production inference. Statistical sampling is not permitted. Each record MUST contain: a unique decision ID, the SHA3-256 hash of the input, the SHA3-256 hash of the output, the SHA3-256 hash of the governing policy, a millisecond-precision timestamp (NTP stratum 2 or better), and a zero-knowledge proof.

REQ-1.6

Each decision record MUST include a zero-knowledge proof: ZKP = STARK_prove(H(policy_hash || decision_hash || timestamp)) where H is SHA3-256. The proof MUST be verifiable by any party possessing the public verification parameters.

REQ-1.7

Decision records MUST be compressed into a SHA3-256 Merkle tree, recomputed at intervals not exceeding 60 seconds. The Merkle root MUST be signed by the monitoring system.
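REQ-1.7's Merkle compression can be sketched with Python's standard `hashlib`. The pairing convention (duplicating the last node on odd levels) is an implementation assumption; the standard fixes only the SHA3-256 hash and the 60-second interval.

```python
import hashlib

def sha3(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_root(records: list[bytes]) -> bytes:
    """SHA3-256 Merkle root over serialized decision records (REQ-1.7 sketch)."""
    if not records:
        raise ValueError("cannot build a Merkle tree over zero records")
    level = [sha3(r) for r in records]          # leaf hashes
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])             # pad odd levels (assumption)
        level = [sha3(level[i] + level[i + 1])  # hash adjacent pairs
                 for i in range(0, len(level), 2)]
    return level[0]
```

The resulting 32-byte root is what the monitoring system signs each interval.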

4.1.3 Monitoring and Detection

REQ-1.8 through REQ-1.10

100% of production inferences MUST be monitored; statistical sampling is not permitted. Maximum detection gap: 60 seconds. Alerts MUST be generated within 5 minutes of any governance violation.

4.1.4 Evidence Retention

REQ-1.11 / REQ-1.12

All evidence artifacts MUST be retained for a minimum of 7 years, integrity-protected using SHA3-256 hash chains. Any tampering MUST be detectable.
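One way to satisfy the SHA3-256 hash-chain requirement is to link each artifact to the digest of its predecessor, so tampering with any artifact invalidates every later link. The genesis value and function names below are illustrative assumptions.

```python
import hashlib

GENESIS = b"\x00" * 32  # chain starting value: an illustrative assumption

def chain_digest(prev: bytes, artifact: bytes) -> bytes:
    """Link one serialized evidence artifact to its predecessor's digest."""
    return hashlib.sha3_256(prev + artifact).digest()

def verify_chain(artifacts: list[bytes], digests: list[bytes]) -> bool:
    """Recompute the hash chain; a tampered artifact changes its own digest
    and every digest after it, making the tampering detectable (REQ-1.12)."""
    prev = GENESIS
    for artifact, expected in zip(artifacts, digests):
        prev = chain_digest(prev, artifact)
        if prev != expected:
            return False
    return True
```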

4.1.5 Remediation

REQ-1.13 through REQ-1.16

Policy gaps MUST be remediated within 72 hours. Active violations within 24 hours. All remediation actions recorded as evidence artifacts. Certificate suspended if compliance score < 70 for 48+ continuous hours.

4.2 Tier 2 -- Privacy-Protected AI

4.2.1 Data Encryption

REQ-2.1 / REQ-2.2

ALL sensitive data fields MUST be encrypted or redacted before reaching the AI model. Acceptable mechanisms: (a) Fully Homomorphic Encryption (BFV, N≥4096, t≡1 mod 2N, Q≥2^56); (b) Proxy Redaction with ML-DSA (Dilithium)-signed attestation; (c) Trusted Execution Environment with hardware attestation renewed every 24 hours.

4.2.2 FHE Wrapper Integrity

REQ-2.4 / REQ-2.5

FHE wrapper MUST be continuously monitored for integrity. Any detected or suspected bypass triggers immediate certificate suspension. Suspension is automatic; no manual intervention required.

4.2.3 Data Blindness Attestation

REQ-2.6 through REQ-2.8

A data blindness attestation MUST be produced for every inference, signed using ML-DSA-65 or higher. Must include: inference identifier, encryption mechanism, hash of encrypted input, ZKP of data separation, timestamp, and signing key ID.

4.2.4 Sensitive Field Detection

REQ-2.9 / REQ-2.10

Automated detection of SSN, email, phone, DOB, medical codes, financial account numbers, and legal privilege markers. Minimum 99.5% detection rate per category, tested every 30 days.

4.2.5 Differential Privacy

REQ-2.11 through REQ-2.13

Systems performing aggregate analytics MUST establish an (epsilon, delta) privacy budget. When exhausted, further queries MUST be blocked at the data access layer, not just the application layer.
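A minimal sketch of budget enforcement, assuming simple additive composition of per-query epsilon costs (the standard does not fix a composition rule); once the budget is exhausted, the query is refused rather than answered.

```python
class PrivacyBudget:
    """Sketch of REQ-2.11..REQ-2.13: an (epsilon, delta) privacy budget
    enforced before data access. Additive composition is an assumption."""

    def __init__(self, epsilon: float, delta: float) -> None:
        self.remaining_epsilon = epsilon
        self.remaining_delta = delta

    def spend(self, eps: float, delta: float = 0.0) -> None:
        # Block at the data access layer once the budget is exhausted.
        if eps > self.remaining_epsilon or delta > self.remaining_delta:
            raise PermissionError("privacy budget exhausted; query blocked")
        self.remaining_epsilon -= eps
        self.remaining_delta -= delta
```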

4.3 Tier 3 -- Quantum-Secured AI

4.3.1 Post-Quantum Cryptography

REQ-3.1 through REQ-3.3

ALL signatures MUST use ML-DSA (FIPS 204), minimum ML-DSA-65 (Level 3). ALL key encapsulation MUST use ML-KEM (FIPS 203), minimum ML-KEM-768. Classical-only algorithms (RSA, ECDSA, ECDH, X25519) MUST NOT be used as sole protection. Hybrid schemes permitted during 24-month transition period.

4.3.2 Evidentiary Validity

REQ-3.4 / REQ-3.5

All evidence artifacts MUST provide minimum 30-year evidentiary validity. Evidence for legal proceedings MUST include complete proof bundle, signing algorithm, public key, timestamp, and sufficient context for independent verification.

4.3.3 Model Behavioral Fingerprinting

REQ-3.6 through REQ-3.9

Behavioral fingerprint MUST be captured at certification time. Drift score computed hourly. Thresholds: 0.00-0.14 (normal), 0.15-0.29 (warning), 0.30-0.49 (suspended), 0.50-1.00 (revoked). Any model change without re-certification triggers immediate suspension.

Drift Score | Action
0.00 - 0.14 | No action required. Normal operational variance.
0.15 - 0.29 | Warning state. Investigation required within 72 hours.
0.30 - 0.49 | Suspended. Immediate investigation required.
0.50 - 1.00 | Revoked. Re-certification required.

4.3.4 Compliance Oracle for Agentic Workflows

REQ-3.10 through REQ-3.13

Multi-step/agentic AI workflows MUST deploy a Compliance Oracle. Each step evaluated against policy with ZKP of compliance before proceeding. Per-step proofs aggregated into a Proof Chain (Merkle tree signed with ML-DSA). Failed steps halt the workflow.

4.3.5 On-Chain Audit Trail

REQ-3.14 through REQ-3.16

Merkle roots MUST be anchored to a public Layer 1 blockchain (Solana recommended) at intervals not exceeding 1 hour. Records must be independently verifiable by any party with blockchain access and the H33 public verification key.

5 Monitoring Requirements

5.1 Sampling Rate

100% of production inferences MUST be monitored across all tiers; statistical sampling is prohibited. If the monitoring system is unable to process an inference, the event MUST be flagged. If unmonitored inferences exceed 0.1% in any 24-hour period, the compliance score is reduced by at least 10 points.

5.2 Evidence Artifact Types

Artifact Type | Description | Tier
Decision Record | Structured record of a single inference | 1+
Governance ZKP | ZKP binding policy, decision, and timestamp | 1+
Merkle Root | SHA3-256 root of decision record tree | 1+
Policy Record | Signed record of policy version, hash, activation | 1+
Alert Record | Detected violation, severity, and timestamp | 1+
Remediation Record | Corrective action taken | 1+
Data Blindness Attestation | ML-DSA-signed attestation of data separation | 2+
Data Separation ZKP | ZKP that model input has no plaintext sensitive data | 2+
FHE Integrity Record | Monitoring record of FHE wrapper status | 2+
Privacy Budget Record | Cumulative differential privacy budget consumption | 2+
Behavioral Fingerprint | Baseline and periodic model behavior characterization | 3
Drift Score Record | Periodic drift measurement result | 3
Proof Chain | Merkle tree of per-step ZKPs for agentic workflows | 3
On-Chain Anchor | Transaction hash and block reference for anchoring | 3

5.3 Detection Latency

Tier | Max Detection Gap | Max Alert Latency
Tier 1 | 60 seconds | 5 minutes
Tier 2 | 30 seconds | 2 minutes
Tier 3 | 15 seconds | 1 minute

5.5 Compliance Score Computation

Score = w_g * G + w_d * D + w_m * M + w_r * R

Where G = governance completeness (0-100), D = data separation coverage (0-100), M = monitoring continuity (0-100), R = remediation timeliness (0-100).

Tier | w_g | w_d | w_m | w_r
Tier 1 | 0.50 | 0.00 | 0.30 | 0.20
Tier 2 | 0.30 | 0.30 | 0.25 | 0.15
Tier 3 | 0.25 | 0.25 | 0.30 | 0.20
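Using the per-tier weights above, the score is a direct weighted sum. A minimal sketch (the function name is illustrative):

```python
# Per-tier weights (w_g, w_d, w_m, w_r) from the Section 5.5 table.
WEIGHTS = {
    1: (0.50, 0.00, 0.30, 0.20),
    2: (0.30, 0.30, 0.25, 0.15),
    3: (0.25, 0.25, 0.30, 0.20),
}

def compliance_score(tier: int, g: float, d: float, m: float, r: float) -> float:
    """Score = w_g*G + w_d*D + w_m*M + w_r*R, each component in [0, 100]."""
    w_g, w_d, w_m, w_r = WEIGHTS[tier]
    return w_g * g + w_d * d + w_m * m + w_r * r
```

Note that at Tier 1, data separation coverage carries zero weight, so it cannot lower a Tier 1 score; it begins to count only at Tier 2 and above.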

6 Data Blindness Technical Specification

6.1 FHE Parameter Requirements

Parameter | Minimum | Recommended
Scheme | BFV | BFV
Polynomial degree (N) | 4,096 | 4,096 or 8,192
Plaintext modulus (t) | t ≡ 1 (mod 2N) | t = 65537 (for N = 4096)
Ciphertext modulus (Q) | ≥ 2^56 (single) | CRT, product ≥ 2^110
Security level | ≥ 128 bits (classical) | ≥ 128 bits classical + quantum (Tier 3)

6.2 ZKP Requirements for Data Separation

REQ-6.3 / REQ-6.4

Acceptable constructions: (a) STARK proofs using SHA3-256 (recommended for Tier 3), (b) SNARK proofs (Groth16, PLONK) with MPC ceremony of 100+ participants (acceptable for Tier 1/2). ZKP MUST be verifiable in under 10 milliseconds on commodity hardware.

6.3 Proxy Redaction Requirements

Synthetic tokens MUST be of the same data type and approximate length as the originals, MUST NOT be derivable from the originals, MUST be consistent within a single inference, and MUST be reversible only by the redaction service. The token-to-original mapping MUST be stored in an encrypted key-value store inaccessible to the AI model.
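The consistency and reversibility properties can be sketched as a small mapping service. The class name and token format are illustrative assumptions, and a production redaction service would also need type- and length-preserving token synthesis plus the signed attestation described in Section 4.2.1.

```python
import secrets

class RedactionService:
    """Sketch of the Section 6.3 properties: synthetic tokens that are
    consistent within one inference and reversible only via the service's
    private mapping."""

    def __init__(self) -> None:
        self._token_for: dict[str, str] = {}     # original -> token
        self._original_for: dict[str, str] = {}  # token -> original (service-only)

    def redact(self, value: str, kind: str) -> str:
        # Same value within one inference always maps to the same token.
        if value not in self._token_for:
            token = f"<{kind}-{secrets.token_hex(4)}>"  # format is illustrative
            self._token_for[value] = token
            self._original_for[token] = value
        return self._token_for[value]

    def restore(self, token: str) -> str:
        # Reversible only by the service, which holds the private mapping.
        return self._original_for[token]
```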

6.4 TEE Requirements

Acceptable technologies: Intel SGX with DCAP, AWS Nitro Enclaves, ARM TrustZone. Enclave measurement verified against known-good reference at startup and every 24 hours. Attestation documents included in evidence bundle.

7 Model Integrity Specification

7.1 Behavioral Fingerprint Components

Dimension | Description | Weight | Saturation
Response Length Distribution | Statistical distribution of output token counts over reference input set | 0.15 | 0.5
Latency Profile | Distribution of inference latency over reference input set | 0.10 | 0.5
Semantic Consistency | Cosine similarity of outputs at certification time vs. measurement time | 0.50 | 0.2
Edge Case Behavior | Behavior on curated edge case inputs probing boundary conditions | 0.25 | 0.3

7.2 Fingerprint Hash Construction

fingerprint_hash = SHA3-256(
  normalize(dim1_stats) ||
  normalize(dim2_stats) ||
  normalize(dim3_stats) ||
  normalize(dim4_stats)
)

7.3 Drift Measurement

drift_score = sum(w_i * (2 * sigmoid(d_i / s_i) - 1)) for i in {1,2,3,4}

Where d_i is the raw divergence for dimension i, s_i is the saturation parameter, w_i is the weight, and sigmoid(x) = 1 / (1 + exp(-x)). Because each d_i is non-negative, the shifted term 2 * sigmoid(x) - 1 maps divergences onto [0.0, 1.0), so zero divergence yields a drift score of 0.0 and the aggregate score lies in [0.0, 1.0].
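A sketch of the computation, using the weights and saturation parameters from Section 7.1. The 2*sigmoid(x) - 1 rescale is an editorial reading of "normalized": since raw divergences are non-negative, a plain sigmoid would bottom out at 0.5 and could never satisfy the 0.00-0.14 "normal" threshold.

```python
import math

# Weights (w_i) and saturation parameters (s_i) from Section 7.1.
DIMENSIONS = [
    ("response_length", 0.15, 0.5),
    ("latency", 0.10, 0.5),
    ("semantic_consistency", 0.50, 0.2),
    ("edge_case", 0.25, 0.3),
]

def drift_score(divergences: dict[str, float]) -> float:
    """drift_score = sum(w_i * (2*sigmoid(d_i/s_i) - 1)) over four dimensions."""
    def squash(x: float) -> float:
        # Maps non-negative divergence onto [0, 1): zero divergence -> 0.0.
        return 2.0 / (1.0 + math.exp(-x)) - 1.0

    return sum(w * squash(divergences[name] / s) for name, w, s in DIMENSIONS)
```

Since the weights sum to 1.0, the score approaches 1.0 as all four divergences saturate.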

7.4 Re-certification After Model Update

After any model change, re-certification requires submission of a new behavioral fingerprint. If drift score exceeds 0.50: full re-certification. Between 0.15 and 0.50: expedited re-certification. Below 0.15: administrative update only.

8 Evidence Format and Retention

8.1 Proof Bundle Format

All evidence packaged in Proof Bundles conforming to the JSON schema (Appendix A). Contains: HATS version, bundle ID (UUID v4), certificate ID, tier, time range, Merkle root (hex SHA3-256), signature (algorithm + key ID + base64 value), artifacts array, and optional on-chain anchor.

8.2 Retention Periods

Tier | Minimum Retention
Tier 1 | 7 years
Tier 2 | 10 years
Tier 3 | 30 years

8.3 Export Formats

  • JSON Proof Bundle: Canonical format as specified in Section 8.1.
  • Signed PDF: Human-readable report with embedded digital signature, Merkle root, certificate status, and independent verification instructions. Tier 3 uses PQ-signed hash reference.
  • SIEM-Compatible Event Stream: Structured events in CEF, LEEF, or JSON over syslog RFC 5424 format.
REQ-8.1 through REQ-8.3

For Tier 3, all retained evidence MUST be protected with post-quantum signatures. Evidence MUST be stored in at least two geographically distinct locations with integrity verification at the storage layer.

9 Certificate Lifecycle

9.1 Certificate States

State | Description
Active | All requirements satisfied. Certificate valid and presentable to third parties.
Warning | One or more warning conditions triggered. Certificate remains valid but warning state visible to verifiers.
Suspended | One or more suspension conditions triggered. Certificate NOT valid. System must not represent itself as HATS-certified.
Revoked | Certificate permanently invalidated. Re-certification from the beginning required.

9.2 Key State Transitions

Active to Warning: Compliance score 70-80 for 24+ hours, single policy gap detected, drift score >0.15 (Tier 3), or monitoring gap exceeding threshold.

Warning to Active: All warning conditions resolved, compliance score above 80 for 24+ hours, all remediation records filed.

To Suspended: Compliance score <70 for 48+ hours, FHE bypass detected (Tier 2+), drift score >0.30 (Tier 3), model change without re-certification, unresolved violation >72 hours, or privacy budget exhaustion failure.

Suspended to Active: Root cause analysis, evidence of remediation, 30-day observation period (score >80), Issuing Authority approval.

To Revoked: Compliance score <50 for 7+ days, confirmed data exposure, or certificate fraud.

9.3 Expiration and Renewal

REQ-9.1 through REQ-9.3

Certificates expire 90 days from issuance or last renewal. Automatic renewal if: Active state, score never below 80 in preceding 90 days, all monitoring operational, no unresolved violations. Otherwise, renewal application required (expedited re-certification).

9.4 Appeals Process

System operators may appeal suspension/revocation within 15 business days. Independent review panel (3+ members not involved in original decision) issues a binding decision within 30 business days. Certificate remains in current state during appeal.

10 Verification Protocol

REQ-10.1

Any third party MUST be able to verify a HATS certificate without access to the Issuing Authority's internal systems, without an account or API key, and without the cooperation of the certificate holder.

Verification Inputs

  1. The certificate identifier (unique string assigned at issuance)
  2. The H33 public verification key (published at well-known URL and on blockchain registry)
  3. Access to the on-chain registry (Tier 3) or public verification API (all tiers)

Verification Procedure

  1. Retrieve certificate record from public verification API or on-chain registry
  2. Verify ML-DSA signature using H33 public verification key
  3. Check expiration against current date
  4. Check revocation via revocation registry
  5. Verify on-chain state (Tier 3): confirm Merkle root anchor on blockchain
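Steps 2 through 4 of the procedure reduce to a pure check over the fetched certificate record. In the sketch below, the record field names and the `signature_ok` stand-in are assumptions; a real verifier would use a post-quantum signature library for the ML-DSA check (step 2) and a blockchain client for step 5.

```python
from datetime import date

def check_certificate(record: dict, revoked_ids: set[str],
                      today: date, signature_ok: bool) -> str:
    """Pure sketch of steps 2-4 of the Section 10 verification procedure."""
    if not signature_ok:
        return "invalid-signature"          # step 2: ML-DSA check failed
    if date.fromisoformat(record["expiry"]) < today:
        return "expired"                    # step 3: expiration check
    if record["certificate_id"] in revoked_ids:
        return "revoked"                    # step 4: revocation registry
    return record.get("status", "active")   # state as reported by the registry
```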

Verification API

Unauthenticated, rate-limited (100 req/min per IP), 99.9% uptime SLA, JSON response format. Response includes: certificate ID, status, tier, system name, holder, issue/expiry dates, frameworks covered, last audit event, compliance score, on-chain hash (Tier 3), and ML-DSA signature.

11 Regulatory Framework Mapping

HATS maps to 8 regulatory frameworks. The following consolidated cross-reference shows coverage per tier.

Framework | Provision | HATS Coverage | Min. Tier
NIST AI RMF | GOVERN (Policies, Accountability) | REQ-1.1 through REQ-1.4: cryptographically signed, versioned policies bound to every inference | Tier 1
NIST AI RMF | MEASURE (Metrics, Evaluation) | REQ-3.6 through REQ-3.8, REQ-7.1: behavioral fingerprinting with continuous drift monitoring | Tier 3
EU AI Act | Art. 9 (Risk management) | Tiers 1-3: continuous risk monitoring with defined thresholds and escalation | Tier 1
EU AI Act | Art. 10 (Data governance) | Tier 2+: data blindness with cryptographic attestation | Tier 2
EU AI Act | Art. 13 (Transparency) | Public, unauthenticated certificate verification (REQ-10.1 through REQ-10.3) | Tier 1
HIPAA | 164.312(a) Access Control | FHE/redaction/TEE prevents model access to PHI | Tier 2
HIPAA | 164.312(b) Audit Controls | 100% inference monitoring, decision records, Merkle-compressed audit trail | Tier 1
HIPAA | 164.312(e) Transmission Security | ML-KEM key encapsulation for all key exchange operations | Tier 3
SOX 404 | Internal controls, evidence | Every AI decision governance-bound, continuous compliance scoring, defined retention | Tier 1
GDPR | Art. 22 (Automated decisions) | Decision records provide evidentiary basis for explaining/contesting automated decisions | Tier 1
GDPR | Art. 25 (Data protection by design) | Data blindness as architectural requirement, not policy overlay | Tier 2
CCPA/CPRA | Right to know, automated decisions | Decision records, governance proofs support opt-out and access rights for profiling | Tier 1
FDA 21 CFR 11 | Audit trail, signature linking | SHA3-256 Merkle trees with digital signatures, 100% coverage, signer ID + timestamp | Tier 1
FCA UK | Principles 3, 6, 11; Consumer Duty | Policy governance, data blindness, public verification, auditable decision records | Tier 1

12 Conformance and Auditing

Assessment Methods by Tier

Tier | Self-Assessment | Independent Verification
Tier 1 | Permitted | Recommended
Tier 2 | Permitted with restrictions | Recommended
Tier 3 | Not permitted | Required
REQ-12.2

Tier 3 independent verification MUST be performed by an auditor with no financial interest in the system operator, demonstrated competence in cryptographic verification (including PQ signature verification and ZKP validation), and approval by the Issuing Authority or recognized audit body.

Evidence Exportability

REQ-12.3 through REQ-12.5

All evidence artifacts exportable at any time without cooperation from the Issuing Authority. Exported evidence MUST be self-contained: verifiable using only the exported data, the H33 public verification key, and (for Tier 3) blockchain access.

Continuous Monitoring Obligation

HATS certification is continuous, not point-in-time. Maximum permissible monitoring maintenance window: 4 hours. All inferences during maintenance logged locally and retroactively processed. Exceeding 4 hours transitions certificate to Warning state.

13 Version History and Amendment Process

Version History

Version | Effective Date | Description
1.0 | 2026-03-17 | Initial publication

Amendment Process

Proposed amendments require a minimum 60-day public comment period, review by a 5-member expert board (AI governance, cryptography, regulatory compliance, enterprise security), and 90-day implementation notice before taking effect.

Backwards Compatibility

Existing certificates maintain validity for 180 calendar days after a new version takes effect. Extensions of up to 12 months may be granted case-by-case for technically infeasible requirements.

A Appendix A: Proof Bundle Schema

The canonical Proof Bundle format is defined as a JSON Schema. Key fields:

{
  "hats_version": "1.0",
  "bundle_id": "<UUID v4>",
  "certificate_id": "<certificate identifier>",
  "tier": 1|2|3,
  "time_range": { "start": "<ISO 8601>", "end": "<ISO 8601>" },
  "merkle_root": "<hex SHA3-256>",
  "signature": {
    "algorithm": "ML-DSA-65|ML-DSA-87|Ed25519",
    "key_id": "<key identifier>",
    "value": "<base64 signature>"
  },
  "artifacts": [...],
  "on_chain": { "chain": "...", "tx_hash": "...", "block": "...", "timestamp": "..." }
}

The on_chain field is REQUIRED for Tier 3, OPTIONAL for Tiers 1 and 2. Artifact types enumerated: decision_record, governance_zkp, merkle_root, policy_record, alert_record, remediation_record, data_blindness_attestation, data_separation_zkp, fhe_integrity_record, privacy_budget_record, behavioral_fingerprint, drift_score_record, proof_chain, on_chain_anchor.
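A minimal structural validator for the schema sketch above, checking the required fields and the two tier-dependent rules: the `on_chain` requirement and the Tier 3 prohibition on classical-only signatures (REQ-3.1). This is an illustrative check, not a full JSON Schema validation.

```python
REQUIRED_FIELDS = {"hats_version", "bundle_id", "certificate_id", "tier",
                   "time_range", "merkle_root", "signature", "artifacts"}

def validate_bundle(bundle: dict) -> list[str]:
    """Return a list of structural errors; an empty list means the bundle
    passes this minimal check against the Appendix A schema sketch."""
    errors = [f"missing field: {f}"
              for f in sorted(REQUIRED_FIELDS - bundle.keys())]
    if bundle.get("tier") == 3:
        if "on_chain" not in bundle:
            errors.append("on_chain is REQUIRED for Tier 3")
        if bundle.get("signature", {}).get("algorithm") == "Ed25519":
            errors.append("Tier 3 forbids classical-only signatures (REQ-3.1)")
    return errors
```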

B Appendix B: Regulatory Mapping Tables

Consolidated cross-reference between HATS requirements and regulatory provisions. See Section 11 for detailed mapping by framework. Each cell indicates the HATS requirement(s) that address the regulatory provision.

Regulatory Provision | Tier 1 Requirements | Tier 2 Requirements | Tier 3 Requirements
NIST AI RMF GOVERN | REQ-1.1 to REQ-1.4 | (inherits Tier 1) | (inherits Tier 2)
NIST AI RMF MEASURE | Section 5.5 | (inherits Tier 1) | REQ-3.6 to REQ-3.8, REQ-7.1
EU AI Act Art. 9 | REQ-1.8 to REQ-1.10 | REQ-2.4, REQ-2.5 | REQ-3.6 to REQ-3.9
EU AI Act Art. 10 | -- | REQ-2.1 to REQ-2.3 | (inherits Tier 2)
EU AI Act Art. 12 | REQ-1.5 to REQ-1.7 | REQ-2.6 to REQ-2.8 | REQ-3.14 to REQ-3.16
HIPAA 164.312(a) | -- | REQ-2.1 to REQ-2.3 | (inherits Tier 2)
HIPAA 164.312(b) | REQ-1.5, REQ-1.8 | (inherits Tier 1) | (inherits Tier 2)
HIPAA 164.312(e) | -- | -- | REQ-3.1, REQ-3.2
SOX 404 | REQ-1.1 to REQ-1.7, Sec 5.5 | (inherits Tier 1) | (inherits Tier 2)
GDPR Art. 22 | REQ-1.5 | (inherits Tier 1) | (inherits Tier 2)
GDPR Art. 25 | -- | REQ-2.1 | (inherits Tier 2)
FDA 21 CFR 11.10(e) | REQ-1.5 to REQ-1.7 | (inherits Tier 1) | REQ-3.14 to REQ-3.16
CCPA 1798.100 | REQ-1.5 | REQ-2.1 | (inherits Tier 2)

C Appendix C: Drift Score Computation

The drift score is computed as:

drift_score = sum(w_i * (2 * sigmoid(d_i / s_i) - 1)) for i in {1, 2, 3, 4}

Where:

  • d_i = raw divergence for dimension i (KL divergence for distributions, 1 - cosine similarity for semantic consistency, 1 - edge case pass rate for edge cases)
  • s_i = saturation parameter (prevents any single dimension from dominating)
  • w_i = weight for dimension i
  • sigmoid(x) = 1 / (1 + exp(-x)); the shifted term 2 * sigmoid(x) - 1 maps non-negative divergences onto [0.0, 1.0), so zero divergence yields a drift score of 0.0
Dimension | Weight (w_i) | Saturation (s_i) | Divergence Metric
Response Length | 0.15 | 0.5 | KL divergence
Latency | 0.10 | 0.5 | KL divergence
Semantic Consistency | 0.50 | 0.2 | 1 - cosine similarity
Edge Case Behavior | 0.25 | 0.3 | 1 - pass rate

The drift score is normalized to the range [0.0, 1.0]. Sigmoid saturation prevents outlier divergence in a single dimension from triggering false positives.
