1.1 Purpose
HATS defines what it means for an AI system to be certified as trustworthy. It provides a machine-verifiable, cryptographically grounded specification that enables organizations to demonstrate -- and third parties to independently confirm -- that an AI system meets defined trustworthiness requirements continuously, not merely at a point-in-time audit.
HATS certifies trustworthiness across three layers:
- Governance Proof. Every inference executed by the AI system is governed by a valid, versioned, cryptographically signed policy, and every decision is recorded with a zero-knowledge proof binding the policy, the decision, and the time of execution.
- Data Separation. Sensitive data is cryptographically protected before it reaches the AI model. The system can prove, via zero-knowledge attestation, that no plaintext sensitive data was accessible to the model at inference time.
- Audit Permanence. The evidentiary record of the system's behavior is cryptographically signed, Merkle-compressed, and retained for a period sufficient to satisfy applicable legal and regulatory retention requirements. Where required, the cryptographic protections on that record remain sound under post-quantum assumptions.
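The first and third layers can be illustrated with a minimal sketch. This is not a normative construction: HATS does not mandate these primitives or field names, and SHA-256 and HMAC stand in here for the signature and zero-knowledge machinery the standard actually requires. Each leaf binds a policy hash, a decision, and a timestamp (Governance Proof); a batch of leaves is compressed to a single signed Merkle root (Audit Permanence).

```python
# Illustrative only -- hash/signature choices and record fields are
# assumptions, not requirements of this standard.
import hashlib
import hmac
import json

def leaf_hash(record: dict) -> bytes:
    """Hash one decision record binding policy, decision, and time."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).digest()

def merkle_root(leaves: list) -> bytes:
    """Compress a batch of record hashes into a single root."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical decision records from one inference batch.
records = [
    {"policy_hash": "a1b2c3", "policy_version": "3.1",
     "decision": "allow", "timestamp": 1700000000 + i}
    for i in range(4)
]
root = merkle_root([leaf_hash(r) for r in records])

# Stand-in for the required signature over the retained root.
signature = hmac.new(b"audit-signing-key", root, hashlib.sha256).hexdigest()
```

A verifier holding one record and its Merkle inclusion path can confirm that record was part of the signed batch without seeing the other records, which is the property the evidentiary layer relies on.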
1.2 Scope
This standard applies to any AI system that satisfies one or more of the following conditions:
- Processes personally identifiable information (PII), protected health information (PHI), financial data, or legally privileged information.
- Makes or materially contributes to decisions affecting individuals' rights, access to services, employment, credit, insurance, healthcare, or legal standing.
- Operates in an industry subject to regulatory oversight, including healthcare, financial services, insurance, legal services, government, defense, and critical infrastructure.
- Is deployed as a component in a multi-agent or agentic AI workflow where intermediate reasoning steps are not directly observable by a human operator.
This standard does not certify model accuracy or output quality, nor does it assess fairness or bias. HATS certifies the operational trustworthiness of the system in which a model operates -- the governance, privacy, and evidentiary integrity of the system's behavior over time.
1.3 Relationship to Existing Standards
| Standard | Relationship to HATS |
|---|---|
| NIST AI RMF 1.0 | HATS implements the GOVERN, MAP, MEASURE, and MANAGE functions with concrete cryptographic requirements. |
| ISO/IEC 42001:2023 | HATS provides auditable evidence artifacts that satisfy ISO 42001 Annex A controls. |
| EU AI Act (2024/1689) | HATS Tier 2+ satisfies transparency, data governance, and record-keeping obligations for high-risk AI systems. |
| SOC 2 TSC | HATS evidence artifacts map to CC6 (Logical and Physical Access), CC7 (System Operations), CC8 (Change Management), and the Privacy criteria. |
| HIPAA | HATS Tier 2+ satisfies the technical safeguard requirements of 45 CFR 164.312. |
1.4 Normative Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 (RFC 2119, RFC 8174) when, and only when, they appear in all capitals, as shown here.