AI Security · Opinion · HATS · 10 min read

Is ISO 42001 Enough for AI Security?
Why Documentation Isn't Protection

ISO 42001 is the world's first AI management system standard. It tells organizations to document their AI governance, conduct impact assessments, assign roles, and review risks. What it doesn't do is prevent a single byte of customer data from being exposed to an AI model in plaintext. That's not a gap in the standard. It's a chasm.

What ISO 42001 Actually Requires

ISO/IEC 42001:2023 follows the same Plan-Do-Check-Act structure as ISO 27001 and ISO 9001. If you've been through an ISO certification before, the framework is familiar: establish an AI Management System (AIMS), define policies, conduct risk assessments, implement controls, monitor, review, improve. The standard covers leadership commitment, organizational context, support resources, operational planning, performance evaluation, and continual improvement.

Specifically, ISO 42001 requires organizations to:

- Establish, implement, and maintain an AI Management System (AIMS) with a defined scope
- Define an AI policy and objectives aligned with the organization's context
- Assign roles, responsibilities, and authorities for AI governance
- Conduct AI risk assessments and AI system impact assessments
- Select and justify controls in a Statement of Applicability
- Monitor, measure, and evaluate AIMS performance
- Run internal audits and management reviews
- Continually improve the management system

This is thorough governance. It's well-structured. And for organizations that have no AI governance at all, it's a meaningful improvement over nothing. The problem isn't what ISO 42001 requires. The problem is what it doesn't.

What ISO 42001 Doesn't Require

Read the entire standard cover to cover, and you'll find requirements for documentation, policies, assessments, roles, reviews, and management commitment. What you won't find is a single technical requirement that prevents AI systems from accessing sensitive data in plaintext. Not one.

ISO 42001 does not require:

- Encryption of data sent to AI models, whether in transit, at rest, or during inference
- Cryptographic verification of model outputs
- Technical isolation of inference data from training pipelines
- Attestation of the devices or services calling AI APIs
- Post-quantum cryptography anywhere in the AI pipeline
- Any specific technical control at all, only that controls be selected, justified, and documented

Put simply: ISO 42001 certifies that you thought about AI risk. It does not certify that you eliminated it. You can achieve full ISO 42001 certification while sending unencrypted patient health records to a third-party AI model every millisecond. The auditor will check your documentation, not your data flows.

The Fundamental Problem: Policy vs. Math

Every compliance framework in existence — ISO 27001, SOC 2, HIPAA, GDPR, PCI DSS, and now ISO 42001 — operates on the same assumption: if you write the right policies, assign the right roles, and conduct the right reviews, security follows. This is the governance model. It works when the controls are enforced by humans who follow procedures.

AI breaks this model. The speed, scale, and autonomy of AI systems mean that policy-based controls can't keep up. A misconfigured API endpoint can leak millions of records to a model provider in the time it takes a governance committee to schedule a meeting. An AI agent can autonomously access, process, and transmit sensitive data without triggering any policy-based control, because the controls were designed for human-speed operations.

The alternative is mathematical enforcement. Instead of writing a policy that says "AI models shall not access plaintext data," you architect the system so that AI models physically cannot access plaintext data. The data is encrypted before it reaches the model. The model operates on ciphertext. The result returns encrypted. The model provider doesn't have the key, never had it, and can't obtain it. This isn't a policy. It's a property of the system. No human compliance required. No governance committee can override it. The math doesn't care about your org chart.
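The "compute on ciphertext" property can be illustrated with a toy additively homomorphic scheme, a minimal sketch using only the standard library. This is NOT FHE and is not secure beyond a single use; it is one-time additive masking, shown only so the flow is concrete: the client keeps the keys, the server adds ciphertexts it cannot read, and only the client can decrypt the result. Real deployments use lattice-based FHE schemes such as CKKS or BFV.

```python
import secrets

MOD = 2**64  # toy modulus; real FHE uses lattice-based schemes (CKKS, BFV)

def keygen(n):
    """One random additive mask per value. The keys never leave the client."""
    return [secrets.randbelow(MOD) for _ in range(n)]

def encrypt(values, keys):
    """Client side: mask each value before it leaves the trust boundary."""
    return [(v + k) % MOD for v, k in zip(values, keys)]

def server_sum(ciphertexts):
    """Server side: the provider sums ciphertexts without seeing plaintext."""
    return sum(ciphertexts) % MOD

def decrypt_sum(ct_sum, keys):
    """Client side: remove the masks to recover the true sum."""
    return (ct_sum - sum(keys)) % MOD

values = [12, 30, 7]
keys = keygen(len(values))
ct = encrypt(values, keys)
assert decrypt_sum(server_sum(ct), keys) == sum(values)  # 49
```

The point of the sketch is structural: at no step does the server hold anything it can invert without the keys, so "the provider must not read the data" is a property of the arithmetic, not a clause in a policy.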

Introducing HATS: The H33 AI Trust Standard

HATS (H33 AI Trust Standard) is what ISO 42001 would look like if it required cryptographic proof instead of documentation. It doesn't replace ISO 42001 — you can implement both. But where ISO 42001 stops at governance, HATS starts at enforcement.

HATS has seven requirements. Each one is verifiable through cryptographic proof, not documentation review:

1. Encrypted Inference

All data submitted to AI models must be FHE-encrypted before reaching the model endpoint. The model processes ciphertext. The provider never sees plaintext inputs. Verification: the client proves encryption via a zero-knowledge proof attached to the API call. If the proof is absent or invalid, the request is rejected. Not documented — enforced.
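The reject-on-missing-proof gate can be sketched as follows. The function names and the envelope format are hypothetical, and an HMAC tag stands in for the zero-knowledge proof so the control flow is runnable; the point is that the gateway refuses to execute any request that arrives without a valid proof.

```python
import hashlib
import hmac
import secrets

# Stand-in for a ZK proof key; in HATS the client would attach a
# zero-knowledge proof of correct encryption instead of an HMAC tag.
PROOF_KEY = secrets.token_bytes(32)

def attach_proof(ciphertext: bytes) -> dict:
    """Client side: bundle the ciphertext with its encryption proof."""
    tag = hmac.new(PROOF_KEY, ciphertext, hashlib.sha256).digest()
    return {"ciphertext": ciphertext, "proof": tag}

def gateway(request: dict) -> str:
    """Server side: no valid proof, no execution. Not documented, enforced."""
    proof = request.get("proof")
    expected = hmac.new(PROOF_KEY, request["ciphertext"], hashlib.sha256).digest()
    if proof is None or not hmac.compare_digest(proof, expected):
        raise PermissionError("missing or invalid encryption proof: rejected")
    return "accepted"

assert gateway(attach_proof(b"...ciphertext...")) == "accepted"
try:
    gateway({"ciphertext": b"plaintext smuggled in"})  # no proof attached
except PermissionError:
    pass  # the request never reaches the model
```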

2. Output Authenticity

All AI model outputs must be signed with a post-quantum digital signature (ML-DSA / CRYSTALS-Dilithium) that binds the output to the specific model version, input hash, and timestamp. This creates a tamper-evident chain: you can prove which model produced which output from which input at which time. ISO 42001 requires you to monitor model performance. HATS requires you to prove it cryptographically.
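The binding of output to model version, input hash, and timestamp can be sketched like this. An HMAC stands in for the ML-DSA signature (the stdlib has no post-quantum signer), and the field names are illustrative, but the tamper-evidence works the same way: any change to the record invalidates the signature.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # stand-in: HATS mandates ML-DSA (FIPS 204)

def sign_output(model_version: str, input_bytes: bytes, output: str) -> dict:
    """Bind output to model version, input hash, and timestamp, then sign."""
    record = {
        "model": model_version,
        "input_hash": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_output(record: dict) -> bool:
    """Recompute the signature over everything except the signature itself."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

rec = sign_output("m-1.2", b"what is the dosage?", "10mg twice daily")
assert verify_output(rec)
rec["output"] = "100mg twice daily"  # tampering breaks the binding
assert not verify_output(rec)
```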

3. Data Provenance

Every piece of data entering an AI pipeline must carry a STARK proof of origin — where it came from, who authorized it, and what transformations were applied. This makes it possible to audit the entire data lineage from source to model to output without trusting any single party's logs. ISO 42001 requires an AI system inventory. HATS requires a cryptographically verifiable data lineage for every inference.
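The lineage idea can be sketched as a hash chain, a minimal stand-in for STARK proofs of origin: each event in the pipeline commits to the one before it, so altering any step breaks every hash downstream. The event fields here are hypothetical.

```python
import hashlib
import json

def chain_step(prev_hash: str, event: dict) -> dict:
    """Append a lineage event that commits to the previous entry's hash."""
    entry = {"prev": prev_hash, **event}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

def verify_chain(chain: list) -> bool:
    """Walk the chain from the genesis marker, recomputing every hash."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if body["prev"] != prev or entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = entry["hash"]
    return True

lineage, h = [], "genesis"
for event in [{"step": "ingest", "source": "crm-export"},
              {"step": "transform", "op": "tokenize"},
              {"step": "inference", "model": "m-1.2"}]:
    entry = chain_step(h, event)
    lineage.append(entry)
    h = entry["hash"]

assert verify_chain(lineage)
lineage[1]["op"] = "redact"  # rewriting history invalidates the chain
assert not verify_chain(lineage)
```

A hash chain only proves integrity relative to its own entries; the STARK proofs the standard describes additionally prove the claims without trusting whoever holds the log.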

4. Device Binding

AI API calls must include a device attestation proof — a 192-byte STARK proof that binds the request to a physical device with verified integrity, network jurisdiction, and endpoint security posture. Bot farms, automated abuse scripts, and headless scrapers cannot produce valid device proofs. This addresses the AI-generated content problem at the API boundary: if you can't prove a real device made the request, the request doesn't execute.

5. Content Origin Certification

AI-generated content must be cryptographically tagged at the point of generation. The tag includes the model identifier, generation timestamp, device proof of the requester, and a hash of the prompt (encrypted, so the prompt content remains private). Any content without a valid origin certificate can be flagged as potentially AI-generated. ISO 42001 doesn't address AI content detection at all. HATS makes it a structural property of the generation process.
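A minimal sketch of such a certificate, with hypothetical field names following the description above: the certificate commits to the content, the model, the requester's device proof, and a hash of the (still encrypted) prompt. A deployed system would sign the certificate with ML-DSA; this sketch uses bare SHA-256 commitments only.

```python
import hashlib
import json
import time

def issue_certificate(model_id: str, device_proof: bytes,
                      prompt_ciphertext: bytes, content: str) -> dict:
    """Tag content at the point of generation with its origin metadata."""
    return {
        "model_id": model_id,
        "generated_at": int(time.time()),
        "device_proof_hash": hashlib.sha256(device_proof).hexdigest(),
        # The prompt is hashed in encrypted form, so its content stays private.
        "prompt_hash": hashlib.sha256(prompt_ciphertext).hexdigest(),
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
    }

def matches(cert: dict, content: str) -> bool:
    """Check whether a piece of content is the one the certificate covers."""
    return cert["content_hash"] == hashlib.sha256(content.encode()).hexdigest()

cert = issue_certificate("gen-7b-v3", b"\x00" * 192, b"<ciphertext>", "Hello world")
assert matches(cert, "Hello world")
assert not matches(cert, "Hello w0rld")  # altered content no longer matches
```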

6. Training Data Isolation

Customer data submitted for inference must be cryptographically isolated from training pipelines. Under HATS, inference data is FHE-encrypted — the model provider literally cannot use it for training because they cannot read it. This isn't a contractual promise backed by a lawsuit threat. It's a mathematical property enforced by the encryption scheme. The key never leaves the customer.

7. Post-Quantum Readiness

All cryptographic operations within the AI pipeline — key exchange, digital signatures, encryption, proof generation — must use NIST-standardized post-quantum algorithms (FIPS 203 ML-KEM, FIPS 204 ML-DSA) or quantum-resistant alternatives (lattice-based FHE, hash-based STARKs). AI data has long-term sensitivity. Models trained today will be attacked by quantum computers in the future. HATS requires that the cryptographic foundation is quantum-resistant from day one, not retrofitted in 2034.

ISO 42001 vs. HATS: Side by Side

| Requirement | ISO 42001 | HATS |
| --- | --- | --- |
| Data protection during inference | Document a policy | FHE encryption — model never sees plaintext |
| Output integrity | Monitor model performance | Dilithium-signed outputs with model/input/time binding |
| Data lineage | Maintain AI system inventory | STARK proof of provenance on every data point |
| Bot prevention | Not addressed | 192-byte device attestation proof per API call |
| AI content detection | Not addressed | Cryptographic origin certificate at generation |
| Training data isolation | Contractual | Mathematical — FHE makes training on encrypted data impossible |
| Quantum resistance | Not addressed | FIPS 203/204 mandatory, lattice FHE, hash-based ZKPs |
| Verification method | Documentation audit | Cryptographic proof verification |
| Enforcement | Human compliance | Mathematical — cannot be overridden |
| Breach scenario | Mitigated by policy | Eliminated by architecture |

Why This Matters Now

Fewer than 25% of organizations have fully operationalized their AI governance. The EU AI Act is in force. HIPAA hasn't been updated for AI-era threats. The SEC is asking public companies about AI risk disclosures. And every enterprise is racing to deploy AI across customer-facing operations — customer support, fraud detection, clinical decision support, financial analysis, legal review.

The velocity of AI deployment is outrunning the velocity of AI governance. ISO 42001 is a governance framework designed for committee-speed oversight of systems that operate at API-speed. By the time your AI governance board reviews the risk assessment, a million API calls have already sent customer data to a model provider in plaintext.

HATS doesn't slow down AI deployment. It makes AI deployment safe by default. Encrypt before inference. Sign every output. Prove every data lineage. Attest every device. Tag every generated artifact. Use post-quantum cryptography. These aren't governance activities. They're API parameters. They execute in microseconds, not meeting cycles.
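"They're API parameters" can be made concrete with a hypothetical request envelope; every field name and placeholder below is illustrative, not an actual H33 API schema. Each governance requirement above becomes a field the gateway can check mechanically.

```python
import json

# Hypothetical HATS-style request envelope: every requirement travels
# as an API parameter rather than a policy document.
request = {
    "ciphertext": "<FHE-encrypted input>",
    "encryption_proof": "<zero-knowledge proof of encryption>",
    "device_attestation": "<192-byte STARK device proof>",
    "provenance": "<STARK lineage proof>",
    "signature_scheme": "ML-DSA-65",
}
print(json.dumps(request, indent=2))
```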

The Bottom Line

ISO 42001 is a fine management standard. Get certified if your customers or regulators require it. But don't confuse certification with security. ISO 42001 certifies that you built a governance system. It does not certify that your data is protected.

HATS certifies protection. Not because someone reviewed your documentation. Because the cryptography makes any other outcome mathematically impossible.

ISO 42001 is governance on paper. HATS is governance in math. When your customer's medical records, financial data, or biometric templates hit an AI model endpoint, the difference between the two is the difference between a policy and a proof.

We'll take the proof.

Learn more about H33 AI Compliance  |  HATS Standard  |  Start free at h33.ai/pricing