Cryptographic Proof for Every AI Decision

Prove It. Not With Logs. With Math.

AI makes loan approvals, sanctions decisions, claims adjudications, diagnostic recommendations, and hiring screens. Today, none of these decisions are provable. If a regulator asks "prove this AI decision was correct," every company on earth answers with logs, dashboards, and policy documents.

H33 produces a 74-byte cryptographic attestation for every AI decision. Inputs committed. Model version committed. Outputs committed. Authority signed with post-quantum Dilithium. Independently verifiable without vendor access.

74B · Attestation per decision
71µs · Independent verification
30yr · Post-quantum proof validity
6 · Regulatory frameworks
The Problem

AI decides. Nobody can prove what happened.

AI systems make high-stakes decisions across regulated industries every second. Loan denials. Sanctions flags. Claims rejections. Diagnostic classifications. Not one of these decisions produces cryptographic evidence that the computation was correct.

🏦 Financial Decisions Without Evidence

A credit model denies a loan application. The applicant disputes. The bank produces model documentation and feature importance scores. None of this proves the specific computation that ran on the specific inputs that produced the specific denial. It's an explanation, not evidence.

🚨 Sanctions Screening Without Proof

A transaction is flagged by an AI sanctions screening system. The compliance team reviews. But there is no cryptographic record that the screening model was the correct version, that the sanctions list was current, or that the specific transaction data was accurately processed.

🏥 Clinical Decisions Without Audit

A diagnostic AI recommends a treatment pathway. The physician follows it. Three years later, a malpractice suit asks: prove the AI recommendation was based on the correct patient data and the correct model version. The hospital has logs. Logs are not proof.

Critical Distinction

Explainability tells you WHY. Provability tells you THAT.

XAI (explainability) and provable decisions solve different problems. Regulators are starting to require both. Most companies have the first. Nobody has the second.

📊 Explainability (XAI)
Answers: "Why did the model make this decision?"

Uses SHAP values, LIME, attention maps, feature importance, counterfactual explanations. Tells you which input features influenced the output and by how much.

Does NOT prove the computation was correct. Does NOT commit the inputs or outputs cryptographically. Provides NO tamper evidence.
"Feature income_ratio contributed 0.34 to denial" "Attention concentrated on tokens 14-22" "Top 3 features: credit_score, dti, ltv"
🔒 Provability (H33-74)
Answers: "Did this specific computation happen correctly on these specific inputs producing these specific outputs?"

Commits input hashes, model version, output hashes, and signing authority with post-quantum Dilithium signatures. Independently verifiable. Tamper-evident. Court-admissible.

Complements XAI. You explain WHY with SHAP. You prove THAT with H33-74.
input_hash: SHA3-256(patient_record)
model_hash: SHA3-256(model_v3.2.1_weights)
output_hash: SHA3-256(classification_result)
authority: Dilithium-signed, 71µs verify
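
As a rough sketch of how the two complement each other, the example below pairs a SHAP explanation (WHY) with an H33-74 attestation (THAT). The toy GradientBoostingClassifier and synthetic data are stand-ins for a production credit model; the ProvableDecision call follows the h33 API shown in the Integration section below.

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from h33 import ProvableDecision  # h33 API as shown in the Integration section

# Toy credit model on synthetic data (illustrative only)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier().fit(X, y)
applicant = X[:1]

# WHY: SHAP attributes the score to individual input features
explainer = shap.Explainer(model.predict, X[:100])
explanation = explainer(applicant)   # per-feature contributions

# THAT: H33-74 commits this input, this model version, this output
result = ProvableDecision(model="credit-score-v3", input_data=applicant).execute()

print(explanation.values)     # the explanation of WHY
print(result.attestation)     # the 74-byte proof THAT it happened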
H33-74 Attestation

74 bytes. Four commitments. One proof.

Every AI decision gets a 74-byte attestation. It commits four things: what went in, what model ran, what came out, and who signed it. Any third party verifies in 71 microseconds.

Field 01

Input Commitment

SHA3-256 hash of the exact inputs the model received. Proves the specific data that was processed. Tamper-evident: changing one bit changes the hash.

32 bytes · SHA3-256
Field 02

Model Version

Hash of the model weights, parameters, and version identifier. Proves which exact model produced the output. No ambiguity about which version ran.

Committed in attestation
Field 03

Output Commitment

SHA3-256 hash of the exact outputs the model produced. Proves the specific result. If anyone claims a different output, the hash mismatch is instant evidence.

32 bytes · SHA3-256
Field 04

Authority Signature

Dilithium post-quantum digital signature from the signing authority. Proves who authorized the attestation. Quantum-resistant: valid for 30+ years against both classical and quantum adversaries.

Dilithium · PQ-secure
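
A minimal sketch of what building these four commitments involves, using Python's standard hashlib for the three SHA3-256 hashes. The field values are illustrative, and the Dilithium signing step is left as a hypothetical sign_dilithium() placeholder because H33's actual signing authority and 74-byte wire layout are not specified on this page.

import hashlib
import json

def sha3_commit(data: bytes) -> bytes:
    """32-byte SHA3-256 commitment; flipping one input bit changes the whole hash."""
    return hashlib.sha3_256(data).digest()

# Field 01: commit the exact inputs the model received (illustrative record)
input_commit = sha3_commit(json.dumps({"dti": 0.42, "fico": 705}, sort_keys=True).encode())

# Field 02: commit the model weights plus the version identifier
model_weights = b"\x00" * 1024                       # stand-in for serialized weights
model_commit = sha3_commit(model_weights + b"credit-score-v3")

# Field 03: commit the exact output the model produced
output_commit = sha3_commit(b"denied")

# Field 04: the authority signs the commitments with Dilithium.
# sign_dilithium() is a hypothetical placeholder for that post-quantum step.
message = input_commit + model_commit + output_commit
# signature = sign_dilithium(authority_private_key, message)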
Auditor Workflow

Auditors verify without vendor access

No system credentials. No API keys. No vendor cooperation required. The auditor receives the attestation and verifies independently using H33's open verification protocol.

📩 Receive Attestation

Auditor receives the 74-byte H33-74 attestation from the organization being audited.

🔍 Verify Signature

Dilithium signature is verified against the public key. Confirms the attestation was issued by the claimed authority.

🔗 Check Hash Chain

Attestation's position in the SHA3-256 hash chain is validated. Confirms ordering and prevents insertion or deletion of records.

✅ Confirm Commitments

Input and output hashes are compared against provided data. The auditor confirms the attested computation matches the claimed inputs and outputs.
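
A minimal sketch of the auditor's side, assuming the auditor holds the attestation, the committed hashes, and the data the organization claims was processed. The verify() call is the one shown in the Integration section; the commitment check uses only the standard hashlib. The committed hashes are passed in explicitly here because the attestation's exact field layout is not described on this page.

import hashlib
from h33 import verify  # open verification protocol, per the Integration section

def audit(attestation, claimed_input: bytes, claimed_output: bytes,
          committed_input_hash: bytes, committed_output_hash: bytes) -> bool:
    # Steps 2-3: signature and hash-chain position (assumed to be checked by verify())
    if not verify(attestation):
        return False
    # Step 4: recompute commitments from the provided data and compare
    input_ok = hashlib.sha3_256(claimed_input).digest() == committed_input_hash
    output_ok = hashlib.sha3_256(claimed_output).digest() == committed_output_hash
    return input_ok and output_ok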

The Evidence Chain

From inference to immutable record

Every AI decision flows through a five-stage evidence chain. Each stage produces a cryptographic artifact. The chain is independently verifiable at every link.

🧠 AI Inference · Model processes input · Input/output captured
📝 H33-74 Attestation · 74-byte proof generated · Dilithium signed
🔗 Hash Chain · SHA3-256 chain link · Sequential ordering
💾 Cachee Storage · Decision replay + audit · Sub-µs retrieval
₿ Bitcoin Anchor · Optional BTC timestamp · Immutable public record
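
To make the hash-chain stage concrete, here is a generic sketch of SHA3-256 chaining, not H33's actual chain format: each link folds in the previous one, so inserting, deleting, or reordering attestations changes every later link.

import hashlib

def chain_link(prev_link: bytes, attestation: bytes) -> bytes:
    """Next link = SHA3-256(previous link || attestation bytes)."""
    return hashlib.sha3_256(prev_link + attestation).digest()

link = b"\x00" * 32                                   # genesis link (illustrative)
for attestation in [b"attestation-1", b"attestation-2", b"attestation-3"]:
    link = chain_link(link, attestation)

# Re-deriving the chain from the same records reproduces the same final link;
# tampering with any record, or its order, yields a different value.
print(link.hex())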
Evidence Quality

What auditors see today vs. what H33 produces

Today's "Proof"
{ "timestamp": "2026-05-10T14:23:01Z", "model": "credit-score-v3", "decision": "denied", "reason": "DTI ratio exceeded threshold", "logged_by": "inference-server-07" }
A JSON log entry. Mutable. No signature. No input commitment. No proof the model version is accurate. No evidence the timestamp wasn't altered. Inadmissible as independent evidence.
H33-74 Attestation
{ "input_hash": "a7f3c8e2...32B SHA3", "model_hash": "9b2c4f71...committed", "output_hash": "3d1a2b5f...32B SHA3", "authority": "Dilithium-III signed", "chain_pos": 847291, "btc_anchor": "7f8d9ef2...mainnet", "verify": "h33.ai/verify/a7f3c8" }
Cryptographic attestation. Input committed. Model version committed. Output committed. Post-quantum signed. Hash chain positioned. Optionally Bitcoin-anchored. Independently verifiable in 71µs without vendor access.
Integration

Add provability to any AI decision

Wrap your inference call. Get cryptographic proof with every result.

decisions.py — provable AI decisions
from h33 import ProvableDecision

# Wrap your AI decision with H33-74 attestation
decision = ProvableDecision(
    model="credit-score-v3",
    input_data=applicant_record,
    anchor="bitcoin"  # optional: anchor to BTC mainnet
)

# Execute with attestation
result = decision.execute()

# result.output        — the AI's decision
# result.attestation   — 74-byte H33-74 proof
# result.input_hash    — SHA3-256 of inputs
# result.model_hash    — SHA3-256 of model version
# result.output_hash   — SHA3-256 of output
# result.chain_pos     — position in hash chain
# result.verify_url    — h33.ai/verify/<proof_id>

# Auditor verifies independently (no API key needed)
from h33 import verify
valid = verify(result.attestation)  # True, 71µs
Regulatory Drivers

The regulations demanding provable AI decisions

Six regulatory frameworks are converging on the same requirement: cryptographic evidence that AI decisions were computed correctly. Here's what each demands and how H33-74 satisfies it.

EU AI Act

Articles 13 & 14

High-risk AI systems must provide transparency, human oversight, and auditable decision records. Conformity assessments require evidence of correct operation. H33-74 attestation provides per-decision cryptographic proof that satisfies Articles 13 and 14 audit requirements. Penalties reach 7% of global revenue.

FFIEC Model Risk

Model Risk Management Guidance

FFIEC requires financial institutions to validate and document model outputs, maintain model inventories, and provide evidence that models perform as intended. H33-74 commits the model version hash per decision, creating an immutable record of which model produced which output.

SR 11-7 / OCC 2011-12

Supervisory Guidance on Model Risk Management

Bank supervisors expect institutions to validate model outputs, test models against out-of-sample data, and maintain audit trails of model decisions. H33-74 provides the cryptographic audit trail: every decision provably linked to its inputs, model version, and outputs.

HIPAA Audit Controls

§164.312(b) — Audit Controls

HIPAA requires mechanisms to record and examine activity in information systems containing ePHI. When AI processes patient data, every inference must be auditable. H33-74 attestation provides per-inference audit evidence that satisfies §164.312(b) technical safeguards.

SOX Section 404

Internal Controls Assessment

When AI is part of financial reporting controls, SOX 404 requires evidence of control effectiveness. H33-74 proves each AI-driven financial decision was computed correctly, producing Dilithium-signed, audit-ready evidence packages that support the certifications executives sign under personal liability.

GDPR Article 22

Automated Decision-Making

Data subjects have the right to contest automated decisions and obtain human intervention. H33-74 attestation provides the cryptographic evidence needed to reconstruct exactly what happened: what data was processed, what model decided, and what the output was.

Proof at production speed. Not a compliance bottleneck.

391µs · Attestation generation
71µs · Independent verification
0.358µs · Cachee decision replay
Real Examples

Provable decisions in production

These aren't hypothetical scenarios. These are the decisions that H33-74 makes provable today.

Lending

Encrypted Loan Scoring

Credit model runs on FHE-encrypted applicant data using BFV exact arithmetic. The model scores without seeing income, debt, or credit history. H33-74 attestation proves: these encrypted inputs, this model version, this encrypted output. The applicant's data was never exposed. The decision is provable.

Compliance

Sanctions Screening on Ciphertext

Transaction data is FHE-encrypted before reaching the sanctions screening model. The model compares encrypted transaction details against the sanctions list on ciphertext. H33-74 attestation commits the sanctions list version, the encrypted input, and the match/no-match output. Every screening is provable.

Insurance

Claims Triage with Proof

Claims adjudication AI processes encrypted policyholder data. The model triages, flags anomalies, and recommends actions on ciphertext. Each triage decision gets an H33-74 attestation linking the encrypted claim data to the model version to the triage output. Auditors replay any decision from Cachee in sub-microsecond time.

Explore the H33 AI platform

Make your AI decisions provable in 10 minutes

Wrap your AI endpoint. H33 generates cryptographic proof for every decision. Auditors verify independently. No refactoring required.

See Provable Decisions · Schedule Demo