AI now drives loan approvals, sanctions decisions, claims adjudications, diagnostic recommendations, and hiring screens. Today, none of these decisions are provable. If a regulator asks "prove this AI decision was correct," every company on earth answers with logs, dashboards, and policy documents.
H33 produces a 74-byte cryptographic attestation for every AI decision. Inputs committed. Model version committed. Outputs committed. Signed by the issuing authority with post-quantum Dilithium. Independently verifiable without vendor access.
AI systems make high-stakes decisions across regulated industries every second. Loan denials. Sanctions flags. Claims rejections. Diagnostic classifications. Not one of these decisions produces cryptographic evidence that the computation was correct.
A credit model denies a loan application. The applicant disputes. The bank produces model documentation and feature importance scores. None of this proves the specific computation that ran on the specific inputs that produced the specific denial. It's an explanation, not evidence.
An AI sanctions screening system flags a transaction. The compliance team reviews it. But there is no cryptographic record that the screening model was the correct version, that the sanctions list was current, or that the specific transaction data was accurately processed.
A diagnostic AI recommends a treatment pathway. The physician follows it. Three years later, a malpractice suit asks: prove the AI recommendation was based on the correct patient data and the correct model version. The hospital has logs. Logs are not proof.
XAI (explainability) and provable decisions solve different problems. Regulators are starting to require both. Most companies have the first. Nobody has the second.
Every AI decision gets a 74-byte attestation. It commits four things: what went in, what model ran, what came out, and who signed it. Any third party verifies in 71 microseconds.
SHA3-256 hash of the exact inputs the model received. Proves the specific data that was processed. Tamper-evident: changing one bit changes the hash.
Hash of the model weights, parameters, and version identifier. Proves which exact model produced the output. No ambiguity about which version ran.
SHA3-256 hash of the exact outputs the model produced. Proves the specific result. If anyone claims a different output, the hash mismatch is instant evidence.
Dilithium post-quantum digital signature from the signing authority. Proves who authorized the attestation. Quantum-resistant: valid for 30+ years against both classical and quantum adversaries.
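To make the four commitments concrete, here is a minimal Python sketch. The serialized values are hypothetical stand-ins, and the signing step is only marked with a comment; this is not the H33 SDK, just the standard-library hashlib SHA3-256 the attestation format describes.

import hashlib

def commit(data: bytes) -> bytes:
    # SHA3-256 digest: flipping one bit of `data` changes the entire hash
    return hashlib.sha3_256(data).digest()

# Hypothetical stand-ins for the real artifacts
serialized_inputs = b'{"income": 72000, "debt": 18000}'
model_identity    = b"credit-score-v3|weights-digest|params"
serialized_output = b'{"decision": "deny", "score": 541}'

input_hash  = commit(serialized_inputs)   # what went in
model_hash  = commit(model_identity)      # what model ran
output_hash = commit(serialized_output)   # what came out

# The signing authority signs over all three commitments. A real
# implementation would call a Dilithium library here; this comment
# only marks where that signature is produced.
message = input_hash + model_hash + output_hash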
No system credentials. No API keys. No vendor cooperation required. The auditor receives the attestation and verifies independently using H33's open verification protocol.
The auditor receives the 74-byte H33-74 attestation from the organization being audited.
The Dilithium signature is verified against the authority's public key. This confirms the attestation was issued by the claimed authority.
The attestation's position in the SHA3-256 hash chain is validated. This confirms record ordering and prevents insertion or deletion of records.
The input and output hashes are compared against the provided data. The auditor confirms the attested computation matches the claimed inputs and outputs.
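A minimal sketch of that verification flow, assuming a simple attestation layout and a chain rule where each link is SHA3-256(previous link || signed message). The field names and the dilithium_verify stub are assumptions for illustration, not H33's published protocol.

import hashlib

def sha3(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def dilithium_verify(public_key: bytes, message: bytes, signature: bytes) -> bool:
    # Placeholder: a real auditor would call a Dilithium verification
    # routine from a post-quantum cryptography library here.
    raise NotImplementedError

def verify_attestation(att: dict, authority_key: bytes, prev_link: bytes,
                       claimed_inputs: bytes, claimed_output: bytes) -> bool:
    message = att["input_hash"] + att["model_hash"] + att["output_hash"]
    # Step 1: was the attestation issued by the claimed authority?
    if not dilithium_verify(authority_key, message, att["signature"]):
        return False
    # Step 2: does the record sit where the hash chain says it should?
    if att["chain_link"] != sha3(prev_link + message):
        return False
    # Step 3: do the claimed inputs and outputs match the commitments?
    return (sha3(claimed_inputs) == att["input_hash"]
            and sha3(claimed_output) == att["output_hash"])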
Every AI decision flows through a five-stage evidence chain. Each stage produces a cryptographic artifact. The chain is independently verifiable at every link.
Wrap your inference call. Get cryptographic proof with every result.
from h33 import ProvableDecision

# Wrap your AI decision with H33-74 attestation
decision = ProvableDecision(
    model="credit-score-v3",
    input_data=applicant_record,
    anchor="bitcoin"  # optional: anchor to BTC mainnet
)

# Execute with attestation
result = decision.execute()

# result.output      — the AI's decision
# result.attestation — 74-byte H33-74 proof
# result.input_hash  — SHA3-256 of inputs
# result.model_hash  — SHA3-256 of model version
# result.output_hash — SHA3-256 of output
# result.chain_pos   — position in hash chain
# result.verify_url  — h33.ai/verify/<proof_id>

# Auditor verifies independently (no API key needed)
from h33 import verify
valid = verify(result.attestation)  # True, 71µs
Six regulatory frameworks are converging on the same requirement: cryptographic evidence that AI decisions were computed correctly. Here's what each demands and how H33-74 satisfies it.
High-risk AI systems must provide transparency, human oversight, and auditable decision records. Conformity assessments require evidence of correct operation. H33-74 attestation provides per-decision cryptographic proof that satisfies Articles 13 and 14 audit requirements. Penalties reach 7% of global revenue.
FFIEC requires financial institutions to validate and document model outputs, maintain model inventories, and provide evidence that models perform as intended. H33-74 commits the model version hash per decision, creating an immutable record of which model produced which output.
OCC expects banks to validate model outputs, test models against out-of-sample data, and maintain audit trails of model decisions. H33-74 provides the cryptographic audit trail: every decision provably linked to its inputs, model version, and outputs.
HIPAA requires mechanisms to record and examine activity in information systems containing ePHI. When AI processes patient data, every inference must be auditable. H33-74 attestation provides per-inference audit evidence that satisfies §164.312(b) technical safeguards.
When AI is part of financial reporting controls, SOX 404 requires evidence of control effectiveness. H33-74 proves each AI-driven financial decision was computed correctly, producing Dilithium-signed, audit-ready evidence packages for the certifications executives sign under criminal liability.
Under GDPR Article 22, data subjects have the right to contest automated decisions and obtain human intervention. H33-74 attestation provides the cryptographic evidence needed to reconstruct exactly what happened: what data was processed, which model decided, and what the output was.
These aren't hypothetical scenarios. These are the decisions that H33-74 makes provable today.
A credit model runs on FHE-encrypted applicant data using BFV exact arithmetic. The model scores without seeing income, debt, or credit history. H33-74 attestation proves: these encrypted inputs, this model version, this encrypted output. The applicant's data was never exposed. The decision is provable.
Transaction data is FHE-encrypted before reaching the sanctions screening model. The model compares encrypted transaction details against the sanctions list on ciphertext. H33-74 attestation commits the sanctions list version, the encrypted input, and the match/no-match output. Every screening is provable.
Claims adjudication AI processes encrypted policyholder data. The model triages, flags anomalies, and recommends actions on ciphertext. Each triage decision gets an H33-74 attestation linking the encrypted claim data to the model version to the triage output. Auditors replay any decision from Cachee in sub-microsecond time.
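The mechanical point shared by all three cases: the attestation's commitments are taken over ciphertext bytes, never plaintext. A minimal sketch, with hypothetical byte strings standing in for real BFV ciphertexts:

import hashlib

# Hypothetical stand-ins for BFV ciphertexts produced upstream
encrypted_claim  = b"bfv-ciphertext-of-policyholder-record"
encrypted_triage = b"bfv-ciphertext-of-triage-decision"

# Hashing the ciphertexts binds the decision to the encrypted data,
# so the attestation is verifiable while the plaintext stays unexposed.
input_hash  = hashlib.sha3_256(encrypted_claim).digest()
output_hash = hashlib.sha3_256(encrypted_triage).digest()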
Wrap your AI endpoint. H33 generates cryptographic proof for every decision. Auditors verify independently. No refactoring required.