Every AI decision attested. Every inference encrypted. The auditor verifies proofs but never sees the data. Decision finality anchored to Bitcoin mainnet.
AI systems make decisions that affect credit approvals, insurance underwriting, hiring, medical diagnoses, and criminal sentencing. Regulators increasingly require that these decisions be explainable, auditable, and reproducible. The EU AI Act, NIST AI RMF, and sector-specific regulations demand evidence that AI systems operate within their approved parameters.
The problem is fundamental: how do you audit an AI decision without exposing the data that decision was made on? Patient records, financial data, and personal information cannot be shared with auditors in plaintext. But without seeing the inputs, auditors cannot verify the outputs.
H33-Agent-Zero solves this. Every AI decision produces a cryptographic attestation that proves the decision was made by a specific model version, on specific encrypted inputs, at a specific time, under a specific policy. The auditor verifies the proof. The auditor never sees the data.
The critical insight is that the attestation does not contain the decision itself. It contains the cryptographic proof that a decision was made by a verified model on verified inputs under a verified policy. The attestation is a commitment: it binds together the model, the input hash, the output hash, the policy, and the timestamp into a single signed statement.
If any component changes — a different model, different inputs, a different policy version — the hash changes, and the attestation no longer verifies. This is decision finality: once attested, the decision cannot be retroactively altered or attributed to a different model or policy.
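The binding described above can be sketched in a few lines. This is an illustrative model only, assuming SHA3-256 (which the document names) over a canonical serialization of the five components; the actual H33 attestation format, field names, and signing step are not published here, and `attestation_digest` is a hypothetical helper.

```python
import hashlib
import json

def attestation_digest(model_version: str, input_hash: str,
                       output_hash: str, policy_version: str,
                       timestamp: int) -> str:
    """Bind all five components into a single SHA3-256 commitment.

    Canonical JSON (sorted keys, fixed separators) keeps the digest
    stable across serializations. In the real system this digest would
    then be signed; signing is omitted from this sketch.
    """
    record = json.dumps({
        "model": model_version,
        "input": input_hash,
        "output": output_hash,
        "policy": policy_version,
        "ts": timestamp,
    }, sort_keys=True, separators=(",", ":"))
    return hashlib.sha3_256(record.encode()).hexdigest()

a = attestation_digest("credit-v4.2", "ab12", "cd34", "policy-7", 1700000000)
b = attestation_digest("credit-v4.2", "ab12", "cd34", "policy-8", 1700000000)
assert a != b  # changing any one component changes the commitment
```

Because the digest covers every component, swapping in a different model, input, output, policy, or timestamp produces a different commitment, and a signature over the old digest no longer verifies.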
H33 uses two FHE schemes for AI workloads, each matched to the computation type. The FHE-IQ router selects the correct scheme automatically based on the operation.
CKKS (Cheon-Kim-Kim-Song) performs approximate fixed-point arithmetic on encrypted values. It supports the matrix multiplications, weighted sums, and activation function approximations needed for neural network inference. Model weights can be public; the input data remains encrypted throughout.
TFHE handles the decision boundary: threshold checks, comparisons, and branching. After CKKS produces an encrypted score, TFHE determines whether that score exceeds the decision threshold — without decrypting the score. The decision bit is the only output.
This two-engine architecture means the system never touches plaintext: data is encrypted before it enters the pipeline, CKKS performs inference on ciphertext, and TFHE applies the decision threshold on ciphertext. The result is an encrypted decision bit and a signed attestation. The server never learns the input, the score, or which way the decision went.
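The routing rule the FHE-IQ router applies can be sketched as a simple dispatch table. This is a conceptual illustration, not the real selection logic: the operation names, the `Scheme` enum, and the `route` function are all hypothetical, and the actual router presumably considers more than the operation type.

```python
from enum import Enum

class Scheme(Enum):
    CKKS = "ckks"   # approximate arithmetic: matmuls, sums, activations
    TFHE = "tfhe"   # exact logic: comparisons, thresholds, branching

# Illustrative routing table based on the split described in the text.
ARITHMETIC_OPS = {"matmul", "weighted_sum", "activation_approx"}
BOOLEAN_OPS = {"threshold", "compare", "branch"}

def route(op: str) -> Scheme:
    """Select the FHE engine for one pipeline operation."""
    if op in ARITHMETIC_OPS:
        return Scheme.CKKS
    if op in BOOLEAN_OPS:
        return Scheme.TFHE
    raise ValueError(f"unsupported operation: {op}")

# A credit-scoring pipeline: CKKS carries the inference, TFHE the decision.
pipeline = ["matmul", "activation_approx", "weighted_sum", "threshold"]
assert [route(op) for op in pipeline] == [
    Scheme.CKKS, Scheme.CKKS, Scheme.CKKS, Scheme.TFHE]
```

The design point is the handoff: everything up to the encrypted score is arithmetic (CKKS territory), and only the final comparison needs TFHE's exact boolean evaluation.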
An auditor reviewing AI decisions through H33 sees cryptographic proofs, not data. They can verify every property that regulations require without accessing any protected information.
| Audit Evidence | Auditor Access | Contains PII/PHI |
|---|---|---|
| Model version identifier | Full | No |
| Policy version identifier | Full | No |
| Decision timestamp | Full | No |
| Input data hash (SHA3-256) | Full | No (hash only) |
| Output data hash (SHA3-256) | Full | No (hash only) |
| Three-key PQ signature | Full | No |
| Chain position and hash | Full | No |
| Bitcoin anchor transaction | Full (public) | No |
| Actual input data | None | N/A (never exposed) |
| Actual decision output | None | N/A (never exposed) |
The auditor verifies proofs. They never see data. They can confirm that model version X processed data at time T under policy version P, and that the decision was signed with three independent post-quantum signature schemes. They can verify that the attestation is chained and anchored. They cannot determine what the input data was or what the decision outcome was.
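The auditor-side check can be illustrated with a minimal sketch: recompute the commitment from the disclosed metadata and hashes alone, and compare it to the claimed digest. Everything here is hypothetical (field names, values, and the `verify_attestation` helper); the point is that the record the auditor handles contains only identifiers and SHA3-256 hashes, never the underlying data.

```python
import hashlib
import json

def verify_attestation(public_record: dict, claimed_digest: str) -> bool:
    """Auditor-side check: recompute the SHA3-256 commitment from
    metadata and hashes only. No input data or decision output is
    needed -- or available -- to perform this verification.
    """
    canonical = json.dumps(public_record, sort_keys=True,
                           separators=(",", ":"))
    return hashlib.sha3_256(canonical.encode()).hexdigest() == claimed_digest

# Illustrative record: model id, policy id, timestamp, and hashes only.
record = {"model": "diag-v1.3", "policy": "p-12", "ts": 1700000000,
          "in_hash": "9f12", "out_hash": "3c44"}
digest = hashlib.sha3_256(json.dumps(record, sort_keys=True,
          separators=(",", ":")).encode()).hexdigest()

assert verify_attestation(record, digest)
# A retroactively edited record no longer matches the signed digest.
assert not verify_attestation({**record, "policy": "p-13"}, digest)
```

In the full system this digest comparison sits under the three post-quantum signatures, which this sketch omits.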
Every AI decision attestation includes a policy version identifier. This binds the decision to the specific set of rules, thresholds, and constraints that were active at the time of inference. If the policy changes — a threshold is adjusted, a feature is added, a constraint is removed — the new policy version is reflected in all subsequent attestations.
This solves a critical compliance problem: proving which rules were in effect when a specific decision was made. In traditional systems, policy configurations are stored in databases that can be modified without audit trails. In H33, the policy version is cryptographically bound to the decision attestation. Changing it retroactively would break the hash chain.
Decision finality means that once an AI decision has been attested, it cannot be retroactively changed, reattributed, or denied. The attestation creates a permanent record that a specific model version, operating under a specific policy version, processed a specific set of inputs at a specific time and produced a specific output.
The finality chain is three layers deep. First, the attestation is signed with three independent post-quantum signature schemes based on three independent hardness assumptions: MLWE lattices, NTRU lattices, and hash-based signatures. Forging the signature requires breaking all three simultaneously. Second, the attestation is chained into the tenant's SHA3-256 hash chain. Altering any attestation invalidates every subsequent link. Third, the chain head is anchored to Bitcoin mainnet via OP_RETURN every 60 seconds. The Bitcoin block provides an independent, adversarial timestamp.
This means the evidence exists in three independent systems: H33's attestation service (signed proof), the hash chain (tamper-evident sequence), and Bitcoin (immutable anchor). Destroying the evidence would require compromising all three simultaneously.
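The second layer, the tamper-evident hash chain, can be demonstrated directly: each head is the SHA3-256 of the previous head concatenated with the new attestation, so altering any early entry changes every head after it. The `extend` helper and the record fields below are illustrative, not H33's actual chain format.

```python
import hashlib
import json

def extend(head: str, attestation: dict) -> str:
    """New chain head = SHA3-256(previous head || attestation)."""
    payload = head + json.dumps(attestation, sort_keys=True)
    return hashlib.sha3_256(payload.encode()).hexdigest()

attestations = [{"model": "m1", "policy": "p1", "ts": t} for t in (1, 2, 3)]

# Build the honest chain, recording each intermediate head.
heads, head = [], "genesis"
for a in attestations:
    head = extend(head, a)
    heads.append(head)
# In H33, the final head would be anchored to Bitcoin via OP_RETURN.

# Retroactively edit the first attestation's policy version:
tampered = [{**attestations[0], "policy": "p0"}] + attestations[1:]
t_heads, t_head = [], "genesis"
for a in tampered:
    t_head = extend(t_head, a)
    t_heads.append(t_head)

# Every head from the tampered entry onward diverges.
assert all(h != t for h, t in zip(heads, t_heads))
```

Because the divergence reaches the chain head, and the head is anchored on Bitcoin, the tampering is detectable from the public anchor alone.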
This is what regulators want. The EU AI Act requires that high-risk AI systems maintain records of automated decisions. NIST AI RMF requires traceability and accountability. H33-Agent-Zero provides cryptographic evidence that satisfies both requirements without exposing protected data to the auditor.
Diagnostic AI processes encrypted patient data (HIPAA-protected PHI). The decision is attested. The auditor can verify the model version and policy without seeing the patient record. The hospital proves compliance without exposing data.
Credit scoring AI evaluates encrypted financial records. The decision attestation binds the model, the policy, and the timestamp. The regulator verifies the proof chain. The borrower's financial data is never exposed to the auditor.
Claims processing AI evaluates encrypted claim data against policy terms. Every approval and denial is attested with the policy version in effect at the time. Disputes reference the hash chain, not conflicting recollections.
Benefits eligibility, fraud detection, and security clearance AI systems produce attested decisions. Citizens can verify that decisions were made by approved model versions under published policies. Classified inputs remain encrypted.
7 patents pending. 300+ patent claims. The AI audit trail architecture, the encrypted inference pipeline, the policy version binding mechanism, and the decision finality chain are protected by pending patent applications covering the full Agent-Zero attestation system.