AI Governance · 12 min read

The Clarity ACT Requires AI Explainability —
We Built It Into Every Decision

Most enterprises answer "why was this blocked?" with log analysis and institutional knowledge. We answer it with structured, verifiable, exportable explanations backed by cryptographic evidence.

"Why?" for Every Decision · Verified Citations · Cited Evidence · Exportable Reports

The Clarity ACT (S.4495) asks a simple question that most AI systems cannot answer: why?

Why was this loan application denied? Why was this insurance claim flagged? Why was this transaction blocked? Why was this patient's data routed to this specific computation engine? Why was this signer key used and not another?

Most enterprises answer these questions with a combination of log analysis, policy documentation, and institutional knowledge. An engineer looks at the logs, references the policy document, consults the team that configured the system, and writes a narrative explanation.

This process has three fundamental problems. First, it is slow — often taking days or weeks to reconstruct what happened. Second, it is incomplete — the logs may not capture the full decision context. Third, it is unverifiable — the explanation is a human interpretation of system behavior, not a mathematical proof of what actually occurred.

At H33, we built explainability into every decision. Not as a reporting layer on top of the system, but as a structural property of the governance runtime itself.

What the Clarity ACT Actually Requires

The Clarity ACT is not vague about what it expects. It requires financial institutions using AI to provide clear explanations of automated decisions to the consumers, regulators, and auditors those decisions affect.

The key word is "clear." Not "available upon request after six weeks of investigation." Clear. As in: the consumer, the regulator, the auditor can understand what happened and why, in a timeframe that is meaningful.

This requires infrastructure, not process. You cannot build clear, fast, verifiable explanations from unstructured logs. You need structured explainability objects that are produced at decision time, not reconstructed after the fact.

The Structured Explanation Object

Every explanation in the H33 governance runtime is a structured object with defined fields: the question being answered, the answer, the evidence citations with their verification status, a severity level, and a report identifier for audit tracking.

This is not a log entry. It is a structured, verifiable, exportable explanation that can be independently validated against the governance graph.
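As an illustrative sketch only, a structured explanation object along these lines might be modeled as below. The field names are inferred from the examples in this article, not taken from the actual H33 schema.

```python
from dataclasses import dataclass


@dataclass
class Citation:
    """One cited governance object and its verification status."""
    object_type: str   # e.g. "RouteDecisionReceipt"
    verified: bool     # confirmed against the governance graph?


@dataclass
class Explanation:
    """A structured, exportable explanation produced at decision time."""
    question: str
    answer: str
    evidence: list     # list[Citation]
    severity: str = "info"
    report_id: str = ""


exp = Explanation(
    question="Why was this engine selected?",
    answer="BFV-64 met the Q128 security target for an exact-integer inner product.",
    evidence=[Citation("RouteDecisionReceipt", True),
              Citation("PolicyDecisionReceipt", True)],
)
print(all(c.verified for c in exp.evidence))  # → True
```

Because the object is produced at decision time, nothing has to be reconstructed later: the explanation and its citations travel together.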

Explainability in Practice

"Why was this engine selected?"

Example: Engine Selection Explanation

Question: Why was this engine selected?

Answer: The IQ router evaluated the request against the active policy (SEC_REPORTING v2.1) with security target Q128 and latency target 50 microseconds. BFV-64 was selected because the operation is polynomial (inner product), the data type is integer (requires exact arithmetic), and BFV-64 meets the security target at N=4096. CKKS was rejected: data_type_incompatible. TFHE was rejected: operation_incompatible. BFV-32 was rejected: security_tier_insufficient.

Evidence: RouteDecisionReceipt (verified), PolicyDecisionReceipt (verified), Stage 1 Determinism Hash (verified)
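The selection reasoning in this example can be sketched as a small routing function. The capability table and thresholds below are simplified assumptions for illustration, not H33's real routing logic.

```python
# Hypothetical engine capability table (assumed for illustration).
ENGINES = {
    "BFV-64": {"ops": {"polynomial"}, "types": {"integer"}, "security": 128},
    "BFV-32": {"ops": {"polynomial"}, "types": {"integer"}, "security": 64},
    "CKKS":   {"ops": {"polynomial"}, "types": {"approximate_real"}, "security": 128},
    "TFHE":   {"ops": {"boolean"},    "types": {"integer"}, "security": 128},
}


def select_engine(op, data_type, security_target):
    """Return (selected_engine, {rejected_engine: reason}).

    Records a rejection reason for every engine that fails a check,
    so the decision can be explained, not just made.
    """
    rejections = {}
    selected = None
    for name, caps in ENGINES.items():
        if op not in caps["ops"]:
            rejections[name] = "operation_incompatible"
        elif data_type not in caps["types"]:
            rejections[name] = "data_type_incompatible"
        elif caps["security"] < security_target:
            rejections[name] = "security_tier_insufficient"
        elif selected is None:
            selected = name
    return selected, rejections


engine, rejected = select_engine("polynomial", "integer", 128)
print(engine)            # → BFV-64
print(rejected["CKKS"])  # → data_type_incompatible
```

The point is that rejection reasons are first-class outputs of the router, which is what makes the "why not CKKS?" half of the answer possible.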

"Why was this blocked?"

Example: Enforcement Explanation

Question: Why was this execution blocked?

Answer: The governance enforcer detected a NamespaceContinuityBreak in namespace "fhe_session." State transition sequence 847 was followed by sequence 849 — sequence 848 is missing. This violates the namespace continuity requirement of the Banking governance profile. The enforcer at Enforced level applied IsolateTenantNamespace.

Severity: Critical

Evidence: EnforcementDecisionReceipt (verified), StateTransitionReceipt #847 (verified), StateTransitionReceipt #849 (verified), GovernanceProfile: Banking template (verified), ReplayFrame (verified)
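The continuity check behind this example is simple to sketch: state transitions within a namespace must carry consecutive sequence numbers, and any gap is a break. The function below is a minimal stand-in, not the H33 enforcer's actual interface.

```python
def find_continuity_breaks(sequences):
    """Return the sequence numbers missing between observed transitions."""
    missing = []
    for prev, cur in zip(sequences, sequences[1:]):
        # Any gap between consecutive observations is a continuity break.
        missing.extend(range(prev + 1, cur))
    return missing


observed = [846, 847, 849]             # sequence 848 never arrived
print(find_continuity_breaks(observed))  # → [848]
```

A detected gap is what escalates to an enforcement action such as IsolateTenantNamespace under an Enforced-level profile.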

"Why did verification fail?"

Example: Verification Failure Explanation

Question: Why did verification fail?

Answer: Integrity root divergence detected. The computed root does not match the expected root. This indicates that the governance chain has been modified, events are missing, or events have been reordered since the last checkpoint. The MonotonicVerifier flagged an ImpossibleOrdering violation: a state transition has a parent event with a later timestamp, which is temporally impossible.

Severity: Critical

Evidence: ReplayFrame (verified), IntegrityRoot (verified), MonotonicVerifier violation (verified), StateTransitionReceipt (verified)
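Both checks in this example can be sketched under assumed event shapes: recompute a hash over the governance events and compare it to the expected root, and flag any event whose parent carries a later timestamp. This is a simplified illustration, not H33's actual chain format.

```python
import hashlib


def compute_root(events):
    """Hash the event sequence; any edit, omission, or reorder changes the root."""
    h = hashlib.sha256()
    for e in events:
        h.update(repr(e).encode())
    return h.hexdigest()


def impossible_orderings(events_by_id):
    """Return ids of events whose parent has a later timestamp."""
    return [eid for eid, e in events_by_id.items()
            if e.get("parent") in events_by_id
            and events_by_id[e["parent"]]["ts"] > e["ts"]]


events = {
    "a": {"parent": None, "ts": 100},
    "b": {"parent": "a",  "ts": 90},   # parent is newer: temporally impossible
}
print(impossible_orderings(events))    # → ['b']

root = compute_root(list(events.values()))
tampered = dict(events)
tampered["b"] = {"parent": "a", "ts": 95}   # modify one event
print(root == compute_root(list(tampered.values())))  # → False
```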

Citation Integrity

Every explanation cites governance objects. But a citation is only valuable if it is verifiable. A fake citation is worse than no citation — it creates false confidence.

The H33 explainability system enforces citation integrity: every cited object must resolve to a real node in the governance graph, and its verification status is checked before the citation is presented.

The system never presents an unverified citation as verified. If the governance graph cannot confirm that a cited object exists, the explanation says so explicitly. This prevents the most dangerous failure mode of explainability systems: confidently citing evidence that does not exist.
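The rule above can be sketched as a lookup that labels every citation explicitly rather than silently trusting it. The dict-backed graph here is a stand-in for illustration, not H33's actual store.

```python
# Stand-in governance graph (assumed structure for illustration).
GRAPH = {
    "receipt-1": {"type": "RouteDecisionReceipt"},
    "receipt-2": {"type": "PolicyDecisionReceipt"},
}


def check_citations(cited_ids, graph):
    """Label each citation 'verified' or 'unverified (not in graph)'.

    An unresolved citation is surfaced explicitly, never presented
    as verified.
    """
    return {cid: ("verified" if cid in graph else "unverified (not in graph)")
            for cid in cited_ids}


status = check_citations(["receipt-1", "receipt-99"], GRAPH)
print(status["receipt-1"])   # → verified
print(status["receipt-99"])  # → unverified (not in graph)
```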

The AI Copilot Layer

On top of the structured explainability system, we built an AI copilot that operates exclusively on verified governance state. Customers can ask natural language questions such as "Why was this blocked?", "Why was this engine selected?", or "Why did verification fail?"

Every copilot response includes a "Verified Citations" section. The copilot never fabricates governance state. If it cannot find verified evidence to support a claim, it says so.

This is not a chatbot. It is an investigation tool that operates over a cryptographically verified evidence base.

Exportable Explanations

Every explanation can be exported as a standalone report including the structured explanation, all evidence nodes with verification status, the replay frame context, the enforcement decision if applicable, and a unique report ID for audit tracking.

Exported explanations can be shared with regulators, auditors, insurers, and legal teams. Each explanation is self-contained — it includes enough information to independently verify the cited governance objects.
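A self-contained export along these lines might be serialized as below. The field names and the use of a UUID for the report ID are assumptions for illustration, not H33's actual export format.

```python
import json
import uuid


def export_report(question, answer, evidence, severity="info"):
    """Serialize an explanation as a standalone, shareable JSON report."""
    report = {
        "report_id": str(uuid.uuid4()),  # unique ID for audit tracking
        "question": question,
        "answer": answer,
        "evidence": evidence,            # each entry carries its own status
        "severity": severity,
    }
    return json.dumps(report, indent=2)


doc = export_report(
    "Why was this execution blocked?",
    "NamespaceContinuityBreak: sequence 848 missing in 'fhe_session'.",
    [{"object": "EnforcementDecisionReceipt", "status": "verified"}],
    severity="critical",
)
print(json.loads(doc)["severity"])  # → critical
```

Because the evidence entries ship with their verification status, a recipient can re-check each citation against the governance graph without access to internal tooling.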

Why This Approach Matters for the Clarity ACT

Timely

Explanations must be available when needed, not weeks later. Structured explanations are generated at decision time and available immediately.

Accurate

Explanations must reflect what actually happened, not what the system was configured to do. Structured explanations are derived from signed governance objects, not configuration files.

Verifiable

Explanations must be independently confirmable. Every citation can be verified against the governance graph. Every replay frame is deterministic and reproducible.

Complete

Explanations must cover the full decision context: what policy applied, what alternatives were considered, what enforcement was active. The governance lineage provides this structurally.

Actionable

Explanations must enable the consumer, regulator, or auditor to understand what happened and what to do about it. Structured explanations include severity levels, enforcement context, and exportable reports.

Meeting all five requirements simultaneously is not possible with log-based investigation. It requires structured explainability built into the governance substrate.

Conclusion

The Clarity ACT requires AI explainability. Not as an aspiration. As a legal requirement.

Most organizations will attempt to meet this requirement with better logging, improved dashboards, and more detailed compliance reports. Those approaches will fail under scrutiny because they cannot answer "why?" with verifiable evidence.

We built "Why was this blocked?" into every decision. Not as a feature. As a structural property of the governance runtime. Every explanation cites verified governance objects. Every citation can be independently confirmed. Every explanation can be exported as a standalone audit report.

This is what Clarity ACT compliance looks like when it is built into the architecture, not bolted on after the fact.

Schedule a Demo

See the governance runtime in action.
