The Clarity ACT (S.4495) asks a simple question that most AI systems cannot answer: why?
Why was this loan application denied? Why was this insurance claim flagged? Why was this transaction blocked? Why was this patient's data routed to this specific computation engine? Why was this signer key used and not another?
Most enterprises answer these questions with a combination of log analysis, policy documentation, and institutional knowledge. An engineer looks at the logs, references the policy document, consults the team that configured the system, and writes a narrative explanation.
This process has three fundamental problems. First, it is slow — often taking days or weeks to reconstruct what happened. Second, it is incomplete — the logs may not capture the full decision context. Third, it is unverifiable — the explanation is a human interpretation of system behavior, not a mathematical proof of what actually occurred.
At H33, we built explainability into every decision. Not as a reporting layer on top of the system, but as a structural property of the governance runtime itself.
What the Clarity ACT Actually Requires
The Clarity ACT is not vague about what it expects. It requires financial institutions using AI to:
- Provide consumers with clear explanations of AI-driven decisions
- Maintain auditable records of AI system behavior
- Demonstrate that AI systems operate within defined governance boundaries
- Enable independent oversight of AI decision-making processes
The key word is "clear." Not "available upon request after six weeks of investigation." Clear. As in: the consumer, the regulator, the auditor can understand what happened and why, in a timeframe that is meaningful.
This requires infrastructure, not process. You cannot build clear, fast, verifiable explanations from unstructured logs. You need structured explainability objects that are produced at decision time, not reconstructed after the fact.
The Structured Explanation Object
Every explanation in the H33 governance runtime is a structured object with defined fields:
- question — The human-readable question being answered
- answer — The dynamically constructed explanation based on actual decision context
- severity — Critical, warning, or informational
- evidence_nodes — An array of governance objects that substantiate the explanation, each with node type, canonical hash, verification status, and timestamp
- replay_frame_id — The replay frame at which this explanation is anchored
- enforcement_decision_hash — If related to an enforcement action, the hash of the triggering receipt
- exportable_report_id — A unique identifier for generating an exportable audit report
This is not a log entry. It is a structured, verifiable, exportable explanation that can be independently validated against the governance graph.
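A minimal TypeScript sketch of that shape, with field names taken from the list above; the concrete types, the EvidenceNode structure, and the enum values are illustrative assumptions, not the runtime's actual schema:

```typescript
// Illustrative sketch of the structured explanation object described above.
// Field names mirror the list; types and nested shapes are assumptions.

type Severity = "critical" | "warning" | "informational";
type VerificationStatus = "VERIFIED" | "UNVERIFIED" | "PENDING";

interface EvidenceNode {
  nodeType: string;                 // e.g. "RouteDecisionReceipt"
  canonicalHash: string;            // canonical hash of the cited governance object
  verificationStatus: VerificationStatus;
  timestamp: string;                // ISO-8601
}

interface ExplanationObject {
  question: string;                 // human-readable question being answered
  answer: string;                   // dynamically constructed from actual decision context
  severity: Severity;
  evidenceNodes: EvidenceNode[];    // governance objects that substantiate the answer
  replayFrameId: string;            // replay frame the explanation is anchored to
  enforcementDecisionHash?: string; // present only for enforcement-related explanations
  exportableReportId: string;       // unique ID for the exportable audit report
}
```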
Explainability in Practice
"Why was this engine selected?"
Example: Engine Selection Explanation
Question: Why was this engine selected?
Answer: The IQ router evaluated the request against the active policy (SEC_REPORTING v2.1) with security target Q128 and latency target 50 microseconds. BFV-64 was selected because the operation is polynomial (inner product), the data type is integer (requires exact arithmetic), and BFV-64 meets the security target at N=4096. CKKS was rejected: data_type_incompatible. TFHE was rejected: operation_incompatible. BFV-32 was rejected: security_tier_insufficient.
Evidence: RouteDecisionReceipt (verified), PolicyDecisionReceipt (verified), Stage 1 Determinism Hash (verified)
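A hedged sketch of how that selection rationale could be assembled into an explanation answer. The Candidate shape, the helper name, and the rejection codes mirror the example above but are assumptions, not the actual IQ router API:

```typescript
// Illustrative only: candidate shape and rejection codes are assumptions.
interface Candidate {
  engine: string;
  rejectionReason?:
    | "data_type_incompatible"
    | "operation_incompatible"
    | "security_tier_insufficient";
}

function buildRoutingAnswer(policy: string, selected: string, candidates: Candidate[]): string {
  // Collect a human-readable rejection line for every candidate that was ruled out.
  const rejected = candidates
    .filter(c => c.rejectionReason)
    .map(c => `${c.engine} was rejected: ${c.rejectionReason}.`);
  return [
    `The IQ router evaluated the request against the active policy (${policy}).`,
    `${selected} was selected because it satisfies the operation, data type, and security constraints.`,
    ...rejected,
  ].join(" ");
}

// Example mirroring the explanation above:
buildRoutingAnswer("SEC_REPORTING v2.1", "BFV-64", [
  { engine: "BFV-64" },
  { engine: "CKKS", rejectionReason: "data_type_incompatible" },
  { engine: "TFHE", rejectionReason: "operation_incompatible" },
  { engine: "BFV-32", rejectionReason: "security_tier_insufficient" },
]);
```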
"Why was this blocked?"
Example: Enforcement Explanation
Question: Why was this execution blocked?
Answer: The governance enforcer detected a NamespaceContinuityBreak in namespace "fhe_session." State transition sequence 847 was followed by sequence 849 — sequence 848 is missing. This violates the namespace continuity requirement of the Banking governance profile. The enforcer at Enforced level applied IsolateTenantNamespace.
Severity: Critical
Evidence: EnforcementDecisionReceipt (verified), StateTransitionReceipt #847 (verified), StateTransitionReceipt #849 (verified), GovernanceProfile: Banking template (verified), ReplayFrame (verified)
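The continuity check itself is conceptually simple: every state transition in a namespace must follow its predecessor by exactly one sequence number. A hedged sketch, assuming receipts carry a namespace and a monotonically increasing sequence (shapes and the function name are illustrative):

```typescript
// Illustrative sketch of a namespace continuity check; not the actual enforcer.
interface StateTransitionReceipt {
  namespace: string;
  sequence: number;
}

function findContinuityBreaks(receipts: StateTransitionReceipt[]): string[] {
  const violations: string[] = [];
  const sorted = [...receipts].sort((a, b) => a.sequence - b.sequence);
  for (let i = 1; i < sorted.length; i++) {
    const prev = sorted[i - 1];
    const curr = sorted[i];
    // Any gap in the sequence is a continuity break.
    if (curr.sequence !== prev.sequence + 1) {
      violations.push(
        `NamespaceContinuityBreak in "${curr.namespace}": sequence ${prev.sequence} ` +
        `was followed by ${curr.sequence}; ${prev.sequence + 1} is missing.`
      );
    }
  }
  return violations;
}

// Mirrors the example above: 847 followed by 849, with 848 missing.
findContinuityBreaks([
  { namespace: "fhe_session", sequence: 847 },
  { namespace: "fhe_session", sequence: 849 },
]);
```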
"Why did verification fail?"
Example: Verification Failure Explanation
Question: Why did verification fail?
Answer: Integrity root divergence detected. The computed root does not match the expected root. This indicates that the governance chain has been modified, events are missing, or events have been reordered since the last checkpoint. The MonotonicVerifier flagged an ImpossibleOrdering violation: a state transition has a parent event with a later timestamp, which is temporally impossible.
Severity: Critical
Evidence: ReplayFrame (verified), IntegrityRoot (verified), MonotonicVerifier violation (verified), StateTransitionReceipt (verified)
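Conceptually, the ordering check reduces to comparing each event's timestamp against its parent's: a child can never precede the event that produced it. A hedged sketch with assumed event shapes and names, not the MonotonicVerifier's actual implementation:

```typescript
// Illustrative sketch of a temporal-ordering check over a governance chain.
interface GovernanceEvent {
  id: string;
  parentId?: string;
  timestamp: number; // epoch milliseconds
}

function findImpossibleOrderings(events: GovernanceEvent[]): string[] {
  const byId = new Map<string, GovernanceEvent>();
  for (const e of events) byId.set(e.id, e);

  const violations: string[] = [];
  for (const event of events) {
    const parent = event.parentId ? byId.get(event.parentId) : undefined;
    // A parent with a later timestamp than its child is temporally impossible.
    if (parent && parent.timestamp > event.timestamp) {
      violations.push(
        `ImpossibleOrdering: event ${event.id} occurs before its parent ${parent.id}.`
      );
    }
  }
  return violations;
}
```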
Citation Integrity
Every explanation cites governance objects. But a citation is only valuable if it is verifiable. A fake citation is worse than no citation — it creates false confidence.
The H33 explainability system enforces citation integrity:
- Every citation includes a canonical hash
- Every citation is verified via lookupHash() against the governance graph
- Successful lookup displays as "VERIFIED" with a green indicator
- Failed lookup displays as "UNVERIFIED" with a red indicator and explicit warning
- Pending citations display as "PENDING" with a yellow indicator
The system never presents an unverified citation as verified. If the governance graph cannot confirm that a cited object exists, the explanation says so explicitly. This prevents the most dangerous failure mode of explainability systems: confidently citing evidence that does not exist.
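A hedged sketch of that verification step. lookupHash() is the lookup named above, but its signature here, and the Citation shape, are assumptions:

```typescript
// Illustrative sketch of citation verification against the governance graph.
type CitationStatus = "VERIFIED" | "UNVERIFIED" | "PENDING";

interface Citation {
  nodeType: string;
  canonicalHash: string;
  status: CitationStatus;
}

// Assumed lookup: resolves a canonical hash to a governance object,
// or null if the graph cannot confirm the object exists.
declare function lookupHash(hash: string): Promise<object | null>;

async function verifyCitation(citation: Citation): Promise<Citation> {
  const node = await lookupHash(citation.canonicalHash);
  // A citation is only ever marked VERIFIED when the lookup succeeds;
  // anything else is surfaced as UNVERIFIED with an explicit warning.
  return { ...citation, status: node !== null ? "VERIFIED" : "UNVERIFIED" };
}
```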
The AI Copilot Layer
On top of the structured explainability system, we built an AI copilot that operates exclusively on verified governance state. Customers can ask natural language questions:
- "Why was tenant acme-corp isolated 5 minutes ago?"
- "Show me all enforcement events in the last hour"
- "Compare the last two checkpoints"
- "Why did the router select TFHE instead of BFV-64?"
Every copilot response includes a "Verified Citations" section. The copilot never fabricates governance state. If it cannot find verified evidence to support a claim, it says so.
This is not a chatbot. It is an investigation tool that operates over a cryptographically verified evidence base.
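A hedged sketch of the contract a copilot response is held to, with assumed types and names: a claim without verified evidence is never dressed up as an answer.

```typescript
// Illustrative sketch; not the actual copilot API.
interface VerifiedCitation {
  nodeType: string;
  canonicalHash: string;
  status: "VERIFIED" | "UNVERIFIED" | "PENDING";
}

interface CopilotResponse {
  answer: string;
  verifiedCitations: VerifiedCitation[];
}

function composeResponse(draftAnswer: string, citations: VerifiedCitation[]): CopilotResponse {
  const verified = citations.filter(c => c.status === "VERIFIED");
  // If nothing in the governance graph supports the claim, say so explicitly
  // rather than returning the draft answer without evidence.
  if (verified.length === 0) {
    return {
      answer: "No verified governance evidence was found to support this claim.",
      verifiedCitations: [],
    };
  }
  return { answer: draftAnswer, verifiedCitations: verified };
}
```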
Exportable Explanations
Every explanation can be exported as a standalone report including the structured explanation, all evidence nodes with verification status, the replay frame context, the enforcement decision if applicable, and a unique report ID for audit tracking.
Exported explanations can be shared with regulators, auditors, insurers, and legal teams. Each explanation is self-contained — it includes enough information to independently verify the cited governance objects.
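A hedged sketch of the export step, reusing the ExplanationObject shape sketched earlier; the report structure itself is an assumption:

```typescript
// Illustrative sketch of assembling a standalone audit report from an
// explanation. ExplanationObject is the interface sketched earlier.
interface ExportedReport {
  reportId: string;
  explanation: ExplanationObject; // carries evidence nodes, replay frame, enforcement hash
  exportedAt: string;             // ISO-8601 timestamp of export
}

function exportExplanation(explanation: ExplanationObject): ExportedReport {
  return {
    reportId: explanation.exportableReportId,
    explanation,
    exportedAt: new Date().toISOString(),
  };
}
```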
Why This Approach Matters for the Clarity ACT
Timely
Explanations must be available when needed, not weeks later. Structured explanations are generated at decision time and available immediately.
Accurate
Explanations must reflect what actually happened, not what the system was configured to do. Structured explanations are derived from signed governance objects, not configuration files.
Verifiable
Explanations must be independently confirmable. Every citation can be verified against the governance graph. Every replay frame is deterministic and reproducible.
Complete
Explanations must cover the full decision context: what policy applied, what alternatives were considered, what enforcement was active. The governance lineage provides this structurally.
Actionable
Explanations must enable the consumer, regulator, or auditor to understand what happened and what to do about it. Structured explanations include severity levels, enforcement context, and exportable reports.
Meeting all five requirements simultaneously is not possible with log-based investigation. It requires structured explainability built into the governance substrate.
Conclusion
The Clarity ACT requires AI explainability. Not as an aspiration. As a legal requirement.
Most organizations will attempt to meet this requirement with better logging, improved dashboards, and more detailed compliance reports. Those approaches will fail under scrutiny because they cannot answer "why?" with verifiable evidence.
We built "Why was this blocked?" into every decision. Not as a feature. As a structural property of the governance runtime. Every explanation cites verified governance objects. Every citation can be independently confirmed. Every explanation can be exported as a standalone audit report.
This is what Clarity ACT compliance looks like when it is built into the architecture, not bolted on after the fact.