The Missing Piece in Homomorphic Encryption
For the past fifteen years, the homomorphic encryption community has focused on a single capability: computing on encrypted data. Add two encrypted numbers and get an encrypted sum. Multiply an encrypted vector by a plaintext matrix and get an encrypted result. Evaluate a polynomial on encrypted inputs and get encrypted outputs. The research papers, the benchmarks, the startup pitches -- they all center on the same idea: "Look, we can do arithmetic on ciphertext."
This is valuable. But it is incomplete. Arithmetic on encrypted data gives you encrypted compute. What it does not give you is encrypted decisions. In every real system, compute is only half the story. The other half is control flow: if this condition is true, do one thing; if it is false, do another. Threshold checks, conditional routing, branch selection, argmax, policy enforcement -- these are the operations that turn computation into action.
Traditional FHE handles the compute. It does not handle the control flow. When your CKKS model produces an encrypted classification score, the question "is this score above the threshold?" requires a comparison. That comparison is a decision. And in traditional architectures, decisions require seeing the data.
Encrypted control flow is the capability that closes this gap. It is the ability to evaluate branch conditions on encrypted data, where the condition, the inputs, and the output are all ciphertext. The server evaluates all possible branches and uses encrypted conditional selection (MUX) to produce the correct output, without knowing which branch was taken or what the inputs were.
This is not an incremental improvement on FHE. It is a qualitative leap. It transforms homomorphic encryption from a compute tool into an enforcement tool. And it is the foundation of everything H33-Agent-Zero does.
What "Encrypted" Really Means Here
When we say "encrypted control flow," every word matters. Let us be precise about what is encrypted and what is not.
The branch condition is encrypted. In a plaintext system, you evaluate "if score > threshold" and the result is a Boolean: true or false. In encrypted control flow, the score is an encrypted value, the comparison produces an encrypted bit, and no party learns whether the condition was true or false. The server computes the comparison result as a ciphertext. It holds the answer but cannot read it.
The inputs to each branch are encrypted. In a plaintext system, the true-branch and false-branch receive plaintext values and produce plaintext outputs. In encrypted control flow, both branches receive encrypted inputs and produce encrypted outputs. The server evaluates both branches to completion.
The output is encrypted. In a plaintext system, only the taken branch produces the output. In encrypted control flow, both branches produce encrypted outputs, and an encrypted MUX selects the correct one based on the encrypted condition bit. The server sends the final encrypted result to the client, having never learned the condition, the inputs, or which output was selected.
The control flow structure is public. The fact that there is a branch, the number of branches, the depth of nesting -- these are visible to the server. This is analogous to knowing the structure of a decision tree (which features are compared at which nodes) while not knowing the feature values. The structure is the model; the values are the data. The model is public; the data is encrypted.
This distinction is critical. Encrypted control flow does not hide the program. It hides the data flowing through the program and the decisions the program makes about that data. The server knows what computation it is performing (evaluate a decision tree, check a threshold, compute an argmax) but learns nothing about the values being processed or the outcome.
GT: The Foundation of Encrypted Decisions
Greater-than (GT) comparison is the foundational primitive for encrypted control flow. Nearly every decision operation that matters in production systems can be expressed in terms of GT.
Threshold check: "Is X greater than T?" is a direct GT operation. This is the most common decision primitive in compliance engines, fraud detection, risk scoring, and policy enforcement. A single n-bit GT comparison costs 2n-1 programmable bootstraps, which at 8-bit precision is 15 programmable bootstraps.
Greater-than-or-equal (GTE): "Is X greater than or equal to T?" can be computed as NOT(T GT X), which is the negation of the reverse comparison. NOT is free in TFHE (just negate the LWE ciphertext), so GTE costs the same as GT: 15 programmable bootstraps for 8-bit values.
Argmax: "Which of these N values is the largest?" is a tournament of GT comparisons. Compare value 1 against value 2. Compare the winner against value 3. Continue until all values have been compared. For N values, argmax requires N-1 GT comparisons, each costing 2n-1 programmable bootstraps. For 10 categories at 8-bit precision, argmax costs 9 times 15 = 135 programmable bootstraps.
Decision trees: Each branch in a decision tree is a GT comparison. A tree with d levels requires d sequential GT evaluations per path, with parallel evaluation across branches at each level. As detailed in our companion article on encrypted decision trees, the total cost scales with tree depth and fan-out.
Range checks: "Is X between A and B?" decomposes into (X GT A) AND (B GT X), which is two GT comparisons and one AND gate: 2 times 15 + 1 = 31 programmable bootstraps for 8-bit values.
Min/max selection: "Return the smaller of X and Y" is a GT comparison followed by a MUX: if X GT Y, return Y; else return X. The GT comparison costs 15 programmable bootstraps, and the MUX adds a small additional cost per output bit.
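The GT-based primitives above can be modeled in a few lines of plaintext Python. This is an illustrative sketch only, not an FHE implementation: the bootstrap counts simply mirror the article's 2n-1 cost model rather than any specific TFHE gate netlist.

```python
# Plaintext model of GT, GTE, and range check over n-bit unsigned values.
# No ciphertexts involved; costs follow the article's 2n-1 model.

def gt_bits(x, y, n=8):
    """Return (x > y, bootstrap cost). Scans from the most significant
    bit down; the first differing bit decides the comparison."""
    result, decided = 0, False
    for i in reversed(range(n)):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        # if not yet decided and the bits differ, xi wins
        result = result if decided else (xi if xi != yi else result)
        decided = decided or (xi != yi)
    return bool(result), 2 * n - 1  # cost model: 2n-1 bootstraps

def gte_bits(x, y, n=8):
    # GTE(x, y) = NOT(y GT x); NOT is free in TFHE, so same cost as GT.
    gt, cost = gt_bits(y, x, n)
    return not gt, cost

def in_range(x, a, b, n=8):
    # (x GT a) AND (b GT x): two GT comparisons plus one AND gate.
    lo, c1 = gt_bits(x, a, n)
    hi, c2 = gt_bits(b, x, n)
    return lo and hi, c1 + c2 + 1  # 15 + 15 + 1 = 31 at 8 bits
```

Running the range check at 8-bit precision reproduces the 31-bootstrap figure from the text; the GTE wrapper shows why the free NOT makes it cost-identical to GT.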
This is why we say GT is the primitive that matters. Equality (EQ) comparisons are used less frequently in decision logic -- most real-world policies ask "is this value above a threshold?" rather than "is this value exactly equal to a specific number?" And GT has better structural properties in the TFHE Boolean gate setting.
The Honest Story on EQ vs GT
We are transparent about the operational characteristics of both primitives because trust requires honesty about limitations.
GT works reliably at 8-bit precision. An 8-bit GT circuit chains 15 programmable bootstraps in sequence, and the noise accumulation through this chain remains within the decryption tolerance of standard TFHE parameters. This means GT-based primitives -- thresholds, argmax, decision trees, GTE, range checks -- all work at 8-bit feature precision in production.
EQ (equality comparison) has a different noise profile. At 4-bit precision and below, EQ works cleanly with standard parameters. At 8-bit precision, the chained comparison noise depth exceeds the standard noise budget. This is a structural property of how comparison noise compounds through the EQ gate chain -- EQ requires a different pattern of bootstrap chaining than GT, and the noise accumulates differently. To support 8-bit EQ, the TFHE parameters must be upgraded to use TRLWE with ring dimension N=1024, which increases the bootstrapping key size and the per-bootstrap computation cost.
This is not a bug. It is not a limitation that will be patched in a future release. It is a fundamental characteristic of the noise dynamics in Boolean TFHE circuits. Different gate topologies produce different noise profiles, and EQ's topology is noisier than GT's at higher bit widths.
The practical impact is minimal because, as we have established, GT covers the vast majority of real decision logic. Threshold checks, argmax, decision trees, GTE, range checks, and min/max selection all use GT. The cases where EQ is needed (exact match lookups, identity verification, set membership) can either work at 4-bit precision or use the upgraded parameters. For H33-Agent-Zero's core use cases -- confidence boundaries, policy enforcement, encrypted classification -- GT is the only primitive needed.
MUX: The Encrypted If/Else
The MUX (multiplexer) operation is what turns encrypted comparisons into encrypted control flow. Without MUX, a GT comparison produces an encrypted bit that nobody can read. With MUX, that encrypted bit selects between two encrypted values, producing an encrypted result that corresponds to the correct branch -- all without anyone learning which branch was taken.
The MUX operation takes three encrypted inputs: a condition bit c, a value a (selected when c=1), and a value b (selected when c=0). It produces an encrypted output that equals a if c was 1, or b if c was 0. The server evaluates the MUX without learning c, a, b, or the result.
Algebraically, MUX(c, a, b) = (c AND a) OR (NOT(c) AND b). In TFHE, this can be computed more efficiently as a single programmable bootstrap with a carefully chosen lookup table, reducing the cost from three bootstraps (the naive composition) to one or two bootstraps depending on the implementation.
MUX is the encrypted if/else. It is the building block for all conditional logic in encrypted control flow. Decision trees use MUX at every level to propagate the correct branch result upward. Argmax uses MUX after each GT comparison to keep track of the running maximum. Policy enforcement uses MUX to select the correct action based on encrypted condition bits.
The composition of GT and MUX gives us the full vocabulary of encrypted control flow. GT evaluates the condition. MUX selects the outcome. Together, they implement encrypted branching.
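The Boolean identity for MUX can be sanity-checked in plaintext Python. This is a sketch of the algebra only; in actual TFHE the same selection would be fused into programmable bootstraps over ciphertext bits, as described above.

```python
# Plaintext check of MUX(c, a, b) = (c AND a) OR (NOT(c) AND b),
# extended bitwise to n-bit words (one MUX per output bit).

def mux_gate(c, a, b):
    """1-bit multiplexer from the Boolean identity."""
    return (c & a) | ((1 - c) & b)

def mux_word(c, a, b, n=8):
    """Select between two n-bit values, one mux_gate per output bit."""
    out = 0
    for i in range(n):
        out |= mux_gate(c, (a >> i) & 1, (b >> i) & 1) << i
    return out
```

Exhaustively checking mux_gate against Python's own conditional expression confirms the identity; mux_word shows why selecting an m-bit value costs m MUX operations, matching the cost model later in the article.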
From Encrypted Compute to Encrypted Enforcement
The distinction between encrypted compute and encrypted enforcement is the single most important concept in applied homomorphic encryption, and it is the concept that the rest of the industry has largely missed.
Encrypted compute means performing arithmetic on encrypted data. Adding two encrypted numbers. Multiplying an encrypted vector by a matrix. Evaluating a neural network on encrypted inputs. The output is an encrypted result that must be decrypted before anyone can act on it. Encrypted compute is passive: it transforms data but does not make decisions.
Encrypted enforcement means making decisions on encrypted data and taking actions based on those decisions, without any party ever seeing the data or the decision in plaintext. The classification result is encrypted. The threshold check is encrypted. The policy tag is encrypted. The routing decision is encrypted. The enforcement action is driven by encrypted conditions through encrypted MUX operations. Encrypted enforcement is active: it decides and acts.
The leap from compute to enforcement is the leap from "we can evaluate a model on your encrypted data and give you back an encrypted result" to "we can evaluate a model on your encrypted data, decide what to do based on the result, enforce a policy based on the decision, and take action -- all without ever seeing the data, the model output, the decision, or the action."
This is what H33-Agent-Zero implements. The CKKS stage is encrypted compute (classification on encrypted features). The TFHE stage is encrypted enforcement (threshold checks, argmax, decision trees, and policy tag computation on encrypted classification results). The combination gives you a complete pipeline from encrypted input to encrypted action.
Encrypted Threshold: The Simplest Useful Pattern
The simplest encrypted control flow pattern is the encrypted threshold. An encrypted value X is compared against a public threshold T. The result is an encrypted bit indicating whether X exceeds T. An encrypted MUX then selects between two possible actions based on the result.
Consider a fraud detection system. The fraud score for a transaction is computed on encrypted features (amount, location, merchant category, user history) using CKKS. The encrypted fraud score is then compared against a threshold using an 8-bit GT operation. If the score exceeds the threshold, the transaction is flagged for review (encrypted flag bit = 1). If not, the transaction is approved (encrypted flag bit = 0). The server sends the encrypted flag bit back to the client.
The total cost is: CKKS classification (variable, depending on model complexity) + 15 programmable bootstraps for the 8-bit GT comparison + 1 programmable bootstrap for the MUX selection. The server learned nothing about the fraud score, nothing about whether the threshold was exceeded, and nothing about whether the transaction was flagged or approved.
This pattern alone covers an enormous range of production use cases. Any system that computes a score and compares it against a threshold -- risk scoring, credit decisioning, anomaly detection, compliance screening, access control -- can use this pattern to enforce policy on encrypted data.
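The fraud-detection flow above can be sketched as follows. This is a plaintext model of the decision logic and its cost accounting, not the encrypted pipeline itself; the action codes FLAG and APPROVE are illustrative assumptions, not part of any H33 API.

```python
# Plaintext model of the encrypted-threshold pattern: an 8-bit GT
# against a public threshold, then a 1-bit MUX selecting the action.

FLAG, APPROVE = 1, 0  # illustrative action encodings

def threshold_decision(score, threshold, n=8):
    """Return (action, bootstrap cost) for one threshold check."""
    exceeded = score > threshold             # GT: 2n-1 = 15 bootstraps at n=8
    action = FLAG if exceeded else APPROVE   # 1-bit MUX: 1 bootstrap
    cost = (2 * n - 1) + 1
    return action, cost
```

In the encrypted version, `score`, `exceeded`, and `action` are all ciphertexts; only `threshold` and the cost are public. The 16-bootstrap total (excluding the CKKS classification stage) matches the accounting in the text.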
Encrypted Argmax: Selecting Among Categories
Argmax is the second most common decision primitive after threshold checks. Given a vector of encrypted scores (one per category), argmax determines which category has the highest score. This is used in document classification (which category does this document belong to?), intent detection (what is the user asking for?), risk tiering (which risk tier does this entity fall into?), and any multi-class decision problem.
Encrypted argmax works as a tournament of GT comparisons. Start with the first two categories. Compare their encrypted scores using GT. Use MUX to select the winner and its index. Compare the winner against the third category. Continue until all categories have been compared.
For N categories at n-bit precision, encrypted argmax requires (N-1) GT comparisons, each costing 2n-1 programmable bootstraps, plus (N-1) MUX operations for value selection and (N-1) additional MUX operations for index tracking. For 10 categories at 8-bit precision: 9 times 15 = 135 bootstraps for GT comparisons, plus MUX overhead, for a total of approximately 150-170 programmable bootstraps.
At approximately one millisecond per bootstrap, encrypted argmax over 10 categories takes about 170 milliseconds on a single core. With parallelism (the tournament has ceil(log2(N)) sequential stages, and within each stage the comparisons are independent), the critical path drops to ceil(log2(10)) = 4 stages of 15 bootstraps each, or roughly 60 milliseconds of wall-clock time.
The server performs this entire computation without learning any of the scores, which category won, or even the relative ordering of the categories. It sends the encrypted winner index and encrypted winning score to the client, which decrypts locally.
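The tournament and its cost accounting can be sketched in plaintext Python. This is illustrative only: the linear scan below mirrors the description in the text, MUX overhead is omitted for clarity, and the depth figure assumes a balanced bracket (same N-1 comparisons, fewer sequential stages).

```python
import math

# Plaintext sketch of encrypted argmax as a tournament of GT + MUX.
# Bootstrap counts follow the article's model: (N-1) GT comparisons
# at 2n-1 bootstraps each; MUX overhead is not counted here.

def argmax_tournament(scores, n=8):
    best_idx, best_val = 0, scores[0]
    comparisons = 0
    for i, v in enumerate(scores[1:], start=1):
        comparisons += 1            # one GT comparison
        if v > best_val:            # MUX keeps running winner + index
            best_idx, best_val = i, v
    gt_cost = comparisons * (2 * n - 1)
    # a balanced bracket uses the same N-1 comparisons, but only
    # ceil(log2 N) sequential stages lie on the critical path
    depth = math.ceil(math.log2(len(scores))) * (2 * n - 1)
    return best_idx, gt_cost, depth
```

For 10 categories at 8-bit precision this reproduces the figures in the text: 135 bootstraps of total GT work, with a critical path of 4 stages times 15 = 60 bootstraps under full parallelism.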
Encrypted MUX Chains: Multi-Level Decisions
Real-world decision logic is rarely a single comparison. It is a chain of decisions, where the outcome of one decision determines the context for the next. "If the fraud score is high AND the transaction amount is above $10,000 AND the destination is a high-risk jurisdiction, then block. If the fraud score is high but the amount is below $10,000, then flag for review. Otherwise, approve."
In encrypted control flow, this chain of decisions is implemented as a cascade of GT comparisons and MUX operations. Each GT comparison produces an encrypted condition bit. Each MUX selects an encrypted intermediate result based on the condition. The final MUX in the chain produces the encrypted action: block, flag, or approve.
The server evaluates all possible paths through the decision chain. For a chain with k binary decisions, the server evaluates 2^k leaf outcomes and uses k levels of MUX to select the correct one. The comparison cost scales linearly with k. The MUX selection tree contains 2^k - 1 MUX operations arranged in k levels; since the MUXes within each level are independent and can be parallelized, the sequential MUX depth is only k -- logarithmic in the number of leaf outcomes.
This is equivalent to evaluating a decision tree, and the cost model is the same. But the framing is different: instead of thinking about "a decision tree," we are thinking about "a chain of encrypted if/else statements." This framing is more natural for developers building policy enforcement logic, where the mental model is procedural code with conditional branches, not a tree data structure.
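The three-rule cascade quoted above can be written as the chain of if/else selections the server would evaluate (all branches computed, one selected by MUX). This is a plaintext sketch: the action codes and thresholds are illustrative assumptions, and the unstated case (high fraud score, high amount, low-risk destination) is assumed to fall through to review.

```python
# Plaintext model of a MUX cascade for the block/flag/approve policy.
# In the encrypted version every comparison result is a ciphertext bit
# and every conditional expression is an encrypted MUX.

BLOCK, FLAG, APPROVE = 2, 1, 0  # illustrative action encodings

def policy(fraud_score, amount, high_risk_dest,
           fraud_threshold=200, amount_threshold=10_000):
    c_fraud = fraud_score > fraud_threshold    # GT comparison
    c_amount = amount > amount_threshold       # GT comparison
    c_dest = high_risk_dest                    # encrypted bit in practice
    # inner MUX: among high-fraud outcomes, pick block vs. flag
    inner = BLOCK if (c_amount and c_dest) else FLAG
    # outer MUX: low fraud score overrides everything with approve
    return inner if c_fraud else APPROVE
```

The two nested conditionals are the two MUX levels; the server evaluates both `inner` candidates and both outer candidates regardless of the (encrypted) condition bits.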
Decision Finality on Ciphertext
One of the key concepts in H33-Agent-Zero's architecture is "decision finality on ciphertext." This phrase encapsulates what encrypted control flow achieves.
In a traditional system, a decision is "final" when an authorized party reviews the data, evaluates the conditions, and commits to an outcome. The finality comes from a human or a system with plaintext access making a judgment call. The decision is final because the decision-maker saw the data and made the call.
In encrypted control flow, the decision is final without anyone seeing the data. The GT comparison was evaluated correctly (this is a mathematical operation, not a judgment call). The MUX selected the correct branch (this is deterministic, given the encrypted condition bit). The output is the unique encrypted result that corresponds to the encrypted inputs and the public decision logic. No party had to see the data to make the decision. The decision is final because the cryptography guarantees correctness.
This concept is central to Agent-Zero's confidence boundary. When the agent evaluates whether to act autonomously or escalate, the decision is made through encrypted GT comparisons on encrypted confidence scores. The outcome is final: the agent either acts or escalates. But the finality is cryptographic, not observational. No one saw the confidence scores. No one saw the comparison results. No one decided to act or escalate. The encrypted control flow produced the correct outcome deterministically.
This is a different kind of trust. Traditional systems require you to trust the decision-maker (they saw the data; did they make the right call?). Encrypted control flow requires you to trust the decision logic (the GT threshold and MUX chain; did you configure them correctly?) and the mathematics (LWE hardness; can someone break the encryption?). The first is a design question with verifiable answers. The second is a foundational assumption shared with all of modern cryptography.
Why Not Use Garbled Circuits?
Garbled circuits (Yao's protocol) also enable computation on private inputs without revealing them. The natural question is: why use TFHE encrypted control flow instead of garbled circuits?
Garbled circuits are one-time-use. Each circuit can be evaluated once. To evaluate the same function on different inputs, you need a new garbled circuit. This means the circuit must be regenerated (and transmitted) for every evaluation. For a compliance engine processing millions of transactions per day, this is prohibitively expensive in communication bandwidth.
TFHE evaluation keys are reusable. Once the client generates the bootstrapping key (BSK) and key-switching key (KSK), the server can evaluate any circuit any number of times on any number of different encrypted inputs. The key setup cost is amortized over all evaluations. For high-throughput applications, this amortization makes TFHE dramatically more efficient than garbled circuits.
Additionally, garbled circuits require interaction between the parties during circuit generation (oblivious transfer for the input selection). TFHE is non-interactive after key setup: the client encrypts, the server evaluates, the client decrypts. There is no back-and-forth during computation. This makes TFHE suitable for asynchronous architectures where the client may not be online during evaluation.
Finally, garbled circuits provide security against semi-honest adversaries by default. Achieving security against malicious adversaries (where the garbler might cheat) requires additional mechanisms (cut-and-choose, authenticated garbling) that increase cost substantially. TFHE preserves confidentiality against a malicious server by default: no matter what the server computes, it learns nothing, because it cannot decrypt any intermediate or final values. Correctness of the returned result is a separate property -- one that H33 addresses through attestation rather than through the encryption itself.
Why Not Use Multi-Party Computation?
Multi-party computation (MPC) is another approach to computing on private data. In MPC, multiple parties hold shares of the input data and jointly compute a function without any party learning the others' inputs. Secret sharing-based MPC protocols (like SPDZ or MASCOT) can evaluate arbitrary circuits with low per-gate cost.
The challenge with MPC for encrypted control flow is the trust model. MPC requires multiple non-colluding parties to hold shares of the data. If a sufficient number of parties collude, they can reconstruct the plaintext. This means the security of MPC depends on an assumption about the behavior of the parties: they must not collude. This is an organizational guarantee, not a mathematical one.
TFHE encrypted control flow requires only one trust assumption: the client keeps its secret key secret. There are no multiple parties to coordinate, no collusion thresholds to maintain, no honest-majority assumptions to enforce. The client trusts itself; everyone else is untrusted. This is the simplest possible trust model, and it is the correct trust model for scenarios where a single entity owns the data and wants to evaluate policies on it without trusting the evaluation server.
MPC has its place: it is well-suited for scenarios where multiple parties each contribute private inputs to a joint computation (secure auctions, private set intersection, collaborative analytics). But for the single-client, untrusted-server model that dominates enterprise data protection -- where a bank wants to evaluate compliance rules on its customers' data without the compliance engine seeing the data -- TFHE encrypted control flow is the right tool.
Composing CKKS and TFHE: The Scheme Transition
H33-Agent-Zero uses two different FHE schemes in a single pipeline, and the transition between them is where encrypted control flow begins.
CKKS handles the "soft" computation: neural network inference, linear algebra, polynomial evaluation. CKKS operates on encrypted floating-point vectors, supports SIMD parallelism with up to 4,096 slots per ciphertext, and is optimized for the kind of approximate arithmetic that machine learning models require. The output of the CKKS stage is an encrypted vector of class scores or feature representations.
TFHE handles the "hard" decisions: threshold comparisons, argmax, conditional selection, policy enforcement. TFHE operates on encrypted bits, supports exact Boolean gate evaluation, and is optimized for the kind of precise comparison logic that decision-making requires.
The transition from CKKS to TFHE involves discretizing the encrypted class scores. The continuous encrypted values from CKKS are converted to fixed-precision encrypted integers suitable for TFHE gate evaluation. This conversion introduces a quantization step that maps floating-point scores to n-bit integer representations. The quantization precision is chosen to preserve the decision-relevant information (which category has the highest score, whether a score exceeds a threshold) while keeping the bit width manageable for TFHE gate costs.
At 8-bit precision, the quantization preserves 256 distinct levels per score. For most classification tasks, this is far more precision than needed to distinguish between classes. A well-trained model typically produces class scores that differ by more than 1/256 of the score range, so 8-bit quantization does not change the classification outcome.
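The discretization step can be sketched as a linear quantizer. This is an illustrative assumption about the mapping, not the production transition (which operates on ciphertexts): it shows why an order-preserving map onto 8-bit integers leaves threshold and argmax outcomes unchanged whenever scores differ by more than one quantization level.

```python
# Sketch of the CKKS-to-TFHE discretization: map a floating-point
# score in a known range [lo, hi] onto an n-bit unsigned integer.

def quantize(score, lo, hi, n=8):
    levels = (1 << n) - 1                   # 255 distinct steps at 8 bits
    clamped = min(max(score, lo), hi)       # keep the score in range
    return round((clamped - lo) / (hi - lo) * levels)
```

Because the mapping is monotone, `quantize(a) > quantize(b)` whenever `a` exceeds `b` by more than (hi - lo)/255, so GT comparisons on the quantized values reproduce the comparisons on the original scores.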
The scheme transition is the moment where encrypted compute becomes encrypted control flow. Before the transition, we have encrypted numbers that we can add and multiply. After the transition, we have encrypted decisions that route encrypted data through encrypted policies. That is the qualitative leap.
The Cost Model
Encrypted control flow has a clear, predictable cost model that makes capacity planning straightforward.
GT comparison (n-bit): 2n-1 programmable bootstraps. At 8-bit: 15 programmable bootstraps.
MUX (1-bit condition, selecting between m-bit values): m programmable bootstraps (one MUX per output bit).
Threshold check (n-bit): One GT comparison = 2n-1 programmable bootstraps.
Argmax (N categories, n-bit scores): (N-1)(2n-1) programmable bootstraps for GT comparisons, plus MUX overhead.
Decision tree (d levels, n-bit features): Sum of comparisons at each level times 2n-1, plus MUX overhead per level.
Per-bootstrap latency: Approximately 1 millisecond on current hardware.
This cost model lets you estimate the total latency of any encrypted control flow pipeline before writing a single line of code. Count the GT comparisons. Count the MUX operations. Multiply by the per-bootstrap cost. Factor in parallelism (independent operations at the same level can run simultaneously). The result is a reliable latency estimate.
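The counting exercise above is mechanical enough to script. This is a back-of-envelope estimator under the article's stated cost model; the one-millisecond bootstrap latency is the approximate figure from the text, and the parallelism factor is an input you supply.

```python
# Latency estimator for encrypted control flow pipelines, using the
# article's cost model: GT = 2n-1 bootstraps, MUX = one per output bit.

BOOTSTRAP_MS = 1.0  # approximate per-bootstrap latency (from the text)

def gt_cost(n_bits):
    return 2 * n_bits - 1

def mux_cost(out_bits):
    return out_bits

def argmax_cost(n_categories, n_bits):
    # (N-1) GT comparisons; MUX overhead can be added via mux_cost
    return (n_categories - 1) * gt_cost(n_bits)

def latency_ms(total_bootstraps, parallel_factor=1):
    """Wall-clock estimate given how many bootstraps run concurrently."""
    return total_bootstraps * BOOTSTRAP_MS / parallel_factor
```

For example, a 10-category, 8-bit argmax gives 135 bootstraps of GT work, or about 135 ms single-core before accounting for MUX overhead and parallelism.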
H33 meters encrypted control flow at the individual gate level. Each programmable bootstrap consumed is tracked and billed. This gives customers precise visibility into their usage and the ability to optimize their decision logic for cost efficiency.
From Theory to Production: Agent-Zero's Confidence Boundary
H33-Agent-Zero's confidence boundary is the production embodiment of encrypted control flow. Here is how it works in practice.
An AI agent receives a task that involves sensitive data: classify a document, evaluate a risk score, enforce a compliance policy. The agent does not see the data. Instead, the data is encrypted on the client device and sent to the Agent-Zero evaluation pipeline.
The pipeline computes an encrypted confidence score using CKKS (how confident is the model in its classification?). The encrypted confidence score is then compared against the agent's confidence threshold using an 8-bit GT comparison in TFHE. If the confidence exceeds the threshold, the agent acts autonomously on the encrypted classification result. If the confidence is below the threshold, the task is escalated to a human reviewer.
The agent never sees the data. The agent never sees the confidence score. The agent never knows whether it acted autonomously or escalated. It simply receives an encrypted action command (computed by the MUX stage of the pipeline) and executes it. The decision was made by encrypted control flow, not by any party with plaintext access.
This is decision finality on ciphertext in practice. The agent acted correctly. The decision was driven by the encrypted data and the public confidence threshold. The correctness is verifiable through H33's 74-byte post-quantum attestation. And no party at any point saw the underlying data.
Encrypted control flow is the technology that makes this possible. GT is the primitive. MUX is the conditional. TFHE is the substrate. And Agent-Zero is the product that puts it all in the hands of developers building the next generation of privacy-preserving systems.