Secure MPC vs FHE: When to Use Which
Two technologies have emerged as the leading approaches to computing on private data without exposing it: secure multiparty computation (MPC) and fully homomorphic encryption (FHE). At a high level, both promise the same outcome: you can process sensitive information while keeping it private. But the engineering tradeoffs, deployment models, and practical limitations are fundamentally different. Choosing the wrong one for your use case does not just cost performance; it can compromise the security properties you were trying to achieve in the first place.
This guide breaks down when each approach makes sense, where they overlap, and why most real-world deployments end up using elements of both.
The Fundamental Models
MPC and FHE solve the private computation problem from different starting points. Understanding these starting points is essential to making the right architectural choice.
Secure Multiparty Computation involves two or more parties who each hold private inputs. They want to compute a function of their combined inputs without any party learning anything about the other parties' data beyond what is revealed by the output. The classic example is the millionaires' problem: two people want to know who is richer without revealing their actual net worth. MPC solves this through interactive protocols where the parties exchange encrypted messages, with the protocol mathematically guaranteeing that no party learns more than the function output.
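The flavor of this guarantee can be illustrated with additive secret sharing, the primitive behind many MPC protocols. This is a toy Python sketch under illustrative parameters: each share is individually uniformly random and reveals nothing, yet the servers can jointly compute on the shares. (Solving the millionaires' comparison itself requires additional machinery, such as secure comparison gates, built on top of primitives like this.)

```python
import secrets

# Toy additive secret sharing over a prime field; real MPC protocols build
# secure comparison and other gates on top of primitives like this.
Q = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(value, n=3):
    """Split value into n shares, each individually uniformly random."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

def reconstruct(parts):
    return sum(parts) % Q

# Alice and Bob send one share of their private input to each of three
# servers; each server adds the shares it holds, and only the combined
# result (here, the sum of the inputs) is ever revealed.
alice, bob = 1_000_000, 750_000
a_shares, b_shares = share(alice), share(bob)
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 1750000
```

Note that no single server ever holds enough information to recover either input; reconstruction requires all shares.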
Fully Homomorphic Encryption allows a single party to encrypt data and then send it to an untrusted server for computation. The server operates directly on the ciphertext, performing additions and multiplications that correspond to operations on the underlying plaintext. The result, when decrypted by the data owner, is identical to what would have been produced by computing on the plaintext directly. The server never sees the data, never holds the decryption key, and learns nothing about the input or the result.
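The "compute on ciphertext" idea can be sketched with Paillier, a classical additively homomorphic scheme. It is not fully homomorphic (it supports only addition on plaintexts), and the key size below is a toy chosen for readability, but the structural point is the same: the server manipulates ciphertexts and never touches a key or a plaintext.

```python
import math, secrets

# Toy Paillier encryption: additively homomorphic (NOT fully homomorphic),
# with key sizes far too small to be secure; a sketch of the principle only.
p, q = 104723, 104729        # two small primes, illustrative only
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                    # the standard simple choice of generator

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * pow(lam, -1, n) % n

# The untrusted server multiplies ciphertexts; underneath, the plaintexts add.
c = (encrypt(20) * encrypt(22)) % n2
print(decrypt(c))  # 42
```

FHE schemes such as BFV extend this to both addition and multiplication, which is what makes arbitrary computation on ciphertexts possible.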
The distinction is structural. MPC is a multi-party protocol requiring communication. FHE is a single-party encryption scheme requiring computational power but no communication during the computation phase itself.
Communication vs Computation: The Core Tradeoff
The most important practical difference between MPC and FHE is the resource they consume most heavily. MPC is communication-bound. FHE is computation-bound.
In an MPC protocol, the parties must exchange messages during every step of the computation. For a garbled circuit protocol (the most common two-party MPC approach), one party generates an encrypted version of the circuit representing the function, and the other party evaluates it. The amount of data exchanged is proportional to the size of the circuit. For complex functions, this can mean gigabytes of network traffic. Even with optimizations like OT extension and free XOR gates, the communication overhead remains the primary bottleneck for most practical deployments.
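A single garbled gate can be sketched in a few lines. This toy construction uses a hash-based one-time pad with a zero-byte validity tag rather than the point-and-permute optimization used in practice, but it shows the mechanism, and the four-row table is exactly the per-gate data that must be transmitted, which is why communication scales with circuit size.

```python
import hashlib, os, random

def H(a, b):
    return hashlib.sha256(a + b).digest()  # 32-byte pad

def xor(x, y):
    return bytes(i ^ j for i, j in zip(x, y))

# One random 16-byte label per wire per truth value; the evaluator only
# ever sees one label per wire and cannot tell which bit it encodes.
labels = {w: (os.urandom(16), os.urandom(16)) for w in ("a", "b", "out")}

# Garble AND: encrypt the correct output label under each input-label pair,
# appending 16 zero bytes as a validity tag so the evaluator can recognize
# the single row it is able to decrypt.
table = []
for va in (0, 1):
    for vb in (0, 1):
        pad = H(labels["a"][va], labels["b"][vb])
        table.append(xor(pad, labels["out"][va & vb] + b"\x00" * 16))
random.shuffle(table)  # hide which row corresponds to which inputs

def evaluate(la, lb):
    pad = H(la, lb)
    for row in table:
        cand = xor(pad, row)
        if cand[16:] == b"\x00" * 16:  # valid tag: this is our row
            return cand[:16]
    raise ValueError("no decryptable row")

# Evaluating with labels for a=1, b=0 yields the label for AND(1, 0) = 0.
assert evaluate(labels["a"][1], labels["b"][0]) == labels["out"][0]
```

A realistic circuit contains millions of such gates, so even at a few dozen bytes per gate the garbled circuit that must cross the network quickly reaches gigabytes.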
FHE has no communication during computation. Once the data is encrypted and sent to the server, all processing happens locally on the server. The bottleneck is the computational cost of homomorphic operations. A single homomorphic multiplication is orders of magnitude more expensive than its plaintext equivalent. The exact overhead depends on the FHE scheme (BFV, CKKS, TFHE), the parameter set, and the implementation quality, but the overhead is substantial and must be managed carefully through batching and algorithm design.
This tradeoff has a direct architectural implication. If your parties are on the same local network with high-bandwidth, low-latency connections, MPC's communication overhead is manageable. If your computation must happen in a remote cloud with variable network conditions, FHE's independence from communication during computation is a significant advantage.
Threat Models: Who Are You Protecting Against?
The threat models for MPC and FHE differ in important ways that affect which one is appropriate for a given deployment.
MPC protects against honest-but-curious (semi-honest) adversaries in its basic form: parties who follow the protocol correctly but try to learn additional information from the messages they receive. Protecting against malicious adversaries (who deviate from the protocol) is possible but significantly more expensive, typically adding substantial overhead. Most practical MPC deployments assume semi-honest adversaries and rely on legal agreements or institutional incentives to ensure protocol compliance.
FHE protects against a fundamentally different threat: an untrusted computation provider. The server performing the computation never has access to the decryption key and learns nothing regardless of its behavior. There is no distinction between semi-honest and malicious in the FHE model because the server cannot deviate from the protocol in any meaningful way that compromises data privacy. It either computes the correct function on the ciphertext (producing the right answer) or it does not (producing garbage). The data remains encrypted either way.
This makes FHE strictly stronger than MPC against a single adversarial server, but it cannot provide the multi-input functionality that MPC enables. If you need multiple data owners to contribute private inputs to a joint computation, pure FHE does not solve the problem because someone must hold the decryption key.
Use Cases Where MPC Wins
MPC is the right choice when the computation inherently involves multiple independent data owners who do not trust each other and will not share a decryption key.
Private set intersection (PSI) is the canonical MPC application. Two organizations, say a bank and a regulatory agency, each have a list of entities. They want to know which entities appear on both lists without revealing the non-overlapping entries. MPC protocols for PSI are highly optimized and can handle very large datasets with reasonable performance. FHE cannot solve this problem without one party encrypting their entire set under a key that the other party can query against, which requires a trusted key holder and changes the trust model entirely.
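One classic PSI construction uses commutative Diffie-Hellman-style blinding. The sketch below is a toy: production protocols use elliptic curves, shuffle the returned values, and hash the final elements, and the party names and modulus here are purely illustrative.

```python
import hashlib, secrets

# Toy Diffie-Hellman-based PSI. Each party blinds hashed items with a
# secret exponent; because exponentiation commutes, items present in both
# sets collide after double blinding, and nothing else matches.
P = 2**255 - 19  # a well-known prime, used here as a toy modulus

def h(item):
    # Hash each item into the multiplicative group mod P.
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

bank = {"acme corp", "globex", "initech"}
regulator = {"globex", "initech", "umbrella"}

ka = secrets.randbelow(P - 2) + 1   # bank's secret exponent
kb = secrets.randbelow(P - 2) + 1   # regulator's secret exponent

# Round 1: each side blinds its own hashed items with its secret exponent.
bank_blind = [pow(h(x), ka, P) for x in bank]
reg_blind = [pow(h(y), kb, P) for y in regulator]

# Round 2: each side exponentiates the other's values; since
# (h^ka)^kb == (h^kb)^ka, matching items collide and non-matches do not.
bank_double = {pow(v, kb, P) for v in bank_blind}   # computed by regulator
reg_double = {pow(v, ka, P) for v in reg_blind}     # computed by bank

print(len(bank_double & reg_double))  # 2: globex and initech
```

Neither side ever sees the other's raw items, only blinded group elements; the intersection is the only information that emerges.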
Secure auctions are another natural fit. Multiple bidders submit encrypted bids, and the auction protocol determines the winner without revealing the losing bids. Each bidder holds their own private input, and no single party should learn all bids. MPC handles this naturally through secret-sharing-based protocols where the bid values are split across multiple servers.
Multi-institution fraud detection involves multiple banks sharing information about suspicious transactions without revealing their customers' legitimate activity. Each bank holds private data, and the goal is to detect patterns across institutions without centralizing the data. The multi-party structure and the sensitivity of each party's contribution make MPC the natural approach for these collaborative intelligence scenarios.
Collaborative machine learning lets multiple organizations train a model on their combined datasets without sharing raw data. Each party contributes gradients or model updates computed on their private data, and MPC protocols ensure that no party can reconstruct another's training data from the shared updates. Federated learning with MPC-based aggregation is becoming increasingly common in healthcare and financial services.
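Secure aggregation of model updates can be sketched with pairwise additive masking, one common technique. This is a toy version with hypothetical party names: real systems derive the masks from key agreement plus a PRG and handle client dropouts.

```python
import random

# Toy pairwise-masking secure aggregation over a prime field. Each ordered
# pair of clients agrees on a random mask that one side adds and the other
# subtracts, so the masks cancel only in the global sum.
Q = 2**31 - 1  # prime modulus for the toy field

updates = {"bank_a": 12, "bank_b": 7, "bank_c": 5}  # private model updates

names = sorted(updates)
masks = {(i, j): random.randrange(Q) for i in names for j in names if i < j}

def masked(name):
    """The only value this client ever sends to the aggregation server."""
    v = updates[name]
    for (i, j), m in masks.items():
        if name == i:
            v = (v + m) % Q
        elif name == j:
            v = (v - m) % Q
    return v

# The server sees only masked values, each individually uniformly random,
# yet their sum equals the sum of the true updates.
total = sum(masked(n) for n in names) % Q
print(total)  # 24
```

The server learns the aggregate (24) but none of the individual contributions, which is exactly the property federated learning needs.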
Use Cases Where FHE Wins
FHE is the right choice when a single data owner needs to outsource computation to an untrusted server, or when the computation must happen without any interaction after the initial data submission.
Encrypted database queries allow a client to search an encrypted database on a remote server without the server learning what was queried or what was returned. The client encrypts the query, the server processes it homomorphically against the encrypted data, and the encrypted result is returned. This is a single-party problem: the data owner controls both the query and the data, and the server is purely a computation provider with no need to see anything in plaintext.
Biometric authentication is where H33 focuses much of its FHE engineering. A user enrolls a biometric template (face, fingerprint, iris) that is encrypted using FHE before it ever leaves the device. When the user authenticates, the fresh biometric capture is also encrypted, and the matching computation happens entirely in the encrypted domain. The server never sees the biometric data in plaintext at any point in the process. This is fundamentally a single-party problem: one user, one template, one match. MPC would require the user and the server to engage in an interactive protocol for every authentication, which adds latency and complexity that are impractical at scale.
Cloud-based machine learning inference allows a data owner to submit encrypted inputs to a model hosted on an untrusted cloud. The inference runs homomorphically, and the encrypted prediction is returned. The model owner never sees the input data, and the data owner receives the prediction without revealing their data. FHE's non-interactive nature is essential here because the model owner does not want to participate in an interactive protocol for every inference request from every client.
Encrypted analytics on sensitive datasets (medical records, financial data, classified intelligence) lets a cloud provider run aggregation, statistical, or machine learning computations without accessing the underlying data. The data stays encrypted throughout the entire computation pipeline, and only the data owner can decrypt the results. This is particularly valuable for regulated industries where data residency and access controls are legally mandated.
Performance Comparison: Practical Numbers
Abstract comparisons are useful, but architects need concrete performance data to make deployment decisions. The performance characteristics of MPC and FHE are different enough that the right choice often becomes obvious once you consider the specific workload.
For MPC, state-of-the-art garbled circuit protocols can evaluate circuits at rates on the order of millions of gates per second over a LAN. Over a WAN with typical round-trip latency, performance drops considerably due to bandwidth limits and round-trip delays. Secret-sharing-based MPC can be faster for arithmetic circuits but requires a preprocessing phase that generates correlated randomness, which itself can be expensive.
For FHE, performance depends heavily on the scheme and the operation. H33's BFV implementation processes 32 biometric authentications in a single batched ciphertext operation in approximately 943 microseconds on production hardware. That works out to roughly 30 microseconds of FHE time per authentication. The batching capability of BFV and CKKS schemes is what makes FHE practical for high-throughput workloads: you pack multiple plaintexts into a single ciphertext and process them all with one set of operations.
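The per-authentication figure follows directly from the batch numbers. This is a quick sanity check that treats the quoted 943 microseconds as the only cost; a real deployment also pays for encryption, transport, and decryption.

```python
# Back-of-the-envelope arithmetic from the batching figures quoted above.
batch_latency_us = 943   # one batched ciphertext operation, microseconds
slots = 32               # authentications packed into one ciphertext

per_auth_us = batch_latency_us / slots
throughput_per_s = slots / (batch_latency_us / 1_000_000)
print(round(per_auth_us, 1), round(throughput_per_s))  # 29.5 33934
```

Batching is what turns a scheme with heavy per-operation cost into one with competitive per-item cost: the amortized figure improves linearly with the number of slots used.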
TFHE (the third major FHE family) excels at boolean circuits and operations with small integer precision. H33's TFHE implementation achieves hundreds of operations per second for gate-level computations, making it suitable for precise, bit-level processing like encrypted comparison operations and exact arithmetic.
Hybrid Approaches: Using Both
In practice, many advanced systems combine MPC and FHE to get the benefits of both. This is not a compromise; it is an engineering optimization that leverages each technology where it is strongest.
One common pattern is to use FHE for bulk computation and MPC for decryption. In a multi-key FHE scheme, multiple parties each encrypt their data under their own keys. The server performs homomorphic computations on all the ciphertexts, producing an encrypted result. But decrypting the result requires all parties to participate in an MPC protocol (because no single party holds the combined decryption key). This gives you FHE's non-interactive bulk computation with MPC's multi-party input capability.
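The distributed-decryption pattern can be sketched with a toy ElGamal scheme whose secret key is additively shared. Multi-key FHE uses lattice schemes rather than discrete logarithms, but the structure is analogous: each party contributes a partial decryption, and the full key never exists in one place. All parameters here are illustrative.

```python
import secrets

# Toy ElGamal with an additively shared secret key. Each party holds one
# share; the public key and the decryption are computed from per-party
# contributions without ever assembling the full key.
P = 2**255 - 19   # prime modulus (illustrative choice)
G = 5             # group element used as the base

shares = [secrets.randbelow(P - 1) for _ in range(3)]  # one per party

# Public key: product of per-party contributions G^share_i = G^(sum shares).
pk = 1
for s in shares:
    pk = (pk * pow(G, s, P)) % P

# Encrypt: standard ElGamal pair (G^r, m * pk^r).
m = 42
r = secrets.randbelow(P - 1)
c1, c2 = pow(G, r, P), (m * pow(pk, r, P)) % P

# Distributed decryption: party i computes the partial c1^share_i; the
# product of all partials equals c1^(sum shares) = pk^r, unmasking m.
mask = 1
for s in shares:
    mask = (mask * pow(c1, s, P)) % P
recovered = (c2 * pow(mask, -1, P)) % P
print(recovered)  # 42
```

Every share must participate, so no subset of parties (and certainly not the server) can decrypt on its own, which is the property that makes the hybrid trust model work.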
Another pattern uses MPC to convert between FHE schemes. Sometimes a computation starts in BFV (for integer arithmetic), needs to switch to CKKS (for approximate arithmetic in a neural network layer), and then back to BFV. The scheme-switching operation can be implemented as a lightweight MPC protocol between the data owner and the server, which is more efficient than doing the conversion purely homomorphically.
H33 uses a hybrid approach for certain multi-party biometric verification scenarios. The individual biometric matching is done purely in FHE (no interaction required), but the aggregation of match results across multiple enrollments from different identity providers uses an MPC protocol to ensure that no single provider learns the full match profile of any individual user.
Quantum Resistance Considerations
Both MPC and FHE can be made post-quantum secure, but the paths differ in important ways.
MPC protocols that use oblivious transfer (OT) as their core primitive often rely on the hardness of problems like Diffie-Hellman or factoring. These are vulnerable to quantum computers. Post-quantum OT constructions exist (based on lattice problems or other quantum-resistant assumptions), but they are significantly less efficient than classical OT. The communication overhead of MPC protocols increases substantially when using PQ-secure OT variants.
FHE based on lattice problems (BFV, CKKS, and most practical schemes) is inherently post-quantum secure because the MLWE and RLWE problems that underlie these schemes are believed to be hard for quantum computers. You get quantum resistance as a natural consequence of using lattice-based FHE. This is one of FHE's underappreciated advantages: adopting FHE for privacy also gives you quantum resistance at no additional cost or performance penalty.
H33's entire stack is built on lattice-based FHE, which means every biometric authentication, encrypted search, and private computation is quantum-resistant by construction. This is not an add-on feature; it is an inherent property of the mathematical foundations we build on.
Making the Decision
The decision framework is straightforward once you identify your deployment structure. If multiple parties each hold private data and need to compute a joint function: use MPC. If a single data owner needs to outsource computation to an untrusted server: use FHE. If you need both multi-party input and heavy computation: use a hybrid approach. If post-quantum security is a requirement: prefer lattice-based FHE for the computational core.
The era of treating MPC and FHE as competing technologies is ending. The future of private computation is hybrid architectures that use each technology where it is strongest, orchestrated by platforms that abstract the complexity away from application developers and let them focus on their actual business problems rather than cryptographic engineering details.
Build with Encrypted Computation
H33 provides production-grade FHE and hybrid MPC capabilities through a single API.
Get API Key Read the Docs