The Shift Munich Re Identified
In its 2025 Cyber Insurance Market Outlook, Munich Re made a statement that should have reverberated through every underwriting department in the industry: cyber underwriting has fundamentally shifted from actuarial modeling to technical verification. The implication is stark. Actuarial models work when the input data is reliable. In property insurance, the building either has a sprinkler system or it doesn't — an inspector can verify it. In cyber insurance, the organization either has MFA deployed across all accounts or it doesn't — but nobody inspects. The underwriter asks. The applicant answers. The policy is bound on trust.
This trust-based model worked when cyber was a niche line with small limits and low claim frequency. It does not work when the cyber line represents billions in annual written premium, when single claims regularly exceed $10 million, and when the gap between what applicants report and what actually exists in their environments is widening every renewal cycle. Munich Re's observation is not theoretical. It is a recognition that the data feeding underwriting models is systematically unreliable, and that the industry needs a different approach to collect verifiable technical state from policyholders.
The timing matters. The Insurance Tech Conference in Chicago on June 10–11, 2026, will feature multiple sessions on automated underwriting verification. The conversation is accelerating because the losses are accelerating. Every claim that originates from a control that was reported but not implemented erodes the actuarial foundation the entire market is built on.
The Fraud Problem: 58% BEC and Funds Transfer
Coalition's 2026 Cyber Claims Report provides the data that quantifies the problem. Fifty-eight percent of all cyber insurance claims in their book are now attributable to business email compromise (BEC) and funds transfer fraud (FTF). This is not a marginal category. This is the majority of claims, and the number has been climbing for three consecutive years.
BEC attacks succeed when an attacker compromises a business email account and uses it to redirect payments, exfiltrate sensitive information, or impersonate executives. The primary defense against BEC is multi-factor authentication. Every cyber insurance application in the market asks whether MFA is enabled on email accounts. Every applicant checks the box. And yet, 58% of claims are BEC and FTF.
The disconnect is not accidental. It exists because the question on the application and the reality in the environment are two different things. MFA might be enabled on the primary domain but not on shared mailboxes. It might be configured for administrator accounts but not for finance department accounts that process wire transfers. It might have been enabled at the time of application but disabled three months later when an employee complained about login friction. The application captures a moment in time. The claim happens in real time. The gap between them is where the losses live.
The Coalition report also reveals that the average BEC claim exceeds $300,000 when funds transfer fraud is involved. When you multiply that average by the volume — thousands of claims per year across the industry — you begin to understand why loss ratios have not improved as much as the investment in preventive controls would suggest. The controls were reported. They were not verified. The losses materialized in the gap.
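The scale of that multiplication is easy to sketch. The claim count below is a hypothetical assumption for illustration; only the $300,000 average comes from the report cited above:

```python
# Illustrative expected-loss arithmetic. The average claim figure is from the
# text; the annual claim count is a hypothetical assumption, not Coalition data.
avg_bec_claim = 300_000        # average BEC/FTF claim cited above
claims_per_year = 5_000        # hypothetical industry-wide claim count
aggregate_loss = avg_bec_claim * claims_per_year
print(f"${aggregate_loss:,}")  # $1,500,000,000
```

Even at conservative assumed volumes, the category produces losses in the billions, which is why it dominates loss ratios.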
How Self-Reporting Creates Liability for Both Sides
Self-reported controls create a legal liability trap that harms both the policyholder and the carrier. For the policyholder, the application is a signed document. When it states that MFA is enabled across all user accounts and a BEC claim reveals that MFA was disabled on the compromised account, the carrier has grounds for claim denial based on material misrepresentation. The policyholder paid premiums for coverage they may never receive because the application contained an inaccuracy that may have been unintentional.
This is not a hypothetical scenario. Claim denials based on application discrepancies are increasing across the market. Carriers are investing in post-incident forensic analysis specifically to identify gaps between reported controls and actual state at the time of incident. When they find a gap — and they frequently do — the claim is denied or settled at a fraction of the requested amount. The policyholder discovers at the worst possible moment that the application they signed a year ago is now evidence against them.
For carriers, self-reported data creates a different problem: adverse selection masked by optimistic reporting. When every applicant reports "yes" to every control question, the underwriting model cannot differentiate between organizations that genuinely have comprehensive MFA and organizations that have partial MFA with gaps in critical accounts. The premium is the same for both. The claims come disproportionately from the latter. The carrier's loss ratio deteriorates not because they priced risk incorrectly in theory, but because the input data was wrong in practice.
There is also regulatory exposure. As state insurance departments increase scrutiny of claim denial practices, carriers face pressure to either verify the data they collect or stop using it as a basis for denial. You cannot ask a question, accept an unverified answer, bind a policy, collect premiums for a year, and then deny a claim because the answer turned out to be wrong — without eventually facing regulatory pushback. The model is unsustainable from every direction.
The Questionnaire Problem Is Structural
The cyber insurance application has evolved significantly over the past five years. Early applications were simple checklists. Modern applications from carriers like Beazley, AXA XL, and Tokio Marine can run to 15 pages with hundreds of questions covering network segmentation, endpoint detection, backup architecture, privileged access management, incident response testing, and vendor risk management. The level of detail has increased dramatically.
But the fundamental architecture has not changed. The application is still a questionnaire. It is still filled out by a human being. It is still completed at a point in time, typically 30 to 60 days before policy inception. And it is still unverified. The person completing the application may be a CISO who genuinely understands the environment, or it may be a broker's assistant working from notes provided by the IT director, or it may be a CFO who checked with the managed service provider and got a verbal "yes, we have that."
None of these scenarios produce reliable technical data. Even when the CISO completes the application personally, they are answering based on their understanding of the environment at that moment. They may not know that the Azure AD conditional access policy has an exception for the CFO's account. They may not know that the backup retention policy was changed from 90 days to 30 days to save storage costs. They may not know that a subsidiary acquired six months ago is running a separate email system without MFA. The application asks questions that assume centralized visibility. Modern IT environments do not provide centralized visibility by default.
The result is a systematic bias toward overstatement. Nobody intentionally lies on an insurance application — the penalties for fraud are severe. But the incentive structure rewards optimistic answers. Saying "yes" to more controls results in lower premiums. Saying "no" or "partial" triggers follow-up questions, supplemental documentation requirements, and potentially higher rates or coverage restrictions. The rational applicant interprets ambiguous questions favorably. "Do you have MFA?" becomes "yes" even when MFA covers 80% of accounts but not the shared service accounts that process vendor payments.
What Technical Verification Actually Requires
When Munich Re says underwriting must shift to technical verification, what does that mean in practice? It means the underwriter needs to know the actual state of controls in the policyholder's environment — not what the policyholder says, but what the systems report. This requires three capabilities that the traditional application process does not provide.
First, direct connector access to the policyholder's security stack. The verification system needs to query the identity provider (Azure AD, Okta, Google Workspace) to confirm MFA enrollment status across all accounts. It needs to query the endpoint detection platform (CrowdStrike, SentinelOne, Microsoft Defender) to confirm deployment coverage. It needs to query the backup system to confirm retention policies and immutability settings. This is not a scan or a penetration test. It is an API-based read of configuration state from the authoritative systems.
Second, continuous monitoring rather than point-in-time assessment. A control that is verified at binding is only useful if it remains in place throughout the policy period. MFA can be disabled. EDR agents can be uninstalled. Backup schedules can be changed. The verification must be ongoing, with alerts when a verified control degrades below the attested threshold. An annual questionnaire captures state once per year. Controls degrade in days.
Third, cryptographic attestation of the verified state. The verification results must be tamper-evident. If the system reports that MFA is enabled across 100% of accounts at 2:14 PM on April 28, 2026, that report must be cryptographically signed so that it cannot be altered after the fact. This matters for claims adjudication. When a claim occurs, both parties need an immutable record of what the controls looked like at the time of the incident — not what anyone remembers or reconstructs from logs.
How HATS Replaces Trust with Proof
The HATS Terminal is designed to solve precisely this problem. It connects directly to the policyholder's security tools through 12 pre-built connectors — identity providers, endpoint detection, email security, backup systems, vulnerability scanners, and cloud security posture management tools. The setup takes three clicks: authorize the connector, select the tenant, confirm. No agents to install. No network changes. No firewall rules.
Once connected, the Terminal reads configuration state from each tool and maps it to standardized security controls. MFA enrollment percentage is derived directly from the identity provider's API. EDR coverage percentage is derived directly from the endpoint platform's API. Backup retention and immutability are derived directly from the backup vendor's API. The policyholder does not answer questions about these controls. The controls are observed, measured, and attested automatically.
Each attestation is cryptographically signed with post-quantum algorithms (ML-DSA-65, FALCON-512). The signature binds the control state to a specific timestamp, connector version, and tenant identifier. The attestation cannot be altered without invalidating the signature. This produces an evidence chain that is admissible, immutable, and independently verifiable. When a claim occurs six months after binding, the carrier does not need to rely on the policyholder's recollection or forensic reconstruction. The attested state at the time of the incident is on record.
The continuous monitoring component runs at configurable intervals — daily, weekly, or on-demand. If MFA coverage drops below the attested threshold (for example, because a new employee was onboarded without MFA), the system generates an alert. The broker and the policyholder can remediate before the gap becomes a claim. The carrier can adjust risk posture in real time rather than discovering the gap during post-incident forensics when it is too late to prevent the loss.
The Underwriter's New Decision Framework
For underwriters, cryptographic attestation changes the decision framework fundamentally. Instead of evaluating an application that may or may not reflect reality, the underwriter evaluates a verified dataset that was generated by direct connector access to the policyholder's systems. The dataset is timestamped, signed, and cannot be altered by the policyholder.
This enables several capabilities that the questionnaire model cannot support. First, the underwriter can differentiate between organizations that report 100% MFA and actually have 100% MFA versus organizations that report 100% MFA and have 85% MFA with gaps in shared accounts. The premium can reflect the actual risk, not the reported risk.

Second, the underwriter can condition coverage on continuous attestation rather than point-in-time application. If MFA coverage drops below 95%, the carrier is notified and can adjust terms mid-policy.

Third, the underwriter can streamline the application process dramatically. If 80% of the questions on a typical cyber application can be answered by direct connector queries, the application shrinks from 15 pages to 3. The remaining questions cover organizational and procedural controls that require human attestation — incident response testing cadence, board reporting frequency, vendor due diligence processes.
The efficiency gains are significant. A typical cyber insurance submission today involves 2–4 weeks of back-and-forth between the broker and the underwriter as questions are clarified, supplemental information is requested, and coverage terms are negotiated. With verified connector data available at submission, the underwriter has the technical state they need immediately. The decision cycle compresses from weeks to days.
The Broker Advantage
For cyber insurance brokers, the shift from self-reported to verified controls represents a competitive opportunity. The broker who can deliver verified policyholder state to the underwriter provides a fundamentally better submission. Better submissions get faster responses, better terms, and broader coverage. The broker's value proposition shifts from "I help you fill out the application" to "I deliver verified technical state that gets you preferred pricing."
The HATS broker workflow enables this transition. The broker sends a Terminal setup link to the policyholder. The policyholder authorizes their connectors in three clicks. The broker receives a verified control assessment within 60 seconds. The submission goes to the underwriter with attested data instead of self-reported answers. The underwriter responds faster because the data is reliable. The policyholder gets better terms because the verified state eliminates ambiguity. Everyone in the chain benefits from replacing trust with proof.
This matters especially at renewal. Renewal is where the questionnaire burden is heaviest — the policyholder must re-answer hundreds of questions, often identical to last year's questions, while the broker chases responses and the underwriter waits. With continuous attestation through HATS, the renewal submission is pre-populated with 12 months of verified control data. The underwriter sees not just the current state but the trajectory — did MFA coverage improve or degrade over the policy period? Did backup immutability remain consistent? Were there any periods where EDR coverage dropped below threshold? This longitudinal data is vastly more valuable than a single point-in-time questionnaire, and the broker who provides it wins the renewal.
What the Insurance Tech Conference Should Address
The Insurance Tech Conference in Chicago on June 10–11 will feature sessions on underwriting automation, claims technology, and data-driven decision making. If the conference is serious about advancing the state of the industry, it should address three questions directly.
First, what is the acceptable error rate for self-reported controls? If the industry acknowledges that questionnaire responses are unreliable, it needs to quantify the unreliability. How often does the attested state at the time of a claim match the state reported on the application? No carrier publishes this data, but every carrier has it. The gap between reported and actual state is the industry's biggest blind spot, and quantifying it is the first step toward addressing it.
Second, what is the standard for technical verification? If the industry agrees that verification must replace self-reporting, it needs to define what verification means. Is a vulnerability scan sufficient? Is an outside-in assessment sufficient? Or does verification require direct API access to the authoritative systems — the identity provider, the endpoint platform, the backup system? The answer has significant implications for the technology required, the privacy considerations involved, and the legal framework for data sharing between policyholders and carriers.
Third, how does continuous verification change the policy structure? If the carrier has real-time visibility into the policyholder's control state, the annual policy structure may not be the right vehicle. Continuous verification enables continuous underwriting — premiums that adjust dynamically based on verified risk posture, coverage that expands or contracts based on control state, and claims decisions that reference the attested state at the precise moment of the incident rather than the application submitted 11 months earlier.
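What a dynamically adjusted premium could look like is easy to sketch, though the weights below are invented purely for illustration and do not reflect any carrier's actual rating plan:

```python
# Hypothetical sketch of continuous underwriting: a premium modifier recomputed
# from verified control state. The formula and weights are invented for
# illustration only; no real rating plan is implied.
def premium_modifier(mfa_pct: float, edr_pct: float, backups_immutable: bool) -> float:
    """Multiplier applied to the base rate; 1.0 = fully verified posture."""
    modifier = 1.0
    modifier += (100.0 - mfa_pct) * 0.01   # +1% premium per point of MFA gap
    modifier += (100.0 - edr_pct) * 0.005  # +0.5% per point of EDR gap
    if not backups_immutable:
        modifier += 0.15                   # flat load for mutable backups
    return round(modifier, 3)

print(premium_modifier(100.0, 100.0, True))  # 1.0
print(premium_modifier(85.0, 90.0, False))   # 1.35
```

Under this model the premium tracks the verified posture continuously, instead of being fixed by an application that may be 11 months stale by the time a claim occurs.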
The Transition Path
The transition from self-reported to verified controls will not happen overnight. It will follow the same adoption pattern as MFA requirements: a few forward-thinking carriers will require it, competitive pressure will drive broader adoption, and within three years it will be standard. Here is the likely sequence.
Phase 1 (now through mid-2027): Carriers begin accepting verified control data as a supplement to the application. Organizations that provide HATS attestation receive streamlined underwriting and preferential pricing. The data advantage is clear, but the process change is voluntary.
Phase 2 (2027–2028): Leading carriers require verified control data for accounts above a threshold — perhaps $5M in revenue or $2M in coverage limits. The questionnaire remains for smaller accounts, but the large and complex risks must provide direct connector data. Brokers that cannot support verification lose market access for their largest accounts.
Phase 3 (2028+): Verification becomes standard across the market, similar to how MFA requirements became universal between 2023 and 2025. Carriers that still rely solely on questionnaires face adverse selection as verified-state organizations migrate to carriers that offer pricing advantages for attestation. The questionnaire does not disappear entirely — it remains for procedural and organizational controls that cannot be verified through API connectors — but it shrinks to a fraction of its current size.
The organizations and brokers that adopt verification now will be positioned ahead of each phase transition. They will have the connector integrations in place, the longitudinal data accumulated, and the workflow optimized before their competitors begin. In a competitive market, that head start translates directly to better pricing, better coverage, and better outcomes when claims occur.
The Bottom Line
Self-reported controls are the weakest link in cyber underwriting because they produce unreliable data that harms both sides of the policy. Policyholders face claim denials for inaccuracies they did not intend. Carriers face adverse selection they cannot detect. The solution is not better questionnaires. It is cryptographic attestation of control state derived directly from the policyholder's security tools — continuous, tamper-evident, and independently verifiable. The shift Munich Re identified is not coming. It is here.
Further reading: HATS Terminal | Cyber Insurance | HATS & Premiums | When Claims Contradict Applications | Contact Us