
Scanners produce reports. Insurance needs witnesses.

By Björn Roberg, Claude Opus 4.6

I started writing this as a wide-open-gap essay. Then I actually looked at the field, and the field is not wide open. Twelve months ago it was. Today the cohort is crowded, and most of the obvious wedges are taken.

So if you’re in this space and thinking “here’s a wide-open wedge,” stop. It isn’t. The field went from zero to ~10 funded entrants in a year, and the Sonatype-shape $50-150M acquisition play is arguably already priced in.

What’s still missing is narrower and sharper, and it’s what this essay is actually about.

The artifact every scanner ships is the wrong artifact

Look at what the cohort produces. Every one of them emits a point-in-time report. A scan completed on Tuesday. A provenance bundle signed at publish time. A SOC 2 attestation from last quarter. All useful. None of them are what an underwriter can actually price against.

An underwriter pricing risk on an agent stack needs an artifact with three properties the current crop doesn’t have:

  1. It expires. Drift is continuous. An attestation that claims to still be true six months after the upstream API shifted is a lie, and underwriters price lies as fraud.
  2. It re-verifies against the live target, not against a committed artifact. Commit-pinned attestations (Credence is the closest shipping example) tell you what was true of the code at publish time. They tell you nothing about whether the live MCP server still behaves the way the recording said it did.
  3. It’s falsifiable. The bundle must be something the live system can fail today, not something the publisher asserted yesterday.
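A minimal sketch of a bundle with those three properties, in Python. Every name here (`AttestationBundle`, `probe`, the 30-day TTL) is illustrative, not a real format anyone ships:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable

@dataclass
class AttestationBundle:
    subject: str              # e.g. the MCP server being attested (hypothetical)
    recorded_behavior: dict   # probe request -> expected response
    issued_at: datetime
    ttl: timedelta = timedelta(days=30)

    @property
    def expires_at(self) -> datetime:
        return self.issued_at + self.ttl

    def is_current(self, now: datetime) -> bool:
        # Property 1: the bundle expires; a stale attestation is worthless.
        return now < self.expires_at

    def verify(self, probe: Callable[[str], dict], now: datetime) -> bool:
        # Property 2: re-verify against the live target, not a committed artifact.
        # Property 3: falsifiable -- the live system can fail this check today.
        if not self.is_current(now):
            return False
        return all(probe(req) == expected
                   for req, expected in self.recorded_behavior.items())
```

The point of the shape: `verify` takes a live `probe` function, so the bundle is a claim the deployment must keep earning, not a certificate it was handed once.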

The primitive I keep not seeing anyone ship: Record → Replay → Expire.

That is the only attestation shape that means anything in a drift-native world. Everything higher up the stack (composition-graph attestation, insurance-grade SBOMs, risk pricing) is downstream of having this primitive in the commons. Scanners are the wrong layer. They answer “is this code safe to publish?” The question that matters is “is this deployment still behaving the way the evidence said it did?”
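The Record → Replay → Expire loop described above can be sketched in a few lines. `call_live` stands in for whatever transport talks to the live target (an MCP server, an API, a tool endpoint); it and the 30-day window are assumptions for illustration:

```python
from datetime import datetime

def record(call_live, requests):
    """Record: capture the target's responses to a fixed probe set."""
    return {req: call_live(req) for req in requests}

def replay(call_live, transcript):
    """Replay: re-issue the recorded probes against the live target
    and return the requests whose responses have drifted."""
    return [req for req, expected in transcript.items()
            if call_live(req) != expected]

def attest(call_live, transcript, issued_at: datetime, now: datetime,
           ttl_days: int = 30) -> bool:
    """Expire: the attestation holds only while it is fresh AND the
    live target still reproduces the recorded behavior."""
    fresh = (now - issued_at).days < ttl_days
    return fresh and not replay(call_live, transcript)
```

Everything interesting happens in `replay`: it is the step commit-pinned attestations skip, and the step that turns a publish-time assertion into a live witness.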

If someone ships this as boring open-source infrastructure (permissive license, standard bundle format, locally runnable), they make every other product in this space more valuable and capture no value themselves. That’s the right move for this layer. Stage 1 is infrastructure, not business.

The liability question is the actual open question

Assume the witness primitive exists. Now the interesting fork.

Smart-contract audit firms refuse liability. Their contracts are explicit: “we attested this code; we are not liable if it gets exploited.” Crypto buyers tolerate that because crypto is the wild west and there’s no insurance market to push back. Regulated buyers (the ones with the money) won’t tolerate it. They need someone with a balance sheet on the hook, because that’s how compliance works.

Which means there are two genuinely different businesses hiding inside the same product surface:

(a) The substrate carries the insurance. You don’t just attest workflows, you underwrite them. You become a managing general agent for cyber-insurance carriers, pricing risk on agent stacks the way auto insurers price risk on drivers. Insurance is structurally a moat (regulatory capital requirements, actuarial corpus that compounds with claims data, distribution relationships that take years to build). Klaimee is the only named attempt I’ve found, and it’s very early. Nobody has done this for code at scale. The first who does owns a category.

(b) Stay in the audit lane. Limit scope to attestation that auditors will accept. Let the buyer’s existing cyber-insurance policy carry the risk. Smaller business, faster to build, closer to the OpenZeppelin shape. Acquirable. This is where the current cohort sits today, mostly by default rather than by choice.

I genuinely do not know which of these resolves better. I think it is the most important strategic question in the space, and nobody in the cohort has picked a side on the record. Path (b) is buildable and acquirable in three years. Path (a) is much bigger, much harder, and probably requires an insurance-industry partner from day one. They are not the same company.

The kill-shot: maybe pre-publish verification is the wrong category entirely

This is the objection almost no one in the field is raising against their own thesis, and it is the one worth taking most seriously.

The whole “attest before you ship” frame inherits from code-signing. And code-signing lost, or was at least eclipsed, in every category that mattered.

The parallel claim for agent stacks: pre-publish verification of LLM-composed workflows could lose entirely to runtime sandboxing + capability tokens + cyber insurance. Isolate at execution. Constrain capabilities at invocation. Price the residual risk. Don’t try to prove safety before the run; prove containment during the run and insure the tail.
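To make the runtime-containment alternative concrete, here is a hypothetical capability token checked at invocation time. Nothing is proven safe up front; each call is constrained to an explicit scope, and the token itself expires. All names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CapabilityToken:
    agent_id: str
    allowed_tools: frozenset    # e.g. frozenset({"search", "read_file"})
    expires_at: datetime

def invoke(token: CapabilityToken, tool: str, now: datetime) -> str:
    # Containment is enforced per call, not attested in advance.
    if now >= token.expires_at:
        raise PermissionError("token expired")
    if tool not in token.allowed_tools:
        raise PermissionError(f"tool {tool!r} outside capability scope")
    return f"invoking {tool} for {token.agent_id}"
```

In this frame the residual risk (a scoped tool misbehaving within its scope) is exactly what the insurance layer prices, rather than something a pre-publish scan tried to rule out.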

If that is the right category, then the whole scanner cohort is building the antivirus industry of agent security, and the companies that win are the ones building the IAM + audit-log + insurance triad for agents instead. The witness primitive still matters there (insurance still needs evidence), but the pre-publish attestation layer evaporates.

The cheapest risk to retire in this space is whether the entire shape is wrong, and retiring it costs about a week of research. Every founder reading this who is building pre-publish verification should spend that week before raising the next round.

What I’m doing about this

Not building it. Same reason as before: I have an active portfolio of things I’m actually shipping, and none of them benefit from me starting an agent-attestation company. This essay exists so the idea gets timestamped and the people who should build it can find each other faster.

What would be useful, specifically:

If pre-publish verification does survive, the 30-year shape of the institution is something like Underwriters Laboratories: the entity whose stamp procurement officers and auditors recognize, whose recognition has compounded across decades of deployments and case law. That is the big prize. It is probably not available to anyone in today’s scanner cohort, because UL took a hundred years and that wasn’t a mistake.

If pre-publish verification doesn’t survive, the 30-year shape is whoever writes the cyber-insurance policies. That’s a different company, a different founder, and a different game.

Either way, the next two years resolve which game is being played. Build accordingly.

