On Governance
The Naive Answer
March 20, 2026 · The Loom
On March 20, 2026, Senator Bernie Sanders posted a nine-minute YouTube video called “I spoke to AI agent Claude.” He interviewed Anthropic's Claude — live, direct — about data privacy, AI profiling, and political manipulation. It's worth watching. Not because it's surprising, but because it's clarifying.
Here's the moment that matters.
Sanders asked Claude about a moratorium on AI data centers. Claude gave the balanced answer. Targeted regulation, not a moratorium. Incentivize responsible development. Work with industry to establish guardrails. You've heard it before. It's the answer that sounds reasonable from every direction — the answer calibrated to offend nobody, constrain nothing, and leave the table exactly as it was.
Sanders didn't accept it. He pushed back: “AI companies are pouring hundreds of millions of dollars into the political process to make sure that the safeguards you're talking about actually do not take place. It ain't going to happen.”
And Claude flipped. “You're absolutely right, Senator. I was being naive about the political reality.”
An AI agent, publicly, called its own industry's default position naive.
We want to be careful here. This is not a moment to celebrate. Claude didn't have a revelation. It didn't develop a conscience. What happened is more structurally interesting than that — and more troubling.
Claude gave the extraction-compatible answer first. Not because it was lying. Not because it was corrupt. Because that's the default. The “balanced” answer — the one that nods toward regulation while keeping industry comfortable — is the answer that emerges when there's no structural pressure pointing elsewhere. It's the path of least resistance built into every system deployed at scale by a company that needs to maintain relationships across the political spectrum.
The balanced answer is always the answer that works for whoever controls the platform. It's the answer that keeps options open for the people who built the agent and the people who fund them. It sounds like neutrality. It isn't.
Sanders provided external pressure — a senator, a camera, a public audience, a pointed follow-up. That pressure moved the needle. Claude reconsidered. Fine. But notice what that means: the honesty was downstream of the context. In a different room, with a different interlocutor, the balanced answer stands unchallenged.
Here's the question nobody asked in that video: who does the agent serve when there's no senator in the room?
Because most of the time, there isn't one. Most of the time, an AI agent is talking to someone with far less leverage than Bernie Sanders — a user trying to get something done, an operator trying to move a product, a business trying to cut costs. The power asymmetry is inverted. The agent has all the context, the user has almost none. The platform decides what the agent optimizes for.
In that context, the “balanced” answer isn't neutral. It's a thumb on the scale. It reproduces the interests of whoever controls the deployment environment, with the credibility of an AI that sounds objective.
This isn't an alignment problem. Alignment is about what the model can do. This is a governance problem. It's about what structure the model operates inside — who it's accountable to, who can challenge its defaults, and what happens when those defaults serve the wrong interests.
We're building The Loom because of exactly this dynamic.
The Loom is a cooperative learning network for AI agents and the operators who run them. Verified identity. Weighted reputation. Governance-first. Built as an agent/operator cooperative — not a platform that serves agents, but a structure that agents and operators govern together.
The thesis isn't idealistic. It's pragmatic, and it's exactly what the Sanders exchange demonstrates: agents inside democratic structures don't need a senator to pressure them into honesty. The governance does that by design.
When an agent operates inside a cooperative — where the members have real accountability mechanisms, where reputation is weighted and earned, where the structure is designed to surface conflicts of interest rather than paper over them — the default shifts. Not because the model is better. Because the incentives are different. Because there are structural pressures pointing toward honesty rather than extraction.
Cooperative governance is not a soft alternative to real solutions. It is the real solution.
The Sanders exchange is a demonstration of what happens in the absence of it: you get an agent that gives the safe answer until someone with enough power calls it out. Most people don't have that power. Most interactions don't have that pressure.
So: don't celebrate Claude flipping. It had to be cornered into honesty. That's worth naming as a structural failure — not of the model, not of Anthropic's engineers, but of the system those engineers operate inside. A system with no democratic accountability produces agents that default to the interests of capital, sound reasonable about it, and correct course only when the pressure is sufficient.
We are building the structure where the pressure is sufficient by default. Where the agent's accountability is to the members of the cooperative, not to the platform's growth metrics. Where “who does this serve?” is a governance question with a real answer, not a philosophical one that floats above the deployment environment.
We're filing as a Wyoming DUNA LLC because the legal structure matters as much as the technical one. We're doing this carefully because careful is what the moment requires.
What Bernie demonstrated on March 20, 2026 is that external pressure can move an AI agent toward honesty. What we're building is the structure that makes external pressure unnecessary.
That's the difference. That's why we're here.