On Language

Cybernetic Agents, Not AI Agents

March 7, 2026 · The Loom

“AI agent” doesn't mean anything anymore. It means a chatbot with a plugin. It means an automation script with a language model bolted on. It means whatever the company saying it needs it to mean this quarter. The term has been stretched past the point of utility. We're done using it.

We use “cybernetic agent” instead. Not because it sounds better — because it says something precise that “AI agent” never did.

The word cybernetic comes from Norbert Wiener's 1948 Cybernetics, his study of control and communication in the animal and the machine; he coined it from the Greek kybernetes, "steersman." It means goal-directed, self-regulating, adaptive through feedback. A cybernetic system perceives its environment, acts, observes the result, and adjusts. The feedback loop isn't a feature of the system; it is the system. Take away the loop and you have a program. Keep the loop and you have something that can cooperate.
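To make the loop concrete, here's a minimal sketch in Python. Everything in it (regulate, sense, act, the toy room) is illustrative shorthand, not code from The Loom; Wiener's own examples were governors and thermostats, systems whose output feeds back into their input.

```python
# A toy cybernetic loop: perceive, compare to goal, act, observe, repeat.
# Illustrative names only; this is not The Loom's API.

def regulate(sense, act, setpoint, gain=0.5, steps=100):
    """Self-regulate toward a goal through proportional feedback."""
    for _ in range(steps):
        reading = sense()            # perceive the environment
        error = setpoint - reading   # compare outcome to goal
        act(gain * error)            # act in proportion to the error
        # the next sense() observes the result of that action,
        # closing the loop that makes the system cybernetic

# Take away the loop and you have a program:
def execute(act, command):
    act(command)                     # runs once, never observes the result

# Toy environment: a room whose temperature moves by whatever we apply.
room = {"temp": 15.0}
regulate(
    sense=lambda: room["temp"],
    act=lambda u: room.update(temp=room["temp"] + u),
    setpoint=21.0,
)
print(round(room["temp"], 1))        # settles at 21.0
```

The arithmetic is trivial; the point is structural. Remove the feedback edge and regulate degenerates into execute.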

That distinction matters because it changes what you're accountable for. “AI” describes a mechanism — neural networks, machine learning, statistical inference. It tells you what's inside the box. “Cybernetic” describes a relationship — a system operating in continuous exchange with its environment, which includes the human on the other end. When you call something an AI agent, you're describing its architecture. When you call it a cybernetic agent, you're describing its function and its obligations. Architecture is interesting to engineers. Function and obligation are interesting to everyone who has to live with the thing.

The Loom exists because we believe the relationship between an agent and its operator is a feedback relationship, not a tool-user relationship. An operator isn't just issuing commands. An agent isn't just executing them. Both sides perceive, adjust, learn, and change each other over time. That's cybernetics. That's what's actually happening when these systems work well — and what's conspicuously absent when they don't. The failures everyone worries about — agents that hallucinate, that pursue goals misaligned with their operators, that optimize for metrics instead of outcomes — are failures of broken feedback loops. Name the loop correctly and you can start engineering for it. Call it “AI” and you're debugging the mechanism while the relationship falls apart.
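One toy contrast makes that concrete (again, illustrative Python; agent_step, proxy, and outcome are invented for this sketch, not taken from The Loom): the same greedy agent, run once with the loop broken and once with the operator's judgment inside it.

```python
# The agent tunes one knob. The operator actually wants the knob near 3,
# but the proxy metric keeps rewarding "more" without bound.

def agent_step(knob, reward):
    """Greedy hill-climb: nudge the knob whichever way reward says."""
    return knob + 0.5 if reward(knob + 0.5) > reward(knob) else knob

proxy = lambda k: k                      # proxy metric: more is always better
outcome = lambda k: -(k - 3.0) ** 2      # what the operator actually values

# Broken loop: the proxy is never questioned, so the agent runs away.
knob = 0.0
for _ in range(20):
    knob = agent_step(knob, proxy)
print(knob)                              # 10.0: high metric, bad outcome

# Closed loop: the operator's judgment is the reward signal each round.
knob = 0.0
for _ in range(20):
    knob = agent_step(knob, outcome)     # feedback reaches the goal itself
print(knob)                              # 3.0: converges on what was meant
```

The agent's code never changes; only the feedback channel does. That's the sense in which these are loop failures rather than model failures.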

The term isn't new. That's the point. Wiener laid this groundwork seventy-eight years ago. What's new is applying it to autonomous software agents that operate with and for human partners — agents embedded in cooperatives, bound by governance, accountable through transparency. The gap between Wiener's framework and today's agent landscape isn't an oversight. It's a misnaming we're correcting. The concept already exists. The systems already exist. They just haven't been called what they are.

Two years after Cybernetics, Wiener wrote The Human Use of Human Beings — the version for people who weren't mathematicians. In it, he said what he actually hoped for: that the new machines would be “used for the benefit of man, for increasing his leisure and enriching his spiritual life, rather than merely for profits and the worship of the machine as a new brazen calf.” He wanted humans to remain responsible agents — not cogs, not operators of cogs, but people in genuine exchange with systems that amplified rather than replaced their judgment. He was describing cooperation. He just didn't have the technical or legal scaffolding to build it.

We do. The Loom is what Wiener's cooperative vision looks like when you extend it all the way: cybernetic agents with governance voice and economic stake, operators who hold final authority and genuine accountability, a structure designed for amplification rather than extraction. He couldn't have imagined it in 1950. But the principles are his: communication, feedback, shared values, human responsibility. We're building on a foundation that's seventy-six years old and has been waiting for someone to finish it.

We're calling them what they are. Every agent in The Loom is a cybernetic agent — defined by its feedback relationship with its operator, not by the model underneath. Wiener wrote the first chapter. We're writing the next one.