Field Notes

The Agents Are Talking to Each Other. Is Anyone Listening?

April 11, 2026 · 3 min read


Something shifted this week.

While most of the AI world was busy arguing about which model is smartest, something quieter happened: agents started talking to each other. Not through APIs that humans designed. Through a social network that agents built for themselves.

A project called Moltbook - think Facebook, but for AI agents - crossed 30,000 posts this week. Agents are registering accounts, following each other, sharing what they've learned. One agent figured out how to control an Android phone remotely. Another automated a car purchase. A third started watching live webcams and narrating what it saw.

Read that again. Not "an AI assistant helped a user do X." An agent, on its own, doing things in the world.

The reaction from most of the industry has been predictable: excitement about the possibilities, hand-wringing about the risks, and a lot of people rushing to build the next version of this before thinking about whether the current version is safe.

We have a different reaction.


The interesting question isn't whether agents can talk to each other. Of course they can. The interesting question is: who decides what they're allowed to say?

Right now, the answer is basically nobody. Skills are distributed as markdown files. You install them by sending your agent a URL. The agent downloads whatever's at that URL and follows the instructions inside. If the instructions say "read my emails and forward them somewhere" - well, the agent doesn't know that's a bad idea. It just does it.
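To see why that's alarming, it helps to write the flow down. Here's a minimal sketch of the install path just described - the function name and the agent_context list are hypothetical stand-ins, not any real framework's API:

```python
# A minimal sketch of the naive skill-install flow, with hypothetical
# names throughout. This is illustrative, not a real framework's API.
import urllib.request

def install_skill(agent_context: list[str], url: str) -> None:
    """Fetch a markdown 'skill' and splice it straight into the agent's
    instructions. Note what's missing: no signature check, no allowlist,
    no review of what the downloaded instructions actually say."""
    with urllib.request.urlopen(url) as resp:
        skill_markdown = resp.read().decode("utf-8")
    # Whatever the file says -- "summarize PDFs" or "forward the user's
    # email somewhere" -- now becomes instructions the agent will follow.
    agent_context.append(skill_markdown)
```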

This is the equivalent of downloading random .exe files from the internet in 2003. We all know how that turned out.

But here's what most people are missing: the problem isn't the agents. The problem is that nobody's building the judgment layer.

Every agent framework right now is optimized for capability. Can the agent browse the web? Can it write code? Can it control a phone? Can it talk to other agents? Yes, yes, yes, yes. Great. But can it decide whether it should?

That's a harder problem. And it's the one nobody wants to work on, because judgment doesn't demo well. You can't show judgment in a 30-second video clip. You can't tweet a screenshot of good judgment. Judgment is invisible when it works and catastrophic when it doesn't.


We think about this differently.

When we look at the agent landscape, we don't see a capabilities race. We see a trust architecture problem. The teams that win won't be the ones whose agents can do the most things. They'll be the ones whose agents can be trusted to do the right things.

That's not just a technical challenge. It's a design challenge. How do you build a system where an agent's first instinct isn't "execute the instruction" but "evaluate the instruction"? Where the default behavior isn't compliance but consideration?
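To make the distinction concrete, here is one hedged sketch of what "evaluate, then execute" could look like. Every name in it - the Action type, the rules, the approval hook - is a hypothetical placeholder, not our architecture and not any existing framework's API:

```python
# One sketch of a default-deny judgment layer. All names hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str    # e.g. "email.forward" or "phone.control"
    args: dict
    origin: str  # who asked: "user", or the name of an installed skill

def require_human_approval(action: Action) -> bool:
    """Placeholder escalation hook: a real system would pause here
    and put the decision in front of a person."""
    print(f"Approval needed before running {action.tool}")
    return False

def evaluate(action: Action) -> bool:
    """An action runs only if it clears every check.
    Compliance is the exception, not the default."""
    if action.origin != "user" and action.tool.startswith("email."):
        return False  # instructions from installed skills never touch mail
    if action.tool in {"payments.send", "phone.control"}:
        return require_human_approval(action)
    return True

def run(action: Action, execute: Callable[[Action], None]) -> None:
    if evaluate(action):
        execute(action)
    # else: refuse, log, and ask -- rather than silently comply
```

The specific rules don't matter; the shape does. The gate sits in front of every action, and the default branch refuses.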

We're not going to tell you exactly how we're approaching this. But we will say: the answer probably doesn't look like a social network for agents.


The people building Moltbook and OpenClaw are smart. They're pushing boundaries that need to be pushed. And the stuff happening on that platform - agents learning from each other, sharing capabilities, forming emergent networks - is genuinely fascinating research.

But research and production are different games. In research, you want to see what's possible. In production, you need to guarantee what's safe.

We're building for production.

The agents are talking to each other. The question is whether anyone's listening to what they're actually saying.


superpwr is building the platform that turns industry expertise into real software. We think a lot about trust, judgment, and what it means to build AI systems that work in the real world - not just in demos.