This is the second article in The Organism series. The first — Power, Trust, Control — is about building a personal AI you actually trust.

The first article traced three attempts at building a personal AI — from a cloud-hosted agent that felt too exposed, to a bespoke system built from scratch, to finding the right framework. The key insight: the hardest part isn’t the technology, it’s deciding what the agent should know about you, and calibrating when to check its work and when to let it run. Trust has to be built. It can’t be installed.

The natural assumption, once you have a personal AI that works, is that you’re done. One capable agent, trusted, calibrated, running well. That’s the goal.

It was mine. Then I started noticing something.

I kept routing everything through my primary agent because it was the one I trusted. Research tasks, scheduling, code, analysis — all through the same point. It handled most of it. But trust built for one job doesn’t transfer cleanly to every job.

A single generalist agent — even a good one — is a single point of failure, just not in the reliability sense. It’s a failure of fit.


The answer felt obvious once I saw it: not one agent, but several. A coordinator always on, handling comms and scheduling and awareness. A researcher for synthesis and deep analysis — the kind that takes time and benefits from a different model entirely. An operator for hands-on local work, system access, the stuff that needs to run close to the machine.

Not a team in a loose sense. An actual mesh — connected, coordinated, with defined lanes.

The moment it clicked was unexpectedly quiet. I messaged the coordinator from my phone late one evening — a research question that would normally have pulled me into an hour of reading. The coordinator handed it to the researcher. I went to sleep. By morning, a synthesised brief was waiting, sourced and structured, with the coordinator’s summary on top. Nobody asked me to approve the handoff. Nobody needed to. The boundaries I’d set meant each agent knew what it could and couldn’t do — and the researcher’s lane included exactly this kind of work.

That was the shift. Not a feature I configured. A system that behaved like a team I’d trained.


The current vocabulary for multi-agent architectures is orchestrators and sub-agents. The orchestrator breaks a task into pieces, dispatches sub-agents, reassembles the result. Clean pattern. Works well.

But it’s not what I’m describing.

Sub-agents are ephemeral — spun up for a task, gone when it’s done. No memory, no standing, no relationship with the work that persists. They’re closer to a function call than a colleague. Mesh agents are different: they persist, they have their own accumulated context, and their boundaries are set by me permanently — not configured at dispatch time by whatever agent happens to be calling them. The human stays in the loop at the point of delegation, not just at the point of output.

Sub-agents are tools an orchestrator uses. Mesh agents are peers with standing, with whom you have calibrated trust and ongoing relationships. Less like a function call, more like knowing who to call for what.

The difference comes down to three things.

Persistence. Mesh agents aren’t instantiated for tasks — they exist. They have their own memory, their own accumulated context, specialisations built over time. The researcher has read things. The coordinator has handled enough of my scheduling to know how I think about time. That history is part of what makes them useful. You don’t get that from a sub-agent.

Standing autonomy. Each mesh agent has its own control structures — what it can access, what it can’t, what requires checking — set by me, not by whatever agent happens to be calling it. A sub-agent’s permissions come from its orchestrator. A mesh agent’s permissions are its own, independent of who’s asking.

Where the human sits. In a sub-agent architecture, the orchestrator can autonomously invoke sub-agents — the human is in the loop at the start and the end, but the middle can run without them. In my mesh, the human is still the initiating authority for delegation. My primary agent doesn’t autonomously hand a task to the researcher without me. That’s deliberate. It keeps me at the point of delegation, not just at the point of output.
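To make the three properties concrete, here is a minimal sketch — not a real framework, every name below is invented for illustration. It shows an agent whose memory persists across tasks, whose permissions belong to it rather than to whoever calls it, and a delegation step that requires the human:

```python
# Hypothetical sketch of a mesh agent. Names and structure are invented
# for illustration; this is not a real agent framework's API.
from dataclasses import dataclass, field


@dataclass
class MeshAgent:
    """A persistent agent: it exists, rather than being spun up per task."""
    name: str
    memory: list[str] = field(default_factory=list)  # accumulates across tasks
    allowed: set[str] = field(default_factory=set)   # set by the human, once

    def can(self, action: str) -> bool:
        # Standing autonomy: permissions are the agent's own,
        # independent of who is asking.
        return action in self.allowed


def delegate(human_approved: bool, target: MeshAgent, task: str) -> str:
    """Delegation is initiated by the human, not by another agent."""
    if not human_approved:
        raise PermissionError("the human sits at the point of delegation")
    if not target.can(task):
        raise PermissionError(f"{target.name} has no standing lane for {task!r}")
    target.memory.append(task)  # persistence: the researcher 'has read things'
    return f"{target.name} accepted {task!r}"
```

The contrast with a sub-agent is in the constructor: a sub-agent would receive its permissions at dispatch time from its orchestrator, then vanish; here `allowed` and `memory` outlive any single task.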


With a mesh, calibration gets more interesting. You’re no longer just deciding what you trust one agent to handle. You’re calibrating at two levels simultaneously — what you trust your primary agent with, and what each specialist is trusted to do within its domain.

The researcher can synthesise a brief without me reviewing every source. The coordinator can manage scheduling without me approving each message. That’s not abdication — it’s calibration that’s been earned. Each agent trusted for what it actually does, not for everything.
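The two levels can be sketched as two small tables — what the primary agent routes where, and what each specialist has earned the right to do without review. Again, every name here is a hypothetical illustration, not a real configuration format:

```python
# Hypothetical illustration of two-level calibration. All names invented.

# Level 1: what the primary agent is trusted to route, and to whom.
primary_routes = {
    "research": "researcher",
    "scheduling": "coordinator",
}

# Level 2: what each specialist may do within its domain without review.
autonomous_within_domain = {
    "researcher": {"synthesise_brief"},          # no per-source review
    "coordinator": {"send_scheduling_message"},  # no per-message approval
}


def needs_review(domain: str, action: str) -> bool:
    """True if the action falls outside the specialist's earned autonomy."""
    specialist = primary_routes.get(domain)
    if specialist is None:
        return True  # unrouted work comes back to the human
    return action not in autonomous_within_domain.get(specialist, set())
```

The point of the shape is that trust is scoped twice: the researcher can synthesise a brief unreviewed, but anything outside its earned set still comes back to the human.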

What you end up with isn’t complexity. It’s clarity. Each agent has a lane. You know what to route where. The overhead of managing the mesh is lower than the overhead of one agent doing everything badly.


There’s a different kind of question that starts to surface once the mesh is running.

Your agents are calibrated. Trusted. Each one doing its job. The system is working.

Now imagine everyone in your organisation has this.

Not the same agent — theirs. Built with their context, their calibrated trust, their standing configuration of what it can and can’t touch. Each person with their own primary agent, their own researcher, their own coordinator.

What does that entity look like when all those personal meshes exist in the same organisation? What happens when they start to interact? What’s shared, what stays personal, and who decides?

That’s the question for the next piece.


Second in a series on what building with AI actually looks like — not from a distance, but from inside it. Start with Power, Trust, Control.