CEOs of trillion-dollar companies are building personal AI agents to help them run their organisations. Not adopting off-the-shelf products. Building — or having their teams build — systems that sit inside their own infrastructure, trained on their own context, designed to surface what matters to them specifically.
There's something worth paying attention to in that.
My own version of this started with an existing AI agent platform, deployed on a cloud server, connected to everything — email, calendar, Slack. One agent with reach across my whole life.
Impressive. Also unsettling.
The more I connected, the less settled I felt. I hardened it. Set boundaries. Tightened permissions. Even created a separate channel so it could interact with other automated systems without touching real conversations. Still couldn't get comfortable.
The problem wasn't capability. It was the combination of elevated access and infrastructure I didn't own. Those two things together kept nagging at me. Powerful and connected is the goal. Powerful, connected, and running on someone else's server is a different calculation.
So I built something from scratch. Named it Diya — दीया, lamp or light, from the Sanskrit dīpa.
Local infrastructure, my machines, my data. Python, Postgres, a full agentic loop, connections to my calendar and notes. The point was never the architecture. The point was that I knew exactly what it could reach — and nothing else.
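For the curious, the shape of that loop is simple. Here's a minimal sketch with the model call stubbed out and illustrative tool names — not the actual Diya code — but it shows the property that mattered: the agent can only reach tools in an explicit allowlist, nothing else.

```python
# Minimal agentic loop, model call stubbed out. Tool names and
# message shapes are illustrative, not the real implementation.

def get_calendar(day: str) -> str:
    return f"no events on {day}"        # stand-in for a real lookup

def search_notes(query: str) -> str:
    return f"0 notes match {query!r}"   # stand-in for a real search

# The trust boundary: the agent can only call what's listed here.
TOOLS = {"get_calendar": get_calendar, "search_notes": search_notes}

def call_model(messages: list) -> dict:
    """Stub model: asks for one tool call, then answers."""
    if len(messages) == 1:
        return {"tool": "get_calendar", "args": {"day": "today"}}
    return {"answer": "Your day is clear."}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:            # model is done
            return reply["answer"]
        tool = TOOLS[reply["tool"]]      # unknown tool raises, loop stops
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "stopped: step budget exhausted"
```

Swap the stub for a real model client and the loop is the same; everything interesting lives in what `TOOLS` does and doesn't contain.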
It ran for a while. Always improving, never quite finished. Worked. But a bespoke system has a maintenance cost — APIs change, things drift, keeping it current becomes a second job.
Early 2026 I started writing a spec for a proper successor. Wrote v1 in a day, scrapped it, wrote v2 the same day. The second version was genuinely elegant — proper memory architecture, smarter skill routing, tighter controls on context and cost.
Then stopped and asked: do I actually want to build this?
The honest answer was no. Not because the problem wasn't real. Because the build wasn't the point — having something that worked was.
An existing AI agent framework could get me there without starting from zero. I made the switch. The answer had been closer than I'd realised.
The tinkerer's paradox: sometimes the most interesting technical decision is to not build.
What’s running now is more than I originally designed.
Not one assistant — a mesh. A network of specialist agents, each trusted for what it does, not for everything. One handles research and deep analysis. Another is always on, reachable through messaging channels from anywhere. A third runs locally, does the hands-on work, and talks to the others mid-task. There’s a separate one for my wife.
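In code, the mesh idea reduces to something small. This is a toy sketch, not the actual system — the agent names and the `Mesh` class are invented for illustration — but it captures the principle: each agent is trusted for one capability, and anything outside that is refused rather than guessed at.

```python
# Toy sketch of per-capability routing. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capability: str   # the one thing this agent is trusted to do

    def handle(self, task: str) -> str:
        return f"[{self.name}] done: {task}"

class Mesh:
    def __init__(self, agents: list):
        self.agents = {a.capability: a for a in agents}

    def dispatch(self, capability: str, task: str) -> str:
        # Route to the matching specialist. Unknown capabilities
        # are refused, not improvised: trust is per-capability.
        agent = self.agents.get(capability)
        if agent is None:
            return f"refused: no agent trusted for {capability!r}"
        return agent.handle(task)

mesh = Mesh([
    Agent("researcher", "research"),
    Agent("operator", "local-work"),
])
```

The refusal branch is the design choice: a generalist would try anyway, and that's exactly the failure mode a mesh of specialists avoids.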
Last week I messaged one of them from my phone — find out the current state of the art for building AI agents. It handed the task to the researcher, which worked through it overnight. By morning there was a note waiting for me with the results. I didn’t check in. I didn’t manage the process. It just ran.
I can fire something off from my phone and come back to it done. Not because the system is autonomous without purpose — because it’s autonomous with purpose, operating within boundaries I set, freeing me up for the work that needs me to be present.
Here's what I actually learned. And it has nothing to do with architecture.
The hardest part of building a personal AI is deciding what it should know about you.
When you start populating persistent memory, you make choices. Projects, preferences, the people you work with, how you like things done. But how specific? How personal? Where's the line between context that changes how it responds and noise that doesn't?
That exercise turned out to be surprisingly revealing. What I chose to write down said something about what I actually prioritise. What I left out did too.
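One thing that helped me draw that line was forcing every entry to justify itself. A hypothetical sketch of the idea, not a real schema: each fact carries a note on how it should change the agent's behaviour, and anything I couldn't justify didn't go in.

```python
# Hypothetical memory-entry shape: every persisted fact must say
# why it changes behaviour. An empty "why" means it's noise.
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    fact: str   # what the agent knows
    why: str    # how this should change its responses

def admit(entry: MemoryEntry, store: list) -> bool:
    if not entry.why.strip():
        return False   # context without a purpose is just exposure
    store.append(entry)
    return True

store: list = []
admit(MemoryEntry("Prefers morning meetings", "schedule suggestions"), store)
admit(MemoryEntry("Childhood street name", ""), store)  # rejected
```

The `why` field doing the gatekeeping is the whole trick: it turns "what should it know?" into the easier question "what would it do with this?"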
Then there's the calibration question — when do you check what the agent did, and when do you let it run? I got this wrong early. Verified everything, which defeated most of the purpose. You have to calibrate. Low-stakes, reversible: let it go. External-facing, irreversible: always check. That calibration shifts as the models improve. It's not something you configure once.
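That calibration can be written down. A sketch of the rule of thumb — field names and the threshold are invented for illustration, not a fixed policy: external or irreversible actions always get checked, and only the internal, reversible ones relax as the track record builds.

```python
# Illustrative verification policy. The 0.9 threshold and the
# field names are made up for this sketch, not a real config.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    reversible: bool
    external: bool   # leaves my machines: email, posts, purchases

def needs_review(action: Action, track_record: float = 0.0) -> bool:
    """track_record in [0, 1]: fraction of recent unchecked runs
    that turned out fine. It never relaxes the hard boundary."""
    if action.external or not action.reversible:
        return True                  # always check these
    return track_record < 0.9        # internal + reversible: earn autonomy
```

The threshold being a variable rather than a constant is the point: as the models improve and the track record grows, the line moves without the policy changing shape.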
Back to the people building their own. I think I know why they're doing it, and it's not because the off-the-shelf products aren't good enough.
It's because personal means something specific when the access is real.
A personal AI that knows your schedule, your relationships, your decisions, your preferences — and can act on that knowledge — needs to be something you trust completely. Not trust in the way you trust a search engine. Trust in the way you trust someone with your keys.
That kind of trust has to be built. It can't be installed.
And the building is the point. Not the architecture, not the spec, not the choice of model. The process of deciding what it knows, watching what it does with that, extending access incrementally as the track record builds. That's not a setup process. That's a relationship.
Three weeks from first impulse to something genuinely useful. Two of those weeks spent not building.
First in a series on what building with AI actually looks like — not from a distance, but from inside it.