Earlier this year, we decided we no longer needed a piece of software we'd been running. We asked to reduce the subscription.

What followed was a masterclass in platform gravity — the tendency of systems to make leaving more expensive the longer you stay. Arguments. Delays. Contractual friction. We ended up paying for a period we didn't want.

I'm not describing bad actors. I'm describing a system working exactly as it was designed to — for the vendor.

That experience was clarifying. Not because it was surprising — anyone who has operated enterprise software for long enough has a version of this story. But because of the timing. We are at the beginning of a period when AI agents are being woven into exactly these kinds of platforms, at a depth and speed that make the gravity I just described look shallow. The question of what happens when AI becomes load-bearing infrastructure has been on my mind for a while. This is the platform layer of the same question.


Salesforce announced Headless 360 in April 2026. The framing is liberation.

The entire platform exposed as APIs and agent endpoints. The UI optional. Every object, every workflow, every automation available to any application, any AI agent, any custom interface. Build on top of us, they said, without being forced to use our screens.

It is a remarkable piece of engineering. And it is worth understanding precisely what it does to the dependency relationship.

It is not an isolated move. In the same month, Anthropic changed the tokeniser on Claude Opus 4.7. The price per token stayed the same. The denominator changed — the model now counts tokens differently, so the same workload costs more. No announcement. No price increase. Just a quieter shift in what the unit means.
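The arithmetic of a denominator change is worth making concrete. A minimal sketch, with entirely illustrative numbers (the real per-token price and tokenisation densities are not public in this form): the price per token stays fixed, but a denser tokeniser counts the same text as more tokens, so a fixed workload costs more.

```python
# Illustration only: all numbers are hypothetical, not Anthropic's actual pricing.
PRICE_PER_1K_TOKENS = 0.015  # unchanged across tokeniser versions

def monthly_cost(chars_processed: int, chars_per_token: float) -> float:
    """Cost of a fixed workload under a given tokenisation density."""
    tokens = chars_processed / chars_per_token
    return tokens / 1000 * PRICE_PER_1K_TOKENS

workload = 500_000_000  # characters per month, same before and after

before = monthly_cost(workload, chars_per_token=4.0)  # old tokeniser
after = monthly_cost(workload, chars_per_token=3.5)   # denser counting

print(f"before: ${before:,.2f}  after: ${after:,.2f}  "
      f"increase: {after / before - 1:.1%}")
```

Nothing on the price sheet moved, yet the invoice grows by the ratio of the two densities. That is why per-unit price is the wrong thing to monitor on its own.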

Two moves. One month. Same pattern at two different layers of the same stack.

The old lock-in was about the UI. Users learned it. Processes formed around it. Switching meant retraining people and rebuilding workflows — painful, but visible. You could see the dependency because you could see the people sitting in front of it.

Headless removes the UI. It also removes the visibility.

When your AI agents are calling platform APIs directly — reading customer context, writing interaction history, triggering workflows, reasoning about data inside the platform's object model — the dependency becomes invisible infrastructure. There's no interface to point at. There are no users to retrain. There is just a system your agents depend on, at a depth that compounds quietly, month after month, until the day you want to leave. I wrote about why AI demos work but production systems fail — part of that failure is architectural decisions made under speed that only reveal their cost later. Platform dependency is the same pattern.

The counterintuitive truth: headless architecture deepens lock-in precisely because it reduces friction. Lower friction means more integrations. More integrations means more of your logic lives inside their platform. More logic inside their platform means the switching cost is no longer about migrating a UI. It's about migrating a data model. It's about whose vocabulary your agents are thinking in.


The concept of data gravity was coined by Dave McCrory in 2010. His observation was simple: data accumulates in one place, and applications migrate toward it — because moving data is expensive, slow, and risky. The more data in a system, the more processes orbit it. The more processes orbit it, the harder it is to leave.

On data gravity

Dave McCrory published the original formulation in 2010. The core idea: like physical gravity, data has mass. The more of it accumulates in one place, the stronger the pull on everything around it — applications, services, workflows. And like physical gravity, the force compounds with proximity. The closer you build to the data, the more efficient your system. The more efficient your system, the deeper you’re embedded. McCrory’s original post is worth reading: it was written about cloud infrastructure, but the logic applies identically to enterprise platforms in the AI era.

What is new is the AI amplifier.

Before agents, data gravity was shaped by human usage. The number of users, their daily workflows, the integrations they depended on. Linear. Visible. You could measure it by counting seats.

After agents, data gravity is shaped by automated consumption. Agents run continuously. They read and write at machine speed. A workflow that a human executed twice a day, an agent executes a thousand times. The gravity is no longer linear in users. It scales with the number of agents, the frequency of their tasks, the depth of the data model they're reasoning within.

Three things change.

The consumption becomes invisible. Humans leave traces — notes, tickets, decisions you can audit. Agents generate API calls that accumulate in a billing dashboard until someone looks.

The failure modes get faster. A human making a bad decision in a system of record leaves a trail. An agent making thousands of bad decisions compounds the error before anyone notices. The blast radius scales with execution speed.

The dependency becomes harder to name. You can explain to a board that you have four hundred seats in a platform. It is harder to explain that your AI agents have adopted that platform's data model as the implicit vocabulary of your customer intelligence layer. Both are real dependencies. Only one shows up on a renewal invoice.


None of this argues against platforms.

Building an equivalent stack yourself is not cheaper once you account for engineering time, operational burden, security posture, and the opportunity cost of not shipping product. You trade licence lock-in for complexity lock-in. The calculation that made a commercial platform the obvious choice two years ago is less obvious today — AI has compressed the time it takes to build a fit-for-purpose system. The gap has narrowed. The decision deserves more thought than it usually gets.

But the more important point is this: most organisations don't choose their platform. They drift into it. Speed mattered. The ecosystem was convenient. Procurement was already set up. The decision was made by default, not by design.

There is a difference between:

We use this platform as our system of record. We evaluated the alternatives. The ecosystem and compliance posture make it the right call at our current scale. We've abstracted our agent layer so our business logic doesn't speak the platform's object model directly — we translate at the boundary. We measure our consumption per workflow. We could migrate in six months if we needed to.

And:

We use this platform because that's what we started with.

Both organisations are in the gravity well. Only one knows it.


I have operated with a particular instinct for a long time — longer than AI has been a strategic question. The discipline of not letting any single vendor become load-bearing without a clear exit. It costs something upfront. Architecture that preserves optionality is slower to build than architecture that doesn't. The payback comes later, and tends to come all at once.

That instinct sharpened this year.

The renewal experience I described at the start wasn't just frustrating. It was instructive. We wanted to leave. The gravity made it expensive. Not because anyone behaved badly. Because the system was working as designed.

Awareness precedes choice. Here is what to be aware of.

Whose vocabulary are your agents speaking? If your prompts, your data pipelines, and your agent logic are full of your platform vendor's object names — their field structures, their relationship model — you have adopted their ontology. That is deeper than data migration. It is a conceptual migration. Own your data model internally. Translate to the vendor at the boundary.
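What translating at the boundary looks like in practice, as a minimal sketch. The field names here are hypothetical (the `__c` suffix mimics the Salesforce custom-field convention, but this is not any real org's schema): the point is that agents and prompts reason about your `Customer`, and the vendor's vocabulary appears in exactly one place.

```python
from dataclasses import dataclass

# Internal model: our vocabulary, not the vendor's. (All names hypothetical.)
@dataclass
class Customer:
    customer_id: str
    name: str
    lifetime_value: float

# The vendor payload uses the platform's object model. The translation
# lives in one module at the boundary, nowhere else.
def from_vendor(record: dict) -> Customer:
    return Customer(
        customer_id=record["Id"],
        name=record["Name"],
        lifetime_value=float(record.get("Total_LTV__c", 0.0)),
    )

def to_vendor(c: Customer) -> dict:
    return {"Id": c.customer_id, "Name": c.name, "Total_LTV__c": c.lifetime_value}

# Everything downstream sees `Customer`; nothing downstream sees `Total_LTV__c`.
c = from_vendor({"Id": "001", "Name": "Acme", "Total_LTV__c": "12.5"})
print(c)
```

If the vendor changes, or you leave, the migration is confined to `from_vendor` and `to_vendor`. Without the boundary, it is confined to nothing.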

What does your consumption look like at scale? Not today's usage. The usage you are planning for. The difference is usually an order of magnitude. Model it before the invoice surprises you.
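The modelling does not need to be sophisticated to be useful. A back-of-envelope sketch, with illustrative figures: multiply agents by task frequency by API calls per task, for today's pilot and for the planned rollout, and look at the ratio.

```python
def projected_monthly_calls(agents: int, tasks_per_agent_per_day: int,
                            calls_per_task: int, days: int = 30) -> int:
    """Planned-for consumption, not today's usage."""
    return agents * tasks_per_agent_per_day * calls_per_task * days

# Today: a pilot. Planned: the rollout. (Figures are illustrative.)
today = projected_monthly_calls(agents=3, tasks_per_agent_per_day=50, calls_per_task=8)
planned = projected_monthly_calls(agents=40, tasks_per_agent_per_day=200, calls_per_task=8)

print(f"today: {today:,}  planned: {planned:,}  ratio: {planned / today:.0f}x")
```

Even with modest assumptions the gap between pilot and rollout is tens of times, which is the order of magnitude the invoice will eventually report.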

Do you know your own numbers? Vendors know their consumption figures precisely. You often don't, until the bill arrives. Build your own monitoring — per agent, per workflow, per vendor — before you need it.
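A minimal sketch of what "build your own monitoring" can mean at the smallest useful scale: an in-process ledger keyed by vendor, agent, and workflow. In production you would feed a metrics system instead, but the keying is the point, because the vendor's dashboard will never break consumption down by your workflows.

```python
from collections import defaultdict

class ConsumptionLedger:
    """Meter calls and tokens per (vendor, agent, workflow),
    before the vendor's invoice does it for you."""

    def __init__(self):
        self.calls = defaultdict(int)
        self.tokens = defaultdict(int)

    def record(self, vendor: str, agent: str, workflow: str, tokens: int = 0):
        key = (vendor, agent, workflow)
        self.calls[key] += 1
        self.tokens[key] += tokens

    def report(self):
        """Heaviest consumers first."""
        return sorted(self.calls.items(), key=lambda kv: -kv[1])

# Names below are hypothetical.
ledger = ConsumptionLedger()
ledger.record("crm", "renewals-agent", "sync-accounts", tokens=1200)
ledger.record("crm", "renewals-agent", "sync-accounts", tokens=900)
ledger.record("llm", "support-agent", "triage", tokens=4000)

for key, n in ledger.report():
    print(key, n, "calls,", ledger.tokens[key], "tokens")
```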

Where is your abstraction layer? A thin seam between your business logic and any single vendor's API. Swapping the implementation behind it should be a sprint, not a programme. If it would take six months to change your system of record, the seam isn't there.
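The seam itself can be thin. A sketch, with stubbed implementations and hypothetical names: business logic depends on an interface, and each vendor sits behind an adapter. Swapping the system of record becomes a constructor change.

```python
from typing import Protocol

class SystemOfRecord(Protocol):
    """The seam: business logic depends on this, never on a vendor SDK."""
    def get_account(self, account_id: str) -> dict: ...
    def log_interaction(self, account_id: str, note: str) -> None: ...

class SalesforceRecord:
    """Adapter wrapping the vendor API (stubbed here)."""
    def get_account(self, account_id: str) -> dict:
        return {"id": account_id, "source": "salesforce"}
    def log_interaction(self, account_id: str, note: str) -> None:
        pass  # would call the vendor API

class InHouseRecord:
    """The exit option, behind the same seam."""
    def __init__(self):
        self._notes: dict[str, list[str]] = {}
    def get_account(self, account_id: str) -> dict:
        return {"id": account_id, "source": "in-house"}
    def log_interaction(self, account_id: str, note: str) -> None:
        self._notes.setdefault(account_id, []).append(note)

def renewal_summary(records: SystemOfRecord, account_id: str) -> str:
    """Business logic sees only the seam, never a vendor."""
    account = records.get_account(account_id)
    return f"renewal check for {account['id']} via {account['source']}"

print(renewal_summary(SalesforceRecord(), "A-100"))
print(renewal_summary(InHouseRecord(), "A-100"))
```

The real cost is the discipline of never calling the vendor SDK from anywhere except the adapter. The code is the easy part.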

Can you answer the exit question? If your primary platform doubled its prices tomorrow, what would you do? If the answer is "we'd be in serious trouble" — that is the answer. Not a reason to leave. A measure of how much leverage you have ceded.

On abstraction layers in practice

The model layer has good tooling for this already. LiteLLM and OpenRouter let you route between AI providers with a configuration change, not a migration. The same principle applies at the platform layer — an adapter between your domain logic and your CRM API means the platform is an implementation detail, not a foundation. Few teams build this from the start. Most regret it when they need to.
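The routing principle those tools implement can be shown in a few lines. This is a toy sketch, not LiteLLM's or OpenRouter's actual API: callers see one `complete` function, and which provider answers is a single line of configuration.

```python
# Toy illustration of config-driven routing; provider names are hypothetical.
ROUTES = {
    "default": "provider_a",  # change this one line to migrate providers
}

def provider_a(prompt: str) -> str:
    return f"[provider-a] {prompt}"

def provider_b(prompt: str) -> str:
    return f"[provider-b] {prompt}"

PROVIDERS = {"provider_a": provider_a, "provider_b": provider_b}

def complete(prompt: str, route: str = "default") -> str:
    """Callers depend on this function, not on any provider."""
    return PROVIDERS[ROUTES[route]](prompt)

print(complete("summarise this account"))
```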

The gravity well is not a trap. Gravity wells are where the resources are. The ecosystem, the integrations, the compliance posture, the talent pool — these are real advantages, and they are real reasons to be inside one.

The question is not whether to enter. It is whether you are choosing to — with eyes open to what the gravity means at scale, and with the architecture that lets you leave if you need to.

The ability to leave is what keeps the relationship honest.

You don't have to leave. But you should be able to.


This essay is part of a series on what happens when AI becomes the infrastructure organisations depend on. What Happens When AI Becomes Load-Bearing looks at the model layer. What Survives the Reorg looks at the human infrastructure layer. The New Moat looks at what you build when you get the architecture right.


References and further reading

  1. Salesforce debuts Headless 360 agentic platform — The Register, April 2026
  2. Data Gravity in the Clouds — Dave McCrory, 2010 (archived)
  3. Claude Opus 4.7 tokeniser shift — what it means for enterprise AI costs — my LinkedIn post, April 2026
  4. Why Perfect AI Demos Become Failed AI Systems — kinarey.com