When to kill an AI pilot
Your AI pilot is not failing slowly. It is failing expensively. And the longer you wait, the more it costs you in capital, credibility, and strategic capacity.
The room where nobody says it
I have sat in this meeting more times than I can count.
The quarterly review. The slide deck. Fourteen pages of "progress" on an AI pilot that launched nine months ago. There is a timeline. There are workstream updates. There is a section called "next steps" that looks suspiciously like the same next steps from last quarter.
Somewhere around slide eleven, the room goes quiet. Not because the presentation is compelling. Because everyone already knows the truth, and nobody wants to say it out loud.
This pilot should be dead.
The person who championed it does not want to admit it has stalled. The technology team does not want to be blamed. The board sponsor does not want to explain why the investment has not returned. And the CEO — who has the authority to make the call — is waiting for someone else to say it first.
That silence is expensive.
AI pilots are spreading faster than executive oversight can absorb them. That is not innovation. That is unmanaged capital risk. Every month that pilot stays alive without measurable P&L impact, it is consuming budget, team capacity, and — most critically — the organisation's willingness to make hard decisions about the next initiative.
I have watched this pattern destroy strategic momentum in organisations that had every right to win.
The worst part is not the money. The worst part is what it does to the organisation's decision-making culture. When a leadership team cannot kill a pilot that everyone privately knows is failing, it sends a signal through the entire organisation: we do not make hard calls here. We wait. We defer. We dress up stagnation as progress and hope the problem resolves itself.
It never resolves itself.
Sunk cost is not a strategy
Here is what I hear when I ask why a stalled pilot is still funded.
"We have already invested so much."
"The team has made real progress."
"We just need another quarter."
"The vendor says we are close."
Every one of those sentences is a symptom of the same disease: sunk cost psychology dressed up as strategic patience.
Continued funding without ROI clarity is not patience. It is capital waste. The longer you frame it as patience, the more expensive the eventual reckoning becomes.
It is all too easy to duplicate entire runtimes, integrations, deployments, and technical scaffolding, and to over-engineer long before ROI has been proven. I see this constantly: organisations building production-grade infrastructure for pilots that have not yet demonstrated they deserve to exist past month six.
The political dynamics compound the problem. The champion has their reputation attached to it. The delivery team has their career trajectory linked to it. The vendor has their renewal tied to it. None of these people have a natural incentive to say "kill it."
That is precisely why the CEO must.
This is a leadership decision, not a technology decision. Nobody below the CEO has the political capital to make the kill call without it feeling like blame. And the longer the CEO waits, the higher the cost — not just in capital, but in the credibility of every subsequent AI investment decision.
I have seen organisations where the failure to kill one pilot poisoned the well for three years of AI investment. The board remembered. "Last time we backed an AI initiative, it ran for eighteen months and delivered nothing." That is not a technology failure. That is a leadership failure. And it started the moment someone decided that patience was a substitute for evidence.
What kill discipline actually looks like
Killing a pilot is not an admission of failure. It is capital discipline.
The same muscle that separates good investors from bad ones. Good investors cut losses early and reallocate to higher-conviction positions. Bad investors hold losing positions because selling would mean admitting the original thesis was wrong.
In my 26 years in technology, I have seen every major cycle produce this exact dynamic. The dot-com boom. Enterprise software. Cloud migration. Digital transformation. Now AI.
The discipline has not changed. Deciding what to keep, kill, or upgrade has always been essential. The AI cycle just demands that it happen faster.
The organisations that built durable advantage in each of those cycles were not the ones that started the most initiatives. They were the ones that killed the wrong ones earliest. They preserved capital. They preserved team energy. They preserved the organisational credibility needed to make the next bet with conviction.
The question is not really what to keep and what to kill. The question is where to allocate the capital and resources you have.
Kill discipline is allocation discipline. Allocation discipline is how you win.
I learned this the hard way. Early in my career, I watched a 60-person agency grow fast and look successful on the surface while margin leaked, complexity multiplied, and delivery became dependent on heroics. We kept projects alive that should have been cut. We told ourselves we were being resilient. We were being undisciplined. The difference between those two things is the difference between organisations that compound advantage and organisations that compound fragility.
The AI cycle is producing the same dynamic at ten times the speed.
Seven questions before you continue funding
I use seven questions when evaluating whether an AI initiative deserves continued investment. These are not an assessment framework. They are a kill signal.
If you cannot answer these clearly after six months of a pilot running, the pilot has already answered the question for you.
1. What is the measurable ROI?
Not projected. Not modelled. Measurable. If the pilot has been running for six months and the answer is still "we are building toward ROI," that is not a pilot. That is a research project being funded as a business initiative.
2. What is the time to ROI?
If nobody can put a credible number on when this pilot reaches commercial impact, you do not have a timeline. You have hope. Hope is not a strategy the board will accept twice.
3. Is this reusing existing patterns?
Good AI initiatives build on patterns the organisation has already proven. If this pilot is inventing everything from scratch — new data pipelines, new integration patterns, new governance models — the risk profile is exponentially higher than anyone has admitted.
4. Is this duplicating integration logic?
Large organisations are accumulating a sprawling inventory of half-finished ideas from different departments: partial chatbots, partial skills, agents spreading like wildfire across the organisation. If your pilot is rebuilding integration logic that another team has already built, or worse, that a vendor has already sold you, you are paying twice for the same plumbing.
5. Is the data governed?
If the pilot is running on data with no clear ownership, no quality assurance, and no governance framework, you are building on sand. This is not a technical concern. It is a commercial risk that compounds with every month of operation.
6. Is it explainable?
Can the pilot sponsor explain what this initiative does, why it matters commercially, and how it connects to strategic priorities — in plain language — to the board? If the answer requires a technical translator, the commercial linkage is missing.
7. Who owns the commercial outcome?
Not who owns the project. Not who owns the technology. Who owns the commercial outcome? If that question produces a pause, or a committee, or "it is a shared responsibility," you have your answer. Nobody owns it. What nobody owns, nobody kills. And what nobody kills, nobody scales.
These seven questions are not a rescue plan. If a pilot fails on three or more of them, it is not a pilot that needs fixing. It is a pilot that needs killing. The questions exist to make the kill decision defensible — to give you the evidence you need to walk into a board meeting and say, with clarity, "We are stopping this, and here is why."
That is not a failure speech. That is a capital discipline speech. And every board worth sitting on will respect it.
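If it helps to see the kill rule written down, here is a minimal sketch in Python. The seven questions and the three-failure threshold come directly from the framework above; the class name, field names, and verdict wording are illustrative, not a prescribed tool.

```python
from dataclasses import dataclass, fields

# Illustrative sketch only: each field is a pass/fail answer to one of
# the seven questions above. Field names are shorthand, not a standard.
@dataclass
class KillSignal:
    measurable_roi: bool        # 1. Is ROI measured, not projected?
    credible_time_to_roi: bool  # 2. Is there a credible time to ROI?
    reuses_patterns: bool       # 3. Does it reuse proven patterns?
    no_duplicate_plumbing: bool # 4. Is it free of duplicated integration logic?
    governed_data: bool         # 5. Is the data governed?
    explainable: bool           # 6. Can the sponsor explain it in plain language?
    single_outcome_owner: bool  # 7. Does one person own the commercial outcome?

    def failures(self) -> list[str]:
        # Collect the names of every question the pilot fails.
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def verdict(self) -> str:
        # The rule from the text: three or more failures means kill, not fix.
        return "kill" if len(self.failures()) >= 3 else "continue, with conditions"
```

The point is not the code. It is that once the answers are honest, the verdict is mechanical; the only judgment call left is whether you are willing to answer honestly.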
The cost of not killing
Strategic capacity is finite.
I do not mean that theoretically. I mean it in a brutally practical sense. Your organisation has a limited number of people who can think strategically about AI. A limited budget. A limited amount of executive attention. A limited reserve of political capital for bold bets.
Every pilot you keep alive that should be dead is consuming all four.
One stalled pilot is manageable. The organisation absorbs it. People work around it.
Five stalled pilots changes the equation entirely. Your best people are spread across initiatives going nowhere. Your budget is fragmented across bets that have not proven themselves. Your executive team is spending meeting time on status updates instead of strategy. Your organisation's appetite for new AI initiatives — the ones that might actually create advantage — is diminishing because everyone is exhausted from the ones that are not working.
Ten stalled pilots is not a portfolio problem. It is an organisation that has lost the ability to make decisions.
That is far more expensive than any single failed initiative.
Keep, extend, merge, kill. These decisions need to happen faster. The AI cycle is not slowing down to wait for your quarterly review cadence.
And here is the part that nobody tells you. The cost is not just operational. It is reputational. When a CEO presents an AI strategy to the board and half the portfolio is stalled, the board does not see a technology problem. They see a judgement problem. They see a leader who cannot distinguish between conviction and stubbornness. Between strategic patience and strategic avoidance.
That perception is career-grade damage. And it is entirely preventable.
Make the call
If you have read this far, you already know which pilot I am talking about.
The one that comes up in every leadership meeting. The one where the update is always about what is coming next, never about what has been delivered. The one where the team is talented and committed and working hard — and none of that changes the fact that it has not demonstrated commercial value.
Killing it does not mean the work was wasted. It means you have learned something, and you are choosing to act on what you have learned rather than pretend you have not learned it.
Kill discipline is a strategic advantage. The organisations that will win in the AI era are not the ones that start the most pilots. They are the ones that kill the wrong ones fastest and reallocate to the bets that deserve their capital, their people, and their conviction.
You have the authority. You have the information. You probably have the instinct.
Make the call.