How to prioritise AI initiatives
Your AI initiative list is not a strategy. It is a symptom. And the longer it grows unchecked, the less likely any single initiative on it is to deliver the advantage you are paying for.
The spreadsheet that pretends to be a plan
I see the same artefact in almost every mid-market organisation I work with.
A spreadsheet. Sometimes it lives in Jira. Sometimes it is a Notion board or a slide buried in an IT governance deck. The shape is always the same. Twenty to thirty AI initiatives, submitted by a dozen different departments, each marked as high priority. Each with a sponsor. Each with a business case that sounds reasonable in isolation.
Customer service wants a chatbot. Finance wants automated reconciliation. Marketing wants content generation. Operations wants predictive maintenance. HR wants an AI screening tool. Sales wants a lead scoring model. The CTO's team wants to build an internal copilot. And someone in procurement heard about an agent framework at a conference and now wants to pilot that too.
Every one of these sounds defensible. Every one has a champion who can articulate why theirs matters.
That is exactly the problem.
When everything is priority one, nothing is priority one.
What I am looking at is not a strategy. It is a registry. A collection of ambitions accumulated through enthusiasm and internal politics, with no sequencing logic connecting them to each other or to the commercial priorities that should govern where capital goes.
Organisations everywhere are now accumulating vast registries of ideas from different departments. Half-built chatbots. Half-scoped skills. Agents spreading like wildfire. Nobody is deciding the order. Nobody is asking whether initiative fourteen should exist when initiative three has not yet proven its commercial value.
This is the most common failure mode I see in mid-market AI programmes. Not bad ideas. Not incompetent teams. Fragmentation. Too many initiatives running in parallel, each consuming a slice of budget, a fraction of technical capacity, and a portion of executive attention — without anyone having decided what order they should run in, or whether they should run at all.
Why more activity does not equal more progress
There is a persistent myth in AI strategy that more activity equals more progress.
That myth does not hold. More activity, without sequencing discipline, equals more drag.
I have watched this pattern repeat across every major technology cycle in my 26 years in this industry. The dot-com era. Enterprise software. Cloud migration. Each cycle produced the same organisational reflex: start everything, prioritise nothing, hope that volume substitutes for strategy.
It never does.
With the AI cycle accelerating everything, deciding what to keep, kill, or upgrade becomes essential. The concept has not changed. It just needs to happen faster. Much faster. Because the cost of fragmentation in an AI programme is not linear. It compounds.
Here is what fragmentation actually looks like in practice.
Eight teams, each 20% committed to one of eight different AI projects. None of those projects has the resource density to reach a meaningful milestone within a credible timeframe. Each team is context-switching between their core responsibilities and their AI initiative. Each project has its own vendor conversations, its own data requirements, its own integration dependencies. The total organisational effort is enormous. The total output is marginal.
Now compare that with two teams, fully committed to two initiatives, sequenced so that the first creates a foundation the second can build on. Same total resource investment. Radically different outcome.
The difference is not talent. It is not budget. It is sequencing.
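The arithmetic behind that comparison can be made concrete. This is a minimal sketch, not a measured model: the 15-point context-switching penalty is an assumed figure for illustration only, but it shows why eight thin slices of effort do not add up to the same progress as two full commitments.

```python
# Illustrative arithmetic only: SWITCH_PENALTY is an assumed number for
# the sketch, not a measured constant.

SWITCH_PENALTY = 0.15  # assumed drag on any team splitting its attention


def effective_effort(commitment: float) -> float:
    """Effort that actually advances a project once context-switching
    overhead is subtracted from the committed fraction."""
    overhead = SWITCH_PENALTY if commitment < 1.0 else 0.0
    return max(commitment - overhead, 0.0)


# Eight teams, each 20% committed to one of eight parallel projects:
fragmented = effective_effort(0.20)  # per project
# Two teams, each fully committed to one of two sequenced projects:
sequenced = effective_effort(1.00)   # per project

print(f"Fragmented portfolio: {fragmented:.0%} effective effort per project")
print(f"Sequenced portfolio:  {sequenced:.0%} effective effort per project")
```

Under even a modest switching penalty, the fragmented portfolio's 20% slices collapse to a few points of real progress per project, while the sequenced teams keep their full effort. The exact penalty is debatable; the direction of the effect is not.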
Here is the part that should make you uncomfortable. The organisation running eight parallel initiatives feels busier. It generates more status updates. It fills more meeting agendas. It looks, to a board that measures activity, like it is taking AI seriously.
The organisation running two sequenced initiatives looks, from the outside, like it is moving slowly.
Until twelve months later, when one has two working AI capabilities generating measurable commercial value and the other has eight half-built prototypes generating PowerPoint.
Sequencing is the strategy
This is the argument I make to every leadership team I work with. It is the one that meets the most resistance.
Sequencing discipline is not an operational nicety. It is not a project management concern. It is the competitive advantage itself.
It is not so much about deciding what to keep and what to kill. It is more about deciding where to allocate the capital and resources that you have. That distinction matters. Prioritisation is not a negative exercise. It is not about saying no. It is about saying "this one first, that one second, this one not yet, and that one probably never."
The sequencing logic that separates effective AI programmes from expensive failures comes down to three questions, asked in order.
1. Commercial linkage. Does this initiative connect directly to a measurable commercial outcome?
- Test: A specific, identifiable impact on revenue, margin, or competitive position — not a theoretical benefit, not an efficiency gain that might eventually show up somewhere.
- If unclear: The initiative does not get sequenced. It gets parked.
2. Complexity. What is the real delivery complexity?
- Test: The honest assessment of data dependencies, integration requirements, change management overhead, and organisational readiness — not the vendor's estimate, not the optimistic timeline from the team that wants to build it.
- If high complexity + weak commercial linkage: Clear signal to defer.
3. Capacity. Does the organisation actually have the people, systems, and executive attention to execute this initiative properly?
- Test: Resource density required to reach a meaningful milestone within a credible timeframe — not alongside seven other things.
- If no: The initiative cannot succeed regardless of its merit.
Those three criteria, applied honestly, will cut most initiative lists in half.
That is exactly the point.
I am not offering a scoring matrix here. I am not building a prioritisation template. Those are execution tools, and they are only useful once you have made the harder decision: that sequencing discipline matters more than initiative volume.
That is a leadership posture, not a spreadsheet exercise. It is the decision most organisations are avoiding.
What good sequencing actually produces
When sequencing discipline is working, you can see it.
The COO spends less time in status meetings because there are fewer initiatives to track, and the ones that remain have clear ownership and measurable milestones. The technology team is focused, not fragmented. Vendor conversations are purposeful rather than exploratory. The board receives updates on progress, not activity.
Sequencing forces those kinds of decisions to be worked through. Deciding how to build, not just deciding what to build.
I have worked with organisations on both sides of this divide. The sequenced ones make faster keep, extend, merge, kill decisions. They evaluate an initiative against clear criteria and act on that evaluation within weeks, not quarters. When something is not working, they stop it. When something is working, they fund it properly. When two initiatives overlap, they merge them before the duplication becomes structural.
The fragmented ones do the opposite. Every decision takes longer because there is no logic for comparison. Every kill decision feels political because there is no agreed basis for why one initiative matters more than another. Every new request from a department gets added to the list because saying no without sequencing criteria feels arbitrary.
The fragmented AI programme does not just waste money. It wastes something more valuable: the organisation's decision-making speed. In a cycle where the technology is moving this fast, decision speed is the scarcest resource you have.
Why prioritisation is so difficult
Here is why prioritisation is difficult, and why most organisations avoid it.
Every initiative on that list has a person behind it. A sponsor who argued for it in a leadership meeting. A team that has already started scoping it. A vendor who has already been engaged. A narrative about why this particular initiative matters.
Deprioritising is not abstract. It is a conversation with a person who believed in something and is now being told it does not rank highly enough.
That is uncomfortable. It should be.
But the alternative is worse.
The strategic cost of fragmentation is not visible on a quarterly dashboard. It shows up eighteen months later when the organisation has spent significant capital on AI and cannot point to a single initiative that changed its competitive position. When the board asks "what did we get for this?" and the honest answer is "activity, not advantage."
I have seen this outcome enough times to be blunt about it.
If your organisation is running more than three to five AI initiatives simultaneously and none of them has reached measurable commercial impact, you do not have a prioritisation problem. You have a sequencing failure. The longer you wait to address it, the more expensive the correction becomes.
Not because the initiatives are bad. Because the fragmentation is compounding. Every month that passes with eight initiatives competing for the same pool of technical capacity, executive attention, and change management bandwidth is a month where none of them gets what it needs to succeed.
The arithmetic is uncomfortable, but it is clear. Fewer initiatives, properly sequenced, properly resourced, with clear commercial linkage, will outperform a scattered portfolio every time.
Not sometimes. Every time.
Can you name the order your AI initiatives should run?
If you are a CEO or COO reading this, there is one question that tells you whether your AI programme is sequenced or scattered.
Can you name, right now, the order in which your AI initiatives should run?
Not a list of everything you are doing. The order. First this, then this, then this. With a clear reason for why the first one comes first, and a clear logic connecting each initiative to the one that follows it.
If the answer is yes, you have sequencing discipline. Protect it.
If the answer is no, the problem is not your initiatives. It is the absence of the logic that should govern them. And until that logic exists, every initiative you add makes the problem worse, not better.
Sequencing is not the easy part of AI strategy. It is the hard part. The part that requires you to disappoint people, defer ambitions, and make calls that not everyone will agree with.
It is also the part that determines whether your AI investment creates durable advantage or expensive activity.
That is not a comfortable position to sit with.
But comfort was never the point.