You have made the investment. Your board has heard the AI strategy. Your Chief AI Officer is hired, your budget is allocated, and initiatives are underway across the organization. You are not standing still. You are moving, and moving with conviction.
And yet something is not working the way it should.
The pilots are promising but they are not scaling. The individual productivity gains are real but they are not showing up in the enterprise P&L. Different parts of the organization are experimenting with AI tools but those experiments are not connecting to each other or to a coherent business outcome. Your competitors appear to be moving just as fast, or faster, and the differentiation you expected from your AI investment has not materialized.
You are not alone in this experience. And the reason it is happening is not that your team is underperforming or that the technology is overhyped. The reason is that the approach most organizations are taking, including very likely yours, is fundamentally backwards.
The Numbers Tell an Uncomfortable Story
The scale of AI investment globally is staggering. According to Gartner, approximately $1.5 trillion was spent on AI last year. McKinsey's most recent global survey found that 88% of organizations now use AI in at least one business function, up from 78% just one year prior. The commitment is real and it is widespread.
But the results are not matching the investment. According to McKinsey, nearly two-thirds of organizations have not yet scaled their AI programs beyond pilots and experiments. Only about 6% of organizations report AI contributing more than 5% to their earnings, McKinsey's threshold for what they consider an AI high performer. Boston Consulting Group's research is equally stark: 60% of organizations generate no material value from their AI investments, and only 5% create substantial value at scale.
These are not organizations that failed to invest. These are organizations that invested heavily and are not seeing the return.
The most telling data point comes from McKinsey's analysis of what actually differentiates the 6% who are succeeding from the rest. Out of 25 organizational attributes they tested, one factor had the single largest effect on whether AI investments translated to bottom-line impact. It was not the sophistication of the models. It was not the size of the data estate. It was not the scale of the technology budget.
It was whether the organization had fundamentally redesigned its workflows.
And according to McKinsey, only 21% of organizations have done that.
The Diagnosis: Technology-First Thinking in a Transformation-First Problem
There is a pattern playing out in enterprises across every industry. It looks like progress. It feels like progress. But it is not producing the outcomes it should.
The pattern goes like this. The organization acquires exciting AI capabilities. Leadership asks where those capabilities can be applied. Teams identify use cases. Pilots get funded. Some succeed. The organization declares that AI is working and funds more pilots. A technology roadmap gets built, a timeline of AI tools and platforms to deploy across the business. The roadmap looks strategic because it is sequenced and budgeted. But it starts from the wrong place.
It starts from the technology and works backward to the business.
This is the hammer-looking-for-nails dynamic, and it is the dominant pattern in enterprise AI adoption today. BCG calls it the "imagination gap": the inability of leadership teams to see beyond incremental technology application to genuinely transformative business change. McKinsey describes the same phenomenon differently but arrives at the same conclusion: what they are observing is the biggest and most complex business transformation they have seen, yet it is 80% business transformation and only 20% technology transformation. That is different from how most people have thought about it.
80% business transformation. 20% technology.
Most organizations are inverting that ratio. They are spending the majority of their energy, attention, and executive bandwidth on the 20% (selecting tools, running pilots, building technology roadmaps) while underinvesting in the 80% that actually determines whether AI creates value.
BCG quantifies this in a framework they call the 10-20-70 rule: 10% of the effort should go to algorithms, 20% to technology and data, and 70% to people, processes, and cultural transformation. The organizations that are capturing real value from AI follow this ratio. The organizations that are stuck in pilot mode have it reversed.
Why This Pattern Persists, and Why It Is Dangerous
If the technology-first approach is not working, why does every organization default to it?
Three reasons.
First, it is familiar. Enterprise technology adoption has followed the same pattern for decades. You buy the software (SAP, Oracle, Salesforce, whatever the era demands) and then you do process redesign work to conform the organization to how the software operates. The technology dictates the process. Entire consulting practices were built around this model. It worked, more or less, because the software was rigid and the organization had to adapt to it.
AI is fundamentally different. AI technologies are dramatically more flexible and adaptable than any previous generation of enterprise software. You are no longer constrained by rigid configurations that force a specific workflow. You can build and configure AI capabilities around your specific business needs. MIT Sloan Management Review captured this shift precisely in "Want AI-Driven Productivity? Redesign Work," arguing that organizations must take a "work-backward" approach rather than a "tech-forward" approach: instead of asking how technology can be applied to existing jobs and processes, leaders must redesign work first and then deploy technology that enables the new design.
But most organizations have not made this mental shift. They are applying the old model (technology first, process redesign second) to a technology that does not require it and does not reward it.
Second, technology-first feels faster. Deploying an AI tool and demonstrating a productivity gain in a pilot can happen in weeks. Redesigning workflows across a business function takes months. Under pressure to show results to the board and to keep pace with competitors, most leadership teams choose the path that produces visible activity quickly, even if that activity does not compound into enterprise value.
Third, and most importantly, the technology-first approach still produces improvement. AI is powerful enough that even when poorly deployed (layered on top of unredesigned processes, disconnected from strategy, running in isolated pilots), it generates some gains. People work a little faster. Reports get produced more efficiently. Customer inquiries get triaged more quickly. These gains are real and they are visible. They create the impression that the approach is working.
But they are not compounding. They are not connecting to each other. They are not producing the enterprise-level transformation that the investment was supposed to deliver. Deloitte's research captures this precisely: value comes from process redesign, not process automation. As one of Deloitte's technology leaders put it, if you just take your existing workflow and try to apply advanced AI to it, you are going to weaponize inefficiency.
And here is the competitive danger: the organization that deploys AI on top of unredesigned processes will always be behind the organization that redesigns first. Both will improve. But the one that built the right foundation will improve at an accelerating rate, because every AI capability added is multiplied by the well-designed workflow underneath it. The gap compounds over time. By Year 2 and Year 3, these are not competitors in the same race. They are operating on different planes entirely.
The Alternative: A Framework That Starts From the Business
If technology-first is the wrong starting point, what is the right one?
The answer is a five-level architecture that begins with business strategy and works down through transformation to technology enablement. Each level feeds the next. Skipping a level, or starting in the middle, is how organizations end up with the pilot proliferation and scaling failures the data describes.
Level 1: Strategy (The Art of the Possible)
The starting point is not an AI roadmap. It is business strategy, informed by what AI now makes possible.
This requires a partnership that most organizations have not yet formed. The CEO, the Chief Strategy Officer, and the Chief AI Officer need to be in the same room, working on the same problem: where is this business going, and how does AI change what is achievable?
There is a productive tension at the center of this partnership. The CEO and CSO cannot write effective strategy without understanding what AI now enables; capabilities that were impossible or prohibitively expensive even three years ago are now achievable. But the CAIO cannot deploy AI effectively without business strategy driving the decisions about where and how to deploy it. Each needs what the other knows.
The resolution is co-creation. The CSO brings the competitive landscape, the market dynamics, the growth opportunities, and the strategic context of how the business makes money and where it is vulnerable. The CAIO brings an understanding of what AI makes possible, not at a theoretical level, but specifically for this business, this industry, these competitive dynamics. The CEO brings the authority to commit resources and the judgment to set ambition at the right level.
Together, they produce business strategy that accounts for what AI makes possible. Not an AI strategy. Not a technology roadmap. A business strategy, with the full weight of competitive analysis, market intelligence, and strategic rigor that the word implies, informed by capabilities that did not exist when the last strategy was written.
The CAIO's first and most important job in this partnership is not building a technology plan. It is expanding the strategic imagination of the leadership team. Showing the CEO and CSO what is now achievable changes the boundaries of what the organization considers strategically possible. That expanded imagination is what produces bolder, more differentiated strategy.
Level 2: Business Transformation Imperatives
Strategy, once set, must decompose into action. But the correct unit of action is not a technology project. It is a Business Transformation Imperative: a specific, strategically derived program that transforms how the business operates in a defined domain.
Not "deploy a large language model in customer service." Instead: "fundamentally redesign how we engage customers so that resolution quality increases while cost-per-interaction decreases by 40%." Not "implement AI in the supply chain." Instead: "build a predictive supply chain capability that identifies disruptions 72 hours before they impact production."
The difference is not semantic. The first framing starts with technology and hopes it produces a business outcome. The second starts with a business outcome and determines what capability, including but not limited to AI, is required to achieve it.
These imperatives are prioritized, resourced, and managed as a portfolio. Not everything can happen simultaneously. Some imperatives are funded now, some are sequenced for later, some are deferred. The portfolio is owned by the business (the CEO, CSO, and CAIO together), not by the technology organization. And it is backed by a genuine resource reallocation commitment, because a strategy that does not change where money and people go is not actually a strategy.
Level 3: Capability Pathways and Workflow Transformation (The 70%)
This is where business transformation begins in earnest. And this is where 70% of the effort should be concentrated.
Each Business Transformation Imperative from Level 2 gets assigned to a domain leader, typically a C-suite executive or senior vice president, who is chartered with delivering it. Their first task is to decompose the imperative into the specific capabilities their organization needs. What must we be able to do that we cannot do today?
But before those leaders can design new ways of working, they need to understand what AI makes possible in their specific domain. The same productive tension that existed at Level 1 between the CEO, CSO, and CAIO now cascades down through the organization. Department leaders, their directors, their senior managers: everyone who will be responsible for operating the new workflows needs substantive education on what is achievable. Not a briefing deck. Not a vendor demo. A working engagement where they experience what is possible and develop enough fluency to make informed design decisions about their own operations.
Then comes the core of the 70%: redesigning workflows for how work should actually be done in an AI-enabled world. This is not process redesign in the traditional sense, conforming the business to how the technology works. This is the opposite. It is designing how humans and AI collaborate to produce the best possible outcomes, and then selecting technology that conforms to that design.
This workflow redesign is inseparable from redesigning jobs. When workflows change, every person's role, responsibilities, skills, and performance metrics change with them. The supporting infrastructure (training, tools, job aids, support systems) has to be rebuilt. And because multiple departments are redesigning concurrently, enterprise-wide coordination is required to ensure that one department's redesigned workflow connects properly to the departments upstream and downstream of it.
This is hard work. It is the hardest work in the entire framework. It is also the work that determines whether AI investments pay off.
Level 4: AI Enablement and Iteration
With workflows redesigned and capabilities defined, AI tool selection becomes dramatically clearer. You are no longer asking "what AI tools should we buy?" You are asking "what AI tools enable the specific workflows we have already designed?" The path from need to solution is direct because the need has been precisely defined.
Implementation follows, but it does not end at deployment. The first deployment is Version 1.0, not the final state. Teams that have never worked in AI-enabled workflows will encounter gaps between the design and the reality. Handoffs between humans and AI agents will need tuning. Orchestration patterns will need adjustment. The iteration cycles are planned and budgeted, not treated as failures. Each cycle builds organizational muscle that makes the next cycle faster and more effective.
Level 5: Continuous Transformation
The framework does not end with a completed transformation. It transitions into a permanent operating capability. As the workforce develops fluency and confidence with AI-enabled work, and as AI tools continue to advance in capability, the organization enters a state of continuous self-optimization.
Individual employees begin identifying improvement opportunities themselves. The coordination and governance overhead that initially required massive human effort begins shifting to AI-orchestrated systems. The feedback loop from Level 5 back to Level 1 means that new AI capabilities continuously create new strategic possibilities, which generate new Business Transformation Imperatives, new capability pathways, new workflow redesigns. Each cycle runs faster than the last.
Transformation becomes the organization's permanent state, not a periodic program imposed from above, but a continuous capability embedded in how the organization operates.
The Compounding Effect: Why Sequence Determines Destiny
The organizations that follow this sequence (strategy, then imperatives, then workflow redesign, then AI enablement) build something fundamentally different from the organizations that start with AI tools and work backward.
The difference is compounding.
When a workflow is redesigned before AI is deployed, every AI capability added to that workflow multiplies the value of the design. When the AI tool is improved or upgraded, the well-designed workflow captures more of that improvement. When adjacent workflows are redesigned and connected, the integration is clean because the design anticipated it.
When AI is deployed on top of an unredesigned workflow, the opposite dynamics take hold. The AI automates an inefficient process, locking in the inefficiency. Upgrading the AI tool produces marginal improvement because the bottleneck is the process, not the technology. Connecting to adjacent workflows is painful because neither was designed for integration.
Both organizations invested in AI. Both are seeing improvement. But one is building on a foundation designed for compounding returns. The other is building on a foundation that resists them. As MIT Sloan Management Review argued in "Why AI Will Not Provide Sustainable Competitive Advantage," once AI becomes ubiquitous it will transform economies but will not uniquely benefit any single company, because every competitor will have access to the same technology. The differentiator is not the AI. It is the transformation underneath it. Over time, and it does not take long, these become fundamentally different organizations competing in the same market. And the gap between them does not close. It widens.
The Honest Assessment: What This Requires
This framework demands things that are genuinely difficult.
It requires the CEO to resist the pressure to deploy technology fast and instead invest in the harder, slower, more valuable work of business transformation. The board will want to see AI wins quickly. The competitive pressure to announce AI initiatives is real. The discipline to say "we are investing in getting the foundation right before we scale" is not easy, but it is the discipline that separates the 6% from the rest.
It requires the CEO to bring the Chief Strategy Officer back to the strategic table as a co-equal partner. When organizations treat AI as a technology initiative, the strategic conversation gets reframed as a technology conversation, and the strategy leader gets displaced. This framework treats AI as a business strategy issue, which is exactly where the CSO's competitive intelligence, market sensing, and strategic framing capability belongs. Bringing the CSO back to the table is not a courtesy. It is a competitive necessity.
It requires the CAIO to redefine their own role, from technology leader building an AI roadmap to strategic imagination partner who helps the CEO and CSO see what is now possible. This is a fundamental shift in how most CAIOs understand their mandate, and it demands skills that the typical technical career path does not develop. But it is the shift that makes the Level 1 partnership productive rather than performative.
This is not a three-month initiative or a single budget cycle. It is a multi-year transformation, but not in the way most people fear. Value is generated in waves, not at the end. The first domains through the transformation produce returns within months while building the playbooks and organizational muscle that make each subsequent wave faster. The organizations that resist the pressure to skip steps in the interest of speed will move faster in Year 2 and Year 3 than the organizations that deployed technology quickly but built on an unredesigned foundation.
And it requires expecting and budgeting for iteration. No organization has done this before at full enterprise scale. The first pass through a redesigned, AI-enabled workflow will not be perfect, not because the people are not good enough, but because this is genuinely new work that no one has practiced. The organizations that plan for iteration, budget for rework, and treat first-cycle friction as learning rather than failure will build the muscle for continuous transformation. The organizations that expect perfection on the first pass will interpret normal learning curves as evidence that the transformation is failing, and they will retreat to the familiar territory of incremental technology deployment, which is exactly how the gap compounds.
The Choice That Determines the Next Decade
Every organization investing in AI right now is making a choice, whether they realize it or not.
One path treats AI as a technology to deploy. It starts with tools, looks for applications, builds a technology roadmap, and measures success in pilots launched and productivity gains captured. It produces real but incremental improvement. Based on McKinsey's finding that only 6% of organizations qualify as AI high performers, this is the path the vast majority are currently on.
The other path treats AI as the catalyst for the most consequential business transformation of this generation. It starts with strategy, decomposes into business transformation imperatives, invests 70% of its effort in redesigning how work actually gets done, and then deploys AI technology into workflows specifically designed to leverage it. It produces compounding returns that accelerate over time. It is the path the 6% are on, and the research from McKinsey, BCG, and Deloitte is unambiguous about the difference in outcomes.
The technology itself is not the differentiator. Every organization has access to the same AI capabilities. The differentiator is the approach: whether you start from the business and work toward the technology, or start from the technology and hope it finds its way to business value.
That choice is being made right now, in how you structure your next board discussion about AI, in whether your CSO is in the room when AI strategy is set, in whether your CAIO is building a technology roadmap or expanding your strategic imagination, and in whether you are measuring success in pilots deployed or in workflows redesigned.
The organizations that get this right will not just outperform their competitors. They will operate at a level their competitors cannot reach by deploying technology faster, because the advantage is structural, not technological, and structural advantages compound in ways that technology investments alone never do.
Frequently Asked Questions
How is this framework different from the AI roadmaps most organizations are already building?
An AI roadmap typically starts with AI capabilities and maps them to potential business applications: technology first, business second. This framework inverts that sequence. It starts with business strategy informed by what AI makes possible, decomposes that strategy into business transformation imperatives, and only brings in AI technology after workflows have been redesigned to leverage it. The difference in outcomes is significant: McKinsey's research shows that workflow redesign is the single largest predictor of whether AI investments translate to bottom-line impact, and only 21% of organizations are doing it.
Does this mean we should stop our current AI initiatives while we build the framework?
No. Existing initiatives that are producing value should continue. But they should be evaluated against the framework to determine whether they are connected to a strategic business outcome and whether they would benefit from workflow redesign. In many cases, current initiatives can be repositioned within the framework rather than stopped. The framework is about changing the direction of future investment and the sequencing of future transformation, not about dismantling work already in progress.
Why does the CEO need to be personally involved at Level 1? Can this be delegated to the CAIO?
The Level 1 partnership requires three things that only the CEO can provide: the authority to commit resources and reallocate them based on the strategy, the judgment to set the right level of ambition for the transformation, and the accountability to the board for business outcomes. A CAIO who builds a strategy without the CEO's involvement builds a strategy without organizational commitment. A CEO who delegates AI strategy to the CAIO without participating in it gets a technology plan disconnected from business intent. The partnership is not optional. It is the mechanism that connects AI's possibilities to business strategy.
Our organization does not have a Chief Strategy Officer. Does the framework still apply?
Yes. The strategic function exists in every organization. It may sit with the CEO directly, with a head of corporate strategy, or with a senior executive who owns strategic planning. The title matters less than the function: someone who brings competitive landscape analysis, market intelligence, and strategic framing to the Level 1 partnership. If that function is currently absent from AI-related strategic discussions, bringing it in is one of the highest-leverage changes the CEO can make.
How long does a full transformation take?
The timeline varies by organization size, complexity, and the number of domains being transformed. But the framework does not require completing all five levels before seeing value. Strategy and Business Transformation Imperatives at Levels 1 and 2 can be established in the first two to three months. The first wave of domains entering Level 3 begins producing value within months as workflows are redesigned and AI is enabled. Each subsequent wave moves faster because the organization has learned from the first. Full enterprise transformation is a multi-year journey, but compounding returns begin early and accelerate throughout.
Ready to move forward?
Let's discuss how your organization can build with AI — securely, strategically, and starting from where you are today.