
AI Governance: The Strategic Framework That Determines What Your Organization Will and Will Not Do with AI

By Shawn Plaster, Founder & CEO, Plaster Group

Article 7 of 27 — Plaster Group's AI Business Transformation Methodology

CEO · CSO · CAIO · Board · CFO · General Counsel · Chief Risk Officer · AI Governance · Level 2
15 min read

This article is part of a 27-article series on the AI Business Transformation Methodology. This piece establishes AI governance as a strategic framework that must be in place before the transformation begins, not a compliance exercise bolted on after deployment.

Figure: Plaster Group Five-Level AI Business Transformation Methodology — Strategy, Transformation Imperatives, Workflow Transformation, AI Enablement, Continuous Transformation, with a feedback loop from Level 5 back to Level 1.

Your organization has done the Level 1 work. The CEO, CSO, and CAIO have co-created a business strategy informed by what AI makes possible. The Level 2 portfolio of Business Transformation Imperatives is taking shape. Domain leaders are about to be chartered. The first wave of transformation is weeks away from activation.

And then someone asks the question that should have been answered before the first imperative was written: what are the rules?

What can AI systems do in this organization, and what can they not do? When an AI agent makes a consequential decision that produces a negative outcome, who is accountable? If the workflow redesign team in finance designs a process where AI autonomously approves transactions below a certain threshold, who approved that level of autonomy, and under what governance framework? When the customer service redesign team gives AI agents the ability to interact directly with customers, what boundaries govern those interactions?

If no one has answered these questions at Level 2, every domain team will answer them independently at Level 3. The finance team will set one standard for AI autonomy. Customer service will set another. Supply chain will set a third. The result is exactly the fragmentation the research documents: inconsistent risk management, uncoordinated compliance approaches, and an expanding surface area for the kind of AI incidents that 51% of organizations have already experienced.1

This is not hypothetical. The governance gap is the most well-documented and least addressed failure in enterprise AI today.

The Governance Gap: What Organizations Are Getting Wrong

The numbers are stark. According to McKinsey's 2026 AI Trust Maturity Survey of approximately 500 organizations, the average responsible AI maturity score sits at just 2.3 out of 4.0. Only about one-third of organizations have reached a maturity level of three or higher in governance. The governance dimension consistently lags behind technical capabilities and data management across every industry and every region studied. Organizations are significantly better at building AI systems than at governing them.2

This gap exists at every level of the organization. Separate research on board governance found that only 39% of Fortune 100 companies have disclosed any form of board oversight of AI, whether through a committee, a director with AI expertise, or an ethics board. Fewer than 25% have board-approved, structured AI policies. And 66% of directors report their boards have "limited to no knowledge or experience" with AI. Nearly one in three say AI does not even appear on their board agendas.3

The consequences of this gap are no longer theoretical. The EY Responsible AI Pulse Survey of 975 C-suite leaders across 21 countries found that 99% of organizations reported financial losses from AI-related risks. Nearly two-thirds lost more than $1 million. These are not hypothetical projections. These are realized losses that have already occurred, across every major industry, driven by governance gaps that the organizations had not addressed.4

What makes this particularly concerning is that the gap is widening as adoption accelerates. The same trust maturity research found that while AI adoption is expanding rapidly, confidence in organizational response to AI incidents has actually declined. Organizations are deploying more AI, encountering more risks, and feeling less prepared to handle them. Active risk mitigation lags behind risk awareness across nearly every AI risk category. The organizations that are moving fastest on deployment are often the ones whose governance has fallen furthest behind.2

A 2025 MIT study quantified the cost of this gap from the other direction: organizations with digitally and AI-savvy boards outperform their peers by 10.9 percentage points in return on equity, while those without fall 3.8% below their industry average.3 Governance is not a drag on performance. The absence of it is.

Why Governance Must Be Established at Level 2, Not Level 4

Most organizations that have governance on their agenda are positioning it as a deployment-phase activity. They plan to build governance controls when they deploy AI systems. This timing is wrong, and the research explains why.

The World Economic Forum's analysis of effective AI governance identifies three milestones that organizations must achieve: conducting an AI maturity assessment, developing a customized AI blueprint, and implementing governance into the operating architecture before driving AI into applications. The sequencing is deliberate. Governance that is built into the architecture from the start provides what the WEF calls "traction for acceleration." Governance that is bolted on after deployment provides friction and fragmentation.5

Consider how this plays out in the methodology. At Level 3, domain teams will redesign workflows that define how humans and AI collaborate. Every one of those workflow designs involves decisions that are fundamentally governance questions. Can the AI system approve this transaction without human review? Under what conditions does the AI agent escalate to a human? What data is the AI system permitted to access? What happens when the AI system produces an output that is wrong? If the governance framework does not exist when these questions arise, the workflow design team has two options: make assumptions (which will be inconsistent across domains) or pause the design work to seek governance guidance (which delays the transformation and frustrates the domain leaders who were chartered to move quickly).

Neither option is acceptable. Both are avoidable. If the governance framework is established at Level 2, the domain owner receives their charter knowing what boundaries exist. The workflow design teams operate within a defined framework rather than improvising one. The quality gatekeepers at the director and senior manager level have governance criteria against which to evaluate the designs their teams produce. The entire Level 3 effort moves faster and more consistently because the rules of the road exist before the journey begins.

BCG's research on agentic AI reaches the same conclusion from a different angle: "freedom within a frame" is the operating principle for organizations deploying AI agents with growing autonomy. Governance establishes the frame, and within that frame, teams have the latitude to innovate. Without the frame, 58% of heavy AI adopters expect they will need a fundamental shift in governance structures within three years. That reactive restructuring is avoidable if the governance framework is proactive. The foundational governance decisions can only be addressed at the most senior levels by key business and risk leaders, the CEO foremost among them.6

This parallels how the methodology treats change management. Article 8 (CM: Communications) establishes the communications strategy before Level 3 execution begins, because waiting until deployment to start communicating creates the information vacuum that breeds fear and resistance. AI governance follows the same logic. Waiting until deployment to establish governance creates the decision vacuum that breeds inconsistency and risk.

Who Owns AI Governance in the Methodology

Governance fails without clear accountability. Research on AI trust maturity found that in many organizations, AI governance is jointly owned, with an average of two leaders sharing responsibility. While shared awareness is healthy, shared ownership without clear primary accountability produces the ambiguity that allows governance gaps to persist.2

In the methodology, AI governance ownership maps to the same organizational chain that governs every other aspect of the transformation:

The Level 1 triad (CEO, CSO, CAIO) establishes the strategic governance principles. They set the organization's AI risk tolerance, define the ethical boundaries, approve the regulatory compliance strategy, and ensure the board has the AI oversight capabilities that the current environment demands. This is not operational governance work. It is strategic leadership that only the C-suite can provide. At larger companies, CEO oversight of AI governance is the single element with the most impact on whether AI investments translate to EBIT.7

The CAIO's Strategic AI Governance function, introduced in Article 5 as one of the five functions within the CAIO's department, operationalizes the framework. This function translates the strategic principles into policies, risk classification methodologies, accountability structures, and oversight processes that can be applied consistently across every domain. The governance function sits within the CAIO's department rather than the CIO's organization because the governance decisions are business decisions (what the organization will and will not do with AI) rather than technology decisions (how the AI systems are technically controlled). Harvard's research on responsible AI governance reinforces this distinction: "A governance mechanism tends to be more valuable than an AI framework. It has to have teeth. There has to be some consequence."8

The CIO's organization implements governance technically at Level 4. Access controls, monitoring systems, audit trails, model validation, security architecture: these are the technical controls that enforce the governance framework the CAIO's department established. The CIO's team does not set governance policy. They implement the technical infrastructure that makes governance enforceable in production systems.

Domain owners at Level 3 operate within the governance boundaries when designing workflows. When a workflow design specifies that an AI agent can take a particular action autonomously, that specification must fall within the governance framework's risk classification and human oversight requirements for that type of action. Directors and senior managers serving as quality gatekeepers evaluate workflow designs against governance criteria as part of their review.

The board provides oversight at the strategic level. The governance research on boards recommends that directors explicitly define which AI topics require full-board discussion (material investments, enterprise-wide AI strategy), which belong in committees (risk frameworks, vendor reviews), and which do not require board discussion (routine operational decisions). Without this specificity, either the board is overwhelmed with operational AI detail or, more commonly, AI oversight falls through the cracks entirely.3

The Four Governance Dimensions a Business Transformation Must Address

The governance framework established at Level 2 does not need to answer every question the organization will ever face about AI. It needs to establish the structure within which those questions get answered consistently. Four dimensions form the foundation.

Risk classification. Not every AI application carries the same risk. An AI agent that generates internal meeting summaries carries fundamentally different risk than one that autonomously approves financial transactions or interacts directly with customers. The governance framework must establish a risk classification methodology that categorizes each AI use case by the severity of potential harm if the system fails, produces biased outputs, or takes an incorrect action.

The external regulatory environment provides useful reference points. The NIST AI Risk Management Framework organizes governance around four functions: Govern, Map, Measure, and Manage, spanning the full AI lifecycle from concept through retirement. The EU AI Act, whose high-risk system requirements take full effect in August 2026, uses a tiered classification that ranges from unacceptable (banned) through high-risk (requiring conformity assessments, risk management systems, and human oversight) to limited and minimal risk. ISO/IEC 42001 provides a certifiable management system standard. These three frameworks are converging into a layered model that Fortune 500 companies operating globally will need to navigate: NIST as the risk management foundation, ISO 42001 for systematic management, and EU AI Act for regulatory compliance.9

The framework does not require the organization to adopt a specific regulatory methodology. It requires the CAIO's governance function to establish a risk classification that is consistent across domains, applied before workflow design begins, and aligned with the regulatory requirements that apply to the organization's specific industries and geographies. Without classification, every AI application receives the same level of oversight (which means either everything is over-governed and innovation stalls, or everything is under-governed and risk accumulates).
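To make the mechanics concrete, here is a deliberately simplified sketch, in Python, of what a risk classification methodology can look like once it is written down. The tier names loosely echo the EU AI Act's structure, but the criteria, field names, and thresholds are illustrative assumptions, not prescriptions from the methodology.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers, loosely mirroring the EU AI Act's structure.

    UNACCEPTABLE uses are screened out before classification ever runs."""
    UNACCEPTABLE = "unacceptable"  # banned outright by the governance framework
    HIGH = "high"                  # mandatory human oversight, conformity review
    LIMITED = "limited"            # transparency obligations, periodic review
    MINIMAL = "minimal"            # standard monitoring only


@dataclass
class UseCase:
    name: str
    acts_autonomously: bool   # takes actions without per-decision human review
    customer_facing: bool     # interacts directly with customers
    consequential: bool       # affects money, employment, safety, or legal rights


def classify(use_case: UseCase) -> RiskTier:
    """Apply the same tiering criteria to every use case, in every domain."""
    if use_case.consequential and use_case.acts_autonomously:
        return RiskTier.HIGH
    if use_case.customer_facing or use_case.consequential:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# An internal meeting-summary bot and an autonomous transaction approver
# land in different tiers, which is the point of classifying at all.
print(classify(UseCase("meeting summaries", False, False, False)).value)    # minimal
print(classify(UseCase("auto-approve payments", True, False, True)).value)  # high
```

The specific rules matter less than the fact that finance, customer service, and supply chain all call the same function, so a given level of autonomy receives the same scrutiny wherever it appears.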

Accountability structure. When an AI system produces a negative outcome, the accountability chain must be clear before the system is deployed, not investigated after the incident. This is a governance challenge that is genuinely new with AI. In traditional enterprise systems, accountability follows the human who made the decision. When an AI agent makes a decision autonomously, or when a multi-agent workflow produces an outcome through a chain of AI-to-AI interactions, the accountability question is structurally different from anything the organization has faced before.

Research on agentic AI highlights the accountability challenge directly: when an agent calls another agent, which makes an API call, which triggers a purchase, who is responsible for a bad outcome?6 The governance framework must define accountability at each level of the organizational chain. The domain owner is accountable for the business outcomes of the AI-enabled workflows in their domain. The directors and senior managers who approved the workflow design through the quality gate are accountable for ensuring the design met governance requirements. The CIO's organization is accountable for the technical reliability and security of the deployed system. The CAIO's governance function is accountable for ensuring the governance framework itself was adequate for the risk. No level of accountability replaces any other. They are layered and complementary.

Human oversight requirements. This is where governance intersects most directly with workflow redesign at Level 3. For each AI-enabled process, the governance framework must define the degree of human oversight required. This is not a binary choice between "human in the loop" and "fully autonomous." It is a spectrum that the MIT Sloan Management Review and BCG joint research describes as "graduated autonomy."

Their 2025 study of more than 2,100 respondents found that while only 10% of organizations have currently handed decision-making powers to AI, respondents expect that figure to reach 35% within three years. Leading organizations are not making a single choice between human control and AI autonomy. They are creating governance structures that can handle permanent ambiguity, deploying both human-in-the-loop and human-out-of-the-loop systems simultaneously depending on risk levels. The governance framework defines which risk classifications require which levels of human oversight, creating a consistent standard that workflow design teams apply during Level 3 rather than inventing their own.10
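The following minimal sketch shows how graduated autonomy can be written down as a single enterprise-wide mapping from risk tier to oversight level. The three oversight levels and the tier names are illustrative assumptions; an actual framework may define more gradations.

```python
from enum import Enum


class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "a human approves every consequential AI action before it executes"
    HUMAN_ON_THE_LOOP = "the AI acts autonomously; humans monitor and can intervene"
    HUMAN_OUT_OF_THE_LOOP = "the AI acts autonomously within hard limits, with periodic audits"


# One enterprise-wide mapping, set by the CAIO's governance function at Level 2,
# so Level 3 workflow teams apply a shared standard instead of inventing their own.
OVERSIGHT_BY_RISK_TIER = {
    "high": Oversight.HUMAN_IN_THE_LOOP,
    "limited": Oversight.HUMAN_ON_THE_LOOP,
    "minimal": Oversight.HUMAN_OUT_OF_THE_LOOP,
}

print(OVERSIGHT_BY_RISK_TIER["high"].value)
```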

Ethical boundaries. Every organization deploying AI at enterprise scale will encounter questions that are not technical, not regulatory, and not operational. They are ethical. When the redesigned workflow could automate a function that currently provides employment to hundreds of people, is that a decision the AI system should trigger, or is it a decision that requires human deliberation at the executive level? When the AI system can predict employee attrition with high accuracy, is it ethical to act on that prediction preemptively? When the AI can personalize customer interactions in ways the customer may not be aware of, where is the line?

These questions connect directly to Article 1's moral case, which established the series' foundational principle: the way you succeed with AI is by investing in your people, not by eliminating them. The governance framework gives that principle operational structure. It defines boundaries that the organization commits to, regardless of what is technically achievable. It establishes the values that the transformation will not compromise, even when compromising them would produce short-term efficiency gains.

How Governance Flows Through the Remaining Levels

Like change management, AI governance is a parallel track that activates at Level 2 and runs through every subsequent level of the methodology, with different activities at each stage.

At Level 3, governance informs the workflow redesign work in three specific ways. First, the risk classification determines what the AI system is permitted to do within each redesigned workflow. A workflow step classified as high-risk under the governance framework requires different design treatment than one classified as minimal risk. Second, the human oversight requirements define the collaboration model between humans and AI in each process. The workflow designer is not deciding from scratch whether a human needs to review AI outputs; the governance framework has already established the criteria. Third, the ethical boundaries dictate what the organization will automate. When a workflow redesign team encounters a process where AI could technically replace human judgment entirely, the governance framework determines whether that is within the organization's ethical commitments.

At Level 4, the CIO's organization translates governance into technical controls. Risk classifications become access control policies. Human oversight requirements become system-level approval workflows. Accountability structures become audit trails and monitoring systems. Ethical boundaries become constraints encoded into the AI system's operating parameters. The governance framework established at Level 2 is the specification that the CIO's technical team implements. Without it, the technical team is building controls based on assumptions rather than an enterprise-wide standard.
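As an illustration of that translation, here is a simplified sketch of a runtime approval gate that turns a risk classification into an enforcement point and an audit trail. The structure, field names, and logging format are hypothetical, not a specification of any particular platform.

```python
import json
import time
from dataclasses import dataclass


@dataclass
class ActionRequest:
    agent_id: str
    action: str
    risk_tier: str  # assigned by the Level 2 classification, not by the agent


def audit(event: str, req: ActionRequest) -> None:
    # Accountability structures become audit trails: every decision is recorded.
    record = {"ts": time.time(), "event": event, "agent": req.agent_id,
              "action": req.action, "tier": req.risk_tier}
    print(json.dumps(record))


def approval_gate(req: ActionRequest) -> bool:
    """Human oversight requirements become system-level approval workflows."""
    if req.risk_tier == "high":
        audit("queued_for_human_review", req)
        return False  # held until a designated human approver releases it
    audit("auto_approved", req)
    return True


approval_gate(ActionRequest("finance-agent-1", "approve_invoice", "high"))
approval_gate(ActionRequest("notes-agent-3", "summarize_meeting", "minimal"))
```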

At Level 5, governance becomes continuous. As AI capabilities evolve and regulatory requirements change, the governance framework itself must evolve. The CAIO's governance function monitors the regulatory landscape, assesses new AI capabilities against existing risk classifications, and updates the framework as needed. The goal at Level 5 is governance that is embedded in how the organization operates rather than imposed from outside it: automated compliance monitoring, real-time risk assessment, and governance practices that are as adaptive as the AI systems they govern.

The Regulatory Landscape: Why Governance Is Increasingly Not Optional

This article is not a legal guide, and the methodology does not prescribe specific regulatory responses. Each organization's legal counsel determines compliance strategy for its specific industries and geographies. But the governance framework must be designed with awareness that the regulatory environment for AI is accelerating faster than most organizations expect.

The EU AI Act, the most comprehensive AI regulation globally, has already begun phased enforcement. Obligations for general-purpose AI model providers took effect in August 2025. High-risk AI system requirements become fully enforceable in August 2026. Non-compliance penalties reach up to 7% of global revenue.9 For Fortune 500 companies operating in European markets, this is not a future concern. It is a current obligation.

In the United States, the regulatory landscape is fragmented but accelerating. The NIST AI RMF, while technically voluntary, is increasingly referenced as an expected baseline by regulators and in procurement processes. Colorado's AI Act offers organizations an affirmative defense, but only if they can demonstrate alignment with the NIST AI RMF or ISO/IEC 42001. State-level laws are emerging in California, Texas, New York, and Illinois, creating a patchwork compliance environment.9

The organizations best positioned to navigate this landscape are those that build governance into their transformation architecture at Level 2 rather than scrambling to retrofit compliance after deployment. The governance framework does not need to resolve every regulatory question. It needs to establish the organizational structure, the risk classification methodology, and the accountability chain that make regulatory compliance achievable across every domain and every deployment.

The Investment Case: Governance as Acceleration, Not Constraint

The most common objection to establishing governance before deployment is that it slows things down. The research says the opposite.

Gartner found that 45% of organizations with high AI maturity keep their AI initiatives live for at least three years, compared with only 20% of lower-maturity peers. The differentiator is governance: not governance in name only, but dedicated structures, leadership accountability, and lifecycle oversight. Projects that persist indicate embedded governance practices rather than one-off pilots that flame out after initial deployment.11

The WEF's analysis frames governance explicitly as a growth strategy: "Governance provides the traction for acceleration while keeping your business on the road. Without good governance, AI initiatives tend to fragment. They get stuck in data silos, incomplete processes, inadequate monitoring, undefined roles, duplication of effort and inefficient use of resources."5

The deployment data reinforces this. Future-built companies deploy AI initiatives in 9-12 months compared to 12-18 months for others, with deployment success rates exceeding 60% versus 12% for lagging organizations.12 That velocity advantage comes from having the trust, clarity, and accountability structures in place before deployment begins, not from moving fast without guardrails.

The governance framework does not ask the organization to slow down. It asks the organization to make the strategic decisions at Level 2 that allow every subsequent level to move faster with confidence, consistency, and the organizational trust that sustains transformation over the multi-year timeframe it requires.

Sources

1. McKinsey, "The State of AI in 2025," November 2025 (1,993 participants). 51% experienced at least one negative AI incident; 47% report negative consequences from gen AI. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
2. McKinsey, "State of AI Trust in 2026: Shifting to the Agentic Era," March 2026 (~500 organizations). Average RAI maturity 2.3/4.0; one-third at level 3+; governance lags technical capabilities; confidence in incident response declining. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/tech-forward/state-of-ai-trust-in-2026-shifting-to-the-agentic-era
3. McKinsey, "The AI Reckoning: How Boards Can Evolve," December 2025. 39% of Fortune 100 disclose board AI oversight; fewer than 25% have board-approved AI policies; 66% of directors report limited or no AI knowledge; MIT study on AI-savvy boards, +10.9pp ROE. https://www.mckinsey.com/capabilities/mckinsey-technology/our-insights/the-ai-reckoning-how-boards-can-evolve
4. EY, Responsible AI Pulse Survey, August-September 2025 (975 C-suite leaders, 21 countries). 99% reported financial losses from AI-related risks; nearly two-thirds lost more than $1 million.
5. World Economic Forum, "Why Effective AI Governance Is Becoming a Growth Strategy," January 2026. Three milestones; governance as traction for acceleration. https://www.weforum.org/stories/2026/01/why-effective-ai-governance-is-becoming-a-growth-strategy/
6. BCG, "What Happens When AI Stops Asking Permission," January 2026, and "How Agents Are Accelerating the Next Wave of AI Value Creation," December 2025. Freedom within a frame; graduated autonomy; 58% expect governance shift; accountability in multi-agent workflows. https://www.bcg.com/publications/2025/what-happens-ai-stops-asking-permission
7. McKinsey, "The State of AI: How Organizations Are Rewiring to Capture Value," March 2025 (1,491 participants). CEO oversight of AI governance has strongest EBIT impact at larger companies; 28% CEO oversight; 17% board oversight. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-how-organizations-are-rewiring-to-capture-value
8. Harvard Division of Continuing Education, "Building a Responsible AI Framework: 5 Key Principles for Organizations," 2025. Governance mechanisms vs. frameworks; enforcement and consequences. https://professional.dce.harvard.edu/blog/building-a-responsible-ai-framework-5-key-principles-for-organizations/
9. NIST AI Risk Management Framework (AI RMF), January 2023 (with GenAI Profile update, July 2024); EU AI Act, phased enforcement 2025-2027; ISO/IEC 42001, first international certifiable standard for AI management systems. https://www.nist.gov/itl/ai-risk-management-framework
10. MIT Sloan Management Review and BCG, "The Emerging Agentic Enterprise," November 2025 (2,102 respondents). 10% currently grant AI decision-making authority; 35% expect to within three years; 58% of leading organizations expect governance structure changes. https://sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/
11. Gartner, 2025. 45% of high-AI-maturity organizations retain initiatives at three years vs. 20% for lower maturity. Governance structures as the differentiator.
12. BCG, "The Widening AI Value Gap," September 2025 (1,250+ firms). Future-built companies deploy in 9-12 months vs. 12-18 months; 60%+ success rate vs. 12% for laggards. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap

Frequently Asked Questions

We already have a risk management framework. Do we need a separate AI governance framework?

Your existing enterprise risk management framework is necessary but not sufficient. AI introduces risks that traditional risk frameworks were not designed to address: AI systems that learn and change behavior over time (unlike static software), AI agents that make autonomous decisions (unlike traditional automation), the probabilistic nature of AI outputs (you cannot test every possible path), AI-specific attack vectors like prompt injection and data poisoning, and the question of graduated autonomy that has no precedent in previous enterprise technology. The AI governance framework should integrate with your existing risk management infrastructure, not replace it. It adds the AI-specific dimensions that your current framework does not cover. Think of it as an extension that addresses a category of risk your organization has not previously needed to manage.

How do we classify AI use cases by risk when we have not built them yet?

You do not need to classify specific use cases at Level 2. You need to establish the classification methodology and the criteria that will be applied to use cases as they emerge at Level 3. When the workflow redesign teams design AI-enabled processes, each process gets classified against the methodology you established. The classification then determines what governance requirements apply: what level of human oversight, what accountability structures, what monitoring controls the CIO's team will implement at Level 4. Establishing the methodology before the use cases exist is precisely the point: it ensures consistency across domains rather than each domain inventing its own approach.

Who should chair the AI governance function — the CAIO, the CIO, or General Counsel?

In the methodology, the CAIO's Strategic AI Governance function owns the governance framework operationally, because AI governance is fundamentally a business decision (what the organization will and will not do with AI) rather than a technology decision (how the systems are controlled) or a legal decision (what the regulations require). The CIO's organization implements the technical controls that enforce governance. General Counsel advises on regulatory compliance and legal risk. The Level 1 triad (CEO, CSO, CAIO) sets the strategic governance principles. All of these contributions are essential, but primary ownership of the framework itself sits with the CAIO's department because it bridges business strategy and AI capability in the same way the CAIO bridges the CEO and the technology organization.

What does the first 90 days of establishing AI governance actually look like?

The governance framework does not require years of development before it is useful. Research consistently shows that the organizations with the most effective governance started with a focused initial effort and expanded from there. The practical starting sequence has four phases. First, conduct an AI inventory: what AI tools and systems are currently in use across the organization, what data are they accessing, and who is using them? Most organizations are surprised by what this inventory reveals. Second, establish the risk classification methodology: not classifying every use case (those emerge at Level 3), but defining the criteria and the tiers so that use cases can be classified consistently as they emerge. Third, define the accountability structure: who owns governance decisions at each level of the organization, and what is the escalation path when a governance question cannot be resolved at the domain level? Fourth, codify and communicate: translate the strategic principles into a board-approved governance policy that the CAIO's department can operationalize across domains. Research on board governance found that fewer than 25% of companies have board-approved AI policies and only 15% of boards currently receive AI-related metrics. The 90-day goal is to close both of those gaps so that by the time the first domain owners are chartered, the governance framework exists and has board-level endorsement.
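For the first phase, even a spreadsheet-grade schema is enough to begin. The sketch below shows one possible shape for an inventory record; the field names and example entries are hypothetical, not part of the methodology.

```python
from dataclasses import dataclass


@dataclass
class AIInventoryEntry:
    tool: str                 # e.g., a vendor copilot or a personal chatbot account
    owner: str                # the accountable person or team, if one exists
    data_accessed: list[str]  # categories of data the tool touches
    sanctioned: bool          # employer-provided versus brought-in
    users: int = 0            # best current estimate


inventory = [
    AIInventoryEntry("vendor-chatbot", "customer service", ["customer PII"], True, 120),
    AIInventoryEntry("personal LLM account", "unknown", ["source code?"], False),
]

# The unsanctioned or unowned entries are where the governance work starts.
gaps = [e for e in inventory if not e.sanctioned or e.owner == "unknown"]
print(f"{len(gaps)} of {len(inventory)} entries lack sanction or ownership")
```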

Our employees are already using AI tools without governance. How do we handle what is already in flight?

This is one of the most urgent governance challenges in enterprise AI, and the research is clear that it is nearly universal. According to the Microsoft 2025 Work Trend Index, 78% of AI users bring their own AI tools to work rather than using employer-provided alternatives. IBM's 2025 research found that only 37% of organizations have policies to manage AI or detect unauthorized AI usage, meaning nearly two-thirds of enterprises have no visibility into what is happening. The governance gap is not theoretical: research shows that organizations with high levels of ungoverned AI usage experience breach costs averaging $670,000 higher than those with governance controls in place.

The response that does not work is banning AI tools outright. Research consistently shows that nearly half of employees would continue using personal AI accounts even after an organizational ban, driving the usage underground where it becomes harder to govern, not easier.

The response that does work has three components: discover what is currently in use across the organization through a comprehensive AI audit, provide enterprise-grade approved alternatives that meet the needs employees are solving with unauthorized tools (when approved alternatives are provided, unauthorized usage drops significantly), and establish clear data boundaries that define what categories of information can and cannot be entered into AI systems. The governance framework established at Level 2 should account for the reality that ungoverned AI usage already exists and design the transition from ungoverned to governed rather than pretending the organization is starting from a blank slate.
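As one concrete illustration of the third component, here is a minimal sketch of a data-boundary policy expressed as code. The data categories, tool classes, and permissions are illustrative assumptions; an actual policy would be set by the CAIO's governance function with counsel's input.

```python
# Hypothetical boundary: which information categories may be entered into
# which class of AI tool. An empty set means "no AI tool, of any class."
BOUNDARIES: dict[str, set[str]] = {
    "public": {"sanctioned", "unsanctioned"},  # published docs, marketing copy
    "internal": {"sanctioned"},                # meeting notes, internal plans
    "confidential": set(),                     # customer PII, source code, financials
}


def may_submit(data_category: str, tool_class: str) -> bool:
    """True if the policy permits this category of data in this class of tool."""
    return tool_class in BOUNDARIES.get(data_category, set())


assert may_submit("public", "unsanctioned")
assert may_submit("internal", "sanctioned")
assert not may_submit("confidential", "sanctioned")
```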

This series addresses “what” to do, not “how” to do it. If you are a Fortune 500 leader and would like help thinking through the “how,” please feel comfortable reaching out.

Previous: Article 6: The Chief Strategy Officer’s AI Moment

© 2026 Plaster Group, LLC. All rights reserved. This article may not be reproduced, distributed, or transmitted in any form without prior written permission from Plaster Group. Brief excerpts may be quoted for review or commentary purposes with attribution to the author and a link to the original article.
