Governing What You Deploy

Across regulated industries and global markets, organizations are deploying artificial intelligence at a pace that has outrun their capacity to govern it.

THE BRIEF

AI adoption is now near-universal across regulated industries; McKinsey's 2025 survey documents an 88% organizational adoption rate spanning financial services, healthcare, legal practice, and government administration [1]. Governance maturity has not kept pace: Deloitte's 2025 board survey found only 5% of organizations "very ready" to deploy AI, and its 2026 enterprise report found 80% lack the governance capabilities necessary to manage agentic AI responsibly [2][3].

Regulatory frameworks across the EU, U.S., and GCC have moved from voluntary guidance to enforceable obligation; the EU AI Act's high-risk compliance framework becomes fully enforceable on August 2, 2026, and U.S. state-level AI laws increased from 49 to 131 between 2023 and 2024 [10].

The financial case for governance is documented: organizations with AI-engaged boards outperform peers by 10.9 percentage points in return on equity [14]. Governance infrastructure is not a constraint on AI value creation; it is a prerequisite for it.

The organizational cost of ungoverned AI deployment compounds rather than self-corrects; 233 AI-related incidents were recorded in 2024, a 56% increase over the prior year [10].


I. THE ADOPTION REALITY

Artificial intelligence has moved from strategic aspiration to operational infrastructure across virtually every industry in a remarkably short period. According to McKinsey's 2025 State of AI report, 88% of organizations now use AI in at least one business function, spanning sectors including financial services, healthcare, legal practice, logistics, and government administration [1]. In the GCC, a 2024 survey of 140 C-suite leaders across eight industries, conducted by McKinsey in collaboration with the GCC Board Directors Institute, found that 73% of organizations had piloted generative AI applications. In the United States, a January 2026 Gallup report found that 46% of employees reported using AI at least a few times per year, up from 27% in late 2024 [4].

These figures describe the pace of adoption. What they do not describe is whether that adoption is governed.

The distinction matters because adoption and governance are not the same organizational activity, and the failure to treat them as separate functions, each requiring dedicated infrastructure, has produced a predictable and documented set of consequences. When AI systems are deployed without prior governance design, the risks that materialize are not hypothetical edge cases. They are operational, regulatory, reputational, and financial in character. They accrue at the institutional level. As regulatory frameworks in multiple jurisdictions have now moved from guidance to enforcement, the cost of that gap is measurable.

II. THE GOVERNANCE GAP: WHAT THE DATA SHOWS

The coexistence of high adoption rates and low governance maturity is not a tension unique to any one organization. It reflects a structural pattern that research institutions across disciplines have documented with consistency.

Deloitte's "Governance of AI: A Critical Imperative for Today's Boards" (2025 edition) surveyed 695 board members and C-suite executives across 56 countries between January and February 2025. The findings are unambiguous: nearly one-third of respondents (31%) reported their organizations are not ready to deploy AI, and preparedness in the critical area of risk and governance had not improved relative to prior survey periods. Only 5% of respondents characterized their organizations as "very ready." Deloitte's concurrent State of AI in the Enterprise report, drawn from 3,235 leaders across 24 countries surveyed in late 2025, found that only 21% of organizations have a mature governance model in place for agentic AI, while approximately 80% lack the capabilities necessary to define agent decision boundaries, monitor agent behavior in real time, or maintain audit trails sufficient for regulatory accountability.

The International Association of Privacy Professionals' 2024 AI Governance in Practice Report documented a comparable pattern. Organizations were increasingly engaging with AI systems at multiple stages of the technology supply chain, as providers, deployers, and integrators, without establishing the governance infrastructure necessary to manage the distinct risk profiles each role generates. The report identified confusion about how the technology functions, the propagation of algorithmic bias, privacy rights violations, and the dissemination of misinformation as risks that governance frameworks are specifically designed to address; yet most organizations had not advanced beyond early-stage governance development [5].

A January 2026 Gallup report on AI use in the workplace sharpened the operational dimension of this gap: while 44% of employees said their organization had started integrating AI, only 22% said their organization had communicated a clear plan or strategy for doing so [4]. AI was spreading laterally through organizations without corresponding governance design propagating alongside it.

This is not a technology problem. It is a governance problem. The technology functions as designed. The institution has not defined the conditions under which the technology should be permitted to function.


III. REGULATORY FRAMEWORKS HAVE NOT WAITED

The period during which regulators treated AI governance as a matter of organizational discretion has closed. In multiple jurisdictions, governance obligations are now legal requirements with enforcement mechanisms attached.

The European Union's Artificial Intelligence Act (Regulation (EU) 2024/1689), which entered into force in August 2024, is the most comprehensive AI-specific legal framework yet enacted. Its implementation is staged across several dates. Prohibitions on unacceptable-risk AI practices, including manipulative systems, social scoring, and certain applications of biometric categorization, became operative on February 2, 2025. Obligations for providers of general-purpose AI models, along with the penalty regime administered through the EU AI Office, became applicable on August 2, 2025. The full compliance framework for high-risk AI systems, covering applications in employment, credit decisions, education, and law enforcement, becomes enforceable on August 2, 2026. Counsel at Orrick, among other law firms, advise organizations to treat August 2026 as the binding planning horizon [8].

The penalty structure is not nominal. Infringements relating to prohibited AI practices may result in administrative fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Infringements of other specified obligations carry fines of up to EUR 15 million or 3% of global annual turnover. The regulation applies on an effects basis: an organization headquartered outside the EU that deploys an AI system affecting EU residents or markets falls within its scope regardless of where it is incorporated.
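The "whichever is higher" structure means the effective ceiling scales with firm size rather than stopping at the fixed amount. A minimal illustrative calculation follows; the figures mirror the tiers described above, `max_fine_eur` is a hypothetical helper, and none of this constitutes legal advice:

```python
# Illustrative only: the EU AI Act caps administrative fines at the HIGHER of
# a fixed amount or a percentage of global annual turnover. Tier figures below
# mirror the text above; this sketch is not legal advice.

def max_fine_eur(global_turnover_eur: float, tier: str) -> float:
    """Upper bound of the administrative fine for a given infringement tier."""
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),  # prohibited AI practices
        "other_obligations": (15_000_000, 0.03),     # other specified duties
    }
    fixed_cap, turnover_pct = tiers[tier]
    # "Whichever is higher": for large firms the turnover term dominates.
    return max(fixed_cap, turnover_pct * global_turnover_eur)

# A firm with EUR 2 billion turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(max_fine_eur(2_000_000_000, "prohibited_practices"))  # 140000000.0
```

For a firm with EUR 100 million turnover, the fixed EUR 35 million floor is the binding cap instead.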

In the United States, federal AI-specific legislation has advanced more slowly, but the regulatory environment is not static. The National Institute of Standards and Technology published its AI Risk Management Framework (AI RMF 1.0) in January 2023 and updated it with a Generative AI Profile in July 2024; NIST released a further concept note in April 2026 addressing trustworthy AI in critical infrastructure. The AI RMF is voluntary, but it has become the most widely referenced AI governance standard in the U.S. and is increasingly cited as the benchmark against which organizational AI practices are assessed by regulators, counterparties, and courts. Stanford University's AI Index documented that state-level AI laws increased from 49 in 2023 to 131 in 2024 [10], and the Securities and Exchange Commission demonstrated in March 2024 that AI-related representations carry enforcement risk: it fined two investment firms a combined USD 400,000 for making false claims about their AI capabilities, constituting the first enforcement action against what regulators have termed AI washing [11].

In the GCC, the regulatory trajectory is distinct in character but converging in direction. None of the GCC member states has yet enacted an omnibus AI statute comparable to the EU AI Act, a finding stated explicitly in peer-reviewed analysis published in Humanities and Social Sciences Communications in October 2025, which characterized the region's governance posture as a pronounced two-tier structure, with the UAE and Saudi Arabia leading at the upper tier and Kuwait, Qatar, Oman, and Bahrain relying on a patchwork of sector-specific provisions and guidance at the second [18].

The region's approach cannot, however, be characterized as unregulated. The UAE appointed a Minister of State for Artificial Intelligence in 2017, becoming the first nation to do so, and has since developed its National AI Strategy 2031 and adopted the UAE Charter for the Development and Use of Artificial Intelligence in June 2024, outlining 12 principles governing responsible AI deployment, including transparency, human oversight, governance, and accountability [15]. Abu Dhabi formally established the Artificial Intelligence and Advanced Technology Council by law in January 2024, and the UAE Cabinet announced development of a comprehensive AI legislative framework in April 2025. Qatar maintains the only legally binding AI guidelines currently in force within the GCC, applicable to Qatar Central Bank-licensed financial institutions. Bahrain introduced a standalone Artificial Intelligence Regulation Law in April 2024 [16].

A separate peer-reviewed analysis published in the Journal of Artificial Intelligence Research in April 2025 characterized the GCC's prevailing approach as a soft regulation paradigm, one that emphasizes national strategies and ethical principles over binding rules, and raised substantive concerns regarding the enforceability of those ethical standards, the risk of ethics-washing, and the degree to which GCC frameworks diverge from the EU AI Act's risk-based compliance model [17].

The regulatory direction across jurisdictions is consistent: governance obligations are increasing in specificity, in legal force, and in extraterritorial reach. Organizations operating across these jurisdictions face layered compliance requirements that cannot be satisfied through technology deployment alone.


IV. THE OPERATIONAL AND FINANCIAL CONSEQUENCES OF UNGOVERNED AI

The organizational cost of deploying AI without governance infrastructure is not limited to regulatory exposure. It manifests in operational inefficiency, reputational damage, and the failure to extract the financial value that AI adoption is intended to produce.

Deloitte's 2026 State of AI in the Enterprise report identifies a pattern that should concern any senior leader: agentic AI usage is scaling quickly, with 74% of surveyed organizations expecting to use AI agents at least moderately by 2027, yet 80% currently lack the governance capabilities necessary to manage that deployment responsibly. The same report specifies that effective governance integrates with existing risk and oversight structures rather than operating as a parallel function; it focuses on identifying high-risk applications, enforcing responsible design practices, and ensuring independent validation where appropriate. Organizations that have not built those systems before scaling autonomous AI applications are accumulating exposure at the same rate they are accumulating capability.

The IAPP's 2024 report articulated the multi-dimensional risk profile of ungoverned AI deployment: legal and regulatory risk from noncompliance with existing and emerging obligations; reputational risk from bias, misinformation, and adverse AI-generated outputs that become publicly visible; and financial risk from operational disruption, liability exposure, and the cost of remediation after the fact. The Stanford AI Index documented 233 AI-related incidents in 2024, a record high and a 56% increase over 2023. A Harvard Law School Forum on Corporate Governance analysis of S&P 500 AI risk disclosures in 2025 found that legal and regulatory risk is now characterized as a long-tail governance challenge by the companies that disclose it most rigorously: one that can lead to protracted litigation and sustained reputational damage rather than discrete, containable events [13].

The performance dimension is equally significant. McKinsey's 2025 State of AI analysis found that organizations reporting the strongest financial returns from AI share specific characteristics: human-in-the-loop oversight rules, centralized AI governance, and senior leadership visibly involved in oversight decisions. Governance, in this analysis, is not a constraint on value creation. It is a prerequisite for it.

Governance failures in AI systems are rarely self-correcting. A model that produces biased outputs, a system that fails to maintain required audit logs, or a deployment that exceeds its designated risk classification under applicable regulation does not degrade visibly in the way that a failing piece of physical infrastructure does. The failure accumulates. By the time it surfaces through a regulatory inquiry, a legal challenge, or an adverse outcome for an end user, the remediation cost is substantially higher than the cost of governance design at the point of deployment.


V. WHAT GOVERNANCE INFRASTRUCTURE REQUIRES

Governance infrastructure is not a policy document. It is a set of operational systems, accountabilities, and decision-making structures that determine how AI is assessed before deployment, monitored during operation, and modified or discontinued when its outputs no longer meet defined standards.

The NIST AI Risk Management Framework identifies four core functions that together constitute a working AI governance program: Govern, Map, Measure, and Manage. The Govern function, which is the foundational layer, focuses on policies, procedures, accountability structures, and organizational culture for AI risk management. It requires that roles and responsibilities be formally assigned, that a risk-aware culture be established, and that AI governance be integrated into existing enterprise risk management rather than administered as a separate function. The Map, Measure, and Manage functions address identification of AI-related risks across the system lifecycle, quantification of those risks against defined criteria, and active mitigation through technical and organizational controls.
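The four functions can be treated as a checklist against which a program's maturity is tracked. A hedged sketch of one possible internal representation follows; the activity strings paraphrase the Govern function as described above and are illustrative, not an official NIST mapping:

```python
# A minimal sketch: representing a NIST AI RMF function as a tracked checklist.
# Activity wording paraphrases the Govern function described in the text;
# this is an assumed internal structure, not NIST's own schema.
from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    name: str
    activities: list[str]
    complete: set[str] = field(default_factory=set)

    def coverage(self) -> float:
        """Fraction of this function's activities marked complete."""
        return len(self.complete & set(self.activities)) / len(self.activities)

govern = RmfFunction("Govern", [
    "assign AI risk roles and responsibilities",
    "integrate AI risk into enterprise risk management",
    "establish accountability and escalation paths",
])
govern.complete.add("assign AI risk roles and responsibilities")
print(f"Govern coverage: {govern.coverage():.0%}")  # Govern coverage: 33%
```

The same structure would hold instances for Map, Measure, and Manage, giving leadership a per-function view of where the program is incomplete.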

The EU AI Act structures compliance obligations around the same foundational questions. Before a high-risk AI system can be deployed, the organization must conduct an AI mapping exercise to identify all systems in scope, classify each system by risk level, document the system's intended purpose and technical characteristics, design human oversight mechanisms, maintain records for regulatory review, and complete conformity assessments where required. These are not compliance formalities. They are governance activities. An organization that has not built internal capacity to perform them cannot satisfy the regulatory requirements and, more practically, cannot ensure its AI systems are performing as intended.
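One way to make the mapping exercise concrete is a structured inventory record per system, checked against the preconditions listed above. The sketch below is purely illustrative: field names and gap checks are assumptions for this example, not regulatory terms of art.

```python
# Hypothetical sketch of an AI mapping inventory entry. Fields mirror the
# obligations described in the text (intended purpose, risk classification,
# human oversight, record-keeping, conformity assessment); the schema itself
# is an assumption, not language from the Act.
from dataclasses import dataclass

RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    system_id: str
    intended_purpose: str
    risk_tier: str
    human_oversight: bool       # oversight mechanism designed?
    audit_trail: bool           # records maintained for regulatory review?
    conformity_assessed: bool   # conformity assessment completed where required?

    def __post_init__(self):
        assert self.risk_tier in RISK_TIERS, f"unknown tier: {self.risk_tier}"

    def deployment_gaps(self) -> list[str]:
        """Unmet preconditions for deploying a high-risk system."""
        if self.risk_tier != "high":
            return []
        gaps = []
        if not self.human_oversight:
            gaps.append("human oversight mechanism missing")
        if not self.audit_trail:
            gaps.append("audit trail missing")
        if not self.conformity_assessed:
            gaps.append("conformity assessment incomplete")
        return gaps

record = AISystemRecord("credit-scoring-v2", "consumer credit decisions",
                        "high", human_oversight=True, audit_trail=False,
                        conformity_assessed=False)
print(record.deployment_gaps())
```

Running such a check across the full inventory is, in effect, the mapping exercise: every system either has an empty gap list or a named owner responsible for closing it.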

For organizations operating across multiple jurisdictions, the governance architecture must account for the different obligation sets applicable in each. The EU AI Act applies to EU-market deployments regardless of organizational headquarters. NIST AI RMF alignment is increasingly expected by U.S. regulatory agencies and contractual counterparties. GCC-operating organizations face a developing patchwork of national strategies, emerging legislation, and sector-specific guidelines, including Qatar's binding financial-sector AI guidelines and Bahrain's AI Regulation Law, that collectively require governance systems capable of adapting to multiple regulatory reference points simultaneously.

Deloitte's board governance research is instructive on the investment dimension: organizations that have improved preparedness in technology infrastructure without corresponding progress in risk and governance present a structural vulnerability. The two cannot be decoupled. Governance investment is not a cost center; it is the mechanism through which technical capability translates into sustainable organizational performance.


VI. THE BOARD AND SENIOR LEADERSHIP IMPERATIVE

Governance infrastructure cannot be delegated exclusively to technical or compliance functions. Research across institutions is consistent on this point: effective AI governance requires board-level engagement, senior leadership accountability, and an organizational culture that treats AI risk as a first-order institutional concern.

Deloitte's 2025 board survey found that while awareness of AI, and of the importance of its oversight, has grown, the pace of organizational change remains slower than the pace of the technology itself. Thirty-one percent of board respondents said AI was not on their board agenda, down from 45% in the prior survey period [2]. That directional improvement does not alter the underlying reality: most boards globally are still governing AI at arm's length from the decisions being made at the operational level.

Board-level AI governance is not a matter of directors becoming technically fluent in machine learning. It is a matter of boards understanding what questions to ask, what risk disclosures to require, and what accountability structures to mandate. The Harvard Law School Forum on Corporate Governance has identified the integration of AI within enterprise risk frameworks, the setting of key performance indicators for AI exposure and mitigation, and the clear distinction between internal and customer-facing AI applications as foundational board-level governance activities [12]. These are governance design questions, not technology questions.

A 2025 MIT study, cited in McKinsey's board governance analysis, found that organizations with digitally and AI-savvy boards outperform peers by 10.9 percentage points in return on equity, while those without meaningful board-level AI competency perform 3.8% below their industry average. The governance investment, properly understood, is a performance variable with a measurable return.


VII. CONCLUSION

The organizations that will navigate the current AI environment most effectively, across regulatory jurisdictions, across industries, and across the competitive landscape, are those that treat governance design as a precondition for deployment rather than a retrofit applied after exposure materializes. The evidence for this proposition is now substantial: from Deloitte's documentation of the governance gap at the board level across 56 countries, to the EU AI Act's enforcement architecture, to the GCC's rapidly developing governance landscape, to Harvard Law School Forum's analysis of how AI risk compounds for organizations that treat it as a long-tail concern rather than an immediate governance challenge.

Adoption at scale without governance infrastructure is not a technology strategy. It is an organizational liability, accumulating in advance of the moment it becomes visible.

The institutions that understand this earliest and build accordingly will not merely be compliant. They will be structurally positioned to extract durable value from AI investment that their less-governed peers cannot.


RECOMMENDATIONS

Within 30 days: Conduct an AI mapping exercise to identify all AI systems currently in use across the organization, their risk classification under applicable regulatory frameworks, and whether each system has a designated governance owner. Establish or confirm board-level AI governance oversight, including a designated committee or reporting line for AI risk.

Within 90 days: Implement the NIST AI RMF's four core functions (Govern, Map, Measure, Manage) as the structural basis for the organization's AI governance program, integrated into existing enterprise risk management rather than administered as a parallel function [9]. Develop and adopt an internal AI use policy that defines permissible use cases, prohibited applications, human oversight requirements, and escalation procedures.

Within 6 months: Complete conformity assessments for all high-risk AI systems subject to the EU AI Act's August 2026 enforcement deadline [6][8]. Establish continuous monitoring and audit trail systems for agentic AI deployments, addressing the capabilities gap identified by Deloitte: decision boundary definition, real-time behavioral monitoring, and regulatory-grade audit trails [3].
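A "regulatory-grade" audit trail for agent decisions is, at minimum, append-only and tamper-evident. The sketch below illustrates one common technique, hash-chaining log records, under an assumed event schema; a production system would add timestamps, cryptographic signing, and durable storage:

```python
# Hedged sketch of a tamper-evident audit trail for agent decisions.
# Each record embeds the previous record's hash, so editing any entry
# breaks the chain. The event schema is an assumption for illustration.
import hashlib
import json

def append_event(log: list[dict], agent_id: str, action: str,
                 within_bounds: bool) -> dict:
    """Append a hash-chained record of an agent action."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"agent_id": agent_id, "action": action,
            "within_bounds": within_bounds, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify(log: list[dict]) -> bool:
    """Recompute the hash chain; any edited record breaks verification."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_event(log, "agent-7", "approved refund", within_bounds=True)
append_event(log, "agent-7", "raised credit limit", within_bounds=False)
print(verify(log))  # True
log[0]["action"] = "denied refund"  # simulated tampering
print(verify(log))  # False
```

The `within_bounds` flag stands in for the decision-boundary check named above: the log records not only what the agent did, but whether the action fell inside its defined authority.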

Benchmarks that should change the recommendation: Material changes to the EU Digital Omnibus timeline affecting the August 2026 enforcement date; adoption of federal AI legislation in the U.S. that supersedes the current patchwork of state-level laws; enactment of omnibus AI legislation in GCC jurisdictions that consolidates the current soft-regulation paradigm into binding compliance obligations.


CAVEATS

Survey methodology (Deloitte board survey): Findings from [2] are drawn from 695 respondents across 56 countries surveyed January-February 2025. The survey captures board and C-suite perceptions of readiness rather than an independent assessment of organizational governance maturity.

Survey methodology (Deloitte enterprise survey): Findings from [3] are drawn from 3,235 IT and business leaders across 24 countries surveyed August-September 2025. The 80% governance capability gap figure is specific to agentic AI; governance maturity for traditional AI systems may differ.

GCC regulatory landscape: The GCC governance analysis relies primarily on two peer-reviewed articles [17][18] and law firm client alerts [15][16]. The GCC regulatory environment is developing rapidly; readers should verify current regulatory postures, particularly in the UAE and Saudi Arabia, where legislative frameworks were announced in 2024-2025 and may have advanced since publication.

Stanford AI Index incident data: The 233 AI incidents figure from [10] was accessed via a secondary source (Virtasant, February 2026). The Stanford AI Index methodology for incident classification may differ from other incident databases.

MIT board performance study: The 10.9 percentage point ROE outperformance finding from [14] was accessed via McKinsey's December 2025 analysis rather than the primary MIT study. The specific methodology, sample size, and control variables of the original MIT study should be verified by readers relying on this figure for board-level investment decisions.

EU AI Act timeline: The August 2026 enforcement date for high-risk AI systems was current at the time of writing. The proposed EU Digital Omnibus package includes timeline adjustments; readers should verify the operative enforcement date before relying on this analysis for compliance planning.




REFERENCES

[1] McKinsey & Company. "The State of AI: How Organizations Are Rewiring to Capture Value." McKinsey Global Survey, 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai


[2] Deloitte Global Boardroom Program. "Governance of AI: A Critical Imperative for Today's Boards," 2nd edition. Deloitte, 2025. (Survey: 695 board members and C-suite executives, 56 countries, January-February 2025.) https://www.deloitte.com/global/en/issues/trust/progress-on-ai-in-the-boardroom-but-room-to-accelerate.html


[3] Deloitte. "State of AI in the Enterprise: The Untapped Edge." January 2026. (Survey: 3,235 IT and business leaders, 24 countries, August-September 2025.) https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html


[4] Gallup. "AI in the Workplace: January 2026 Report." Gallup, January 2026.

Referenced via: Bloomberg Law, "Building Your Company's AI Governance Framework to Reduce Risk," April 2026. https://pro.bloomberglaw.com/insights/artificial-intelligence/building-your-companys-ai-governance-framework-to-reduce-risk/


[5] International Association of Privacy Professionals (IAPP). "AI Governance in Practice Report 2024." IAPP, 2024. https://iapp.org/resources/article/ai-governance-in-practice-report


[6] European Parliament and Council of the European Union. Regulation (EU) 2024/1689 (EU AI Act). Entered into force August 1, 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai


[7] DLA Piper. "Latest Wave of Obligations Under the EU AI Act Take Effect: Key Considerations." DLA Piper Client Alert, August 2025. https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect


[8] Orrick, Herrington & Sutcliffe LLP. "The EU AI Act: 6 Steps to Take Before 2 August 2026." November 2025. https://www.orrick.com/en/Insights/2025/11/The-EU-AI-Act-6-Steps-to-Take-Before-2-August-2026


[9] National Institute of Standards and Technology (NIST). AI Risk Management Framework (AI RMF 1.0). NIST AI 100-1, January 2023. Generative AI Profile (NIST AI 600-1), July 2024. AI RMF Profile on Trustworthy AI in Critical Infrastructure, concept note, April 7, 2026. https://www.nist.gov/itl/ai-risk-management-framework


[10] Stanford University Human-Centered AI Institute. "AI Index Report." Stanford HAI, 2024/2025.

Referenced via: Virtasant, "3 AI Governance Framework Questions Keeping Leaders Awake," February 2026. https://www.virtasant.com/ai-today/3-ai-governance-framework-questions-keeping-leaders-awake


[11] U.S. Securities and Exchange Commission. Enforcement Actions: In the Matter of Delphia (USA) Inc. and Global Predictions Inc. March 2024.

Referenced via: Virtasant, February 2026 (ibid.)


[12] Marks, A., Abrash, L., and Probst, A. "Governance of AI: A Critical Imperative for Today's Boards." Harvard Law School Forum on Corporate Governance, May 27, 2025. https://corpgov.law.harvard.edu


[13] Harvard Law School Forum on Corporate Governance. "AI Risk Disclosures in the S&P 500: Reputation, Cybersecurity, and Regulation." October 15, 2025. https://corpgov.law.harvard.edu/2025/10/15/ai-risk-disclosures-in-the-sp-500-reputation-cybersecurity-and-regulation/


[14] MIT Sloan School of Management. Study on AI-savvy board performance. 2025.

Referenced via: McKinsey, "The AI Reckoning: How Boards Can Evolve," December 2025. https://www.mckinsey.com/capabilities/mckinsey-technology/our-insights/the-ai-reckoning-how-boards-can-evolve


[15] Latham & Watkins LLP. "AI in the UAE: Understanding the Regulatory Landscape and Key Authorities." 2025. https://www.lw.com/en/insights/ai-in-the-uae-understanding-the-regulatory-landscape-and-key-authorities


[16] Bird & Bird LLP. "GCC Navigating AI Regulations: The Current Landscape." January 2025. https://www.twobirds.com/en/insights/2025/united-arab-emirates/gcc-navigating-ai-regulations---the-current-landscape


[17] Albous, M.R., Al-Jayyousi, O.R., and Stephens, M. "AI Governance in the GCC States: A Comparative Analysis of National AI Strategies." Journal of Artificial Intelligence Research, Vol. 82, pp. 2389-2422. April 2025. https://www.jair.org/index.php/jair/article/download/17619/27175


[18] Albous, M.R., Stephens, M., and Al-Jayyousi, O.R. "Artificial Intelligence and the Gulf Cooperation Council Workforce: Adapting to the Future of Work." Humanities and Social Sciences Communications (Nature Portfolio), Vol. 12, Art. 1649. October 29, 2025. DOI: 10.1057/s41599-025-05984-5. https://www.nature.com/articles/s41599-025-05984-5


[19] IAPP. "Global AI Governance Law and Policy: United Arab Emirates." December 2025. https://iapp.org/resources/article/global-ai-governance-uae


[20] Library of Congress / In Custodia Legis. "FALQs: AI Regulations in the Gulf Cooperation Council Member States — Part One." December 2024. https://blogs.loc.gov/law/2024/12/falqs-ai-regulations-in-the-gulf-cooperation-council-member-states-part-one/

© 2026 Solari LLC | All Rights Reserved