sage ideas | fresh perspective | sustained success

Sagacious Thinking

Periodic musings

The AI Governance Gap: When Oversight Falls Between Committees

A board member recently confided: "We discuss AI in every committee meeting, but I couldn't tell you who actually owns it." This isn't just confusion—it's a governance gap creating real compliance exposure and regulatory risk.

As boards mature beyond asking "Are we using AI?" they're confronting a more urgent question: "Who at the board level is accountable for the full AI risk picture—or are we spreading it across committees until no one owns it?"

That "between committees" gap has become one of the fastest ways to create future compliance exposure, especially as AI moves from standalone tools to embedded, agentic, and multi-model systems.

Why This Matters Now: The Regulatory and Risk Landscape

Several converging factors make AI governance accountability urgent:

Enforcement is already here. In March 2024, the SEC brought its first "AI-washing" enforcement actions against investment advisers Delphia and Global Predictions, resulting in penalties of $225,000 and $175,000 respectively for making false claims about their AI capabilities. SEC Chair Gary Gensler warned that when new technologies create buzz, they also create false claims, and investment advisers must not mislead the public about AI use.

Global regulation is taking effect. The EU AI Act entered into force on August 1, 2024, with prohibited AI practices and AI literacy obligations becoming enforceable on February 2, 2025, and governance rules for general-purpose AI models taking effect on August 2, 2025. Even U.S. companies must comply if they serve EU markets.

Security incidents are escalating. In March 2025, a Fortune 500 financial services firm discovered its customer service AI agent had been leaking sensitive account data for weeks through a prompt injection attack that bypassed traditional security controls. In August 2024, security researcher Johann Rehberger demonstrated how Microsoft 365 Copilot could be exploited through prompt injection to secretly exfiltrate sensitive company data.

Board structures haven't caught up. AI oversight remains unevenly institutionalized in board structures (e.g., committee charters), even as adoption accelerates. Audit committees often serve as the "default home" for AI oversight, but disclosures are frequently more robust when responsibility is placed in technology or governance committees, suggesting boards struggle with where AI truly belongs. thecaq.org + EY Board guidance is increasingly emphasizing formal governance frameworks rather than ad hoc conversations. Deloitte

What AI Risk Mapping Means?

AI risk mapping is the board-level practice of:

  1. listing the company’s AI use cases, both current and planned,

  2. identifying the risk types they create, and

  3. assigning clear oversight ownership (committee + management owner) with established evidence expectations.

It’s the difference between “AI is on the agenda” and “AI is governed.”

NACD has explicitly framed AI as an emerging board oversight responsibility and highlighted governance and stakeholder-trust implications as AI moves into core processes. NACD

The “Falling Between Committees” Problem

AI doesn’t fit neatly into a single committee charter because it touches:

  • Audit (controls, reporting integrity, internal auditability)

  • Risk/Compliance (regulatory exposure, third-party risk, ethics)

  • Technology/Cyber (LLM-specific threats, data leakage, resilience)

  • Comp/People (AI in hiring/performance, workforce impacts)

  • Governance/Nominating (policy, accountability design, oversight structure)

If the board doesn’t deliberately map AI risks to committee ownership, the most common outcome is: Every committee assumes another committee has it.

This is not theoretical; board advisors and disclosure research are already highlighting the structure question (“Which committee oversees AI?”) as a central governance issue.

Source: Harvard Law Corporate Governance Forum

What Boards Miss When AI Falls Between Committees

1) “AI-Washing” becomes a governance and compliance exposure

Real-world example: Delphia claimed from 2019 to 2023 that it used AI and machine learning to analyze client data for investment decisions, stating it could "predict which companies and trends are about to make it big," but admitted in an SEC examination that it had never actually used client data or created the algorithm it claimed to have. Even after correcting its disclosures, Delphia continued misleading clients in emails through August 2023, claiming their data was "helping train our algorithm for pursuing ever better returns".

The SEC has already brought enforcement actions over misleading AI-related claims by investment advisers, a sure indicator that “AI claims” are becoming a compliance target. SEC

Where it falls between committees

  • Audit: “Not our scope unless it affects financial reporting.”

  • Risk/Compliance: “We didn’t approve those product/IR statements.”

  • Governance: “We assumed management had controls.”

2) Data lineage is treated as IT “plumbing” until it becomes a regulatory event

The scenario: Teams paste sensitive data into AI tools, or AI systems pull from internal repositories via RAG/agents. IT thinks "tooling," Privacy thinks "policy," Audit wants "evidence," and Security focuses on perimeter controls. All departments take different perspectives that ultimately create gaps.

Organizations such as NACD and Deloitte board guidance increasingly emphasize governance structures and repeatable oversight precisely because AI risk spans multiple control domains. Deloitte

Where it falls between committees

  • Tech/Cyber: “We secured the system, but we don’t own privacy.”

  • Risk/Compliance: “We wrote rules, but we can’t prove adherence.”

  • Audit: “We can’t audit what isn’t logged.”

3) LLM-specific security threats don’t cleanly land in traditional cyber oversight

Boards often have cyber oversight built around classic threats. LLM systems introduce distinct failure modes that traditional frameworks miss.

Real-world examples:

  • Aikido Security discovered "PromptPwnd," a vulnerability allowing attackers to hijack AI agents in GitHub Actions and GitLab CI/CD pipelines, confirming at least five Fortune 500 companies were impacted

  • In 2024, dozens of custom GPT-powered bots deployed via OpenAI's API were found vulnerable to prompt injection, causing them to reveal private system instructions and API secret keys

Increasingly, board publications and committee agenda guidance include AI alongside cyber and regulatory matters. Deloitte

Where it falls between committees

  • Tech/Cyber: “We oversee cyber, but this is more ‘product AI.’”

  • Product/Strategy: “It’s a feature decision, not a security decision.”

  • Audit: “We’ll look later, once it’s in controls testing.”

Red Flags To Watch For

Warning signs that AI oversight is falling through the cracks:

  • Board materials mention "AI" but don't specify which AI systems

  • No single executive can list all AI tools in use across the company

  • AI risk discussions happen only when incidents occur

  • Committee minutes show AI discussed, but no follow-up actions were assigned

  • Marketing makes AI claims that compliance hasn't vetted

  • IT deploys AI tools without privacy or security review

A Practical AI Risk Map a Board Can Adopt

Here’s a lightweight approach that works for boards without creating bureaucracy.

Step 1: Define 6 AI risk buckets

  1. Regulatory & compliance (privacy, consumer protection, sector rules, AI Act requirements)

  2. Controls & auditability (logs, traceability, evidence, AI claims substantiation)

  3. Security & resilience (prompt injection, data leakage, incidents, LLM-specific threats)

  4. Third-party/model risk (vendors, model changes, contracts, GPAI transparency)

  5. Human capital & decision integrity (bias, HR uses, oversight, workforce impacts)

  6. Reputation & disclosure (AI washing, transparency, trust, investor communications)

Board-focused organizations strongly reinforce the need for structured governance across AI’s lifecycle and impacts. Deloitte

Step 2: Assign committee (not shared) ownership

A clean pattern many boards are moving toward (adapt as needed):

  • Audit Committee: controls, audit trails, reporting integrity, internal audit plan

  • Risk/Compliance (or full board): regulatory mapping, third-party risk posture, ethics

  • Tech/Cyber (or Audit if no tech committee): AI security threat model + incident readiness

  • Comp/People: HR uses, workforce impacts, training, incentives

  • Governance/Nominating: AI policy, charter language, accountability design

This approach aligns with what disclosure research has revealed: AI oversight is often placed in audit, but other committees may drive more explicit governance formalization. EY

Step 3: Require “evidence, not assurances” reporting

For each bucket, the board should ask for:

  • Metric: % of AI use cases with documented approval workflows

  • Control test: Internal audit reviewed 10 AI implementations; 8 had complete documentation

  • Near-miss: Marketing team nearly deployed AI chatbot without data privacy review

  • Forward risk: Planned expansion of AI agents will require new monitoring capabilities

Two concrete governance upgrades boards can make now

1) Add AI oversight into committee charters

A recurring finding is that boards discuss AI, but don’t always go the next step to formalize it in their charters and oversight mechanisms – this is a missed opportunity. thecaq.org

Action: Add 3–5 lines to relevant charters defining AI oversight scope and escalation paths.

Example language for Audit Committee charter:

"The Committee oversees the company's use of artificial intelligence technologies in financial reporting, internal controls, and disclosure processes, including verification of AI-related claims in investor communications and assessment of AI-related risks to financial statement integrity."

2) Create a cross-committee "AI Risk Council" (board-level cadence)

Not a new committee - a quarterly synthesis that prevents the "everyone thought someone else had it" problem:

  • each committee reports AI risk changes in its domain

  • the full board sees the integrated map

  • gaps are explicitly assigned

Questions abound about the best approach, pushing the question of whether boards need an existing committee, a new tech committee, or a working group to ensure adequate attention. Harvard Law Corporate Governance Forum

The board question that prevents “between committees” failure

End every AI discussion with:

“Which committee owns this risk, what evidence will we review next quarter, and who has authority to pause AI use if controls fail?”

Developing a simple discipline turns AI from “innovation talk” into “governed capability.”

Sources for the article:

Board Governance & AI Oversight Guidance

Board Role Examples & Commentary

Broader Context on AI Regulation & Risk

  1. Wikipedia: EU Artificial Intelligence Act Overview of a major emerging regulatory framework boards should consider.

  2. FAIR Institute: AI governance and risk quantification for boards. Discussion of structured risk models integrating AI compliance.

Optional Academic References (for a deeper dive into frameworks)