
Third‑Party AI Governance: The Data, Risks, and Board Reporting 

Julia Petre

Buying AI, Borrowing Risk

A June 2025 study by MIT NANDA finds that external AI partnerships roughly double the odds of successful deployment. The headline findings are provocative and have gone viral. For GCs and CECOs, they are also clarifying: most enterprise AI value will arrive via vendors and partners, not homegrown systems. That is a risk if third-party AI governance is undercooked. 

Ethisphere’s data shows that E&C teams are not asleep at the wheel: 77% of teams have a significant or coordinating role in responsible AI, 57% have trained the employee population, and 47% already have responsible-AI experience on staff. That is clear evidence that E&C is at the AI “table.” But there is an undeniable gap with third parties, which is a major concern given the MIT findings: while 84% of E&C teams own third-party risk management, only 15% include an AI clause in their third-party code of conduct, and just 14% have audited even half of their vendors. That’s a material gap in exactly the place where AI risk concentrates. 

This is a governance distribution problem. E&C authority is centralized, but that is only one element of the equation, especially with AI. AI is entering the enterprise in a decentralized way, through procurement, business units, and SaaS. 

The Next 18 Months: Board Reporting Becomes Mandatory 

Regulators won’t accept “the vendor did it.” The EU AI Act already assigns obligations to deployers and has extraterritorial reach. U.S. requirements are still a patchwork, but they are trending toward documented controls and evidence of oversight. We expect board-level AI reporting to become standard by the FY27 planning cycle, and E&C teams need to start planning for that level of reporting now. 

Here’s what we expect you’ll need to report on and be able to show on a single page: 

  • Coverage: % of AI use cases inventoried, % with completed risk reviews, and time from intake to approval. 
  • Controls: % of use cases with human-in-the-loop (HITL) review, exception/incident rates, and drift or model-change notifications. 
  • Third-party assurance: % of strategic vendors with AI clauses, % completing questionnaires, remediation turnaround, and audit sampling results. 
  • Capability: % of employees trained and number of E&C SMEs with responsible-AI (RAI) backgrounds. 

If you can’t quantify it, you don’t control it. AI governance without metrics is theater. 
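For teams that want to see how little it takes to operationalize this, here is a minimal sketch of the single-page scorecard as a data structure. It is written in Python, and every field name, method, and number below is illustrative, not drawn from the MIT or Ethisphere data.

```python
from dataclasses import dataclass


@dataclass
class AIBoardScorecard:
    """One-page AI governance metrics; all field names are hypothetical."""
    use_cases_total: int
    use_cases_inventoried: int            # Coverage
    use_cases_risk_reviewed: int
    median_days_intake_to_approval: float
    pct_with_hitl: float                  # Controls (0-100)
    incidents_this_period: int
    vendors_strategic: int                # Third-party assurance
    vendors_with_ai_clause: int
    vendors_questionnaire_complete: int
    pct_employees_trained: float          # Capability (0-100)
    ec_smes_with_rai_background: int

    def coverage_pct(self) -> float:
        # Share of known AI use cases that have made it into the inventory.
        return 100 * self.use_cases_inventoried / max(self.use_cases_total, 1)

    def ai_clause_pct(self) -> float:
        # Share of strategic vendors whose contracts include an AI clause.
        return 100 * self.vendors_with_ai_clause / max(self.vendors_strategic, 1)

    def summary(self) -> str:
        # Renders the four categories as four scannable lines for directors.
        return (
            f"Coverage: {self.coverage_pct():.0f}% inventoried, "
            f"{self.use_cases_risk_reviewed}/{self.use_cases_total} risk-reviewed, "
            f"{self.median_days_intake_to_approval:.0f} days intake-to-approval\n"
            f"Controls: {self.pct_with_hitl:.0f}% with HITL, "
            f"{self.incidents_this_period} incidents this period\n"
            f"Third-party: {self.ai_clause_pct():.0f}% of strategic vendors with AI clauses, "
            f"{self.vendors_questionnaire_complete}/{self.vendors_strategic} questionnaires complete\n"
            f"Capability: {self.pct_employees_trained:.0f}% of employees trained, "
            f"{self.ec_smes_with_rai_background} E&C SMEs with RAI backgrounds"
        )


if __name__ == "__main__":
    # Example values only, chosen to show the format rather than any benchmark.
    print(AIBoardScorecard(
        use_cases_total=40, use_cases_inventoried=32, use_cases_risk_reviewed=25,
        median_days_intake_to_approval=12, pct_with_hitl=70, incidents_this_period=2,
        vendors_strategic=60, vendors_with_ai_clause=9, vendors_questionnaire_complete=20,
        pct_employees_trained=57, ec_smes_with_rai_background=3,
    ).summary())
```

The tooling is beside the point; what matters is that each of the four categories reduces to a handful of numbers a director can scan in under a minute.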

The Counterintuitive Risk: “Successful” Vendor AI That Fails Compliance 

Here’s the trap: a vendor tool can “work” in the sense of the MIT success data (the 67%) and still fail your program if it can’t meet your evidentiary burden (no bias or robustness artifacts), can’t support human-in-the-loop review where policy requires it, or can’t be rolled back safely when outputs drift. MIT’s finding is a speed story. E&C’s job is to make it a defensibility story. 

Litmus test for third-party AI governance (use it in every RFP and renewal): 

  • What did you test, and can we see it? Bias, robustness, red-team, and change-control artifacts. 
  • Where can a human stop the machine? Explicit HITL triggers for high-impact outcomes. 
  • What happens when it goes sideways? Rollback plan, data purging, and version pinning. 
  • What are you doing with our data? No training on our data without consent; data location, retention, and deletion SLAs. 
  • Who else touches the system? Subprocessors, notification windows, and audit rights. 

From “At the Table” to “Running the Table” 

Ethisphere’s data is encouraging; E&C has the seat. Now convert it into leverage: 

  • AI policy + Shadow-AI addendum that’s permissioned, not performative. 
  • Single inventory of AI use cases/models that ties to approvals and monitoring. 
  • Risk reviews that are light but testable. You need documented rationale, not folklore. 
  • HITL and QA defined numerically (what’s “good enough”?) with drift triggers. 
  • Third-party AI governance as a standard gate: intake, clauses, assurance, monitoring. 

Peer Proof that Speed and Governance Can Coexist 

  • Palo Alto Networks: “Progress over perfection” using enterprise tools to triage investigations and compress cycle time, with HITL and privilege awareness. It’s governed speed, not cowboy automation. 
  • Verisk: Literacy first (team-wide AI fluency), RAG for enablement, and vendor diligence that demands fairness testing and bias audits. They are outsourcing capability without outsourcing accountability.  
  • Cargill: Internal agents trained on policies/standards; ~90% of leadership visuals for reporting generated via AI (with human review). Translation: AI accelerates board-level reporting while preserving judgment. 

What To Change ASAP 

  • Make “third-party AI governance” a proper noun. The gate is non-negotiable. No AI-enabled vendor goes live without: (a) questionnaire & evidence, (b) contractual protections, (c) go-live assurance, (d) ongoing attestations. 
  • Stand up a one-page AI board report. Start simple and publish it monthly. Directors need to see coverage, controls, and exceptions; that view is what builds assurance. 
  • Normalize HITL. Treat it like SOX for model-assisted decisions: who reviewed what, when, and why. 
  • Train employees for judgment, not prompts. Your team’s comparative advantage is interpretation and escalation, not clever prompts. 

The Strategic Bet 

If external AI partnerships succeed ~2× more often than internal builds, then the immediate risk is not “AI going rogue” inside your walls. The risk is outsourced AI risk moving faster than your governance.  

You can either scale AI and retrofit governance under pressure or scale governance and let AI ride on rails. Given the MIT data (vendor success outpacing internal development) and Ethisphere’s findings (E&C owns TPRM but AI clauses/audits trail), the path is obvious: buy for speed, govern for defensibility.  

In the next 18 months, board-level AI reporting will become standard. Address your third-party exposure now or prepare to brief the board on why your control environment lags your adoption curve.