
How to Turn Compliance Metrics into Board-Ready Stories

Ethics and compliance leaders often get only a short slot a few times a year to explain to their Board an E&C program that is complex, long-term, and deals with gray areas. Showing up with a stack of metrics risks looking busy instead of effective. But showing up with anecdotes risks looking unmeasurable. The way through that tension is data with intention, shaped into a narrative that helps the Board make decisions and understand risk.

That is exactly where global real estate services company (and 18-time World’s Most Ethical Companies honoree) JLL is pushing: building a data-driven ethics and compliance approach that goes beyond reporting the past and starts positioning the program as a forward-looking management tool.

Kendall Mills, Head of Ethics and Compliance, Americas at JLL, puts it plainly: regulators expect stronger data literacy in compliance, and leadership teams increasingly expect the same. The real opportunity is to use that pressure to your advantage.

If you are trying to elevate how your program is understood at the top of the house, here is what JLL’s approach makes clear.

Why the compliance “data moment” is real and not going away

Two forces are converging.

Externally, regulators are explicit that effective programs do not just publish policies and run training. They are expected to monitor, test, and adapt using data. Kendall referenced the DOJ’s focus on analytics and trend identification as a signal to the market: it is no longer enough to say you have controls. You need to show you can detect breakdowns, measure effectiveness, and respond intelligently.

Internally, many companies are already investing heavily in technology, AI, and enterprise data strategy. JLL’s differentiator is that the ethics and compliance function is integrated into that broader data strategy, with access to tools and talent through its internal technology capabilities. The lesson is bigger than JLL: when your company is “future-ready” in tech, compliance cannot be the last function telling stories in PowerPoint alone.

The practical takeaway: data-driven compliance is no longer a special project. It is becoming table stakes.

The real shift: from reporting metrics to building a data story

Most programs start the same way: you pull benchmark surveys, compare your rates, and report “higher or lower than peers.” That is not wrong. It is just shallow.

JLL’s trajectory is the part worth copying:

  1. Start with basic reporting and benchmarking (risk and investigations are common entry points).
  2. Add specialist capability (not just more compliance generalists).
  3. Build infrastructure that makes analysis possible, not just dashboards.
  4. Move from descriptive to predictive, using layered datasets responsibly.

One of the smartest decisions Kendall described was a staffing fork in the road many leaders face: Do you hire an ethics professional and teach them analytics, or hire an analytics professional and teach them ethics?

JLL chose the latter, hiring analytics expertise and teaching the ethics side, and then asked:

  • What data architecture do we need to sustain this?
  • Where should data live so we can trust it year over year?
  • How do we anonymize and layer datasets to see patterns without compromising confidentiality? (See the sketch after this list.)
  • What would “predictive” compliance even mean in our context?

The takeaway: data maturity is an operating model issue, not a dashboard issue.

What “impact measurement” looks like when you stop fooling yourself

Many programs measure what is easy: completion rates, hotline volume, cycle times, policy acknowledgments. Those are useful, but they are often proxies, not outcomes. JLL’s experience is a good reminder that data can do more than validate what you already believe. It can challenge it.

Kendall shared a training example that should make every compliance leader pause:

  • JLL can see training completion across a global footprint (they operate in more than 85 countries).
  • Through partnership with HR, they can compare training patterns with organizational indicators like employee survey results.
  • And they found something counterintuitive: some business lines or countries with strong training completion did not show the strongest culture signals.

That is not a failure of training. It is evidence that training completion is not the same as training effectiveness. Data forces the better question: What is happening in that environment that makes training “not click,” even when it is completed?
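As a rough illustration of the kind of comparison described above, here is a minimal sketch with invented country-level numbers (none of these figures are JLL's): correlate completion rates with a culture-survey index, then flag countries where the two diverge.

```python
import pandas as pd

# Hypothetical country-level data: completion rate and a 1-5 culture index
df = pd.DataFrame({
    "country":    ["US", "UK", "BR", "IN", "DE", "JP"],
    "completion": [0.98, 0.97, 0.99, 0.96, 0.91, 0.98],
    "culture":    [4.2,  4.0,  2.9,  3.1,  4.1,  3.0],
})

print("correlation:", df["completion"].corr(df["culture"]).round(2))

# Flag countries where training is "done" but may not be landing:
# high completion paired with below-median culture signals
flagged = df[(df["completion"] >= 0.95) & (df["culture"] < df["culture"].median())]
print(flagged[["country", "completion", "culture"]])
```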

This is where data becomes a credibility engine with the Board. You are not saying “our completion rate is 98%.” You are saying, in language the Board understands:

  • “Here is where the program is working as intended.”
  • “Here is where the signals conflict, and what we think that means.”
  • “Here is what we are changing as a result.”

A board-ready framework: Data → Meaning → Decision

If you want your metrics to land in the Boardroom, treat them as inputs to decisions, not outputs of activity. Here is a simple structure that holds up well in short Board windows:

1) What we see (the signal). Pick a small set of metrics that matter: not everything you can measure, but the things that change risk. (A computation sketch follows this list.)

  • Substantiation rates by risk area (not just allegation volume)
  • Repeat issue clusters by geography or business line
  • Time to contain high-severity issues (not just time to close)
  • Training completion and post-training indicators (survey items, manager follow-through, local control adoption)
  • Third-party due diligence exceptions and where they concentrate
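A couple of these signals are simple enough to compute directly from a case extract. The sketch below uses invented data and illustrative field names; any real case-management schema will differ.

```python
import pandas as pd

# Hypothetical case extract; field names are illustrative, not a vendor schema
cases = pd.DataFrame({
    "risk_area":     ["COI", "COI", "Fraud", "COI", "Fraud", "Harassment"],
    "substantiated": [True, False, True, True, False, True],
    "severity":      ["high", "low", "high", "low", "high", "high"],
    "opened":    pd.to_datetime(["2025-01-02", "2025-01-10", "2025-02-01",
                                 "2025-02-15", "2025-03-01", "2025-03-12"]),
    "contained": pd.to_datetime(["2025-01-09", "2025-02-20", "2025-02-10",
                                 "2025-03-01", "2025-04-15", "2025-03-20"]),
})

# Substantiation rate by risk area (not just allegation volume)
print(cases.groupby("risk_area")["substantiated"].mean().round(2))

# Time to contain high-severity issues (not just time to close)
high = cases[cases["severity"] == "high"]
days = (high["contained"] - high["opened"]).dt.days
print("median days to contain, high severity:", days.median())
```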

2) What it means (the insight). Translate the metric into a risk statement or culture statement.

  • “We are seeing concentrated conflict-of-interest cases in two growth markets.”
  • “Speak-up volume is stable overall, but retaliation-related concerns are rising in one business line.”
  • “High completion rates are not correlating with culture indicators in three countries.”

3) What we are doing (the intervention). Tie activity directly to the insight.

  • Targeted leader enablement in the affected region
  • Changes to training format, cadence, or language
  • Control redesign or monitoring adjustment
  • Focused auditing or site-level assessment

4) What we expect next (the forward look). This is what Boards want and what many programs skip.

  • “We expect substantiation to rise in the short term because detection is improving.”
  • “We expect fewer repeat COI cases within two quarters if local pre-approval controls take hold.”
  • “We will know this worked if these two indicators move together.”

This is how you stop being the function that reports numbers and become the function that helps leadership steer.
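That last test, “these two indicators move together,” can be made mechanical. Here is a minimal sketch with invented quarterly figures, checking whether both indicators moved in the expected direction each quarter.

```python
import pandas as pd

# Hypothetical quarterly indicators tied to one intervention
q = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "repeat_coi_cases": [14, 12, 7, 5],             # expected to fall
    "preapproval_usage": [0.40, 0.55, 0.78, 0.85],  # expected to rise
}).set_index("quarter")

# Quarter-over-quarter changes; the first row has no prior quarter
changes = q.diff().dropna()
worked = (changes["repeat_coi_cases"] < 0) & (changes["preapproval_usage"] > 0)
print("indicators moved together in", int(worked.sum()), "of", len(worked), "quarters")
```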

How to get started without boiling the ocean

Here is a practical starter plan Kendall recommends:

Step 1: Inventory the data you already have. (A minimal inventory structure is sketched after these questions.)

  • What systems store it?
  • Who owns the data fields?
  • What definitions are inconsistent across regions?
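One lightweight way to capture that inventory, sketched here as a hypothetical structure rather than a prescribed tool, is a field-level register that makes definition drift visible:

```python
from collections import defaultdict

# A minimal, hypothetical inventory: capture system, owner, and the local
# definition of each field so inconsistencies become visible
inventory = [
    {"field": "case_type", "system": "HotlineVendor", "owner": "E&C Ops",
     "definition": "caller-selected category"},
    {"field": "case_type", "system": "CaseMgmtTool", "owner": "Investigations",
     "definition": "investigator-assigned category"},
    {"field": "hire_date", "system": "HRIS", "owner": "HR Data",
     "definition": "first day worked"},
]

# Surface fields that are defined differently across systems
definitions = defaultdict(set)
for row in inventory:
    definitions[row["field"]].add(row["definition"])
for field, variants in definitions.items():
    if len(variants) > 1:
        print(f"inconsistent definition for {field!r}: {sorted(variants)}")
```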

Step 2: Clean up definitions now (future-you will thank you). Data integrity is not glamorous, but it is what keeps you from presenting numbers you cannot defend six months later. (See the sketch after this list.)

  • Define categories once (and enforce them).
  • Lock basic taxonomy for case types and root causes.
  • Document how you handle duplicates, reclassifications, and reopened cases.
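Here is a minimal sketch of what “define once and enforce” can look like, with a hypothetical taxonomy and a toy dedup rule. The point is that validation happens at load time, not at reporting time.

```python
import pandas as pd

# Locked, hypothetical taxonomy: the single allowed set of case types
CASE_TYPES = {"COI", "Fraud", "Harassment", "Retaliation", "Other"}

cases = pd.DataFrame({
    "case_id":   ["C1", "C2", "C2", "C3"],
    "case_type": ["COI", "fraud", "fraud", "Conflicts"],  # drifted labels
})

# Normalize drifted labels, then reject anything outside the taxonomy
normalized = (cases["case_type"].str.strip().str.title()
              .replace({"Coi": "COI", "Conflicts": "COI"}))
bad = ~normalized.isin(CASE_TYPES)
if bad.any():
    raise ValueError(f"unmapped case types: {sorted(normalized[bad].unique())}")

# Document and apply one dedup rule: keep the latest record per case_id
cases["case_type"] = normalized
cases = cases.drop_duplicates(subset="case_id", keep="last")
print(cases)
```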

Step 3: Benchmark carefully, but do benchmark. Benchmarking is not just about “Are we good?” It helps you interpret counterintuitive signals, like the reality that more reports can indicate more trust, not more misconduct.
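One concrete benchmarking habit is to normalize before comparing. A tiny sketch with invented numbers:

```python
# Normalize report volume before benchmarking: raw counts mislead
reports = {"us": 310, "brazil": 45}
headcount = {"us": 21000, "brazil": 2500}

for region in reports:
    per_1000 = 1000 * reports[region] / headcount[region]
    print(f"{region}: {per_1000:.1f} reports per 1,000 employees")
```

Here the region with far lower raw volume has the higher reporting rate, which, per the point above, may signal more trust rather than more misconduct.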

Step 4: Build partnerships that let you layer insights responsibly. The HR partnership Kendall described is the model: you can compare indicators while still protecting confidentiality through anonymization and appropriate access controls.

Step 5: Use AI responsibly, but use it. Trend analysis, clustering, and anomaly detection can help you see what humans miss, especially at global scale. The guardrails matter: privacy, bias, access, and explainability. But refusing to engage will not age well.
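As one explainable example of the anomaly-detection idea (a sketch of one simple method among many, not a recommendation), a robust median-based test can flag an unusual spike in monthly report volume:

```python
import pandas as pd

# Hypothetical monthly report counts for one business line
counts = pd.Series(
    [22, 25, 24, 23, 26, 24, 25, 48, 23, 24],
    index=pd.period_range("2025-01", periods=10, freq="M"),
)

# Flag months more than 3 scaled median absolute deviations from the
# median: robust to outliers and easy to explain to stakeholders
med = counts.median()
mad = (counts - med).abs().median()
anomalies = counts[(counts - med).abs() > 3 * 1.4826 * mad]
print(anomalies)
```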

Key takeaways

  1. Data is now part of compliance credibility. Regulators and Boards both expect it.
  2. The goal is not dashboards. The goal is insight that changes decisions and reduces risk.
  3. Staffing choices shape maturity. Adding analytics skill can change the questions your function asks.
  4. Training metrics are not outcomes. Pair them with culture indicators to avoid false confidence.
  5. Boards want meaning, not measurement. Your job is to interpret, not just report.

Where this conversation is going next

If you want to go deeper on how to communicate program impact in a way that resonates with directors, the Global Ethics Summit is putting this topic front and center.

At the 2026 Global Ethics Summit in Atlanta (March 30–31, with a dedicated BELA Day on March 29), one of the early Main Stage sessions is “Speak the Language of the Board with Data-Driven Stories.” Kendall Mills will be joined by leaders from Mitratech, Amgen, Circana, and AT&T to dig into KPIs, outcomes, and narrative construction that works in real Board settings.

Watch Kendall’s full interview on the Ethicast.