CQ Scoring Methodology

The following describes how questions, Pillars, and overall scores are calculated for your organization and the Benchmark within Ethisphere’s Culture Quotient application.

Overview

The new calculations use question scores as a foundation to build pillar, survey, and benchmark calculations. We determine a question’s score as follows:

( (number of favorable responses) / (total responses) ) * 100
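As a rough illustration, the formula might be computed like this in Python; the Response shape and the is_favorable flag are hypothetical stand-ins, not the application’s actual data model.

    from dataclasses import dataclass

    @dataclass
    class Response:
        is_favorable: bool  # hypothetical flag marking a favorable answer

    def question_score(responses: list[Response]) -> float:
        # (number of favorable responses) / (total responses) * 100
        favorable = sum(1 for r in responses if r.is_favorable)
        return favorable / len(responses) * 100

    # e.g. 3 favorable out of 4 responses -> 75.0
    print(question_score([Response(True), Response(True),
                          Response(False), Response(True)]))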

Questions

A question’s benchmark is calculated with the same equation, except that the data set comprises all responses that meet the following criteria (a filtering sketch follows the list):

  • They share the same canonical ancestor as the question
  • They are from a “benchmark” survey that is NOT archived
  • They are NOT from the “current” survey that the question belongs to (scoping the benchmark)
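A minimal sketch of that scoping, assuming hypothetical field names (canonical_id, survey, is_benchmark, is_archived) in place of the real schema:

    def benchmark_responses(question, all_responses):
        # Collect the responses eligible for this question's benchmark.
        return [
            r for r in all_responses
            if r.canonical_id == question.canonical_id  # same canonical ancestor
            and r.survey.is_benchmark                   # a "benchmark" survey...
            and not r.survey.is_archived                # ...that is not archived
            and r.survey is not question.survey         # not the "current" survey
        ]

The question’s benchmark is then the same favorability equation applied to this filtered set.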

Pillars

Pillar scores are calculated by finding the mean score of all questions underneath the pillar.

Pillar benchmark scores are determined by calculating the mean of question benchmark scores for questions that meet the following criteria (see the sketch after this list):

  • They are a part of the “current” survey
  • They are a part of the “current” pillar
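A sketch of both pillar calculations under the same illustrative assumptions (attribute names like score, benchmark_score, and pillar are hypothetical):

    from statistics import mean

    def pillar_score(pillar_questions):
        # Mean of all question scores underneath the pillar.
        return mean(q.score for q in pillar_questions)

    def pillar_benchmark(current_survey, pillar):
        # Only questions in both the current survey and the current pillar count.
        questions = [q for q in current_survey.questions if q.pillar is pillar]
        return mean(q.benchmark_score for q in questions)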
Surveys

Survey scores are calculated by finding the mean score of all questions underneath the survey.

Survey benchmark scores are determined by calculating the mean of question scores that meet the following criteria:

  • They are NOT a part of the “current” survey (thereby scoping them out)
  • They are from surveys designated as being included in the benchmark, and NOT archived
  • They share canonicals with the current survey

If a survey only uses questions 1-40 of 50 possible questions, the benchmark only includes questions that share the same canonical ancestor as questions 1-40; any unused questions are excluded from the scoring, as sketched below.
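Combining the survey benchmark criteria with that canonical scoping, a hedged sketch of the filter (field names again hypothetical):

    def survey_benchmark_responses(current_survey, all_responses):
        # Only canonical ancestors actually used by the current survey count.
        used_canonicals = {q.canonical_id for q in current_survey.questions}
        return [
            r for r in all_responses
            if r.survey is not current_survey      # scope out the current survey
            and r.survey.is_benchmark              # designated for the benchmark...
            and not r.survey.is_archived           # ...and not archived
            and r.canonical_id in used_canonicals  # shares a canonical in use
        ]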

Weighting

Weighting is used to manage the impact of individual scores on the overall calculations that encompass them. Without weighting, every score counts equally. In reality, some questions are less meaningful, so low scores on them have a needless negative impact on the overall calculation.

Weighted scoring for pillars, surveys, and benchmarks relies on the same logic as above, with the addition of the following.

Canonical questions are given a weight between 0 and 10 that is converted to a 0-1 decimal scale: a weight of 0 means the question is not included in the score calculation, and a weight of 10 means it counts for 100% of its value.

When calculating a weighted score, each question score is multiplied by its canonical’s weight, then the total is divided by the sum of the weights. For example, take the following eight scores:

Score   Weight
80      1.0
60      0.8
90      1.0
20      0.1
70      0.5
30      0.1
15      0.0
85      1.0
-----   ------
450     4.5

The non-weighted mean here would be:

(80+60+90+20+70+30+15+85)/8 = 56.25

The weighted mean is calculated as follows. The score of 15 is removed because it was given a weight of 0, so it contributes nothing to the calculation.

(80 * 1.0) = 80
(60 * 0.8) = 48
(90 * 1.0) = 90
(20 * 0.1) = 2
(70 * 0.5) = 35
(30 * 0.1) = 3
(85 * 1.0) = 85

total: 343
weight sum: 4.5

Weighted Score = 343 / 4.5 = 76.22
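The same walkthrough as a runnable sketch. Weights here are assumed to already be on the 0-1 scale (a raw 0-10 weight would simply be divided by 10); note that a 0-weight entry contributes nothing to either sum, which is equivalent to removing it.

    def weighted_mean(scores_and_weights):
        total = sum(score * weight for score, weight in scores_and_weights)
        weight_sum = sum(weight for _, weight in scores_and_weights)
        return total / weight_sum

    data = [(80, 1.0), (60, 0.8), (90, 1.0), (20, 0.1),
            (70, 0.5), (30, 0.1), (15, 0.0), (85, 1.0)]
    print(round(weighted_mean(data), 2))               # 76.22
    print(weighted_mean([(s, 2.0) for s, _ in data]))  # 56.25 with equal weights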

With this system, if all questions are given the same non-zero weight, the resulting mean score is identical to the unweighted mean. Given the above example, if all 8 questions were given a weight of 2, the calculation would still result in 56.25. This means the driving factor of the weighting is the delta between weights, or the use of 0, since that removes a question entirely from the calculation.

Additional Notes

Benchmark calculations happen after all of the question, pillar, and survey scores have been calculated as they are reliant on that data.

Favorability vs. Benchmark

When comparing the favorability breakdown to the benchmark, the current math will not produce perfect parity between the results. This is outlined below, where a hypothetical set of benchmark questions shows the disparity between calculating a mean of the individual scores and calculating total favorable responses over total responses.

favorable responses / total responses = score

160/200 = 0.8
92/100 = 0.92
76/100 = 0.76
210/300 = 0.7
450/500 = 0.9
950/1000 = 0.95

Mean of the scores (the current question benchmark calculation):

(0.8 + 0.92 + 0.76 + 0.7 + 0.9 + 0.95) / 6 = 0.83833

Sum of favorable responses divided by the sum of total responses (the proposed calculation):

(160+92+76+210+450+950) / (200+100+100+300+500+1000) = 1938/2200 = 0.88091
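The disparity can be reproduced with a short sketch over the same hypothetical data:

    # (favorable, total) pairs for the hypothetical benchmark questions above.
    pairs = [(160, 200), (92, 100), (76, 100),
             (210, 300), (450, 500), (950, 1000)]

    mean_of_scores = sum(f / t for f, t in pairs) / len(pairs)
    pooled = sum(f for f, _ in pairs) / sum(t for _, t in pairs)

    print(round(mean_of_scores, 5))  # 0.83833 -- current benchmark calculation
    print(round(pooled, 5))          # 0.88091 -- proposed calculation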

Additionally, this means that if we extend the favorability breakdown to pillars and surveys, the disparity between the benchmark and % favorable will become evident.