Benchmark calculations happen after all of the question, pillar, and survey scores have been calculated, because they depend on that data.
Weighted scoring for pillars, surveys, and benchmarks relies on the same logic as above, with the addition of the following.
Favorability vs. Benchmark
When comparing the favorability breakdown to the benchmark, the current math does not produce parity between the results. The example below uses a hypothetical set of benchmark questions to show the disparity between taking the mean of the per-question results and dividing total favorable responses by total responses.
favorable responses / total responses = score
160/200 = 0.8
92/100 = 0.92
76/100 = 0.76
210/300 = 0.7
450/500 = 0.9
950/1000 = 0.95
Mean of the scores (the current question benchmark calculation):
(0.8+0.92+0.76+0.7+0.9+0.95)/6 = 0.83833
Sum of favorable responses divided by the sum of total responses (the proposed calculation):
(160+92+76+210+450+950)/(200+100+100+300+500+1000) = 0.88091
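The two calculations above can be sketched in a few lines of Python. This is an illustrative sketch, not production code; the question data is the hypothetical set from the example, and the function names are invented for clarity.

```python
# Hypothetical benchmark questions as (favorable responses, total responses).
questions = [
    (160, 200),
    (92, 100),
    (76, 100),
    (210, 300),
    (450, 500),
    (950, 1000),
]

def mean_of_scores(questions):
    """Current calculation: average the per-question favorability scores."""
    scores = [favorable / total for favorable, total in questions]
    return sum(scores) / len(scores)

def pooled_ratio(questions):
    """Proposed calculation: total favorable over total responses."""
    total_favorable = sum(favorable for favorable, _ in questions)
    total_responses = sum(total for _, total in questions)
    return total_favorable / total_responses

print(round(mean_of_scores(questions), 5))  # 0.83833
print(round(pooled_ratio(questions), 5))    # 0.88091
```

The gap (roughly 4 points here) comes from the mean treating every question equally regardless of response volume, while the pooled ratio weights each question by how many responses it received.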
Additionally, this means that if we extend the favorability breakdown to pillars and surveys, the disparity between the benchmark and % favorable will become evident.