We all know political polls are increasingly unreliable. That's why forecasting outfits like 538 aim to separate "the signal" from "the noise" by assigning grades to pollsters and weighting polls from the best-graded pollsters more heavily.
It seemed like a good plan. So why did it backfire so spectacularly?
Flip Pidot, Peter Hurford, and Harry Crane investigate Nate Silver's utterly failed attempt to distinguish good pollsters from bad.
If you'd like to see more independent forecasting and unbiased polling in your world (and gain exclusive access to it before anyone else), please consider supporting us on Patreon at patreon.com/openmodel.