Cutting Through The (Polling) Noise

It’s the closing weeks of election season, which means I hear a lot of discussion about whether or not a certain poll is accurate… My general read on most of these discussions: if the poll says something the person wants to hear, they find it trustworthy. If the poll says something contrary to that person’s interests, the poll is untrustworthy.

Usually, I cut through the noise and say, ‘don’t trust a single poll, rather look to the aggregates.’ However, this year is slightly different… This year it depends on what aggregates you’re looking at…

Take, for example, the Senate forecast from 538 side-by-side with RealClearPolitics’ forecast:

538 says there’s a 63% chance Democrats win the Senate. Meanwhile, RealClearPolitics projects Republicans to pick up 2 seats…


So, back to my thinking: if you lean Republican, I bet you like RealClear’s projections and think they’re more accurate. If you lean Democratic, I’d hazard a guess you favor 538’s.

To break down these differences very quickly: 538 has an algorithm that weights polls by their historical accuracy, by date (older polls get less influence), by sample size, and I’m sure a few other things. Those weighting rules are drafted by people, and people inherently have biases. I’m not saying 538 is right or wrong; I’m saying it is not taking a straight average of the polling data, for better or worse. Also, 538’s single most likely outcome is actually a 50/50 Senate – which technically means Democrats ‘win’ the Senate, but more accurately shows no change in the Senate.
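To make the “not a straight average” point concrete, here’s a minimal sketch of that kind of weighting. The polls, the half-life, and the quality scores are all invented for illustration – this is not 538’s actual algorithm, just the general shape of one:

```python
# Hypothetical polls for one race: (margin for candidate A in points,
# sample size, days old, pollster quality score 0-1). All values invented.
polls = [
    (+3.0, 800, 2, 0.9),
    (-1.0, 500, 10, 0.6),
    (+1.5, 1200, 5, 0.8),
]

def weighted_average(polls, half_life=14):
    """Weight each poll by recency (exponential decay), sample size,
    and a pollster quality score, then take the weighted mean margin."""
    num = den = 0.0
    for margin, n, days_old, quality in polls:
        recency = 0.5 ** (days_old / half_life)  # older polls count less
        weight = recency * (n ** 0.5) * quality  # sqrt(n): diminishing returns
        num += weight * margin
        den += weight
    return num / den

straight = sum(p[0] for p in polls) / len(polls)
print(f"straight average: {straight:+.2f}, weighted: {weighted_average(polls):+.2f}")
```

Change the half-life or the quality scores and the weighted number moves while the straight average stays put – which is exactly why two aggregators looking at the same polls can disagree.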

RealClear’s new projections are based on the historical accuracy (or inaccuracy) of public polling in each state compared to the actual vote results. That’s how they came up with this little projection table:

… I think this table is smart in what it’s trying to do, but it unfortunately misses the mark.

RealClear is trying to say that how far off the polls were in a given state in past elections should predict how far off the polls are in that state now – in other words, that each state’s polling carries a persistent bias.
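The mechanics of that idea are simple enough to sketch. Assuming (numbers invented) that a state’s polls show candidate A up 2 but have overstated A by an average of 3 points in past cycles, the adjustment looks like this – again, an illustration of the concept, not RealClear’s actual formula:

```python
def adjust_for_bias(current_margin, historical_errors):
    """Shift the current polling margin by the average past error.
    Positive error = polls historically overstated candidate A."""
    avg_error = sum(historical_errors) / len(historical_errors)
    return current_margin - avg_error

# Polls show A +2, but past polls overstated A by 3 on average,
# so the "projected" result flips to A -1.
print(adjust_for_bias(2.0, [4.0, 2.0, 3.0]))  # -1.0
```

The whole method hinges on those historical errors being drawn from comparable races – which is where the problems below come in.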

Couple problems here.

First off, RealClear treats presidential election years and midterms as equivalent, which is a mistake in my estimation.

Second, they should only be comparing Senate races against Senate races, not Senate races against Presidential races, regardless of the election year. (I understand that not every state has a Senate race every cycle – for those, leave ’em blank like they did for 2018.) Comparing past differentials between a Senate race and a Presidential race is just a strange exercise. It would be like saying that because the University of Northern Iowa’s wrestling program has overachieved its initial rankings the past few years, the school’s English program is probably also better than its current rankings… that’s silly.

So, I don’t like this. I also don’t like that a polling aggregator is saying it doesn’t trust its own aggregate of the polls. And I don’t like that Pennsylvania’s aggregate (for example) is consistent across the board right now, yet they say it’s wrong:

Listen, I’m not saying that Oz can’t or won’t win. I’m saying that if ALL of the polls in an aggregate on a polling-aggregation site have one candidate winning, the site shouldn’t declare them all wrong… Or it should stop looking at polling data and start doing something else, like running a ‘fake news’ site or something.

What to Believe

This year, I myself am going to average the two polling aggregates. That is to say, if 538 thinks Democrats will win by a seat or two, and RCP thinks Republicans will win by a seat or two, then at this point I put it at even… which kinda matches up with what I see happening (see Nevada and Pennsylvania).
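For what it’s worth, “average the aggregates” is just arithmetic. A toy version, where the expected Democratic seat changes are my rough reading of the two forecasts described above, not official figures:

```python
# Rough expected Democratic Senate seat change per forecast
# (my reading of the post: 538 ~ D +1 to +2, RCP ~ D -2).
forecasts = {"538": +1.5, "RCP": -2.0}

avg_change = sum(forecasts.values()) / len(forecasts)
print(avg_change)  # -0.25: essentially a wash
```

A seat change near zero is the “even” call: roughly no net movement in the Senate.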