The biggest problem with reporting on political polls

In my column over the weekend, I once again warned about the “unskewed” polling conspiracy theory regarding weighting polls based on party ID. Read that piece or this primer from Pew if you’d like more info on why focusing on a difference between reported party identification numbers from a survey and voter registration statistics is a bad idea. It’s a fallacy that I’ve written about many times over the past several years, but it keeps popping up.

I was impressed by the quick and thoughtful reaction to the column from Press Herald reporter Steve Mistler, who wrote a blog post in response and noted that highlighting party ID was probably a mistake.

Unfortunately, Mistler’s colleague Kevin Miller apparently didn’t read his post. Yesterday he wrote a piece about a Rasmussen poll, once again claiming that it “oversamples” Democrats and Republicans compared to party registration numbers.

There are a couple of additional notes I should make about Mistler’s piece. First, the words “credulous” and “parrot” in the headline and section header of my column were added by an editor and don’t quite reflect my views. I think the text of the piece is more nuanced and accurate.

Second, one thing I didn’t have space for in the column was an objection to the term “oversampling,” which Mistler used in his original piece and again in his blog post, and which appeared in the headline of the Bangor Daily News article I also took issue with in the column.

Oversampling has a specific meaning in research and statistics – it’s a data analysis technique involving the intentional gathering of more information from certain demographic groups, usually so that you can get more granular information about the opinions of a specific population. (It is then often accounted for in the broader sample through weighting.) It’s not something that just happens. To say that a poll “oversamples” Democrats or Republicans is basically to accuse a pollster of deliberately skewing their results. Unless the technique actually has been employed, that word shouldn’t appear in reporting on polls.
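To make the distinction concrete, here is a minimal sketch of what deliberate oversampling and the corrective weighting step look like. All numbers are hypothetical and chosen only for illustration:

```python
# Hypothetical example: a group is 10% of the population, but a pollster
# intentionally samples it at 30% to get a readable subgroup estimate.
population_share = {"group_a": 0.10, "everyone_else": 0.90}
sample_share = {"group_a": 0.30, "everyone_else": 0.70}

# Design weight = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical candidate support within each group.
support = {"group_a": 0.60, "everyone_else": 0.45}

# The unweighted topline is skewed toward the oversampled group...
unweighted = sum(sample_share[g] * support[g] for g in support)

# ...while weighting recovers the population-level figure.
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(round(unweighted, 3))  # 0.495 -- inflated by the oversample
print(round(weighted, 3))    # 0.465 -- matches the population mix
```

The point is that the oversample and the weights are both deliberate design choices made before fielding the survey, not something that “just happens” to a topline number.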

As I mentioned in the column, the best way to report on polling is to give broader context by providing polling averages where available. Here, for instance, is the Huffpost Pollster trend chart on the Maine governor’s race (not yet including today’s results from Rasmussen). If you want to take the temperature of a race, this kind of aggregator is your best thermometer:

[Chart: Maine Governor 2014 polling]
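The core idea behind an aggregator can be sketched in a few lines. The numbers below are hypothetical, and real aggregators like Huffpost Pollster use more sophisticated trend estimates, but the intuition is the same:

```python
# Hypothetical results (one candidate's share, in %) from several
# recent surveys of the same race, from different pollsters.
recent_polls = [44.0, 47.5, 41.0, 45.5, 43.0]

# Any single poll can be an outlier; a simple average smooths out
# sampling noise and individual pollsters' house effects.
average = sum(recent_polls) / len(recent_polls)
print(round(average, 1))  # 44.2
```

A single poll in that list might tell a dramatic story in either direction; the average is the duller but far more reliable number.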

If a reporter wants to provide some background on an individual poll (and doesn’t have a deep background in polling methodology), their best bet is probably to discuss that pollster’s track record of accuracy. For instance, a more relevant and useful point to make about the Rasmussen poll than talking about party ID numbers would be to mention their well-documented institutional bias or “house effect” in past elections in favor of Republican candidates.
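A house effect is just a pollster’s persistent average lean relative to other surveys or to final results, and readers can discount for it mentally. A tiny sketch, with entirely hypothetical numbers:

```python
# Hypothetical: this pollster's margins have historically leaned an
# average of 2 points toward one party's candidates ("house effect").
reported_margin = 5.0  # hypothetical margin reported by this pollster
house_effect = 2.0     # hypothetical average past lean, same direction

# Discounting the reported margin by the historical lean gives a
# rough, back-of-the-envelope corrected figure.
adjusted_margin = reported_margin - house_effect
print(adjusted_margin)  # 3.0
```

This is obviously crude, but it shows why a pollster’s track record is far more informative context than comparing a survey’s party ID breakdown to registration statistics.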

Thanks for indulging my polling wonkery. I’ll leave you with one of my favorite takes on polling and party ID:

Mike Tipping

About Mike Tipping

Mike is Maine's longest-writing political blogger and explores state politics and policy with a focus on analysis and explanation. He works at the Maine People's Alliance and Maine People's Resource Center.