
The Science of Market Research: Why US Election Polls Continue to Fail

I did a lot of my undergraduate and graduate work in market research. I’ve run several studies over the years, eventually working for a time at a market research company. Polls, focus groups, and other sampling methods can accurately predict outcomes if done correctly, but studies are far better at determining what happened in the past than at predicting the future. There are several reasons for this, and, with current technology, I believe there is a way to create a far more accurate predictive model.

Let’s walk through how you sample a population to predict an outcome like this week’s election.

Market Research Isn’t Magic

There is no crystal ball for predicting the future. What market research tries to do is predict an outcome based on what people think they will do in the future. It does this by sampling the population: selecting a smaller, representative group of people who the researcher believes will act as the broader population will.
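As a rough illustration, simple random sampling can be sketched in a few lines of Python. The population and the numbers here are entirely made up for illustration:

```python
import random

def sample_population(population, sample_size, seed=None):
    """Draw a simple random sample without replacement.

    Every member has an equal chance of selection, which is the
    baseline assumption behind treating the sample as representative.
    """
    rng = random.Random(seed)
    return rng.sample(population, sample_size)

# Hypothetical population of 10,000 voters, 52% leaning toward candidate "A".
population = ["A"] * 5200 + ["B"] * 4800
sample = sample_population(population, 1000, seed=42)

# The sample proportion is the poll's estimate of support for "A".
estimate = sample.count("A") / len(sample)
```

In practice, pollsters rarely have this luxury: real samples are drawn from whoever answers the phone or the survey, and the weighting needed to make them representative is where much of the error creeps in.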

On top of a projected result, there is also a reported margin of error, usually stated at a 95% confidence level. That is the + or – x% that generally follows the report of a result. The bigger that number, the less reliable the projected result is. But here is the first big problem: it takes time to do a survey, and there is time after the survey when minds can change.
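That margin of error has a standard back-of-envelope formula for a proportion: z · √(p(1−p)/n), where n is the sample size and z is about 1.96 at 95% confidence. A minimal sketch, with the figures purely illustrative:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a polled proportion at ~95% confidence.

    p=0.5 is the worst case (widest interval); n is the sample size.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll carries roughly a +/-3.1% margin of error.
moe = margin_of_error(1000)
```

Note what the formula does and does not capture: quadrupling the sample only halves the margin, and none of it accounts for people changing their minds after the survey closes, which is the problem discussed next.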

For instance, I was in a focus group (which is another method of predicting future behavior), and I was asked if I’d buy the Chrysler car I was being shown. And I thought the car was beautiful and said yes. But then it took several years to bring the car to market, and by that time, other cars emerged. I liked those a ton better, and I didn’t buy the Chrysler. I didn’t lie; I also had no idea what my future choices might be, and those future choices changed my projected outcome.

The same thing can happen in election surveys. Things happen after the survey is taken that can change whom people vote for and even whether they vote. Variances from predicted outcomes are more troubling for surveys held close to the election, and especially for exit polls. Exit polls should be far more accurate because people report how they already voted and can’t change that vote, so if you get a large unexplained variance, you need to ask whether your sample was flawed or biased, or whether there was voter fraud. Bias can come from how people were sampled: if a significant group of people didn’t want to tell the pollster how they voted, or if their votes weren’t counted, that would produce an error. The first would reflect poorly on the sampling methodology and the pollster; the second would be a criminal act outside the control of the person taking the sample.

But this showcases why you generally get a far more accurate view of what someone did than of what someone will do. In the former case, the decision is locked in, so no external pressure or event will change the outcome. As long as those being sampled are honest and the process being measured is trustworthy, you should get an accurate count. In this election, you had mail-in ballots in large numbers that would have fallen outside the sample and led to predictive errors. You also had problems with the United States Postal Service, which led to lost or missing ballots, and significant efforts to keep people from voting, all of which could have affected the results.

Thus, it was always likely that the predictions would not be accurate.

Steve Jobs’ Better Way

Of the tech leaders, Steve Jobs stood out for not believing in market research or focus groups. Correctly, he believed they were untrustworthy for reasons like those I’ve showcased. Instead, he believed in manipulating the population to arrive at a predictable outcome. If you can control the outcome, you can predict it, which is why, in single-party countries, you can pretty much always predict the outcome.

He was good at crafting marketing programs that drove people to his products. He spent several times what his competitors did on marketing; he aggressively seeded products to influencers, and he had an inordinate influence on key reporters. In short, rather than trying to predict an outcome, he spent his money on ensuring it.

Both political parties, but particularly the Democrats, could learn from Jobs’ approach as it was—based on his dominance of MP3 players, smartphones, and tablets—massively successful.

Wrapping Up

There were many issues with the US election, ranging from the massive mail-in vote to what appeared to be efforts to change the outcome, that rendered the polls mostly invalid.

The combination of voting-method variances, the COVID-19 pandemic, and a fluid information cycle makes the methods that pollsters traditionally use invalid. And, as I noted, predictive surveys are problematic because people aren’t locked into their votes. Ironically, because many people voted well in advance, had those early votes been sufficiently instrumented, we might have had higher accuracy this time; sadly, that was not the case.

This problem does point to a path where polls could be made more accurate and could adjust for external events. A closed-loop process, where voters voted early, got back a report on how their vote was counted, and had a remediation path if it was counted wrong, would secure the voting process. An app that gave people an incentive to report how they voted or will vote, and let them change their answer if their opinion changed, would (if a critical mass of people used it) likely be more predictive. It would also give the political parties an early warning if they were losing potential votes and show which messages were working to win them. A widespread social media platform like Facebook or Twitter, or a cloud provider like Amazon, Google, or Microsoft, could host the app.
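A minimal sketch of what the core record-keeping of such an opinion-tracking app might look like follows. The names and structure here are hypothetical, intended only to show the key idea: a voter can update their reported choice, and every change of mind is captured rather than lost.

```python
from collections import Counter

class VoteTracker:
    """Hypothetical sketch of the closed-loop app described above.

    Voters report an intended (or cast) vote and may update it if they
    change their minds; the running tally and the audit trail of changes
    are the early-warning signal for the campaigns. This is illustrative,
    not a real system design.
    """
    def __init__(self):
        self.votes = {}    # voter_id -> current reported choice
        self.changes = []  # (voter_id, old, new) trail of mind-changes

    def report(self, voter_id, choice):
        old = self.votes.get(voter_id)
        if old is not None and old != choice:
            self.changes.append((voter_id, old, choice))
        self.votes[voter_id] = choice

    def tally(self):
        return Counter(self.votes.values())

tracker = VoteTracker()
tracker.report("v1", "A")
tracker.report("v2", "B")
tracker.report("v1", "B")  # voter v1 changes their mind
```

The `changes` list is the part traditional polls can never see: not just the current snapshot, but who moved, when, and from where, which is exactly the dynamic signal the text argues pollsters are missing.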

But to make these predictions more accurate, you need to better secure the voting process, and you need to instrument the voters with a dynamic tool that captures when they change their minds. Optionally, if you capture why, the resulting data would be invaluable to the campaigns, which would likely co-fund the effort.

Even better would be moving to online voting, which is far more secure and far easier to instrument than the mess of aging, out-of-date voting technologies we currently use. But we’ll leave that for another time.
