On the morning of November 8, 2016, pollsters seemed confident: Hillary Clinton would win the U.S. presidential election. Of course, we all know what happened next.
Like all surveys, polls have their flaws, and it’s important to understand those flaws and their effects on your research results. We’re hearing today about what a poor job the pollsters did, but here’s the truth: their approach may not have been wrong, but the assumptions behind it obviously were.
Similarly, assumptions behind one’s site selection and market analysis decisions can make or break a retailer or other chain business. Election polls and retail research both rely on proven science, but both also assume that humans will behave as they have in the past (or at least as they say they will). When that doesn’t happen, we end up with results like the most recent election.
So, exactly what happened with the election polls vs. reality?
The answer to this is multifaceted and there are certainly different perspectives. Here’s ours—comparing this Presidential election to our industry.
The media had “confirmation bias.” Confirmation bias is the tendency to seek out information that confirms what we already believe or prefer. Admittedly or not, many in the media believed that Clinton was the superior candidate and wanted a Clinton victory—and thus interpreted events in ways that favored that outcome. With the rise of the internet and social media as news sources, it is easier than ever for people to unknowingly avoid news that doesn’t fit their viewpoint. The point? Evidence of a possible Trump victory went unheard, and poll respondents may have been hesitant to admit that they supported Trump because they perceived that most of the public favored Clinton.
In the world of retail research, we’ve seen confirmation bias muddy decision making time and time again—like the restaurant chain that has a “gut feel” that a particular location will be successful, even though the data shows low traffic in the area and demographics that fall outside its target customer profile. Another example is the retailer that chooses the sales forecast best matching its expectations for a site and ignores the other forecasts showing that the site has challenges. Bottom line: it’s easy to ignore facts when they don’t give you the answers you want.
A case of misjudged competition. Pundits also missed the outcome of the election because they incorrectly assessed Trump’s competitiveness. As a candidate with no political experience, and with behavior that many considered “un-Presidential,” Trump was easy to dismiss as unelectable. In many ways, the media counted him out weeks before the election, underestimating how he would adapt and, ultimately, that he could win.
In our world, a retailer contemplating entering a new market has to make assumptions about its most similar competitors. If it does a poor job of this, its projections will fail and it could lose money. Grocer A might be poised to enter a market based on the assumption that Grocer B has a reputation for poor service, but what it might not understand is that many customers like B’s prices and are willing to live with mediocre service. The lesson here is that misunderstanding a competitor and/or their customers can completely throw off your forecasts.
Overreliance on the data. Plain and simple, pollsters’ data was heavily flawed, and most Americans are still trying to understand why. As Harvard Business Review put it, “even as our ability to analyze data has gotten better and better…our ability to collect data has gotten worse. And if the inputs are bad, the analysis won’t be any good either.”
There were sampling problems with this election, without question. What if many Trump voters opted out of polls because they didn’t want to admit they were voting for him or were afraid they wouldn’t remain anonymous? What if Trump’s supporters were less likely to participate in polls because they felt they were rigged or biased?
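A quick simulation makes the mechanism concrete. The sketch below uses entirely made-up numbers (a true 50/50 race, with one candidate’s supporters simply less likely to answer a pollster) to show how differential non-response alone can shift a poll’s result by several points, even with a very large sample:

```python
import random

# Hypothetical illustration of non-response bias (all numbers are invented).
# Assume the electorate is truly split 50/50 between candidates A and B,
# but B's supporters answer pollsters less often than A's supporters.

random.seed(42)

TRUE_SUPPORT_B = 0.50    # actual share of the electorate backing B
RESPONSE_RATE_A = 0.60   # chance an A supporter completes the poll
RESPONSE_RATE_B = 0.48   # chance a B supporter completes the poll

responses = []
for _ in range(100_000):                 # people the pollster tries to reach
    supports_b = random.random() < TRUE_SUPPORT_B
    rate = RESPONSE_RATE_B if supports_b else RESPONSE_RATE_A
    if random.random() < rate:           # only responders are counted
        responses.append(supports_b)

polled_share_b = sum(responses) / len(responses)
print(f"True support for B:   {TRUE_SUPPORT_B:.1%}")
print(f"Polled support for B: {polled_share_b:.1%}")
```

With these assumed rates, the poll reports B at roughly 44–45% instead of the true 50%—a miss caused not by the math of the analysis, but by who chose to answer.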
These data shortcomings must be addressed through skill and experience. Many believe the political experts failed by assuming that people within a given racial, gender, or age group would all vote the same way. In retail research, many forecasts fail when the behavior of past customer groups is generalized to a new site. Just because store X’s customers accept lower-quality products in exchange for your low prices doesn’t mean store Y’s customers will; they may expect higher quality and be willing to pay more for it.
Overreliance on data and tools without accounting for risk is a surefire way to end up with an underperforming store—or a surprising election result. In the end, we may never completely understand why political projections failed in this election, but many want to study the data and learn from the results in hopes of getting it right next time—or at least a little less wrong.
In market research and in the site selection software business, that’s our goal as well. We strive to be as accurate as possible given the data and tools at our disposal. When we miss the mark, we do our homework to learn from our mistakes and prevent bad assumptions from leading us down the wrong path in the future.