
Trade Secrets From the Predictors Who Called a Trump Victory

Now that the initial dust has settled from the historically surprising 2016 election, we all have one question: What went wrong with the polls and predictions? They showed Hillary Clinton ahead of Donald Trump, sometimes handily. Just about everyone blew the call.

Well, not everyone: There were some people who got it right. Whether by a quirk of their polling, a model that relied more on history or by sheer accident, a handful of predictors bucked the crowd and told us something else was going on. One is Helmut Norpoth, a Stony Brook political science professor whose model suggested, all the way back in March, that Donald Trump had between an 87 and 99 percent chance of winning the presidency. Another is the team behind the USC-Dornsife/LA Times Daybreak poll, which followed a fixed pool of 3,200 respondents every week over the course of the campaign; their final forecast on Election Day had Trump leading 46.8 percent to 43.6 percent. Another is the Trafalgar Group, a Republican consulting firm that conducted surveys using automated phone calls to landline numbers in battleground states. They got Pennsylvania, Michigan, Wisconsin, Florida and North Carolina right—an achievement few others can claim.

While many of them were off in crucial ways—two of them had Trump winning the popular vote, for instance—they all picked up on different hints that something bigger and more unpredictable was brewing this election, whether it was that fishy zero percent figure for Trump’s support among black voters or the creeping understanding that women were particularly uncomfortable admitting who they were voting for. (Hint: Trump voters were shyest of all.) We brought them together to tell us what they learned, how we all screwed up, and what we know now about the country.

Katelyn Fossett, associate editor, Politico Magazine: It will probably take months for us to know 100 percent, but based on what we know now, what do you think was the number-one thing that went wrong with how we as a whole predicted this election?

Arie Kapteyn, professor and leader of the USC team behind the USC-LA Times Daybreak poll: So, frankly, I think the one thing I really should stress before even starting is that we didn’t do all that great. I mean, after all, we tried to predict the popular vote just like many others, and we actually ended up with, I think, a final forecast of Trump plus-3, and of course what it looks like now is that Clinton will have a very comfortable margin in the popular vote.

I think generally why polls may have missed trends a little bit is that in most polls, one of the inputs often is whether people voted four years ago. And what we seem to see in our data is that among the people who didn’t vote four years ago, there was definitely more support for Trump than among others. And these people even told us that their self-reported likelihood of voting was lower than for those who voted four years ago.
And my current sense is they went to the polls more than most of the other polls might have anticipated. So, you have this group that’s sort of underrepresented in many polls, and then they go to the polls a little more than expected, and then they’re also more Trump-leaning than expected. So, that’s at least one way—one direction in which we’re looking for an answer.
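To make Kapteyn’s point concrete, here is a minimal sketch, in Python, of how a turnout assumption keyed to past voting can bias a poll estimate. All of the group sizes, turnout rates and candidate shares below are invented for illustration; they are not figures from the Daybreak poll or any other survey discussed here.

```python
# Illustrative only: under-weighting lapsed voters who turn out more than
# expected, and who lean toward one candidate, pulls that candidate's estimate down.

groups = {
    # sample share, Trump share within group, assumed turnout, actual turnout
    "voted_four_years_ago": {"share": 0.80, "trump": 0.46, "assumed": 0.85, "actual": 0.85},
    "lapsed_voters":        {"share": 0.20, "trump": 0.55, "assumed": 0.40, "actual": 0.60},
}

def trump_estimate(turnout_key):
    """Turnout-weighted Trump share of the vote across the two groups."""
    votes = sum(g["share"] * g[turnout_key] for g in groups.values())
    trump = sum(g["share"] * g[turnout_key] * g["trump"] for g in groups.values())
    return trump / votes

print(f"Estimate using assumed turnout: {trump_estimate('assumed'):.1%}")
print(f"Estimate using actual turnout:  {trump_estimate('actual'):.1%}")
# The gap between the two printed numbers is the bias Kapteyn describes: the poll
# looks a bit worse for Trump than the electorate that actually shows up.
```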

Robert Cahaly, senior strategist, Trafalgar Group: There was a segment of voters that a lot of the different organizations weren’t polling, and that was one of the first things we noticed when we did the analysis this spring, especially in the two states where it was easiest for us to get the data, Georgia and South Carolina. And what we noticed when we went back and looked is that the difference between those who voted in the Republican primary this year and those who voted in primaries historically was vast. And it was one-sided [only on the Republican side].

And so, we started from that, back in the primary, to create a model of what this Trump voter looks like, which is not a new voter but a lapsed voter. We broke them up into people who had voted in the 2008 presidential election or later, and people who had last voted in 2006 or earlier and had not participated since.

And so, we took that universe of those who had last participated in 2006 or earlier—we’re talking about people who voted for the last time in the ’70s, in the ’80s, in the ’90s and the early 2000s—and we created the term Trump surge voters, and we added them into our call database as well as the newly registered, because in our experience, when people register to vote for the first time, the election that follows is the one they’re most likely to participate in. So we had a good mix of newly registered voters as well as what we call our Trump surge voters. So, we started by basically having different ingredients to make our soup from, and we put all of those in there.
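That universe-building step (splitting a voter file by when people last voted and adding the newly registered) can be sketched in a few lines. The snippet below is a hypothetical illustration only: the field names, cutoff years and sample records are assumptions made for demonstration, not Trafalgar’s actual data layout or code.

```python
# Hypothetical sketch of building a call universe from a voter file:
# recent voters (2008 onward), lapsed "surge" voters (2006 or earlier),
# and the newly registered. Field names and records are invented.

voter_file = [
    {"id": 1, "last_vote_year": 2012, "registration_year": 1998},
    {"id": 2, "last_vote_year": 1994, "registration_year": 1990},
    {"id": 3, "last_vote_year": None, "registration_year": 2016},  # never voted, newly registered
    {"id": 4, "last_vote_year": 2004, "registration_year": 1988},
]

def classify(voter):
    """Bucket a voter-file record the way the passage above describes."""
    if voter["last_vote_year"] is None and voter["registration_year"] >= 2015:
        return "newly_registered"
    if voter["last_vote_year"] is not None and voter["last_vote_year"] >= 2008:
        return "recent_voter"
    if voter["last_vote_year"] is not None and voter["last_vote_year"] <= 2006:
        return "surge_voter"  # lapsed voter, last participated in 2006 or earlier
    return "other"

# Fold all three buckets into one call database.
call_universe = [v for v in voter_file if classify(v) != "other"]
for v in call_universe:
    print(v["id"], classify(v))
```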

Now, we are probably different from a lot of the groups in that we didn’t do nationwide popular-vote polls. All we focused on was battleground state polls, because when it’s over, in my mind, it’s all about the electoral map and the state-by-state Electoral College count.

So the first big difference was the people they were surveying; and the second, and I don’t know who else would agree with this, is that taking people’s answers at face value was not a smart move, because people were not being forthright about who they were supporting.

Fossett: This big surprise taught us a few really hard lessons about voter behavior. You’re getting at one now about shy voters, but what were the lessons that all of you took away that you hadn’t really thought about before, about the way people vote?

Kapteyn: I would like to sort of underline what was just said about shy voters. So, a few weeks ago, one or two weeks before the election, we asked our respondents a couple of questions where we said, “Do you feel comfortable discussing who you want to vote for with others?” And then we had a number of possibilities: could be their friends, could be their family. And we asked about answering over the Internet, which is obviously what we’re doing, and then also about when someone calls you and asks you who you’re going to vote for.

Of course, we were mainly interested in the last one: “How comfortable are you telling someone on the phone who you’re going to vote for?” There is some indication, and we have to look at it a little more, that Trump voters were a little more uncomfortable telling someone, but there were groups within the Trump voter camp that felt particularly uncomfortable. And one of those groups was women who planned to vote for Trump. So, I think there is definitely something to that.

And I think the other one, based on another paper that was done recently, is college-educated Trump voters. So, there were just groups, and I guess Latinos too, that in the popular press, you know, people would sort of say, “Well, these groups are all going to vote for Clinton.” And, of course, within those groups you have big subgroups that actually wanted to vote for Trump, but they were the ones that were a little less comfortable telling someone else.

That’s one reason why we do Internet [polling], because we feel that at least you sort of take away that part. They don’t really have to talk to anyone. They can just say whatever they want.

Cahaly: I saw a lot of commentators refer to this and say that they believed the “shy voter” effect worked both ways [shy Trump and shy Hillary voters]. That is not what we experienced. In fact, what we experienced was a pattern that was so unnatural we knew there had to be something to it.

I grew up in the South and everybody is very polite down here, and if you want to find out the truth on a hot topic, you can’t just ask the question directly. So, the neighbors question is part of the mechanism to get that real answer. In the 11 battleground states, and 3 non-battleground states, there was a significant drop-off between the ballot test question [which candidate you support] and the neighbors question [which candidate you believe most of your neighbors support]. The neighbors question showed a similar result in each state: Hillary dropped [relative to the ballot test question] and Trump came up across every demographic, every geography. Hillary’s drop was between 3 and 11 percent while Trump’s increase was between 3 and 7 percent. This pattern existed everywhere from Pennsylvania to Nevada to Utah to Georgia, and it was a constant.
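The pattern Cahaly describes can be illustrated with a small calculation. The sketch below is hypothetical: the state numbers are invented, and Trafalgar has not spelled out here how it folded the neighbors question into its final figures. The code only shows the drop-off being measured, state by state, as a possible signal of a hidden vote.

```python
# Illustrative only: compare the standard ballot-test question with the
# "neighbors" question to flag a possible shy (hidden) vote. Numbers are invented.

state_polls = {
    #               ballot test (Clinton, Trump)   neighbors question (Clinton, Trump)
    "Pennsylvania": {"ballot": (47.0, 45.0),       "neighbors": (40.0, 50.0)},
    "Nevada":       {"ballot": (46.0, 44.0),       "neighbors": (43.0, 47.0)},
}

for state, q in state_polls.items():
    clinton_drop = q["ballot"][0] - q["neighbors"][0]  # how far Clinton falls on the neighbors question
    trump_gain = q["neighbors"][1] - q["ballot"][1]    # how far Trump rises on the neighbors question
    print(f"{state}: Clinton -{clinton_drop:.1f} pts, Trump +{trump_gain:.1f} pts on the neighbors question")
```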

And so, I don’t accept that there were “shy Hillary voters.” And what we discovered is just what he said: a lot of minorities were shy voters, and women were shy voters.

And it’s just like what he said about going on the Internet. We discovered in the primary, comparing our live-caller polls with our automated push-button polls, that a lot of voters weren’t even in the game [in the live-caller polls]. Every single time we polled in the primary, the push-button poll came out 4.5 points better for Trump. And obviously, we didn’t know until the primary election that the push-button would always be right. The push-button had a much wider universe and it was right. In every single one of those situations, it was more accurate. I mean, it was the most accurate poll in South Carolina, the most accurate poll in Georgia, the second-most accurate in Florida, and we noticed the pattern. And so, we took what we learned in the primary and we put it to work in our general election polling.

And you know, if you ask people who were in the field, who were out there visiting folks, they will tell you this was one of those elections where, yes, there were tons of Trump signs and Hillary signs in yards, but there were many, many millions of people who would never have put a Trump sticker on their car or a sign in their yard who literally could not wait to vote for him.

We were surprised at how many people thought it was not going to be a Trump win. It was shocking to us the ridicule we were getting. I mean, you look at our Twitter as the election night unfolded, people saying, “You guys weren’t crazy,” just as much as they’ve been saying we were crazy for the three weeks before. There was one guy who simply tweeted us “bullocks” and that night he said, “May I withdraw my bullocks?”

Helmut Norpoth, political science professor at Stony Brook University: I’m not a pollster, but I followed the polling and watched as much as I could, the breakdowns by all the different categories, et cetera. And one of the things that struck me early on, which I thought was really a revelation that something was not right, was the polls that showed support for Trump among African-Americans at something like 1 percent or zero percent. There’s just no way on earth that, in the end, Trump would get such a low share of that vote. I knew there was some problem either in reaching African-Americans or in getting them to respond, and I think it was systematic; it was what the other gentlemen just talked about, probably the reluctance to admit that you were a Trump voter.

Fossett: Robert, you talked about the ridicule you got after you came up with these numbers. I was just wondering generally how difficult it was to go against the grain when all the polls were showing one thing.

Norpoth: I don’t get too bothered about it. But I was prepared maybe at some point if things didn’t work out that I would have to retool the model, maybe rethink the use of primaries. So I kept sort of a tally in my head about how it would look if I had waited and it never really changed very much. I mean, I would have made the same prediction essentially if I had waited until the end of June.

Steve Shepard, Caucus editor and chief polling analyst, Politico: The USC-LA Times project received some criticism, I would say, during the course of the campaign. Obviously, you guys did miss on the popular vote, but I’ve seen sort of a victory lap from the LA Times on the project. How did it feel during the course of the campaign to be saying something different than what you saw in the rest of the coverage?

Kapteyn: Frankly, it didn’t bother me too much, partly because this is not my main occupation. So, we find this an interesting experiment. We just want to see whether it helps to do things a little differently or whether one can learn something.

And I think the reactions were actually twofold. So, we had the people, let’s say, in the public sphere, who thought we were crazy, and the poll was referred to as a bogus poll and everything else, and there were people who knew, quote/unquote, what the result was going to be, and they knew, quote/unquote, that we were crazy, and I’ve gotten emails from people saying that what you’re saying about Latinos cannot possibly be true.

But then, within the professional sphere, like AAPOR, the American Association for Public Opinion Research, the reaction was much more: “Well, we see what you’re doing. We don’t necessarily agree with what you’re doing, but we appreciate the openness.” You know, we made all the data available from day one. Everyone could download the data. Of course, that’s also how Nate found out about the 19-year-old African-American in Illinois, right? You could only do that because we made it available.

So, I think in the professional world there was a lot of appreciation for the fact that we tried to do something differently.

So, for me, and I guess my group, it really didn’t matter much. I would think, though, that for the LA Times it mattered much more in some ways, because they’re much more exposed to the public discussion every day. And I can see why the LA Times did a little bit of a victory lap, because at some level it sort of, you know, validated the fact that they stuck to it. As you would expect, it wasn’t necessarily easy for them to do that in an environment where everyone else tells you you’re wrong.

Cahaly: Actually, I found the same thing. You know, I am intrigued listening to what he just said about this not being the main thing they do. But there’s this term, “herding”—some of these polls were so worried about being outliers that they were avoiding it, and I think there’s something to be said for doing things a little differently. I think there became almost a one-mindedness, where all of these traditional polling companies and firms didn’t want to deviate too far. They didn’t want to get out there.

But just as he said just now, the freedom he had, because this is not what they primarily do and they can learn if they were wrong, was exactly the same attitude we took. It’s like, we think we built a better mousetrap, we’re willing to test it, and, just as I said in a couple of newspaper articles, on Wednesday I want to be the guy who got it right or nobody’s going to listen to me anymore.

Shepard: You guys all bring very different perspectives to this conversation, but one thing that’s clear is that you all read and watched a lot of the coverage of the campaign. How do you think we in the media did this campaign?

Cahaly: I think with the coverage of the campaign from a national perspective, when you look at the way the networks and all were covering it, the one thing we figured out, and I bet these guys did, too, is that people are living in bubbles. They were talking to people who shared their opinions; they had their opinions reinforced so often by the people around them.

Norpoth: One of the things that struck me a little bit is the effect of the three debates. I think typically the media would say, well, really, Hillary won all three of these debates, and they would sort of list the points where she scored, but I didn’t really see any movement in our polls as a result of that. So, that made me think that the perception of journalists or people on television, people who follow politics very closely, of how important these things are may just be very different from the way people in the country perceived them.

If we do surveys, they can write at the end what they think. And so, you read what they think, and you see that what they think, what their opinion is, what concerns them, is often just different from what you see on television or on the Internet.

Fossett: I was wondering if you could specifically talk about the way the media covers the polls. Is there something we don’t get, if we over-focus on certain elements of them . . .

Kapteyn: I felt that, especially if you read something like the New York Times or you watched some of the major news shows and network shows, there was sort of a fixed notion that Hillary had this race in hand, that there was just a very difficult path for Trump to get to victory. From what I recall, there was never any notion that something like Pennsylvania or Michigan or Wisconsin would be in play. It was just going to be nearly impossible for Trump to pull it off with the states Romney had plus the battleground states—and I think that became such a fixed idea about this campaign that they didn’t consider an alternative.

So, I think this was sort of poll-driven, but it probably fed into a notion that Donald Trump was just unpalatable and unimaginable as a candidate. So I mean, this “it shouldn’t happen” and “couldn’t happen” sort of reinforced each other in the coverage in the New York Times and on CNN and MSNBC.

Shepard: Since we’ve been talking a lot about what everybody else got wrong this year and what you guys got right, I want to give you an opportunity to talk about the biggest thing you think you got wrong in this election and why, if you’ve pinpointed it—whether it was something in the methodology of your poll or your model, or something about your preconceived notions of politics in this campaign.

Kapteyn: Well, actually, we didn’t get it right at all. So, I think in our case there’s so much to be discussed, as always. Four years ago, we got very close to the final count for Obama, and this time we were off quite a bit, and at this point we don’t really know why. Right now, we’re in the field with a bunch of questions about who people actually voted for and some other things. So, we definitely have some work to do.

Cahaly: We got lucky in that our prediction, our 306 to 232, ended up being accurate. However, in that prediction, we thought Wisconsin might go Republican, but we put it down for Hillary. We had 47 states right, but we did predict that Trump would win Nevada, and he lost it, and we predicted that Trump would win New Hampshire, and he just barely lost it. So, one, the demographic changes in Nevada were the reason that one was wrong. And we went wrong assuming the methodology would work all the way in New Hampshire—there was probably less hidden vote in New Hampshire than in most places, and, you know, they tend to be kind of straight-speaking folks, so that makes a lot of sense. So, yeah, I would say that was where our error was but, you know, we’re still proud of predicting 306.

Shepard: Okay. Helmut, you talked about, during the course of the primaries, thinking about how to tinker with your model. What’s the one thing that surprised you about this campaign, or that you think maybe you misunderstood and understand better now, or were just flat wrong about?

Norpoth: Well, I would have to admit that I was wrong about the winner of the popular vote, which I thought would be Donald Trump. But I was lucky in that I predicted such a large lead for him in the popular vote that I was confident he would be elected president through the Electoral College, which I did not predict specifically, though I would certainly have come up with a substantial number, probably over 300, for him in the Electoral College.

So, I will try to re-estimate my model with the elections of the past to make a direct prediction of the Electoral College, because ultimately this is what matters.

This conversation has been condensed and edited for clarity.
