The other important thing I’ll say is that, if the survey data on this is to be believed, a good share of people really do pay attention to election projections; some figures put it at more than 70 percent. So that becomes an argument for making forecasts better.
Well, what is a “good” forecast? If we go back to 2016, as you say, Nate Silver’s forecast gave Trump a 30 percent chance of winning, while other models pegged Trump’s chances at 1 percent or in the low single digits. Does that mean that, because Trump won, Nate Silver was “right”? Of course, we can’t really say that. If you say something has a 1-in-100 chance of happening and it happens, it could mean you underestimated the probability, or it could just mean that the 1-in-100 event occurred.
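A single outcome gives only weak evidence about which forecast was right. As a quick sketch (the 30 percent and 1 percent figures are the ones from the discussion above; the equal prior weight on the two models is an illustrative assumption), you can compare how likely the observed result was under each forecast:

```python
# Hypothetical comparison of two 2016 forecasts after observing one outcome.
p_a = 0.30  # chance one model gave Trump (the 30% figure discussed above)
p_b = 0.01  # chance a rival model gave him (the "1 in 100" figure)

# Likelihood of the single observed outcome (a Trump win) under each model:
likelihood_ratio = p_a / p_b  # the win is ~30x more consistent with model A

# If we started out thinking each model was equally likely to be the
# well-calibrated one, a single observation leaves model B with a small
# but nonzero posterior weight -- it is favored against, not proven wrong.
posterior_b = p_b / (p_a + p_b)

print(f"likelihood ratio: {likelihood_ratio:.0f}")
print(f"posterior weight on the 1% model: {posterior_b:.3f}")
```

The point of the sketch: one election shifts the odds between the models but cannot settle the question, which is exactly the calibration problem described next.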
This is the problem with figuring out whether election forecasting models are correctly calibrated to real-world events. Going back to 1940, we have only 20 presidential elections in our sample. So there is no real statistical justification for an exact probability here: whether a model should say 97 percent or 96 percent is impossible to settle with such a limited sample, because we can’t tell if forecasts are calibrated to within 1 percentage point. The whole exercise is much more uncertain, I think, than the journalism around polls and forecasts gives consumers reason to believe.
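The sample-size point can be made concrete with the binomial standard error. This is a rough sketch using the 97/96 percent and 20-election figures from above; with only 20 trials, the statistical noise in an observed hit rate dwarfs a 1-point difference in claimed calibration:

```python
import math

n = 20  # presidential elections going back to 1940, per the discussion above

# Standard error of an observed hit rate over n elections,
# if the true per-election probability is p:
for p in (0.97, 0.96):
    se = math.sqrt(p * (1 - p) / n)
    print(f"p = {p}: standard error of the observed rate ~ {se:.3f}")

# Both standard errors come out around 4 percentage points -- far larger
# than the 1-point gap between the two models, so 20 elections cannot
# distinguish 97% calibration from 96%.
```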
In your book, you talk about Franklin Roosevelt’s pollster, who was originally hailed as a polling genius, but whose career eventually took a turn, right?
This guy, Emil Hurja, was Franklin Roosevelt’s pollster and election forecaster. He devised the first kind of poll aggregation and the first tracking poll, a really compelling character in the story of polling. At first, he was uncannily accurate. In 1932, he predicted that Franklin Roosevelt would win by 7.5 million votes, at a time when others were predicting that Roosevelt would lose. Roosevelt won by 7.1 million votes. So Hurja was better calibrated than the other opinion polls of the time. But then he failed in 1940, and after that he was basically no better than your average pollster.
In investing, it’s hard to beat the market for a long time. Similarly, with polls, you have to constantly rethink your methods and assumptions. Though early on Emil Hurja was called the “Wizard of Washington” and the “Crystal Gazer from Crystal Falls, Michigan,” his record deteriorated over time. Or maybe he just got lucky early on; in hindsight, it’s hard to tell whether he ever really was a genius predictor.
I bring this up because, well, I’m not trying to scare you, but maybe your biggest mistake is still ahead of you, yet to come.
That’s the lesson here. What I want people to think about is this: just because the polls were biased in one direction in the last few elections doesn’t mean they will be biased in the same way, for the same reasons, in the next election. The smartest thing we can do is read each poll with an eye toward how the data was generated. Are the questions worded properly? Does the sample reflect Americans across political and demographic groups? Is this pollster reputable? Is there something going on in the political environment that could cause Democrats or Republicans to answer the phone, or respond to online surveys, at a higher or lower rate than the other party? You have to think through the whole data-generating process before accepting the numbers.

And it’s an argument for treating polls with more uncertainty than we’ve treated them with in the past. I think that’s a pretty obvious conclusion from the last few elections. But more importantly, it’s more honest about how pollsters arrive at their estimates. They are uncertain estimates at the end of the day; they are not hard facts about public opinion. And that’s how I want people to think about them.
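One way to act on that extra uncertainty: a poll’s reported margin of error covers sampling noise only. A minimal sketch, assuming a hypothetical 1,000-person poll and an illustrative doubling factor (empirical reviews of historical polls have found total error, from nonresponse, weighting, and frame problems, to be roughly twice the sampling-only figure; neither number comes from the interview):

```python
import math

n = 1000   # hypothetical sample size (an assumption for illustration)
p = 0.5    # worst-case proportion for the error formula
z = 1.96   # multiplier for a 95% confidence interval

# The margin of error pollsters report: sampling noise only.
moe_nominal = z * math.sqrt(p * (1 - p) / n)  # the familiar "plus or minus 3"

# Illustrative adjustment: treat total survey error as about double the
# nominal figure (an assumed factor, in the spirit of the advice above).
moe_total = 2 * moe_nominal

print(f"nominal: +/- {moe_nominal:.1%}, more realistic: +/- {moe_total:.1%}")
```

Reading the “real” interval as about twice as wide as the printed one is one simple way to hold polls as uncertain estimates rather than hard facts.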