Here’s what happened.
Up until Tuesday morning, it was easy to consider yourself well-informed about what was going to happen during the 2016 presidential election: All you had to do was check your choice of polls, almost all of which showed Clinton winning.
If you wanted to get more sophisticated, you could look at a data + politics hub like Nate Silver’s FiveThirtyEight, which provided more nuanced odds but still favored Clinton.
Almost everyone got this stuff wrong.
But the one that arguably made the biggest error was also the one making the most audacious claim: Votecastr, a brand-new startup, promised to provide real-time projections on Tuesday, as the votes were being collected, but before vote totals were actually released.
Votecastr’s data generated lots of attention on Tuesday, and may have even helped move financial markets.
But it turned out to be way, way off: Votecastr called five of the seven states it projected incorrectly.
The most prominent error involved Florida, which Votecastr thought Hillary Clinton would win by more than 300,000 votes; instead, she lost it by more than 100,000 votes.
The misses convinced many people that Votecastr’s mission was a bad one: It got the calls wrong, and it distributed those incorrect calls while voting was in progress, which could have affected the outcome.
But Slate editor in chief Julia Turner, whose site partnered with Votecastr and published its data, says the idea remains a good one.
The problem, she says, was with the polls Votecastr conducted before the election and then synced up with the voter turnout numbers it collected on Tuesday.
Like everyone else’s pre-election polls, Votecastr’s poll numbers were wrong, too — which meant the final product would be wrong as well.
Here’s her response, via email, to my questions about why Votecastr’s numbers were off, and whether she would repeat the process for another election:
The Votecastr project was premised on two ideas: First, that keeping real-time information from voters on election day was anti-journalistic. And second, that campaign methodologies could produce accurate and illuminating estimates of how the race was going on election day.
Obviously, our numbers weren’t right. We’re running a postmortem with the VC team this morning, but I suspect that’s because Votecastr’s methodology is dependent on polling; Votecastr conducted its own large sample polls, but found results fairly in line with the public opinion polling that was also off by a few points in many key states.
I’m disappointed that our numbers were off, but I still believe in the general principle that election day shouldn’t be an information-free zone.