Armed with a set of 10-sided dice (we’ll get to those in a moment), a Web-based tool, and a stack of hundreds of ballots, University of California-Berkeley statistics professor Philip Stark spent last Friday unleashing both science and technology upon a recent California election. He wanted to answer a very simple question—had the vote counting produced the proper result?—and he had developed a stats-based system to find out.

On June 2, 6,573 citizens went to the polls in Napa County and cast primary ballots for supervisor of the 2nd District in one of California’s most famous wine-producing regions, on the northern edge of the San Francisco Bay Area. The three candidates—Juliana Inman, Mark van Gorder, and Mark Luce—would all have liked to come in first, but they really didn’t want to be third. That’s because only the top two vote-getters in the primary would proceed to the runoff election in November; number three was out.

Napa County officials announced the official results a few days later: Luce, the incumbent, took in 2,806 votes, van Gorder got 1,911 votes, and Inman received 1,856 votes—a difference between second and third place of just 55 votes. Given the close result, even a small number of counting errors could have swung the election.

Vote counting can go wrong in any number of ways, and even the auditing processes designed to ensure the integrity of close races can be a mess (did someone say “hanging, dimpled, or pregnant chads”?). Measuring human intent at the ballot box can be tricky. To take just one example, in California many ballots are cast by completing an arrow, which is then optically read. While voters are instructed to fill in the full thickness of the arrow, in practice some only draw a line, and the vote tabulation systems used by counties do not always count those marks as votes.

So Napa County invited Philip Stark to look more closely at its results. Stark has been on a four-year mission to encourage more elections officials to use statistical tools to ensure that the announced victor is indeed correct. He first described his method back in 2008, in a paper called “Conservative statistical post-election audits,” but he generally uses a catchier name for the process: “risk-limiting auditing.”
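
To make the mechanics concrete, here is a minimal Python sketch of the basic setup. It is an illustration of the idea, not Stark’s actual tool: it computes the 55-vote margin as a fraction of all ballots cast (the figure that drives how many ballots an audit must examine) and then uses publicly rolled 10-sided dice as the seed for drawing a random sample of ballots to inspect by hand. The dice rolls and the sample size shown here are placeholder values.

```python
import random

# Reported results for the Napa County 2nd District supervisor primary
# (figures from the article).
total_ballots = 6573
votes = {"Luce": 2806, "van Gorder": 1911, "Inman": 1856}

# The contest that matters here is second vs. third place: only the
# top two candidates advance to the November runoff.
totals = sorted(votes.values(), reverse=True)
margin = totals[1] - totals[2]              # 55 votes
diluted_margin = margin / total_ballots     # about 0.84% of ballots cast
print(f"margin: {margin} votes ({diluted_margin:.2%} of ballots)")

# Auditors need a random sample that no one could have predicted or
# rigged in advance. Rolling 10-sided dice in front of observers is one
# way to generate such a seed. The rolls below are placeholders; a real
# audit would use rolls made in public.
dice_rolls = [7, 2, 9, 0, 4, 1, 8, 3, 5, 6, 2, 7, 0, 9, 1, 4, 6, 3, 8, 5]
seed = int("".join(str(d) for d in dice_rolls))

rng = random.Random(seed)
sample_size = 50  # illustrative only; a real audit sizes the sample
                  # from the diluted margin and a chosen risk limit
sample = sorted(rng.sample(range(1, total_ballots + 1), sample_size))
print("ballots to pull and inspect:", sample[:10], "...")
```

A real risk-limiting audit uses a more carefully specified pseudorandom generator and a sample size derived from the margin and the risk limit, but the core idea is the same: a public, unpredictable seed and a random draw of physical ballots to check against the machine count.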