The technology behind elections is hard to get right. Elections require security. They also require transparency: anyone should be able to observe enough of the election process, from the distribution of ballots to the counting and canvassing of votes, to verify that the reported winners really won. But if people vote on computers, or if votes are tallied by computers, key steps of the election are not transparent, and additional measures are needed to confirm the results. In a New York Times op-ed a couple of weeks ago, James Woolsey and Brian Fox proposed using “open-source systems that can guard our votes against manipulation.” Their hypothesis is that “open-source software is less vulnerable to hacking” than proprietary voting software because “anyone can see how open-source systems operate. Bugs can be spotted and remedied, deterring those who would attempt attacks. This makes them much more secure than closed-source models.” This sounds reasonable, but in fact open-source systems are only one step toward guarding our votes against manipulation, and the hypothesis that using open-source software will by itself improve security is questionable at best.
First, with the systems in use today, there is no guarantee that the software running on any given machine is in fact the software it is supposed to be running, open source or not. And even if we could know with certainty that the installed software matches the published source, the quality of that software is critical. Poorly written software, whether open source or not, is riddled with vulnerabilities and thus open to attack. Open source allows anyone to look for those vulnerabilities, and that includes attackers. We do not believe in “security through obscurity” (that is, relying on secrecy as a primary security strategy), but making source code available to everyone for inspection also makes it available to attackers for inspection. And attackers are often highly motivated to find vulnerabilities.
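The first point, that the installed software may not match the published source, is worth making concrete. Verification schemes typically compare a cryptographic digest of the installed image against a published reference value. The Python sketch below illustrates the idea; the image contents and the "certified" digest are hypothetical, purely for illustration.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical certified software image and its published reference digest.
certified_image = b"certified voting software build 1.0"
reference_digest = sha256_hex(certified_image)

# An unmodified installation matches the reference digest...
assert sha256_hex(certified_image) == reference_digest

# ...while any tampering, however small, changes the digest completely.
tampered_image = certified_image + b" + malicious patch"
assert sha256_hex(tampered_image) != reference_digest
```

Of course, this check is only as trustworthy as the machine performing it: malware that controls the machine can compute the digest of the certified image while actually running something else, which is why hash checks alone cannot settle the question.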
Complicating this is a basic asymmetry: an attacker needs to find just a single exploitable flaw, while defenders must find them all. It is very easy for reviewers to miss something (the Heartbleed bug that affected millions of websites and devices in 2014 occurred in open source software) or to make assumptions about the environment in which the code is executed that turn out to be wrong. Software authors, maintainers, election officials, and other defenders must find every flaw, fix them all, and then distribute the fixed system (or patches) to everyone using it.
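Heartbleed itself came down to one missing bounds check on an attacker-supplied length field. The sketch below imitates that pattern in Python rather than the actual OpenSSL C code; the function names and the simulated "adjacent memory" are ours, purely for illustration.

```python
def heartbeat_reply(payload: bytes, claimed_len: int, adjacent: bytes) -> bytes:
    """Echo back claimed_len bytes of the request payload.

    BUG: the length the client claimed is never checked against the
    payload actually sent, so the reply can read past the payload into
    whatever data happens to sit next to it (simulated by `adjacent`).
    """
    buffer = payload + adjacent          # payload stored next to other data
    return buffer[:claimed_len]

def heartbeat_reply_fixed(payload: bytes, claimed_len: int, adjacent: bytes) -> bytes:
    """The same operation with the one-line fix: reject over-long claims."""
    if claimed_len > len(payload):
        return b""                       # drop the malformed request
    return payload[:claimed_len]

# A client sends a 5-byte payload but claims it is 16 bytes long.
leak = heartbeat_reply(b"hello", 16, b"SECRET-KEY")
assert leak == b"helloSECRET-KEY"        # buggy version leaks adjacent data
assert heartbeat_reply_fixed(b"hello", 16, b"SECRET-KEY") == b""
```

A flaw this small sat for years in widely used open source code before being found, which is precisely the point: openness permits review, but does not guarantee that review happens or succeeds.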
Patch distribution creates its own set of problems: announcing a patch tells attackers that a vulnerability exists (and where in the code it is), leaving anyone who does not immediately install the fix especially exposed. For example, many years ago, a response group announced a patch to a well-known, widely used piece of software. Within thirty minutes, that vulnerability was being exploited around the world. Making matters worse, the patch was announced at 5 p.m. East Coast time on a Friday, and many sites did not have the time or resources to install it promptly.