Verified Voting Blog: How not to measure security
This article was originally posted at Freedom to Tinker on August 10, 2015. It is reposted here with permission of the author.
A recent paper published by Smartmatic, a vendor of voting systems, caught my attention. The first issue is that it's published by Springer, which typically publishes peer-reviewed articles; this is not one. It's a marketing piece. It's disturbing that a respected imprint like Springer would get into the business of publishing vendor white papers. There's no disclaimer that it's not a peer-reviewed piece, or any other indication that it doesn't follow Springer's historical standards. The second, and more important, issue is that the article could not possibly have passed peer review, given some of its claims. I won't go into the controversies around voting systems (a nice summary of some of those issues can be found on the OSET blog), but rather focus on some of the security metrics claims.
The article states, “Well-designed, special-purpose [voting] systems reduce the possibility of results tampering and eliminate fraud. Security is increased by 10-1,000 times, depending on the level of automation.”
That would be nice. However, we have no agreed-upon way of measuring the security of systems (other than cryptographic algorithms, within limits). So the only way this claim is meaningful is if it's qualified and explained, which it isn't. Other studies, such as one I participated in (Applying a Reusable Election Threat Model at the County Level), have tried to quantify the risk to voting systems; our study measured risk in terms of the number of people required to carry out the attack. So is Smartmatic's study claiming that they can make an attack require 10 to 1,000 times more people, 10 to 1,000 times more money, 10 to 1,000 times more expertise (however that would be measured!), or something entirely different?
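To make the objection concrete, here is a minimal sketch of why a bare multiplier is meaningless without a stated metric. Everything in it is hypothetical: the AttackProfile fields, the numbers, and the expertise scale are invented for illustration, and come neither from Smartmatic's paper nor from our county-level study.

```python
"""Illustrative sketch only: all fields and numbers below are invented."""
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackProfile:
    """Hypothetical cost of mounting one attack against a voting system."""
    people: int      # conspirators required (the metric our study used)
    dollars: int     # rough funding required
    expertise: int   # invented scale: 1 (poll worker) .. 5 (nation-state team)

def security_gain(before: AttackProfile, after: AttackProfile, metric: str) -> float:
    """"Security increased N times" only has meaning relative to one metric."""
    return getattr(after, metric) / getattr(before, metric)

# Hypothetical numbers for a hand-count process vs. an automated system.
hand_count = AttackProfile(people=30, dollars=5_000, expertise=1)
automated  = AttackProfile(people=3, dollars=50_000, expertise=4)

for metric in ("people", "dollars", "expertise"):
    print(f"{metric:9s}: {security_gain(hand_count, automated, metric):6.2f}x")

# people   :   0.10x  -> under this metric, automation made the attack *easier*
# dollars  :  10.00x
# expertise:   4.00x
```

Even in this toy model, the same design change comes out as a 10x improvement, a 4x improvement, or a 10x regression depending on which axis you measure, which is exactly why an unqualified "10-1,000 times" figure tells us nothing.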