On 2 November 2010, Facebook’s American users were the subjects of an ambitious experiment in civic engineering: could a social network get otherwise-indolent people to cast a ballot in that day’s congressional midterm elections? The answer was yes. The nudge toward the voting booths was simple. It consisted of a graphic containing a link for looking up polling places, a button to click to announce that you had voted, and the profile photos of up to six Facebook friends who had indicated they’d already done the same. With Facebook’s cooperation, the political scientists who dreamed up the study planted that graphic in the newsfeeds of tens of millions of users. (Other groups of Facebook users were shown a generic get-out-the-vote message or received no voting reminder at all.) Then, in an awesome feat of data-crunching, the researchers cross-referenced their subjects’ names with the day’s actual voting records from precincts across the country to measure how much their voting prompt increased turnout.
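The measurement step is worth pausing on. In essence: take the users who saw the prompt and those who didn’t, look each group up in the validated voter files, and compare turnout rates. The sketch below is a hypothetical illustration of that comparison, not the researchers’ actual pipeline; all the data structures and names are invented for the example.

```python
# Hypothetical sketch of the turnout comparison -- not the study's real
# pipeline. 'voted' stands in for names matched against public,
# precinct-level voting records; the subject sets are illustrative.

def turnout_rate(subjects: set[str], voted: set[str]) -> float:
    """Fraction of subjects whose names appear in the validated voting records."""
    return len(subjects & voted) / len(subjects)

voted = {"alice", "bob", "carol"}               # matched in the voter files
treatment = {"alice", "bob", "dave", "erin"}    # saw the social prompt
control = {"carol", "frank", "grace", "heidi"}  # saw no prompt

# The effect of the prompt is the difference in turnout between groups;
# at the experiment's real scale the reported figure was about 0.39 per cent.
lift = turnout_rate(treatment, voted) - turnout_rate(control, voted)
print(f"turnout lift: {lift:+.2%}")
```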
Overall, users notified of their friends’ voting were 0.39 per cent more likely to vote than those in the control group, and any resulting decisions to cast a ballot also appeared to ripple to the behaviour of close Facebook friends, even if those people hadn’t received the original message. That small increase in turnout rates amounted to a lot of new votes. The researchers concluded that their Facebook graphic directly mobilised 60,000 voters, and, thanks to the ripple effect, ultimately caused an additional 340,000 votes to be cast that day. As they point out, George W Bush won Florida, and thus the presidency, by 537 votes – fewer than 0.01 per cent of the votes cast in that state.
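A quick back-of-envelope check makes the comparison vivid (taking the commonly cited certified total of roughly 5.96 million presidential ballots cast in Florida in 2000):

```latex
\[
\frac{537}{5{,}960{,}000} \approx 0.009\% \;<\; 0.01\%,
\qquad\text{while}\qquad
340{,}000 \approx 633 \times 537 .
\]
```

In other words, the experiment’s estimated mobilisation effect was more than six hundred times the margin that decided the 2000 presidency.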
Now consider a hypothetical, hotly contested future election. Suppose that Mark Zuckerberg personally favours whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users – but unlike in the 2010 experiment, the group that will not receive the message is not chosen at random. Rather, Zuckerberg makes use of the fact that Facebook “likes” can predict political views and party affiliation, even beyond the many users who proudly advertise those affiliations directly. With that knowledge, our hypothetical Zuck simply withholds the prompt from users unsympathetic to his views. Such machinations then flip the outcome of our hypothetical election. Should the law constrain this kind of behaviour?
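The mechanics of such targeting are dismayingly easy to sketch. The following is a hypothetical illustration only – no real Facebook model, API or page list is involved, and every name and threshold is invented: infer each user’s leaning from their “likes”, then serve the nudge only to predicted supporters.

```python
# Hypothetical illustration of politically targeted nudging -- all
# identifiers below are invented; nothing here reflects any real system.

FAVOURED = "candidate_a"

def predicted_support(likes: set[str]) -> float:
    """Toy stand-in for a likes-based classifier: the fraction of a user's
    likes appearing on an (invented) list of pages associated with
    supporters of the favoured candidate."""
    aligned_pages = {"page_x", "page_y"}  # invented signal
    return len(likes & aligned_pages) / max(len(likes), 1)

def should_show_prompt(likes: set[str], threshold: float = 0.5) -> bool:
    # The entire manipulation lives in this one biased line: the voting
    # nudge is served only to users predicted to back the favoured side.
    return predicted_support(likes) >= threshold
```

Because every user still sees an ordinary-looking feed, neither the nudged nor the un-nudged group has any way to notice the bias – which is precisely the danger the article’s title names.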
Full Article: New Statesman | Facebook could decide an election without anyone ever finding out.