News that Facebook assigns users secret ‘trustworthiness’ scores based on the content they report has drawn a familiar round of worries. We are told that as users flag objectionable content on the platform, they are themselves scored on a scale of 0 to 1, based on how reliably their flagging concurs with the censors’ judgment. Presumably, other factors play into the scoring; but the exact mechanism, as well as the users’ actual scores, remain secret. The secrecy has drawn critical comparisons with China’s developing ‘social credit’ systems. At least with China’s Zhima Credit, you know you’re being scored and the score is freely available – something not afforded by American credit rating systems, either.
But there’s a bit more to it. In many ways, Facebook is doing what we asked it to do. To untangle the issue, we might look to two other points of reference.
One is Peeple, the short-lived ‘Yelp for People’ that enjoyed a brief run in 2015 as the Internet’s most reviled object. The app proposed that ordinary people should rate and review each other, from dates to landlords to neighbours – and, in the original design, the ratings would be published without the victim’s consent. The Internet boiled over, and the app was quickly stripped of its most controversial (and useful) features. That particular effort, at least, was dead in the water.
But Peeple wasn’t an anomaly. It wasn’t even the first to try such scoring. (Unvarnished tried it in 2010; people compared that to Yelp, too; and it died just as quickly.) These efforts are an inevitable extension of the tendency to replace lived experience, human interaction and personal judgment with aggregated scores managed by profit-driven corporations and their proprietary algorithms. (Joseph Weizenbaum, Computer Power and Human Reason.) If restaurants and Uber drivers are scored, then why not landlords, blind dates, neighbours, classmates? Once we say news sources have to be scored and censored in the battle against fake news, there is a practical incentive to score the people who report that fake news as well.
More broadly, platforms have long been about ‘scoring’ us for reliability and usefulness, and weighting the visibility and impact of our contributions. It’s been pointed out that such scoring and weighting of users is standard practice for platforms like YouTube. Facebook isn’t leaping off into a brave new world of platform hubris; it is only extending its core social function of scoring and sorting people, in response to widespread demand that it do so.
In recent weeks, Twitter was described as essentially a rogue actor amongst social media platforms. Where Facebook and others moved to ban notorious misinformer Alex Jones, Jack Dorsey argued that banning specific individuals on the basis of widespread outrage would undermine efforts to establish consistent and fair policy. Those who called for banning Jones criticised Dorsey’s approach as lacking integrity, a moral cop-out. But let us be precise: what exactly makes Twitter’s decision ‘lack integrity’? Do we believe that if somebody is widely condemned enough, then platforms must reflect this public outrage in their judgment? That is, in many obvious ways, a dangerous path. Alternatively, we might insist that if Twitter’s rules are so impotent as to allow an Alex Jones to roam free, they should become more stringent. In other words, we are effectively asking social media platforms to play an even more dominant and deliberate role in censoring public discourse.
In many ways, this is the logical consequence of the widespread demand since November 2016 that social media platforms take on the responsibility of policing our information and our speech, and that they take on the role of determining who or what is trustworthy. We scolded Mark Zuckerberg in a global reality TV drama of a Congressional hearing, telling him that with power comes responsibility: it turns out that with responsibility comes new powers, too.
Now, we might say that the problem isn’t that Facebook is scoring us for trustworthiness, but that the process needs to be – we all know the magic phrase by heart – open, transparent, accountable. What would that look like? Presumably, this would require not only that users’ trust scores are visible to themselves, but that the scoring process can be audited, contested, and corrected. That might involve some third party, though it is unclear what kind of institution we would trust, these days, to arbitrate our trust scores.
Here, we find the second reference point: the NSA. Tessa Lyons, a product manager at Facebook, explains that their scoring system must remain as secret as possible, for fear that bad actors will learn to abuse it. This is, of course, a rationale we have seen in the past not only from social media platforms with regard to their other algorithms, but also from the NSA and other agencies with regard to government surveillance programs. We can’t tell you who exactly is being spied on in what way, because that would help the terrorists. And we can’t tell you how exactly our search results work, because this would help bad actors game the algorithm (and undermine our market position).
So we come back to a number of enduring structural dilemmas in the way our Internet is made. In protesting the disproportionate impact of a select few powerful platforms on the public sphere, we also demand that these platforms increase their control over public speech. Even as we ask them to censor the public with greater zeal, we possess little effective oversight over how the censors are to act.
In the wake of Russian election interference, Cambridge Analytica and other scandals, the pretension to neutrality has been punctured. There is now a wider recognition that platforms shape, bias and regulate public discourse. But while there is much sense in demanding better from the platforms themselves, passing off too much responsibility onto their hands risks encouraging exactly the hubris we like to accuse them of. “What makes you think you know the best for us, Mark Zuckerberg? Now, please hurry up and come up with the right rules and mechanisms to police our speech.”
If there is sufficient public outrage, Facebook might well be moved to retire its current system of trust scores. It has a history of stepping back from particularly rancorous features, only to gradually reintroduce them in more palatable forms. (Jose van Dijck, The Culture of Connectivity.) Why? Not because Facebook is an evil force that advances its dark arts whenever nobody’s looking, but because Facebook has both commercial and social pressures to develop more expansive scoring systems in order to perform its censoring functions.
Facebook’s development of ‘trust scores’ reflects a deeper, underlying problem: that traditional indices of trustworthiness, such as mainstream media institutions, have lost their effectiveness. Facebook isn’t sure which of us to trust, and neither are we. And if we are searching for new ways to assign trust fairly and transparently, we should not expect the massive tech corporations, whose primary responsibility is to maximise profit from an attention economy, to do that work for us. They are not the guiding lights we are looking for.