When you can trust nobody, trust the smart machine

I will be at AOIR in Montreal, 10–13 October, to present some newer work as I look beyond the book. Below is a brief summary of ongoing investigations:


What is the connection between smart machines, self-tracking, and the ongoing mis/disinformation epidemic? They are part of a broader shift in the social rules of truth and trust. Emerging today is a strange alliance of objectivity, technology and the ‘personal’ – often cast in opposition to the aging bastions of institutional expertise. The fantasy of an empowered individual who ‘knows for themselves’ smuggles in a new set of dependencies on opaque and powerful technologies.



On one hand, individuals are encouraged to know more, and to take that knowing into their own hands. Emblematic is the growth of the self-tracking industry: measure your own health and productivity, discover the unique correlations that make you tick, and take control of rationalising and optimising your life. Taglines of ‘n=1’ and ‘small data’ sloganise the vision: the intrepid, tech-savvy individual on an empowering and personal quest for self-knowledge. Implicit here is a revalorisation of the personal and experiential: you can lay claim to the truth of your body in ways that the doctor cannot, despite all their learned expertise. This is territory that I go into in some detail in the book.



And so, Calit2’s Larry Smarr builds a giant 3D projection of his own microbiome – which, he claims, helped him diagnose the onset of Crohn’s disease before the doctors could.


But what does it mean to take control and know yourself, if this knowing happens through technologies that operate beyond the limits of the human senses? Subsidiary to the wider enthusiasm for big data, smart machines and machine learning, the value proposition of much (not all) self-tracking tech is predicated on the promise of data-driven objectivity: the idea that the machines will know us better than we know ourselves, and correct the biases and ‘fuzziness’ of human senses, cognition and memory. And this claim to objectivity rests on a highly physical relationship: these smart machines live on the wrist, under the bedsheets, sometimes even in the user’s body, embedding their observations, notifications and recommendations into the lived rhythms of everyday life. What we find is a very particular mixture of the personal and the machinic, the objective and the experiential: know yourself – through machines that know you better than you do.



Jeannine Risley’s Fitbit data is used to help disprove her claims of being raped by an intruder. What is called ‘self-knowledge’ becomes increasingly capable of being dissociated from the control and intentions of the ‘self’.



Another transformative site for how we know and how we trust is that of political mis/disinformation. While the comparison is neither simple nor obvious, I am exploring the idea that self-tracking and misinformation are animated by a common, broader shift towards a particular alliance of the objective, machinic and ‘personal’. In the political sphere, its current enemies are well-defined: institutional expertise, bureaucratic truthmaking and, in a piece of historical irony, liberalism as the dishonest face of a privileged elite. Here, new information technologies are leveraged towards what van Zoonen labelled ‘i-pistemology’: the embrace of personal and experiential truth in opposition to top-down and expert factmaking.



In such ‘deceptive’ social media postings, we find no comprehensive and consistent message per se, but a more flexible and scattershot method. The aim is not to defeat a rival message in the game of public opinion and truthtelling, but to add noise to the game until it breaks down. It is this general erosion of established rules that allows half-baked, factually incorrect and otherwise suspect information to compete with more official accounts.


The ongoing ‘fake news’ epidemic of course has roots in post-Cold War geopolitics, and in the free-speech ideology embedded into social media platforms and their corporate custodians. But it is also an extension of a decades-long decline in public trust of institutions and experts, and an unintended consequence of what we thought was the best part of Internet technologies: the ability to give everyone a voice, to break down artificial gatekeepers, and to allow more information to reach more people. It is well known that Dylann Roof, who killed nine in the 2015 Charleston massacre, started down that path with a simple online search for ‘black on white crime’. The focus here is on what danah boyd identified as a loss of orienting anchors in the age of online misinformation: emerging generations of media users who are taught to assemble their own eclectic mix of truths in a hyper-pluralistic media environment, while also learning a deep distrust of official sources.



2017 saw the March for Science: an earnest defence of evidence-based, objective, institutionalised truth as an indispensable tool for the government of self and others. The underlying sentiment: this isn’t an agenda for a particular kind of truth and trust, this is just reality – and anyway, didn’t we already settle this debate? But the debate over what counts as reality and how we get access to it is never quite settled.



These are strange and unsettling combinations: the displacement of trust from institutions to technologies in the guise of the empowered ‘I’, and the related proliferation of alternative forms of truthtelling. My current suspicion is that they express an increasingly unstable set of contradictions in our long-running relationship with the Enlightenment. On one hand, we find the enduring belief in better knowledge, especially through depersonalised and inhuman forms of objectivity, as the ticket to rational and informed human subjects. At the same time, this figure of the individual who knows for themselves – found in Kant’s inaugural call of Sapere aude! – is increasingly subject to both deliberate and structural manipulations by sociotechnical systems. We are pushed to discover our ‘personal truths’ in the wilderness of speculation, relying only on ourselves – which, in practice, often means relying on technologies whose workings escape our power to audit. There is nobody you can trust these days, but the smart machine shall not lead you astray.



Facebook scores our trustworthiness because we asked it to

News that Facebook assigns users secret ‘trustworthiness’ scores based on the content they report has drawn a familiar round of worries. We are told that as users flag objectionable content on the platform, they are themselves scored on a scale of 0 to 1, based on how reliably their flagging concurs with the censors’ judgment. Presumably, other factors play into the scoring; but the exact mechanism, as well as users’ actual scores, remains secret. The secrecy has drawn critical comparisons with China’s developing ‘social credit’ systems. At least with China’s Zhima Credit, you know you’re being scored and the score is freely available – something not afforded by American credit-rating systems, either.
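
Concretely, the reported description – a 0-to-1 score reflecting how often a user’s flags concur with moderators’ decisions – implies something like a smoothed agreement rate. The sketch below is purely speculative: Facebook’s actual formula is secret, and the data structure, the prior and its weight are my own assumptions for illustration.

```python
# Illustrative only: a smoothed agreement-rate 'trustworthiness' score.
# The 0-to-1 scale and the reliance on reviewer concurrence come from
# press reports; everything else here is an assumption for example's sake.

from dataclasses import dataclass


@dataclass
class FlagRecord:
    reports_made: int    # pieces of content the user has flagged
    reports_upheld: int  # flags that moderators later agreed with


def trust_score(record: FlagRecord, prior: float = 0.5, prior_weight: int = 10) -> float:
    """Score in [0, 1]: the user's smoothed rate of agreement with moderators.

    New users start near `prior`; as they flag more content, the score
    converges on their observed agreement rate.
    """
    return (record.reports_upheld + prior * prior_weight) / (
        record.reports_made + prior_weight
    )


# A user whose flags were upheld 40 times out of 50: (40 + 5) / (50 + 10) = 0.75
print(trust_score(FlagRecord(reports_made=50, reports_upheld=40)))
```

Whatever the real mechanism, the salient feature of any such scheme is that it measures alignment with the censors’ past judgments, not trustworthiness in any independent sense.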

But there’s a bit more to it. In many ways, Facebook is doing what we asked it to do. To untangle the issue, we might look to two other points of reference.

One is Peeple, the short-lived ‘Yelp for People’ that spent a stretch of 2015 as the Internet’s most reviled object. The app proposed that ordinary people should rate and review each other, from dates to landlords to neighbours – and, in the original design, the ratings would be published without the victim’s consent. The Internet boiled over, and the app was quickly stripped of its most controversial (and useful) features. That particular effort, at least, was dead in the water.


But Peeple wasn’t an anomaly. It wasn’t even the first to try such scoring. (Unvarnished tried it in 2010; people compared that to Yelp, too; and it died just as quickly.) These efforts are an inevitable extension of the tendency to replace lived experience, human interaction and personal judgment with aggregated scores managed by profit-driven corporations and their proprietary algorithms. (Joseph Weizenbaum, Computer Power and Human Reason.) If restaurants and Uber drivers are scored, then why not landlords, blind dates, neighbours, classmates? Once we say news sources have to be scored and censored in the battle against fake news, there is a practical incentive to score the people who report that fake news as well.

More broadly, platforms have long been in the business of ‘scoring’ us for reliability and usefulness, and of weighting the visibility and impact of our contributions. It has been pointed out that such scoring and weighting of users is standard practice for platforms like YouTube. Facebook isn’t leaping off into a brave new world of platform hubris; it is only extending its core social function of scoring and sorting people, in response to widespread demand that it do so.

In recent weeks, Twitter has been described as essentially a rogue actor amongst social media platforms. Where Facebook and others moved to ban the notorious misinformer Alex Jones, Jack Dorsey argued that banning specific individuals on the basis of widespread outrage would undermine efforts to establish consistent and fair policy. Those who called for banning Jones criticised Dorsey’s approach as lacking integrity, a moral cop-out. But let us be precise: what exactly makes Twitter’s decision ‘lack integrity’? Do we believe that if somebody is widely condemned enough, then platforms must reflect this public outrage in their judgment? That is, in many obvious ways, a dangerous path. Alternatively, we might insist that if Twitter’s rules are so impotent as to allow an Alex Jones to roam free, they should become more stringent. In other words, we are effectively asking social media platforms to play an even more dominant and deliberate role in censoring public discourse.

In many ways, this is the logical consequence of the widespread demand since November 2016 that social media platforms take on the responsibility of policing our information and our speech, and the role of determining who or what is trustworthy. We scolded Mark Zuckerberg in a global reality-TV drama of a Congressional hearing, telling him that with power comes responsibility: it turns out that with responsibility come new powers, too.


Now, we might say that the problem isn’t that Facebook is scoring us for trustworthiness, but that the process needs to be – we all know the magic phrase by heart – open, transparent, accountable. What would that look like? Presumably, it would require not only that users’ trust scores be visible to them, but that the scoring process can be audited, contested and corrected. That might involve yet another third party, though it is unclear what kind of institution we could trust these days to arbitrate our trust scores.

Here, we find the second reference point: the NSA. Tessa Lyons, a Facebook product manager, explains that their scoring system must remain as secret as possible, for fear that bad actors will learn to abuse it. This is, of course, a rationale we have seen in the past not only from social media platforms with regard to their other algorithms, but also from the NSA and other agencies with regard to government surveillance programs. We can’t tell you who exactly is being spied on in what way, because that would help the terrorists. And we can’t tell you how exactly our search results work, because this would help bad actors game the algorithm (and undermine our market position).

So we come back to a number of enduring structural dilemmas in the way our Internet is made. In protesting the disproportionate impact of a select few powerful platforms on the public sphere, we also demand that these platforms increase their control over public speech. Even as we ask them to censor the public with greater zeal, we possess little effective oversight over how the censors are to act.

In the wake of Russian election interference, Cambridge Analytica and other scandals, the pretension to neutrality has been punctured. There is now a wider recognition that platforms shape, bias and regulate public discourse. But while there is much sense in demanding better from the platforms themselves, passing off too much responsibility onto their hands risks encouraging exactly the hubris we like to accuse them of. “What makes you think you know what’s best for us, Mark Zuckerberg? Now, please hurry up and come up with the right rules and mechanisms to police our speech.”

If there is sufficient public outrage, Facebook might well be moved to retire its current system of trust scores. It has a history of stepping back from particularly rancorous features, only to gradually reintroduce them in more palatable forms. (Jose van Dijck, The Culture of Connectivity.) Why? Not because Facebook is an evil force that advances its dark arts whenever nobody’s looking, but because Facebook has both commercial and social pressures to develop more expansive scoring systems in order to perform its censoring functions.

Facebook’s development of ‘trust scores’ reflects a deeper, underlying problem: that traditional indices of trustworthiness, such as mainstream media institutions, have lost their effectiveness. Facebook isn’t sure which of us to trust, and neither are we. And if we are searching for new ways to assign trust fairly and transparently, we should not expect the massive tech corporations, whose primary responsibility is to maximise profit from an attention economy, to do that work for us. They are not the guiding lights we are looking for.