Art in America piece w/ Trevor Paglen

I recently spoke to Trevor Paglen – well known for works like ‘Limit Telephotography’ (2007–2012) and for his images of NSA buildings and deep-sea fibre-optic cables – about surveillance, machine vision, and the changing politics of the visible and the machine-readable. The full piece is at Art in America.

Much of that discussion – around the proliferation of images created by and for machines, and the exponential expansion of pathways by which surveillance, data, and capital can profitably intersect – is also taken up in my upcoming book, Technologies of Speculation (NYUP 2020). There my focus is on what happens after Snowden’s leaks – the strange symbiosis of transparency and conspiracy, the lingering unknowability of surveillance apparatuses and the terrorists they chase. The book also examines the passage from the vision of the Quantified Self, in which we use all these smart machines to hack ourselves and know ourselves better, to the Quantified Us/Them, which plugs that data back into the circuits of surveillance capitalism.

In the piece, Paglen also discusses his recent collaboration with Kate Crawford on ImageNet Roulette, on display as part of the Training Humans exhibition (Fondazione Prada Osservatorio, Milan):

“Some of my work, like that in “From ‘Apple’ to ‘Anomaly,’” asks what vision algorithms see and how they abstract images. It’s an installation of about 30,000 images taken from a widely used dataset of training images called ImageNet. Labeling images is a slippery slope: there are 20,000 categories in ImageNet, 2,000 of which are of people. There’s crazy shit in there! There are “jezebel” and “criminal” categories, which are determined solely on how people look; there are plenty of racist and misogynistic tags.

If you just want to train a neural network to distinguish between apples and oranges, you feed it a giant collection of example images. Creating a taxonomy and defining the set in a way that’s intelligible to the system is often political. Apples and oranges aren’t particularly controversial, though reducing images to tags is already horrifying enough to someone like an artist: I’m thinking of René Magritte’s Ceci n’est pas une pomme (This is Not an Apple) [1964]. Gender is even more loaded. Companies are creating gender detection algorithms. Microsoft, among others, has decided that gender is binary—man and woman. This is a serious decision that has huge political implications, just like the Trump administration’s attempt to erase nonbinary people.”

[Image: apple_treachery.jpg]

Crawford and Paglen also have a longer read on training sets, Excavating AI (also the source for the image above).
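To make the mechanics behind Paglen’s apples-and-oranges point concrete, here is a minimal, hypothetical sketch (in PyTorch; not the artists’ or ImageNet’s actual code – the folder path and class names are invented) of how a label taxonomy gets hard-coded into a classifier before any learning happens:

```python
# A hypothetical sketch, not Paglen/Crawford's code: how a label taxonomy gets
# baked into an image classifier before any learning happens. The folder path
# and class names below are invented stand-ins for an ImageNet-style dataset.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder turns each subdirectory name ("apple/", "orange/") into a class
# index; the taxonomy is fixed by whoever arranged the folders, not by the model.
train_set = datasets.ImageFolder("training_images/", transform=transform)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(train_set.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # rewarded only for reproducing the given tags
        loss.backward()
        optimizer.step()

# At inference time every image is forced into the taxonomy: it must come out
# as an "apple" or an "orange", however poorly either tag fits.
```

The training step itself is trivial; the consequential step happened earlier, when someone decided which folders – ‘apple’, ‘orange’, ‘jezebel’, ‘criminal’ – would exist at all.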


Facebook scores our trustworthiness because we asked them to

News that Facebook assigns users secret ‘trustworthiness’ scores based on the content they report has drawn a familiar round of worries. We are told that as users flag objectionable content on the platform, they are themselves scored on a scale of 0 to 1, based on how reliably their flagging concurs with the censors’ judgment. Presumably, other factors play into the scoring; but the exact mechanism, like the users’ actual scores, remains secret. The secrecy has drawn critical comparisons with China’s developing ‘social credit’ systems. At least with China’s Zhima Credit, you know you’re being scored, and the score is freely available – something not afforded by American credit rating systems, either.
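Facebook has not published the formula, so the following is purely a hypothetical illustration (the names, smoothing, and weights are invented): a 0-to-1 reporter score of this kind could be as simple as a smoothed agreement rate between a user’s reports and moderators’ eventual rulings.

```python
# Hypothetical illustration only: Facebook's actual scoring formula is secret.
# This sketch treats a reporter's "trustworthiness" as a smoothed agreement
# rate between their reports and moderators' eventual rulings.
from dataclasses import dataclass

@dataclass
class ReporterRecord:
    reports_upheld: int    # reports where moderators removed the content
    reports_rejected: int  # reports judged to be unfounded

def trust_score(record: ReporterRecord, prior: float = 0.5, prior_weight: int = 10) -> float:
    """Return a score in [0, 1]; new reporters start near the prior."""
    total = record.reports_upheld + record.reports_rejected
    # Laplace-style smoothing so a handful of reports doesn't swing the score.
    return (record.reports_upheld + prior * prior_weight) / (total + prior_weight)

# A user whose flags are mostly rejected drifts toward 0;
# one whose flags are mostly upheld drifts toward 1.
print(trust_score(ReporterRecord(reports_upheld=3, reports_rejected=27)))   # ~0.2
print(trust_score(ReporterRecord(reports_upheld=40, reports_rejected=5)))   # ~0.82
```

Even in this toy form, every design choice – the prior, the weighting, whose judgment counts as ground truth – is exactly the kind of thing that remains opaque.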

But there’s a bit more to it. In many ways, Facebook is doing what we asked it to do. To untangle the issue, we might look to two other points of reference.

One is Peeple, the short-lived ‘Yelp for People’ that spent a stretch of 2015 as the Internet’s most reviled object. The app proposed that ordinary people should rate and review each other, from dates to landlords to neighbours – and, in the original design, the ratings would be published without the victim’s consent. The Internet boiled over, and the app was quickly stripped of its most controversial (and useful) features. That particular effort, at least, was dead in the water.

[Image: Peeple]

But Peeple wasn’t an anomaly. It wasn’t even the first to try such scoring. (Unvarnished tried it in 2010; people compared that to Yelp, too; and it died just as quickly.) These efforts are an inevitable extension of the tendency to replace lived experience, human interaction, and personal judgment with aggregated scores managed by profit-driven corporations and their proprietary algorithms. (Joseph Weizenbaum, Computer Power and Human Reason.) If restaurants and Uber drivers are scored, then why not landlords, blind dates, neighbours, classmates? Once we say news sources have to be scored and censored in the battle against fake news, there is a practical incentive to score the people who report that fake news as well.

More broadly, platforms have long been in the business of ‘scoring’ us for reliability and usefulness, and of weighting the visibility and impact of our contributions. It has been pointed out that such scoring and weighting of users is standard practice for platforms like YouTube. Facebook isn’t leaping off into a brave new world of platform hubris; it is only extending its core social function of scoring and sorting people, in response to widespread demand that it do so.

In recent weeks, Twitter has been described as essentially a rogue actor amongst social media platforms. Where Facebook and others moved to ban the notorious misinformer Alex Jones, Jack Dorsey argued that banning specific individuals on the basis of widespread outrage would undermine efforts to establish consistent and fair policy. Those who called for banning Jones criticised Dorsey’s approach as lacking integrity, a moral cop-out. But let us be precise: what exactly makes Twitter’s decision ‘lack integrity’? Do we believe that if somebody is widely condemned enough, then platforms must reflect this public outrage in their judgment? That is, in many obvious ways, a dangerous path. Alternatively, we might insist that if Twitter’s rules are so impotent as to allow an Alex Jones to roam free, they should become more stringent. In other words, we are effectively asking social media platforms to play an even more dominant and deliberate role in censoring public discourse.

In many ways, this is the logical consequence of the widespread demand since November 2016 that social media platforms take on the responsibility of policing our information and our speech, and that they determine who or what is trustworthy. We scolded Mark Zuckerberg in the global reality TV drama of a Congressional hearing, telling him that with power comes responsibility: it turns out that with responsibility come new powers, too.

[Image: Zuckerberg]

Now, we might say that the problem isn’t that Facebook is scoring us for trustworthiness, but that the process needs to be – we all know the magic phrase by heart – open, transparent, accountable. What would that look like? Presumably, it would require not only that users’ trust scores be visible to them, but that the scoring process can be audited, contested, and corrected. That might involve a third party, though it is unclear what kind of institution we would trust these days to arbitrate our trust scores.

Here, we find the second reference point: the NSA. Tessa Lyons, a Facebook product manager, explains that the scoring system must remain as secret as possible, for fear that bad actors will learn to abuse it. This is, of course, a rationale we have seen in the past not only from social media platforms with regard to their other algorithms, but also from the NSA and other agencies with regard to government surveillance programs. We can’t tell you who exactly is being spied on in what way, because that would help the terrorists. And we can’t tell you how exactly our search results work, because that would help bad actors game the algorithm (and undermine our market position).

So we come back to a number of enduring structural dilemmas in the way our Internet is made. In protesting the disproportionate impact of a select few powerful platforms on the public sphere, we also demand that these platforms increase their control over public speech. Even as we ask them to censor the public with greater zeal, we possess little effective oversight over how the censors are to act.

In the wake of Russian election interference, Cambridge Analytica, and other scandals, the pretension to neutrality has been punctured. There is now a wider recognition that platforms shape, bias, and regulate public discourse. But while there is much sense in demanding better from the platforms themselves, placing too much responsibility in their hands risks encouraging exactly the hubris we like to accuse them of. “What makes you think you know what’s best for us, Mark Zuckerberg? Now, please hurry up and come up with the right rules and mechanisms to police our speech.”

If there is sufficient public outrage, Facebook might well be moved to retire its current system of trust scores. It has a history of stepping back from particularly rancorous features, only to gradually reintroduce them in more palatable forms. (José van Dijck, The Culture of Connectivity.) Why? Not because Facebook is an evil force that advances its dark arts whenever nobody’s looking, but because it faces both commercial and social pressures to develop more expansive scoring systems in order to perform its censoring functions.

Facebook’s development of ‘trust scores’ reflects a deeper problem: traditional indices of trustworthiness, such as mainstream media institutions, have lost their effectiveness. Facebook isn’t sure which of us to trust, and neither are we. And if we are searching for new ways to assign trust fairly and transparently, we should not expect the massive tech corporations, whose primary responsibility is to maximise profit from an attention economy, to do that work for us. They are not the guiding lights we are looking for.