Art in America piece with Trevor Paglen

I recently spoke to Trevor Paglen – well known for works like ‘Limit Telephotography’ (2007–2012) and its images of NSA buildings and deep-sea fibre-optic cables – about surveillance, machine vision, and the changing politics of the visible and the machine-readable. The full piece is at Art in America.

Much of that discussion – around the proliferation of images created by and for machines, and the exponential expansion of pathways by which surveillance, data, and capital can profitably intersect – is also taken up in my upcoming book, Technologies of Speculation (NYUP 2020). There my focus is on what happens after Snowden’s leaks: the strange symbiosis of transparency and conspiracy, and the lingering unknowability of surveillance apparatuses and the terrorists they chase. The book also examines the passage from the vision of the Quantified Self, where we use all these smart machines to hack ourselves and know ourselves better, to the Quantified Us/Them, which plugs that data back into the circuits of surveillance capitalism.

In the piece, Paglen also discusses his recent collaboration with Kate Crawford on ImageNet Roulette, also on display at the Training Humans exhibition (Fondazione Prada Osservatorio, Milan):

“Some of my work, like that in “From ‘Apple’ to ‘Anomaly,’” asks what vision algorithms see and how they abstract images. It’s an installation of about 30,000 images taken from a widely used dataset of training images called ImageNet. Labeling images is a slippery slope: there are 20,000 categories in ImageNet, 2,000 of which are of people. There’s crazy shit in there! There are “jezebel” and “criminal” categories, which are determined solely by how people look; there are plenty of racist and misogynistic tags.

If you just want to train a neural network to distinguish between apples and oranges, you feed it a giant collection of example images. Creating a taxonomy and defining the set in a way that’s intelligible to the system is often political. Apples and oranges aren’t particularly controversial, though reducing images to tags is already horrifying enough to someone like an artist: I’m thinking of René Magritte’s Ceci n’est pas une pomme (This is Not an Apple) [1964]. Gender is even more loaded. Companies are creating gender detection algorithms. Microsoft, among others, has decided that gender is binary—man and woman. This is a serious decision that has huge political implications, just like the Trump administration’s attempt to erase nonbinary people.”
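To make the mechanics in that quote concrete: below is a minimal sketch of the ‘feed it a giant collection of example images’ workflow – a two-class apple-vs-orange classifier fine-tuned in PyTorch from a network pretrained on ImageNet. The directory layout, class names, and hyperparameters are illustrative assumptions on my part, not anything drawn from Paglen and Crawford’s projects. The point to notice is where the politics enters: the taxonomy is fixed before any learning happens, simply by naming the folders.

```python
# Illustrative sketch only: a binary apple-vs-orange image classifier
# via transfer learning. Paths and class names are hypothetical.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# The "taxonomy" is just the subdirectory names (data/train/apple,
# data/train/orange). Every image is forced into exactly one label.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),   # as the backbone expects
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and swap its final layer
# for a new head sized to our label set: the categories are baked in.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Nothing in the training loop questions the labels: whatever categories the folder structure encodes – ‘apple’ and ‘orange’, or ‘jezebel’ and ‘criminal’ – the optimiser will dutifully learn to sort images into them.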

[Image: apple_treachery.jpg]

Crawford & Paglen also have a longer read on training sets, Excavating AI (also the source of the image above).


When you can trust nobody, trust the smart machine

I will be at AOIR in Montreal, 10–13 October, to present some newer work as I look beyond the book. Below is a brief summary of these ongoing investigations:



What is the connection between smart machines, self-tracking, and the ongoing mis/disinformation epidemic? They are part of a broader shift in the social rules of truth and trust. Emerging today is a strange alliance of objectivity, technology and the ‘personal’ – often cast in opposition to the aging bastions of institutional expertise. The fantasy of an empowered individual who ‘knows for themselves’ smuggles in a new set of dependencies on opaque and powerful technologies.


1.

On one hand, individuals are encouraged to know more, and to take that knowing into their own hands. Emblematic is the growth of the self-tracking industry: measure your own health and productivity, discover the unique correlations that make you tick, and take control of rationalising and optimising your life. Taglines like ‘n=1’ and ‘small data’ sloganise the vision: the intrepid, tech-savvy individual on an empowering and personal quest for self-knowledge. Implicit here is a revalorisation of the personal and experiential: you have a claim to the truth of your body in ways that the doctor, for all their learned expertise, does not. This is territory I go into in some detail in the book.


[Image: smarr screencap.png]

And so, Calit2’s Larry Smarr builds a giant 3D projection of his own microbiome – which, he claims, helped him diagnose the onset of Crohn’s disease before the doctors could.


But what does it mean to take control and know yourself, if this knowing happens through technologies that operate beyond the limits of the human senses? Subsidiary to the wider enthusiasm for big data, smart machines, and machine learning, the value proposition of much (though not all) self-tracking tech is predicated on the promise of data-driven objectivity: the idea that the machines will know us better than we know ourselves, and correct the biases and ‘fuzziness’ of human senses, cognition, and memory. This claim to objectivity is in turn predicated on a highly physical relationship: these smart machines live on the wrist, under the bedsheets, sometimes even in the user’s body, embedding their observations, notifications, and recommendations into the lived rhythms of everyday life. What we find is a very particular mixture of the personal and the machinic, the objective and the experiential: know yourself – through machines that know you better than you do.


[Image: risley affidavit.png]

Jeannine Risley’s Fitbit data is used to help disprove her claims of being raped by an intruder. What is called ‘self-knowledge’ becomes increasingly capable of being dissociated from the control and intentions of the ‘self’.


2.

Another transformative site for how we know and how we trust is that of political mis/disinformation. While the comparison is neither simple nor obvious, I am exploring the idea that both sites are animated by a common, broader shift towards a particular alliance of the objective, the machinic, and the ‘personal’. In the political sphere, the alliance’s current enemies are well-defined: institutional expertise, bureaucratic truthmaking and, in a piece of historical irony, liberalism as the dishonest face of a privileged elite. Here, new information technologies are leveraged towards what van Zoonen labelled ‘i-pistemology’: the embrace of personal and experiential truth in opposition to top-down and expert factmaking.


[Image: fake post.png]

In such ‘deceptive’ social media postings, we find no comprehensive and consistent message per se, but a more flexible and scattershot method. The aim is not to defeat a rival message in the game of public opinion and truthtelling, but to add noise to the game until it breaks down. It is this general erosion of established rules that allows half-baked, factually incorrect, and otherwise suspect information to compete with more official accounts.


The ongoing ‘fake news’ epidemic of course has roots in post-Cold War geopolitics and in the free speech ideology embedded into social media platforms and their corporate custodians. But it is also an extension of a decades-long decline in public trust of institutions and experts, and an unintended consequence of what we thought was the best part about Internet technologies: the ability to give everyone a voice, break down artificial gatekeepers, and allow more information to reach more people. It is well known that Dylann Roof, who killed nine in the 2015 Charleston massacre, began down that path with a simple online search for ‘black on white crime’. The focus here is on what danah boyd identified as a loss of orienting anchors in the age of online misinformation: emerging generations of media users are taught to assemble their own eclectic mix of truths in a hyper-pluralistic media environment, while also learning a deep distrust of official sources.


[Image: science march combo.png]

2017 saw the March for Science: an earnest defence of evidence-based, objective, institutionalised truth as an indispensable tool for the government of self and others. The underlying sentiment: this isn’t an agenda for a particular kind of truth and trust, it is just reality – and anyway, didn’t we already settle this debate? But the debate over what counts as reality, and how we get access to it, is never quite settled.


3.

These are strange and unsettling combinations: the displacement of trust from institutions to technologies in the guise of the empowered ‘I’, and the related proliferation of alternative forms of truthtelling. My current suspicion is that they express an increasingly unstable set of contradictions in our long-running relationship with the Enlightenment. On one hand, we find the enduring belief in better knowledge, especially through depersonalised and inhuman forms of objectivity, as the ticket to rational and informed human subjects. At the same time, this figure of the individual who knows for themselves – found in Kant’s inaugural call of Sapere aude! – is increasingly subject to both deliberate and structural manipulations by sociotechnical systems. We are pushed to discover our ‘personal truths’ in the wilderness of speculation, relying only on ourselves – which, in practice, often means relying on technologies whose workings escape our power to audit. There is nobody you can trust these days, but the smart machine shall not lead you astray.