Art in America piece w/ Trevor Paglen

I recently spoke to Trevor Paglen – well known for works like ‘Limit Telephotography’ (2007–2012), with its long-distance images of NSA buildings, and for photographs of deep-sea fibre-optic cables – about surveillance, machine vision, and the changing politics of the visible / machine-readable. Full piece @ Art in America.

Much of that discussion – around the proliferation of images created by and for machines, and the exponential expansion of pathways by which surveillance, data, and capital can profitably intersect – is also taken up in my upcoming book, Technologies of Speculation (NYUP 2020). There my focus is on what happens after Snowden’s leaks – the strange symbiosis of transparency and conspiracy, the lingering unknowability of surveillance apparatuses and the terrorists they chase. It also examines the passage from the vision of the Quantified Self, where we use all these smart machines to hack ourselves and know ourselves better, to the Quantified Us/Them which plugs that data back into the circuits of surveillance capitalism.

In the piece, Paglen also discusses his recent collaboration with Kate Crawford on ImageNet Roulette, also on display at the Training Humans exhibition (Fondazione Prada Osservatorio, Milan):

“Some of my work, like that in “From ‘Apple’ to ‘Anomaly,’” asks what vision algorithms see and how they abstract images. It’s an installation of about 30,000 images taken from a widely used dataset of training images called ImageNet. Labeling images is a slippery slope: there are 20,000 categories in ImageNet, 2,000 of which are of people. There’s crazy shit in there! There are “jezebel” and “criminal” categories, which are determined solely on how people look; there are plenty of racist and misogynistic tags.

If you just want to train a neural network to distinguish between apples and oranges, you feed it a giant collection of example images. Creating a taxonomy and defining the set in a way that’s intelligible to the system is often political. Apples and oranges aren’t particularly controversial, though reducing images to tags is already horrifying enough to someone like an artist: I’m thinking of René Magritte’s Ceci n’est pas une pomme (This is Not an Apple) [1964]. Gender is even more loaded. Companies are creating gender detection algorithms. Microsoft, among others, has decided that gender is binary—man and woman. This is a serious decision that has huge political implications, just like the Trump administration’s attempt to erase nonbinary people.”
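Paglen’s apples-and-oranges example can be made concrete with a deliberately toy sketch – no neural network, just a nearest-centroid classifier over invented colour features – to show how the taxonomy (the label set) is fixed before any learning happens. All names and numbers here are illustrative, not drawn from ImageNet:

```python
# Toy supervised classification: the label set ("taxonomy") is decided
# before any example is seen. Images are reduced to average (r, g, b)
# values -- invented data for illustration only.

def centroid(samples):
    """Mean feature vector of a list of (r, g, b) tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def classify(features, centroids):
    """Assign the label whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# A miniature "training set": labelled examples per category.
training = {
    "apple":  [(200, 30, 40), (180, 40, 50), (210, 25, 35)],
    "orange": [(230, 140, 20), (240, 150, 30), (220, 130, 25)],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

print(classify((205, 35, 45), centroids))   # a reddish image -> "apple"
print(classify((235, 145, 25), centroids))  # an orange-ish image -> "orange"
```

The point survives the toy: whoever fixes the categories – ‘apple’, ‘orange’, or ‘jezebel’ – has already made the political decision before the first training image is fed in.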

apple_treachery.jpg

Crawford & Paglen also have a longer read on training sets, Excavating AI (also source for above image).


Smart Machines & Enlightenment Dreams (1)

I was at UC San Diego in March for a symposium on “Technoscience and Political Algorithms”. In April, I’ll be at the University of Toronto’s McLuhan Centre with Whitney Phillips and Selena Nemorin to talk about “New Technological Ir/rationalities” – and then at NYC’s Theorizing the Web. Then in May, I’ll be back at MIT for Media in Transition 10.

Across all four, I’m continuing to grapple with an earlier question, or rather, a nagging suspicion: aren’t (1) the fantastically optimistic projections around objective data & AI, and (2) the increasingly high-profile crises of fake news, algorithmic bias, and in short, ‘bad’ machinic information, linked in some way? And aren’t both these hopes and fears rooted, perhaps, in the Enlightenment’s image of the knowing subject?

One popular response to the fake news epidemic was to reassert the ideals of scientific knowledge – that is, of impersonal, neutral, reassuringly objective information – hearkening back to a hodgepodge of 19th- and 20th-century ideals. But in its pure form, we know this is just as atavistic as MAGA: a reference point cast deep enough into the mist that it sustains us as fantasy, precisely as something forever on the cusp of visibility and actuality. Information has always required an expansive set of emotional, imaginative, irrational investments to keep the engines running – and it is exactly these investments that are being endorsed in those protests for science.

One major aspect of this story is how the emphasis on the critical faculties of the autonomous individual lends itself both to a widespread cynicism (including the emergence of trolling as a mainstream performative style) and to the atomising organisation of life as data in the world of surveillance capitalism. I’ll get to that in a future post. In this one, what follows are some preliminary notes on the relationship – not so much direct inheritance as repetitions, rearticulations, resonances – between how contemporary technologies and Enlightenment projects theorise the production of objective truth.

1/4

Consider, for example, self-tracking technologies. Beddit, the sleep tracker, records your heartbeats, respiration cycles, and other bodily movements; when you wake, it greets your consciousness with a numerical sleep score. You’re asked to consider how your sleep must have been – information produced and processed while the thinking subject was, literally, turned off. We can now track our exercise, mood swings, shitting, and sexual behaviour, including ‘thrusts per minute‘ (a feature now sadly defunct). We can ask these devices to recommend, nudge, influence our social relationships, or shock us with electricity to jolt us back to work.
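To make the displacement concrete, here is a deliberately simplified sketch of how a tracker might boil overnight sensor readings down into the single score the sleeper wakes up to. The weights and ‘ideal’ target values are invented for illustration – this is not Beddit’s or any vendor’s actual algorithm:

```python
# Hypothetical sleep-score calculation: overnight readings are collapsed
# into one number. All weights and "ideal" values are invented.

def sleep_score(avg_heart_rate, avg_resp_rate, movement_events, hours_asleep):
    """Return a 0-100 score from overnight aggregates (illustrative only)."""
    # Penalise deviation from made-up "ideal" resting values.
    hr_penalty = abs(avg_heart_rate - 55) * 1.0    # ideal ~55 bpm
    resp_penalty = abs(avg_resp_rate - 14) * 2.0   # ideal ~14 breaths/min
    move_penalty = movement_events * 0.5           # restlessness
    sleep_penalty = abs(hours_asleep - 8) * 5.0    # ideal ~8 hours
    score = 100 - (hr_penalty + resp_penalty + move_penalty + sleep_penalty)
    return max(0, min(100, round(score)))

# A quiet night versus a restless one -- both numbers produced while
# the thinking subject was, literally, turned off.
print(sleep_score(56, 14, 10, 7.5))   # near the "ideal" profile
print(sleep_score(72, 18, 60, 5.0))   # a bad night
```

The design choice worth noticing is that the ‘ideal’ values are baked into the function: the score does not just describe the night, it measures the sleeper against a target someone else has set.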

https://www.rescuetime.com/assets/integration-example-pavlok-e2a12d6cea8d10f2a70d6dbad7a5073e.png

In the book, I discuss how these technologies are presented as empowering the knowing individual. The idea is that the unerring objectivity of smart machines can be harnessed for a heroically independent form of ‘personalisation’. But lurking in this vision is a crucial set of displacements and double binds:

  1. First, the objectivity of this data-driven knowledge is secured by displacing the subject of knowledge from the conscious self to the smart machine – a machine which communicates ceaselessly with the nonconscious elements of the human individual, from glucose levels to neural electrical activity.
  2. In other words, this interaction targets human sensation and sensibility as the next frontier of rationalisation. Here, the ideal of the knowing individual, for whom sensation is the privileged path to autonomous knowledge, intersects with the project of datafying humans through sensory data. We find here the dual pursuit of human knowledge and of knowledge about human beings.
  3. The objective is to program the individual into a fitter, happier, more productive entity. These displacements help fuse a vision of voluntary, empowering human control with a machine-driven view of the human as an amalgamation of datafied parameters that can be nudged onto the optimal curve.

These dynamics recall, of course, theories of cybernetics and posthumanism – but, I would like to suggest, they also extend longstanding projects and problems of the Enlightenment: how can we establish an autonomous grounding for Reason? How can human sensation be understood not only as a path to objective knowledge, but also as a basis for the rational optimisation of social systems?

2/4

In 1726, Benjamin Franklin created for himself a journal of virtues: each day, he would record his performance according to thirteen criteria including ‘temperance’ and ‘chastity’.

A page from Benjamin Franklin's virtue journal

Franklin’s journal is a well-cited event in the prehistory of tracking and datafication. What is crucial is that the thinking, feeling individual is at the heart of curating and interpreting this data. In contrast, mass-market tracking technologies today pursue a more automated, ‘frictionless’ design: not only is this more convenient for users, the idea is that the smart machines will thereby avoid contamination by the biases and flaws of human subjectivity. In short: ‘machines that know you better than you know yourself’.

So there is a historically very familiar proposition here – that technoscience will provide what Lorraine Daston and Peter Galison called mechanical objectivity (in the context of 19th-century scientific images). The emphasis is on neutral and accurate data untainted by human bias, which individuals might then utilise for a more rational life. With self-tracking, as part of what Mark Hansen has called ‘twenty-first century media’, this objectivity is secured through a certain displacement: where Franklin guaranteed the validity of his records through his own integrity, self-tracking promises empowering self-knowledge through a new degree of technological dependency.

3/4

Crucially, this displacement occurs through the contested object of human sensation. While human feelings, memories, and affects are demoted as unreliable and biased sources of self-knowledge, it is exactly the minutiae of human sensation and experience that smart machines plumb for new use cases and new markets.

This datafication of sensation puts a new spin on the old Enlightenment relationship between sensation and Reason. Insofar as the Enlightenment was built on a spirit of Aufklärung – of overcoming established authorities for truth – the individual’s ability to know for themselves was often predicated on a turn to sensation and experience. It was the human individual, equipped with both the senses to acquire data and the Reason to process and verify it, who was indispensable for putting the two together.

This emphasis on sensation was fundamental to many Enlightenment projects. In France, we might think of Condillac and Helvétius, who explored bodily experience as the foundation of all ideas – thinkers variously grouped as sensationists, sentimental empiricists, and so on by scholars like Jessica Riskin. (We are, of course, simplifying the different ways in which they can be grouped and ungrouped, catching only the broader themes for the moment.) The senses, in the narrowly physical sense, were often connected to sensibility or sentiment, understood as an affective disposition of the soul that lay at the basis of one’s ability to intelligently and morally process external stimuli.

For our purposes, perhaps the best example of this historical resonance is Julien Offray de La Mettrie’s L’homme machine, or ‘Man, a Machine’. Yet contrary to how the title might sound today, man as machine was not a dry creature of logic; the work is defined by the effort to annul the Cartesian gap. There is no soul as a thing separate from the body – not because the workings of the soul are mere ephemeral illusions, but because sensation and sentiment are the crucial mechanisms that arise from physical bodies and perform what is wrongly attributed to a transcendent soul. The upshot is that the proper analysis of sensation becomes crucial to understanding, manipulating, and optimising man as a machine.

https://upload.wikimedia.org/wikipedia/commons/1/11/LaMettrie_L%27homme_machine.jpg

Indeed, the story goes that La Mettrie, an accomplished physician, developed his strong materialism after a bout of illness and the personal experience of the mental struggle caused by a weak body. In a distant echo, a common pattern in the personal testimonies of self-tracking enthusiasts today is that illness and other bodily problems were the catalyst for their turn to tracking technologies. It was often a difficult medical problem, a long convalescence, or some other experience of the machine breaking down that motivated them to seek a more objective and rational basis for the care of the self.

In this sense, sensation serves an important dual purpose for the Enlightenment:

  1. On one hand, there is sensation as the raw stuff of human machines – the truly empirical and rational ingredient for theorising human behaviour, in opposition to divine design as the First Cause or the intangibly transcendental soul.
  2. On the other hand, sensation and its proximate concepts are also held up as an important and even moral quality, insofar as the individual has a responsibility to cultivate their senses to grow their proficiency with Reason.

With technologies like self-tracking, the human individual’s sensory acquisition of empirical data is displaced onto machinic harvesting of ‘raw data’, the former demoted to a biased, partial, and irredeemably flawed form of collection. The exercise of human Reason remains, but is increasingly shaped by what the machines have extracted from those sensations – which arrives, again, with the moral authority of objectivity.

4/4

But there is one last displacement to be made. The offloading of self-knowledge and self-control onto smart machines invites futuristic projections about reprogramming subjects. In Surveiller et Punir, Foucault mentions La Mettrie as a marker in the development of the docile body: his materialism facilitated a “general theory of dressage, at the centre of which reigns the notion of ‘docility’, which joins the analysable body to the manipulable body.” L’homme machine was not only a model for understanding human sensation in programmatic form, but also a model for reprogramming human behaviour.

Today, the theoretical rationalisation of the human subject in technoscientific terms is again contributing to new techniques for the government of self and others. The Quantified Self, which some early enthusiasts sought to use as a way to break away from the top-down applications of big data, is increasingly a Quantified Us, or rather, a Quantified Them, where the technology crafted to personalise big data is being scaled back up to governments and corporations.

Consider just one intersection of wearables, mind-hacking and state surveillance. Popular tracking devices like Thync have adopted neuroscience tools like EEG to enable monitoring and even direct manipulation of brainwave activity. In 2018, it was reported that the Chinese government had begun to deploy ‘mind reading’ helmets in select workplaces – in reality, fairly simple, probably EEG-based devices for detecting brainwave activity.

The devices can be fitted into the cap of a train driver. Photo: Deayea Technology

This was no surprise: already in 2012, Veritas Scientific claimed to have produced a ‘TruthWave’ helmet that could detect if the subject was lying. TruthWave has since more or less disappeared, and the claim was always likely to have been hyperbolic. But it is also the case that the device was partly funded by the US military. At the level of the technologies employed, the principles of data processing and analysis, and the private-public flow of funding and expertise, this ubiquitous spread of sensors and smart machines has far more in common with surveillance capitalism than it might like to admit.

***

Smart machines and big data have a long historical and cultural tail, one which draws not only on cybernetics and posthumanism, but the dynamic of sensation, objectivity and social reform that was central to the project of the Enlightenment. In the popular imagination, technoscience is a line, stretching itself ever forwards in inevitable progress: but in the normative and imaginative sense, it more resembles an ouroboros, ever repeating and devouring itself. And as that cycle captures us in the seductive dream of total objectivity, new technologies for surveillance and value extraction are embedding themselves – literally – under our skin.