Art in America piece w/ Trevor Paglen

I recently spoke to Trevor Paglen – well known for works like ‘Limit Telephotography’ (2007-2012), with its long-distance images of NSA buildings, and his photographs of deep-sea fibreoptic cables – about surveillance, machine vision, and the changing politics of the visible / machine-readable. Full piece @ Art in America.

Much of that discussion – around the proliferation of images created by and for machines, and the exponential expansion of pathways by which surveillance, data, and capital can profitably intersect – is also taken up in my upcoming book, Technologies of Speculation (NYUP 2020). There my focus is on what happens after Snowden’s leaks – the strange symbiosis of transparency and conspiracy, the lingering unknowability of surveillance apparatuses and the terrorists they chase. It also examines the passage from the vision of the Quantified Self, where we use all these smart machines to hack ourselves and know ourselves better, to the Quantified Us/Them, which plugs that data back into the circuits of surveillance capitalism.

In the piece, Paglen also discusses his recent collaboration with Kate Crawford on ImageNet Roulette, also on display at the Training Humans exhibition (Fondazione Prada Osservatorio, Milan):

“Some of my work, like that in “From ‘Apple’ to ‘Anomaly,’” asks what vision algorithms see and how they abstract images. It’s an installation of about 30,000 images taken from a widely used dataset of training images called ImageNet. Labeling images is a slippery slope: there are 20,000 categories in ImageNet, 2,000 of which are of people. There’s crazy shit in there! There are “jezebel” and “criminal” categories, which are determined solely on how people look; there are plenty of racist and misogynistic tags.

If you just want to train a neural network to distinguish between apples and oranges, you feed it a giant collection of example images. Creating a taxonomy and defining the set in a way that’s intelligible to the system is often political. Apples and oranges aren’t particularly controversial, though reducing images to tags is already horrifying enough to someone like an artist: I’m thinking of René Magritte’s Ceci n’est pas une pomme (This is Not an Apple) [1964]. Gender is even more loaded. Companies are creating gender detection algorithms. Microsoft, among others, has decided that gender is binary—man and woman. This is a serious decision that has huge political implications, just like the Trump administration’s attempt to erase nonbinary people.”

[Image: apple_treachery.jpg]

Crawford & Paglen also have a longer read on training sets, Excavating AI (also the source for the image above).
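To make Paglen’s apples-and-oranges point concrete, here is a minimal, hypothetical sketch (in PyTorch; this is not Paglen and Crawford’s code, and random tensors stand in for a real labeled dataset like ImageNet). The taxonomy is fixed before any learning happens: the designer’s category list literally becomes the size of the model’s output layer, and the classifier can never report anything outside it.

```python
# Hypothetical sketch: how a label taxonomy gets baked into a classifier.
# Random tensors stand in for a real labeled dataset such as ImageNet.
import torch
import torch.nn as nn

classes = ["apple", "orange"]   # the taxonomy: a design decision made up front

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, len(classes)),  # one output per category -- no more, no less
)

images = torch.randn(256, 3, 32, 32)             # stand-in "photos"
labels = torch.randint(0, len(classes), (256,))  # stand-in annotations

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)          # scores over exactly the listed categories
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
```

Swap ‘orange’ for ‘jezebel’ or ‘criminal’ and the loop runs unchanged – the politics live in the label list and the annotations, not in the mathematics.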

 

Smart Machines & Enlightenment Dreams (1)

I was at UC San Diego in March for a symposium on “Technoscience and Political Algorithms”. In April, I’ll be at the University of Toronto’s McLuhan Centre with Whitney Phillips and Selena Nemorin to talk about “New Technological Ir/rationalities” – and then at NYC’s Theorizing the Web. Then in May, I’ll be back at MIT for Media in Transition 10.

Across all four, I’m continuing to grapple with an earlier question, or rather, a nagging suspicion: aren’t (1) the fantastically optimistic projections around objective data & AI, and (2) the increasingly high-profile crises of fake news, algorithmic bias, and in short, ‘bad’ machinic information, linked in some way? And aren’t both these hopes and fears rooted, perhaps, in the Enlightenment’s image of the knowing subject?

One popular response to the fake news epidemic was to reassert the ideals of scientific knowledge – that is, of impersonal, neutral, reassuringly objective information – hearkening back to a hodgepodge of 19th- and 20th-century ideals. But in its pure form, we know this is just as atavistic as MAGA: a reference point cast deep enough into the mist that it sustains us as fantasy, as something perpetually on the cusp of visibility and actuality. Information has always required an expansive set of emotional, imaginative, irrational investments to keep the engines running – which, in fact, is what is being endorsed in those protests for science.

One major aspect of this story is how the emphasis on the critical faculties of the autonomous individual lends itself both to a widespread cynicism (including the emergence of trolling as a mainstream performative style) and to the atomising organisation of life as data in the world of surveillance capitalism. I’ll get to that in a future post. In this one, what follows are some preliminary notes on the connections – not so much direct inheritance as repetitions, rearticulations, resonances – between how contemporary technologies and Enlightenment projects theorise the production of objective truth.

1/4

Consider, for example, self-tracking technologies. Beddit, the sleep tracker, records your heartbeats, respiration cycles, and other bodily movements; when you wake, it greets your consciousness with a numerical sleep score. You’re asked to consider how your sleep must have been – information produced and processed while the thinking subject was, literally, turned off. We can now track our exercise; mood swings; shitting; sex behaviour, including ‘thrusts per minute‘ (now sadly defunct). We can ask these devices to recommend, nudge, and influence our social relationships, or to shock us with electricity to jolt us back to work.

https://www.rescuetime.com/assets/integration-example-pavlok-e2a12d6cea8d10f2a70d6dbad7a5073e.png
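To illustrate the kind of reduction at stake, here is a toy sketch of how a ‘sleep score’ might be computed – emphatically an assumption for illustration, not Beddit’s proprietary algorithm. The score is a designer-weighted aggregate of signals recorded while the conscious subject was switched off:

```python
# Toy "sleep score" -- a hypothetical illustration, not Beddit's actual method.
from dataclasses import dataclass

@dataclass
class SleepNight:
    hours_asleep: float     # inferred from movement and heart-rate data
    avg_resting_hr: float   # beats per minute
    restless_events: int    # movement spikes during the night

def sleep_score(night: SleepNight) -> int:
    """Collapse a night of bodily signals into a single 0-100 number."""
    duration = min(night.hours_asleep / 8.0, 1.0)            # 8h treated as ideal
    calm = max(0.0, 1.0 - (night.avg_resting_hr - 50) / 50)  # lower heart rate scores higher
    stillness = max(0.0, 1.0 - night.restless_events / 20)   # fewer spikes, higher score
    # The weights are pure design decisions, invisible to the sleeping user.
    return round(100 * (0.5 * duration + 0.25 * calm + 0.25 * stillness))

print(sleep_score(SleepNight(hours_asleep=6.5, avg_resting_hr=58, restless_events=7)))  # ~78
```

The greeting number feels objective precisely because every one of these choices – what counts as ‘ideal’ sleep, how restlessness is weighted – has been made elsewhere, by someone else.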

In the book, I discuss how these technologies are presented as empowering the knowing individual. The idea is that the unerring objectivity of smart machines can be harnessed for a heroically independent form of ‘personalisation’. But lurking in this vision is a crucial set of displacements and double binds:

  1. First, the objectivity of this data-driven knowledge is secured by displacing the subject of knowledge from the conscious self to the smart machine – a machine which communicates ceaselessly with the nonconscious elements of the human individual, from glucose levels to neural electrical activity.
  2. Second, this interaction targets human sensation and sensibility as the next frontier of rationalisation. Here, the ideal of the knowing individual, for whom sensation is the privileged path to autonomous knowledge, intersects with the project of datafying humans through sensory data. We find here the dual pursuit of human knowledge and knowledge about human beings.
  3. Third, the objective is to program the individual into a fitter, happier, more productive entity. These displacements fuse a vision of voluntary, empowering human control with a machine-driven view of the human as an amalgamation of datafied parameters that can be nudged onto the optimal curve.

These dynamics recall, of course, theories of cybernetics and posthumanism – but, I would like to suggest, they also extend longstanding projects and problems of the Enlightenment: how can we establish an autonomous grounding for Reason? How can human sensation be understood not only as a path to objective knowledge, but also as a basis for a rational optimisation of social systems?

2/4

In 1726, Benjamin Franklin created for himself a journal of virtues: each day, he would record his performance according to thirteen criteria including ‘temperance’ and ‘chastity’.

A page from Benjamin Franklin's virtue journal

Franklin’s journal is a well-cited event in the prehistory of tracking and datafication. What is crucial is that the thinking, feeling individual is at the heart of curating and interpreting this data. In contrast, mass-market tracking technologies today seek a more automated, ‘frictionless’ design: not only is this more convenient for users, the idea is that the smart machines will avoid becoming contaminated by the biases and flaws of human subjectivity. In short, ‘machines that know you better than you know yourself’.

So there is a historically very familiar proposition here – that technoscience will provide what Lorraine Daston and Peter Galison called mechanical objectivity (in the context of 19th-century scientific images). The emphasis is on neutral and accurate data untainted by human bias, which individuals might then utilise for a more rational life. With self-tracking, as part of what Mark Hansen has called ‘twenty-first century media’, this objectivity is secured through a certain displacement. Where Franklin guarantees the validity of his records through his own integrity, self-tracking promises empowering self-knowledge through a new degree of technological dependency.

3/4

Crucially, this displacement occurs through the contested object of human sensation. Where human feelings, memories, and affects are, on the one hand, demoted as unreliable and biased sources of self-knowledge, it is exactly the minutiae of human sensation and experience that smart machines plumb for new use cases, new markets.

This datafication of sensation puts a new spin on the old Enlightenment relationship between sensation and Reason. Insofar as the Enlightenment was built on a spirit of Aufklärung, of overcoming established authorities for truth, the individual’s ability to know for themselves was often predicated on a turn to sensation and experience. It was the human individual – equipped with both the senses to acquire data and the Reason to process and verify that data – who was indispensable for putting the two together.

This emphasis on sensation was fundamental to many Enlightenment projects. In France, we might think of Condillac and Helvétius, who explored bodily experience as the foundation of all ideas – variously grouped as sensationists, sentimental empiricists, etc. by scholars like Jessica Riskin. (We are, of course, simplifying the different ways in which they can be grouped and ungrouped, catching only the broader themes for the moment.) The senses, in the narrowly physical sense, were often connected to sensibility or sentiment, understood as an affective disposition of the soul that would lie at the basis of one’s ability to intelligently and morally process external stimuli.

For our purposes, perhaps the best example of this historical resonance is Julien Offray de La Mettrie’s L’homme machine, or ‘Man, a Machine’. Yet contrary to how the title might sound today, man as machine was not a dry creature of logic, but was defined by the effort to annul the Cartesian gap. There is no soul as a thing separate from the body – not because the workings of the soul are mere ephemeral illusions, but because sensation and sentiment are the crucial mechanisms that arise from physical bodies and operate what is wrongly attributed to a transcendent soul. The upshot is that the proper analysis of sensation becomes crucial to understanding, manipulating, and optimising man as a machine.

https://upload.wikimedia.org/wikipedia/commons/1/11/LaMettrie_L%27homme_machine.jpg

Indeed, the story goes that La Mettrie, an accomplished physician, developed his strong materialism after a bout of illness and the personal experience of the mental struggle caused by a weak body. In a distant echo, a common pattern in the personal testimonies of self-tracking enthusiasts today is that illness and other bodily problems were the catalyst for turning to tracking technologies. It was often a difficult medical problem, a long convalescence, or some other experience of the machine breaking down that motivated them to seek a more objective and rational basis for the care of the self.

In this sense, sensation serves an important dual purpose for the Enlightenment:

  1. On one hand, there is sensation as the raw stuff of human machines – the truly empirical and rational ingredient for theorising human behaviour, in opposition to divine design as the First Cause or the intangibly transcendental soul.
  2. On the other hand, sensation and its proximate concepts are also held up as an important and even moral quality, insofar as the individual has a responsibility to cultivate their senses and so develop their proficiency with Reason.

With technologies like self-tracking, human individuals’ sensory acquisition of empirical data is displaced onto machinic harvesting of ‘raw data’ – the former demoted to a biased, partial, and irredeemably flawed form of collection. The exercise of human Reason remains, but is increasingly shaped by what the machines have extracted from those sensations – which arrives, again, with the moral authority of objectivity.

4/4

But there is one last displacement to be made. The abduction of self-knowledge and self-control onto the smart machines extends into futuristic projections about reprogramming subjects. In Surveiller et Punir, Foucault mentions La Mettrie as a marker in the development of the docile body: his materialism facilitated a “general theory of dressage, at the centre of which reigns the notion of ‘docility’, which joins the analysable body to the manipulable body.” L’homme machine was not only a model for understanding human sensation in a programmatic form, but also a model for reprogramming human behaviour.

Today, the theoretical rationalisation of the human subject in technoscientific terms is again contributing to new techniques for the government of self and others. The Quantified Self, which some early enthusiasts sought to use as a way to break away from the top-down applications of big data, is increasingly a Quantified Us, or rather, a Quantified Them, where the technology crafted to personalise big data is being scaled back up to governments and corporations.

Consider just one intersection of wearables, mind-hacking and state surveillance. Popular tracking devices like Thync have adopted neuroscience tools like EEG to enable monitoring and even direct manipulation of brainwave activity. In 2018, it was reported that the Chinese government had begun to deploy ‘mind reading’ helmets in select workplaces – in reality, fairly simple, probably EEG-based devices for detecting brainwave activity.

The devices can be fitted into the cap of a train driver. Photo: Deayea Technology

This was no surprise: already in 2012, Veritas Scientific claimed to have produced a ‘TruthWave’ helmet that could detect whether the subject was lying. TruthWave has since more or less disappeared, and the claim was likely always hyperbolic. But it is also the case that the device was partly funded by the US military. At the level of the technologies employed, the principles of data processing and analysis, and the private-public flow of funding and expertise, this ubiquitous spread of sensors and smart machines has far more in common with surveillance capitalism than its proponents might like to admit.
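For a sense of how mundane the underlying measurement is, here is a toy sketch of the band-power computation that most consumer EEG ‘attention’ or ‘fatigue’ scores rely on – an assumption about how such helmets work in general, not Deayea’s or Veritas’s actual method, with a synthetic signal standing in for a real electrode recording:

```python
# Toy EEG "fatigue index" via band power -- a generic sketch, not any vendor's code.
import numpy as np
from scipy.signal import welch

fs = 256                              # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)          # ten seconds of "recording"
# Synthetic EEG: a 10 Hz (alpha) rhythm plus broadband noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(signal, fs=fs, nperseg=512)   # power spectral density

def band_power(lo, hi):
    """Integrate spectral power over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

alpha = band_power(8, 13)    # associated with relaxed wakefulness
beta = band_power(13, 30)    # associated with alertness and engagement
# A crude index: relatively more alpha than beta reads as "fatigued".
print("fatigue index:", alpha / (alpha + beta))
```

Everything past this point – the threshold at which a worker counts as ‘fatigued’, the alert sent to a supervisor – is policy layered on top of a very simple measurement.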

***

Smart machines and big data have a long historical and cultural tail, one which draws not only on cybernetics and posthumanism, but the dynamic of sensation, objectivity and social reform that was central to the project of the Enlightenment. In the popular imagination, technoscience is a line, stretching itself ever forwards in inevitable progress: but in the normative and imaginative sense, it more resembles an ouroboros, ever repeating and devouring itself. And as that cycle captures us in the seductive dream of total objectivity, new technologies for surveillance and value extraction are embedding themselves – literally – under our skin.

When you can trust nobody, trust the smart machine

I will be at AoIR in Montreal, 10-13 October, to present some newer work as I look beyond the book. Below is one brief summary of ongoing investigations:


 

What is the connection between smart machines, self-tracking, and the ongoing mis/disinformation epidemic? They are part of a broader shift in the social rules of truth and trust. Emerging today is a strange alliance of objectivity, technology and the ‘personal’ – often cast in opposition to the aging bastions of institutional expertise. The fantasy of an empowered individual who ‘knows for themselves’ smuggles in a new set of dependencies on opaque and powerful technologies.

 

1.

On one hand, individuals are encouraged to know more, and to take that knowing into their own hands. Emblematic is the growth of the self-tracking industry: measure your own health and productivity, discover the unique correlations that make you tick, and take control of rationalising and optimising your life. Taglines like ‘n=1’ and ‘small data’ sloganise the vision: the intrepid, tech-savvy individual on an empowering and personal quest for self-knowledge. Implicit here is a revalorisation of the personal and experiential: you have a claim to the truth of your body in ways that the doctor cannot, despite all their learned expertise. This is territory that I go into in some detail in the book.

 

[Image: smarr screencap.png]

And so, Calit2’s Larry Smarr builds a giant 3D projection of his own microbiome – which, he claims, helped him diagnose the onset of Crohn’s disease before the doctors could.

 

But what does it mean to take control and know yourself, if this knowing happens through technologies that operate beyond the limits of the human senses? Subsidiary to the wider enthusiasm for big data, smart machines and machine learning, the value proposition of much (not all) self-tracking tech is predicated on the promise of data-driven objectivity: the idea that the machines will know us better than we know ourselves, and correct the biases and ‘fuzziness’ of human senses, cognition, memory. And this claim to objectivity is predicated on a highly physical relationship: these smart machines live on the wrist, under the bedsheets, sometimes even in the user’s body, embedding their observations, notifications, and recommendations into the lived rhythms of everyday life. What we find is a very particular mixture of the personal and the machinic, the objective and the experiential: know yourself – through machines that know you better than you do.

 

[Image: risley affidavit.png]

Jeannine Risley’s Fitbit data is used to help disprove her claims of being raped by an intruder. What is called ‘self-knowledge’ becomes increasingly capable of being dissociated from the control and intentions of the ‘self’.

 

2.

Another transformative site for how we know and how we trust is political mis/disinformation. While the comparison is neither simple nor obvious, I am exploring the idea that mis/disinformation and self-tracking are animated by a common, broader shift towards a particular alliance of the objective, the machinic, and the ‘personal’. In the political sphere, this shift’s current enemies are well defined: institutional expertise, bureaucratic truthmaking and, in a piece of historical irony, liberalism as the dishonest face of a privileged elite. Here, new information technologies are leveraged towards what van Zoonen labelled ‘i-pistemology’: the embrace of personal and experiential truth in opposition to top-down and expert factmaking.

 

[Image: fake post.png]

In such ‘deceptive’ social media postings, we find no comprehensive and consistent message per se, but a more flexible and scattershot method. The aim is not to defeat a rival message in the game of public opinion and truthtelling, but to add noise to the game until it breaks down. It is this general erosion of established rules that allows half-baked, factually incorrect, and otherwise suspect information to compete with more official accounts.

 

The ongoing ‘fake news’ epidemic of course has roots in post-Cold War geopolitics, and the free speech ideology embedded into social media platforms and their corporate custodians. But it is also an extension of a decades-long decline in public trust of institutions and experts. It is also an unintended consequence of what we thought was the best part about Internet technologies: the ability to give everyone a voice, to break down artificial gatekeepers, and allow more information to reach more people. It is well known how Dylann Roof, who killed nine in the 2015 Charleston massacre, began that path with a simple online search of ‘black on white crime’. The focus here is on what danah boyd identified as a loss of orienting anchors in the age of online misinformation: emerging generations of media users who are taught to assemble their own eclectic mix of truths in a hyper-pluralistic media environment, while also learning a deep distrust of official sources.

 

[Image: science march combo.png]

2017 saw the March for Science: an earnest defence of evidence-based, objective, institutionalised truth as an indispensable tool for the government of self and others. The underlying sentiment: this isn’t an agenda for a particular kind of truth and trust, this is just reality – and anyway, didn’t we already settle this debate? But the debate over what counts as reality and how we get access to it is never quite settled.

 

3.

These are strange and unsettling combinations: the displacement of trust from institutions to technologies in the guise of the empowered ‘I’, and the related proliferation of alternative forms of truthtelling. My current suspicion is that they express an increasingly unstable set of contradictions in our long-running relationship with the Enlightenment. On one hand, we find the enduring belief in better knowledge, especially through depersonalised and inhuman forms of objectivity, as the ticket to rational and informed human subjects. At the same time, this figure of the individual who knows for themselves – found in Kant’s inaugural call of Sapere aude! – is increasingly subject to both deliberate and structural manipulations by sociotechnical systems. We are pushed to discover our ‘personal truths’ in the wilderness of speculation, relying only on ourselves – which, in practice, often means relying on technologies whose workings escape our power to audit. There is nobody you can trust these days, but the smart machine shall not lead you astray.

 

Lecture @ Wattis Institute for Contemporary Arts

I will be at the Wattis Institute for Contemporary Arts, San Francisco, on 20 March 2018 to discuss data, bodies and intimacy, as part of their year-long program on the work of Seth Price. More information here.

 

Data, or, Bodies into Facts

Data never stops accumulating. There is always more of it. Data covers everything and everyone, like skin, and yet different people have different levels of access to it, so it’s never quite fair to call it “objective” or even “truthful.”

Entire industries are built around storing data, and then protecting, organizing, verifying, optimizing, and distributing it. From there, even the most banal pieces of data work to penetrate the most intimate corners of our lives.

For Sun-ha Hong, the promise of data is the promise to turn bodies into facts: emotions, behavior, and every messy, amorphous human reality can be distilled into the discrete, clean cuts of calculable information. We track our exercise, our sexual lives, our relationships, our happiness, in the hope of self-knowledge achieved through machines wrought in the hands of others. Data promises a certain kind of intimacy, but everything about our lived experience constantly violates this serene aesthetic wherein bodies are sanitized, purified, and disinfected into objective and neutral facts. This is the push-pull between the raw and the mediated.

Whether it be by looking at surveillance, algorithmic, or self-tracking technologies, Hong’s work points to the question of how human individuals become the ingredient for the production of truths and judgments about them by things other than themselves.

 

Update: He gives a talk.


Lecture @ MIT List Center

I will be speaking at MIT’s List Center on May 19, as part of the exhibition An Inventory of Shimmers: Objects of Intimacy in Contemporary Art. The talk / discussion is titled Intimacy with Technology. See more info here, and RSVP here.

“Join List Visual Arts Center and Sun-ha Hong, Mellon Postdoctoral Fellow, Comparative Media Studies/Writing, MIT in a discussion about the affective charge and the promise of objectivity surrounding the idea of raw data. As we move into an era of ambient and ubiquitous computing, Hong will discuss the intimate aspects of data to explore how we as humans engage in relationships with technologies—relationships which lead us to ask, ‘what is intimacy?’. In response to the List Center’s exhibition An Inventory of Shimmers: Objects of Intimacy in Contemporary Art, Hong will delve into the mediated and aesthetic elements that objects assume.”


Lecture @ Copenhagen Business School

I was recently at Copenhagen Business School, courtesy of Mikkel Flyverbom, to discuss the book-in-progress. An earlier version of the talk, given at MIT, is online in podcast form here.

An excerpt from my notes:

“So we have two episodes, two ongoing episodes. On one hand, you have the state and its technological system, designed for bulk collection at massive scales, and energised by the moral and political injunction towards ‘national security’ – all of this leaked through Edward Snowden. On the other hand, you have the popularisation of self-tracking devices, a fresh addition to the growing forms of constant care and management required of the employable and productive subject, with Silicon Valley as its epicentre.

These are part of a wider penumbra of practices: algorithmic predictions, powered by Bayesian inference and artificial neural nets, corporate data-mining under the moniker of ‘big’ data… now, by no means are they the same thing, or governed by some central force that perpetuates them. But as they pop up around every street corner, there are certain tendencies that start to characterise ‘data-driven’ as a mode of thinking and decision-making.

The tendency I focus on here is the effort to render things known, predictable, calculable – and how pursuing that hunger entails, in fact, many close encounters with uncertainty and the unknown.

Here surveillance is not reducible to questions of security and privacy. It is a scene for ongoing conflicts over what counts as knowledge, who or what gets the authority to declare what you are, what we consider ‘good enough’ evidence to watch people, to change our diet, to arrest them. What we’re seeing is a renewed effort at valorising a certain project of objective knowledge, of factual certainty, of capturing the viscera of life into bits, of producing the right number that tells us what to do.”


Data’s Intimacy

New piece at the open-access journal Communication+1, titled ‘Data’s Intimacy: Machinic Sensibility and the Quantified Self’. This is the first of, I think, two or three pieces on self-tracking that will be rolling out over the next couple of years.


expereal, a mood visualisation service (now possibly defunct)

 

Abstract

Today, machines observe, record, and sense the world – not just for us, but sometimes instead of us (in our stead), and even indifferently to us humans. And yet, we remain human. Correlationism may not be up to a comprehensive ontology, but the ways in which we encounter, and struggle to make some kind of sense of, machinic sensibility matter. The nature of that encounter is not instrumentality, or even McLuhanian extension, but a full-blown ‘relationship’, where the terms by which machines ‘experience’ the world, and communicate with each other, parametrise the conditions for our own experience. This essay will play out one such relationship currently in the making: the boom in self-tracking technologies, and the attendant promise of data’s intimacy.

This essay proceeds in three sections, all of which draw on a larger research project into self-tracking and contemporary data epistemologies. It thus leverages observations from close reading of self-tracking’s publicisation in the mass media between 2007 and 2016; analysis of over fifty self-tracking products, some of it through self-experimentation; and interviews and ethnographic observation, primarily of the ‘Quantified Self’ connoisseur community. The first section examines the dominant public presentations of self-tracking in early twenty-first century discourse. This discourse embraces a vision of automated and intimate self-surveillance, which is then promised to deliver superior control and objective knowledge over the self. Next, I link these promises to the recent theoretical turns towards the agency of objects and the autonomous sensory capacities of new media to consider the implications of such theories – and the technological shifts they address – for the phenomenology of the new media subject. Finally, I return to self-tracking discourse to consider its own idealisation of such a subject – what I call ‘data-sense’. I conclude by calling for a more explicit public and intellectual debate around the relationships we forge with new technologies, and the consequences they have for who – and what – is given which kinds of authority to speak the truth of the ‘self’.

[Talk] U. Milano-Bicocca

I will be at the University of Milano-Bicocca next week to give a talk on surveillance, self-tracking and the data-driven life. It will overlap significantly with my presentation last week at the Affect Theory conference; I’ll be posting the full text and slides afterwards. Abstract below.

The Data-Driven Life: The Parameters of Knowing in the Online Surveillance Society

‘Information overload’ is an old cliché, but when it was still fresh, it conveyed a broad and fundamental liquidity in the parameters of our experience. What it meant – and felt like – to remember, to know, was changing. Surveillance today is not simply a question of privacy or governmental power, but a practical extension of such liquidity. Surveillance’s fevered dream of total prediction hinges on its ability to subtend human sensibility – with its forgetfulness, bias, and other problems – to reach ‘raw’ and comprehensive data. This data-hunger, shared by states, corporations and individuals alike, betrays a ‘honeymoon objectivity’. The rise of new technologies for knowledge production is being misconstrued as a discovery of pure and unmediated information. The result is a profound shift in what qualifies as knowledge; who, or what, does the knowing; what decisions and actions are legitimated through that knowledge. Surveillance practices and controversies today host a reparametrisation of what ‘knowing’ entails.

In this talk, I will address two specific cases: the state surveillance of the Snowden Affair, and the self-surveillance of the Quantified Self (QS) movement. I draw on interviews, ethnographic observation and archival research that is part of a larger, ongoing project.

  1. I know we are being watched, Snowden told us so – but I don’t see it, and I don’t feel it. A vast surveillance program withdraws into the recesses of technological systems, denying our capacity to know and experience it. Conventional forms of proof or risk probability elude both arguments for and against it. This situation provokes two major patterns of ‘knowing’. First, subjunctivity leverages the recessive unknowability surrounding surveillance as if it were in some way true and certain, producing hypothetical, provisionary bases for real, enduring actions and beliefs. Statistical measures of danger become mathematically negligible, yet affectively overwhelming. Second, interpassivity projects others (human and nonhuman) who believe and experience, in our stead, what we cannot ourselves. Even if the world of surveillance and terror is not real in my back yard, these interpellated others help make it ‘real enough’. Technology’s recession thus provokes an existential dilemma: how do I ‘know’? What is it supposed to feel like to ‘know’?
  2. We cannot stand guard over our judgments without machines to keep us steady. If our knowledge of our own bodies, habits and affects was previously left to unreliable memories and gut feeling, QS promises a data-driven existence where you truly “come into contact with yourself” – through persistent, often wearable, self-surveillance. The Delphic maxim know thyself is applied to a very different existential condition, where my lived relationship with technology becomes the authoritative site for an abstracted relationship with my own body. Yet QSers also acknowledge that data is always incomplete, raising new uncertainties and requiring the intervention of subjective judgment. Here, it is technology’s protrusion which forces the question: how will you ‘know’ yourself through a digitality that subtends your memory and intention?