When you can trust nobody, trust the smart machine

I will be at AOIR in Montreal, 10-13 October, to present some newer work as I look beyond the book. Below is a brief summary of ongoing investigations:



What is the connection between smart machines, self-tracking, and the ongoing mis/disinformation epidemic? They are part of a broader shift in the social rules of truth and trust. Emerging today is a strange alliance of objectivity, technology and the ‘personal’ – often cast in opposition to the aging bastions of institutional expertise. The fantasy of an empowered individual who ‘knows for themselves’ smuggles in a new set of dependencies on opaque and powerful technologies.


1.

On one hand, individuals are encouraged to know more, and to take that knowing into their own hands. Emblematic is the growth of the self-tracking industry: measure your own health and productivity, discover the unique correlations that make you tick, and take control of rationalising and optimising your life. Taglines of ‘n=1’ and ‘small data’ sloganise the vision: the intrepid, tech-savvy individual on an empowering and personal quest for self-knowledge. Implicit here is a revalorisation of the personal and experiential: you have a claim to the truth of your body in ways that the doctor cannot, despite all their learned expertise. This is territory that I go into in some detail in the book.


[Image: Smarr screencap]

And so, Calit2’s Larry Smarr builds a giant 3D projection of his own microbiome – which, he claims, helped him diagnose the onset of Crohn’s disease before the doctors could.


But what does it mean to take control and know yourself, if this knowing happens through technologies that operate beyond the limits of the human senses? Subsidiary to the wider enthusiasm for big data, smart machines and machine learning, the value proposition of much (not all) self-tracking tech is predicated on the promise of data-driven objectivity: the idea that the machines will know us better than we know ourselves, and correct the biases and ‘fuzziness’ of human senses, cognition, memory. And this claim to objectivity rests on a highly physical relationship: these smart machines live on the wrist, under the bedsheets, sometimes even in the user’s body, embedding their observations, notifications and recommendations into the lived rhythms of everyday life. What we find is a very particular mixture of the personal and the machinic, the objective and the experiential: know yourself – through machines that know you better than you do.


[Image: Risley affidavit]

Jeannine Risley’s Fitbit data is used to help disprove her claims of being raped by an intruder. What is called ‘self-knowledge’ becomes increasingly capable of being dissociated from the control and intentions of the ‘self’.


2.

Another transformative site for how we know and how we trust is that of political mis/disinformation. While the comparison is neither simple nor obvious, I am exploring the idea that they are animated by a common, broader shift towards a particular alliance of the objective, machinic and ‘personal’. In the political sphere, its current enemies are well-defined: institutional expertise, bureaucratic truthmaking and, in a piece of historical irony, liberalism as the dishonest face of a privileged elite. Here, new information technologies are leveraged towards what van Zoonen labelled ‘i-pistemology’: the embrace of personal and experiential truth in opposition to top-down and expert factmaking.


[Image: fake post]

In such ‘deceptive’ social media postings, we find no comprehensive and consistent message per se, but a more flexible and scattershot method. The aim is not to defeat a rival message in the game of public opinion and truthtelling, but to add noise to the game until it breaks down. It is this general erosion of established rules that allows half-baked, factually incorrect and otherwise suspect information to compete with more official accounts.


The ongoing ‘fake news’ epidemic of course has roots in post-Cold War geopolitics, and in the free speech ideology embedded into social media platforms and their corporate custodians. But it is also an extension of a decades-long decline in public trust in institutions and experts. It is also an unintended consequence of what we thought was the best part about Internet technologies: the ability to give everyone a voice, to break down artificial gatekeepers, and to allow more information to reach more people. It is well known how Dylann Roof, who killed nine in the 2015 Charleston massacre, began down that path with a simple online search for ‘black on white crime’. The focus here is on what danah boyd identified as a loss of orienting anchors in the age of online misinformation: emerging generations of media users who are taught to assemble their own eclectic mix of truths in a hyper-pluralistic media environment, while also learning a deep distrust of official sources.


[Image: science march combo]

2017 saw the March for Science: an earnest defence of evidence-based, objective, institutionalised truth as an indispensable tool for the government of self and others. The underlying sentiment: this isn’t an agenda for a particular kind of truth and trust, this is just reality – and anyway, didn’t we already settle this debate? But the debate over what counts as reality and how we get access to it is never quite settled.


3.

These are strange and unsettling combinations: the displacement of trust from institutions to technologies in the guise of the empowered ‘I’, and the related proliferation of alternative forms of truthtelling. My current suspicion is that they express an increasingly unstable set of contradictions in our long-running relationship with the Enlightenment. On one hand, we find the enduring belief in better knowledge, especially through depersonalised and inhuman forms of objectivity, as the ticket to rational and informed human subjects. At the same time, this figure of the individual who knows for themselves – found in Kant’s inaugural call of Sapere aude! – is increasingly subject to both deliberate and structural manipulations by sociotechnical systems. We are pushed to discover our ‘personal truths’ in the wilderness of speculation, relying only on ourselves – which, in practice, often means relying on technologies whose workings escape our power to audit. There is nobody you can trust these days, but the smart machine shall not lead you astray.



Facebook scores our trustworthiness because we asked it to

News that Facebook assigns users secret ‘trustworthiness’ scores based on the content they report has drawn a familiar round of worries. We are told that as users flag objectionable content on the platform, they are themselves scored on a scale of 0 to 1, based on how reliably their flagging concurs with the censors’ judgment. Presumably, other factors play into the scoring; but the exact mechanism, as well as the users’ actual scores, remain secret. The secrecy has drawn critical comparisons with China’s developing ‘social credit’ systems. At least with China’s Zhima Credit, you know you’re being scored and the score is freely available – something not afforded by American credit rating systems, either.
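Facebook has not published the formula, but the reported description (a 0-to-1 score tracking how often a user’s reports agree with the moderators’ eventual rulings) suggests something as banal as a smoothed agreement rate. A minimal sketch of that reading, purely illustrative: the function, its name and the smoothing constants below are my assumptions, not Facebook’s.

```python
# Hypothetical reconstruction of a flag-reliability score as reported:
# the score rises as a user's reports keep agreeing with moderators'
# final rulings. All names and constants are assumptions, not Facebook's.

def trust_score(reports_upheld: int, reports_total: int,
                prior: float = 0.5, prior_weight: int = 10) -> float:
    """Return a 0-1 reliability score: the user's agreement rate with
    moderator decisions, smoothed toward a neutral prior so that a
    handful of reports cannot swing the score to either extreme."""
    return (reports_upheld + prior * prior_weight) / (reports_total + prior_weight)

print(trust_score(2, 3))     # few reports: stays near the 0.5 prior (~0.54)
print(trust_score(90, 100))  # long, accurate history: rises toward 0.86
```

The point of the sketch is how little it takes: any platform that already records reports and rulings has the ingredients for such a score, which is partly why the practice generalises so easily.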

But there’s a bit more to it. In many ways, Facebook is doing what we asked it to do. To untangle the issue, we might look to two other points of reference.

One is Peeple, the short-lived ‘Yelp for People’ that enjoyed a moment in 2015 as the Internet’s most reviled object. The app proposed that ordinary people should rate and review each other, from dates to landlords to neighbours – and, in the original design, the ratings would be published without the victim’s consent. The Internet boiled over, and the app was quickly stripped of its most controversial (and useful) features. That particular effort, at least, was dead in the water.

[Image: Peeple]

But Peeple wasn’t an anomaly. It wasn’t even the first to try such scoring. (Unvarnished tried it in 2010; people compared that to Yelp, too; and it died just as quickly.) These efforts are an inevitable extension of the tendency to replace lived experience, human interaction and personal judgment with aggregated scores managed by profit-driven corporations and their proprietary algorithms. (Joseph Weizenbaum, Computer Power and Human Reason.) If restaurants and Uber drivers are scored, then why not landlords, blind dates, neighbours, classmates? Once we say news sources have to be scored and censored in the battle against fake news, there is a practical incentive to score the people who report that fake news as well.

More broadly, platforms have long been in the business of ‘scoring’ us for reliability and usefulness, and of weighting the visibility and impact of our contributions. It has been pointed out that such scoring and weighting of users is standard practice for platforms like YouTube. Facebook isn’t leaping off into a brave new world of platform hubris; it is only extending its core social function of scoring and sorting people, in response to widespread demand that it do so.

In recent weeks, Twitter was described as essentially a rogue actor amongst social media platforms. Where Facebook and others moved to ban the notorious misinformer Alex Jones, Jack Dorsey argued that banning specific individuals on the basis of widespread outrage would undermine efforts to establish consistent and fair policy. Those who called for banning Jones criticised Dorsey’s approach as lacking integrity, a moral cop-out. But let us be precise: what exactly makes Twitter’s decision ‘lack integrity’? Do we believe that if somebody is widely condemned enough, then platforms must reflect this public outrage in their judgment? That is, in many obvious ways, a dangerous path. Alternatively, we might insist that if Twitter’s rules are so impotent as to allow an Alex Jones to roam free, they should become more stringent. In other words, we are effectively asking social media platforms to take an even more dominant and deliberate role in censoring public discourse.

In many ways, this is the logical consequence of the widespread demand since November 2016 that social media platforms take on the responsibility of policing our information and our speech, and that they take on the role of determining who or what is trustworthy. We scolded Mark Zuckerberg in a global reality TV drama of a Congressional hearing, telling him that with power comes responsibility: it turns out that with responsibility comes new powers, too.

[Image: Zuckerberg]

Now, we might say that the problem isn’t that Facebook is scoring us for trustworthiness, but that the process needs to be – we all know the magic phrase by heart – open, transparent, accountable. What would that look like? Presumably, this would require not only that users’ trust scores are visible to themselves, but that the scoring process can be audited, contested and corrected. That might involve another third party, though it is unclear what kind of institution we would trust these days to arbitrate our trust scores.

Here, we find the second reference point: the NSA. Tessa Lyons, Facebook’s product manager, explains that their scoring system must remain as secret as possible, for fear that bad actors will learn to abuse it. This is, of course, a rationale we have seen in the past not only from social media platforms with regard to their other algorithms, but also from the NSA and other agencies with regard to government surveillance programs. We can’t tell you who exactly is being spied on in what way, because that would help the terrorists. And we can’t tell you how exactly our search results work, because this would help bad actors game the algorithm (and undermine our market position).

So we come back to a number of enduring structural dilemmas in the way our Internet is made. In protesting the disproportionate impact of a select few powerful platforms on the public sphere, we also demand that these platforms increase their control over public speech. Even as we ask them to censor the public with greater zeal, we possess little effective oversight over how the censors are to act.

In the wake of Russian election interference, Cambridge Analytica and other scandals, the pretension to neutrality has been punctured. There is now a wider recognition that platforms shape, bias and regulate public discourse. But while there is much sense in demanding better from the platforms themselves, passing off too much responsibility onto their hands risks encouraging exactly the hubris we like to accuse them of. “What makes you think you know what is best for us, Mark Zuckerberg? Now, please hurry up and come up with the right rules and mechanisms to police our speech.”

If there is sufficient public outrage, Facebook might well be moved to retire its current system of trust scores. It has a history of stepping back from particularly rancorous features, only to gradually reintroduce them in more palatable forms. (Jose van Dijck, The Culture of Connectivity.) Why? Not because Facebook is an evil force that advances its dark arts whenever nobody’s looking, but because Facebook has both commercial and social pressures to develop more expansive scoring systems in order to perform its censoring functions.

Facebook’s development of ‘trust scores’ reflects a deeper, underlying problem: that traditional indices of trustworthiness, such as mainstream media institutions, have lost their effectiveness. Facebook isn’t sure which of us to trust, and neither are we. And if we are searching for new ways to assign trust fairly and transparently, we should not expect the massive tech corporations, whose primary responsibility is to maximise profit from an attention economy, to do that work for us. They are not the guiding lights we are looking for.

Interview @ Gas Gallery

I spoke with Ceci Moss at Gas, a mobile art gallery that roams Los Angeles and the web, about different forms of self-tracking: the technological promises and economic precarities, moral injunctions and everyday habits… found here.

An excerpt:

We have to ask not only ‘is this really empowering or not’, but also ‘what is it about our society that makes us feel like we need to empower ourselves in this way?’ In the same way, we have to ask what kinds of new labours, new troubles, new responsibilities, new guilts these empowering activities bring to our doorstep. From an economic perspective, if you are someone who has to constantly sell your productivity to the market, the ‘empowerment’ of self-tracking and self-care becomes a necessary labour for your survival. The injunction to ‘care for yourself’ is a truncated version of ‘you’ve got to care for yourself to stay afloat, because nobody will do it for you.’

The interview is part of their ongoing exhibition:

take care | June 9–July 20, 2018

Featuring: Hayley Barker, Darya Diamond, Ian James, Young Joon Kwak, C. Lavender, Sarah Manuwal, Saewon Oh, Amanda Vincelli, and SoftCells presents: Jules Gimbrone

How do radical ambitions of “self-care” persist or depart from capitalist society’s preoccupation with wellness and the industry surrounding it, particularly when filtered through technological advances? How can we imagine personal wellness that complicates or diverges from capitalist and consumerist tendencies? Taking its name from the common valediction, which is both an expression of familiarity and an instruction of caution, take care is a group exhibition that considers the many tensions surrounding the possibilities of self-care.


Lecture @ Wattis Institute for Contemporary Arts

I will be at the Wattis Institute for Contemporary Arts, San Francisco, on 20 March 2018 to discuss data, bodies and intimacy, as part of their year-long program on the work of Seth Price. More information here.


Data, or, Bodies into Facts

Data never stops accumulating. There is always more of it. Data covers everything and everyone, like skin, and yet different people have different levels of access to it, so it’s never quite fair to call it “objective” or even “truthful.”

Entire industries are built around storing data, and then protecting, organizing, verifying, optimizing, and distributing it. From there, even the most banal pieces of data work to penetrate the most intimate corners of our lives.

For Sun-ha Hong, the promise of data is the promise to turn bodies into facts: emotions, behavior, and every messy amorphous human reality can be distilled into the discrete, clean cuts of calculable information. We track our exercise, our sexual lives, our relationships, our happiness, in the hope of self-knowledge achieved through machines wrought in the hands of others. Data promises a certain kind of intimacy, but everything about our lived experience constantly violates this serene aesthetic wherein bodies are sanitized, purified, and disinfected into objective and neutral facts. This is the push-pull between the raw and the mediated.

Whether it be by looking at surveillance, algorithmic, or self-tracking technologies, Hong’s work points to the question of how human individuals become the ingredient for the production of truths and judgments about them by things other than themselves.


Update: He gives a talk.


Presentation @ Digital Existence II: Precarious Media Life

Later this month I will be presenting at Digital Existence II: Precarious Media Life, at the Sigtuna Foundation, Sweden, organised by Amanda Lagerkvist via the DIGMEX Research Network (of which I am a part) and the Nordic Network for the Study of Media and Religion. The abstract for my part of the show:


On the terror of becoming known

Today, we keenly feel the terror of becoming known: of being predicted and determined by data-driven surveillance systems. The webs of significance which sustain us also produce a persistent vulnerability to becoming known by things other than ourselves. From the efforts to predict ‘Lone Wolf’ terrorists through comprehensive surveillance of personal communications, to pilot programs for calculating insurance premiums by monitoring daily behaviour, the expressed fear is often of misidentification and misunderstanding. Yet the more general root of this anxiety is not error or falsehood, but a highly entrenched moralisation of knowing. Digital technologies are the newest frontier for the reprisal of old Enlightenment dreams, wherein the subject has a duty to know and technological inventions are an ineluctable force for better knowledge. This nexus demands subjects’ constant vulnerability to producing data and being socially determined by it. In response, subjects turn to what Foucault called illegalisms[1]: forms of complaint, compromise, obfuscation, and other everyday efforts to mitigate the violence of becoming known. The presentation threads this normative argument together with two kinds of grounding material: (1) episodes in becoming-known drawn from original research into American state- and self-surveillance, and (2) select works in moral philosophy and technology criticism.[2]


[1] Foucault, M., 2015. The Punitive Society: Lectures at the Collège de France, 1972-1973. B. E. Harcourt, ed. New York: Palgrave Macmillan.

[2] E.g. Jasanoff, S., 2016. The Ethics of Invention: Technology and the Human Future. New York: W.W. Norton & Co; Vallor, S., 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press; Winner, L., 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.

Presentation @ 4S

I will be at 4S [Society for Social Studies of Science], Boston this week in the ‘Surveillance and Security’ panel: Thursday 31 August, 4.00pm, Sheraton Boston Flr 3 Kent.

Recessive Objects: Surveillance and the (Dis)appearance of fact

Recessive objects are things which promise to extend our knowledge, but thereby publicise the very uncertainty threatening that knowing. These archives, statistical figures, black boxes mobilise our enduring faith in nonhuman objectivity and technological progress, imposing a sense of calculability and predictability. Yet far from extinguishing uncertainty, they provide material presence of the absent, secret, unknowable – especially the widening gap between human and machinic sensibility. Recessive objects address longstanding questions about the social production of what counts as objective fact, and what kind of virtues are invested into ‘new’ technologies. They emphasise the practical junctures where public imagination, material artefacts and the operational logic of new technologies intersect.

I discuss two recessive objects featuring centrally in America’s recent encounters with surveillance technologies: (1) the Snowden Files, an indefinite archive of state secrets leaking profusely since 2013; (2) the latest generation of self-tracking devices. What does it mean to know about a vast state surveillance system, even as it operates almost entirely removed from individuals’ sensory experience? How can the public make its judgment when proof of surveillance’s efficacy is itself classified? What kind of ‘self-knowledge’ is it when we learn about our bodies through machines that track us in ways our senses cannot follow – and claim to ‘know you better than you know yourself’?

Presentation @ ICA 2017

Next week, I’ll be presenting at the International Communication Association conference, San Diego, on the future as a trope by which potentiality and speculation may be folded into the domain of reasoned judgment. Saturday 27 May, 9.30am, Aqua Salon AB.

On Futures, and Epistemic Black Markets

The future does not exist: and this simple fact gives it a special epistemic function.

The future is where truths too uncertain, fears too politically incorrect, ideas too unprovable, receive unofficial license to roam… The future is a liminal zone, a margin of tolerated unorthodoxy that provides essential compensation for the rigidity of modern epistemic systems. This ‘flexibility’ is central to the perceived ruptures of traditional authorities in the contemporary moment. What we call post-fact politics (David Roberts), the age of skepticism (Siebers, Leiter), the rise of pre-emption (Massumi, Amoore), describe situations where apparently well-established infrastructures of belief and proof are disrupted by transgressive associations of words and things. The future is here conceptualised as a mode for such interventions.

This view helps us understand the present-day intersection of two contradictory fantasies: first, the quest to know and predict exhaustively, especially through new technologies and algorithms; second, heightened anxiety over uncertainties that both necessitate and elude those efforts. In the talk, I trace these contradictory fantasies across several interconnected scenes. We find voracious data hunger and apophenia (Steyerl) accompanied by visions of uncontrolled futures in the Snowden affair, and in the narrative of radical terrorism; in Donald Trump’s depiction of eroded borders, and the use of the latest surveillance technologies to track Muslims; in the revival (endurance?) of the paranoid style (Richard Hofstadter); and even in the apocalyptic warnings over climate change, tempered by deep confusion over the reliability of such estimates.

The trading of ‘futures’ in stock markets originated from the need to align the uncertainties inherent in agricultural timescales with the epistemic demands of human markets. The futures I speak of are currency in epistemic black markets: spaces where things that are not presently real, proven, or accounted for may nevertheless be traded for sentiment and opinion.


Criticising Surveillance and Surveillance Critique

New article now available on open access @ Surveillance & Society.

Abstract:

The current debate on surveillance, both academic and public, is constantly tempted towards a ‘negative’ criticism of present surveillance systems. In contrast, a ‘positive’ critique would be one which seeks to present alternative ways of thinking, evaluating, and even undertaking surveillance. Surveillance discourse today propagates a host of normative claims about what is admissible as true, probable, efficient – based upon which it cannot fail to justify its own expansion. A positive critique questions and subverts this epistemological foundation. It argues that surveillance must be held accountable by terms other than those of its own making. The objective is an open debate not only about ‘surveillance or not’, but the possibility of ‘another surveillance’.

To demonstrate the necessity of this shift, I first examine two existing frames of criticism. Privacy and humanism (appeal to human rights, freedoms and decency) are necessary but insufficient tools for positive critique. They implicitly accept surveillance’s bargain of trade-offs: the benefit of security measured against the cost of rights. To demonstrate paths towards positive critique, I analyse risk and security: two load-bearing concepts that hold up existing rationalisations of surveillance. They are the ‘openings’ for reforming those evaluative paradigms and rigged bargains on offer today.

Lecture @ MIT List Centre

I will be speaking at MIT’s List Centre on May 19, as part of the exhibition An Inventory of Shimmers: Objects of Intimacy in Contemporary Art. The talk / discussion is titled Intimacy with Technology. See more info here, and RSVP here.

“Join List Visual Arts Center and Sun-ha Hong, Mellon Postdoctoral Fellow, Comparative Media Studies/Writing, MIT in a discussion about the affective charge and the promise of objectivity surrounding the idea of raw data. As we move into an era of ambient and ubiquitous computing, Hong will discuss the intimate aspects of data to explore how we as humans engage in relationships with technologies—relationships which lead us to ask, ‘what is intimacy?’. In response to the List Center’s exhibition An Inventory of Shimmers: Objects of Intimacy in Contemporary Art Hong will delve into the mediated and aesthetic elements that objects assume.”


Lecture @ Copenhagen Business School

I was recently at Copenhagen Business School, courtesy of Mikkel Flyverbom, to discuss the book-in-progress. An earlier version of the talk, given at MIT, is online in podcast form here.

An excerpt from my notes:

“So we have two episodes, two ongoing episodes. On one hand, you have the state and its technological system, designed for bulk collection at massive scales, and energised by the moral and political injunction towards ‘national security’ – all of this leaked through Edward Snowden. On the other hand, you have the popularisation of self-tracking devices, a fresh addition to the growing forms of constant care and management required of the employable and productive subject, Silicon Valley being its epicentre.

These are part of a wider penumbra of practices: algorithmic predictions, powered by Bayesian inference and artificial neural nets, corporate data-mining under the moniker of ‘big’ data… now, by no means are they the same thing, or governed by some central force that perpetuates them. But as they pop up around every street corner, there are certain tendencies that start to characterise ‘data-driven’ as a mode of thinking and decision-making.

The tendency I focus on here is the effort to render things known, predictable, calculable – and how pursuing that hunger entails, in fact, many close encounters with uncertainty and the unknown.

Here surveillance is not reducible to questions of security and privacy. It is a scene for ongoing conflicts over what counts as knowledge, who or what gets the authority to declare what you are, what we consider ‘good enough’ evidence to watch people, to change our diet, to arrest them. What we’re seeing is a renewed effort at valorising a certain project of objective knowledge, of factual certainty, of capturing the viscera of life into bits, of producing the right number that tells us what to do.”