Control Creep

The second of two pieces with the Centre for International Governance Innovation, “Control Creep” explores how data produced for one purpose inevitably spreads to new (mis)uses and relations of control:

“The trouble with thinking about data as personal property, however, is that what our data means for us has little to do with what it can be made to mean for others. Being paid for our data will not empower us if that data is still being recombined into unappealable judgments by cops or bosses.”

Why Transparency Won’t Save Us

My new piece “Why Transparency Won’t Save Us” @ the Centre for International Governance Innovation talks about how transparency around data has become a form of neoliberal responsibilisation:

Too often, transparency ends up a form of free labour, where we are burdened with dis- or misinformation but deprived of the capacity for meaningful corrective action. What results is a form of neoliberal “responsibilization,” in which the public becomes burdened with duties it cannot possibly fulfill: to read every terms of service, understand every complex case of algorithmic harm, fact-check every piece of news. This shift in responsibility makes it, implicitly, our fault for lacking technological literacy or caring enough about privacy — never mind that vast amounts of money and resources are poured into obfuscating how our data is collected and used. This is the crux of the problem. Transparency is often valued as the great equalizer, a way to turn the tables on those in power and to correct the harms of technological systems. But sometimes what you need to correct abuses of power isn’t more information — it’s a redistribution of power.

Privacy Must Be Defended

This month, I reviewed Sarah E. Igo’s The Known Citizen: A History of Privacy in Modern America (Harvard, 2018) for The New Rambler.

I read Igo’s book as a history of privacy never quite fulfilled – a history of partial protections won only through sustained criticism and outrage, and then often extended disparately across society, ‘divvying up civic membership’ in ways that maintain longstanding inequalities.

(I also draw on Igo’s work to talk about privacy in Technologies of Speculation, where I argue that “to talk privacy in the data-driven society is to not demand a hermetic seal between private and public but to demand more humanly meaningful access to the ways in which bodies are datafied and manipulated at a distance.”)


Some resources on COVID surveillance

Below is a loose collection of COVID surveillance developments around the world. We see tales of unproven, hastily duct-taped contact tracing apps running headlong into predictable train wrecks in actual use; thermal cameras that don’t work; fantasies of drone and robot surveillance; and almost comically harmful renditions of workplace surveillance & exam proctoring.

It is a partial, eclectic collection of whatever I spotted between early March & early May (updates potentially to come), but some folks have found it useful so I am putting it up here. Disclaimer: the notes are often messy & full of my initial, personal views. Any questions / errors / concerns, let me know at sun_ha [at] sfu.ca!


The Futures of Anticipatory Reason

Coming out very soon at Security Dialogue is a piece I wrote together with Piotr Szpunar, whose book Homegrown: Identity and Difference in the American War on Terror came out last year with NYU Press – so both of us have been looking closely at current developments in surveillance, counter-terrorism and the demand to predict. In the article, we argue that anticipatory security practices (just one part of the even broader current obsession with prediction) invoke the future to open up wiggle room for unorthodox, uncertain and otherwise problematic claims about people. This gap, which we call the ‘epistemic black market’, is very useful for the flexibility it affords security practices – flexibility that is typically used to reinforce longstanding biases and power relations, exemplified by the continuing insistence on the figure of the brown, Muslim terrorist.

You can find the pre-proofread version on this site here.


Abstract:

This article examines invocations of the future in contemporary security discourse and practice. This future constitutes not a temporal zone of events to come, or a horizon of concrete visions for tomorrow, but an indefinite source of contingency and speculation. The ongoing proliferation of predictive, pre-emptive and otherwise anticipatory security practices strategically utilises the future to circulate the kinds of truths, beliefs and claims that might otherwise be difficult to legitimise. The article synthesises critical security studies with broader humanistic thought on the future, with a focus on sting operations in recent US counter-terrorism practice. It argues that the future today functions as an ‘epistemic black market’: a zone of tolerated unorthodoxy where the boundaries defining proper truth-claims become porous and flexible. Importantly, this epistemic flexibility is often leveraged towards a certain conservatism, where familiar relations of state control are reconfirmed and expanded upon. This conceptualisation of the future has important implications for standards of truth and justice, as well as public imaginations of security practices, in a period of increasingly pre-emptive and anticipatory securitisation.

Presentation @ Digital Existence II: Precarious Media Life

Later this month I will be presenting at Digital Existence II: Precarious Media Life, at the Sigtuna Foundation, Sweden, organised by Amanda Lagerkvist via the DIGMEX Research Network (of which I am a part) and the Nordic Network for the Study of Media and Religion. The abstract for my part of the show:


On the terror of becoming known

Today, we keenly feel the terror of becoming known: of being predicted and determined by data-driven surveillance systems. The webs of significance which sustain us also produce persistent vulnerability to becoming known by things other than ourselves. From the efforts to predict ‘Lone Wolf’ terrorists through comprehensive personal communications surveillance, to pilot programs for calculating insurance premiums by monitoring daily behaviour, the expressed fear is often of misidentification and misunderstanding. Yet the more general root of this anxiety is not error or falsehood, but a highly entrenched moralisation of knowing. Digital technologies are the newest frontier for the reprise of old Enlightenment dreams, wherein the subject has a duty to know and technological inventions are an ineluctable force for better knowledge. This nexus demands subjects’ constant vulnerability to producing data and being socially determined by it. In turn, subjects turn to what Foucault called illegalisms[1]: forms of complaint, compromise, obfuscation, and other everyday efforts to mitigate the violence of becoming known. The presentation weaves this normative argument together with two kinds of grounding material: (1) episodes in becoming-known drawn from original research into American state- and self-surveillance, and (2) select works in moral philosophy and technology criticism.[2]


[1] Foucault, M., 2015. The Punitive Society: Lectures at the Collège de France, 1972–1973. B. E. Harcourt, ed. New York: Palgrave Macmillan.

[2] E.g. Jasanoff, S., 2016. The Ethics of Invention: Technology and the Human Future. New York: W.W. Norton & Co.; Vallor, S., 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press; Winner, L., 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.

Criticising Surveillance and Surveillance Critique

New article now available open access @ Surveillance & Society.

Abstract:

The current debate on surveillance, both academic and public, is constantly tempted towards a ‘negative’ criticism of present surveillance systems. In contrast, a ‘positive’ critique would be one which seeks to present alternative ways of thinking, evaluating, and even undertaking surveillance. Surveillance discourse today propagates a host of normative claims about what is admissible as true, probable, efficient – based upon which it cannot fail to justify its own expansion. A positive critique questions and subverts this epistemological foundation. It argues that surveillance must be held accountable in terms other than those of its own making. The objective is an open debate not only about ‘surveillance or not’, but also about the possibility of ‘another surveillance’.

To demonstrate the necessity of this shift, I first examine two existing frames of criticism. Privacy and humanism (appeals to human rights, freedoms and decency) are necessary but insufficient tools for positive critique. They implicitly accept surveillance’s bargain of trade-offs: the benefit of security measured against the cost of rights. To demonstrate paths towards positive critique, I analyse risk and security: two load-bearing concepts that hold up existing rationalisations of surveillance. They are the ‘openings’ for reforming those evaluative paradigms and rigged bargains on offer today.

Lecture @ Copenhagen Business School

I was recently at Copenhagen Business School, courtesy of Mikkel Flyverbom, to discuss the book-in-progress. An earlier version of the talk, given at MIT, is online in podcast form here.

An excerpt from my notes:

“So we have two episodes, two ongoing episodes. On one hand, you have the state and its technological system, designed for bulk collection at massive scales, and energised by the moral and political injunction towards ‘national security’ – all of this leaked through Edward Snowden. On the other hand, you have the popularisation of self-tracking devices, a fresh addition to the growing forms of constant care and management required of the employable and productive subject, Silicon Valley being its epicentre.

These are part of a wider penumbra of practices: algorithmic predictions, powered by Bayesian inference and artificial neural nets, corporate data-mining under the moniker of ‘big’ data… now, by no means are they the same thing, or governed by some central force that perpetuates them. But as they pop up around every street corner, there are certain tendencies that start to characterise ‘data-driven’ as a mode of thinking and decision-making.

The tendency I focus on here is the effort to render things known, predictable, calculable – and how pursuing that hunger entails, in fact, many close encounters with uncertainty and the unknown.

Here surveillance is not reducible to questions of security and privacy. It is a scene for ongoing conflicts over what counts as knowledge, who or what gets the authority to declare what you are, what we consider ‘good enough’ evidence to watch people, to change our diet, to arrest them. What we’re seeing is a renewed effort at valorising a certain project of objective knowledge, of factual certainty, of capturing the viscera of life into bits, of producing the right number that tells us what to do.”


Data Epistemologies – Dissertation online

Data Epistemologies: Surveillance and Uncertainty, my dissertation at the University of Pennsylvania, is now online for public access here. It is, in many ways, a working draft for my current book project.

Abstract:

Data Epistemologies studies the changing ways in which ‘knowledge’ is defined, promised, problematised and legitimated vis-à-vis the advent of digital, ‘big’ data surveillance technologies in early twenty-first century America. As part of the period’s fascination with ‘new’ media and ‘big’ data, such technologies intersect ambitious claims to better knowledge with a problematisation of uncertainty. This entanglement, I argue, results in contextual reconfigurations of what ‘counts’ as knowledge and who (or what) is granted authority to produce it – from proving that indiscriminate domestic surveillance prevents terrorist attacks, to arguing that machinic sensors can know us better than we can ever know ourselves.

The present work focuses on two empirical cases. The first is the ‘Snowden Affair’ (2013-Present): the public controversy unleashed through the leakage of vast quantities of secret material on the electronic surveillance practices of the U.S. government. The second is the ‘Quantified Self’ (2007-Present), a name which describes both an international community of experimenters and the wider industry built up around the use of data-driven surveillance technology for self-tracking every possible aspect of the individual ‘self’. By triangulating media coverage, connoisseur communities, advertising discourse and leaked material, I examine how surveillance technologies were presented for public debate and speculation.

This dissertation is thus a critical diagnosis of the contemporary faith in ‘raw’ data, sensing machines and algorithmic decision-making, and of their public promotion as the next great leap towards objective knowledge. Surveillance is not only a means of totalitarian control or a technology for objective knowledge, but a collective fantasy that seeks to mobilise public support for new epistemic systems. Surveillance, as part of a broader enthusiasm for ‘data-driven’ societies, extends the old modern project whereby the human subject – its habits, its affects, its actions – becomes the ingredient, the raw material, the object, the target for the production of truths and judgments about it by things other than itself.

Colloquium @ MIT

On September 15, I will be giving a talk at MIT’s CMS/W department, titled ‘Knowledge’s Allure: Surveillance and Uncertainty’. I will be covering some of the material I am remoulding from dissertation to book. I think a podcast version will be uploaded online some time afterwards.

For the physical event, it’s at 5pm in room 3-133. More info here.
