Some resources on COVID surveillance

Below is a loose collection of COVID surveillance developments around the world. We see tales of unproven, hastily duct-taped contact tracing apps that run headlong into predictable train wrecks in actual use cases; thermal cameras that don’t work; fantasies of drone and robot surveillance; and almost comically harmful renditions of workplace surveillance & exam proctoring.

It is a partial, eclectic collection of whatever I spotted between early March & early May (updates potentially to come), but some folks have found it useful so I am putting it up here. Disclaimer that the notes are often going to be messy & full of my initial, personal views. Any questions / errors / concerns let me know at sun_ha [at] sfu.ca!

May 27: General updates + new details on Middle East courtesy of Laya Behbahani

May 28: A couple snippets; Ctrl+F the latest date (e.g. “May 28”) to find new entries.

June 11: A few additional updates on contact tracing apps across UK, France & Australia.

June 16: A few more entries on how contact tracing apps are faring after deployment.

June 24: More updates on the lifecycle of contact tracing apps; scholars’ critique of surveillance / civil rights implications; and Brown’s controversial plans to surveil its students and employees in the fall.

 

 

Articles from technology researchers on COVID surveillance

An April 8 ACLU white paper by Jay Stanley & Jennifer Stisa Granick warns that “simplistic understandings of how technology works will lead to investments that do little good, or are actually counterproductive”. They assess some popular proposed uses systematically and point out key questions.

Susan Landau writes for Lawfare that it is an efficacy question, and on that point the proposed measures are often either failing or failing to provide adequate answers. “If a privacy- and civil liberties-infringing program isn’t efficacious, then there is no reason to consider it further.” She notes that cell phone GPS based tracking systems are unlikely to be worthwhile, since they are not accurate enough & often fail indoors & thus are unsuited to the 2m proximity problem.

In a US Senate Committee hearing on big data & COVID, privacy/law scholar Ryan Calo advises ‘humility and caution’. Calo also warns against mission creep: “there will be measures that are appropriate in this context, but not beyond it” – and that, “to paraphrase the late Justice Robert Jackson, a problem with emergency powers is that they tend to kindle emergencies.”

Calo later joins Ashkan Soltani & Carl Bergstrom for Brookings with the same message: CT apps (e.g. Apple/Google) involve issues of false positives & false negatives with corresponding social/psychological harms; access / uptake difficulties; privacy & security issues, such as correlating BT tags to people via stationary cameras; and the normalisation problem in which “these voluntary surveillance technologies will effectively become compulsory for any public and social engagement.”

U of Ottawa researcher Teresa Scassa counsels caution for Canada on data surveillance, arguing that individual tracking is ‘inherently privacy-invasive’.

Scholars, including data ethics researcher Ben Green, write directly disputing the privacy-health tradeoff: “rather than privacy being an inhibitor of public health (or vice versa), our eroded privacy stems from the same exploitative logics that undergird our inadequate capacity to fund and provide public health.”

Harvard’s Berkman Centre’s Bietti & Cobbe caution against “the normalisation of surveillance to a point of no return” – characterised not as the greedy advance of the powerful state, but rather as an eager ‘devolution of public power’ by ‘hollowed out governments’ to Big Tech.

A Brennan Center piece directly invokes post-9/11 surveillance programs as an example of pernicious and ineffective systems outlasting the crisis.

Andrejevic & Selwyn critique the fantasy of technological solutionism: could we use AI to invent the vaccine, blockchain to constrain the spread, could all of us simply WFH forever? Of course, the real impact is made through often faulty, wonky apps for data extraction, and through the repurposing of surveillance techniques for unprecedented populational control.

Naomi Klein interprets it as a ‘pandemic shock doctrine’. What we’re getting, she tells us, is disaster capitalism’s free market ‘remedies’ exploiting the crisis of inequality. The response is to fund, save, promote, and justify: Big Tech will save us through pointless and wonky surveillance tools; airline companies will save us by, uh, not going bankrupt; and so on. Klein thus notes we will do here what we did for the banks in 2008.

In early May, Naomi Klein writes for The Intercept of the ‘screen new deal’ for the pandemic shock doctrine: Cuomo’s handshake with Eric Schmidt & the Gates Foundation for New York as a technologist’s experimental ground zero. And of course it is one where every bad idea they’d been pushing gets accelerated past proper judgment.

Germany-based AlgorithmWatch cautions that COVID is fundamentally ‘not a technological problem’, and that the ‘rush to digital surveillance’ risks new forms of discrimination and other abuses.

The Ada Lovelace Institute (independent, AI focus) reviews recently mooted proposals and concludes there is ‘an absence of evidence’ regarding efficacy and harm mitigation.


CIGI notes – regarding the US’s sweeping travel bans, but also policies like “treating Chinese state journalists as official representatives of the Chinese state and intelligence apparatus” (to which China retaliated with expulsions) – that there is a broader concern about abuse of authority & human rights: “Epidemic response is a rare circumstance in which governments make sweeping decisions about the collective good and take unchecked action, often in contravention of individual human rights.”

UT Austin researchers Katie Joseff & Sam Woolley warn that COVID-related location tracking is a golden opportunity for what their team calls ‘geopropaganda’: “[t]he use of location data by campaigns, super PACs, lobbyists, and other political groups to influence political discussions and decisions.” Hong Kong, of course, is proving to be ground zero.

June 24: Joven Narwal at the University of British Columbia argues that police must not have access to CT app data, citing past instances in which police have tried to draw on existing public health data.

 

 

Contact Tracing (CT) Apps & Other Software/Databases

MIT Tech Review now has a great database of government CT apps around the world.

Even the United Nations has come out with not a CT app but a ‘social distancing’ app that, well, doesn’t work: the 1point5 app is supposed to alert when other BT devices enter range, but it often fails to do so.

 

Google-Apple Contact Tracing System

A key player has been the international research collaboration behind the open-source DP-3T (Decentralised Privacy-Preserving Proximity Tracing) protocol, based on Bluetooth beacons broadcasting randomised numbers that record proximity only, and which can then be used retroactively to trace the contacts of a positive case.

DP-3T is the clear inspiration for the later Google-Apple proposal. Apr 10 – Google & Apple announce a collaborative effort on a Bluetooth-based contact tracing system. This means, first, APIs in May to make third-party apps interoperable across the two operating systems, but ultimately an OS-level, no-download functionality. They promise privacy and transparency, with the system always being user opt-in.

The basic functionality of the Bluetooth system is essentially identical to DP-3T. Bluetooth beacons exchange codes, and upon a positive case’s upload of 14 days of keys, contacts receive a phone notification with info. Google promises that ‘explicit user consent’ is required, that the system ‘doesn’t collect’ PII or location data, that the list of contacts ‘never leaves the phone’, that positives aren’t identified, and that the system as a whole will only be used for COVID contact tracing. But of course, you could then have states and employers require that users use their particular third-party app enabled by the API.
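
To make the decentralised design concrete, here is a minimal sketch (in Python) of the rolling-identifier idea behind DP-3T and the Google/Apple proposal: phones broadcast short-lived pseudonymous identifiers derived from a daily key, and matching happens entirely on the contact’s own device once a positive case’s daily keys are published. This is illustrative only – the names and the SHA-256 derivation are my stand-ins, not the actual specification (which uses HKDF/AES over ~10-minute windows).

```python
# Illustrative sketch of decentralised Bluetooth exposure notification
# (in the spirit of DP-3T / the Google-Apple proposal; NOT the real spec).
import hashlib
import os

def rolling_ids(daily_key: bytes, intervals: int = 144) -> list:
    """Derive short-lived broadcast identifiers from a daily key.
    Real systems use HKDF/AES over ~10-minute windows; SHA-256 is a stand-in."""
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

class Phone:
    def __init__(self):
        self.daily_keys = [os.urandom(16) for _ in range(14)]  # 14 days of keys
        self.heard = set()  # identifiers observed from nearby phones

    def broadcast_today(self) -> list:
        return rolling_ids(self.daily_keys[-1])

    def check_exposure(self, published_daily_keys: list) -> bool:
        """Matching happens locally: re-derive the positive case's identifiers
        and intersect them with what this phone actually heard."""
        for key in published_daily_keys:
            if self.heard.intersection(rolling_ids(key)):
                return True
        return False

# Usage: Alice and Bob are near each other; Alice later tests positive
alice, bob = Phone(), Phone()
bob.heard.update(alice.broadcast_today())   # Bluetooth encounter
published = alice.daily_keys                # Alice uploads her daily keys
print(bob.check_exposure(published))        # True -- Bob is notified locally
```

The point of the sketch is that no central server ever learns who met whom; it only relays the published keys of confirmed cases.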


 

Immediately, Jason Bay, product lead for Singapore’s TraceTogether, put out a clear warning: Bluetooth tracing cannot replace manual tracing, only supplement, and any optimism otherwise is “an exercise in hubris”.

Bay notes that TT worked insofar as it worked with the public health authorities from day 1, including shadowing manual tracers at work. Bay argues that human-in-the-loop design and human fronting are crucial to fight type 1 / type 2 errors and to introduce judicious judgment into how each contact should be traced. He cites the example of the Washington choir super-spreader event, where 2m proximity tracing would have flagged nothing.

 

Lawmakers in the US are beginning to join conversations around the risks: “without a national privacy law, this is a black hole.” (Anna G. Eshoo, D-Menlo Park)

On 21 April, the French government requested modifications so that the app can work in the background on iPhones (currently the Bluetooth functions it requires cannot run in the background), but it’s not expected that Apple will agree.

For similar reasons, the UK NHS rejects the Apple-Google model in favour of an option where the matching would occur on a centralised server, citing analytics benefits, & wants the app to work in the background.

 

There are generally applicable concerns about the actual efficacy of any digital contact tracing app: adoption barriers, type 1 & 2 error rates, public behaviour change (e.g. false confidence/fear).
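
One way to see why adoption barriers dominate the efficacy question: a contact can only be detected if both parties run the app, so the share of contacts the system can even see scales roughly with the square of the adoption rate (assuming, simplistically, that installs are independent). A back-of-envelope sketch with illustrative numbers:

```python
# Back-of-envelope: fraction of contacts visible to a CT app at a given adoption rate.
# Assumes installs are independent across the two parties (a simplification).
for adoption in (0.02, 0.20, 0.40, 0.60):
    visible = adoption ** 2
    print(f"adoption {adoption:>4.0%} -> ~{visible:.1%} of contacts detectable")
# adoption   2% -> ~0.0% of contacts detectable
# adoption  20% -> ~4.0% of contacts detectable
# adoption  40% -> ~16.0% of contacts detectable
# adoption  60% -> ~36.0% of contacts detectable
```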

Julia Angwin’s review points out the positives: the Google-Apple solution is opt-in & the keys are anonymous from the beginning, the system is mostly decentralised, and G/A themselves collect no PII. But the very lack of PII renders it vulnerable to trolling (send the whole school home, or a Russian DDOS). The keys themselves are very public, so you can, say, grab & re-broadcast them somewhere and spoof presence. All these false alerts will undermine the system. There also remains some risk of re-identification & location info exposure. There is likely to be some limited degree of location info centralisation (e.g. to limit keysharing by region). Angwin also notes that “other apps may try to grab the data”, e.g. advertisers – and this gets into what I noted earlier: the ecosystem problem, what third-party apps or government requirements or healthcare databases bring into the original G/A specs.

 

China & Asia

China: NYT reports on the Chinese case of Alipay Health Code, which users are required to install and which notifies them of their quarantine status; they then show it to officials to gain free movement across thresholds. Once installed & agreed to, a “reportInfoAndLocationToPolice” function communicates with the server.

 

South Korea: Widely known for its aggressive contact tracing, South Korea’s system is primarily dependent on traditional CT rather than app use. We know that each and every confirmed case is followed up through interviews, where in most cases individuals voluntarily surrender certain data for this purpose. Credit card use in particular allows a detailed mapping of their movements. This info is then typically revealed to the public – right down to “X convenience store” or “Y clothing store” – which has also prompted predictable online gossiping.

Mar 7 sees a new government app for ~32,400 self-quarantine subjects. Using GPS, it alerts users if they exit the quarantine area (or if they shut off GPS) – and police claim the right to force noncompliant persons to return. But app installation was not made mandatory, and a week later one region saw only 21% of quarantiners install it. And of course, leaving the smartphone at home circumvents the system entirely.
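
Mechanically, what is being described is a simple geofence: compare each GPS fix against a registered quarantine location and alert if the distance exceeds some radius, or if the fix disappears. A minimal, purely illustrative sketch (the names, radius and coordinates are my own assumptions, not the actual app’s):

```python
# Minimal sketch of a quarantine geofence check (illustrative, not the actual app).
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_quarantine(home, fix, radius_m=100):
    """Return an alert string if the phone has no GPS fix or leaves the geofence."""
    if fix is None:                       # GPS turned off / unavailable
        return "ALERT: no GPS fix - notify case worker"
    if haversine_m(*home, *fix) > radius_m:
        return "ALERT: outside quarantine zone - notify case worker"
    return "ok"

home = (37.5665, 126.9780)                # registered quarantine address (example coords)
print(check_quarantine(home, (37.5667, 126.9782)))  # ok (a few dozen metres away)
print(check_quarantine(home, (37.5800, 126.9780)))  # ALERT: outside quarantine zone
print(check_quarantine(home, None))                 # ALERT: no GPS fix
```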

A key feature in SK’s use of contact tracing is that each quarantining individual is assigned a government worker for follow-up: not only regular calls but, for instance, location data from the CT app showing that the individual has left home immediately alerts the case worker. Some info here (Korean).

 

Singapore: Like South Korea, Singapore has publicly reported personal details for each infected person, e.g. below. A ‘BlueTrace’ protocol developed by a government team is used to log encounter data between phones: “The collection and logging of encounter/proximity data between devices that implement BlueTrace is done in a peer-to-peer, decentralised fashion, to preserve privacy.”
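
The encounter logging is peer-to-peer, but the matching in BlueTrace-style systems is centralised: phones exchange server-issued temporary IDs that only the health authority can map back to a person, and a positive case uploads their encounter log for the authority to resolve. A minimal sketch of that idea – illustrative only, with a plain lookup table standing in for the encrypted, time-limited TempIDs the real protocol uses:

```python
# Illustrative sketch of a centralised temp-ID scheme (in the spirit of BlueTrace;
# NOT the actual protocol, which issues encrypted, time-limited TempIDs).
import os

class HealthAuthorityServer:
    def __init__(self):
        self._registry = {}  # temp_id -> phone number; only the authority can resolve

    def issue_temp_ids(self, phone_number: str, n: int = 96) -> list:
        ids = [os.urandom(8).hex() for _ in range(n)]
        for tid in ids:
            self._registry[tid] = phone_number
        return ids

    def resolve_contacts(self, uploaded_encounter_log: list) -> set:
        """Only the authority can map logged TempIDs back to phone numbers."""
        return {self._registry[tid] for tid in uploaded_encounter_log if tid in self._registry}

server = HealthAuthorityServer()
alice_ids = server.issue_temp_ids("+65-ALICE")
bob_log = [alice_ids[0]]                  # Bob's phone heard one of Alice's TempIDs
# Bob tests positive and consents to upload his encounter log:
print(server.resolve_contacts(bob_log))   # {'+65-ALICE'} -- resolved centrally, not on-device
```

This is the design choice the UK and France later adopt in their ‘centralised’ apps, and it is why re-identification is structurally easier here than in the Google/Apple model sketched above: the authority holds the mapping.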

On May 1, a tech editor at a major Singapore newspaper calls for making the app mandatory by law, lamenting that only 1.1m installs are far short of the ~3.2m minimum (3/4 of pop).

 

India: One Indian state is branding people’s hands with quarantine stamps. The Bangalore Mirror reports that Karnataka has begun to require ‘hourly selfies’ sent to the government between 7am and 10pm using an app, with noncompliance making one liable for ‘mass quarantine’. The app, “Quarantine Watch”, appears to demand personal information and symptoms, and to have a photo submission system (that many users report simply does not work).


 

United States

Mar 13: Trump makes up on the fly a nonexistent ‘national scale coronavirus site’ from Google during a news conference. What was actually on the way was a triage tool made by Alphabet subsidiary Verily, available that weekend in a pilot for Santa Clara & San Mateo. “Project Baseline” was more or less a screening survey based on Google Forms.

Verily’s terms of service show the tool is not HIPAA-covered & that your data may be shared with parties “including but not limited to” Google and Salesforce. In contrast, similar functionalities have already been available from, say, Portland-based Bright.md without the invasive data collection.

Apple has since created a similar COVID screening tool.

Palantir is ‘working with’ CDC to provide modeling services. By Apr 21, we know that Palantir’s been granted the federal contract for a major aspect of ‘HHS Protect Now’, around their usual expertise in data analytics tools.

US states are actively working on CT apps; Utah has enlisted an app developer (Twenty) to create ‘Healthy Together’. It promises Bluetooth and location-based matching as well as a basic symptom screener & the ability to receive test results. There is a clear push to pump money in and give out lucrative contracts for tech platforms to enable CT; a letter from ‘health experts’ asks Congress for $46.5b towards CT, including 180,000 workers to conduct interviews.

The degree to which CT apps have been successful in terms of adoption, or of actually achieving contact traces, is unclear. Officials say Utah’s Healthy Together app has not, after a month, led to any actual CT work; the highest known rate of participation (i.e. installs) remains South Dakota’s (Care19, 2%, with a single positive case having used it).

North Dakota also uses Care19, which collects location data and then sends a random ID number & the phone’s advertising ID to Foursquare, and to Bugfender (a software bug manager), which then sends data on to Google.

June 16: after early enthusiasm and rollout in some states during April and May, there is now some evidence that state governments are not so impressed. States like California have declined to actively pursue CT solutions. The American public do not appear to be very enthusiastic, either, with one survey reporting that 71% would rather not use such an app.

 

Canada

Alberta Health Services deployed a screening tool early on.

Alberta also deploys a CT app, using a version of Singapore’s TT – ABTraceTogether. Data is collected via Bluetooth temporary IDs with 21 days of local storage; Alberta Health uses anonymised data for analytics – no location data – and promises encrypted IDs that, even upon decryption by health services, don’t reveal identity.

 

UK & Europe

The NHS has a symptom ‘survey’ that is more about helping the NHS collect data and plan. It appears to require DOB, postcode, and household composition, and to have that data retained for 8 years (?!) and shared across government departments, ‘other organisations’ and ‘research bodies’.

On the 26th, the Economist reports that Palantir is also ‘teaming up’ with the NHS, though its exact contribution is unclear. (One assumes that it’s Palantir’s bread and butter of database consolidation, visualisation & search.) On Mar 28, the NHS reveals in a blog post that, as expected, it’s the existing Foundry system for the front-end, & lists other partners like MS & Google.

The UK deploys an NHS CT app in early May, developed by NHSX (NHS digital) with VMware Inc (Palo Alto) & Zühlke (Swiss). On 4 May, the UK announced an NHS CT app pilot for the Isle of Wight using, indeed, a more ‘centralised’ approach, with plans to record location data. Bluetooth needs to be on at all times, but it seems the app can work in the background, with the handshakes sent to a UK-based server for the matching process.

Buzzfeed reports complaints of device incompatibility, battery drain, and poor UX causing confusion – as well as clear technical failures in picking up BT signals or maintaining contact with hundreds of devices. Not only that, the app appears non-GDPR compliant; e.g. the in-app links to the privacy policy ironically use Google Analytics, which would allow ad targeting.

June 11: UK is having second thoughts, with a second trial of the NHS CT app postponed and a return to the Google/Apple model being considered. A preprint study presents survey data indicating 60.3% willingness to install CT apps in the UK.

June 24: BBC provides a broad chronological overview of the UK CT app and its discontents. It notes a chronic pattern of missed deadlines, second thoughts around design features & privacy concerns, and periods of radio silence from the UK government.

 

May 28: France is set to roll out their CT app “StopCovid”. The bluetooth matching is intended to supplement traditional CT work. Like the UK solution, it differs from the Google-Apple design by centralising the matching process. This makes it theoretically far easier to re-identify persons on the basis of anonymised information.

June 11: France’s StopCovid reports uptake levels of around 2% of population. The influential Oxford study suggests that CT apps can be effective in reducing infection levels at any level of uptake, but ~60% has generally been mooted as the required uptake level in order to open up while keeping infection levels at lockdown rates.

 

A Polish app is described as persistently accessing your location and asking for regular selfie updates, and of course it was a rushed job ‘prone to failure’. Many downloaded it voluntarily.

 

June 16: Norway’s Smittestopp has been suspended and ordered to delete existing data. As in Bahrain, Kuwait and elsewhere, the key concern is that collection of GPS location data as well as Bluetooth signals means a far more intrusive level of knowledge around individual movements.

 

Australia & New Zealand

In late April we have the Australian COVIDSafe app deployed, with 1m installations. Not open source; no location data, only Bluetooth. The foreground question is handled poorly: the app tells you to “keep the app running”, but a government official’s answer on whether you can minimise it was “It’s unclear at this stage”. People trying to figure it out aren’t sure either (e.g. does it go into Apple’s overflow area? What about Apple’s low power mode?).

By June 11, Australia reports ~6.2m users (in a country of ~25m). The government had not regularly released clear, up-to-date data on app downloads; an independent initiative collected existing info & estimates until May 24, by which point numbers had settled around 6m.

Wellington, NZ introduces the Rippl app, coinciding with a gradual drop in alert levels. Rippl is a voluntary check-in app and claims zero location tracking whatsoever; people simply scan QR codes posted in shops to check in, with paper sign-in available. A Rippl worker confirms zero collection, so follow-up on alerts requires voluntary contact. (Some of the side effects are obvious: an Auckland woman was hit on by a Subway worker who used the contact info she had entered for CT.)

 

Middle East

Israel: On Mar 16 Netanyahu announces that he has “authorized the country’s internal security agency to tap into a vast and previously undisclosed trove of cellphone data to retrace the movements of people who have contracted the coronavirus and identify others who should be quarantined because their paths crossed.” Here again, geolocation data systems built for commercial uses and legislated for antiterrorism purposes become reappropriated for another end. Where Israel has already extensively developed and deployed surveillance techniques against Palestinians, including drones, FR, and algorithmic profiling, this sets a precedent for use on Israel’s non-Palestinian citizens. Similar government use of telecom data to track movement has been announced by South Africa.

Relatedly, we know that an Israeli firm, Cellebrite, which used to sell jailbreaking software to police, now supplies similar tech for extracting contacts & location data.

 

Qatar: April saw the release of the EHTERAZ CT app, including functions for ‘following up on’ quarantiners; alerting users of contact with positive cases; and an Alipay (China)-like colour-coded scheme for positive/negative status.

Later, an Amnesty investigation shows Qatar’s EHTERAZ CT app had a key security flaw allowing access to users’ personal information, including location data.

 

Saudi Arabia: The ‘Tawakkalna’ app arrives in May, designed to manage movement restrictions rather than directly surveil for COVID. The app is reported to work by allocating “four hours per week” to people that they can use for essential trips.

 

June 16: Bahrain and Kuwait’s CT apps have been flagged by Amnesty International as excessively intrusive. They feed location data to a central server, meaning that highly accurate mapping of individual movements can very easily be performed.

 

 

‘Immunity Passports’

Apr 10 – Fauci notes that some kind of immunity IDs / cards for Americans is on the cards; the UK & Italy are considering the same. UK startup Onfido has discussed an ‘immunity passport’ with the US government, claiming this personally identifiable system can be scaled up rapidly. The initial effort has come from German researchers at the Helmholtz Centre for Infection Research, involving mailing out antibody tests en masse and then providing some form of immunity card. NZ is considering its own Bluetooth ‘CovidCard’, no doubt inspired by Singapore & US moves.

On May 4, we spot a live DOD contract, $25m, calling for a “Wearable Diagnostic for Detection of COVID-19 Infection” that would be demonstrated within 9 months and then be deployable at scale.

 

 

Drones & Robots

US: On Apr 3, a drone was spotted in Manhattan urging social distancing, but CBS describes it as run by a ‘volunteer force’. Despite concerns raised by the Gothamist, there remains no clear proven case of the NYPD using drones for social distancing. Regular patrolling by cars is taking place, and $250-500 fines are being given out.

Companies are also building and marketing their own drone solutions – e.g. a Digital Trends report on ‘Draganfly’ touting drones w/ thermal scanners that can detect temperature & visually identify coughing. Perhaps the PR was effective. On April 23, we hear that Westport, Connecticut is using its drones to detect high temp & close proximity and issue verbal warnings.

The robotics company Promobot advertises a new screening robot that takes temperatures – after its earlier Feb publicity stunt where a robot asked New Yorkers about their symptoms, and was kicked out for lacking permits. It’s your standard invasive FR for age, gender, plus a (likely inaccurate) temp check tool.

 

Madrid is also using drones to monitor movement, using the speaker to ‘order people home’. Lombardy authorities are reported as ‘using cellphone location data’ to assess lockdown compliance.

The UAE has contracted ‘unmanned ground vehicles’ to be deployed at Abu Dhabi Airports for disinfection work. Oman has also announced the use of drones to detect high-temperature individuals, though details are scarce. ‘Stay at home’ drones that approach individuals outdoors and blast audio instructions have been announced in the UAE, Jordan, Bahrain, Kuwait, and Morocco as well.

Singapore is now trialling a Boston Dynamics robot equipped with a camera and loudspeaker to encourage social distancing.


 

In some cases, robots and other tech designed not specifically for COVID surveillance are being deployed for the pandemic response.

General healthcare-related robot/tech companies are donating/pitching their tools to overwhelmed hospitals: e.g. in UAE, a Softbank-backed company is offering ‘free healthcare bots’ that seem to be more for telemedicine related assistance rather than COVID surveillance in a direct sense. In the US, Google is now partnering with Mount Sinai (and others?) to send Nest cams to hospitals for patient monitoring.

The California startup Zipline has used autonomous drones to deliver medical supplies in Ghana.

 

 

Thermal Cameras & Public Space/Retail Surveillance

China has used thermal cameras in Wuhan as early as 21 January.

Numerous other countries have considered thermal screening at airports / ports of entry; e.g. Qatar claims all persons entering country are subject to thermal screening, and numerous US airports used it as early as February.

 

In US, thermal cameras have been explored by both state and corporate bodies:

“Athena Security”, Austin TX, promises ‘AI thermal cameras’ to detect fevers in crowded spaces, modelled on its existing product for gun detection. About a week later, the video surveillance research firm IPVM reports that Athena faked pretty much everything in its promo: Hikvision cameras rebranded as Athena, a photoshopped interface, and fake restaurant & customer testimonials.

Other grifters like RedSpeed USA have been advertising thermal scanner setups – just like Athena, with China-sourced hardware.

May 28: Companies like FLIR, which sell thermal cameras and sensors, have seen a rise in demand and revenues despite unanswered questions over efficacy; as experts in the Wired report note, this risks false confidence around asymptomatic persons, and similar measures have proven largely ineffective in past outbreaks.
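
The false-confidence point can be put in arithmetic: even a camera that measures skin temperature perfectly only catches people who are infectious and febrile at the moment of screening. A back-of-envelope sketch – all numbers below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: what fraction of infectious people a fever screen can catch.
# All numbers are illustrative assumptions, not measurements.
asymptomatic_or_presymptomatic = 0.45   # infectious but no symptoms at screening time
febrile_if_symptomatic = 0.60           # symptomatic people with a fever right now
camera_sensitivity = 0.90               # even granting the hardware works as advertised

caught = (1 - asymptomatic_or_presymptomatic) * febrile_if_symptomatic * camera_sensitivity
print(f"Infectious people flagged: {caught:.0%}")     # ~30%
print(f"Infectious people missed:  {1 - caught:.0%}")  # ~70%
```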

TSA is reported on May 15 to be preparing temperature scanning in airports, though details remain unclear. Some within TSA have raised concerns around logistics, accuracy, and procedures; “People with high temperatures will be turned over to officials with the Centers for Disease Control and Prevention, the administration official said.”

 

 

 

Facial Recognition & Smart tech / AI

April 28, 2020 – NBC reports that Clearview AI is “in talks with federal and state agencies” towards FR-based COVID detection. This info appears to come from Hoan Ton-That, the infamous Clearview founder with white supremacist ties, in a public bid to raise the company’s profile and fish for contracts.

There’s been a rush to collect masked faces to try and produce data for FR systems. In April, researchers published a ‘COVID-19 mask image dataset’ to GitHub, consisting of 1,200 images taken from Instagram.

 

There’s plenty of vague talk around ‘using AI’ to improve COVID surveillance, but it is often unclear what exactly is the AI component. We might assume, as Arvind Narayanan has shown elsewhere, that many such claims to AI are (1) not actually AI, and/or (2) not actually effective.

Sometimes we get a little more detail. Dubai Police claim to use ‘smart helmets’ with IR cameras for temperature reading, facial recognition & car plate readers for on-the-ground surveillance. This follows similar reports of smart helmets in use across Shenzhen, Chengdu & Shanghai – though actual efficacy remains unknown.

 

May 28: WaPo reports that data streams from wearables like Fitbit might be useful for early detection of flu-like symptoms. This is an old chestnut in the decade-plus effort to show the utility of wearable tech (e.g. occasional news stories about individuals catching heart attacks thanks to Fitbit). There are still no peer-reviewed studies showing clear, replicable results, of course.
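
For what it’s worth, the approach being reported generally rests on a simple idea: resting heart rate drifting above a person’s own baseline can precede self-reported illness. A minimal, purely illustrative sketch of that kind of baseline-deviation flag (the threshold and data are invented):

```python
# Illustrative baseline-deviation check on resting heart rate (RHR), the kind of
# signal the wearable studies describe; thresholds here are arbitrary assumptions.
from statistics import mean, stdev

def flag_anomaly(baseline_rhr, today_rhr, z_threshold=2.0):
    """Flag today's RHR if it sits more than z_threshold SDs above the personal baseline."""
    mu, sigma = mean(baseline_rhr), stdev(baseline_rhr)
    return sigma > 0 and (today_rhr - mu) / sigma > z_threshold

baseline = [58, 60, 59, 61, 57, 60, 59, 58, 60, 59]   # a couple of weeks of normal nights
print(flag_anomaly(baseline, 61))   # False -- within normal variation
print(flag_anomaly(baseline, 68))   # True  -- elevated vs. personal baseline
```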

 

 

Workplace & School Surveillance

Workplaces are resorting to always-on webcams and other measures to try and maintain control as WFH is implemented. Some are exceptionally draconian, including keylogging.

PwC is getting in on the act, offering its own tracing tool that would track employees. It combines Bluetooth with its own ‘special sauce’ of looking at BT/WiFi signals within each room to reduce inaccurate pings (e.g. people on the other side of a wall). They tout huge interest from clients – and privacy provisions full of gaping holes.
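
The ‘special sauce’ being described – using radio signal strength to judge whether two devices are genuinely within a couple of metres – typically relies on a log-distance path-loss estimate, which is exactly why walls and bodies confound it. A hedged sketch of that standard model, with made-up calibration constants (not PwC’s actual method):

```python
# Rough RSSI -> distance estimate via the log-distance path-loss model.
# Calibration constants are invented for illustration; real values vary per device,
# and walls/bodies attenuate signal in ways this model cannot see.
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """tx_power_dbm is the expected RSSI at 1 metre; path_loss_exp ~2 in free space."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def close_contact(rssi_dbm, threshold_m=2.0):
    return estimate_distance_m(rssi_dbm) <= threshold_m

print(round(estimate_distance_m(-59), 1))   # 1.0 m at the calibration point
print(round(estimate_distance_m(-65), 1))   # ~2.0 m
print(close_contact(-75))                   # False -- but a wall between desks can
                                            # produce the same attenuation as distance
```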

Companies are also exploring ‘smart cameras’ to check for masks and social distancing, e.g. Motorola, Camio (San Mateo), often targeting factories and offices. Proxxi (Canada) sells wearable bracelets instead that vibrate upon proximity; Ford and others are trying them too, and sometimes the idea is to combine it with CT.

 

Schools are joining in: online proctoring services, where globally outsourced workers monitor students by webcam while they take tests & score their reliability as an anti-cheating measure, have exploded in popularity & inquiries with COVID. Tilburg U’s rector defends the use of Proctorio: “Digital surveillance is indispensable to maintain the value of the diploma”, he says – and anyway, don’t worry, “we only use the most essential tools”. The lazy whataboutism trotted out to distract from the issue: hey, physical exams can be even more invasive, someone could look in your bag!

Similarly, Examity requires students to hand over personal info and provide remote access to the proctor; it monitors students’ keystrokes; and of course there is the easy translation of unscientific bullshit hunches into ‘common sense’ features: “closely watch the face of the student to see if there is something suspicious, like suspicious eye movements”. Schools are now training their students to swallow their concerns and critical thinking and to accept bad technology as a price of admission. One student says: “When you’re a student, you have no choice […] This is school in 2020.”

Wilfrid Laurier’s maths department has forced students to buy webcams – during, of course, a webcam supply shortage – to facilitate proctoring, arguing that “there are no alternatives”.

June 5: Proximity detectors are planned for deployment in an Ohio school district for COVID.

June 24: As many universities seek ways to open up for the fall semester and keep revenue flowing, they are turning to a wide array of surveillance technologies to try and encourage students – and coerce employees – to show up on campus. Brown is partnering with Alphabet-owned Verily and its Healthy at Work program, where all students & employees use apps to log symptoms, individuals are randomly assigned to be tested, and their data is collected for further development.

 

Beyond the workplace, COVID is also becoming an opportunity to Trojan horse new surveillance measures. E.g. real estate & proptech, already a growing field post-2008. A Boston Review article mentions Flock Safety; Bioconnect and Stonelock, which use biometrics/FR to admit known individuals, now with added bullshit about helping prevent COVID through such access control; and Yardi, a ‘virtual landlord’ service that helps manage large rental populations, which now boasts a remarkably unnecessary Trojan horse:

In response to COVID-19 the company is offering communication services meant to allow families to check on loved ones in nursing home facilities while remaining socially distanced. Using its services, the company states, “family members can view the latest records about their residents, including vital signs, diagnoses and medication orders. This data is shared from Yardi EHR [electronic health records] in real time.” Here again we see a window into the expansive ambitions of one of the largest virtual landlord firms: extending proptech’s reach into personal health data and even electronic health records.

 

 

more to come.

 

 

Art in America piece w/ Trevor Paglen

I recently spoke to Trevor Paglen – well known for works like ‘Limit Telephotography’ (2007-2012) and its images of NSA buildings and deep-sea fibreoptic cables – about surveillance, machine vision, and the changing politics of the visible / machine-readable. Full piece @ Art in America.

Much of that discussion – around the proliferation of images created by and for machines, and the exponential expansion of pathways by which surveillance, data, and capital can profitably intersect – is also taken up in my upcoming book, Technologies of Speculation (NYUP 2020). There my focus is on what happens after Snowden’s leaks – the strange symbiosis of transparency and conspiracy, the lingering unknowability of surveillance apparatuses and the terrorists they chase. It also examines the passage from the vision of the Quantified Self, where we use all these smart machines to hack ourselves and know ourselves better, to the Quantified Us/Them which plugs that data back into the circuits of surveillance capitalism.

In the piece, Paglen also discusses his recent collaboration with Kate Crawford on ImageNet Roulette, also on display at the Training Humans exhibition (Osservatorio Fondazione Prada, Milan):

“Some of my work, like that in “From ‘Apple’ to ‘Anomaly,’” asks what vision algorithms see and how they abstract images. It’s an installation of about 30,000 images taken from a widely used dataset of training images called ImageNet. Labeling images is a slippery slope: there are 20,000 categories in ImageNet, 2,000 of which are of people. There’s crazy shit in there! There are “jezebel” and “criminal” categories, which are determined solely on how people look; there are plenty of racist and misogynistic tags.

If you just want to train a neural network to distinguish between apples and oranges, you feed it a giant collection of example images. Creating a taxonomy and defining the set in a way that’s intelligible to the system is often political. Apples and oranges aren’t particularly controversial, though reducing images to tags is already horrifying enough to someone like an artist: I’m thinking of René Magritte’s Ceci n’est pas une pomme (This is Not an Apple) [1964]. Gender is even more loaded. Companies are creating gender detection algorithms. Microsoft, among others, has decided that gender is binary—man and woman. This is a serious decision that has huge political implications, just like the Trump administration’s attempt to erase nonbinary people.”

[Image: apple_treachery.jpg]

Crawford & Paglen also have a longer read on training sets, Excavating AI (also source for above image).

 

The Futures of Anticipatory Reason

Coming out very soon at Security Dialogue is a piece I worked on together with Piotr Szpunar, whose book Homegrown: Identity and Difference in the American War on Terror came out last year with NYU Press – so both of us are looking closely at current developments in surveillance, counter-terrorism and the demand to predict. In the article, we argue that anticipatory security practices (just one part of the even broader current obsession with prediction) invoke the future to open up wiggle room for unorthodox, uncertain and otherwise problematic claims about people. This gap, which we call the ‘epistemic black market’, is very useful for the flexibility it affords security practices – flexibility that is typically used to reinforce longstanding biases and power relations, exemplified by the continuing insistence on the figure of the brown, Muslim terrorist.

You can find the pre-proofread version on this site here.

 

Abstract:

This article examines invocations of the future in contemporary security discourse and practice. This future constitutes not a temporal zone of events to come, or a horizon of concrete visions for tomorrow, but an indefinite source of contingency and speculation. The ongoing proliferation of predictive, pre-emptive and otherwise anticipatory security practices strategically utilise the future to circulate the kinds of truths, beliefs, claims, that might otherwise be difficult to legitimise. The article synthesises critical security studies with broader humanistic thought on the future, with a focus on the sting operations in recent US counter-terrorism practice. It argues that the future today functions as an ‘epistemic black market’; a zone of tolerated unorthodoxy where boundaries defining proper truth-claims become porous and flexible. Importantly, this epistemic flexibility is often leveraged towards a certain conservatism, where familiar relations of state control are reconfirmed and expanded upon. This conceptualisation of the future has important implications for standards of truth and justice, as well as public imaginations of security practices, in a period of increasingly pre-emptive and anticipatory securitisation.

Smart Machines and Enlightenment Dreams (2)

In part one, I mentioned a ‘nagging suspicion’:

aren’t (1) the fantastically optimistic projections around objective data & AI, and (2) the increasingly high-profile crises of fake news, algorithmic bias, and in short, ‘bad’ machinic information, linked in some way? And aren’t both these hopes and fears rooted, perhaps, in the Enlightenment’s image of the knowing subject?

As usual, we’re caught up in two seemingly opposite fantasies. First, that the human is a biased, stupid, unreliable processor of information, and must be augmented – e.g. by the expanding industry of smart machines for self-tracking. Second, that the individual can know for themselves, they can find the truth, if only they can be more educated, ingest more information – e.g. by watching more Jordan Peterson videos.

Below are some of my still-early thoughts around what we might call the rise of personal truthmaking: an individualistic approach that says technology is going to empower people to know better than the experts, often in cynical and aggressive opposition to institutional truth – a style that we find in normative discourses around fact-checking and media literacy as well as among redpilled conspiracy theorists, and in the mainstream marketisation of smart devices as well as the public concern around the corruption of politics.

 

Smart machines

Let’s start with the relatively celebrated, mainstream instance, a frontrunner in all the latest fads in data futurism. Big data is passé; the contrarian cool is with small data, the n=1, where you measure your exercise, quantify your sleep, analyse your productivity, take pictures of your shit, get an app to listen to you having sex, to discover the unique truths about you and nobody else, and use that data to ping, nudge, gamify yourself to a better place.

Implicit here is a clear message of individual empowerment: you can know yourself in a way that the experts cannot. Take the case of Larry Smarr, whose self-tracking exploits were widely covered by mainstream media as well as self-tracking communities. Smarr made a 3D model of his gut microbiota, and tracked it in minute detail:

[Image: Smarr screencap]

This, Smarr says, helped him diagnose the onset of Crohn’s disease before the doctors could. He speaks about the limitations of the doctor-patient relationship, and how, given the limited personal attention the healthcare system can afford for your own idiosyncratic body and lifestyle, you are the one that has to take more control. Ironically, there is a moment where Kant, in his 1784 What is Enlightenment?, broaches the same theme:

It is so easy to be immature [Unmündigkeit]. If I have […] a doctor who judges my diet for me […] surely I do not need to trouble myself. I have no need to think, if only I can pay.

To be sure, Kant is no proto-anti-vaxxer. Leaving aside for a moment (though a major topic for my research) the many readings of aufklärung and its place in historicising the Enlightenment, we can glimpse in that text a deep tension between the exhortation to overcome tutelage, to have the courage to use your own understanding, and the pursuit of universally objective truth as the basis for rationalisation and reform. And it is this tension that again animates the contemporary fantasy of ubiquitous smart machines that will know you better than you know yourself, and in the process empower a knowing, rational, happy individual.

Now, it just so happens that Larry Smarr is a director at Calit2, a pioneer of supercomputing tech. He has the money, the tech savvy, the giant room to install his gut in triple-XL. But for everybody else, the promise of personal knowledge often involves a new set of dependencies. As I’ve discussed elsewhere, the selling point of many of these devices is that they will collect the kind of data that lies beyond our own sensory capabilities, such as sleep disturbances or galvanic skin response, and that they will deliver data that is objective and impartial. It’s a kind of ‘personal’ empowerment that works by empowering a new class of personalised machines to, the advertising mantra goes, ‘know us better than we know ourselves’.

The book will focus on how this particular kind of truthmaking begins with the image of the hacker-enthusiast, tracking oneself by oneself using self-made tools, and over time, scales up to the appropriation of these data production lines by insurance companies, law enforcement, and other institutions of capture and control. But here, we might ask: how does this particular dynamic resonate with other contexts of personal truthmaking?

 

Redpilling

We might recall that what’s happening with self-tracking follows a well-worn pattern in technologies of datafication. With the likes of Google and Amazon, ‘personalisation’ meant two things at the same time. We were offered the boon of personal choice and convenience, but what we also got was a personalised form of surveillance, manipulation, and social sorting. In the world of data, there’s always a fine print to the promise of the personal – and often it’s the kind of fine print that lies beyond the reach of ordinary human lives and/or the human senses.

Fast forward a few years, and personalisation is again being raised as a pernicious, antidemocratic force. This time, it’s fake news, and the idea that we’re all falling into our own filter bubbles and rabbit holes, a world of delusions curated by youtube algorithms. When Russian-manufactured Facebook content looks like this:

[Image: example of Russian-manufactured Facebook content]

we find no consistent and directly political message per se, but a more flexible and scattershot method. The aim is not to defeat a rival message in the game of public opinion and truthtelling, but to add noise to the game until it breaks down under the weight of unverifiable nonsense. It is this general erosion of established rules that allows half-baked, factually incorrect and otherwise suspect information to compete with more official accounts.

We recognise here the long, generational decline across many Western nations of public trust in institutions, which folks like Ethan Zuckerman have emphasised as the backdrop for the fake news epidemic. At the same time, as Fred Turner explains, the current disinformation epidemic is also an unintended consequence of what we thought was the best part about Internet technologies: the ability to give everyone a voice, to break down artificial gatekeepers, and allow more information to reach more people.

Consider the well-known story of how the 2015 Charleston shooter began that path with a simple online search for ‘black on white crime’ – and stumbled on a range of sources showing him an increasingly funneled branch of information around crime and race relations. In a way, he was doing exactly what we asked of the Internet and its users: consult multiple sources of information. Discover unlikely connections. Make up your own mind.

The same goes for the man who shot up a pizza restaurant because his research led him to believe Pizzagate was real. In a handwritten letter, Welch shows earnest regret about the harm he has done – because he sought to ‘help people’ and ‘end corruption that he truly felt was harming innocent lives.’

Here we find what danah boyd calls the backfire of media literacy. It’s not that these people ran away from information. The problem was that they dove into it with the confidence that they could read enough, process it properly, and come to the secret truth. Thus the meme is that you need to ‘redpill’ yourself, to see the world in an objective way, to defeat the lies of the mainstream media.


Once again, there is a certain displacement, the fine print, parallel to what we saw with self-tracking. Smart machines promise autonomous self-knowledge, but only by putting your trust in a new set of technological mediators to know you better than you know yourself. Redpilling invites individuals to do their research and figure out their own truth – but you’ll do it through a new class of mediators that help plug you into a network of alternative facts.

 

Charisma entrepreneurs

The Pizzagate shooter, we know, was an avid subscriber to Alex Jones’ Infowars. The trail of dependencies behind the promise of individual empowerment reveals shifting cultural norms around what a trustworthy, authentic, likeable source of information feels like.

America, of course, woke up to this shift in November 2016. And in the days after, the outgoing President offered a stern warning about the complicity of our new media technologies:

An explanation of climate change from a Nobel Prize-winning physicist looks exactly the same on your Facebook page as the denial of climate change by somebody on the Koch brothers’ payroll.

The assumption being, of course, that we would universally still find the Nobel a marker of unquestionable trust, and vice versa for Koch money. But what if a Harvard professorship is no longer such an unquestioned seal of guarantee, and what if being funded by oil money isn’t a death knell for your own credibility about climate change?

To describe these changes in terms of cynicism and paranoia is to capture an important part of this picture, but not all of it. We rarely pass from a world of belief to a world without, but from one set of heuristics and fantasies to another. What recent reports, such as the one on the ‘alternative influence network‘ of youtube microcelebrities, reveal is the emergence of a certain charismatic form of truth-peddling.

By charismatic, I am contrasting the more serious, institutionalised bureaucratic styles to what Weber had called ‘charismatic authority’ [charismatische Herrschaft]: that which attracts belief precisely through its appearance as an unorganised, extraordinary form of truth. It’s critical here to distinguish this charisma from some internal psychological power, as if certain people possess a magical quality to entrance others. Weber considered charisma in more or less relational terms, as an effect of others’ invested belief, and something which often undergirds more institutionalised forms of power as well. The key is to understand charisma’s self-presentation as an explicitly extra-institutional circuit, through which actors are able to promise truth and action too radical for normal process, and to claim a certain ideological purity or proximity to Truth beyond the messiness of the status quo.

We can immediately recognise how alternative influencers, elements of the far-right, etc. have sought to turn issues like anti-political correctness into a marker of such charismatic authority. And individuals like Jones become exhibits in the emerging performative styles of such charisma, from his regular Mongol horde-like forays into the mainstream to pick up notoriety, to his self-righteous masculine rage as a default emotional state.


But we should add to that a couple of slightly less obvious dimensions, ones which make clear the parallels and resonances across the different businesses that sell the fantasy of personal truthmaking.

The first is that influencers like Jones consistently claim that they are the rational ones, they are the ones that go for scientific evidence, they are the true heirs of the Enlightenment. The common refrain is: I’m not gonna tell you what to think: I just want to inform you about what’s happening, about Pizzagate, about fluoride in your water, about the vaccines, and let you make up your own mind. The reams of paper strewn about Jones’ desk, regularly waved at the camera with gusto, are markers of this seeming commitment to Reason and data – even though, in many cases, this ‘evidence’ is simply Infowars articles reprinted to testify on Infowars the show.

Alex Jones doesn’t reject the Enlightenment; he wants to own it.

Second, all this is further complicated by the commercialised structure of this charismatic truthmaking. Alex Jones isn’t just a fearless truthspeaker, but also a full-time vitamin peddler. While Jones works to obscure his exact revenue and audience numbers, his ‘side’ business of dubious supplements has grown into a major source of funding that helps support the continued production of political content. In his many infomercials seeded into the show, Jones touts products like Super Male Vitality – a mostly pointless mixture of common herbal ingredients sold at a premium price and packaged with a phallic rubber stopper.


Recently, Jones has updated his stock with products like “Happease” – “Declare war on stress and fatigue with mother nature’s ultimate weapons” – in a clear nod to the dominant, highly feminised wellness market (think Gwyneth Paltrow’s goop and its ‘Psychic Vampire Repellent’). The connection between fake news and fake pills is made clear in one of Jones’ own sales pitches:

You know, many revolutionaries rob banks, and kidnap people for funds. We promote in the free market the products we use that are about preparedness. That’s how we fund this revolution for the new world order.

Such shifts threaten to leave Obama’s earlier warning behind as a quaint reminder of older standards. For instance, exposing someone’s financial conflict of interest used to be a surefire way to destroy their credibility as a neutral, objective truthteller. But how do we adapt if that equation has changed? As Sarah Banet-Weiser has shown in Authentic, you can now sell out and be authentic, you can brand your authenticity. You can make your name as a mysterious, counter-cultural graffiti artist speaking truth to power, and then make a killing auctioning your piece at Sotheby’s, having the piece rip itself up in front of the buyer’s eyes – and they will love how real it is. In such times, can we really win the battle for reason by showing how transparent and independent our fact-checkers are?

 

Truth isn’t truth

Today, we often say truth is in crisis. The emblematic moment was the Marches for Science, which brought out outrage, but also a certain Sisyphean exasperation. Haven’t we been through this already? Surely truth is truth? Surely the correct path of history has already been established, and what must be done is to remind everyone of this?


Well, Rudy Giuliani has the answer for us: truth isn’t truth. Facts are in the eyes of the beholder, or at least, nowadays they are. Or, to be less glib: the struggle today is not simply between truth and ignorance, science and anger – a binary in which the right side goes without saying, and the wrong side is the dustbin of history screaming and wailing for the hopefully final time. Rather, it is a struggle over what kinds of authorities, what kinds of ways of talking and thinking, might count as rational, and how everybody’s trying to say the data and the technology are on their side.

It’s a twisted kind of Enlightenment, where the call to know for yourself, to use Reason, doesn’t unify us on common ground, but becomes a weapon to wield against the other side. Insisting on the restoration and revalorisation of objective journalism or faith in objective science might be tempting, and certainly an essential part of any realistic solution. But taken too far, they risk becoming just as atavistic as MAGA: a reference point cast deep enough into the mist, that it sustains us as fantasy precisely as something on the cusp of visibility and actuality. A nice dream about Making America Modern Again.

Information has always required an expansive set of emotional, imaginative, irrational investments in order to keep the engines running. What we see in self-tracking and charismatic entrepreneurs are emerging ‘disruptive’ groups that transform the ecosystem for the production and circulation of such imaginations. We might then ask: what is the notion of the good life quietly holding up, and spreading through, the futurism of smart machines or the paranoid reason of the charismatic influencer?

 

Smart Machines & Enlightenment Dreams (1)

I was at UC San Diego in March for a symposium on “Technoscience and Political Algorithms“. In April, I’ll be at the University of Toronto’s McLuhan Centre with Whitney Phillips and Selena Nemorin to talk about “New Technological Ir/rationalities” – and then at NYC’s Theorising the Web. Then in May, I’ll be back at MIT for Media-in-Transition 10.

Across all four, I’m continuing to grapple with an earlier question, or rather, a nagging suspicion: aren’t (1) the fantastically optimistic projections around objective data & AI, and (2) the increasingly high-profile crises of fake news, algorithmic bias, and in short, ‘bad’ machinic information, linked in some way? And aren’t both these hopes and fears rooted, perhaps, in the Enlightenment’s image of the knowing subject?

One popular response to the fake news epidemic was to reassert the ideals of scientific knowledge – that is, of impersonal, neutral, reassuringly objective information – hearkening back to a hodgepodge of the 19th and 20th centuries. But in its pure form, we know this is just as atavistic as MAGA: a reference point cast deep enough into the mist that it sustains us as fantasy precisely as something on the cusp of visibility and actuality. Information has always required an expansive set of emotional, imaginative, irrational investments in order to keep the engines running – which, in fact, is what is being endorsed in those protests for science.

One major aspect of this story is how the emphasis on the critical faculties of the autonomous individual lends itself to both a widespread cynicism (including the emergence of trolling as a mainstream performative style) and the atomising organisation of life as data in the world of surveillance capitalism. I’ll get to that in a future post. In this one, what follows is some preliminary notes about – not so much direct inheritance, but repetitions, rearticulations, resonances – between how contemporary technologies and Enlightenment projects theorise the production of objective truth.

1/4

Consider, for example, self-tracking technologies. Beddit, the sleep tracker, records your heart beats, respiration cycles, and other bodily movement; when you wake, it greets your consciousness with a numerical sleep score. You’re asked to consider how your sleep must have been – information produced and processed while the thinking subject was, literally, turned off. We can now track our exercise; mood swings; shitting; sex behaviour, including ‘thrusts per minute‘ (now sadly defunct). We can ask these devices to recommend, nudge, influence our social relationships, or shock us with electricity to jolt us back to work.


In the book, I discuss how these technologies are presented as empowering the knowing individual. The idea is that the unerring objectivity of smart machines can be harnessed for a heroically independent form of ‘personalisation’. But lurking in this vision is a crucial set of displacements and double binds:

  1. First, the objectivity of this data-driven knowledge is secured by displacing the subject of knowledge from the conscious self to the smart machine – a machine which communicates ceaselessly with the nonconscious elements of the human individual, from glucose levels to neural electrical activity.
  2. In other words, this interaction targets human sensation and sensibility as the next frontier of rationalisation. Here, the ideal of the knowing individual, for whom sensation is the privileged path to autonomous knowledge, intersects with the project of datafying humans through sensory data. We find here the dual pursuit of human knowledge, and knowledge about human beings.
  3. The objective is to program the individual towards a fitter, happier, more productive entity. These displacements help put together a vision of voluntary, empowering human control with a machine-driven view of the human as an amalgamation of datafied parameters that can be nudged into the optimal curve.

These dynamics recall, of course, theories of cybernetics and posthumanism – but, I would like to suggest, they also extend longstanding projects and problems of the Enlightenment: how can we establish an autonomous grounding for Reason? How can human sensation be understood not only as a path to objective knowledge, but as a basis for a rational optimisation of social systems?

2/4

In 1726, Benjamin Franklin created for himself a journal of virtues: each day, he would record his performance according to thirteen criteria including ‘temperance’ and ‘chastity’.

A page from Benjamin Franklin's virtue journal

Franklin’s journal is a well-cited event in the prehistory of tracking and datafication. What is crucial is that it is the thinking, feeling individual who is at the heart of curating and interpreting this data. In contrast, mass-market tracking technologies today seek a more automated, ‘frictionless’ design: not only is it more convenient for users, the idea is that the smart machines will thereby avoid becoming contaminated by the biases and flaws of human subjectivity. In short, ‘machines that know you better than you know yourself’.

So there is a historically very familiar proposition here – that technoscience will provide what Lorraine Daston and Peter Galison called mechanical objectivity (in the context of 19th-century scientific images). The emphasis is on neutral and accurate data untainted by human bias, which individuals might then utilise for a more rational life. With self-tracking, as part of what Mark Hansen has called ‘twenty-first century media’, this objectivity is secured through a certain displacement. Where Franklin guarantees the validity of his records through his own integrity, self-tracking promises empowering self-knowledge through a new degree of technological dependency.

3/4

Crucially, this displacement occurs through the contested object of human sensation. Where human feelings, memories and affects are on one hand demoted as unreliable and biased sources of self-knowledge, it is exactly the minutiae of human sensation and experience that smart machines plumb for new use cases, new markets.

This datafication of sensation puts a new spin on the old Enlightenment relationship between sensation and Reason. Insofar as the Enlightenment was built on a spirit of Aufklärung, of overcoming established authorities for truth, the individual’s ability to know for themselves was often predicated on a turn to sensation and experience. It was the human individual, equipped with both the senses to acquire data and the Reason to process and verify it, who would be indispensable for putting the two together.

This emphasis on sensation was fundamental to many Enlightenment projects. In France, we might think of Condillac and Helvétius, who explored bodily experience as the foundation of all ideas – variously grouped as sensationists, sentimental empiricists, etc. by scholars like Jessica Riskin. (We are, of course, simplifying some of the different ways in which they can be grouped and ungrouped, catching only the broader themes for the moment.) The senses, in the narrowly physical sense, were often connected to sensibility or sentiment, understood as an affective disposition of the soul that would lie at the basis of one’s ability to intelligently and morally process external stimuli.

For our purposes, perhaps the best example for this historical resonance is Julien Offray de La Mettrie’s L’homme machine, or ‘Man, a Machine’. Yet contrary to how the title might sound today, man as machine was not a dry creature of logic, but defined by the effort to annul the Cartesian gap. There is no soul as a thing separate from body, not because the workings of the soul are mere ephemeral illusions, but because sensation and sentiment are the crucial mechanisms that arise from physical bodies and operate what is wrongly attributed to a transcendent soul. The upshot is that the proper analysis of sensation becomes crucial to understanding, manipulating, and optimising man as a machine.

https://upload.wikimedia.org/wikipedia/commons/1/11/LaMettrie_L%27homme_machine.jpg

Indeed, the story goes that La Mettrie, an accomplished physician, developed his strong materialism after a bout of illness and this personal experience of the mental struggle caused by a weak body. In a distant echo, today, a common pattern in the personal testimonies of self-tracking enthusiasts is that illness and other bodily problems were the catalyst for their turn to tracking technologies. It would be a difficult medical problem, a long convalescence, or some other such experience of the machine breaking down that motivated them to seek a more objective and rational basis for the care of the self.

In this sense, sensation serves an important dual purpose for the Enlightenment:

  1. On one hand, there is sensation as the raw stuff of human machines – the truly empirical and rational ingredient for theorising human behaviour, in opposition to divine design as the First Cause or the intangibly transcendental soul.
  2. On the other hand, sensation and its proximate concepts are also held up as an important and even moral quality, insofar as the individual has a responsibility to cultivate their senses to grow their proficiency with Reason.

With technologies like self-tracking, human individuals’ sensory acquisition of empirical data is displaced onto machinic harvesting of ‘raw data’ – where the former is demoted to a biased, partial, and irredeemably flawed form of collection. The exercise of human Reason remains, but it is increasingly shaped by what the machines have taken out of those sensations, which arrives, once again, with the moral authority of objectivity.

4/4

But there is one last displacement to be made. The abduction of self-knowledge and self-control onto the smart machines takes on futuristic projections about reprogramming subjects. In Surveiller et Punir, Foucault mentions La Mettrie as a marker in the development of the docile body: that his materialism facilitated a “general theory of dressage, at the centre of which reigns the notion of ‘docility’, which joins the analysable body to the manipulable body.” L’homme machine was not only a model for understanding human sensation in a programmatic form, but also a model for reprogramming human behaviour.

Today, the theoretical rationalisation of the human subject in technoscientific terms is again contributing to new techniques for the government of self and others. The Quantified Self, which some early enthusiasts sought to use as a way to break away from the top-down applications of big data, is increasingly a Quantified Us, or rather, a Quantified Them, where the technology crafted to personalise big data is being scaled back up to governments and corporations.

Consider just one intersection of wearables, mind-hacking and state surveillance. Popular tracking devices like Thync have adopted neuroscience tools like EEG to enable monitoring and even direct manipulation of brainwave activity. In 2018, it was reported that the Chinese government had begun to deploy ‘mind reading’ helmets in select workplaces – in reality, fairly simple, probably EEG-based devices for detecting brainwave activity.

The devices can be fitted into the cap of a train driver. Photo: Deayea Technology

This was no surprise: already in 2012, Veritas Scientific claimed to have produced a ‘TruthWave’ helmet that could detect if the subject is lying. TruthWave has since more or less disappeared, and the use-claim was always likely to have been hyperbolic. But it was also the case that the device was partly funded by the US military. At the level of the technologies employed, the principles of data processing and analysis, and the private-public flow of funding and expertise, the ubiquitous spread of sensors and smart machines has far more in common with surveillance capitalism than its proponents might like to admit.

***

Smart machines and big data have a long historical and cultural tail, one which draws not only on cybernetics and posthumanism, but the dynamic of sensation, objectivity and social reform that was central to the project of the Enlightenment. In the popular imagination, technoscience is a line, stretching itself ever forwards in inevitable progress: but in the normative and imaginative sense, it more resembles an ouroboros, ever repeating and devouring itself. And as that cycle captures us in the seductive dream of total objectivity, new technologies for surveillance and value extraction are embedding themselves – literally – under our skin.

When you can trust nobody, trust the smart machine

I will be at AOIR in Montreal, 10-13 October to present some newer work as I look beyond the book. Below is one brief summary of ongoing investigations:


 

What is the connection between smart machines, self-tracking, and the ongoing mis/disinformation epidemic? They are part of a broader shift in the social rules of truth and trust. Emerging today is a strange alliance of objectivity, technology and the ‘personal’ – often cast in opposition to the aging bastions of institutional expertise. The fantasy of an empowered individual who ‘knows for themselves’ smuggles in a new set of dependencies on opaque and powerful technologies.

 

1.

On one hand, individuals are encouraged to know more, and to take that knowing into their own hands. Emblematic is the growth of the self-tracking industry: measure your own health and productivity, discover the unique correlations that make you tick, and take control of rationalising and optimising your life. Taglines of ‘n=1’ and ‘small data’ sloganise the vision: the intrepid, tech-savvy individual on an empowering and personal quest for self-knowledge. Implicit here is a revalorisation of the personal and experiential: you have a claim to the truth of your body in ways that the doctor cannot, despite all their learned expertise. This is territory that I go into in some detail in the book.

 

Larry Smarr with a 3D projection of his own microbiome

And so, Calit2’s Larry Smarr builds a giant 3D projection of his own microbiome – which, he claims, helped him diagnose the onset of Crohn’s disease before the doctors could.

 

But what does it mean to take control and know yourself, if this knowing happens through technologies that operate beyond the limits of the human senses? Subsidiary to the wider enthusiasm for big data, smart machines and machine learning, the value proposition of much (not all) of self-tracking tech is predicated on the promise of data-driven objectivity: the idea that the machines will know us better than we know ourselves, and correct the biases and ‘fuzziness’ of human senses, cognition, memory. And this claim to objectivity is predicated on a highly physical relationship: these smart machines live on the wrist, under the bedsheets, sometimes even in the user’s body, embedding their observations, notifications, recommendations, into the lived rhythms of everyday life. What we find is a very particular mixture of the personal and the machinic, the objective and the experiential: know yourself – through machines that know you better than you do.

 

An affidavit citing Jeannine Risley’s Fitbit data

Jeannine Risley’s Fitbit data is used to help disprove her claims of being raped by an intruder. What is called ‘self-knowledge’ becomes increasingly capable of being dissociated from the control and intentions of the ‘self’.

 

2.

Another transformative site for how we know and how we trust is that of political mis/disinformation. While the comparison is neither simple nor obvious, I am exploring the idea that they are animated by a common, broader shift towards a particular alliance of the objective, machinic and ‘personal’. In the political sphere, its current enemies are well-defined: institutional expertise, bureaucratic truthmaking and, in a piece of historical irony, liberalism as the dishonest face of a privileged elite. Here, new information technologies are leveraged towards what van Zoonen labelled ‘i-pistemology’: the embrace of personal and experiential truth in opposition to top-down and expert factmaking.

 

An example of a deceptive social media post

In such ‘deceptive’ social media postings, we find no comprehensive and consistent message per se, but a more flexible and scattershot method. The aim is not to defeat a rival message in the game of public opinion and truthtelling, but to add noise to the game until it breaks down. It is this general erosion of established rules that allows half-baked, factually incorrect and otherwise suspect information to compete with more official accounts.

 

The ongoing ‘fake news’ epidemic of course has roots in post-Cold War geopolitics, and the free speech ideology embedded into social media platforms and their corporate custodians. But it is also an extension of a decades-long decline in public trust in institutions and experts. It is also an unintended consequence of what we thought was the best part about Internet technologies: the ability to give everyone a voice, to break down artificial gatekeepers, and allow more information to reach more people. It is well known how Dylann Roof, who killed nine in the 2015 Charleston massacre, began that path with a simple online search for ‘black on white crime’. The focus here is on what danah boyd identified as a loss of orienting anchors in the age of online misinformation: emerging generations of media users who are taught to assemble their own eclectic mix of truths in a hyper-pluralistic media environment, while also learning a deep distrust of official sources.

 

Scenes from the 2017 March for Science

2017 saw the March for Science: an earnest defence of evidence-based, objective, institutionalised truth as an indispensable tool for the government of self and others. The underlying sentiment: this isn’t an agenda for a particular kind of truth and trust, this is just reality – and anyway, didn’t we already settle this debate? But the debate over what counts as reality and how we get access to it is never quite settled.

 

3.

These are strange and unsettling combinations: the displacement of trust from institutions to technologies in the guise of the empowered ‘I’, and the related proliferation of alternative forms of truthtelling. My current suspicion is that they express an increasingly unstable set of contradictions in our long-running relationship with the Enlightenment. On one hand, we find the enduring belief in better knowledge, especially through depersonalised and inhuman forms of objectivity, as the ticket to rational and informed human subjects. At the same time, this figure of the individual who knows for themselves – found in Kant’s inaugural call of Sapere aude! – is increasingly subject to both deliberate and structural manipulations by sociotechnical systems. We are pushed to discover our ‘personal truths’ in the wilderness of speculation, relying only on ourselves – which, in practice, often means relying on technologies whose workings escape our power to audit. There is nobody you can trust these days, but the smart machine shall not lead you astray.

 

Facebook scores our trustworthiness because we asked them to

News that Facebook assigns users secret ‘trustworthiness’ scores based on the content they report has drawn a familiar round of worries. We are told that as users flag objectionable content on the platform, they are themselves scored on a scale of 0 to 1, based on how reliably their flagging concurs with the censors’ judgment. Presumably, other factors play into the scoring; but the exact mechanism, as well as the users’ actual scores, remain secret. The secrecy has drawn critical comparisons with China’s developing ‘social credit’ systems. At least with China’s Zhima Credit, you know you’re being scored and the score is freely available – something not afforded by American credit rating systems, either.
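As a purely illustrative sketch of the reported logic – not Facebook’s actual system, whose details remain undisclosed – a 0-to-1 reliability score of this kind could be as simple as the rate at which a user’s flags are upheld by moderators. The function name, the smoothing constant and everything else below are assumptions made for the sake of the example.

# Illustrative only: one way a 0-to-1 'flag reliability' score could be
# derived from a user's reporting history. This is NOT Facebook's actual
# mechanism, which is secret; names and constants here are invented.

def flag_reliability(flags_upheld: int, flags_total: int,
                     prior: float = 0.5, weight: int = 2) -> float:
    """Fraction of a user's flags that moderators upheld, smoothed so that
    users with no history start near the middle of the scale."""
    if not 0 <= flags_upheld <= flags_total:
        raise ValueError("invalid flag counts")
    return (flags_upheld + prior * weight) / (flags_total + weight)

print(flag_reliability(0, 0))   # 0.5  -> no history, neutral starting score
print(flag_reliability(9, 10))  # ~0.83 -> flags usually confirmed by moderators
print(flag_reliability(1, 10))  # ~0.17 -> flags rarely confirmed

Even in this toy form, the design choices – the prior, the weighting, what counts as ‘upheld’ – are exactly the parts that remain invisible to the people being scored, which is the point of the comparison to credit systems.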

But there’s a bit more to it. In many ways, Facebook is doing what we asked it to do. To untangle the issue, we might look to two other points of reference.

One is Peeple, the short-lived ‘Yelp for People’ that enjoyed a short period in 2015 as the Internet’s most reviled object. The app proposed that ordinary people should rate and review each other, from dates to landlords to neighbours – and, in the original design, the ratings would be published without the victim’s consent. The Internet boiled over, and the app was quickly stripped of its most controversial (and useful) features. That particular effort, at least, was dead in the water.

The Peeple app

But Peeple wasn’t an anomaly. It wasn’t even the first to try such scoring. (Unvarnished tried it in 2010; people compared that to Yelp, too; and it died just as quickly.) These efforts are an inevitable extension of the tendency to replace lived experience, human interaction and personal judgment with aggregated scores managed by profit-driven corporations and their proprietary algorithms. (Joseph Weizenbaum, Computer Power and Human Reason.) If restaurants and Uber drivers are scored, then why not landlords, blind dates, neighbours, classmates? Once we say news sources have to be scored and censored in the battle against fake news, there is a practical incentive to score the people who report that fake news as well.

More broadly, platforms have long been about ‘scoring’ us for reliability and usefulness, and weighting the visibility and impact of our contributions. It’s been pointed out that such scoring & weighting of users is standard practice for platforms like YouTube. Facebook isn’t leaping off into a brave new world of platform hubris; it is only extending its core social function of scoring and sorting people, in response to widespread demand that it do so.

In recent weeks, Twitter was described as essentially a rogue actor amongst social media platforms. Where Facebook and others moved to ban notorious misinformer Alex Jones, Jack Dorsey argued that banning specific individuals on the basis of widespread outrage would undermine efforts to establish consistent and fair policy. Those who called for banning Jones criticised Dorsey’s approach as lacking integrity, a moral cop-out. But let us be precise: what exactly makes Twitter’s decision ‘lack integrity’? Do we believe that if somebody is widely condemned enough, then platforms must reflect this public outrage in their judgment? That is, in many obvious ways, a dangerous path. Alternatively, we might insist that if Twitter’s rules are so impotent as to allow an Alex Jones to roam free, they should become more stringent. In other words, we are effectively asking social media platforms to play an even more dominant and deliberate role in censoring public discourse.

In many ways, this is the logical consequence of the widespread demand since November 2016 that social media platforms take on the responsibility of policing our information and our speech, and that they take on the role of determining who or what is trustworthy. We scolded Mark Zuckerberg in a global reality TV drama of a Congressional hearing, telling him that with power comes responsibility: it turns out that with responsibility comes new powers, too.

Mark Zuckerberg testifying before Congress

Now, we might say that the problem isn’t that Facebook is scoring us for trustworthiness, but that the process needs to be – we all know the magic phrase by heart – open, transparent, accountable. What would that look like? Presumably, this would require not only that users’ trust scores are visible to themselves, but that the scoring process can be audited, contested, and corrected. That might involve yet another third party, though it is unclear what kind of institution is trusted enough these days to arbitrate our trust scores.

Here, we find the second reference point: the NSA. Tessa Lyons, a Facebook product manager, explains that their scoring system must remain as secret as possible, for fear that bad actors will learn to abuse it. This is, of course, a rationale we have seen in the past not only from social media platforms with regards to their other algorithms, but also from the NSA and other agencies with regards to government surveillance programs. We can’t tell you who exactly is being spied on in what way, because that would help the terrorists. And we can’t tell you how exactly our search results work, because this would help bad actors game the algorithm (and undermine our market position).

So we come back to a number of enduring structural dilemmas in the way our Internet is made. In protesting the disproportionate impact of a select few powerful platforms on the public sphere, we also demand that these platforms increase their control over public speech. Even as we ask them to censor the public with greater zeal, we possess little effective oversight over how the censors are to act.

In the wake of Russian election interference, Cambridge Analytica and other scandals, the pretension to neutrality has been punctured. There is now a wider recognition that platforms shape, bias and regulate public discourse. But while there is much sense in demanding better from the platforms themselves, passing off too much responsibility onto their hands risks encouraging exactly the hubris we like to accuse them of. “What makes you think you know the best for us, Mark Zuckerberg? Now, please hurry up and come up with the right rules and mechanisms to police our speech.”

If there is sufficient public outrage, Facebook might well be moved to retire its current system of trust scores. It has a history of stepping back from particularly rancorous features, only to gradually reintroduce them in more palatable forms. (José van Dijck, The Culture of Connectivity.) Why? Not because Facebook is an evil force that advances its dark arts whenever nobody’s looking, but because Facebook has both commercial and social pressures to develop more expansive scoring systems in order to perform its censoring functions.

Facebook’s development of ‘trust scores’ reflects a deeper, underlying problem: that traditional indices of trustworthiness, such as mainstream media institutions, have lost their effectiveness. Facebook isn’t sure which of us to trust, and neither are we. And if we are searching for new ways to assign trust fairly and transparently, we should not expect the massive tech corporations, whose primary responsibility is to maximise profit from an attention economy, to do that work for us. They are not the guiding lights we are looking for.

Interview @ Gas Gallery

I spoke with Ceci Moss at Gas, a mobile art gallery that roams Los Angeles and the web, about different forms of self-tracking: the technological promises and economic precarities, moral injunctions and everyday habits… found here.

An excerpt:

We have to ask not only ‘is this really empowering or not’, but also ‘what is it about our society that makes us feel like we need to empower ourselves in this way?’ In the same way, we have to ask what kind of new labours, new troubles, new responsibilities, new guilts, these empowering activities bring to our doorstep. From an economic perspective, if you are someone who has to constantly sell your productivity to the market, the ‘empowerment’ of self-tracking and self-care becomes a necessary labour for your survival. The injunction to ‘care for yourself’ is a truncated version of ‘you’ve got to care for yourself to stay afloat, because nobody will do it for you.’

The interview is part of their ongoing exhibition:

take care | June 9–July 20, 2018

Featuring: Hayley Barker, Darya Diamond, Ian James, Young Joon Kwak, C. Lavender, Sarah Manuwal, Saewon Oh, Amanda Vincelli, and SoftCells presents: Jules Gimbrone

How do radical ambitions of “self-care” persist or depart from capitalist society’s preoccupation with wellness and the industry surrounding it, particularly when filtered through technological advances? How can we imagine personal wellness that complicates or diverges from capitalist and consumerist tendencies? Taking its name from the common valediction, which is both an expression of familiarity and an instruction of caution, take care is a group exhibition that considers the many tensions surrounding the possibilities of self-care.

 

Lecture @ Wattis Institute for Contemporary Arts

I will be at the Wattis Institute for Contemporary Arts, San Francisco, on 20 March 2018 to discuss data, bodies and intimacy, as part of their year-long program on the work of Seth Price. More information here.

 

Data, or, Bodies into Facts

Data never stops accumulating. There is always more of it. Data covers everything and everyone, like skin, and yet different people have different levels of access to it, so it’s never quite fair to call it “objective” or even “truthful.”

Entire industries are built around storing data, and then protecting, organizing, verifying, optimizing, and distributing it. From there, even the most banal pieces of data work to penetrate the most intimate corners of our lives.

For Sunha Hong, the promise of data is the promise to turn bodies into facts: emotions, behavior, and every messy amorphous human reality can be distilled into the discrete, clean cuts of calculable information. We track our exercise, our sexual lives, our relationships, our happiness, in the hope of self-knowledge achieved through machines wrought in the hands of others. Data promises a certain kind of intimacy, but everything about our lived experience constantly violates this serene aesthetic wherein bodies are sanitized, purified, and disinfected into objective and neutral facts. This is the push-pull between the raw and the mediated.

Whether it be by looking at surveillance, algorithmic, or self-tracking technologies, Hong’s work points to the question of how human individuals become the ingredient for the production of truths and judgments about them by things other than themselves.

 

Update: He gives a talk.


Presentation @ Digital Existence II: Precarious Media Life

Later this month I will be presenting at Digital Existence II: Precarious Media Life, at the Sigtuna Foundation, Sweden, organised by Amanda Lagerkvist via the DIGMEX Research Network (of which I am a part) and the Nordic Network for the Study of Media and Religion. The abstract for my part of the show:

 

On the terror of becoming known

Today, we keenly feel the terror of becoming known: of being predicted and determined by data-driven surveillance systems. The webs of significance which sustain us also produce persistent vulnerability to becoming known by things other than ourselves. From the efforts to predict ‘Lone Wolf’ terrorists through comprehensive personal communications surveillance, to pilot programs for calculating insurance premiums by monitoring daily behaviour, the expressed fear is often of misidentification and misunderstanding. Yet the more general root of this anxiety is not of error or falsehood, but a highly entrenched moralisation of knowing. Digital technologies are the newest frontier for the reprisal of old Enlightenment dreams, wherein the subject has a duty to know and technological inventions are an ineluctable force for better knowledge. This nexus demands and requires subjects’ constant vulnerability to producing data and being socially determined by it. In turn, subjects turn to what Foucault called illegalisms[1]: forms of complaint, compromise, obfuscation, and other everyday efforts to mitigate the violence of becoming known. The presentation threads this normative argument with two kinds of grounding material: (1) episodes in becoming-known drawn from original research into American state- and self-surveillance, and (2) select works in moral philosophy and technology criticism.[2]

 

[1] Foucault, M., 2015. The Punitive Society: Lectures at the Collège de France 1972–1973. B. E. Harcourt, ed. New York: Palgrave Macmillan.

[2] E.g. Jasanoff, S., 2016. The Ethics of Invention: Technology and the Human Future, New York: W.W. Norton & Co; Vallor, S., 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford: Oxford University Press; Winner, L., 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology, Chicago: Chicago University Press.