Some resources on COVID surveillance

Below is a loose collection of COVID surveillance developments around the world. We see tales of unproven, hastily duct-taped contact tracing apps that run headlong into predictable train wrecks in actual use cases; thermal cameras that don’t work; fantasies of drone and robot surveillance; and almost comically harmful renditions of workplace surveillance & exam proctoring.

It is a partial, eclectic collection of whatever I spotted between early March & early May (updates potentially to come), but some folks have found it useful so I am putting it up here. Disclaimer that the notes are often going to be messy & full of my initial, personal views. Any questions / errors / concerns let me know at sun_ha [at] sfu.ca!

May 27: General updates + new details on Middle East courtesy of Laya Behbahani

May 28: A couple snippets; Ctrl+F the latest date (e.g. “May 28”) to find new entries.

June 11: A few additional updates on contact tracing apps across UK, France & Australia.

June 16: A few more entries on how contact tracing apps are faring after deployment.

June 24: More updates on the lifecycle of contact tracing apps; scholars’ critique of surveillance / civil rights implications; and Brown’s controversial plans to surveil its students and employees in fall.


Articles from technology researchers on COVID surveillance

An April 8 ACLU white paper by Jay Stanley & Jennifer Stisa Granick warns that “simplistic understandings of how technology works will lead to investments that do little good, or are actually counterproductive”. They assess some popular proposed uses systematically and point out key questions.

Susan Landau writes for Lawfare that the central question is efficacy, and on that point the proposed measures are often either failing or failing to provide adequate answers. “If a privacy- and civil liberties-infringing program isn’t efficacious, then there is no reason to consider it further.” She notes that cellphone GPS-based tracking systems are unlikely to be worthwhile: they are not accurate enough, often fail indoors, and are thus unsuited to the 2m proximity problem.

In a US Senate Committee hearing on big data & COVID, privacy/law scholar Ryan Calo advises ‘humility and caution’. Calo also notes against mission creep: “there will be measures that are appropriate in this context, but not beyond it” – that, “to paraphrase the late Justice Robert Jackson, a problem with emergency powers is that they tend to kindle emergencies.”

Calo later joins Ashkan Soltani & Carl Bergstrom for Brookings with the same message: CT apps (e.g. Apple/Google’s) involve false positives & false negatives with corresponding social/psychological costs; access & uptake difficulties; privacy & security issues, such as correlating BT tags to people via stationary cameras; and the normalisation problem, in which “these voluntary surveillance technologies will effectively become compulsory for any public and social engagement.”

U of Ottawa researcher Teresa Scassa counsels caution for Canada on data surveillance, arguing that individual tracking is ‘inherently privacy-invasive’.

Scholars, including data ethics researcher Ben Green, write directly disputing the privacy-health tradeoff: “rather than privacy being an inhibitor of public health (or vice versa), our eroded privacy stems from the same exploitative logics that undergird our inadequate capacity to fund and provide public health.”

Harvard Berkman Centre’s Bietti & Cobbe caution against “the normalisation of surveillance to a point of no return” – characterised not as the greedy advance of a powerful state, but as an eager ‘devolution of public power’ by ‘hollowed out governments’ to Big Tech.

A Brennan Center piece directly invokes post-9/11 surveillance programs as an example of pernicious and ineffective systems outlasting the crisis.

Andrejevic & Selwyn critique the fantasy of technological solutionism: could we use AI to invent the vaccine, blockchain to constrain the spread, could all of us simply WFH forever? Of course, the real impact is made through often faulty, wonky data-extraction apps and the repurposing of surveillance techniques for unprecedented populational control.

Naomi Klein interprets it as a ‘pandemic shock doctrine’: what we’re getting is disaster capitalism’s free-market ‘remedies’, exploiting the crisis of inequality. The response is to fund, save, promote, and justify Big Tech to save us through pointless and wonky surveillance tools; airline companies to save us by, uh, not going bankrupt; and so on. Klein thus notes we will do here what we did to the banks in 2008.

Early May, Naomi Klein writes for The Intercept of the ‘screen new deal’ for the pandemic shock doctrine: Cuomo’s handshake with Eric Schmidt & the Gates Foundation for New York as a technologist’s experimental ground zero. And of course it is one where every bad idea they’d been pushing becomes accelerated past proper judgment.

Germany-based AlgorithmWatch cautions that COVID is fundamentally ‘not a technological problem’, and that the ‘rush to digital surveillance’ risks new forms of discrimination and other abuses.

The Ada Lovelace Institute (independent, AI focus) reviews recently mooted proposals and finds ‘an absence of evidence’ regarding efficacy and harm mitigation.


CIGI notes – regarding the US’s sweeping travel bans, but also policies like “treating Chinese state journalists as official representatives of the Chinese state and intelligence apparatus” (to which China retaliated with expulsions) – that there is a broader concern about abuse of authority & human rights: “Epidemic response is a rare circumstance in which governments make sweeping decisions about the collective good and take unchecked action, often in contravention of individual human rights.”

UT Austin researchers Katie Joseff & Sam Woolley warn that COVID-related location tracking is a golden opportunity for what their team calls ‘geopropaganda’: “the use of location data by campaigns, super PACs, lobbyists, and other political groups to influence political discussions and decisions.” Hong Kong, of course, is proving to be ground zero.

June 24: Joven Narwal at the University of British Columbia argues that police must not have access to CT app data, citing past instances in which police have tried to draw on existing public health data.


Contact Tracing (CT) Apps & Other Software/Databases

MIT Tech Review now has a great database of government CT apps around the world.

Even the United Nations has come out with not a CT app but a ‘social distancing’ app that, well, doesn’t work: the 1point5 app is supposed to alert when other BT devices enter range, but it often fails to do so.


Google-Apple Contact Tracing System

A key player has been the international research collaboration behind the open-source DP-3T (decentralised privacy-preserving proximity tracing) protocol, based on bluetooth beacons broadcasting randomised numbers that record only proximity, which can then be used retroactively to trace the contacts of a positive case.

DP-3T is the clear inspiration for the later Google-Apple proposal. Apr 10 – Google & Apple announce a collaborative effort on a bluetooth-based contact tracing system. This means APIs in May to facilitate third-party apps’ interoperability across the two OSs, but ultimately an OS-level, no-download functionality. They promise privacy and transparency, with the system always being user opt-in.

The basic functionality of the bluetooth system is essentially identical to DP-3T. Bluetooth beacons exchange codes, and upon a positive case’s upload of their last 14 days of keys, contacts receive a phone notification with info. Google promises an ‘explicit user consent’ requirement; the system ‘doesn’t collect’ PII or location data; the list of contacts ‘never leaves the phone’; positives aren’t identified; and the system as a whole will only be used for COVID contact tracing. But of course, you could then have states and employers require that users use their particular third-party app enabled by the API.
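To make that data flow concrete, below is a minimal Python sketch of the decentralised matching idea. This is my own hypothetical simplification: the actual DP-3T/GAEN specs derive rolling IDs via AES-based key schedules and rotate them roughly every 15 minutes; the names, key sizes and hourly rotation here are invented for illustration.

```python
# Simplified sketch of decentralised BT exposure matching (DP-3T / GAEN style).
# Assumption: HMAC-SHA256 and hourly ID rotation stand in for the real spec's
# AES-based derivation and ~15-minute rotation.
import hashlib
import hmac
import os

def rolling_ids(daily_key: bytes, per_day: int = 24) -> list:
    """Derive the pseudonymous IDs a phone broadcasts over one day."""
    return [
        hmac.new(daily_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
        for i in range(per_day)
    ]

# Each phone holds a private daily key; other phones log the IDs they hear.
alice_key = os.urandom(16)
bob_heard = set()

# Alice and Bob are in proximity during hour 9: Bob's phone logs Alice's beacon.
bob_heard.add(rolling_ids(alice_key)[9])

# Alice tests positive and uploads her daily key(s); the server never learns
# who heard them. Bob's phone downloads the keys, re-derives the rolling IDs
# locally, and intersects them with its own log.
exposed = any(rid in bob_heard for rid in rolling_ids(alice_key))
print("Bob notified of exposure:", exposed)  # True
```

The point to notice is that the server only ever relays diagnosis keys; both the proximity log and the matching stay on the phone – which is also what makes the third-party ecosystem question above so important.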


Immediately, Jason Bay, product lead for Singapore’s TraceTogether, put out a clear warning: Bluetooth tracing cannot replace manual tracing, only supplement, and any optimism otherwise is “an exercise in hubris”.

Bay notes that TT worked insofar as it worked with the public health authorities from day 1, including shadowing manual tracers at work. Bay argues that human-in-the-loop design & human fronting are crucial to fight type 1/type 2 errors and to introduce judgment about how each contact should be traced. He cites the Washington choir super-spreader event, where 2m-proximity tracing would have flagged nothing.


Lawmakers in the US are beginning to join conversations around the risks: “without a national privacy law, this is a black hole.” (Anna G. Eshoo, D-Menlo Park)

On 21 April, the French government requested modifications so that the app can work in the background on iPhones (currently the bluetooth functions it requires cannot run in the background), but Apple is not expected to agree.

For similar reasons, the UK NHS rejects the Apple-Google model in favour of an option where matching occurs on a centralised server, citing analytics benefits, & wants the app to work in the background.


There are generally applicable concerns about the actual efficacy of any digital contact tracing app: adoption barriers, type 1 & 2 error rates, public behaviour change (e.g. false confidence/fear).

Julia Angwin’s review points out the positives: the Google-Apple solution is opt-in & the keys are anonymous from the beginning, the system is mostly decentralised, and G/A themselves collect no PII. But the very lack of PII renders it vulnerable to trolling (send the whole school home, or a Russian DDOS). The keys themselves are very public, so you can, say, grab them & rebroadcast them somewhere else to spoof presence. All these false alerts would undermine the system. There also remains some risk of re-identification & location-info exposure, and there is likely to be some limited degree of location-info centralisation (e.g. to limit key-sharing by region). Angwin also notes that “other apps may try to grab the data”, e.g. advertisers – and this gets into what I noted earlier: the ecosystem problem – what third-party apps, government requirements or healthcare databases bring into the original G/A specs.


China & Asia

China: NYT reports on the Chinese case of Alipay Health Code, which residents are required to install and which signals their quarantine status; they then show it to officials to gain free movement across checkpoints. Once installed & consented to, a “reportInfoAndLocationToPolice” function communicates with the server.


South Korea: Widely known for its aggressive contact tracing, South Korea’s system is primarily dependent on traditional CT rather than app use. We know that each and every confirmed case is followed up through interviews, where in most cases individuals voluntarily surrender certain data for this purpose. Credit card use in particular allows a detailed mapping of their movements. This info is then typically revealed to the public – right down to “X convenience store” or “Y clothing store” – which has also prompted predictable online gossiping.

Mar 7 sees a new government app for ~32,400 self-quarantine subjects. Using GPS, it alerts if users exit the quarantine area (or if they shut off GPS) – and police claim the right to force noncompliant persons to return. But app installation was not made mandatory, and a week later one region saw only 21% of quarantiners install it. And of course, leaving the smartphone at home circumvents the system entirely.

A key feature in SK’s use of contact tracing is the way each quarantining individual is assigned a government worker for follow-up; not only regular calls, but for instance, location data from the CT app showing the individual has left the home immediately alerts the case worker. Some info here (Korean).
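The quarantine-app mechanics described above reduce to a simple geofence check plus a case-worker alert. Here is a rough sketch of that logic (an assumption-laden reconstruction on my part: the app is closed source, so the radius, coordinates and alert handling are all invented):

```python
# Hypothetical sketch of a quarantine geofence check of the kind the Korean
# app is described as performing. All thresholds are invented.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

QUARANTINE_RADIUS_M = 100  # assumed threshold; not publicly disclosed

def check_fix(home, fix):
    """Return an alert if the user breaches quarantine or GPS goes dark."""
    if fix is None:  # GPS switched off or no signal
        return "ALERT: GPS unavailable - notify case worker"
    if haversine_m(*home, *fix) > QUARANTINE_RADIUS_M:
        return "ALERT: user outside quarantine zone - notify case worker"
    return "OK"

home = (37.5665, 126.9780)
print(check_fix(home, (37.5701, 126.9768)))  # ~400m away -> alert
print(check_fix(home, None))                 # GPS off -> alert
```

Which also makes the circumvention obvious: the check binds the phone, not the person.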


Singapore: Like South Korea, Singapore has publicly reported personal details for each infected person. A ‘BlueTrace’ protocol developed by a government team is used to log encounter/proximity data between phones (not location data): “The collection and logging of encounter/proximity data between devices that implement BlueTrace is done in a peer-to-peer, decentralised fashion, to preserve privacy.”

On May 1, a tech editor at a major Singapore newspaper calls for making the app mandatory by law, lamenting that only 1.1m installs falls far short of the ~3.2m minimum (3/4 of pop).


India: One Indian state is branding people’s hands with quarantine stamps. The Bangalore Mirror reports that Karnataka has begun to require ‘hourly selfies’ sent to the government between 7am and 10pm using an app, with noncompliance making one liable for ‘mass quarantine’. The app, “Quarantine Watch”, appears to demand personal information and symptoms, and to have a photo submission system (which many users report simply does not work).


United States

Mar 13: Trump conjures up on the fly a nonexistent ‘national scale coronavirus site’ from Google during a news conference. What was actually on the way was a triage tool made by Alphabet subsidiary Verily, available that weekend in a pilot for Santa Clara & San Mateo. “Project Baseline” was more or less a screening survey based on Google Forms.

Verily’s terms of service show it is not HIPAA-covered & that your data may be shared with parties “including but not limited to” Google and Salesforce. In contrast, similar functionality has already been available from, say, Portland-based Bright.md without the invasive data collection.

Apple has since created a similar COVID screening tool.

Palantir is ‘working with’ the CDC to provide modelling services. By Apr 21, we know that Palantir has been granted the federal contract for a major aspect of ‘HHS Protect Now’, in line with its usual expertise in data analytics tools.

US states are actively working on CT apps; Utah has contracted an app developer (Twenty) to create ‘Healthy Together’. It promises bluetooth- and location-based matching as well as a basic symptom screener & the ability to receive test results. There is a clear push to pump money in and give out lucrative contracts for tech platforms to enable CT; a letter from ‘health experts’ asks Congress for $46.5b towards CT, including 180,000 workers to conduct interviews.

The degree to which CT apps have been successful in terms of adoption, or of actually achieving CT, is unclear. Utah’s HT app after a month has not led to any actual CT work being done, officials say; the highest known rate of participation (i.e. installs) remains South Dakota’s (Care19, at 2%, with a single positive case having used it).

North Dakota also uses Care19, which collects location data and sends a random ID number & the phone’s advertising ID to Foursquare, as well as to Bugfender (a software bug manager), which passes data on to Google.

June 16: after early enthusiasm and rollout in some states during April and May, there is now some evidence that state governments are not so impressed. States like California have declined to actively pursue CT solutions. The American public do not appear to be very enthusiastic, either, with one survey reporting that 71% would rather not use such an app.


Canada

Alberta Health Services deployed a screening tool early on.

Alberta also deploys a CT app based on a version of Singapore’s TT – ABTraceTogether. Data is collected via bluetooth temporary IDs, with 21 days of local storage; Alberta Health uses anonymised data for analytics – no location data – and promises encrypted IDs that, even upon decryption by health services, do not reveal identity.


UK & Europe

The NHS has a symptom ‘survey’ that is more about helping the NHS collect data and plan. It appears to require DOB, postcode and household composition, and to have that data retained for 8 years (?!) and shared across government departments, ‘other organisations’ and ‘research bodies’.

On Mar 26, the Economist reports that Palantir is also ‘teaming up’ with the NHS, though its exact contribution is unclear. (One assumes it’s Palantir’s bread and butter of database consolidation, visualisation & search.) On Mar 28, the NHS reveals in a blog post that, as expected, it’s the existing Foundry system for the front-end, & lists other partners like Microsoft & Google.

The UK deploys an NHS CT app in early May, developed by NHSX (NHS digital) with VMware Inc (Palo Alto) & Zühlke (Swiss). On 4 May, the UK announced an NHS CT app pilot for the Isle of Wight using, indeed, a more ‘centralised’ approach, with plans to record location data. BT needs to be on at all times, but it seems the app can work in the background, with the handshakes sent to a UK-based server for the matching process.

Buzzfeed reports complaints of device incompatibility, battery drain, and poor UX causing confusion – as well as clear technical failures in picking up BT signals or maintaining contact with hundreds of devices. Not only that, the app appears non-GDPR-compliant; e.g. the in-app links to the privacy policy ironically use Google Analytics, which would allow ad targeting.

June 11: The UK is having second thoughts, with a second trial of the NHS CT app postponed and a return to the Google/Apple model under consideration. A preprint study presents survey data indicating 60.3% willingness to install CT apps in the UK.

June 24: BBC provides a broad chronological overview of the UK CT app and its discontents. It notes a chronic pattern of missed deadlines, second thoughts around design features & privacy concerns, and periods of radio silence from the UK government.


May 28: France is set to roll out its CT app “StopCovid”. The bluetooth matching is intended to supplement traditional CT work. Like the UK solution, it differs from the Google-Apple design by centralising the matching process – which makes it theoretically far easier to re-identify persons on the basis of anonymised information.

June 11: France’s StopCovid reports uptake levels of around 2% of population. The influential Oxford study suggests that CT apps can be effective in reducing infection levels at any level of uptake, but ~60% has generally been mooted as the required uptake level in order to open up while keeping infection levels at lockdown rates.
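As a back-of-envelope illustration of why uptake matters so much (my own arithmetic, assuming random mixing and that a contact is only visible when both parties run the app):

```python
# Under random mixing, the share of contact events the app can even observe
# is roughly uptake squared, since both phones must be running it.
for uptake in (0.02, 0.20, 0.60):
    print(f"{uptake:.0%} uptake -> ~{uptake ** 2:.2%} of contacts observable")
# 2%  (France's reported level)    -> ~0.04%
# 60% (the commonly mooted target) -> ~36.00%
```

At France’s reported 2%, the app can in principle observe roughly 1 in 2,500 contact events.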


A Polish app is described as persistently accessing your location and asking for regular selfie updates; of course, it was a rushed job ‘prone to failure’. Many downloaded it voluntarily.


June 16: Norway’s Smittestopp has been suspended and ordered to delete existing data. As in Bahrain, Kuwait and elsewhere, the key concern is that collection of GPS location data as well as Bluetooth signals means a far more intrusive level of knowledge around individual movements.


Australia & New Zealand

In late April we have the Australian COVIDSafe app deployed, with 1m installations. Not open source; no location data, only bluetooth. The foreground question is handled poorly: the app tells you to “keep the app running”, but a government official’s answer on whether you can minimise it was “It’s unclear at this stage”. People trying to figure it out aren’t sure either (e.g. does it go into Apple’s overflow area? What about Apple’s low power mode?).

By June 11, Australia reports ~6.2m users (in a country of ~25m). The government has not regularly released clear, up-to-date data on app downloads; an independent initiative collected existing figures & estimates until May 24, by which point numbers had settled around 6m.

Wellington, NZ introduces the Rippl app, coinciding with a gradual drop in alert levels. Rippl is a voluntary check-in system and claims zero location tracking whatsoever; people simply scan QR codes posted in shops to check in, with paper sign-in available. A Rippl worker confirms zero central collection, meaning that following up on alerts requires voluntary contact. (Some of the side effects are obvious: an Auckland woman was hit on by a Subway worker who used the contact info she had input for CT.)
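For contrast with the bluetooth systems above, a Rippl-style check-in amounts to little more than a locally stored visit diary. A minimal sketch (an assumption-laden reconstruction: the app is proprietary, and the QR payload format and field names here are invented):

```python
# Sketch of a QR check-in diary that never leaves the phone unless the user
# volunteers it to contact tracers. Payload fields are invented.
import json
import time

diary = []  # in a real app this would be persistent on-device storage

def check_in(qr_payload: str) -> None:
    """Scan a venue's QR code and record the visit locally."""
    venue = json.loads(qr_payload)
    diary.append({"venue": venue["name"], "ts": time.time()})

check_in('{"venue_id": "wlg-001", "name": "Cuba St Cafe"}')
print(diary)  # shared only if the user chooses to contact tracers
```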


Middle East

Israel: On Mar 16 Netanyahu announces that he has “authorized the country’s internal security agency to tap into a vast and previously undisclosed trove of cellphone data to retrace the movements of people who have contracted the coronavirus and identify others who should be quarantined because their paths crossed.” Here again, geolocation data systems built for commercial uses and legislated for antiterrorism purposes become reappropriated for another end. Where Israel has already extensively developed and deployed surveillance techniques against Palestinians, including drones, FR and algorithmic profiling, this sets a precedent for use on Israel’s non-Palestinian citizens. Similar government use of telecom data to track movement has been announced by South Africa.

Meanwhile, we know that an Israeli firm, Cellebrite, which used to sell jailbreaking software to police, now supplies similar tech for extracting contacts & location data.


Qatar: April saw the release of the EHTERAZ CT app, including functions for ‘following up on’ quarantiners; alerting users of contact with positive cases; and an Alipay (China)-like colour-coded scheme for pos/neg status.

Later, an Amnesty investigation shows Qatar’s EHTERAZ app had a key security flaw allowing access to users’ personal information, including location data.


Saudi Arabia: The ‘Tawakkalna’ app arrives in May, designed to manage movement restrictions rather than directly surveil for COVID. The app is reported to work by allocating “four hours per week” to people that they can use for essential trips.


June 16: Bahrain and Kuwait’s CT apps have been flagged by Amnesty International as excessively intrusive. They feed location data to a central server, meaning that highly accurate mapping of individual movements can very easily be performed.


‘Immunity Passports’

Apr 10 – Fauci notes that some kind of immunity IDs / cards for Americans is on the cards; the UK & Italy are considering the same. UK startup Onfido has discussed an ‘immunity passport’ with the US government, claiming this personally identifiable system can be scaled up rapidly. The initial effort came from German researchers at the Helmholtz Centre for Infection Research, involving mailing out antibody tests en masse and then providing a form of immunity certification. NZ is considering its own bluetooth ‘CovidCard’, no doubt inspired by Singapore’s & the US’s moves.

May 4: we spot a live DOD contract, $25m, calling for a “Wearable Diagnostic for Detection of COVID-19 Infection” that would be demonstrated within 9 months and then be deployable at scale.


Drones & Robots

US: On Apr 3, a drone was spotted in Manhattan urging social distancing, but CBS describes it as run by a ‘volunteer force’. Despite concerns from the Gothamist, there remains no clear proven case of the NYPD using drones for SD. Regular patrolling by cars is taking place, with $250-500 fines given out.

Companies are also building and marketing their own drone solutions – e.g. a Digital Trends report on ‘Draganfly’ touting drones with thermal scanners that can detect temperature & visually identify coughing. Perhaps the PR was effective: on April 23, we hear that Westport, Connecticut is using such drones to detect high temperatures & close proximity and issue verbal warnings.

The robotics company Promobot advertises a new screening robot that takes temperatures – after its earlier Feb publicity stunt where a robot asked New Yorkers about their symptoms, and was kicked out for lacking permits. It’s your standard invasive FR for age, gender, plus a (likely inaccurate) temp check tool.


Madrid is also using drones to monitor movement, using the speaker to ‘order people home’. Lombardy authorities are reported as ‘using cellphone location data’ to assess lockdown compliance.

The UAE has contracted ‘unmanned ground vehicles’ to be deployed at Abu Dhabi Airports for disinfection work. Oman has also announced the use of drones to detect high-temperature individuals, though details are scarce. ‘Stay at home’ drones that approach individuals outdoors and blast audio instructions have been announced in the UAE, Jordan, Bahrain, Kuwait and Morocco as well.

Singapore is now trialling a Boston Dynamics robot with a camera and loudspeaker to encourage social distancing.


In some cases, robots and other tech designed not specifically for COVID surveillance are being deployed for the pandemic response.

General healthcare-related robot/tech companies are donating/pitching their tools to overwhelmed hospitals: e.g. in the UAE, a Softbank-backed company is offering ‘free healthcare bots’ that seem to be more for telemedicine-related assistance than COVID surveillance in a direct sense. In the US, Google is now partnering with Mount Sinai (and others?) to send Nest cams to hospitals for patient monitoring.

The California startup Zipline has used autonomous drones to deliver medical supplies in Ghana.


Thermal Cameras & Public Space/Retail Surveillance

China has used thermal cameras in Wuhan as early as 21 January.

Numerous other countries have considered thermal screening at airports / ports of entry; e.g. Qatar claims all persons entering country are subject to thermal screening, and numerous US airports used it as early as February.


In US, thermal cameras have been explored by both state and corporate bodies:

“Athena Security” (Austin, TX) promises ‘AI thermal cameras’ to detect fevers in crowded spaces, modelled on its existing product for gun detection. About a week later, cybersecurity firm IPVM reports that Athena faked pretty much everything in its promo: Hikvision cameras rebranded as Athena, a photoshopped interface, and fake restaurant & customer testimonials.

Other grifters like RedSpeed USA have been advertising thermal scanner setups – just like Athena, with China-sourced hardware.

May 28: Companies like FLIR, which sell thermal cameras and sensors, have seen a rise in demand and revenues despite unanswered questions over efficacy; as experts in the Wired report note, this risks false confidence around asymptomatic persons, and similar measures have proven largely ineffective in past outbreaks.

TSA is reported on May 15 to be preparing temperature scanning in airports, though details remain unclear. Some within TSA have raised concerns around logistics, accuracy, and procedures; “People with high temperatures will be turned over to officials with the Centers for Disease Control and Prevention, the administration official said.”


Facial Recognition & Smart Tech / AI

April 28, 2020 – NBC reports that Clearview AI is “in talks with federal and state agencies” towards FR-based COVID detection. This info appears to come from Hoan Ton-That, the infamous Clearview founder with white supremacist ties, in a public bid to raise the company’s profile and fish for contracts.

There’s been a rush to collect images of masked faces to produce training data for FR systems. In April, researchers published a ‘COVID-19 mask image dataset’ to GitHub, comprising 1,200 images taken from Instagram.


There’s plenty of vague talk around ‘using AI’ to improve COVID surveillance, but it is often unclear what exactly the AI component is. We might assume, as Arvind Narayanan has shown elsewhere, that many such claims to AI are (1) not actually AI, and/or (2) not actually effective.

Sometimes we get a little more detail. Dubai Police claim to use ‘smart helmets’ with IR cameras for temperature reading, facial recognition & car plate readers for on-the-ground surveillance. This follows similar reports of smart helmets in use across Shenzhen, Chengdu & Shanghai – though actual efficacy remains unknown.


May 28: WaPo reports that data streams from wearables like Fitbit might be useful for early detection of flu-like symptoms. This is an old chestnut in the decade-plus effort to demonstrate the utility of wearable tech (e.g. occasional news stories about individuals catching heart attacks thanks to Fitbit). There are still no peer-reviewed studies showing clear, replicable results, of course.


Workplace & School Surveillance

Workplaces are resorting to always-on webcams and other measures to try and maintain control as WFH becomes the norm. Some are exceptionally draconian, including keylogging.

PwC is getting in on the act, offering its own tracing tool that would track employees. It combines bluetooth with its own ‘special sauce’ of analysing BT/WiFi signals within each room to reduce inaccurate pings (e.g. people on the other side of a wall). They tout huge interest from clients – and privacy provisions full of gaping holes.
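PwC has not published its method, but a hypothetical sketch of this kind of signal filtering might look like the following (every threshold, and the idea of counting shared Wi-Fi access points, is my own invention for illustration):

```python
# Hypothetical 'same room' filter: combine BT signal strength (RSSI) with
# whether two phones see the same Wi-Fi access points, to discount pings
# that pass through walls. Thresholds are invented.
def plausible_contact(rssi_dbm: float, shared_wifi_aps: int) -> bool:
    CLOSE_RSSI_DBM = -65  # stronger (less negative) suggests same-room range
    return rssi_dbm > CLOSE_RSSI_DBM and shared_wifi_aps >= 2

print(plausible_contact(-58, 3))  # strong signal, shared APs -> True
print(plausible_contact(-80, 0))  # weak signal through a wall -> False
```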

Companies are also exploring ‘smart cameras’ to check for masks and social distancing – e.g. Motorola, Camio (San Mateo) – often targeting factories and offices. Proxxi (Canada) instead sells wearable bracelets that vibrate upon proximity; Ford and others are trying them too, sometimes with the idea of combining them with CT.


Schools are joining in: online proctoring services – where globally outsourced workers monitor students by webcam while they take tests & score their reliability as an anti-cheating measure – have exploded in popularity & inquiries with COVID. Tilburg U’s rector defends the use of Proctorio: “Digital surveillance is indispensable to maintain the value of the diploma”, he says – and anyway, don’t worry, “we only use the most essential tools”. The lazy whataboutism trotted out to distract from the issue: hey, physical exams can be even more invasive, someone could look in your bag!

Similarly, Examity requires students to hand over personal info and provide remote access to the proctor; it monitors students’ keystrokes; and of course there’s the easy translation of unscientific bullshit hunches into ‘common sense’ features: proctors “closely watch the face of the student to see if there is something suspicious, like suspicious eye movements”. Schools are now training their students to swallow their concerns and critical thinking and to accept bad technology as a price of admission. One student says: “When you’re a student, you have no choice […] This is school in 2020.”

Wilfrid Laurier’s maths dept has forced students to buy webcams – during, of course, a webcam supply shortage – to facilitate proctoring, arguing that “there are no alternatives”.

June 5: Proximity detectors are planned for deployment in an Ohio school district for COVID.

June 24: As many universities seek ways to open up for the fall semester and keep revenue flowing, they are turning to a wide array of surveillance technologies to try and encourage students – and coerce employees – to show up on campus. Brown is partnering with Alphabet-owned Verily and its Healthy at Work program, in which all students & employees use apps to log symptoms, are randomly assigned to testing, and have their data collected for further development.


Beyond the workplace, COVID is also becoming an opportunity to Trojan horse new surveillance measures. E.g. real estate & proptech, already a growing field post-2008. A Boston Review article mentions Flock Safety; Bioconnect and Stonelock, which use biometrics/FR to admit known individuals, now with added bullshit about helping prevent COVID through such access control; and Yardi, a ‘virtual landlord’ service that helps manage large rental populations, which now boasts a remarkably unnecessary Trojan horse:

In response to COVID-19 the company is offering communication services meant to allow families to check on loved ones in nursing home facilities while remaining socially distanced. Using its services, the company states, “family members can view the latest records about their residents, including vital signs, diagnoses and medication orders. This data is shared from Yardi EHR [electronic health records] in real time.” Here again we see a window into the expansive ambitions of one of the largest virtual landlord firms: extending proptech’s reach into personal health data and even electronic health records.


more to come.


The Futures of Anticipatory Reason

Coming out very soon in Security Dialogue is a piece I worked on together with Piotr Szpunar, whose book Homegrown: Identity and Difference in the American War on Terror came out last year with NYU Press – so both of us have been looking closely at current developments in surveillance, counter-terrorism and the demand to predict. In the article, we argue that anticipatory security practices (just one part of the even broader current obsession with prediction) invoke the future to open up wiggle room for unorthodox, uncertain and otherwise problematic claims about people. This gap, which we call the ‘epistemic black market’, is very useful for the flexibility it affords security practices – flexibility that is typically used to reinforce longstanding biases and power relations, exemplified by the continuing insistence on the figure of the brown, Muslim terrorist.

You can find the pre-proofread version on this site here.


Abstract:

This article examines invocations of the future in contemporary security discourse and practice. This future constitutes not a temporal zone of events to come, or a horizon of concrete visions for tomorrow, but an indefinite source of contingency and speculation. The ongoing proliferation of predictive, pre-emptive and otherwise anticipatory security practices strategically utilise the future to circulate the kinds of truths, beliefs, claims, that might otherwise be difficult to legitimise. The article synthesises critical security studies with broader humanistic thought on the future, with a focus on the sting operations in recent US counter-terrorism practice. It argues that the future today functions as an ‘epistemic black market’; a zone of tolerated unorthodoxy where boundaries defining proper truth-claims become porous and flexible. Importantly, this epistemic flexibility is often leveraged towards a certain conservatism, where familiar relations of state control are reconfirmed and expanded upon. This conceptualisation of the future has important implications for standards of truth and justice, as well as public imaginations of security practices, in a period of increasingly pre-emptive and anticipatory securitisation.

Presentation @ Digital Existence II: Precarious Media Life

Later this month I will be presenting at Digital Existence II: Precarious Media Life, at the Sigtuna Foundation, Sweden, organised by Amanda Lagerkvist via the DIGMEX Research Network (of which I am a part) and the Nordic Network for the Study of Media and Religion. The abstract for my part of the show:


On the terror of becoming known

Today, we keenly feel the terror of becoming known: of being predicted and determined by data-driven surveillance systems. The webs of significance which sustain us also produce persistent vulnerability to becoming known by things other than ourselves. From the efforts to predict ‘Lone Wolf’ terrorists through comprehensive personal communications surveillance, to pilot programs for calculating insurance premiums by monitoring daily behaviour, the expressed fear is often of misidentification and misunderstanding. Yet the more general root of this anxiety is not of error or falsehood, but a highly entrenched moralisation of knowing. Digital technologies are the newest frontier for the reprisal of old Enlightenment dreams, wherein the subject has a duty to know and technological inventions are an ineluctable force for better knowledge. This nexus demands and requires subjects’ constant vulnerability to producing data and being socially determined by it. In turn, subjects turn to what Foucault called illegalisms[1]: forms of complaint, compromise, obfuscation, and other everyday efforts to mitigate the violence of becoming known. The presentation threads this normative argument with two kinds of grounding material: (1) episodes in becoming-known drawn from original research into American state- and self-surveillance, and (2) select works in moral philosophy and technology criticism.[2]


[1] Foucault, M., 2015. The Punitive Society: Lectures at the Collège de France 1972-1973. B. E. Harcourt, ed., New York: Palgrave Macmillan.

[2] E.g. Jasanoff, S., 2016. The Ethics of Invention: Technology and the Human Future, New York: W.W. Norton & Co; Vallor, S., 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford: Oxford University Press; Winner, L., 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology, Chicago: Chicago University Press.

Criticising Surveillance and Surveillance Critique

New article now available on open access @ Surveillance & Society.

Abstract:

The current debate on surveillance, both academic and public, is constantly tempted towards a ‘negative’ criticism of present surveillance systems. In contrast, a ‘positive’ critique would be one which seeks to present alternative ways of thinking, evaluating, and even undertaking surveillance. Surveillance discourse today propagates a host of normative claims about what is admissible as true, probable, efficient – based upon which it cannot fail to justify its own expansion. A positive critique questions and subverts this epistemological foundation. It argues that surveillance must be held accountable by terms other than those of its own making. The objective is an open debate not only about ‘surveillance or not’, but the possibility of ‘another surveillance’.

To demonstrate the necessity of this shift, I first examine two existing frames of criticism. Privacy and humanism (appeal to human rights, freedoms and decency) are necessary but insufficient tools for positive critique. They implicitly accept surveillance’s bargain of trade-offs: the benefit of security measured against the cost of rights. To demonstrate paths towards positive critique, I analyse risk and security: two load-bearing concepts that hold up existing rationalisations of surveillance. They are the ‘openings’ for reforming those evaluative paradigms and rigged bargains on offer today.

Lecture @ Copenhagen Business School

I was recently at Copenhagen Business School, courtesy of Mikkel Flyverbom, to discuss the book-in-progress. An earlier version of the talk, given at MIT, is online in podcast form here.

An excerpt from my notes:

“So we have two episodes, two ongoing episodes. On one hand, you have the state and its technological system, designed for bulk collection of massive scales, and energised by the moral and political injunction towards ‘national security’ – and all of this leaked through Edward Snowden. On the other hand, you have the popularisation of self-tracking devices, a fresh addition to the growing forms of constant care and management required of the employable and productive subject, Silicon Valley being its epicentre.

These are part of a wider penumbra of practices: algorithmic predictions, powered by Bayesian inference and artificial neural nets, corporate data-mining under the moniker of ‘big’ data… now, by no means are they the same thing, or governed by some central force that perpetuates them. But as they pop up around every street corner, there are certain tendencies that start to characterise ‘data-driven’ as a mode of thinking and decision-making.

The tendency I focus on here is the effort to render things known, predictable, calculable – and how pursuing that hunger entails, in fact, many close encounters with uncertainty and the unknown.

Here surveillance is not reducible to questions of security and privacy. It is a scene for ongoing conflicts over what counts as knowledge, who or what gets the authority to declare what you are, what we consider ‘good enough’ evidence to watch people, to change our diet, to arrest them. What we’re seeing is a renewed effort at valorising a certain project of objective knowledge, of factual certainty, of capturing the viscera of life into bits, of producing the right number that tells us what to do.”


Data Epistemologies – Dissertation online

Data Epistemologies: Surveillance and Uncertainty, my dissertation at the University of Pennsylvania, is now online for public access here. It is, in many ways, a working draft for my current book project.

Abstract:

Data Epistemologies studies the changing ways in which ‘knowledge’ is defined, promised, problematised and legitimated vis-à-vis the advent of digital, ‘big’ data surveillance technologies in early twenty-first century America. As part of the period’s fascination with ‘new’ media and ‘big’ data, such technologies intersect ambitious claims to better knowledge with a problematisation of uncertainty. This entanglement, I argue, results in contextual reconfigurations of what ‘counts’ as knowledge and who (or what) is granted authority to produce it – whether it involves proving that indiscriminate domestic surveillance prevents terrorist attacks, or arguing that machinic sensors can know us better than we can ever know ourselves.

The present work focuses on two empirical cases. The first is the ‘Snowden Affair’ (2013-Present): the public controversy unleashed through the leakage of vast quantities of secret material on the electronic surveillance practices of the U.S. government. The second is the ‘Quantified Self’ (2007-Present), a name which describes both an international community of experimenters and the wider industry built up around the use of data-driven surveillance technology for self-tracking every possible aspect of the individual ‘self’. By triangulating media coverage, connoisseur communities, advertising discourse and leaked material, I examine how surveillance technologies were presented for public debate and speculation.

This dissertation is thus a critical diagnosis of the contemporary faith in ‘raw’ data, sensing machines and algorithmic decision-making, and of their public promotion as the next great leap towards objective knowledge. Surveillance is not only a means of totalitarian control or a technology for objective knowledge, but a collective fantasy that seeks to mobilise public support for new epistemic systems. Surveillance, as part of a broader enthusiasm for ‘data-driven’ societies, extends the old modern project whereby the human subject – its habits, its affects, its actions – become the ingredient, the raw material, the object, the target, for the production of truths and judgments about them by things other than themselves.

Colloquium @ MIT

On September 15, I will be giving a talk at MIT’s CMS/W department, titled ‘Knowledge’s Allure: Surveillance and Uncertainty’. I will be covering some of the material I am remoulding from dissertation to book. I think a podcast version will be uploaded online some time afterwards.

For the physical event, it’s 5pm, location 3-133. More info here.


What If / Fabrications

What happens when the ‘what if’ functions as the key operator for the Reason of counter-terrorism, criminal justice and national security?

***

In 2015, a map used in the US military’s ‘Jade Helm’ training exercises was leaked to the public, provoking commentary, speculation and conspiratorial interpretation. The map, detailing one of Jade Helm’s mock scenarios, showed Texas and Utah as ‘hostile states’, which friendly (‘permissive’) states like California and Colorado might help subdue.

At the same time, a different rumour made the rounds. Whispers in the air: concrete ISIS terrorist bases had been discovered in Texas. Senator Ted Cruz, “fresh from his Jade Helm inquiry” (New Yorker), accused the incumbent government of failing to connect the dots. (As we all know, he would show up in presidential debates a few months later, sweating and spitting with promises of being tough on Islamic terrorists.) A mock military scenario and an unconfirmed rumour had mutually reinforced each other’s status as half-truth, or rather, operationalisable fiction. One poll of registered Republican voters immediately following the leaks pegged 32% as “think[ing] that the Government is trying to take over Texas” (Public Policy Polling).

Since then, Donald Trump as the season’s flavour has everyone wondering: where have all the facts gone, or rather, why do they seemingly have no value anymore? How to run the marketplace of ideas without them? But it’s not a binary switch, just different rules of the game. Since 2001, America’s discourse on terrorism – both inside and outside the relevant agencies – narrated the threat as becoming radically unpredictable and radically distributed, producing a situation where traditional prudence in acquiring ‘certain evidence’ became unrealistic. A double-bind: on one hand, you admit that you can ‘never be sure’ if the kid you arrested would really have killed dozens, or where the ‘next attack’ will strike. You can’t wait for certainty. On the other hand, the political and moral pressure to predict and prevent becomes overwhelming. Hence, long before Donald Trump, traditionally ‘respectable’ politicians like Tony Blair, confronted with the Chilcot Report, argued that ‘doing something was better than doing nothing’. Beneath the political limelight, security and counter-terrorist practices from FBI sting operations to biometric surveillance at airports have embraced the use of scenarios, simulations, and ‘what if’ logics to try and plug the gap between knowledge and danger (e.g. the work of Peter Adey, Claudia Aradau, Rens van Munster).

Let’s look more closely at one such practice which combines the fear of uncertainty with the fantasy of prediction – one where the mad beast is incited and produced, precisely so that it can be felled in public and a sense of security restored.

***

1. In 2011, an FBI sting operation began to form around Sami Osmakac, a Kosovo-born American and Muslim. A trusted Muslim friend had introduced him to a man named Dabus, an FBI informant who in turn connected Osmakac to an undercover agent named ‘Amir Jones’. To that point, Osmakac’s record of suspicious activity included a tendency to verbally criticise democracy, argue for his religion in combative and fundamentalist terms – and one streetside fisticuffs with a Christian street preacher that had recently gotten him arrested. In other words, little in the way of convictable behaviour. After meeting Dabus and Jones, however, Osmakac was supplied with money, with which he could purchase (fake, prepared) weapons and explosives; he was trained in their use; and he was even given money for a taxi so he could show up to his own attack spot, where he was finally arrested by the FBI. During the process, the FBI agents spoke of Osmakac as a ‘retarded fool’ who needed the FBI’s support to turn his ‘pipe-dream scenario’ into any semblance of a real threat – a result which they referred to as a ‘Hollywood ending’ (The Intercept). The FBI provided material and psychological encouragement that allowed Osmakac to become ‘dangerous enough’ to be legally and operationally eligible for arrest. Of course, this also means that it becomes impossible to ever confirm whether Osmakac would have acted without such encouragement; the price of a pre-emptive certainty is the absolute unconfirmability of justice.

2. Basaaly Saeed Moalin, a Somali-American, was arrested in 2013. Subsequently, as the Snowden leaks swept the nation, advocates of US government surveillance referred to Moalin as the one case that could be publicly cited as evidence that surveillance worked. In other words, Moalin was supposed to be the proof that pervasive domestic dragnets aren’t just stabs in the dark, but reasonable procedures at reducing uncertainty.

The problem is that Moalin too was coaxed and aided towards this status. Moalin was arrested on charges of conspiracy and material support for terrorism – specifically, sending $8,500 to a Somali contact associated with the jihadist group al-Shabaab – and the prosecution argued that his frequent phone calls and money transfers supported terrorism. In court, the defence directly contested this interpretation of available evidence – and in doing so, publicly exposed the fabrications as a set of uncertain and primordial indices oriented towards certainty. Picking apart Moalin’s phone calls collected by telecommunications surveillance, the defence argued that his comments about ‘jihad’ referred to a local jihad in his native Somalia against the Ethiopians; that his money transfers to his homeland had gone to projects for schools and orphanages; and, indeed, that no record showed any definitive statement in support of terrorist attack (Washington Post).

The defence went as far as to submit to the court alternative transcriptions of Moalin’s Somali calls, and even enlisted cultural interpretations. Moalin’s cousins argued that his talk was a well-recognised form of fadhi ku dirir (literally ‘sitting and fighting’), a bullish and aggressive but ultimately noncontroversial form of argumentation common amongst Somali men. Since Moalin was apprehended before he could supply further certainty in the form of a violent attack or concrete statements referring to one, surveillance and arrest had to be justified through subjunctive and paranoid readings of relatively cryptic comments like the following:

BASAALY [Moalin]: We are not less worthy than the guys fighting.
ISSA: Yes, that’s it. It’s said that it takes an equal effort to make a knife; whether one makes the handle part, hammers the iron, or bakes it in the fire (New Yorker).

These practices go beyond traditional sting operations, bordering close to that toxic word ‘entrapment’. We might call them fabrications: the deliberate, planned, and increasingly systematic practice of producing what sufficiently ‘counts’ as evidence in counter-terrorism operations. Osmakac is joined by an increasingly sizable contingent – mostly young, Muslim, with a history of mental struggles, and typically with few or no convicted/convictable offences prior to their snaring. One report puts their number at around 30% of counter-terrorism convictions between 2002 and 2011 (Human Rights Watch).

Fabrication is the practical expression of the ‘what if’ as the operator of counter-terrorist rationality. While some degree of fabrication is by definition a necessary part of any preemptive measure, we are seeing a visible embrace of more speculative forms of knowledge that could license more actively interventionist efforts – largely because it is thought that the threats of post-9/11 terrorism do not permit the luxury of greater proof and certainty. If these suspects were being directed and shaped on the basis of potential rather than actual danger, operatives and politicians argued, so be it: such pre-emption is the only way to ever ‘know enough’ in time to stop the next attack. We get a direct glimpse of this in the recent documentary Homegrown (HBO, Greg Barker).

3. In 2005, Ehsanul ‘Shifa’ Sadequee, 19 years old, was arrested and sentenced to 17 years in prison for suspicious activity that largely comprised translating jihad-related texts, talking big online, and producing a ludicrously amateur ‘casing video’ in Washington D.C. In Sadequee’s case there was no active fabrication, at least none that has been disclosed publicly; but it was another instance in which highly primordial activities and discussions, which might at most be said to ‘encourage’ terrorism, were mobilised to eliminate the target from social existence. In a rare moment, Sadequee’s family journeyed to meet Philip Mudd – the man who, as deputy director of the National Counter-Terrorism Centre at the time, had a direct hand in the case. Mudd, while courteous and sympathetic to Sadequee’s family, insisted on the necessity of such an action:

People like me are in a difficult position. We cannot afford to let dozens of innocent people die because a youth makes a mistake […] If we switched roles, what would you do? What would you do? Would you let him go?

***

The ‘zero-tolerance’ policy renders uncertainty intolerable. It is far less acceptable to respect the rights of suspects, because one cannot write off any attack as an ‘acceptable’ or unavoidable loss. And yet, in so many cases, especially those of lone wolves and ‘home-grown’ terrorists, the possibility of crime remains uncertain until it is too late to intervene. Fabrication fills this gap, ensuring that uncertainties are coaxed into the realm of the sufficiently known. Thus zero risk, worst case scenario, and the changing status and nature of ‘proof’ are all arranged to follow rationally from each other.

The obvious navigator here is cost-benefit analysis: given the risks of contemporary terrorism (whose brown-skinned, foreign-religioned stereotype gives it a far more threatening figuration than, say, the decades-long history of white supremacist killers), is the unjust imprisonment of vulnerable individuals ‘worth’ the security of the nation? The problem is that none of these variables can be properly formalised for comparison: not the risk of ‘the next terrorist attack’, not the probability of unjust imprisonments versus just ones (for who can tell now what Sadequee or Osmakac may or may not have done, had they been left alone?), not the net increase to national security. And so, the rise of the what if and strategies of fabrication, just as surely as hand-wringing about ‘post-truth politics’ and the farcical furore over Jade Helm, consist of a process where actors are rewriting the rules of the game – often on the fly – and using absolutist spectres like the ‘next 9/11’ to override established relations of probability and evidence. When you have no idea where the next bomb is going to go off, when you’re pretty sure both blues and reds in the quadrennial spectacle are lying through their teeth, ‘predictivity’ and certainty reassert themselves by the simple equation: “What if [the worst]? So, to stop it happening, [everything except the worst] is fair game.”

[Talk] U. Milano-Bicocca

I will be at the University of Milano-Bicocca next week to give a talk on surveillance, self-tracking and the data-driven life. It will overlap significantly with my presentation last week at the Affect Theory conference; I’ll be posting the full text and slides afterwards. Abstract below.

The Data-Driven Life: The Parameters of Knowing in the Online Surveillance Society

‘Information overload’ is an old cliché, but when it was still fresh, it conveyed a broad and fundamental liquidity in the parameters of our experience. What it meant – and felt like – to remember, to know, was changing. Surveillance today is not simply a question of privacy or governmental power, but a practical extension of such liquidity. Surveillance’s fevered dream of total prediction hinges on its ability to subtend human sensibility – with its forgetfulness, bias, and other problems – to reach ‘raw’ and comprehensive data. This data-hunger, shared by states, corporations and individuals alike, betrays a ‘honeymoon objectivity’. The rise of new technologies for knowledge production is being misconstrued as a discovery of pure and unmediated information. The result is a profound shift in what qualifies as knowledge; who, or what, does the knowing; what decisions and actions are legitimated through that knowledge. Surveillance practices and controversies today host a reparametrisation of what ‘knowing’ entails.

In this talk, I will address two specific cases: the state surveillance of the Snowden Affair, and the self-surveillance of the Quantified Self (QS) movement. I draw on interviews, ethnographic observation and archival research that is part of a larger, ongoing project.

  1. I know we are being watched, Snowden told us so – but I don’t see it, and I don’t feel it. A vast surveillance program withdraws into the recesses of technological systems, denying our capacity to know and experience it. Conventional forms of proof or risk probabilities elude both arguments for and against it. This situation provokes two major patterns of ‘knowing’. First, subjunctivity leverages the recessive unknowability surrounding surveillance as if it were in some way true and certain, producing hypothetical, provisionary bases for real, enduring actions and beliefs. Statistical measures of danger become mathematically negligible, yet affectively overwhelming. Second, interpassivity projects others (human and nonhuman) who believe and experience what we cannot ourselves in our stead. Even if the world of surveillance and terror is not real in my back yard, these interpellated others help make it ‘real enough’. Technology’s recession thus provokes an existential dilemma; how do I ‘know’? What is it supposed to feel like to ‘know’?
  2. We cannot stand guard over our judgments without machines to keep us steady. If our knowledge of our own bodies, habits and affects was previously left to unreliable memories and gut feeling, QS promises a data-driven existence where you truly “come into contact with yourself” – through persistent, often wearable, self-surveillance. The Delphic maxim know thyself is applied to a very different existential condition, where my lived relationship with technology becomes the authoritative site for an abstracted relationship with my own body. Yet QSers also acknowledge that data is always incomplete, raising new uncertainties and requiring the intervention of subjective judgment. Here, it is technology’s protrusion which forces the question: how will you ‘know’ yourself through a digitality that subtends your memory and intention?

[ARTICLES] “Subjunctive and Interpassive” and “Presence”

Presence, or the sense of being-there and being-with in the new media society

Open access @First Monday.

This essay argues that the ways in which we come to feel connectivity and intimacy are often inconsistent with and irreducible to traditional markers like physical proximity, the human face or the synchronicity of message transmission. It identifies this non-objective and affective property as presence: conventionalised ways of intuiting sociability and publicness. The new media society is a specific situation where such habits of being affected are socially and historically parametrised. The essay provides two case studies. First: how do we derive a diffuse, indirect, intuitive sense of communicative participation — and yet also manage to convince ourselves of anonymity online? The second describes surveillance and data-mining as a kind of alienation: I am told my personal data is being exploited, but I do not quite ‘feel’ it. Surveillance practices increasingly withdraw from everyday experience, yet this withdrawal actually contributes to its strong presence.

Subjunctive and Interpassive ‘Knowing’ in the Surveillance Society

Open access @Media and Communication.

The Snowden affair marked not a switch from ignorance to informed enlightenment, but a problematisation of knowing as a condition. What does it mean to know of a surveillance apparatus that recedes from your sensory experience at every turn? How do we mobilise that knowledge for opinion and action when its benefits and harms are only articulable in terms of future-forwarded “as if”s? If the extent, legality and efficacy of surveillance is allegedly proven in secrecy, what kind of knowledge can we be said to “possess”? This essay characterises such knowing as “world-building”. We cobble together facts, claims, hypotheticals into a set of often speculative and deferred foundations for thought, opinion, feeling, action. Surveillance technology’s recession from everyday life accentuates this process. Based on close analysis of the public mediated discourse on the Snowden affair, I offer two common patterns of such world-building or knowing. They are (1) subjunctivity, the conceit of “I cannot know, but I must act as if it is true”; and (2) interpassivity, which says “I don’t believe it/I am not affected, but someone else is (in my stead)”.