Some resources on COVID surveillance

Below is a loose collection of COVID surveillance developments around the world. We see tales of unproven, hastily duct-taped contact tracing apps that run headlong into predictable train wrecks in actual use cases; thermal cameras that don’t work; fantasies of drone and robot surveillance; and almost comically harmful renditions of workplace surveillance & exam proctoring.

It is a partial, eclectic collection of whatever I spotted between early March & early May (updates potentially to come), but some folks have found it useful so I am putting it up here. Disclaimer that the notes are often going to be messy & full of my initial, personal views. Any questions / errors / concerns let me know at sun_ha [at] sfu.ca!

May 27: General updates + new details on Middle East courtesy of Laya Behbahani

May 28: A couple snippets; Ctrl+F the latest date (e.g. “May 28”) to find new entries.

June 11: A few additional updates on contact tracing apps across UK, France & Australia.

June 16: A few more entries on how contact tracing apps are faring after deployment.

June 24: More updates on the lifecycle of contact tracing apps; scholars’ critique of surveillance / civil rights implications; and Brown’s controversial plans to surveil its students and employees in fall.

 

 

Articles from technology researchers on COVID surveillance

An April 8 ACLU white paper by Jay Stanley & Jennifer Stisa Granick warns that “simplistic understandings of how technology works will lead to investments that do little good, or are actually counterproductive”. They assess some popular proposed uses systematically and point out key questions.

Susan Landau writes for Lawfare that the threshold question is efficacy, and on that point the proposed measures are often either failing outright or failing to provide adequate answers. “If a privacy- and civil liberties-infringing program isn’t efficacious, then there is no reason to consider it further.” She notes that cellphone GPS-based tracking systems are unlikely to be worthwhile: they are not accurate enough, often fail indoors, and are thus unsuited to the 2m proximity problem.

In a US Senate Committee hearing on big data & COVID, privacy/law scholar Ryan Calo advises ‘humility and caution’. Calo also warns against mission creep: “there will be measures that are appropriate in this context, but not beyond it” – and that, “to paraphrase the late Justice Robert Jackson, a problem with emergency powers is that they tend to kindle emergencies.”

Calo later joins Ashkan Soltani & Carl Bergstrom for Brookings with the same message: CT apps such as Apple/Google’s involve false positives & false negatives with corresponding social/psychological harms; access & uptake difficulties; privacy & security issues, such as correlating BT tags to people via stationary cameras; and the normalisation problem, in which “these voluntary surveillance technologies will effectively become compulsory for any public and social engagement.”

U of Ottawa researcher Teresa Scassa counsels caution for Canada on data surveillance, arguing that individual tracking is ‘inherently privacy-invasive’.

Scholars, including data ethics researcher Ben Green, write directly disputing the privacy-health tradeoff: “rather than privacy being an inhibitor of public health (or vice versa), our eroded privacy stems from the same exploitative logics that undergird our inadequate capacity to fund and provide public health.”

Bietti & Cobbe of Harvard’s Berkman Klein Centre caution against “the normalisation of surveillance to a point of no return” – characterised not as the greedy advance of a powerful state, but rather as an eager ‘devolution of public power’ by ‘hollowed out governments’ to Big Tech.

A Brennan Center piece directly invokes post-9/11 surveillance programs as an example of pernicious and ineffective systems outlasting the crisis.

Andrejevic & Selwyn critique the fantasy of technological solutionism: could we use AI to invent the vaccine, blockchain to constrain the spread, could all of us simply WFH forever? Of course, the real impact comes through often faulty, wonky apps for data extraction, and the repurposing of surveillance techniques for unprecedented population-level control.

Naomi Klein interprets it as a ‘pandemic shock doctrine’: what we’re getting is disaster capitalism’s free-market ‘remedies’ exploiting the crisis of inequality. The response is to fund, save, promote, and justify Big Tech, which will save us through pointless and wonky surveillance tools; airline companies, which will save us by, uh, not going bankrupt; and so on. Klein thus warns that we will do here what we did for the banks in 2008.

In early May, Naomi Klein writes for The Intercept of the ‘screen new deal’ as the pandemic shock doctrine: Cuomo’s handshake with Eric Schmidt & the Gates Foundation to make New York a technologist’s experimental ground zero. And of course it is one where every bad idea they’d been pushing gets accelerated past proper scrutiny.

Germany-based AlgorithmWatch cautions that COVID is fundamentally ‘not a technological problem’, and that the ‘rush to digital surveillance’ risks new forms of discrimination and other abuses.

The Ada Lovelace Institute (independent, AI focus) reviews recently mooted proposals and finds ‘an absence of evidence’ regarding efficacy and harm mitigation.

CIGI notes – regarding the US’s sweeping travel bans, but also policies like “treating Chinese state journalists as official representatives of the Chinese state and intelligence apparatus” (to which China retaliated with expulsions) – that there is a broader concern about abuse of authority & human rights: “Epidemic response is a rare circumstance in which governments make sweeping decisions about the collective good and take unchecked action, often in contravention of individual human rights.”

UT Austin researchers Katie Joseff & Sam Woolley warn that COVID-related location tracking is a golden opportunity for what their team calls ‘geopropaganda’: “the use of location data by campaigns, super PACs, lobbyists, and other political groups to influence political discussions and decisions.” Hong Kong, of course, is proving to be ground zero.

June 24: Joven Narwal at the University of British Columbia argues that police must not have access to CT app data, citing past instances in which police have tried to draw on existing public health data.

 

 

Contact Tracing (CT) Apps & Other Software/Databases

MIT Tech Review now has a great database of government CT apps around the world.

Even the United Nations has come out with not a CT app but a ‘social distancing’ app that, well, doesn’t work: the 1point5 app is supposed to alert the user when other BT devices come within range, but it often fails to do so.
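
For a sense of why these bluetooth proximity alerts misfire so often: such apps typically infer distance from received signal strength (RSSI), which swings wildly with bodies, walls, pockets & antenna orientation. A minimal sketch of the standard log-distance estimate – all values here are hypothetical, and nothing is taken from the 1point5 app itself:

# Naive Bluetooth distance estimation from signal strength (illustrative only).
def estimate_distance_m(rssi_dbm: float,
                        tx_power_dbm: float = -59.0,   # assumed RSSI at 1 m
                        path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss model: d = 10 ** ((txPower - RSSI) / (10 * n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# The same two phones at a fixed distance can report RSSIs tens of dBm apart:
for rssi_dbm in (-55, -65, -75):
    print(f"RSSI {rssi_dbm} dBm -> ~{estimate_distance_m(rssi_dbm):.1f} m")
# A 20 dBm swing moves the estimate from ~0.6 m to ~6.3 m, so any fixed
# "1.5 m" alert threshold will both miss real encounters and fire falsely.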

 

Google-Apple Contact Tracing System

A key player has been the international research collaboration behind the open-source DP-3T (decentralised privacy-preserving proximity tracing) protocol, based on bluetooth beacons of randomised numbers that track solely proximity and can retroactively be used to trace a positive case’s contacts.

DP-3T is the clear inspiration for the later Google-Apple proposal. Apr 10 – Google & Apple announce a collaborative effort on a bluetooth-based contact tracing system. This means, first, APIs in May to facilitate third-party apps’ interoperability across the two OSes, and ultimately an OS-level, no-download functionality. They promise privacy and transparency, with the system always being user opt-in.

The basic functionality of the bluetooth system is essentially identical to DP-3T. Bluetooth beacons exchange codes, and upon a positive case’s upload of their last 14 days of keys, contacts receive a phone notification with information. Google promises an ‘explicit user consent’ requirement, that it ‘doesn’t collect’ PII or location data, that the list of contacts ‘never leaves your phone’, that positives aren’t identified, and that the system as a whole will only be used for COVID contact tracing. But of course, states and employers could then require that users install their particular third-party app enabled by the API.
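
To make the decentralised flow concrete, here is a deliberately stripped-down sketch of the scheme as described: rolling random tokens broadcast over bluetooth, a contact log that never leaves the phone, and matching done locally against the published keys of positives. The hash-chain derivation and every name here are illustrative stand-ins, not the actual GAEN/DP-3T cryptography:

import hashlib
import os

def derive_tokens(daily_key: bytes, n: int = 144) -> list[bytes]:
    """Stand-in derivation: n rolling tokens per day from one daily key
    (real GAEN uses HKDF + AES; ~144 = one token per 10-minute window)."""
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]

class Phone:
    def __init__(self):
        self.daily_keys = []   # stays on the device unless the user tests positive
        self.heard = set()     # tokens overheard via bluetooth, stored locally

    def start_day(self):
        self.daily_keys.append(os.urandom(16))
        self.today_tokens = derive_tokens(self.daily_keys[-1])

    def hear(self, token: bytes):
        self.heard.add(token)

    def keys_if_positive(self) -> list[bytes]:
        return self.daily_keys[-14:]   # only the last 14 days of keys are uploaded

    def check_exposure(self, published_keys: list[bytes]) -> bool:
        """Matching happens on the phone: re-derive tokens from published keys."""
        return any(self.heard.intersection(derive_tokens(key))
                   for key in published_keys)

# Two phones meet; later one tests positive and uploads its keys.
a, b = Phone(), Phone()
a.start_day(); b.start_day()
b.hear(a.today_tokens[0])                        # bluetooth handshake
print(b.check_exposure(a.keys_if_positive()))    # True: b is notified locally

The privacy-relevant property is visible in the sketch: the server only ever learns the daily keys of self-reported positives, while the contact log and the matching stay on the device – which is also exactly what enables the trolling and replay attacks discussed below.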

 

Immediately, Jason Bay, product lead for Singapore’s TraceTogether, put out a clear warning: Bluetooth tracing cannot replace manual tracing, only supplement, and any optimism otherwise is “an exercise in hubris”.

Bay notes that TraceTogether worked insofar as it was built with the public health authorities from day 1, including shadowing manual tracers at work. Bay argues that humans-in-the-loop & human fronting are crucial to fight type 1/type 2 errors and to bring judgment to how each contact should be traced. He cites the example of the Washington choir super-spreader event, where 2m proximity tracing would have flagged nothing (transmission occurred well beyond 2m).

 

Lawmakers in the US are beginning to join the conversation around the risks: “without a national privacy law, this is a black hole.” (Anna G. Eshoo, D-Menlo Park)

On 21 April, the French government requested modifications so that its app can work in the background on iPhones (currently, the bluetooth functions it requires cannot run in the background), but Apple is not expected to agree.

For similar reasons, the UK’s NHS rejects the Apple-Google model in favour of an option where the matching would occur on a centralised server, citing analytics benefits, & wants the app to work in the background.

 

There are generally applicable concerns about the actual efficacy of any digital contact tracing app: adoption barriers, type 1 & 2 error rates, public behaviour change (e.g. false confidence/fear).

Julia Angwin’s review points out the positives: the Google-Apple solution is opt-in & the keys are anonymous from the beginning, the system is mostly decentralised, and Google/Apple themselves collect no PII. But the very lack of PII renders the system vulnerable to trolling (send the whole school home, or a Russian DDOS). The keys themselves are very public, so you could, say, capture them & rebroadcast them elsewhere to spoof presence. All these false alerts would undermine the system. There also remains some risk of re-identification & location info exposure, and there is likely to be some limited degree of location info centralisation (e.g. to limit key-sharing by region). Angwin also notes that “other apps may try to grab the data”, e.g. advertisers – and this gets into what I noted earlier: the ecosystem problem, i.e. what third-party apps, government requirements, or healthcare databases bring into the original Google/Apple specs.

 

China & Asia

China: NYT reports on the Chinese case of Alipay Health Code, which residents are required to install, which notifies them of their quarantine status, and which they then show to officials to gain free movement across checkpoints. Once installed & agreed to, a “reportInfoAndLocationToPolice” function communicates with a server.

 

South Korea: Widely known for its aggressive contact tracing, South Korea’s system is primarily dependent on traditional CT rather than app use. We know that each and every confirmed case is followed up through interviews, where in most cases individuals voluntarily surrender certain data for this purpose. Credit card use in particular allows a detailed mapping of their movements. This info is then typically revealed to the public – right down to “X convenience store” or “Y clothing store” – which has also prompted predictable online gossiping.

Mar 7 sees a new government app for ~32,400 self-quarantine subjects. Using GPS, it raises an alert if users exit the quarantine area (or if they shut off GPS) – and police claim the right to force noncompliant persons to return. But app installation was not made mandatory, and a week later one region saw only 21% of quarantiners install it. And of course, leaving the smartphone at home circumvents the system entirely.
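
The underlying check is presumably a simple geofence; a minimal sketch of that logic (the coordinates, radius & alert behaviour are assumptions for illustration, not details of the Korean app):

# Sketch of a geofence check of the kind such a quarantine app presumably runs.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

HOME = (37.5665, 126.9780)   # registered quarantine location (example: Seoul)
RADIUS_M = 100               # assumed permitted radius

def check(fix) -> str:
    if fix is None:          # GPS switched off also triggers an alert
        return "ALERT: no GPS fix"
    d = haversine_m(*HOME, *fix)
    return f"ALERT: {d:.0f} m from home" if d > RADIUS_M else "ok"

print(check((37.5700, 126.9850)))   # ~730 m away -> alert sent to case worker
print(check(None))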

A key feature of SK’s contact tracing is that each quarantining individual is assigned a government case worker for follow-up; not only regular calls, but also, for instance, location data from the quarantine app showing that the individual has left home immediately alerts the case worker. Some info here (Korean).

 

Singapore: Like South Korea, Singapore has publicly reported personal details for each infected person. Its TraceTogether app is built on ‘BlueTrace’, a protocol developed by a government team that logs bluetooth encounters rather than locations: “The collection and logging of encounter/proximity data between devices that implement BlueTrace is done in a peer-to-peer, decentralised fashion, to preserve privacy.”

On May 1, a tech editor at a major Singapore newspaper calls for making the app mandatory by law, lamenting that only 1.1m installs is far short of ~3.2m minimum (3/4 of pop).

 

India: One Indian state is branding people’s hands with quarantine stamps. The Bangalore Mirror reports that Karnataka has begun to require ‘hourly selfies’ sent to the government between 7am-10pm using an app, on pain of ‘mass quarantine’. The app, “Quarantine Watch”, appears to demand personal information and symptoms, and to have a photo submission system (which many users report simply does not work).

 

United States

Mar 13: Trump makes up on the fly a nonexistent ‘national scale coronavirus site’ from Google during a news conference. What was actually on the way was a triage tool made by Alphabet subsidiary Verily, available that weekend in a pilot for Santa Clara & San Mateo counties. “Project Baseline” was more or less a screening survey built on Google Forms.

Verily’s terms of service show it is not HIPAA-covered & that your data may be shared with partners “including but not limited to” Google and Salesforce. In contrast, similar functionality has already been available from, say, Portland-based Bright.md without the invasive data collection.

Apple has since created a similar COVID screening tool.

Palantir is ‘working with’ the CDC to provide modeling services. By Apr 21, we know that Palantir’s been granted the federal contract for a major aspect of ‘HHS Protect Now’, drawing on their usual expertise in data analytics tools.

US states are actively working on CT apps; Utah has contracted an app developer (Twenty) to create ‘Healthy Together’. It promises bluetooth- and location-based matching as well as a basic symptom screener & the ability to receive test results. There is a clear push to pump money in and give out lucrative contracts to tech platforms to enable CT; a letter from ‘health experts’ asks Congress for $46.5b towards CT, including 180,000 workers to conduct interviews.

The degree to which CT apps have been successful, in terms of adoption or of actually achieving contact traces, is unclear. Utah’s Healthy Together app after a month has not led to any actual CT work being done, officials say; the highest known rate of participation (i.e. installs) remains South Dakota’s (Care19, 2%, a single positive case used).

North Dakota also uses Care19, which collects location data, then sends a random ID # & the phone’s advertising ID # to Foursquare, and to Bugfender (a software bug manager), which then sends data on to Google.

June 16: after early enthusiasm and rollout in some states during April and May, there is now some evidence that state governments are not so impressed. States like California have declined to actively pursue CT solutions. The American public do not appear to be very enthusiastic, either, with one survey reporting that 71% would rather not use such an app.

 

Canada

Alberta Health Services deployed a screening tool early on.

Alberta also deploys a CT app, ABTraceTogether, built on a version of Singapore’s TraceTogether. Data is collected via bluetooth temporary IDs with 21 days of local storage; Alberta Health uses anonymised data for analytics – no location data – and promises encrypted IDs that reveal identity only upon decryption by health services.
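
This is the centralised BlueTrace-style design: the health authority issues encrypted temporary IDs, phones broadcast & log them, and only the authority’s key can tie a logged ID back to a registered user. A sketch of the idea, with the cryptography library’s Fernet cipher standing in for the authority’s actual scheme (field layout & lifetimes here are assumptions):

# pip install cryptography
import json
from cryptography.fernet import Fernet

authority_key = Fernet.generate_key()   # held only by the health authority
cipher = Fernet(authority_key)

def issue_temp_id(user_ref: str, valid_from: int, valid_to: int) -> bytes:
    """Server-side: a broadcastable ID that is opaque without the key."""
    payload = json.dumps({"u": user_ref, "from": valid_from, "to": valid_to})
    return cipher.encrypt(payload.encode())

def resolve_temp_id(temp_id: bytes) -> dict:
    """Health-authority-side only: decrypt a logged ID to reach the contact."""
    return json.loads(cipher.decrypt(temp_id))

tid = issue_temp_id("user-0042", 1_600_000_000, 1_600_000_900)
# Phones exchange `tid` over bluetooth and keep it locally (21 days in Alberta);
# another phone, or an eavesdropper, sees only ciphertext:
print(tid[:20], "...")
# Upon a positive case's upload, the authority decrypts the logged IDs:
print(resolve_temp_id(tid))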

 

UK & Europe

The NHS has a symptom ‘survey’ that is more about helping the NHS collect data and plan. It appears to require DOB, postcode, and household composition, and to have that data retained for 8 years (?!) and shared across government departments, ‘other organisations’ and ‘research bodies’.

On the 26th, the Economist reports that Palantir is also ‘teaming up’ with the NHS, though its exact contribution is unclear. (One assumes it’s Palantir’s bread and butter of database consolidation, visualisation & search.) On Mar 28, the NHS reveals in a blog post that, as expected, it’s the existing Foundry system for the front-end, & lists other partners like MS & Google.

The UK deploys an NHS CT app in early May, developed by NHSX (the NHS’s digital arm) with VMware Inc (Palo Alto) & Zühlke (Swiss). On 4 May, the UK announced an NHS CT app pilot test on the Isle of Wight using, indeed, a more ‘centralised’ approach, with plans to record location data. BT needs to be on at all times, but it seems the app can work in the background, with the handshakes sent to a UK-based server for the matching process.

Buzzfeed reports complaints of device incompatibility, battery drain, and poor UX causing confusion – as well as clear technical failures in picking up BT signals or maintaining contact with hundreds of devices. Not only that, the app appears non-GDPR-compliant; e.g. links in the app to the privacy policy ironically use Google Analytics in a way that would allow ad targeting.

June 11: UK is having second thoughts, with a second trial of the NHS CT app postponed and a return to the Google/Apple model being considered. A preprint study presents survey data indicating 60.3% willingness to install CT apps in the UK.

June 24: BBC provides a broad chronological overview of the UK CT app and its discontents. It notes a chronic pattern of missed deadlines, second thoughts around design features & privacy concerns, and periods of radio silence from the UK government.

 

May 28: France is set to roll out their CT app “StopCovid”. The bluetooth matching is intended to supplement traditional CT work. Like the UK solution, it differs from the Google-Apple design by centralising the matching process. This makes it theoretically far easier to re-identify persons on the basis of anonymised information.

June 11: France’s StopCovid reports uptake of around 2% of the population. The influential Oxford study suggests that CT apps can help reduce infection levels at any level of uptake, but ~60% has generally been mooted as the uptake required in order to open up while keeping infection levels at lockdown rates.
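
A quick back-of-envelope shows why low uptake is so crippling: both parties to an encounter must be running the app for the contact to register at all, so under the simplifying assumption of uniform, independent adoption, the share of contact events the app can even see scales with the square of uptake:

# Share of contact events visible to the app ≈ uptake squared, assuming
# uniform and independent adoption across the population (a simplification).
for uptake in (0.02, 0.20, 0.60):
    print(f"uptake {uptake:.0%} -> ~{uptake ** 2:.2%} of contacts covered")
# France's ~2% uptake covers ~0.04% of contacts; even the oft-cited 60%
# target would only cover ~36% of contacts.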

 

A Polish app is described as persistently accessing your location and asking for regular selfie updates, and of course it was a rushed job ‘prone to failure’. Many downloaded it voluntarily.

 

June 16: Norway’s Smittestopp has been suspended and ordered to delete existing data. As in Bahrain, Kuwait and elsewhere, the key concern is that collecting GPS location data as well as Bluetooth signals yields a far more intrusive level of knowledge about individual movements.

 

Australia & New Zealand

In late April we get the Australian COVIDSafe app, deployed with 1m installations. Not open source; no location data, only bluetooth. The foregrounding question is handled poorly: the app tells you to “keep the app running”, but a government official’s answer on whether you can minimise it was “It’s unclear at this stage”. People trying to figure it out aren’t sure either (e.g. does it go into Apple’s bluetooth overflow area? What about low power mode?).

By June 11, Australia reports ~6.2m users (in a country of ~25m). The government had not been regularly releasing clear, up-to-date download figures; an independent initiative collected existing info & estimates until May 24, by which point numbers had settled around 6m.

Wellington, NZ introduces the Rippl app, coinciding with the gradual drop in alert levels. Rippl is voluntary check-in and claims zero location tracking whatsoever; people simply scan QR codes posted in shops to check in, with paper sign-in available. A Rippl worker confirms zero central collection, meaning follow-up on alerts requires voluntary contact. (Some of the side effects are obvious: an Auckland woman was hit on by a Subway worker who used the contact info she had input for CT.)
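
The check-in model is simple enough to sketch in a few lines – the venue’s QR poster encodes only a venue ID, and the visit log stays on the device. (The payload format & matching flow below are assumptions for illustration, not Rippl’s actual implementation.)

import time

class CheckinApp:
    def __init__(self):
        self.visits = []                    # stored locally on the device only

    def scan(self, qr_payload: str):
        venue_id = qr_payload.removeprefix("rippl:")
        self.visits.append((venue_id, int(time.time())))

    def visits_overlapping(self, venue_id: str, start: int, end: int):
        """User-side check against a published exposure notice; acting on a
        match (contacting health authorities) remains entirely voluntary."""
        return [v for v in self.visits
                if v[0] == venue_id and start <= v[1] <= end]

app = CheckinApp()
app.scan("rippl:wellington-cafe-17")        # user scans the poster at the door
print(app.visits_overlapping("wellington-cafe-17", 0, 2**33))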

 

Middle East

Israel: On Mar 16 Netanyahu announces that he has “authorized the country’s internal security agency to tap into a vast and previously undisclosed trove of cellphone data to retrace the movements of people who have contracted the coronavirus and identify others who should be quarantined because their paths crossed.” Here again, geolocation data systems built for commercial uses and legislated for antiterrorism purposes become reappropriated for another end. Where Israel has already extensively developed and deployed surveillance techniques against Palestinians – including drones, FR, algorithmic profiling – this sets a precedent for use on Israel’s non-Palestinian citizens. Similar government use of telecom data to track movement has been announced by South Africa.

Relatedly, we know that an Israeli firm, Cellebrite, which made its name selling phone-cracking software to police, now supplies similar tech for extracting contacts & location data.

 

Qatar: April saw the release of the EHTERAZ CT app, including functions for ‘following up on’ quarantiners; alerting users of contact with positive cases; and an Alipay (China)-style colour-coded scheme for positive/negative status.

Later, an Amnesty investigation shows that Qatar’s EHTERAZ app had a key security flaw allowing access to users’ personal information, including location data.

 

Saudi Arabia: The ‘Tawakkalna’ app arrives in May, designed to manage movement restrictions rather than directly surveil for COVID. The app is reported to work by allocating “four hours per week” that people can use for essential trips.

 

June 16: Bahrain and Kuwait’s CT apps have been flagged by Amnesty International as excessively intrusive. They feed location data to a central server, meaning that highly accurate mapping of individual movements can very easily be performed.

 

 

‘Immunity Passports’

Apr 10 – Fauci notes that some kind of immunity IDs / cards for Americans is on the cards; the UK & Italy are considering the same. UK startup Onfido has discussed an ‘immunity passport’ with the US government, claiming this personally identifiable system can be scaled up rapidly. The initial effort came from German researchers at the Helmholtz Centre for Infection Research, involving mailing out antibody tests en masse and then providing some form of immunity card. NZ is considering its own bluetooth ‘CovidCard’, no doubt inspired by the Singapore & US moves.

On May 4, we spot a live DOD contract, $25m, calling for a “Wearable Diagnostic for Detection of COVID-19 Infection” that would be demonstrated within 9 months and then be deployable at mass scale.

 

 

Drones & Robots

US: On Apr 3, a drone was spotted in Manhattan urging social distancing, but CBS describes it as run by a ‘volunteer force’. Despite concerns from the Gothamist, there remains no clearly proven case of the NYPD using drones for social distancing. Regular patrolling by cars is taking place, with $250-500 fines given out.

Companies are also building and marketing their own drone solutions – e.g. a Digital Trends report on ‘Draganfly’ touting drones w/ thermal scanners that can detect temperature & visually identify coughing. Perhaps the PR was effective. On April 23, we hear that Westport, Connecticut is using its drones to detect high temp & close proximity and issue verbal warnings.

The robotics company Promobot advertises a new screening robot that takes temperatures – after its earlier Feb publicity stunt where a robot asked New Yorkers about their symptoms, and was kicked out for lacking permits. It’s your standard invasive FR for age, gender, plus a (likely inaccurate) temp check tool.

 

Madrid is also using drones to monitor movement, using the speaker to ‘order people home’. Lombardy authorities are reported as ‘using cellphone location data’ to assess lockdown compliance.

The UAE has contracted ‘unmanned ground vehicles’ to be deployed at Abu Dhabi Airports for disinfection work. Oman has also announced the use of drones to detect high-temperature individuals, though details are scarce. ‘Stay at home’ drones that approach individuals outdoors and blast audio instructions have been announced in the UAE, Jordan, Bahrain, Kuwait & Morocco as well.

Singapore is now using a Boston Dynamics robot (‘Spot’) with a camera and loudspeaker to encourage social distancing in a trial.

 

In some cases, robots and other tech designed not specifically for COVID surveillance are being deployed for the pandemic response.

General healthcare-related robot/tech companies are donating or pitching their tools to overwhelmed hospitals: e.g. in the UAE, a Softbank-backed company is offering ‘free healthcare bots’ that seem to be more for telemedicine-style assistance than COVID surveillance in a direct sense. In the US, Google is now partnering with Mount Sinai (and others?) to send Nest cams to hospitals for patient monitoring.

The California startup Zipline has used autonomous drones to deliver medical supplies in Ghana.

 

 

Thermal Cameras & Public Space/Retail Surveillance

China was using thermal cameras in Wuhan as early as 21 January.

Numerous other countries have considered thermal screening at airports / ports of entry; e.g. Qatar claims all persons entering country are subject to thermal screening, and numerous US airports used it as early as February.

 

In US, thermal cameras have been explored by both state and corporate bodies:

“Athena Security”, Austin TX, promises ‘AI thermal cameras’ to detect fevers in crowded spaces, modelled on its existing product for gun detection. About a week later, cybersecurity firm IPVM reports that Athena faked pretty much everything in its promo: Hikvision cameras rebranded as Athena, a photoshopped interface, a fake restaurant & fake customer testimonies.

Other grifters like RedSpeed USA have been advertising thermal scanner setups – just like Athena, with China-sourced hardware.

May 28: Companies like FLIR, which sell thermal cameras and sensors, have seen demand and revenues rise despite unanswered questions over efficacy; as experts note in the Wired report, this risks false confidence around asymptomatic persons, and similar measures have proven largely ineffective in past outbreaks.

TSA is reported on May 15 to be preparing temperature scanning in airports, though details remain unclear. Some within TSA have raised concerns around logistics, accuracy, and procedures; “People with high temperatures will be turned over to officials with the Centers for Disease Control and Prevention, the administration official said.”

 

 

 

Facial Recognition & Smart tech / AI

April 28, 2020 – NBC reports that Clearview AI is “in talks with federal and state agencies” towards FR-based COVID detection. This info appears to come from Hoan Ton-That, the infamous Clearview founder with white supremacist ties, in a public bid to raise the company’s profile and fish for contracts.

There’s been a rush to collect masked faces to try to produce training data for FR systems. In April, researchers published a ‘covid19 mask image dataset’ to GitHub, consisting of 1,200 images taken from Instagram.

 

There’s plenty of vague talk around ‘using AI’ to improve COVID surveillance, but it is often unclear what exactly the AI component is. We might assume, as Arvind Narayanan has shown elsewhere, that many such claims to AI are (1) not actually AI, and/or (2) not actually effective.

Sometimes we get a little more detail. Dubai Police claim to use ‘smart helmets’ with IR cameras for temperature reading, facial recognition & car plate reading for on-the-ground surveillance. This follows similar reports of smart helmets in use across Shenzhen, Chengdu & Shanghai – though actual efficacy remains unknown.

 

May 28: WaPo reports that data streams from wearables like Fitbit might be useful for early detection of flu-like symptoms. This is an old chestnut in the decade-plus effort to show the utility of wearable tech (e.g. the occasional news story about individuals catching heart attacks thanks to Fitbit). There are still no peer-reviewed studies showing clear, replicable results, of course.

 

 

Workplace & School Surveillance

Workplaces are resorting to always-on webcams and other measures to try to maintain control as WFH is implemented. Some are exceptionally draconian, including keylogging.

PwC is getting in on the act, offering its own tracing tool for tracking employees. It combines bluetooth with its own ‘special sauce’ of examining BT/WiFi signals within each room to reduce inaccurate pings (e.g. people on the other side of a wall). They tout huge interest from clients – and privacy provisions full of gaping holes.
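
We can only guess at the ‘special sauce’, but one plausible reading is a WiFi-environment filter: a bluetooth ping only counts as a contact if both devices also see a substantially overlapping set of WiFi access points, which is evidence of being in the same room rather than behind a wall. A sketch of that guess, with invented thresholds – not PwC’s actual algorithm:

def ap_similarity(aps_a: set[str], aps_b: set[str]) -> float:
    """Jaccard overlap of the WiFi access points each device currently sees."""
    if not aps_a or not aps_b:
        return 0.0
    return len(aps_a & aps_b) / len(aps_a | aps_b)

def is_contact(bt_rssi_dbm: float, aps_a: set[str], aps_b: set[str],
               rssi_threshold: float = -70.0,
               overlap_threshold: float = 0.6) -> bool:
    # A strong BT signal alone is not enough; demand matching WiFi environments.
    return (bt_rssi_dbm > rssi_threshold
            and ap_similarity(aps_a, aps_b) >= overlap_threshold)

# Same room: strong BT signal and mostly the same access points.
print(is_contact(-60, {"ap1", "ap2", "ap3"}, {"ap1", "ap2", "ap3", "ap4"}))  # True
# Adjacent rooms: BT leaks through the wall, but the AP views differ.
print(is_contact(-65, {"ap1", "ap2"}, {"ap5", "ap6"}))                       # False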

Companies are also exploring ‘smart cameras’ to check for masks and social distancing – e.g. Motorola, Camio (San Mateo) – often targeting factories and offices. Proxxi (Canada) instead sells wearable bracelets that vibrate upon proximity; Ford and others are trying them too, and sometimes the idea is to combine them with CT.

 

Schools are joining in: online proctoring services – where globally outsourced workers monitor students by webcam while they take tests & score their reliability as an anti-cheating measure – have exploded in popularity & inquiries with COVID. Tilburg U’s rector defends the use of Proctorio: “Digital surveillance is indispensable to maintain the value of the diploma”, he says – and anyway, don’t worry, “we only use the most essential tools”. The lazy whataboutism trotted out to distract from the issue: hey, physical exams can be even more invasive, someone could look in your bag!

Similarly, Examity requires students to hand over personal info and provide remote access to the proctor; proctors monitor students’ keystrokes; and of course there’s the easy translation of unscientific bullshit hunches into ‘common sense’ features: “closely watch the face of the student to see if there is something suspicious, like suspicious eye movements”. Schools are now training their students to swallow their concerns and critical thinking and to accept bad technology as the price of admission. One student says: “When you’re a student, you have no choice […] This is school in 2020.”

Wilfrid Laurier’s maths dept has forced students to buy webcams – during, of course, a webcam supply shortage – to facilitate proctoring, arguing that “there are no alternatives”.

June 5: Proximity detectors are planned for deployment in an Ohio school district for COVID.

June 24: As many universities seek ways to open up for the fall semester and keep revenue flowing, they are turning to a wide array of surveillance technologies to encourage students – and coerce employees – to show up on campus. Brown is partnering with Alphabet-owned Verily and its Healthy at Work program, in which all students & employees use apps to log symptoms, individuals are randomly assigned to be tested, and their data is collected for further development.

 

Beyond the workplace, COVID is also becoming an opportunity to Trojan-horse in new surveillance measures – e.g. in real estate & proptech, already a growing field post-2008. A Boston Review article mentions Flock Safety; Bioconnect and Stonelock, which use biometrics/FR to admit known individuals, now with added bullshit about helping prevent COVID through such access control; and Yardi, a ‘virtual landlord’ service that helps manage large rental populations, which now boasts a remarkably unnecessary Trojan horse:

In response to COVID-19 the company is offering communication services meant to allow families to check on loved ones in nursing home facilities while remaining socially distanced. Using its services, the company states, “family members can view the latest records about their residents, including vital signs, diagnoses and medication orders. This data is shared from Yardi EHR [electronic health records] in real time.” Here again we see a window into the expansive ambitions of one of the largest virtual landlord firms: extending proptech’s reach into personal health data and even electronic health records.

 

 

more to come.

 

 

Art in America piece w/ Trevor Paglen

I recently spoke to Trevor Paglen – well known for works like ‘Limit Telephotography’ (2007-2012) and its images of NSA buildings and deep-sea fibreoptic cables – about surveillance, machine vision, and the changing politics of the visible / machine-readable. Full piece @ Art in America.

Much of that discussion – around the proliferation of images created by and for machines, and the exponential expansion of pathways by which surveillance, data, and capital can profitably intersect – is also taken up in my upcoming book, Technologies of Speculation (NYUP 2020). There my focus is on what happens after Snowden’s leaks – the strange symbiosis of transparency and conspiracy, the lingering unknowability of surveillance apparatuses and the terrorists they chase. It also examines the passage from the vision of the Quantified Self, where we use all these smart machines to hack ourselves and know ourselves better, to the Quantified Us/Them which plugs that data back into the circuits of surveillance capitalism.

In the piece, Paglen also discusses his recent collaboration with Kate Crawford on ImageNet Roulette, also on display at the Training Humans exhibition (Fondazione Prada Osservatorio, Milan):

“Some of my work, like that in “From ‘Apple’ to ‘Anomaly,’” asks what vision algorithms see and how they abstract images. It’s an installation of about 30,000 images taken from a widely used dataset of training images called ImageNet. Labeling images is a slippery slope: there are 20,000 categories in ImageNet, 2,000 of which are of people. There’s crazy shit in there! There are “jezebel” and “criminal” categories, which are determined solely on how people look; there are plenty of racist and misogynistic tags.

If you just want to train a neural network to distinguish between apples and oranges, you feed it a giant collection of example images. Creating a taxonomy and defining the set in a way that’s intelligible to the system is often political. Apples and oranges aren’t particularly controversial, though reducing images to tags is already horrifying enough to someone like an artist: I’m thinking of René Magritte’s Ceci n’est pas une pomme (This is Not an Apple) [1964]. Gender is even more loaded. Companies are creating gender detection algorithms. Microsoft, among others, has decided that gender is binary—man and woman. This is a serious decision that has huge political implications, just like the Trump administration’s attempt to erase nonbinary people.”

[image: apple_treachery.jpg]

Crawford & Paglen also have a longer read on training sets, Excavating AI (also source for above image).

 

Criticising Surveillance and Surveillance Critique

New article now available on open access @ Surveillance & Society.

Abstract:

The current debate on surveillance, both academic and public, is constantly tempted towards a ‘negative’ criticism of present surveillance systems. In contrast, a ‘positive’ critique would be one which seeks to present alternative ways of thinking, evaluating, and even undertaking surveillance. Surveillance discourse today propagates a host of normative claims about what is admissible as true, probable, efficient – based upon which it cannot fail to justify its own expansion. A positive critique questions and subverts this epistemological foundation. It argues that surveillance must be held accountable by terms other than those of its own making. The objective is an open debate not only about ‘surveillance or not’, but the possibility of ‘another surveillance’.

To demonstrate the necessity of this shift, I first examine two existing frames of criticism. Privacy and humanism (appeal to human rights, freedoms and decency) are necessary but insufficient tools for positive critique. They implicitly accept surveillance’s bargain of trade-offs: the benefit of security measured against the cost of rights. To demonstrate paths towards positive critique, I analyse risk and security: two load-bearing concepts that hold up existing rationalisations of surveillance. They are the ‘openings’ for reforming those evaluative paradigms and rigged bargains on offer today.