Surveillance Capitalism in the time of Covid-19: the possible costs of technological liberation from lockdown.

[Image source: Shropshire Star 20/4/20]

By Mark Whitehead

‘Social participation and individual effectiveness should not require the sacrifice of […] our autonomy, our decision rights, our privacy, and, indeed, our human natures’

(Zuboff, 2019: 347).

The context

The relationship between technology and human freedom has always been contested and uncertain. Technology has been central to the liberation of humans from various forms of labour. It has also been associated with a loss of autonomy and of the power to control our destinies. In his lecture ‘The Question Concerning Technology’, Heidegger suggested that the technological (broadly defined) represented, perhaps, the greatest limiting factor on the human ability to freely experience the world. It should come as little surprise then that technology should be such a prominent topic of debate in relation to the Covid-19 crisis.

It is difficult to remember—at least in living memory—an event that has so impinged on the everyday freedoms of people throughout the world. Social lockdown has restricted our freedom of movement, our ability to meet and congregate, and even how much exercise we can take. These infringements on freedom are, of course, not incompatible with liberal political tradition (where personal freedom can be restricted in order to prevent harm to others). Mass quarantines have, however, led to anti-lockdown rallies and protests, which have sought (however foolishly) to reassert people’s liberties.

Digital technologies have already served to limit the impacts of mass lockdown and preserve certain elements of personal liberty. Despite recent suggestions that social media platforms may be crossing a threshold between their social utility and costs, they now provide a vital way for us to stay connected with each other while in isolation (indeed, Marketwatch reported that daily video calls on Facebook’s WhatsApp and Messenger apps doubled during March, reaching levels normally only witnessed on New Year’s Eve, while in Italy overall Facebook usage went up by approximately 70%). At the same time Zoom, Microsoft Teams, and Skype have enabled many of us to continue with our work. And, for many, including my family, lockdown without Netflix and Amazon Prime would seem unthinkable.

Now, however, there is keen interest in the role that digital technology can play in physically liberating us from lockdown. This interest has, in part, been stimulated by the digital techniques that have been used to monitor and control the spread of the Novel Coronavirus in places such as China, Singapore and South Korea. In Singapore, the TraceTogether app uses Bluetooth technology to enable those who have come into contact with someone who has Covid-19 (however unwittingly) to be immediately alerted (Cellan-Jones, 2020). Meanwhile in China citizens are now required to scan a government QR code, which determines their likely exposure to the Coronavirus. The risk rating produced by this code is then used to determine whether someone can enter a public space or use public transport (Ghaffary, 2020). Given the apparent success of these digital initiatives, European states are showing an active interest in deploying contact tracing apps and digital warning systems (see Hern, 2020). It is in the context of this demand that Google and Apple have been collaborating on software that would enable contact tracing apps to function across the operating systems of their phones. It was recently reported by Bloomberg that France’s digital minister Cedric O requested that Apple loosen its privacy settings to enable contact tracing data to be shared with public health authorities (Apple does not currently allow its Bluetooth functions to operate in background mode if the data being produced leaves the device).

It appears that other countries may deploy digital forms of social surveillance in slightly different ways. In the UK, for example, the government looks set to use a government-based app (apparently developed in liaison with GCHQ) that will provide a centralised index of social interaction (Sample, 2020). While this system appears to avoid fears associated with corporate involvement in contact tracing, it comes with its own dangers and limitations. Reflecting on the proposed UK system, Professor Lilian Edwards notes, “There’s an intrinsic risk in building any kind of centralised index of the movement of the entire population which might be retained in some form beyond the pandemic” (quoted in Sample, 2020). Meanwhile in the US, NBC reports that there has been governmental interest in the deployment of facial recognition systems to monitor social interactions. Utilising public cameras, available online images, and digital facial recognition technology, such a system would mean that even those without a social contact tracing app could be monitored for likely Coronavirus exposure. The US’s purported interest in facial recognition technology is perhaps the development that should give us most pause for thought. Facial recognition technology has long been seen as the endgame in the battle between personal freedom and digital surveillance (Naughton, 2019).

When you put all of these developments together, it is easy to see why tensions are emerging between the physical ability to be liberated from lockdown, and long-term privacy concerns about the right to be free from surveillance. Given the rapid rate of change in this area it is understandably difficult to assess the short- and longer-term implications of these digital solutions. And there certainly appears to be a danger that we all get pulled into a consensual vortex of technological solutionism (Morozov, 2020). A sense of what may be at stake here is, perhaps, signalled most clearly in the United Nations’ recent report on the human rights implications of the pandemic (United Nations, 2020). The UN suggests that the use of AI and big data to tackle Covid-19 could threaten human rights globally (16). Furthermore, the UN expresses concern that the data surveillance techniques deployed within the current crisis could become normalised in the future.

With the stakes so high, and with so little time to process the various risks that must be balanced, it would be helpful if we had a ready-made theory to help us assess what we should do. The interesting thing is, we do: the theory of surveillance capitalism.

 

Surveillance capitalism – Lessons for Covid-19

The idea of surveillance capitalism was developed by the American scholar Shoshana Zuboff. In her 2019 book The Age of Surveillance Capitalism, Zuboff describes it as a ‘new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales’ (p.ix) (a detailed review of the book is available here). In a more critical register, she goes on to state that surveillance capitalism is a ‘rogue mutation of capitalism marked by concentrations of wealth, knowledge, and power unprecedented in human history’ (ibid.). In practical terms, surveillance capitalism involves the digital capture of online, and increasingly offline, human actions in order to facilitate the commercial exploitation of that behaviour. As an economic system, surveillance capitalism’s raw material is human behaviour and experience, expressed in digital form. Surveillance capitalism thus relies on the increasing digitisation of knowledge, and the ability to capture as much of that data as possible. The commercial operations associated with surveillance capitalism rely on the ability to predict our consumer needs (and provide us with targeted marketing) and to actively shape our decisions (such as voting in an election or referendum). The more data that surveillance capitalist enterprises such as Alphabet (Google) and Facebook can accumulate, the more accurate their predictions become, and the more powerful their behavioural nudges are.

So, what can Zuboff’s account of surveillance capitalism tell us about the likely implications of digital social contact tracing?

 

Lesson 1: Surveillance capitalism has an historical track record of exploiting crises.

It is possible to trace the origins of surveillance capitalism to the 9/11 terrorist attacks. In the wake of 9/11, state authorities turned to the newly emerging big tech giants to support extended surveillance programmes. With governments able to access user information on platforms such as Google and Yahoo, a common interest emerged between big tech and the state in promoting the growing influence of such platforms. If governments required mass digital surveillance to support their anti-terror programmes, then they needed the infiltration of digital tech deeper into everyday life (whether that be internet use, Xboxes, or mobile phones). The states of exception that tech companies operated in following 9/11 in part explain why such giants have proved so difficult to regulate and control, even following the Snowden leaks and Cambridge Analytica scandal. But, of course, this is not the only reason they have successfully avoided regulation. The political connections, extra-territorial forms, and unintelligible algorithms and code associated with big tech have all served to prevent effective regulation.

If the state of exception produced by the 9/11 terror attacks enabled big tech empires to grow and escape regulation, could not Covid-19 usher in a new state of exception within which surveillance capitalism can deepen its power and influence? In a recent conversation between Zuboff and Naomi Klein, it was suggested that the current crisis could reflect an unhealthy fusion between the shock doctrine and surveillance capitalism. In this context it was claimed that surveillance capitalism could, on the one hand, offer part of the solution to the present crisis, while on the other exploit the crisis to expand its influence and power. This may seem hyperbolic. Big tech, however, gives us little reason to believe that, having extended its infiltration during a crisis, it will cede those advances once the crisis has passed. Indeed, if the theory of surveillance capitalism is correct, the economic logic of big tech is predicated on resisting any retreat from access to its most valuable asset – human experience.

 

Lesson 2: Surveillance capitalism has mastered the art of bait and switch

A recurring motif in Zuboff’s account of surveillance capitalism is the tactic of bait and switch. Bait and switch is used by Zuboff to denote the various deceitful practices that are deployed by big tech companies to secure access to personal data. The primary baits are the fee-free services that make our lives so much easier. These baits are often supplemented by promises of privacy and data security. The switches occur when we are presented with those obscure changes in terms of service, which detrimentally reset privacy setting defaults. Further switches can occur when it is revealed, after purchasing some form of digital tech, that disabling data sharing can undermine the functionality of the product.

We are told that contact tracing apps will come with various privacy protections. These include sunset clauses that will limit data gathering to the Covid-19 crisis period. But, as previously mentioned, if the history of surveillance capitalism reveals anything it is that once access to data has been gained, it is rarely relinquished. Through the skilful deployment of new default settings, functionality features, and obfuscating terms and conditions, it would not be difficult for surveillance capitalism to maintain the flow of social contact data long after the Covid-19 crisis has passed. Perhaps this is taking too dim a view of big tech companies who have, it must be acknowledged, changed some of their practices in light of the Novel Coronavirus (think of Facebook’s more careful curation of content and support for trusted sources). But, the history of surveillance capitalism should, at the very least, make us vigilant.

 

Lesson 3: What has been learned can’t be unlearned and will continue to inform the enhanced commercial and governmental use of personal data in the future

One recurring theme within the discussion of social contact tracing apps is the reassurance that when personal information is accessed and shared, it will not be permanently tagged and stored against any identifiable citizen’s name. This reassurance suggests, again, that enhanced digital surveillance is merely a feature of the state of exception that is associated with the Covid-19 crisis, and will not undermine personal privacy in the long run. But this reassurance fails to appreciate the social dynamics of surveillance capitalism. Surveillance capitalism, and its associated systems of big data capture, algorithms and machine learning, are predicated on the identification of social patterns across millions, often billions, of data points. The learning that goes on here is always, inevitably, removed from identifiable individuals. But this does not negate its threat to personal freedom and autonomy. The flipside of the surveillance capitalist system occurs when machine learning returns to the end user in the form of highly personalised prompts to action. What contact tracing apps will provide is an historically unprecedented insight into social interactions. When combined with other digital data, such as work productivity, purchase patterns, and biometrics, this will provide unparalleled insights into the social context for human action. Knowing what humans do, or do not do, in particular social settings, or when they come into contact with specific kinds of people, could be of great commercial and governmental value. It will open up new opportunities for what Zuboff terms behavioural actuation: when data about human conduct is used as a basis to prompt future action (perhaps a well-timed ad, web search result, or navigational nudge).

The ultimate destination of surveillance capitalism is a world within which big tech knows us better than we know ourselves. Such a situation promises to make our lives more convenient (with personalised web searches, digital nudges, and optimised thermostat settings). But to know someone better than they know themselves relies on the ever deepening of data gathering from everyday experience. Digital home assistants, for example, have enabled voice tone to become a surveillance capitalist data point (I am guessing that all of our recent video conferencing meetings are proving useful in this context too), while other forms of ambient computing will look to make facial expressions, blood pressure, and even gait and posture tools of behavioural prediction. It is clear that whatever the initial purpose of contact tracing apps, they will inevitably enhance big tech’s predictive power. Noticing what we do after we come into contact with certain people could predict changes in jobs and even divorces (this is what Alex Pentland has described as a form of social physics). The problem with this kind of situation is that in the presence of the unprecedented accumulation of knowledge about ourselves (or at least our demographic equivalents), it becomes increasingly difficult for people to resist the behavioural prompts of surveillance capitalism.

Many will argue that if contact tracing apps are run by government, then our experiential data will be protected from the circuits of surveillance capitalism. This may be true, and perhaps, in the wake of Covid-19, we may see forms of state monopoly surveillance capitalism. But what if aggregate data eventually gets sold off as part of an enterprising government privatisation scheme in the future (not exactly an unprecedented situation)? What if our health insurance becomes tied to the use of commercial contact apps? And, following the likely emergence of contact tracing app markets, are hastily developed government systems really going to defeat those produced by Google? I guess we will have to wait and see.

 

Lesson 4: This moment could be a vital point in the construction of an instrumentarian society.

According to Zuboff, surveillance capitalism is characterised by a distinctive ideological vision. Zuboff uses the rather ungainly term instrumentarianism to capture this ideology. Unlike totalitarianism, instrumentarianism is not interested in the laborious task of mastering hearts and minds. Instead it is an amoral system which seeks to govern society as it finds it: encouraging the beneficial patterns big data discerns, while subtly suppressing actions it deems detrimental. Instrumentarianism is a kind of binary ideology which governs on the basis of correlating only what is observed and what is desired. Within this vision of society, there is no room for theories, only digitally observed reality. There is also no space for ambiguity, only the extent to which an observed action conforms to established rules. An example of instrumentarianism, which Zuboff often refers to, is the hypothetical smart car, whose engine is immediately disabled as soon as an insurance policy expires. Within this situation there is no gap for judgement, no room for social manoeuvre. It does not matter if the car is carrying someone to hospital or contains a single mother with children driving on a lonely road at midnight. Unlike human systems, which work with ambiguity and, often, give people the benefit of the doubt, instrumentarian systems only operate in 0s and 1s.

Of course, in a pandemic situation it can be argued that a heavy dose of instrumentarianism is precisely what we need. It does not matter what circumstances led to your coming into close contact with a likely carrier of the novel Coronavirus, only that you have. But it seems unlikely that things will ever be this simple. To be effective, contact tracing apps must be used by a significant portion of the population. In the UK it is now being argued that using the NHS’s app is a kind of civic duty. While I am not arguing that using this app is not the socially responsible thing to do, it seems unlikely that social contact tracing apps will only be used to monitor virus transmission. It seems likely that in many states using a social contact tracing app will itself be a requirement of going out into public spaces. But what happens when you forget to turn on your phone, or misplace your mobile? It is not difficult to imagine situations whereby citizens will be algorithmically scored on their app-use compliance and access to public space determined accordingly (particularly in authoritarian contexts). It is also possible to imagine a world where apps and mobile devices can be used to monitor how effectively we are social distancing at work. Landing AI has already been promoting its social distancing detector, while Amazon is using surveillance tech on workers in its warehouses. If employers mandate the use of social distancing devices at work, will workers be graded on their skill at avoiding close contact with others? If they are, the instrumentarian logics that often go hand-in-hand with these forms of technology will care little about the human circumstances that may require social proximity, or the nature of the encounter.

What theories of surveillance capitalism ultimately claim is that the use of smart technology monitoring tends to result in instrumentarian systems within which trust in human judgement is undermined, and ambiguity is eliminated. While pandemic response may appear to necessitate such certitude, care must clearly be taken to ensure that these forms of technological culture do not become the norm within our collective futures.

 

Back to the here and now.

One thing is for certain: things are moving fast. As I write this the Australian Government is deploying the CovidSafe app as part of its strategy to break lockdown. Meanwhile in the UK, a government app is being trialled on a Royal Air Force base, while the Isle of Wight has been identified as the test location for the wider application of contact tracing technology (a kind of study in digital island biogeography). Meanwhile Tony Blair’s Institute for Global Change has suggested that the anti-liberal dangers associated with the application of smart technology are a price worth paying in the collective struggle against the novel Coronavirus. At the same time, however, the UK Parliament’s Joint Committee on Human Rights has suggested that any roll-out of social contact tracing technologies needs enhanced data privacy protocols (Syal, 2020). Scientists and researchers working in the field of data privacy and cyber security have also written open letters expressing concerns over the potential mission creep associated with contact tracing technologies (see here and here).

Digital technology is going to play an important role in allowing our lives to return to some form of normality. But while it partly liberates us from lockdown, it is crucial to be aware of the anti-liberal potential of such technologies. In a recent Independent Social Research Foundation research project I have been exploring the subtle compromises that people make in their interactions with smart technology. It appears that even when achieving relatively minor gains from such technology we are willing to sacrifice significant forms of personal privacy. Given how keen people will inevitably be to safely escape the constraints of lockdown, vigilance is clearly needed if data rights and privacy are not to be carelessly cast aside. When it comes to social interactions with smart technology, it is clear that tangible short-term gains tend to trump concerns over vague future costs. But beyond a call for vigilance, it is important to recognise that there is more at stake here. In a recent webinar discussion, Shoshana Zuboff reminded us that our concerns around the emergence of a kind of “Covid-1984” should not focus primarily on the technological. The deeper issues are the economic logic and institutions that shape how smart technology is being used. Can we then imagine the use of digital technology to assist in the Covid-19 crisis without a surveillance capitalist imperative (see here)? Or indeed, could we build a collective smart tech response that was outside of the institutional influence of big tech? If we can, there is just a chance that we may catch a broader glimpse of a technological future that is primarily for public purpose and is controlled by those whose data the system depends upon.

References

Cellan-Jones, R. (2020) ‘Coronavirus: Privacy in a Pandemic’ BBC 2/4/20.

Ghaffary, S. (2020) ‘What the US can learn from other countries using phones to track the spread of Covid-19’ Vox 18/4/20 (https://www.vox.com/recode/2020/4/18/21224178/covid-19-tech-tracki…china-singapore-taiwan-korea-google-apple-contact-tracing-digital)

Hern, A (2020) ‘France urges Apple and Google to ease privacy rules on contact tracing’ The Guardian 21/4/20.

Morozov, E. (2020) ‘The tech ‘solutions’ for the coronavirus take the surveillance state to the next level’ The Guardian 15/4/20.

Naughton, J. (2019) ‘Why we should be very scared by the intrusive menace of facial recognition’ The Guardian 29/7/19.

Sample, I. (2020) ‘NHS contact tracing app ready to use in three weeks, MPs told’ The Guardian 28/4/20.

Syal, R. (2020) ‘UK contact-tracing app could fall foul of privacy law, government told’ The Guardian 7/5/20.

United Nations (2020) Covid-19 and Human Rights: We are all in this together (United Nations, April 2020).

Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, London: Profile Books.

 

Thanks to the ISRF for their support, and Kelvin Mason for uncovering a wealth of source material.
