[Image source: Shropshire Star 20/4/20]
By Mark Whitehead
‘Social participation and individual effectiveness should not require the sacrifice of […] our autonomy, our decision rights, our privacy, and, indeed, our human natures’
(Zuboff, 2019: 347).
The relationship between technology and human freedom has always been contested and uncertain. Technology has been central to the liberation of humans from various forms of labour. It has also been associated with a loss of autonomy and of the power to control our destinies. In his lecture ‘The Question Concerning Technology’, Heidegger suggested that the technological (broadly defined) represented, perhaps, the greatest limiting factor on the human ability to freely experience the world. It should come as little surprise, then, that technology should be such a prominent topic of debate in relation to the Covid-19 crisis.
It is difficult to remember—at least in living memory—an event that has so impinged on the everyday freedoms of people throughout the world. Social lockdown has restricted our freedom of movement, our ability to meet and congregate, and even how much exercise we can take. These infringements on freedom are, of course, not incompatible with liberal political tradition (where personal freedom can be restricted in order to prevent harm to others). Mass quarantines have, however, led to anti-lockdown rallies and protests, which have sought (however foolishly) to reassert people’s liberties.
Digital technologies have already served to limit the impacts of mass lockdown and preserve certain elements of personal liberty. Despite recent suggestions that social media platforms may be crossing a threshold between their social utility and costs, they now provide a vital way of us staying connected with each other while in isolation (indeed, MarketWatch reported that daily video calls on Facebook’s WhatsApp and Messenger apps doubled during March, reaching levels normally only witnessed on New Year’s Eve, while in Italy overall Facebook usage went up by approximately 70%). At the same time Zoom, Microsoft Teams, and Skype have enabled many of us to continue with our work. And, for many, including my family, lockdown without Netflix and Amazon Prime would seem unthinkable.
Now, however, there is keen interest in the role that digital technology can play in physically liberating us from lockdown. This interest has, in part, been stimulated by the digital techniques that have been used to monitor and control the spread of the Novel Coronavirus in places such as China, Singapore and South Korea. In Singapore, the TraceTogether app uses Bluetooth technology to enable those who have come into contact with someone who has Covid-19 (however unwittingly) to be immediately alerted (Cellan-Jones, 2020). Meanwhile in China, citizens are now required to scan a government QR code, which determines their likely exposure to the Coronavirus. The risk rating produced by this code is then used to determine whether someone can enter a public space or use public transport (Ghaffary, 2020). Given the apparent success of these digital initiatives, European states are showing an active interest in deploying contact tracing apps and digital warning systems (see Hern, 2020). It is in the context of this demand that Google and Apple have been collaborating on software that would enable contact tracing apps to function across the operating systems of their phones. It was recently reported by Bloomberg that France’s digital Minister Cedric O requested that Apple loosen its privacy settings to enable contact tracing data to be shared with public health authorities (Apple does not currently allow its Bluetooth functions to operate in background mode if the data being produced leaves the device).
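The core logic of Bluetooth-based contact tracing is simpler than the politics around it. The sketch below is a hypothetical rendering of the decentralised approach associated with apps like TraceTogether and the Apple–Google design: phones log anonymous rotating tokens from nearby devices locally, and only check those logs against tokens published by confirmed cases. All class names, token formats, and the 15-minute exposure threshold are illustrative assumptions, not details of any real system.

```python
# Hypothetical sketch of decentralised Bluetooth contact tracing.
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass


@dataclass
class Encounter:
    token: str         # rotating anonymous identifier broadcast by a nearby phone
    duration_min: int  # how long the two devices stayed in Bluetooth range


class ContactLog:
    def __init__(self, exposure_threshold_min: int = 15):
        # Encounters are stored locally on the device, not uploaded.
        self.encounters = []
        self.exposure_threshold_min = exposure_threshold_min

    def record(self, encounter: Encounter) -> None:
        self.encounters.append(encounter)

    def check_exposure(self, infected_tokens: set) -> bool:
        # When a user tests positive, their tokens are published; every
        # device then matches them against its own local log.
        return any(
            e.token in infected_tokens
            and e.duration_min >= self.exposure_threshold_min
            for e in self.encounters
        )


log = ContactLog()
log.record(Encounter(token="a1", duration_min=20))
log.record(Encounter(token="b2", duration_min=5))
print(log.check_exposure({"a1"}))  # True: a sustained contact matches
print(log.check_exposure({"b2"}))  # False: contact too brief to count
```

Note that in this decentralised variant the matching happens on the handset; the centralised UK approach discussed below would instead upload the log itself, which is precisely where the privacy debate begins.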
It appears that other countries may deploy digital forms of social surveillance in slightly different ways. In the UK, for example, the government looks set to use a government-based app (apparently developed in liaison with GCHQ) that will provide a centralised index of social interaction (Sample, 2020). While this system appears to avoid fears associated with corporate involvement in contact tracing, it comes with its own dangers and limitations. Reflecting on the proposed UK system, Professor Lilian Edwards notes, “There’s an intrinsic risk in building any kind of centralised index of the movement of the entire population which might be retained in some form beyond the pandemic” (quoted in Sample, 2020). Meanwhile in the US, NBC reports that there has been governmental interest in the deployment of facial recognition systems to monitor social interactions. Utilising public cameras, available online images, and digital facial recognition technology, such a system would mean that even those without a social contact tracing app could be monitored for likely Coronavirus exposure. The US’s purported interest in facial recognition technology is perhaps the development that should give us most pause for thought. Facial recognition technology has long been seen as the endgame in the battle between personal freedom and digital surveillance (Naughton, 2019).
When you put all of these developments together, it is easy to see why tensions are emerging between the physical ability to be liberated from lockdown, and long-term privacy concerns about the right to be free from surveillance. Given the rapid rate of change in this area it is understandably difficult to assess the short and longer-term implications of these digital solutions. And there certainly appears to be a danger that we all get pulled into a consensual vortex of technological solutionism (Morozov, 2020). A sense of what may be at stake here is, perhaps, signalled most clearly in the United Nations’ recent report on the human rights implications of the pandemic (United Nations, 2020). The UN suggests that the use of AI and big data to tackle Covid-19 could threaten human rights globally (p. 16). Furthermore, the UN expresses concern that the data surveillance techniques deployed within the current crisis could become normalised in the future.
With the stakes so high, and with so little time to process the various risks that must be balanced, it would be helpful if we had a ready-made theory to help us assess what we should do. The interesting thing is, we do: the theory of surveillance capitalism.
Surveillance capitalism – Lessons for Covid-19
The idea of surveillance capitalism was developed by the American scholar Shoshana Zuboff. In her 2019 book The Age of Surveillance Capitalism, Zuboff describes it as a ‘new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales’ (p.ix) (a detailed review of the book is available here). In a more critical context, she goes on to state that surveillance capitalism is a ‘rogue mutation of capitalism marked by concentrations of wealth, knowledge, and power unprecedented in human history’ (ibid.). In practical terms, surveillance capitalism involves the digital capture of online, and increasingly offline, human actions in order to facilitate the commercial exploitation of that behaviour. As an economic system, surveillance capitalism’s raw material is human behaviour and experience, expressed in digital form. Surveillance capitalism thus relies on the increasing digitisation of knowledge, and the ability to capture as much of that data as possible. The commercial operations associated with surveillance capitalism rely on the ability to predict our consumer needs (and provide us with targeted marketing) and to actively shape our decisions (such as voting in an election or referendum). The more data that surveillance capitalist enterprises such as Alphabet (Google) and Facebook can accumulate, the more accurate their predictions can become, and the more powerful their behavioural nudges are.
So, what can Zuboff’s account of surveillance capitalism tell us about the likely implications of digital social contact tracing?
Lesson 1: Surveillance capitalism has an historical track record of exploiting crises.
It is possible to trace the origins of surveillance capitalism to the 9/11 terrorist attacks. In the wake of 9/11, state authorities turned to the newly emerging big tech giants to support extended surveillance programmes. With governments able to access user information on platforms such as Google and Yahoo, a common interest emerged between big tech and the state in promoting the growing influence of such platforms. If governments required mass digital surveillance to support their anti-terror programmes, then they needed the infiltration of digital tech deeper into everyday life (whether that be internet use, Xboxes, or mobile phones). The states of exception that tech companies operated in following 9/11 in part explains why such giants have proved so difficult to regulate and control, even following the Snowden leaks and Cambridge Analytica scandal. But, of course, this is not the only reason they have successfully avoided regulation. The political connections, extra-territorial forms, and unintelligible algorithms and code associated with big tech have all served to prevent effective regulation.
If the state of exception produced by the 9/11 terror attacks enabled big tech empires to grow and escape regulation, could not Covid-19 usher in a new state of exception within which surveillance capitalism can deepen its power and influence? In a recent conversation between Zuboff and Naomi Klein, it was suggested that the current crisis could reflect an unhealthy fusion between the shock doctrine and surveillance capitalism. In this context it was claimed that surveillance capitalism could, on the one hand, offer part of the solution to the present crisis, while on the other exploit the crisis to expand its influence and power. This may seem hyperbolic. Big tech, however, gives us little reason to believe that, having extended their infiltration, they will cede the advances they have made following the passing of a crisis. Indeed, if the theory of surveillance capitalism is correct, the economic logic of big tech is predicated on resisting any retreat from access to its most valuable asset: human experience.
Lesson 2: Surveillance capitalism has mastered the art of bait and switch
A recurring motif in Zuboff’s account of surveillance capitalism is the tactic of bait and switch. Bait and switch is used by Zuboff to denote the various deceitful practices that are deployed by big tech companies to secure access to personal data. The primary baits are the fee-free services that make our lives so much easier. These baits are often supplemented by promises of privacy and data security. The switches occur when we are presented with those obscure changes in terms of service, which detrimentally reset privacy setting defaults. Further switches can occur when it is revealed, after purchasing some form of digital tech, that disabling data sharing can undermine the functionality of the product.
We are told that contact tracing apps will come with various privacy protections. These include sunset clauses that will limit data gathering to the Covid-19 crisis period. But, as previously mentioned, if the history of surveillance capitalism reveals anything it is that once access to data has been gained, it is rarely relinquished. Through the skilful deployment of new default settings, functionality features, and obfuscating terms and conditions, it would not be difficult for surveillance capitalism to maintain the flow of social contact data long after the Covid-19 crisis has passed. Perhaps this is taking too dim a view of big tech companies who have, it must be acknowledged, changed some of their practices in light of the Novel Coronavirus (think of Facebook’s more careful curation of content and support for trusted sources). But, the history of surveillance capitalism should, at the very least, make us vigilant.
Lesson 3: What has been learned can’t be unlearned and will continue to inform the enhanced commercial and governmental use of personal data in the future
One recurring theme within the discussion of social contact tracing apps is the reassurance that when personal information is accessed and shared, it will not be permanently tagged and stored against any identifiable citizen’s name. This reassurance suggests, again, that enhanced digital surveillance is merely a feature of the state of exception that is associated with the Covid-19 crisis, and will not undermine personal privacy in the long run. But this reassurance fails to appreciate the social dynamics of surveillance capitalism. Surveillance capitalism, and its associated systems of big data capture, algorithms and machine learning, is predicated on the identification of social patterns across millions, often billions, of data points. The learning that goes on here is always, inevitably, removed from identifiable individuals. But this does not negate its threat to personal freedom and autonomy. The flipside of the surveillance capitalist system occurs when machine learning returns to the end user in the form of highly personalised prompts to action. What contact tracing apps will provide is an historically unprecedented insight into social interactions. When combined with other digital data, such as work productivity, purchase patterns, and biometrics, this will provide unparalleled insights into the social context for human action. Knowing what humans do, or do not do, in particular social settings, or when they come into contact with specific kinds of people, could be of great commercial and governmental value. It will open up new opportunities for what Zuboff terms behavioural actuation: when data about human conduct is used as a basis to prompt future action (perhaps a well-timed ad, web search result, or navigational nudge).
The ultimate destination of surveillance capitalism is a world within which big tech knows us better than we know ourselves. Such a situation promises to make our lives more convenient (with personalised web searches, digital nudges, and optimised thermostat settings). But knowing someone better than they know themselves relies on the ever-deepening gathering of data from everyday experience. Digital home assistants, for example, have enabled voice tone to become a surveillance capitalist data point (I am guessing that all of our recent video conferencing meetings are proving useful in this context too), while other forms of ambient computing will look to make facial expressions, blood pressure, and even gait and posture tools of behavioural prediction. It is clear that whatever the initial purpose of contact tracing apps, they will inevitably enhance big tech’s predictive power. Noticing what we do after we come into contact with certain people could predict changes in jobs and even divorces (this is what Alex Pentland has described as a form of social physics). The problem with this kind of situation is that in the presence of the unprecedented accumulation of knowledge about ourselves (or at least our demographic equivalents), it becomes increasingly difficult for people to resist the behavioural prompts of surveillance capitalism.
Many will argue that, if contact tracing apps are run by government, then our experiential data will be protected from the circuits of surveillance capitalism. This may be true, and perhaps, in the wake of Covid-19, we may see forms of state monopoly surveillance capitalism. But what if aggregate data eventually gets sold off as part of an enterprising government privatisation scheme in the future (not exactly an unprecedented situation)? What if our health insurance becomes tied to the use of commercial contact apps? And, following the likely emergence of contact tracing app markets, are hastily developed government systems really going to defeat those produced by Google? I guess we will have to wait and see.
Lesson 4: This moment could be a vital point in the construction of an instrumentarian society.
According to Zuboff, surveillance capitalism is characterised by a distinctive ideological vision. Zuboff uses the rather ungainly term instrumentarianism to capture this ideology. Unlike totalitarianism, instrumentarianism is not interested in the laborious task of mastering hearts and minds. Instead it is an amoral system which seeks to govern society as it finds it: encouraging the beneficial patterns big data discerns, while subtly suppressing actions it deems detrimental. Instrumentarianism is a kind of binary ideology which governs on the basis of correlating only what is observed and what is desired. Within this vision of society, there is no room for theories, only digitally observed reality. There is also no space for ambiguity, only the extent to which an observed action conforms to established rules. An example of instrumentarianism, which Zuboff often refers to, is the hypothetical smart car, whose engine is immediately disabled as soon as an insurance policy expires. Within this situation there is no gap for judgement, no room for social manoeuvre. It does not matter if the car is carrying someone to hospital or contains a single mother with children driving on a lonely road at midnight. Unlike human systems, which work with ambiguity and, often, give people the benefit of the doubt, instrumentarian systems only operate in 0s and 1s.
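Zuboff’s hypothetical smart car can be rendered, quite literally, in a few lines of code, which is part of the point: the entire “policy” reduces to one comparison, and context can never enter the decision. The function name and dates below are invented for illustration; no real insurer’s system is being described.

```python
# An illustrative rendering of Zuboff's hypothetical smart car.
# The instrumentarian rule consults only the observed state (today's date)
# and the desired state (a valid policy); hospital emergencies, stranded
# drivers, and every other human circumstance are simply not inputs.
from datetime import date


def engine_enabled(insurance_expiry: date, today: date) -> bool:
    return today <= insurance_expiry


print(engine_enabled(date(2020, 5, 1), date(2020, 4, 30)))  # True
print(engine_enabled(date(2020, 5, 1), date(2020, 5, 2)))   # False: engine disabled
```

The brevity is the ideology: there is nowhere in such a rule to encode the benefit of the doubt.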
Of course, in a pandemic situation it can be argued that a heavy dose of instrumentarianism is precisely what we need. It does not matter what circumstances led to you coming into close contact with a likely carrier of the novel Coronavirus, only that you have. But it seems unlikely that things will ever be this simple. To be effective, contact tracing apps must be used by a significant portion of the population. In the UK it is now being argued that using the NHS’s app is a kind of civic duty. While I am not arguing that using this app is not the socially responsible thing to do, it seems unlikely that social contact tracing apps will only be used to monitor virus transmission. It seems likely that in many states using a social contact tracing app will itself be a requirement of going out into public spaces. But what happens when you forget to turn on your phone, or misplace your mobile? It is not difficult to imagine situations whereby citizens will be algorithmically scored on their app-use compliance and access to public space determined accordingly (particularly in authoritarian contexts). It is also possible to imagine a world where apps and mobile devices can be used to monitor how effectively we are social distancing at work. Landing AI has already been promoting its social distancing detector, while Amazon is using surveillance tech on workers in its warehouses. If employers mandate the use of social distancing devices at work, will workers be graded on their skills at avoiding close contact with others? If they are, the instrumentarian logics that often go hand-in-hand with these forms of technology will care little about the human circumstances that may require social proximity, or the nature of the encounter.
What theories of surveillance capitalism ultimately claim is that the use of smart technology monitoring tends to result in instrumentarian systems within which trust in human judgement is undermined, and ambiguity is eliminated. While pandemic response may appear to necessitate such certitude, care must clearly be taken to ensure that these forms of technological culture do not become the norm within our collective futures.
Back to the here and now.
One thing is for certain: things are moving fast. As I write this, the Australian Government is deploying the CovidSafe app as part of its strategy to break lockdown. Meanwhile in the UK, a government app is being trialled on a Royal Air Force base, while the Isle of Wight has been identified as the test location for the wider application of contact tracing technology (a kind of study in digital island biogeography). Meanwhile, Tony Blair’s Institute for Global Change has suggested that the anti-liberal dangers associated with the application of smart technology are a price worth paying in the collective struggle against the novel Coronavirus. At the same time, however, the UK Parliament’s Joint Committee on Human Rights has suggested that any roll-out of social contact tracing technologies needs enhanced data privacy protocols (Syal, 2020), and scientists and researchers working in the field of data privacy and cyber security have written open letters expressing concerns over the potential mission creep associated with contact tracing technologies (see here and here).
Digital technology is going to play an important role in allowing our lives to return to some form of normality. But while it partly liberates us from lockdown, it is crucial to be aware of the anti-liberal potential of such technologies. In a recent Independent Social Research Foundation research project I have been exploring the subtle compromises that people make in their interactions with smart technology. It appears that even when achieving relatively minor gains from such technology, we are willing to sacrifice significant forms of personal privacy. Given how keen people will inevitably be to safely escape the constraints of lockdown, vigilance is clearly needed if data rights and privacy are not to be carelessly cast aside. When it comes to social interactions with smart technology, it is clear that tangible short-term gains tend to trump concerns over vague future costs. But beyond a call for vigilance, it is important to recognise that there is more at stake here. In a recent webinar discussion, Shoshana Zuboff reminded us that our concerns around the emergence of a kind of “Covid-1984” should not focus primarily on the technological. The deeper issues are the economic logic and institutions that shape how smart technology is being used. Can we then imagine the use of digital technology to assist in the Covid-19 crisis without a surveillance capitalist imperative (see here)? Or indeed, could we build a collective smart tech response that was outside of the institutional influence of big tech? If we can, there is just a chance that we may catch a broader glimpse of a technological future that is primarily for public purpose and is controlled by those whose data the system depends upon.
Cellan-Jones, R. (2020) ‘Coronavirus: Privacy in a Pandemic’ BBC 2/4/20.
Ghaffary, S. (2020) ‘What the US can learn from other countries using phones to track the spread of Covid-19’ Vox 18/4/20 (https://www.vox.com/recode/2020/4/18/21224178/covid-19-tech-tracki…china-singapore-taiwan-korea-google-apple-contact-tracing-digital)
Hern, A. (2020) ‘France urges Apple and Google to ease privacy rules on contact tracing’ The Guardian 21/4/20.
Morozov, E. (2020) ‘The tech ‘solutions’ for the coronavirus take the surveillance state to the next level’ The Guardian 15/4/20.
Naughton, J. (2019) ‘Why we should be very scared by the intrusive menace of facial recognition’ The Guardian 29/7/19.
Sample, I. (2020) ‘NHS contact tracing app ready to use in three weeks, MPs told’ The Guardian 28/4/20.
Syal, R. (2020) ‘UK contact-tracing app could fall foul of privacy law, government told’ The Guardian 7/5/20.
United Nations (2020) Covid-19 and Human Rights: We are all in this together (United Nations, April 2020).
Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, London: Profile Books.
Thanks to the ISRF for their support, and Kelvin Mason for uncovering a wealth of source material.
I would like to acknowledge the financial support of the Independent Social Research Foundation, who are funding the research project (Re-Thinking Freedom in a Neuroliberal Age) that this blog is associated with: http://www.isrf.org/about/fellows-and-projects/mark-whitehead/
Geography and Nudges: Real, Virtual and Metaphorical
As a geographer with an active interest in the world of Behavioural Public Policy and nudges I often have to explain the links between my discipline and these areas of research. Recently, however, I have noticed a shift in discussions about certain forms of Behavioural Public Policy towards geographical concerns. In some instances, this shift is, admittedly, metaphorical. A case in point emerges from Cass Sunstein’s recent volume On Freedom (Sunstein 2019). In this book Sunstein utilises the notion of “navigability” to explain the value of nudges. According to Sunstein, it is precisely because it is so difficult to navigate an optimal decision-making path through our world (with temptations, sludge (the use of nudges for more pernicious ends), confusion, obfuscation, and exploitatively designed choice environments everywhere) that we need nudges to act like our behavioural Sat Nav. In other words, nudges can be summed up as routing suggestions: it is not that you have to go this way, but it is going to be best for you if you do (although for an alternative discussion of the behavioural economics of Sat Navs you may want to read Rory Sutherland’s recent book Alchemy (2019) – Sat Navs’ flaws appear to emanate from their rigid commitment to logical calculations of distance and time, which cannot account for the psycho-logics of human mobility needs).
Beyond the metaphorical realm, however, geography appears to be becoming an increasingly important consideration within emerging forms of hyper-nudges. According to Yeung, hyper-nudges are nudges that fuse together behavioural and data sciences (Yeung 2016). Yeung observes that:
‘Hypernudging relies on highlighting algorithmically determined correlations between data items within data sets that would not otherwise be observable through human cognition alone (or even with standard computing support (Shaw 2014)) thereby conferring ‘salience’ on the highlighted data patterns, operating through the technique of ‘priming’, dynamically configuring the user’s informational choice context in ways intentionally designed to influence her decisions’ (2016: page 8)
Interestingly, the notion of hyper-nudging takes its name from the scaling-up of nudges through social media networks and smart technologies to deliver population level effects. But, as this quote from Yeung reveals, hyper-nudging can also be thought of as micro-nudging—to the extent that it involves the delivery of personally salient nudges to individuals in real time (see Dow Schüll, 2016). To these ends hyper-nudges would include Facebook’s Voter Megaphone Project (through which voting in elections is promoted on the basis of indicating who in social networks has already cast their vote), and the various forms of personal feedback that people receive from self-tracking devices (see Lanzing, 2018).
The transfer of nudging into the digital world may, at one level, be interpreted as making it less relevant to geographical concerns, as nudges migrate from the changing of actual environments to digital ones. But the reverse appears to be true. The reason for this is that the emergence of hyper-nudging is directly coupled to the embedding of the digital into the everyday spaces of life. The emergence of smart devices (including mobile phones, smart thermostats, and the onboard computing capabilities of cars, inter alia), coupled with the embedding of a bewildering array of monitoring devices into the physical infrastructures of everyday life, is the very basis of the hyper-nudge. It is the Internet of Things (or the wideware) of the contemporary digital age that enables the data on which hyper-nudges depend to be gathered (perhaps relating to our particular penchant for running), and appropriate nudges to be delivered (perhaps a message letting us know of a particularly good deal on running shoes in a shop we are in close proximity to). In a strange way, while the preliminary stages of the internet embodied clear demarcations between the realms of virtual and real space, it appears that in the future computing will be so deeply embedded in the world around us that it will be inherently geographical.
Of course, the fusion of the digital and the geographical has already been described in the pioneering work of certain digitally-oriented geographers (see Thrift and French, 2002). In their account of the automatic production of space and the associated technological unconscious, Thrift and French consider how software is insinuating itself into our everyday lives and offers forms of local intelligence that are reshaping our worlds. Thrift and French draw particular attention to how the work of software in urban space tends to operate below the threshold of the representational, and the particular political and epistemological implications this presents to urban studies (Thrift and French, 2002: 312, quoting Hansen, 2000: 17). Thrift and French interpret the spread of software into everyday life as the emergence of a form of distributed cognition, in and through which our everyday environments become contexts for increasingly diverse forms of knowledge production and non-human analysis. While Thrift and French’s analysis is portentous of the age of surveillance capitalism we now find ourselves in (Zuboff 2019), it was written in a time before the proliferation of smart devices, cloud computing, and social media platforms. As such, while Thrift and French’s work identifies the spatial implications of governmentality by software, it did so in an age before the emergence of the hyper-nudge. To these ends, there is clearly scope to reconsider the ways in which data and the behavioural sciences are insinuating themselves into the spaces of everyday life, and the particular geographical implications of these processes. In what remains of this short reflection, I will consider the role of geography in the developing story of hyper-nudging. I will also consider how geography can provide us with a framework and language to develop novel critical perspectives on, and grounds for resisting, these emerging processes.
Unpacking the geographies of hyper-nudging
One does not have to look very hard to discern the emerging geographies of hyper-nudging. It is important, however, to identify three key processes that are associated with the spatial hyper-nudge. First are the processes I describe as spatial behavioural surveillance (catchy, I know). Spatial behavioural surveillance is different from more general forms of digital surveillance to the extent that it is not limited to the virtual world. Spatial behavioural surveillance involves the gathering of behavioural data in geographical situ. It can operate at an aggregate level (changing patterns of driving behaviour monitored through embedded kerbside devices) and at more personal levels (as GPS-activated smart watches indicate to Strava the particular routes we like to cycle to work along, or photographs are “geo-tagged” in Facebook). Spatial behavioural surveillance involves both locational behavioural surveillance (identifying what behaviours happen where) and the surveillance of geographical behavioural routines (identifying the spatial routes and patterns of everyday life).
The surveillance of geographical behavioural routines: Strava Heatmap for my local area (including some of my own particular routes)
The second set of processes associated with digital hyper-nudging is geographical digitization. Unlike spatial behavioural surveillance, this process is not so much interested in behaviours in space as in enabling the geographical (in its entirety) to take a digital form. The transferal of the geographical into the digital is perhaps expressed most obviously through the Google Earth and Google Street View projects, in which satellite and ground-level surveillance facilitate the production of digitally adaptable maps of the world. Geographical digitization can operate at large and small scales. So, while Google Earth and Google Street View involve the orchestrated extraction of geographical data at large scales, Google Glass offered more personalised, and invasive, forms of surveillance, whereby anyone wearing Google Glass could help to map the world for Google in real time. But geographical digitization can take an even more sinister form. Zuboff (2019) describes the capacity of iRobot’s autonomous vacuum cleaner to develop detailed floor plan information of homes while cleaning rooms (p. 235). It is claimed that there could be quite a market for household floor plan data in the future (although I must admit to finding it difficult to understand what the commercial benefits of this data would be – “answers on a postcard please”).
Whether generating the God’s-eye perspective of Google Earth, the more dynamic mappings of Google Glass, or iRobot’s more intimate floor plan data, geographical digitization provides the spatial co-ordinates in and through which hyper-nudging can be most effectively mobilised: in order to be able to hyper-nudge geographically, you first need to have a digital version of the geographical world. It is these digital coordinates that enable the activation of geographically salient knowledge, not only at the right time, but also in the right place!
Google Street View Camera in action (Tech Guide)
The third and final dimension of the geographies of hyper-nudging is the spatial hyper-nudge. The spatial hyper-nudge is essentially a non-coercive prompt to action that is based on algorithmically determined correlations of personal and collective data sets, which are able to predict what you would like to do next and route your behaviour accordingly. Spatial hyper-nudges are distinct from more general forms of hyper-nudge to the extent that they are determined by spatial context (as opposed to the webpage that you are on), and target spatially salient behaviours. Spatial hyper-nudges could be used to maximise the behavioural possibilities of particular settings: for example, letting you know that a car park you are passing has available spaces, that a swimming pool is currently fairly quiet, or that a nearby restaurant is loved by one of your friends. But they could also be used to actively route you in certain directions. It has been suggested, for example, that the game Pokemon Go has been using play as a way of guiding people to particular, fee-paying, commercial establishments (such as McDonald’s) where Pokemon will be waiting (Zuboff, 2019). Google’s Sidewalk Labs is also promoting a new approach to tackling traffic congestion in cities, which combines AI and smart tech to guide people to available public and private parking spaces (a kind of Airbnb for parking); this tech can also, however, guide traffic wardens to lucrative areas of cities (Guardian, 2016).
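To make the mechanics of such a prompt a little more concrete, the decision logic of a spatial hyper-nudge might be sketched as follows. This is a purely hypothetical rule of my own devising; the data fields, thresholds, and wording are illustrative assumptions, not a description of any real platform’s system:

```python
from dataclasses import dataclass


@dataclass
class Context:
    """A hypothetical slice of spatially salient data."""
    place_type: str    # e.g. "car_park", "restaurant"
    occupancy: float   # 0.0 (empty) to 1.0 (full)
    distance_m: float  # user's current distance from the setting


def spatial_hyper_nudge(ctx, visit_history, friend_likes):
    """Return a prompt string, or None, for a nearby setting.

    A toy decision rule: prompt only when the user is close by and the
    setting has spare capacity, and only if either past behaviour
    (status quo bias) or social influence suggests the prompt is salient.
    """
    if ctx.distance_m > 250 or ctx.occupancy > 0.8:
        return None  # too far away, or no spare capacity to advertise
    if visit_history.get(ctx.place_type, 0) >= 3:
        # status quo bias: you have been to places like this before
        return f"You often visit a {ctx.place_type} around now - this one is quiet."
    if friend_likes.get(ctx.place_type, 0) > 0:
        # social influence: a contact's preferences stand in for your own
        return f"A friend recommends this {ctx.place_type} nearby."
    return None
```

Even this crude sketch makes the critical point visible: the prompt only works because surveillance has already supplied the location, the occupancy data, the personal history, and the social graph.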
While a systematic analysis of the particular ways in which geography and nudging are, or could be, combined in spatial hyper-nudging is still to be completed, the possibilities are intriguing. We may only just be seeing the beginnings of the ways in which spatial hyper-nudging can deploy the behavioural tools of defaults (assumed best places), social influence (the spatial behaviours of those you know), or status quo bias (you have been here before, so why not go again?). What is clear is that smart tech, the Internet of Things, and machine learning will enable nudging to enter the spaces of our everyday lives in ways it never has before.
Critical geographical perspectives on the hyper-nudge.
In this final section I want to offer some critical perspectives on hyper-nudging, which are specifically signalled by a geographical perspective. Much critical analysis of hyper-nudging focuses on the question of privacy (Yeung 2018). Concern over privacy emerges because of the large volumes of personal data that must be harvested to feed the algorithms that support hyper-nudges. Critically, however, Lanzing (2018) argues that the hyper-nudge involves a new horizon of privacy concerns, as infringements on informational privacy (perhaps pertaining to demographic profiles) are joined by those pertaining to decisional privacy. Decisional privacy is distinct from informational privacy because the data at stake does not just tell us about a person (data which can be used to predict behaviour); it reveals the actual way a person behaves in a given context. Access to decisional data (which can be both prompted and harvested by the smart devices that facilitate hyper-nudges) opens up new realms of behavioural experiments and controlled trials at previously unattainable scales. The operation of such trials, often without meaningful forms of consent, raises a series of troubling ethical issues (Jones and Whitehead, 2018). Concerns over decisional privacy have two primary geographical dimensions. First, whether it be through the biometric reconnaissance of wearable tech, or the domestic surveillance of the smart home, hyper-nudges have enabled the behavioural governance of decisions to enter the everyday spaces of life in ways that analogue nudges never could. So, whether it be your smart fridge nudging you to consume less food, or your smart car encouraging you to drive more sensibly (and secure better insurance premiums), hyper-nudging changes the geographical scope of soft paternalism and the monitoring of related behaviours.
Second, the GPS-activated technologies that are associated with hyper-nudging do not only capture decisional actions in particular places; they can also monitor spatial action itself. Hyper-nudge technologies can compromise decisional privacy to the extent that they can monitor and mould our spatial decision-making and the routes we take (see above). While not directly related to hyper-nudging, one of the most striking recent violations of spatial behavioural privacy has been perpetrated by Uber. In its infamous ‘Rides of Glory’ analysis, Uber was able to map users’ one-night stands (without their meaningful consent) on the basis of historic spatial patterns in customers’ use of the ride-sharing app. What Uber’s ‘Rides of Glory’ analysis reveals is the ability of GPS-enabled smart tech not only to reveal private spatial behaviours, but also to use patterns of spatial movement as proxies for the prediction of actual behaviours. This inferential capacity means that observed spatial behaviours can be used to reveal, and potentially mould, the behaviours that occur in between geographical movements.
A geographical perspective on hyper-nudging also highlights its exploitative potential. By being able to simultaneously cross-reference “geo-tagged” data concerning location, biometrics, and historical behavioural patterns, hyper-nudging has the potential to exploit people’s spatial behavioural vulnerabilities in previously unimaginable ways. Being able to nudge you in the direction of a fast-food restaurant on the basis of the time of day, proximity to a fee-paying eating establishment, biometric data on your level of hunger, and historical data on your penchant for hamburgers could enable nudges to wield new forms of emotional power. This, of course, is the realm of mobile life-pattern marketing (see Zuboff, 2019: 242-245). As such it is perhaps best termed “hyper-sludging” (the use of nudge techniques for commercial gain, see above). Nonetheless, it is clear that the proliferation of ‘context aware data’ facilitates the exploitation of human spatial vulnerabilities in enhanced ways. While this may improve the effectiveness of nudges, it is likely to significantly reduce our ability to resist them. We can log off from a computer, but it is much more difficult to opt out of the Internet of Things.
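By way of illustration only, the kind of cross-referencing described above can be reduced to a toy targeting score. The variables and weights here are entirely my own assumptions, sketched to show the logic of combining data streams rather than any actual marketing system:

```python
def sludge_score(hour, distance_m, hunger, purchase_history):
    """Hypothetical 'hyper-sludge' targeting score in [0, 1].

    Blends time-of-day salience, spatial proximity, a biometric hunger
    proxy (0-1), and historical purchase frequency, with equal
    (illustrative) weights on each signal.
    """
    time_salience = 1.0 if hour in (12, 13, 18, 19) else 0.2  # mealtime hours
    proximity = max(0.0, 1.0 - distance_m / 500.0)            # fades beyond 500 m
    history = min(1.0, purchase_history / 10.0)               # caps at 10 past purchases
    return 0.25 * (time_salience + proximity + hunger + history)
```

On this logic, a platform would fire its prompt only above some threshold score: a hungry regular 100 metres away at lunchtime scores far higher than the same person off-peak and across town. The critical point is how many distinct surveillance streams even this trivial calculation quietly presumes access to.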
Biometrics and context aware data: Personalised, pop-up advertising in the Film Minority Report (2002, 20th Century Fox, screen capture)
So far, we have seen how a geographical perspective on hyper-nudging can help to provide critical vantage points on how big data and behavioural science are insinuating themselves within the spaces of everyday life. It is also clear, however, that geography offers some scope for collective resistance to hyper-nudging. According to Zuboff (2019), the growth of the surveillance practices associated with hyper-nudging has progressed through the geographical logic of trespass. The logic of trespass has seen the big data industry moving as far as it can into the personal spaces of existence until political or legal opposition is encountered (at which point such opposition is strenuously resisted). Whether it be in relation to the extraction and sale of data concerning online behaviour, or the photographing of private homes as part of Google’s Street View initiative, trespass has provided a lucrative model of economic accumulation (particularly when individuals are often unaware of acts of trespass and feel disempowered to do anything about them). The ethos of trespass is tightly connected to the Californian Ideology that undergirds the big data industry, and suggests that only by doing things ‘without permission’ can the creative liberalism of the tech industry be realised (Barbrook and Cameron 1996). It appears, however, that when trespass takes on a physical geographical mode of action, opportunities for resistance are enhanced. In 2010, for example, residents of London Road, a cul-de-sac in Milton Keynes (UK), came together to prevent a Google Street View car from gaining access to the street. Local Councillor Edward Butler-Ellis explained the motivation behind the spatial protest in the following terms:
“The fact is they should have asked or at least let people know that they were photographing their houses. What really gets me is people have to opt out of being on it when they should have to opt in. A lot of older people without the internet are unaware that they are able to opt out of this.” (quoted in Barnett and Beaumont 2010).
Protests against Google in Kreuzberg Germany (Sean Gallup, Getty Images)
While London Road would eventually be captured on Street View, community action has seen residents of many similar streets and communities seeking to opt out of the forms of digital surveillance that provide the virtual infrastructure for hyper-nudging. The question remains, however, as to whether the notion of trespass can offer a spatial discourse in and through which more opposition can be generated to the embedding of digital surveillance and hyper-nudging into homes and communities. While there is no guarantee that resistance to the apparatuses of hyper-nudging will be more sustained in the real world than in the digital world, it is clear that as digital surveillance moves from the virtual to the real, new opportunities for spatial resistance will emerge.
Barbrook, R. and Cameron, A. (1996). “The Californian Ideology.” Science and Culture 6(1): 44-72.
Barnett, E. and Beaumont, C. (2010) “Buckinghamshire village in Street View fight against Google” The Guardian March
Dow Schull, N. (2016). “Data for life: Wearable technology and the design of self-care.” BioSocieties: 1-17.
Jones, R. and Whitehead, M. (2018) “‘Politics done like science’: Critical perspectives on psychological governance and the experimental state” Society and Space 36: 313-330
Lanzing, M. (2018). “‘Strongly Recommended’: Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies.” Philosophy & Technology.
Sunstein, C. R. (2019). On Freedom (Princeton University Press, Oxford).
Thrift, N. and French, S. (2002). “The automatic production of space.” Transactions of the Institute of British Geographers 27: 309-335.
Yeung, K. (2016). “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” TLI Think! Paper 28/2016.
Yeung, K. (2018). “Five fears about mass predictive personalization in an age of surveillance capitalism ” International Data Privacy Law 8(1): 258-269.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London, Profile Books.
Reflecting on Homo Politicus –
Provisional thoughts on the scope and implications of Behavioural Politics
Behavioural economics has undoubtedly enabled us to better understand why our liberal economic systems are failing, but this does not mean it is necessarily best placed to interpret the fault lines running through our political systems as well…
The basis and implications of re-thinking homo-politicus
The creative fusion of economics and psychology within the behavioural economics movement has had significant impacts on the way in which we think about the nature of personal decision-making. No longer do we seriously think of the consumer as a careful thinker, operating in oceans of social isolation, and devoted to unseemly acts of self-interest. Rather, we are beginning to acknowledge the general ‘irrationalities’ that characterise human behaviour, and are reforming our models of human decision-making, and modes of behavioural governance, accordingly. The political implications of the behavioural sciences are, comparatively, underdeveloped. While behavioural economics has led to a proliferation of so-called Behavioural Public Policies globally, such policies are primarily concerned with how behavioural insights can be used to generate more effective policies. But what are the actual political implications of behavioural science insights into the human condition?
To put things another way, if behavioural economics has given us pause for thought in relation to our depiction of homo economicus, what are its implications for homo politicus? The thesis I want to explore here goes further than this, however. The particular issues that a recast understanding of homo politicus raises—both in relation to the political life of citizens, and the broader functioning of liberal democratic systems—require the more general study of behavioural politics. Of course, the field of behavioural politics already exists, and has its own colourful history (see below). In its current form, however, behavioural politics is a fairly implicit affair, operating within the interdisciplinary shadow of behavioural economics. As a consequence of operating in this shadow, the figure at the centre of the contemporary behavioural politics imaginary tends to be a citizen consumer, whose everyday choices and challenges are interpreted through the lens of economic rationalities. This tends to mean that even where behavioural politics perspectives are evident, they are primarily interested in the nature of the political decisions of individual citizens, and less with the existential political implications of the behavioural sciences.
Although behavioural politics may operate in the shadow of its economics sibling now, it is important to acknowledge its historical antecedents. One of the defining moments in the history of behavioural politics came in the late 1930s with the work of German psychologist Kurt Lewin at the Iowa Child Welfare Research Station. During a series of experiments, Lewin and his graduate student (Ronald Lippitt) used different leadership techniques to produce political atmospheres—authoritarian, laissez-faire, and democratic—and to orchestrate the political disposition of groups. In Lewin’s attic laboratory we see the application of behavioural insights to form the conditions under which political practices and norms can be established—with subservience emerging from authoritarian situations, and deliberation and cooperation from democratic scenarios. What we can discern in Lewin’s experiment is the fleeting production of homo politicus, albeit within a contrived psychological experiment. But, for Lewin, the experiments were significant because they offered a basis for believing in the ontological existence of democratic life (Lezaun and Calvillo, 2014). These are important experiments to the extent that they demonstrated the potential power of behavioural science within the very genesis of political systems. They would, of course, be a precursor to the wider application of psychological insights to serve political goals in the post-Second World War period. The birth of ‘mind control’ science and psy-ops was, of course, a very particular fusion of psychology and politics (primarily concerned with strategic military advantage) and is not directly relevant to our discussion of behavioural politics here (apart from further emphasising the vulnerabilities of homo politicus).
Despite the historical significance of Lewin’s psycho-political experiments, I am not primarily interested here in psychology as a foundational impetus for democratic politics. Rather I am concerned with the impacts (both actual and potential) of the behavioural sciences on how we understand actually existing liberal political practices. But for now, let us consider what is at stake in pronouncing a behavioural politics perspective on the world. The work of Julie Cohen (2012) provides a helpful step-off point for a discussion of behavioural politics. According to Cohen, the liberal political and legal subject is defined by three key attributes:
- An autonomous rights-bearing subject, who is able to exercise those rights independently and regardless of whatever context they may find themselves within.
- A capacity for rational deliberation, which is again independent of context, and is based upon the ready availability of the truth.
- A transcendent subjectivity, which is independent of the materiality of the body.
(see Yeung, 2016: 17 for a more detailed discussion of these characteristics)
While fanciful, these assumptions are central to the liberal system of freedom, equity, and social stability. Of course, the behavioural sciences question the assumptions of autonomy, context independence, rationality, and immateriality associated with liberal subjectivity. Within behavioural economics, for example, the model of liberal human subjectivity is effectively reversed: autonomy is combined with an appreciation of social influence and unconscious prompting; context and choice environments are seen as prominent variables within human decision-making; the irrational is recognised as a key factor within observed behaviours; and the materiality of our bodies and environments is foregrounded. While much of this is now accepted wisdom, my concern is with what difference it makes to how we understand the behavioural dynamics of liberal democratic society. It is in the context of this question that I think a specifically behavioural political perspective can offer important analytical insights.
It is worth considering why greater critical attention has not been given to the figure of homo politicus. The answer to this question can, perhaps, be discerned in one of the most prominent moments when homo politicus has been subject to sustained critique. The publication of B.F. Skinner’s 1972 Beyond Freedom and Dignity embodied a direct behaviourist attack on liberal assumptions of subjectivity. In this controversial volume, Skinner targets homo politicus (in his words the “inner man”, or “autonomous man” (sic)) as the hollow subject of Western philosophy and democracy. According to Skinner,
‘The function of the inner man (sic) is to provide an explanation which will not be explained in turn. Explanation stops with him. He is not a mediation point between past history and current behaviour, he is a centre from which behaviour emanates’ (original emphasis) (1972: 14)
Furthermore, Skinner observes,
‘He [the inner man] initiates, originates, creates, and in doing so he remains, as he was for the Greeks, divine. We say he is autonomous—and, so far as a science of behaviour is concerned, that means miraculous’ (ibid: 14).
In locating homo politicus within the ancient norms of Greek philosophy, Skinner’s volume may indicate why it has been so difficult to critically scrutinise related subjective assumptions: to do so would appear to destabilise the ancient democratic norms which coevolved with this figure. But it is, of course, perfectly possible to hold firm to the normative prerogatives of democracy, which seeks to maintain justice and dignity within the human condition, while questioning its subjective foundations.
The nascent field(s) of Behavioural Politics
While those working on what I would term behavioural politics would not necessarily self-identify with the term, it is possible to discern at least three branches of this inchoate movement.
I refer to the first branch as those working on Applied Behavioural Politics. Applied Behavioural Politics itself takes two main forms. First is work that seeks to apply the insights of the behavioural sciences to public policy issues (i.e. climate change, public health) (see Oliver 2013) (this body of work tends to go by the name of Behavioural Public Policy). Second is research that focuses on how behavioural insights can address specific political problems (i.e. low voter turnout; lack of civic participation) (see John, 2011).
The second branch can best be described as Critical Behavioural Politics. Work in this area (to which I have contributed) has explored the potential negative impacts of behavioural public policies on political life, considering the ethical implications of applying the behavioural sciences (often targeted at the collective unconscious) and their consequences for personal autonomy and political accountability (see Jones et al 2013; Leggett, 2014; Lepenies et al 2018).
The third branch is Analytical Behavioural Politics. Related work in this area is primarily concerned with the implications of the behavioural sciences for the underlying assumptions and practices of liberal democratic society. Analytical behavioural politics is concerned with the impacts of behavioural public policies, but also considers the broader implications of behavioural insights for how we might think about democracy, freedom, state intervention, and citizenship (see Button, 2018; Sunstein, 2019; Whitehead et al 2018).
In the remainder of this post, I will consider the application of analytical behavioural politics and its potential implications.
Mobilising Behavioural Politics – on Freedom and Bounded Democracy
While the scope for studying behavioural politics is broad (and would certainly, at this moment, be relevant within analyses of identity politics and populism, for example), in this section I will focus on its particular pertinence to questions of freedom and democracy. Perhaps the clearest statement of the parameters of an analytical behavioural politics of freedom and democracy is offered by Yuval Noah Harari. In a recent piece for The Guardian, Harari reflects upon the myths of freedom that are central to liberal democracies. According to Harari, ‘the liberal story is flawed [because] it does not tell the truth about humanity’ (2018). Harari observes that liberalism is founded on a belief that humans have free will and that political systems should be constructed in ways that preserve the freedom of that will. While the liberal call to preserve free will rests in part on a desire to protect human dignity, it is also predicated on what Tobias (2005) calls ‘Rational Agency Freedom.’ In the terms of Rational Agency, it is not just that humans have an inherently free will (to choose what they will), but that freedom is a gateway to unlocking forms of rationality which are unknown, and largely unknowable, to governing authorities. The presumption of rationality is what makes liberal freedom, in its fullest form, desirable. But, Harari argues,
‘Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? According to the theologians, it is reasonable for God to do so, because our choices reflect the free will of our eternal souls, which are independent of all physical and biological constraints’ (2018).
For Harari, then, the idea of free will—and associated presumptions of rationality—has been inherited from Christian theology to provide liberalism with a secular basis for attributing rights and responsibilities. But, as behavioural science has consistently revealed, there is no scientific evidence to suggest the existence of what Skinner described as an inner person, free from physical and biological constraints. From a behavioural politics perspective, Harari’s reflections are significant because they reveal the political dangers associated with presumptions of free will and Rational Agency Freedom. In an age of big data, smart tech, and biometric information, Harari outlines the emerging opportunities for governments and corporations to manipulate human behaviour at scale (this is what Zuboff (2019) has described as surveillance capitalism). For Harari, ‘[i]f governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will’. In this context, behavioural politics is best thought of as an analytical perspective that focuses on the political realities facing homo sapiens rather than the myths of homo politicus.
The most extensive body of work in the field of behavioural politics has been developed by Cass Sunstein. In his 2014 volume Why Nudge?, Sunstein challenges the epistemic foundations of Mill’s Harm Principle, which has been so important to delimiting legitimate and illegitimate forms of governmental action in liberal society. A central part of Sunstein’s argument rests on the idea that if choice can be preserved (through nudging techniques) then governments may have a role in regulating harm-to-self actions (or internalities) as well as harm-to-others (externalities). More recently, Sunstein has reflected on the implications of the behavioural sciences for liberal norms of freedom. In On Freedom, Sunstein (2019) suggests limits on prevailing liberal assumptions about freedom on two grounds: 1) on the basis of evidence suggesting that the clear antecedent preferences associated with acting freely are not as common as we might think; and 2) that the world in which we live is structured in such a way that acting in our own best interests (without guidance) is often difficult. Sunstein ultimately argues that established notions of liberal freedom need to be revised to reflect the cognitive and contextual realities that make navigating through our lives freely so difficult. As an intervention in behavioural politics, Sunstein’s work is significant because it recognises the constitutional disjuncture that exists between liberal political practices and structures, and the behavioural realities that inhibit the achievement of freedom in ostensibly free societies.
A further intervention within analytical behavioural politics of note is provided by Mark E. Button. Button’s work is significant for connecting the discovery of bounded human rationality within the behavioural sciences with the subsequent ‘bounding of democracy’ within the political sphere (2018). Reflecting on the practices of a behaviourally-informed “nudging-state”, Button raises concerns that while such forms of intervention may be justified in terms of welfare, they have potentially negative connotations for political agency and civic capacity (2018: 1035). Button is concerned that the forms of (soft) paternalism associated with behavioural public policy can erode political agency. In particular, Button reflects upon the tendency of behavioural policies to emanate from unelected behavioural experts, and to focus on individual as opposed to collective action. In a telling reflection, Button observes,
‘Today’s behavioralists, in contrast to their academic ancestors in the 1950s and 1960s, are less interested in explaining political behavior (as individual and collective phenomena) and far more concerned with orchestrating private (often consumer-oriented) behaviors to serve individual and social welfare ends’ (2018: 1037)
It is here that we begin to see the costs of approaching political problems from a behavioural economics perspective. Button argues that emerging systems of behavioural government fail to perceive subjects as citizens engaged in public life and collective acts of freedom. While making decisions easier (by removing the cognitive burden of having to think and deliberate about them) may make sense in relation to more economically-oriented actions, when it comes to politics, getting actively involved in, and even contesting, courses of action is rather the point of being political.
Within the work of Button we sense what is missing from more behavioural economic approaches to questions of freedom. While the behavioural economist may be satisfied that the preservation of individual choice is enough to safeguard freedom, behavioural politics, almost inevitably, raises broader concerns over consent, legitimacy, and collective forms of action. Ultimately, Button observes that “[c]itizens in their public capacity as agents of political freedom are missing from the latest integration of behavioral science and public policy” (2018: 1040).
Analytical behavioural politics thus draws critical attention to the democratic consequences of certain constructions of citizenship within behavioural state actions. It does so, however, while acknowledging that the behavioural perspective has much to offer accounts of the political. As Button astutely observes,
‘One of the advantages of taking the behavioral sciences seriously within the design and conduct of democratic deliberative practices is that we can purchase greater psychological realism without sacrificing democratic aspirationalism’ (2018: 1043)
Psychological realism without sacrificing democratic aspirationalism captures the essence of the analytical behavioural politics I have sought to outline here. Just because behavioural economics has enabled us to better understand why our liberal economic systems are failing does not mean it has the answers to the failings of our political systems as well. Indeed, it could be argued that behavioural economics (in the form of behavioural public policy) could be contributing to some of the problems of liberal democratic societies. Could behavioural politics offer fresh insights that illuminate emerging political developments (including declining rates of political participation, identity politics, and the changing norms of personal freedom) and offer new ways of invigorating liberal political systems? Maybe.
Button, M. E. (2018). “Bounded Rationality without Bounded Democracy: Nudges, Democratic Citizenship, and Pathways for Building Civic Capacity ” Perspectives on Politics 16(4): 1034-1052.
Cohen, J.E. (2012) Configuring the Networked Self (Yale University Press, New Haven)
Lezaun, J. and Calvillo, N. (2014) “In the Political Laboratory: Kurt Lewin’s Atmospheres” Journal of Cultural Economy 7: 434-457
Harari, Y.N. (2018) “The myth of freedom” The Guardian
John, P., Cotterill, S., Moseley, A., Richardson, L., Smith, G., Stoker, G. and Wales, C. (2011) Nudge, Nudge, Think, Think (Bloomsbury, London).
Jones, R., Pykett, J. and Whitehead, M. (2013) Changing Behaviours: On the Rise of the Psychological State (Cheltenham, Edward Elgar).
Leggett, W. (2014) “The politics of behaviour change: Nudge, neoliberalism, and the state” Policy and Politics 42: 3-19.
Lepenies et al 2018
Oliver, A. ed. (2013) Behavioural Public Policy (Cambridge, Cambridge University Press).
Sunstein, C. R. (2019). On Freedom (Princeton University Press, Oxford).
Sunstein, C.R. (2014) Why Nudge? The Politics of Libertarian Paternalism (Yale University Press, London)
Tobias, S. (2005). “Foucault on Freedom and Capabilities” Theory, Culture & Society 22: 65-85.
Yeung, K. (2016). “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” TLI Think! Paper 28/2016.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. (London, Profile Books).
This blog was written thanks to support from the Independent Social Research Foundation’s Mid Career Fellowship programme 2019
‘Just like unlocking the human genome helped identify genetic traits that allow for personalized medical advice, we can think of machine learning as the next step in unlocking a “behavior genome.” By factoring in personality traits, situational features, and timing, we can better persuade people who want to be persuaded’ (Risdon, 2017).
Introducing data, datafication and nudges
For some time now, I have been interested in the actual and potential connections that exist between nudges and smart tech. By nudges I am referring to techniques, inspired by the behavioural sciences, that use subtle prompts to encourage desired behaviours (for example, encouraging the payment of those troublesome tax bills by letting people know that others normally pay their tax in full). By smart tech I am referring to that bewildering array of digital infrastructure, algorithms, social media platforms, and wearable technology that is able to tell us more and more about ourselves (and the activities of others) in salient ways, and in real time. There are already myriad examples of how nudges can use social media vectors to encourage behavioural goals, and of how wearable technology can capture data that informs nudge techniques and can be a vector for nudging itself.
As I have started to look more carefully at these emerging connections, I have been drawn to a deeper set of alignments that undergird the nudge/smart tech amalgam. These deeper associations relate to the processes of datafication, or more specifically, what van Dijck terms dataism (2014). Dataism is an ideology that has emerged in the wake of the capacity to gather increasingly vast quantities of data on human behaviour and social life. Datafication reflects the technical and sociological processes that have enabled dataism: namely the proliferation of new technological capacities to sense, monitor and record everyday life; and the sociological propensity to plug ourselves into such data production systems, and to open up our social life to related dataveillance. Dataism is about more than technical processes though: it propounds a subtle shift in our ontological and epistemological universe (see Beer, 2018). This shift is characterised by a suggestion that data (in all of its inevitably reductionist forms) is ontological: that is to say, it not only gives us indicators as to what is going on in the world, but is a core feature of the nature of reality itself. The epistemological component of dataism emerges from the analytics industry, which suggests that, when properly processed, data can not only reveal the shape of reality, it can explain it!

What I provisionally consider here is the difference that thinking about connections between nudges and smart tech as an amalgam of behavioural science and dataism can make to critical analyses of the field. I think what is fundamentally at stake is the difference between thinking of smart tech as an upscaling vehicle for the delivery of nudges and understanding the fusion of dataism and behavioural science as a more fundamental shift in collective understandings of political, economic and social life.
Emerging synergies: From the Age of Data Utility to Augmentation
There is already a body of industry-based reflection on the connections between nudge and dataism. In a fascinating piece in the Behavioural Scientist, Chris Risdon (2017) (head of behavioral design at Capital One) succinctly explains the potential and significance of fusions of the behavioural and data sciences. According to Risdon, in an era of algorithmic machine learning, smart tech facilitates enhanced scaling and matching of behavioural interventions. In other words, smart tech can facilitate the widespread up-scaling of nudges (which could now reach large cohorts very quickly). At the same time, the gathering of big sets of data concerning human behaviour, combined with algorithmic machine learning, means that nudges can increasingly be matched to those who are most in need of, or indeed most susceptible to, them. In a 2015 Deloitte Review report, Guszcza outlined a broader set of dialectics that could emerge as behavioural and data science combine:
- The predictive analytics of big data will offer ways of delivering nudges to the most relevant parties at the most salient times. According to Guszcza, this will mean that nudges will no longer have to be delivered in the form of a one-size-fits-all approach at population levels (avoiding the unintended reactance, spill-over effects, and inefficiencies this would involve).
- The use of behavioural insights within data science could facilitate a shift from predictive analytics (what a person is likely to do in the future) to the shaping of behaviour (what Yeung (2016) describes as ‘big data driven decision guidance techniques’). Risdon (2017) has suggested this shift reflects an epochal move in data analytics from the Age of Utility (where data helps to make our lives easier/smarter) to the Age of Augmentation (when data can actually be used to more effectively shape our conduct).
- In addition to behavioural science enabling data science to shape human conduct, data itself could help to produce behaviour change products, in the form of highly personalised data feedback.
Ultimately, the multiple potential fusions of data and behavioural sciences appear to promise a shift from static nudges (the kinds that are built into the design of buildings, forms, or defaults) to more dynamic systems (which are highly personalised and liable to machine learning adaptation) (see Yeung, 2016).
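As a rough illustration of the ‘matching’ logic described above, the following sketch ranks a population by a toy susceptibility score and delivers a prompt only to the most receptive recipients. All names, weights, and numbers here are invented; in a real system the scoring rule would be a trained machine-learning model rather than the simple heuristic used here.

```python
# Hypothetical sketch: matching nudges to the most receptive recipients,
# rather than delivering them one-size-fits-all at population level.
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    past_response_rate: float      # fraction of earlier prompts acted upon
    hours_since_last_prompt: float


def susceptibility(p: Person) -> float:
    """Toy stand-in for a predictive model: weight past responsiveness
    by a recency penalty (recently-prompted people are down-weighted)."""
    recency_penalty = 1.0 if p.hours_since_last_prompt > 24 else 0.3
    return p.past_response_rate * recency_penalty


def match_nudges(people: list[Person], budget: int) -> list[str]:
    """Send the prompt only to the `budget` most susceptible people."""
    ranked = sorted(people, key=susceptibility, reverse=True)
    return [p.name for p in ranked[:budget]]


population = [
    Person("A", 0.8, 30),   # responsive, not recently prompted
    Person("B", 0.9, 2),    # responsive, but prompted two hours ago
    Person("C", 0.2, 48),   # rarely responds
]
print(match_nudges(population, budget=1))  # → ['A']
```

The point of the sketch is structural rather than technical: once a scoring model exists, targeting the ‘most susceptible’ is a one-line sort, which is what makes the dynamic, machine-learning-adapted nudge so readily scalable.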
Deepening the relationship: Performance Enhanced Nudging
There are other interesting features of the emerging relationship between nudge and data science that: 1. suggest their fusion will extend and deepen in the coming years; and 2. indicate terrains of future critical analysis. The potentially extended and deepened interaction between nudges and data science is based on a series of natural synergies. First, both nudge and the data science movement are in part inspired by an understanding of the human condition in relation to the qualified-self. The qualified self of behavioural science is based on a perception of the human that recognises the cognitive restrictions and limited forms of willpower exhibited by people. Within data science, the notion of the qualified-self recognises the limits of knowledge that are associated with embodied senses: limits which can, of course, be overcome within the dispassionate monitoring of biological and social life promised by the quantified-self (Davidson, 2015). Second, there remain significant, if still relatively under-developed, synergies between the behavioural insights on which nudges are based and the technological potentials of data science. For example, the behavioural sciences behind nudge techniques recognise that a key to stimulating desired behaviour is salience (namely, that a behavioural prompt is relevant to the target audience/individual). Dataism promises new horizons of salience science, as data is used to provide increasingly personalised prompts and feedback that can be targeted at the most influential times. Furthermore, one of the most powerful behavioural insights associated with nudges is the recognition of the power of social influence (and, in particular, of peer-to-peer pressure and herd instincts). More static nudges have, in the past, attempted to change behaviours by informing households how their patterns of energy consumption compare with neighbourhood averages.
In an age of algorithmically fine-tuned social media channels, social influence can now be utilised at ever greater scales and with increasing levels of salience. Rather than knowing how your behaviours compare with anonymous local residents, you can see how your conduct relates to named friends and peer networks. A final, if often neglected, synergy between the behavioural and data sciences relates to the process of de-datafication (Dow Schüll, 2016). One of the limits associated with datafication, and the related Quantified Self movement, is that while it can produce data at increasing levels of intensity, data is not always the best way of promoting behavioural change. It is in this context that future collaborations between nudge technicians and the big data industry will increasingly see behavioural science exploring ways in which data streams can be de-datafied in order to make their meaning more socially relevant. This could pave the way for the proliferation of what Dow Schüll (2016) has described as highly personalised micro-nudges.
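The move from anonymous neighbourhood comparisons to named-peer comparisons, and the accompanying de-datafication of raw readings into socially salient messages, can be sketched minimally as follows. The function, names, and figures are all invented for illustration; a real platform would draw on live data streams and far richer peer graphs.

```python
# Hypothetical sketch of a personalised social-comparison micro-nudge:
# a raw data stream (weekly energy readings) is "de-datafied" into a
# socially salient, named-peer message rather than a bare number.

def social_comparison_nudge(user: str, usage_kwh: float,
                            peers: dict[str, float]) -> str:
    """Turn numeric readings into a peer-comparison prompt. A static
    nudge would compare against an anonymous neighbourhood average;
    here the comparison is with named friends."""
    peer_avg = sum(peers.values()) / len(peers)
    frugal_peers = [name for name, kwh in peers.items() if kwh < usage_kwh]
    if usage_kwh <= peer_avg:
        return f"{user}, you used less energy this week than most of your friends."
    return (f"{user}, you used {usage_kwh:.0f} kWh this week, more than "
            f"{', '.join(frugal_peers)} (friends' average: {peer_avg:.0f} kWh).")


peers = {"Sam": 55.0, "Priya": 60.0, "Jo": 80.0}
print(social_comparison_nudge("Alex", 72.0, peers))
```

Note how little of the underlying data survives into the message: the prompt is deliberately qualitative and relational, which is precisely the de-datafication dynamic described above.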
Mapping a critical terrain
If all of this suggests that the behavioural and data sciences are likely to become functionally integrated in the future, this integration opens up significant terrains of future critical analysis. The more cyber-utopian interpretations of nudges and dataism have not yet been matched by a fully formed critical response. There is, of course, much critical work on questions of data surveillance, privacy, and data mining (see van Dijck, 2014; Beer, 2018; Zuboff, 2015). But this work has tended to focus on the ways in which dataism can predict future behaviours (particularly in relation to patterns of consumption), as opposed to shape behaviour. While the potential of behaviour modification is often intimated in analyses of the predictive potential of dataism, there remains only limited systematic critical reflection on this opportunity space.

One exception has been the pioneering work of Yeung on hypernudging (2016). According to Yeung, “Big Data driven nudging is […] nimble, unobtrusive and highly potent, providing the data subject with a highly personalised choice environment – hence I refer to these techniques as ‘hypernudge’.” For Yeung, the hypernudge is the idealised meeting place of data and behavioural science: the point at which these sciences find an optimal collaborative form of expression. It is, for want of a better term, nudging on steroids. But if the hypernudge reflects an unprecedented escalation of the reach and impact of both the behavioural and data sciences, it is clear there is much to critically scrutinise in this emerging action space. Nudging has been critiqued as a form of opaque manipulation of human action, and a largely uncontestable intrusion by the psychological state into people’s everyday lives (Whitehead et al, 2017). The hypernudge raises the critical stakes further: the fairly static exploitation of the collective unconscious within nudges is combined with the intellectually opaque territory of algorithmic government (Zuboff, 2015).
Furthermore, the governmental orchestration of behaviour change begins to move from the public sphere into the proprietorial realms of data brokers and platforms associated with the corporate world. According to Yeung these developments are exposing “[t]he inability of the liberal political tradition to grasp how commercial applications of Big Data driven hypernudging implicate deeper societal, democratic and ethical concerns” (Yeung, 2016: 17).
For some time now, nudge techniques have provided policy-makers and behaviour change entrepreneurs with a basis for challenging liberal norms of freedom. In particular, nudges have provided the grounds upon which the harm-to-others principle—the historical yardstick against which legitimate government intervention into citizens’ everyday lives has been determined—could be challenged. If personal freedom can be preserved (largely through the maintenance of some form of choice), then advocates of nudge suggest that there is renewed scope for governmental intervention in harm-to-self issues (such as personal health and financial matters). Within these nudge assumptions is an often overlooked, but not insignificant, challenge to liberal norms regarding the basis of personal freedom. But static nudges can still feel well-intentioned and fairly harmless: though hidden within the choice architectures of everyday life, they were still subject to the usual checks and balances of liberal government and appear reasonably easy to resist. The fusion of nudges and dataism clearly changes the nature of the balance between behaviourism and freedom. Hypernudges are more potent, opaque, and persistent than their static counterparts. They have the ability to be constantly revised and reapplied, while hiding behind the proprietorial armature of emerging forms of surveillance capitalism (see Zuboff, 2015). Hypernudging also has the ability to reach new scales of influence that traditional nudges could only dream of.
While there are now critical frameworks of analysis that have begun to make sense of the extractive economic logics of dataism (see for example Zuboff’s (2015) theory of surveillance capitalism), critical political analyses of the fusion of the behavioural and data sciences are limited. Building on Yeung’s pioneering work on hypernudges, I think that there are several lines of inquiry that a critical political analysis of the dialectics between nudges and dataism could take:
- At the most obvious level, there is the direct impact that nudges and dataism are having on democratic elections (from Facebook’s Voter Megaphone Project, to the operations of Cambridge Analytica).
- There are important questions of algorithmic accountability that the hypernudge raises. If the fusion of behavioural and data science is likely to involve new public-private partnerships, how are the opaque operations of proprietorial algorithms going to be held to democratic account?
- The operations of hypernudges raise important questions concerning implicit exchanges and transactions in personal freedom. The Big Data Industry’s economic model is premised on the exchange of personal privacy for free data services. But in the world of hyper- and micro-nudges these exchanges of personal freedom become more complicated. We are not only trading privacy, but autonomy, as our data can be fed back to us through choice architectures and prompts that seek to actively shape our decision-making and conduct. It is important to ask critical questions about the conditions under which these exchanges of autonomy for data and other services (perhaps reduced health insurance premiums if we allow ourselves to be quantified and nudged) are made, and the extent to which they corrode actually existing liberal freedom.
- As dataism provides greater certainty about our actions, and nudges are able to shape our conduct more directly, it is also important to consider the combined impacts of these processes on trust and freedom within the contractual interactions of everyday life. According to Zuboff (2015), trust between different social actors is a critical condition for the effective functioning of society and the preservation of freedom. Without trust, totalitarianism may become inevitable. Trust is, of course, a condition of freedom, and the ability to reasonably err from social agreements made in trust is a vital condition for human autonomy and flourishing. But what if the certainty of data science and the power of hypernudges do not require trust, just compliance (achieved through full surveillance and behavioural manipulation)? What then for human freedom?
This list of research areas is, of course, by no means exhaustive. It is, however, indicative of the significant scope and ethical importance of research in this area. It is also important to recognise that the fusion of data and behavioural sciences does not inevitably lead to a form of big data cyber-dystopia. The work of Acquisti, Brandimarte, and Loewenstein (2015) has pursued a more progressive dialogue between the behavioural sciences (particularly behavioural economics) and data science. Their work has sought to apply the insights of the behavioural sciences to better understand people’s susceptibility to a loss of online privacy. It is, perhaps, ironic that behavioural science can simultaneously provide the basis for undermining freedom in an age of smart tech and a framework for understanding the human frailties that online privacy exploitation preys upon, which can be used to guard against the dangers of dataism.
It is to be hoped that the coming years will see both the growth of critical studies of the interactions between behavioural and data sciences, and the more progressive fusion of these two powerful epistemological projects.
Acquisti, A., Brandimarte, L. and Loewenstein, G. (2015) ‘Privacy and human behaviour in the Information Age’ Science 347: 509-514.
Beer, D. (2019) The Data Gaze (London, Sage).
Davidson, J. (2015) ‘Plenary address: A year of living ‘dangerously’: Reflections on risk, trust, trauma and change’ Emotion, Space and Society 18: 28-34.
Dow Schüll, N. (2016) ‘Data for life: wearable technology and the design of self-care’ BioSocieties 1-17.
Guszcza, J. (2015) ‘The last mile problem: How data science and behavioural science can work together’ Deloitte Review Issue 16: 65-79.
Risdon C. (2017) ‘Scaling Nudges with Machine Learning’ Behavioural Scientist October.
van Dijck, J. (2014) ‘Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology’ Surveillance and Society 12: 197-208.
Whitehead, M. et al. (2017) Neuroliberalism: Behavioural Government in the 21st Century (Abingdon, Routledge).
Yeung, K. (2016) ‘“Hypernudge”: Big Data as a Mode of Regulation by Design’ TLI Think! Paper 28/2016.
Zuboff, S. (2015) ‘Big other: surveillance capitalism and the prospects of an information civilization’ Journal of Information Technology 30: 75-89.