Facial recognition cameras are bringing the datafication of the internet to the real world, threatening our right to privacy and public space.
Last month the Financial Times revealed that facial recognition cameras had been used to identify pedestrians in the Granary Square area of the new Kings Cross complex in London between 2016 and 2018.
Argent, the developer and asset manager charged with the design and delivery of the site, admitted to the use of two CCTV cameras equipped with biometric technology to map facial features. These cameras then ran this information through a database supplied by the Metropolitan Police Service to check for matches.
The estate at Kings Cross is a privately owned complex, but one that is used by thousands of members of the public every day. In addition to over 2,000 homes, the area hosts a variety of shops, hotels and music venues, as well as the world-renowned Central Saint Martins School of Art.
Rightly, there has been a public outcry over the covert use of this technology. The Information Commissioner’s Office (ICO) has since launched an investigation, whilst London Mayor Sadiq Khan has written to the development’s CEO seeking an explanation.
Although it has garnered significant press attention, Kings Cross is just one example of how facial recognition technology is being rolled out across “public” spaces in London. Last month the Evening Standard revealed that the City of London Corporation has granted planning permission for advanced surveillance cameras in the Barbican Centre, where 16 of the 65 new cameras will be capable of recognising faces and will possess an invasive two-way audio feature – “potentially allowing controllers to listen in”.
Meanwhile, similar projects have been approved at Liberty’s department store in Soho and Hay’s Galleria on the South Bank. The Financial Times also exposed proposals for the installation of privately owned facial recognition cameras across the 92-acre estate at Canary Wharf.
The same piece noted: “convenience stores such as Budgens, and supermarkets – including Tesco, Sainsbury’s and Marks and Spencer – all have cameras that are already, or soon will be, capable of facial recognition”.
Just as we have become inured to the estimated 500,000 CCTV cameras operating across London today, facial recognition may soon become a ubiquitous feature of everyday life in the 21st century.
So what’s the logic?
Despite the claims of private companies that this technology is introduced solely ‘to help ensure public safety’, research from the independent not-for-profit activist group Big Brother Watch shows that, on average, the current biometric cameras identify individuals incorrectly over 90 percent of the time.
In its research, Big Brother Watch found that the Metropolitan Police’s facial recognition matches were 98 percent inaccurate, whilst the same technology used by South Wales Police failed to find a correct match on 91 percent of occasions.
So if the current technology isn’t useful for identifying criminals, why are companies rolling it out across the capital? Do private companies really face security issues so pressing that the testing of facial recognition technology is a necessary breach of individual privacy?
Another explanation for this insistence lies in the monetisation potential of the valuable data that surveillance technology collects.
In our modern economy, data has become a prized economic resource. Once aggregated and refined, collections of data can create powerful predictive models of human behaviour, which provide valuable information to instruct business decisions.
In her recent book, ‘The Age of Surveillance Capitalism’, Shoshana Zuboff has popularised discussion of how this plays out in our modern digital economy. She explains how the value of this new resource has led to a race to extract data, with tech companies utilising digital platforms as a medium of user engagement, and thus a mechanism of data generation.
This has led to covert surveillance techniques in the online marketplace, where sites used by members of the public are owned and surveyed by private companies. It has now become commonplace to hear of Google using your individual searches to sell targeted ads, Twitter promoting content on your feed based on who you follow, or Facebook data being scraped to enhance political campaigns.
But while the centrality of data to the business models of tech companies is well-documented, the collection of data in privately owned physical space is a relatively unexplored phenomenon.
Just as tech firms control much of the publicly-used internet, ownership of open space is increasingly being taken on by private corporations. In 2017 The Guardian published a map of what it called the growth of ‘pseudo-public’ spaces in London – ‘open and publicly accessible locations that are owned and maintained by private developers or other private companies’.
In the same way that the ownership of online platforms is used as space to collect personal information, these physical spaces could soon become the real-world data mines of private firms.
Here, visual surveillance plays a leading role. If a company is able to identify even the rough demographic of pedestrians in areas such as Kings Cross, it can sell this as valuable information to businesses, enabling informed decisions on issues such as location, opening hours or advertisements.
More advanced facial recognition technology may be able to identify the individual, and advise companies looking to tailor their products in real-time.
Once perfected, the ultimate potential here is for facial recognition to match an individual to their digital online profile, connecting the physical data with its digital counterpart. This would enable fine-grain control for firms, altering displays such as digital advertisements in response to detailed information on their particular audience.
In short, just as big tech companies utilise data to tailor our interaction with the digital world, facial recognition technology prefigures a panoptic physical world of administered perception.
Sounding like a far-fetched dystopian nightmare? It isn’t.
A recent report from the Carnegie Endowment for International Peace found that ‘AI surveillance technology is spreading at a faster rate to a wider range of countries than experts have commonly understood’.
According to the report’s executive summary, ‘at least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes. This includes: smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries)’.
The most prominent distribution of this technology is by Chinese companies, which supply AI surveillance technology in sixty-three countries. Thirty-two countries are supplied with similar technology by companies based in the US.
There are many leading examples of how this surveillance technology is used for datafication.
In the US, a company called Cooler Screens has embedded products with sensors and digital screens to individually target advertisements at customers. Walgreens is now rolling out this technology, which seeks to analyse a customer’s age, gender, time spent browsing, and even emotional responses.
Similar technologies were discovered in malls in Canada in 2018, when the media outlet CBC reported that facial recognition had been used to predict the approximate age and gender of customers. The technology was only discovered when ‘a visitor to Chinook Centre in south Calgary spotted a browser window that had seemingly accidentally been left open on one of the mall’s directories’.
Perhaps most relevant for exploring the datafication of facial recognition is the app FindFace. FindFace, which launched in Russia in 2016, made it possible to find an individual’s social media profile simply by capturing an image of their face. In 2016, the Independent reported that ‘FindFace’s creators are working with Moscow police to integrate their software into the city’s CCTV camera network, so authorities will be able to detect wanted suspects as they walk down the street’. FindFace is no longer open for public use, but a similar company SearchFace is now operating in Russia.
Each of these cases demonstrates the capability of surveillance cameras to advance mechanisms of datafication into the real world, analysing, contextualising and commodifying our physical data.
The threat this poses to our right to privacy is obvious, but it also undermines the agency of the individual by denying the notion of truly public space. At its heart, this marketisation threatens to dispossess humans of their independence in the name of ‘convenience’, and by doing so challenges the very notion of individual freedom.
So what is to be done?
In the UK, the use of facial recognition technology to monitor members of the public for commercial purposes is illegal without prior notice. This is in accordance with the EU’s General Data Protection Regulation, which attempts to roll back the datafication of the social commons by requiring private companies to obtain explicit consent from individuals before collecting sensitive personal data.
In cases such as Kings Cross however, the quoted intention is not commodification but securitisation: ‘to help the Metropolitan Police and British Transport Police prevent and detect crime in the neighbourhood’.
According to a High Court ruling issued in Cardiff earlier this month, the use of facial recognition by the police is permissible, despite judges acknowledging that this technology interferes with fundamental privacy rights.
As it stands, there are no checks and balances on the use of facial recognition by private firms, so long as it is deployed as part of a security strategy.
Yet as the prevalence and sophistication of surveillance technology come to light, many are beginning to challenge this reality.
In July, the Commons Select Committee on Science and Technology published a report by the Biometrics Commissioner and the Forensic Science Regulator which called for ‘a moratorium on the current use of facial recognition technology’ and noted that ‘no further trials should take place until a legislative framework has been introduced and guidance on trial protocols, and an oversight and evaluation system, has been established’.
In a similar vein, the human rights group Liberty has launched a petition calling on the Home Secretary to ban the use of facial recognition technology in public spaces. It also intends to appeal the High Court ruling that found police use of biometric surveillance lawful.
In the United States, Bernie Sanders is the first 2020 Democratic presidential candidate to promise a ban on all use of facial recognition technology by the police, putting pressure on others to take a tougher stance against covert surveillance.
These responses are the start of a much-needed backlash against invasive technology, and the potential datafication of the public sphere. Just as a fierce struggle has retrospectively begun over the human right to autonomy in the digital commons, a battle must now begin over our individual rights in physical space.
We must draw definitive red lines when it comes to the extraction and use of all kinds of personal data, whilst fundamentally reimagining the use and control of privately owned public spaces.
To democratically channel the power of modern technology, we need new models of data and land ownership fit for our age. Failing this, our valuable open spaces like Kings Cross may yet become the real-world data mines of 21st century surveillance capitalists.
This article has been republished under a Creative Commons license with attribution to the author Freddie Stuart and Open Democracy.