With an estimated five million CCTV cameras across its urban centres, Britain is often said to be one of the most surveilled countries in the world (actually, it isn’t: China and the US are way ahead). But what if those CCTVs started getting smart… and reported you for not wearing a mask when going into a shop under lockdown conditions?
That’s just one of the many privacy arguments the UK has been having with itself these past few months–arguments made even more contentious by signs that the British state is increasingly interested in the potential of AI, facial recognition and machine learning to monitor and control citizens.
Social distancing
The CCTV fear was fanned by certain UK newspapers off the back of news last month that a company called CCTV.co.uk will use AI to determine whether a person walking towards a shop’s doors is wearing a mask, in a bid to help staff tackle such ‘difficult’ customers. In parallel, another UK firm, Vivacity, which has deployed over 1,000 sensors in cities including London, Oxford and Manchester, has reportedly been asked to repurpose sensors originally introduced to track the flow of traffic, cyclists and pedestrians and monitor how roads are being used, so that they instead help monitor COVID social distancing.
Though its CEO reassured a local Kent newspaper that his firm’s tech has only been trained to recognise what a pedestrian looks like, as opposed to a cyclist, van or truck, and that the aim is simply to produce statistics on how far apart or close together people are staying, the idea that robots could be used this way stokes ‘Big Brother’ fears that are never far beneath the surface in Orwell’s native land.
Fraudulent claims
At the same time, many experts think the government hasn’t been using enough AI. There are growing concerns that some of the government schemes to help UK firms cope with the impacts of Covid might be subject to a large number of fraudulent claims: the country’s official public spending watchdog, the National Audit Office, says that between £15 billion and £26 billion could end up being lost to fraud and error.
That’s unacceptable, say some British commentators: AI should have been used to analyse the data and highlight potential fraud at scale, both to save the government money and to demonstrate how the technology can help ensure taxpayer money is only allocated to genuinely needy cases. And for Andy Pardoe, founder and managing director of Pardoe Ventures, a new UK-headquartered AI business development consultancy, while there may indeed be teething troubles with British law enforcement using Artificial Intelligence, longer-term pressures make its introduction in some form inevitable:
“Fundamentally, law enforcement now has significantly more data to search, including social media and online accounts that mean manual checks and searches are now becoming impossible. Plus, we have seen a number of trial cases being dismissed due to evidence coming to light at court that was not found during discovery and case building, so we will just have to use AI to perform initial searches so as to highlight the most important elements for a human case officer to check.”
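Both suggestions, flagging suspicious support-scheme claims and triaging case material, boil down to the same pattern: score a large volume of records automatically so that humans only review the items a model flags. Neither the government nor Pardoe has published an implementation, so the sketch below is purely illustrative, with invented field names, figures and thresholds:

```python
# Purely illustrative sketch of "triage at scale": flag the most anomalous records
# in a large table so that human case officers only review the flagged ones.
# All field names, figures and thresholds here are invented for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for a table of support-scheme claims:
# columns = [claim amount (GBP), employee count, company age (years)]
claims = rng.normal(loc=[25_000, 12, 8], scale=[8_000, 5, 4], size=(10_000, 3))

# Unsupervised anomaly detector: no labelled fraud cases are needed to train it.
detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
scores = detector.decision_function(claims)   # lower score = more anomalous

flagged = np.argsort(scores)[:100]            # the 100 most anomalous claims
print(f"Escalating {len(flagged)} of {len(claims)} claims to a human case officer")
```

The particular model matters less than the division of labour it represents: software ranks the records, people make the decisions, and the quality of the ranking depends entirely on the data it was given.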
Ethics
Such a move, he believes, would not only reduce wasted time and effort but also improve the quality of cases taken to court. But those same courts may end up trying suspects identified not by diligent police work, but by software. A leak of notes from meetings of the West Midlands Police and Crime Commissioner’s Ethics Committee contains details of a machine learning algorithm allegedly capable of predicting which low-level offenders, in a database of 200,000 individuals, are most likely to commit “high harm” crimes in the future: the notes state this is a pilot for a potential national service.
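The committee notes don’t describe how the West Midlands model works, and nothing below comes from that pilot. As a purely hypothetical sketch, though, a “risk scoring” system of this general kind usually means fitting a classifier to historical records and ranking individuals by the probability it outputs, which is also where the bias worries come from, since the labels are themselves a product of past policing decisions:

```python
# Hypothetical sketch only: nothing here reflects the (unpublished) West Midlands pilot.
# A generic "risk score" is just a classifier fitted to historical records, whose
# predicted probability is then used to rank individuals.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Invented per-person features: [prior offences, age, months since last police contact]
X = np.column_stack([
    rng.poisson(2, n),
    rng.integers(18, 70, n),
    rng.integers(1, 120, n),
])

# Synthetic historical label: whether a "high harm" offence followed.
# In a real deployment this label inherits whatever bias sits in past policing data.
y = rng.integers(0, 2, n)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]          # per-person score between 0 and 1
highest_risk = np.argsort(risk)[::-1][:20]   # the individuals such a system would single out
```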
‘Widespread concern across the UK law enforcement community’
Using AI to predict and single out people who haven’t yet committed a crime sounds not just very Minority Report but, for many critics, a step too far–and not just for those you might expect to be naturally suspicious of these trends, like privacy campaigners, but for policymakers too. For instance, in February the Royal United Services Institute, an independent think tank, produced a report on the increased use of data analytics and algorithms in English and Welsh law enforcement that warned: “The project has revealed widespread concern across the UK law enforcement community regarding the lack of official national guidance for the use of algorithms in policing, with respondents suggesting that this gap should be addressed as a matter of urgency.”
A little thing called COVID has delayed an official response to that plea. Yet again, these early scandals remind fans and critics alike that an AI system is only ever as good as the data it works with–which means such systems are prone to data bias and can only learn from the historical events they are shown. “Without proper corrective actions when building these AI systems,” Pardoe cautions, “ethical problems can manifest that both reduce the confidence and the acceptance of these technologies by the general public.”