Artificial intelligence (AI) has taken the world by storm, becoming a marketing buzzword and a hotly debated subject in the press. But it’s certainly not all hype. Over the last few years there have been several important milestones in AI, in particular in image, pattern, and speech recognition, language comprehension, and autonomous vehicles. Advancements such as these have prompted industries from healthcare and automotive to finance and communications to adopt AI in pursuit of its transformative potential. But what about the law enforcement community? How can AI benefit law enforcement, and why might this be dangerous?
Law enforcement is an information-based activity. Intelligence, evidence, and leads are gathered, processed, and acted upon by police officers in order to prevent or control crime. Information — or data — on human behaviour is thus fundamental to law enforcement, and the ability of AI tools to rapidly acquire, process, and analyse massive amounts of such data makes AI a perfect partner for law enforcement.
AI can, for instance, help law enforcement to detect the suspicious behaviour of shoplifters, identify and issue fines to online scammers, locate stolen cars, or analyse text-based evidence to identify potential intelligence, as well as, of course, to autonomously patrol the roads or skies in unmanned vehicles.
In 2018, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the International Criminal Police Organization (INTERPOL) organised a global meeting on the opportunities and risks of AI and robotics for law enforcement. The meeting illustrated that, although AI is a new concept for the law enforcement community and there are gaps in expertise, many national agencies are already actively exploring the application of AI to enhance crime prevention and control.
Perhaps one of the most tantalising and controversial applications of AI for law enforcement is what is known as “predictive policing” — the prediction of potential criminal activity before it occurs. Predictive policing is often considered the ‘Holy Grail’ in the fight against organised crime, enabling law enforcement to transcend its traditionally reactive approach to crime and become more proactive. To take a leaf out of The Art of War, ‘foreknowledge’ is “the reason the enlightened prince and the wise general conquer the enemy whenever they move and their achievements surpass those of ordinary men”.
In spite of the technical complexity of this cutting-edge technology, the concept is quite well known due to the prominent role it plays in several works of science fiction. Perhaps most famously, it featured in Steven Spielberg’s Minority Report, in which a specialised police department uses the visions of precognitive people to prevent crimes and to arrest future offenders before the commission of the act. The movie is based on the Philip K. Dick short story of the same name, published in 1956 – coincidentally, the very same year John McCarthy publicly presented the new field of what he referred to as “artificial intelligence” at the Dartmouth Conference.
Unlike Minority Report, the real version of predictive policing doesn’t involve “precogs” identifying who will commit a crime. Instead, data collected by police departments about the type, location, date and time of past crimes is fed to and analysed by AI algorithms to generate a forecast of when, where, and what types of crimes are most likely to occur. Using these insights, law enforcement can thus optimise its resources by deploying police when and where they may be most needed.
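To make this concrete, the sketch below shows the skeleton of such a forecast in Python. It is purely illustrative: the grid cells, crime types, and frequency-based scoring are invented stand-ins for the far richer features and statistical models that commercial tools actually use, and the data is synthetic.

```python
# A minimal, illustrative sketch of place-based crime forecasting.
# All data is synthetic; cell names and crime types are hypothetical.
from collections import Counter
import random

random.seed(42)

# Hypothetical historical records: (grid_cell, hour_of_day, crime_type)
past_crimes = [
    (random.choice(["A1", "A2", "B1", "B2"]),
     random.choice(range(24)),
     random.choice(["burglary", "theft", "assault"]))
    for _ in range(1000)
]

def hotspot_forecast(records, hour, top_n=2):
    """Rank grid cells by historical crime frequency in a 3-hour window
    around the requested hour: a crude stand-in for the statistical
    models real predictive policing systems employ."""
    window = {(hour + offset) % 24 for offset in (-1, 0, 1)}
    counts = Counter(cell for cell, h, _ in records if h in window)
    return counts.most_common(top_n)

# Suggest where patrols may be most needed at 22:00.
print(hotspot_forecast(past_crimes, hour=22))
```

The essential point the sketch captures is that the forecast is nothing more than a transformation of past records: whatever patterns (or distortions) those records contain will be reproduced in the deployment recommendations.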
Although no country has yet put in place a national predictive policing programme, predictive policing tools have been developed and deployed in several cities across the globe. For instance, in the United States, the company Palantir has developed and tested predictive policing tools in cities such as Chicago, Los Angeles, New Orleans, and New York, in some cases as far back as 2012. Another company, PredPol, has developed a predictive policing tool that has been deployed in approximately 40 agencies across the United States since 2012. Outside the US, police departments in countries such as China, Denmark, Germany, India, the Netherlands, and the United Kingdom are reported to have tested or deployed predictive policing tools at the local level. Japan has announced its intention to put a national predictive policing system in place in the run-up to the 2020 Tokyo Olympics, and a predictive policing programme that could be rolled out to all national police forces in the near future was recently approved in the United Kingdom. This list is certainly not exhaustive and is likely to grow as AI becomes more advanced and law enforcement becomes more familiar with its potential.
There is also a lot of interest in exploring advancements in machine vision, such as facial recognition, in connection with predictive policing. This combination could further enhance the capabilities of law enforcement to prevent crimes by enabling them not only to identify when and where they may be most needed, but also to analyse footage collected through surveillance cameras, body cameras, and drones to identify potential offenders in a crowded space, or even to predict who may commit a crime based upon facial expressions that might indicate guilt.
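The machine-vision building block underlying such systems is routine today, as the short sketch below suggests. It uses OpenCV’s bundled, pre-trained face detector; the input file name is hypothetical, and matching detected faces against a watchlist (true facial recognition) would require a separate identification model on top of this step.

```python
# A minimal sketch of face detection in a single surveillance frame,
# using OpenCV's pre-trained Haar cascade. "frame.jpg" is a hypothetical
# still image taken from camera or drone footage.
import cv2

# Load the frontal-face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Returns one bounding box (x, y, width, height) per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s) in the frame")
```

That detection is this easy is precisely why the combination with predictive policing is attracting interest, and why the concerns discussed below deserve attention.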
Although the jury is still out on the effectiveness of predictive policing in reducing crime rates, most would agree that a tool to help law enforcement combat crime more efficiently is probably in the interests of society as a whole. While this may make predictive policing a logical area for research and development, it is important not to get too swept up in the promise of this technology. There are serious issues below the surface.
At the top of the list is the risk that if the data used to train predictive policing tools comes from biased policing, whether explicit or implicit, then the resulting forecasts will bear the same bias. Data bias was the focus of ProPublica’s 2016 investigation into an AI tool known as COMPAS, which was used by judges to support decision-making on the likelihood of defendants re-offending. The investigation concluded that the tool’s risk scores appeared to be biased against black defendants.
A similar bias in a predictive policing tool could, for instance, change how law enforcement sees the communities they patrol and influence important decisions such as whether to make arrests or use force. Bias may also lead to the over-policing of certain communities, heightening tensions, or, conversely, the under-policing of communities that may actually need law enforcement intervention but do not feel comfortable in alerting the police.
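A toy simulation can illustrate why this feedback loop is so hard to escape. In the sketch below, two areas have identical underlying crime rates, but the historical record is skewed by past over-policing; patrols are then allocated in proportion to recorded crime, and detections feed back into the record. The numbers and the detection model are invented purely for illustration.

```python
# A toy simulation of the feedback loop described above. Two areas have
# the SAME true crime rate, but the historical record starts out skewed.
true_crime_rate = {"north": 0.10, "south": 0.10}  # identical underlying rates
recorded = {"north": 60, "south": 40}             # skew from biased policing

for year in range(5):
    total = sum(recorded.values())
    for area in recorded:
        patrol_share = recorded[area] / total       # deploy by past records
        detected = 1000 * true_crime_rate[area] * patrol_share
        recorded[area] += detected                  # detections feed the data
    print(year, {a: round(v) for a, v in recorded.items()})

# The initial skew locks in: "north" permanently receives the larger
# patrol share and its recorded-crime lead widens every year, even though
# the underlying crime rates are identical. Bias in, bias out.
```

Nothing in the loop ever questions the original skew, which is exactly the dynamic behind over- and under-policing: the data cannot correct itself because the data collection is shaped by the predictions.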
Unfortunately, however, precisely how these tools generate a prediction, and how law enforcement agencies act upon those predictions, are all too often not transparent. The effects can be extremely damaging, jeopardising not only the rule of law and fundamental human rights but also faith in law enforcement as an institution. For instance, cases have already been brought against US police departments in Chicago, Los Angeles, New Orleans, and New York over the use of Palantir’s predictive policing tools, citing bias and a lack of transparency in their use.
Predictive policing can be a game-changing technology, affording law enforcement the opportunity to turn the tide on crime for the first time in history. But if the data that drives this technology is biased, the risks will trump any benefits. It is still very much early days for predictive policing, but it is increasingly important for law enforcement to address issues such as this and ensure that its use of predictive policing is fair, accountable, transparent, and explainable. Moreover, those who will be affected by such tools should be given a say in their development and deployment. As a community that would arguably tend to favour security over interests such as privacy, law enforcement is the edge case with respect to the ethical use of AI. If law enforcement can take leadership on the ethical use of AI tools such as predictive policing, other communities will follow.
•••
Odhran James McCarthy wrote this article as a contributor to AI & Global Governance, an inclusive platform for researchers, policy actors, and corporate and thought leaders to explore the global policy challenges raised by artificial intelligence.
The opinions expressed in this article are those of the author and do not necessarily reflect the opinions of the United Nations University.