The rise of artificial intelligence (AI) involves an unprecedented combination of complex dynamics, which poses challenges for multilateral efforts to govern its development and use. Global governance approaches will need to strike the right balance between enabling beneficial innovation and mitigating risks and adverse effects. Global governance has a role to play in developing standards for balancing the benefits and risks of deploying AI technologies, taking due care to ensure citizens are aware of their rights and protections.
Multilateral efforts will have to take into account the wide variation in “risk appetite” across countries regarding the trade-off between regulation and innovation. Some citizens, governments, and industry leaders are more willing than others to compromise on citizens’ data, privacy, fair treatment, safety, or security in pursuit of innovation.
Consumers’ tolerance for risk varies too, owing not only to divergences in value systems but also to socio-economic realities on the ground. For example, a 2017 PwC study found that respondents in Nigeria, Turkey, and South Africa — where the urgency of development and access to basic services prevails — were roughly twice as willing as respondents in the UK, Germany, and Belgium to have major surgery performed by an AI robot, despite the technology’s present-day limitations.
AI global governance approaches will also have to grapple with the inconsistencies within national-level regulatory frameworks and regimes. Against the backdrop of a global race among private companies and states to develop and control AI technology, some states have been laxer than others in enforcing standards and precautions for ethics and safety. States that are more responsive to citizen and consumer demands for safety, ethics, and privacy protection, such as European states, may lag in AI innovation in the short term.
The intensity of competition for AI dominance, combined with digital market dynamics in which online platforms, data, and social media economics have created a “winner-takes-all” paradigm driven by network and scale effects, means that existing incentive structures risk triggering a global race to the bottom in standards and precautions for ethics and safety. The slow, time-consuming nature of the policy process will make the needed adjustments and agile regulation even harder to achieve.
The intergovernmental panel model: a way forward?
AI global governance approaches will need to borrow and adapt from other governance regimes including climate change, internet governance, arms control, international trade, and finance. Government, industry, entrepreneurs, academia, and civil society will all need to be involved in the debate around values, ethical principles, design of international agreements, and their implementation and monitoring.
A relevant example and starting point to develop an inclusive, legitimate process for AI global governance is the Intergovernmental Panel on Climate Change (IPCC). Under the auspices of the United Nations, the IPCC set a widely acknowledged example of a large multi-stakeholder platform driven by science for international consensus-building on the pace, dynamics, factors, and consequences of climate change. The IPCC has served as the foundation for designing, implementing, and enforcing global governance and related policies that ultimately culminated in the Paris Agreement. Given the high systemic complexity, uncertainty, and ambiguity surrounding the rise of AI, its dynamics and its consequences — a context similar to climate change — creating an IPCC for AI, or “IPAI”, could help build a solid base of facts and benchmarks against which to measure progress.
At the beginning of December 2018, Canada and France announced plans to establish an IPAI, modeled on the IPCC. President Emmanuel Macron had proposed the creation of the IPAI in March, when releasing the French AI strategy, ambitiously entitled “AI for Humanity” to signal France’s global aspiration. The IPAI is envisioned to inform dialogue, foster coordination, and pave the way for efficient global governance of AI, so that common ground can be found despite intense competitive dynamics.
Like the IPCC, over time the IPAI would gather a large, global, and interdisciplinary group of scientists and experts. It would differ from — and add value to — other existing mechanisms, such as the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems convened under the UN Convention on Certain Conventional Weapons (CCW), because its mandate and membership would be much broader.
While the IPCC is, by and large, a success story for large-scale multi-stakeholder governance processes, it is not without its flaws. The IPCC’s process for dealing with uncertainties — something that will inevitably plague an IPAI — has been criticised for a lack of precision in its attributions of certainty, for ambiguity arising from the role value judgments play in applying its uncertainty framework, and for a lack of sensitivity towards political, ethical, and cultural contexts when synthesising scientific knowledge. For an IPAI to garner multilateral governance support, it will need to address these concerns, which at times stifled effective consensus among the IPCC’s diverse stakeholders and adversely affected its legitimacy.
Given the deep and global epistemic crisis we are going through — where the authority of science, expertise, information, and representation is being severely questioned — an IPAI will also need to innovate operational processes to achieve more transparency, openness, and inclusion of civil society without imploding.
A concrete area for improvement over the IPCC, aligned with this goal, concerns the level of scrutiny given to so-called ‘grey literature’ (i.e., work that is not peer-reviewed or not published in scientific journals) in its assessments. This is particularly crucial for the AI literature: while much of the grey literature consists of reports by national academies and other legitimate work, it also includes articles that engage in fear-mongering and popular-culture references and generally lack intellectual rigor or scientific merit.
In addition to an inclusive and legitimate process, global governance approaches will need to deploy a smart and coherent combination of ‘soft’ and ‘hard’ instruments to address AI. Soft instruments, including industry standards, codes of conduct, norms, and ethical principles, are flexible enough to adapt as technologies and their impacts on society evolve. The Institute of Electrical and Electronics Engineers’ (IEEE) Ethically Aligned Design principles are a highly relevant example of industry standards aimed at fostering safe and ethical autonomous and intelligent systems. Hard instruments, such as binding legislation, are also crucial to level the playing field and anchor technological change in a value system. For example, the EU General Data Protection Regulation (GDPR) has created a rigorous legal regime applicable to all organisations that collect, store, process, and circulate the personal data of European citizens. Over time, it could mature into a global ‘gold standard’ law.
In the context of the complex dynamics surrounding the rise of AI, multilateral approaches are needed to level the playing field internationally, raise ethical and safety standards, and orient new technologies toward broad societal benefit. AI is rapidly and significantly transforming societies and global governance has a central role to play in ensuring its development plays out for good. Innovative and inclusive processes, drawing on a mix of hard and soft instruments, will be key to ensuring that multilateral solutions can rise to the challenge.
Nicolas Miailhe wrote this article as a contributor to AI & Global Governance, an inclusive platform for researchers, policy actors, corporate and thought leaders to explore the global policy challenges raised by artificial intelligence.
The opinions expressed in this article are those of the author and do not necessarily reflect the opinions of the United Nations University.