Artificial Intelligence and Global Governance: Supporting the Ties That Bind

2018•10•15 • David Danks, Carnegie Mellon University

Artificial Intelligence (AI) and robotic systems are rapidly beginning to impact our lives, in both visible and invisible ways and for both good and ill. Unsurprisingly, many analyses attempt to understand these impacts, often with an eye towards shaping the technologies in more positive ways. For example, how many people’s jobs will be displaced by increased use of robotics in the manufacturing sector? How might healthcare be improved, particularly in under-resourced regions, through the use of AI diagnostic systems? What types of interfaces enable an autonomous technology to learn the professional or personal goals of its human user?

These issues, while obviously important, overlook a key question: how do (or will) AI and robotic systems change our relationships with one another? We justifiably place enormous importance on interpersonal relationships with members of our family, workplace, community, and society at large. AI systems do not affect us only as isolated individuals; they also have the potential to either enhance and expand, or constrain and damage, the relationships in our lives.

Real and hypothetical examples are easy to find. Consider the ways in which social and political norms of debate have been harmed in recent years by the proliferation of automated and semi-autonomous bots on social networks. In a more supportive direction, consider the ways in which some robots can take on mundane work so that teams can focus on key challenges, thereby improving their ability to work together effectively. AI systems do not merely affect us as individuals; their most significant impact may be on our relationships with one another.

Consider the very real possibility that some doctors may soon be required to use AI systems for diagnosis and treatment decisions. If an AI system performs demonstrably better than most doctors, then there will be natural pressure from many parties to require doctors to defer to the more accurate AI system rather than rely on their own clinical judgment (in fact, calls for exactly such a requirement have grown louder after some notable recent demonstrations of AI superiority in diagnostics). In this case, though, the doctor risks becoming merely an information broker between the patient and the AI technology. And if your doctor is simply a conduit for information, there seem to be few reasons for you to trust your doctor. That is, even though medical AIs have the potential to improve individual (short-term) health outcomes, they also have the potential to significantly damage patient-doctor trust: the foundational interpersonal relationship of healthcare. Improved diagnostic accuracy might come at a heavy cost.

As a more positive example, consider a home healthcare robot that assists an elderly parent. The strain of caring for a parent can threaten or damage familial ties, precisely because of the role reversal that must occur when the child becomes the caregiver. If a robot could perform many of these caretaking tasks, though, then the adult child and his or her parent could potentially (re)build a deep, meaningful relationship. This kind of robot does not threaten an interpersonal relationship; instead, it can help people maintain an existing relationship or rebuild a damaged one.

We are fundamentally social beings: our interests, as well as our ability to advance our interests, are bound up in our relationships with other people. Similarly, many of our fundamental human rights depend deeply on our interactions, connections, and engagements with other people; arguably, many of those rights are constituted partly by those relationships. If AI technologies threaten those relationships, then they threaten core human rights. This conclusion holds even if the AI technology increases my own individual capabilities. The ethical and social value of AI technology depends on more than just the ways that I interact with the system. Its role in supporting and enhancing, or alternatively threatening and undercutting, our human-to-human relationships can be equally important.

While these observations are interesting, one might reasonably wonder why they are directly relevant to the UN. One connection is the potential impact on fundamental human rights. Discussions about regulation of, and policy for, AI and robotics have largely occurred at the level of nation states or local communities (with the notable exception of autonomous weapons systems). If the impacts of AI were limited to narrow economic or political effects, then this focus on local governance might be sensible. However, since the impact of AI on interpersonal relationships has the potential to directly challenge some human rights, there is a clear reason for engagement from the UN.

A second connection with the UN arises from the many different forms that interpersonal relationships take across the planet. Proper design, development, and deployment of AI technologies requires some understanding of the diverse ways in which people engage and live with one another. The UN is ideally placed to facilitate efforts to understand these varieties of relationships, and so to help educate developers, local policymakers, and the general public about the ways that AI can affect them, not just as individuals but as participants in many different relationships.

AI and robotic technologies promise great benefits, but they may also bring great costs. One key question that we must ask throughout this technological revolution is: how can we develop technology that advances our interests and goals? While this question is increasingly being asked, the analyses too often focus solely on positive and negative impacts for the individual. Instead, we must broaden the scope of our inquiry to understand the ways in which these technologies can dramatically shift, and perhaps even break, key relationships with members of our families, cities, nations, and global communities.

•••

Dr David Danks wrote this article as a contributor to AI & Global Governance, an inclusive platform for researchers, policy actors, corporate and thought leaders to explore the global policy challenges raised by artificial intelligence. The opinions expressed in this article are those of the author and do not necessarily reflect the opinions of the United Nations University.

Author

David Danks

Carnegie Mellon University

Dr David Danks is the L.L. Thurstone Professor of Philosophy and Psychology and Head of the Department of Philosophy at Carnegie Mellon University. He has used a range of interdisciplinary approaches to address the human and social impacts that arise when autonomous capabilities are introduced into technological systems, whether self-driving cars, autonomous weapons, or healthcare robots. Dr Danks has actively collaborated with multiple industry groups and government agencies. His earlier work on computational cognitive science resulted in his book, Unifying the Mind: Cognitive Representations as Graphical Models, which developed an integrated cognitive model of complex human cognition. Dr Danks is the recipient of a James S. McDonnell Foundation Scholar Award (2008) and an Andrew Carnegie Fellowship (2017).