This month, Amazon.com Inc. shut down a project it had been developing for four years — a recruitment tool driven by machine learning. The concept was a simple and appealing one: at its core, the project aimed to develop an algorithm that would sort incoming job applications to isolate the short list for managers to use in making their final selections.
Anyone who has been involved in such a process knows that isolating the top five or ten résumés from dozens of applicants is a time-consuming job. Any process that brings logic and speed to this stage of the recruitment chain can only be a good thing.
But algorithms are only as good as the data used to drive them, and in the Amazon case the machine “learning” was based on patterns in applications submitted to the firm over the previous ten years. Since Amazon, like all tech firms, is male-dominated, the algorithm taught itself that success equated with male-oriented activities and language, so phrases like “women’s chess club captain” or the names of all-women colleges were penalised by the algorithm in ways that technical fixes could not remedy.
This episode exemplifies many of the appealing and unappealing features of the data-driven era that is rapidly coming upon us. The appeal is intuitive. If tasks, even complex ones, can be completed rapidly, accurately, and effectively by a tool that improves itself over time, then a range of tedious and expensive chores currently handled by humans can be outsourced, not to cheaper humans far away, but to software embedded right in your office.
However, there are also three broad sets of issues that give pause for thought and where globally agreed norms and rules — in a word, governance — are called for.
Towards an ethical framework
First, algorithms, whether static or of the machine-learning sort, are not value-free. The data underlying them and the formulae that make them function, think, and transform over time embody the biases of history and those of their designers.
This means that algorithms should be subordinated to the same kind of universal ethics regime that governs human and state behavior: something similar to the Universal Declaration of Human Rights (UDHR). Though the UDHR is often violated, it creates an aspirational global standard. It acts as a guide for drafting national and sub-national legislation, as a framework to assess (and sometimes “name and shame”) its application, and ultimately, as with the International Court of Justice, as a basis to penalise its violation.
The recent G7 statement on artificial intelligence (AI) serves as a good starting point for a more global discussion on the ethos that we want driving transformative technologies in general and AI in particular.
On taxing AI
Second, earlier in the summer, Amazon was in the news for another feature of footloose multinational firms — the taxes they pay. The high-tech economy, of which AI is a part, is driven by proprietary technology, most of which is created in a few pockets around the world. It is the nature of the innovation economy to privilege first movers, strategic behavior and economies of agglomeration.
As a result, despite many unknowns about what the future holds, one thing is clear: absent significant public policy responses, income and wealth disparities will worsen, both between countries and within them.
Since so much intellectual property is generated by multinational firms, the profits from technological advances will (correctly) accrue to them. Although the trend in recent years has been towards consumption-based taxation and away from corporate taxation, the high-tech economy heralds an era that reverses this trend, with good reason.
An interesting variant of this idea is Bill Gates’ proposal to tax the owners of robots. This is not a radical proposal, grounded as it is in the notion that in a rentier economy, the source of rent (be it land or soft capital like intellectual property) provides a rich, efficient and economically and socially justifiable basis to levy taxes. Taxing the rents of large, powerful, and agile multinational firms appropriately puts base erosion and profit shifting in the spotlight and places a greater onus on cross-border tax cooperation.
A path to development for all
Third, even among the less dire scenarios of the impact of AI on jobs, it is clear that large swaths of the labour market will be affected by the application of AI, either to jobs entirely or to many of the tasks that make up a job. Not surprisingly, jobs characterized as routine, be they blue collar or white collar, are most vulnerable.
Since the first industrial revolution almost 250 years ago, the route to prosperity and development for countries has been to specialize in low-end manufacturing (or agriculture) and progressively move up the value chain. Of course, supportive policies and context matter, and the East Asian “miracle” of the post-WWII era is not an exact replica of what, for example, Germany and France did to catch up to the United Kingdom a century earlier.
But the core feature of absorbing large numbers of workers into mechanized production as the first step to development is common throughout modern economic history. AI upends this model, and the next generation of developing countries will not have the path to development that has prevailed to date.
Finding a route to development in the face of AI (not to mention climate change and the challenges and opportunities that it presents to newly developing countries) is the defining economic, social, and security issue of our age. Here, too, global cooperation has a role to play via, for example, models of technology transfer or the provision of global public goods like a clean environment and financial system stability.
The future of global governance
Any global governance regime will face massive obstacles, especially for AI as it is the kind of technology that naturally resists strictures and structures. Breaking up the challenges into more manageable sub-sets, as I have done above by noting the different ways that ethics, finances, and development-related issues could be taken forward, could help us chart a way forward.
And the onerous task facing global governance should not discourage the myriad steps countries can take on their own at the national level — for example, reforming pension and social safety net systems and education sectors — to prepare for an AI-driven era. Thinking of global governance alongside national initiatives is a good reminder that with new technologies, as with old, global cooperation and local decision-making go hand-in-hand.
Rohinton P. Medhora wrote this article as a contributor to AI & Global Governance, an inclusive platform for researchers, policy actors, corporate and thought leaders to explore the global policy challenges raised by artificial intelligence.
The opinions expressed in this article are those of the author and do not necessarily reflect the opinions of the United Nations University.