Artificial Intelligence in Africa is a Double-edged Sword

2019•01•16 — Clayton Besaw and John Filitz, One Earth Future

Africa is currently experiencing a demographic boom that is largely young and urban. Compared with Germany's median age of 47.1, the United States' 38.1, or China's 37.7, Africa's median age is just 19.5. Moreover, the continent's youth population is expected to double to 225 million by 2055. By 2100, Africa will be home to three of the largest cities in the world: Lagos, Nigeria, is projected to house 88 million inhabitants, followed by Kinshasa, Democratic Republic of the Congo, with 83 million and Dar es Salaam, Tanzania, with 73 million.

Meeting the rising expectations of growth on the continent will require innovative approaches to address governance challenges faced by African countries. At the same time, the 2018 Ibrahim Index of African Governance notes that although governance on the continent is improving, it is not keeping pace with the expectations of the mainly young and urban population.

African states are therefore facing pressure to deliver services to rapidly growing metropolitan areas while simultaneously improving performance on stubborn poverty, continued political instability, and a plethora of security threats, including transnational organised crime and political violence.

One tool that has been touted as a potential solution is artificial intelligence (AI). Machine learning and associated computer-facilitated governance support have been described as part of the fourth industrial revolution. AI could also be a useful tool for augmenting state capacity in weak and fragile state contexts. As with many disruptive technologies, however, it can be used for positive or negative ends. Delivering on the promise of positive AI will require good systems of governance.

Positive AI and negative AI in the African context

So far, developments in AI have been predominantly driven by private sector technology actors, but growing interest from African governments has seen the start of conversations around “AI strategies” for growth and governance across the continent. AI is not typically applied to a defined problem in a neutral way. Navigating the complexities of AI application calls for a typology of positive AI and negative AI in the governance context. Positive AI is the use of such systems for broad social benefit. Conversely, negative AI is used for social division, suppression, or even violence.

Positive AI applications in Africa have garnered most of the media coverage. Start-ups in Ghana and Nigeria are addressing doctor shortages and the lack of medical access for rural Africans. They have begun to use AI to empower doctors and to leverage growing mobile phone ownership as a vehicle for collecting data, improving administrative efficiency, and expanding treatment coverage.

In both Kenya and Nigeria, AI-focused start-ups have begun working on agricultural planning, reducing financial transaction costs, and improving public transportation access and efficiency. Education has also been a focus of start-ups like M-Shule and Tuteria, which provide accessible and extensive training and learning platforms to help teachers in the classroom. Governments in AI-rich countries like Ghana, Nigeria, Kenya, and South Africa have taken a supportive but cautious approach. Monetary support for AI research and development, alongside the promotion of STEM education, has taken priority over AI's integration within government agencies, and will likely remain the priority for the near future.

While the above positive applications seek to close gaps in development, the power of AI to augment skills and resource deficits can also be harnessed by challengers to the state and by states that seek to suppress political opposition. Deep fakes, or the creation of artificial videos, voice recordings, and data, could be used to emphasise existing ethnic and religious divisions and to attack nascent democratic institutions. For example, imagine a scenario in which a supporter of Boko Haram fabricates an inflammatory audio recording attributed to governmental authorities in an effort to stoke religious division. Such tactics may prove difficult to manage during contentious elections in transitioning democracies, especially when combined with popular social media platforms.

Alongside artificial misinformation, governments may also seek to use AI to further suppress and monitor political opposition or marginalised groups. With the help of China, the Zimbabwean government has begun collecting individuals’ facial imagery to be used by existing monitoring and facial recognition applications. These applications have human rights advocates worried about potential misuse once the system comes online.

Finally, like the proliferation of telecommunication technology, negative AI applications may help to lower the costs associated with violence for non-state and state entities alike. Cyber-intelligence gathering, automated or augmented small arms, and AI-powered drones could all serve as vehicles for conducting progressively more violent operations at a lower cost.

How can global governance institutions help mitigate negative AI in Africa?

To curtail the potential for negative AI, a brain trust at the nexus of global, regional, and local governments should seek to establish a common framework for effective governance of internet-enabled technologies. Such a framework should ideally attempt to curtail the malignant use of the internet, data, and AI applications.

In early 2018, Microsoft formally launched its campaign for a Digital Geneva Convention to advance industry and civil society efforts to reduce “the dangers of malicious and escalatory state behavior in cyberspace”. The African Union Commission in early November 2018 presided over the Seventh Internet Governance Forum on the theme of “Development of the Digital Economy and Emerging Technologies in Africa”. This forum focused on several key areas, including protecting human rights while harnessing the potential of internet-enabled economies.

Most significant in 2018 was the Paris Call for Trust and Security in Cyberspace. The Paris Call specifically identifies several target areas for governance, including protecting the integrity of the internet, preventing interference in electoral processes, and clamping down on “online mercenary activities and offensive action by non-state actors”.

The African continent faces unique challenges with negative AI. States with weaker governance structures, transitioning democratic norms, and highly salient ethnic and religious divisions will likely struggle to minimise the damage associated with negative AI. Addressing this requires strategies that are inclusive of African researchers and experts who can recognise both threats and opportunities for positive use.

Intergovernmental organisations should seek to partner with AI entrepreneurs and corporate actors in Africa while simultaneously providing training to government officials in AI and digital forensics. If officials can recognise the nuanced nature of AI use, they may be better able to identify and support applications designed with positive use in mind. Additionally, the effective use of digital forensics may help governmental institutions more quickly identify fabricated media, so that inflammatory content can be removed and violence de-escalated once the fraudulent content is exposed.

If African governments can operate within a common governance framework for AI, while simultaneously engaging with local academic and entrepreneurial experts, they may be able to mitigate the potential for negative AI while recognising and enabling those actors who seek to use it for the social good. While such an outcome will not be easily achieved, it is paramount that global and regional governance institutions seek to enable and strengthen pro-social African actors, who will be at the forefront of this coming technological revolution.


Clayton Besaw and John Filitz wrote this article as contributors to AI & Global Governance, an inclusive platform for researchers, policy actors, corporate and thought leaders to explore the global policy challenges raised by artificial intelligence.

The opinions expressed in this article are those of the authors and do not necessarily reflect the opinions of the United Nations University.



Clayton Besaw

One Earth Future

Clayton Besaw currently serves as a political events forecaster within the One Earth Future (OEF) Research forecasting track. His work at OEF is concerned with the development of forecasting and machine learning systems for predicting conflict events such as coups, election violence, and intrastate conflict outcomes. Clayton’s research background explores patterns in conflict and political violence, with a focus on extremist recruitment and behavior. The results of this work have been published in The Journal of Conflict Resolution, and Conflict Management and Peace Science.

John Filitz

One Earth Future

John Filitz is a researcher with One Earth Future. His experience includes policy and institutional development, and research on varied topics. His current research centers on transnational organised crime and cyber security. He holds a Master’s degree in Development Studies and a Bachelor’s degree in Political Science and Economic History from the University of KwaZulu-Natal, South Africa. John is currently studying towards a Master of Science in Information Assurance at Regis University.