by Kalypso Nicolaidis and Michele Giovanardi
Full article to be released in September 2022
Technology at a crossroads: how can we use new technologies for a more peaceful world?
The latest developments in Ukraine have been a wake-up call for western public opinion. European citizens, and especially the younger generations, realize more starkly than ever that peace is not forever, but instead a delicate business entrusted to all, from the highest echelons of government to every citizen. Beyond the more than sixty active conflicts in the world in 2022 (ACLED 2022), our global economic and political order has become increasingly fragile and fragmented, heralding the emergence of a multi-order world.
But all is not bleak. To different layers of power struggles and governance correspond different opportunities for peacebuilding initiatives. Here we argue that emerging technologies represent unprecedented opportunities to empower citizens and build transnational processes of peace from the micro to the macro level, and that these opportunities come with crucial challenges that we need to study and address in order to mitigate the risks and unleash the disruptive potential of emerging technologies for peace. We label this agenda ‘Global Peace Tech.’
The diffusion of technologies to connect people worldwide over the last thirty years brought with it great hopes for democratisation, emancipation, transparency, freedom, education, and peace. Tech-optimism defined the first wave of expansion of the internet in the late nineties and early two-thousands, culminating in the Arab Spring, mass protests in Israel and Spain, and the global diffusion of the Occupy movement initiated in New York (Khondker 2011). Yet, already at this stage some observers realised that this was a double-edged sword. In his book ‘The Net Delusion: How Not to Liberate the World’, Evgeny Morozov argues that the hopes for the democratizing power of the Internet were being replaced by its effective use by authoritarian governments to suppress free speech, hone their surveillance techniques, disseminate cutting-edge propaganda, and pacify their populations with digital entertainment (Morozov 2011). The optimistic attitude towards the digital’s potential for peace and democratisation gave way to more pessimistic accounts. The very same digital platforms that were supposed to enhance peace by connecting people, democratising information, fighting stereotypes and creating communities of trust across borders, turned out to foster institutional mistrust, disinformation, discrimination, polarisation, hate speech, online and offline violence, organised crime and transnational terrorism (Conway 2017).
The twinned opportunities and risks of technologies pervade the history of humanity. Today, besides the use of digital platforms for peacebuilding, mediation, and grassroots participation in peace processes, tech for peace across borders includes the employment of digital applications for early warning systems, the use of digital identities and matching algorithms for refugee management, big data and predictive analytics for conflict prevention, the deployment of satellites and drones for smart border control, and blockchain, crypto and smart contracts for humanitarian aid. The potential is even greater when we consider the social impact of emerging technologies at large. Supported by new techniques in deep learning and multi-layer convolutional neural networks, the harnessing of big data and artificial intelligence (AI) has recently enabled researchers to develop technologies of unprecedented power, including computer vision, speech recognition and natural language processing. The potential benefits of these technologies for society include the possibility of reshaping political structures and the human condition itself, leading some to label their emergence a fourth revolution and the dawn of a new era for humanity. Facial recognition algorithms could significantly increase the general level of security, improving the efficiency of police forces in arresting criminal fugitives or finding kidnapped children. Pattern recognition solutions are expected to enable autonomous vehicles, resulting in fewer road fatalities together with more efficient, inclusive and ecological mobility. Embedded in personal connected objects or combined with medical apparatus, stochastic algorithms and image processing solutions also promise great advancements in predictive medicine, allowing more efficiency and personalization in medical care.
Other valuable social benefits include the identification of distressed people on social networks; the promotion of empathy in human-computer interactions (especially between senior people in retirement homes and embodied robots) through emotion recognition solutions; automated translations promoting easier interactions between peoples and cultures thanks to natural language processing software; and personal virtual assistants for cheaper, more efficient and personalized public services.
Against this backdrop, pundits and members of the public alike have recently become more acutely aware of the dark side of tech, voicing an aspiration everywhere to “take back control.” For one, a number of scandals have recently revealed the extent of the dangers underlying these new technologies. While Cambridge Analytica showed how large-scale automated misinformation campaigns, often orchestrated from abroad, could permit powerful political interference and hit democracies at their heart, in many countries platforms like Facebook have helped spread hate speech – as with the mobilisation against Rohingya Muslims uncovered by the UN’s investigation in Myanmar (Solon 2018). Yilun Wang and Michal Kosinski’s “gaydar” can be used for discriminatory ends by the government of any of the 73 countries that consider homosexuality a crime (in 13 of which it is a capital offence). Deepfake montages raise concerns about identity theft, from which massive trust issues in public information may ultimately result, as no photo, video or sound recording can any longer be trusted. Other notorious examples include Northpointe’s racially-biased algorithm COMPAS, used by US courts to assess a defendant’s likelihood of reoffending. No wonder that techno-optimism has turned into techno-pessimism.
The issues at stake are numerous and complex, distributed across the innovation circuit from fundamental research and the management of databases (consent in data collection, control over access, portability, and erasure, etc.) to the development of algorithmic systems (biases, minimum accepted rate of accuracy, system integrity and safety, etc.), as well as their intended or unintended applications (emotion recognition for psychological manipulation, facial recognition for mass surveillance, etc.), and the indirect consequences they may have on individuals and societies (the erosion of public trust, filter bubbles, algorithmic governmentality, etc.).
In sum, ever more sophisticated manipulation techniques risk ushering in a world replete with the ‘mining of our lives’ (Zuboff 2019) and the ‘hijacking of our minds’ (as ‘Time Well Spent’ founder Tristan Harris would have it). The increasing concentration of power, be it corporate or political, created by technological advantages and the absence of adequate regulation to govern the interactions they allow will ultimately lead to the loss of our individual sovereignty. Taken together, our innovative techne allows for an unprecedented coup from above, an assault on democracy by way of subverting the very idea of what it means to be an individual (Zuboff 2019, Morozov 2013, Wu 2017, O’Neil 2016, Pasquale 2015, Maragh-Lloyd 2020, Turkle 2016). These patterns are heightened when the military-defence and intelligence industries develop products that not only will be out of our control (as with lethal autonomous weapons systems – LAWS) but also progressively infiltrate our daily lives, as when the biggest arms sellers to the Middle East and North Africa also produce the surveillance technology used to monitor borders and the IT infrastructure to track population movements (Akkerman 2016).
We are thus at a turning point. Publics are increasingly pressuring democratically elected governments to address these dangers, increasingly cooperatively, including through regulation. But traditional institutions struggle to address these issues for several reasons. First, regulating a set of very recent and fast-changing technologies with potential applications in a vast range of domains hampers the delimitation of a precise regulatory scope; premature regulation could end up quickly outdated and ineffective. Second, the high level of technicality of the field calls for a strong expertise that regulators do not possess. Third, the great power of technology companies undermines the capacity of states to enforce national regulation without international coordination, resulting in “law shopping” and a loss of state sovereignty. Finally, the motivation for states to regulate suffers from a critical dilemma. While governments recognize their duty to establish an appropriate regulatory framework to protect their citizens’ rights, they also understand the necessity of supporting the development of these technologies, including by exercising caution against over-regulation within the context of a fierce international race – and nowhere is this race fiercer than in the realm where quantum technology, AI and big data intersect. A fear of hampering national research and haemorrhaging researchers pervades domestic regulatory approaches.
In this context, a fierce debate has emerged over who may be the most legitimate authority to regulate the spectrum of issues related to new technologies. Even as we recognize the need for regulation, we also need to consider the risks of overregulation and abusive paternalism. Although the valuable freedoms that people still enjoy on the internet proceed from the failure to reach an international consensus over common governance, Lawrence Lessig and others have argued that the necessary diversity of regulatory modes includes not only national and international laws, but also mechanisms such as social pressure, financial signals and norms embedded in computer code itself (Lessig 2009).
But in contrast with the early promise of ‘tech’ to effect change without the state and institutions, it has become clear that institutions and the real people who steer them will be central to redressing the balance between the benefits and risks of new technologies (e.g., vaccinations aren’t effective at preventing outbreaks without a public health service to educate and administer; laptops in classrooms aren’t effective without teachers; digital labour is exploitative without unions and regulation). This is all the truer at the international level as we consider the ambivalent transnational political effect of emerging technologies, and in particular their effect on transnational processes of peace and conflict. Are emerging technologies contributing to more peaceful interactions between societies across borders? How can emerging technologies contribute to global peace? What are the main obstacles to unleashing their potential?
Clearly, while it is evident that connectivity, instead of bringing us together in a globalised world, is in fact tearing us apart and being weaponised as a tool of power politics (Kello 2020, Leonard 2021), this is not the end of the story. Technologies can be employed and regulated to foster peaceful processes in numerous ways, as exemplified by the many PeaceTech initiatives around the world. By introducing Global PeaceTech as a new field of social inquiry in the context of International Relations and Global Affairs, we aim to analyse the global context in which these initiatives are embedded and interconnected in order to draw prescriptive lessons.
This paper offers an overview of this agenda in three parts: Part I explores the relationship between technology, peace and war in the IR literature. Part II offers alternative definitions and examples for PeaceTech. Part III sets out a new research agenda in Global PeaceTech, introducing core analytical concepts and research methods, and discussing its potential political and societal impact. We conclude by presenting a series of examples of relevant research areas as a reference for further research in Global PeaceTech.