Autonomous Weapons and Terrorism: The Ethical and Strategic Implications of the Fourth Industrial Revolution's Impact on Modern Warfare
The rise of AI-driven autonomous weapons is transforming warfare, enhancing precision but raising ethical concerns. Terrorist organizations' access to these technologies could usher in a new era of asymmetric warfare. Strengthening international regulations, promoting ethical AI development, and fostering global cooperation are essential to address these threats while ensuring global security.
Analysis
By Utkarsh Ajay Ingale
Today's world is experiencing a transition between the third and fourth industrial revolutions, with artificial intelligence and machines at the forefront of this shift. In 2021, Brigadier General Yossi Sariel published a book titled "The Human Machine Team," in which he makes a case for designing a technology that could process massive amounts of data to generate thousands of potential targets to strike in the heat of war. Such technology would resolve "the human bottleneck" in processing massive data and locating new targets, thereby enhancing decision-making. This vision is becoming a reality with the emergence of autonomous weapons, including drones and AI-driven weaponry.
AI-driven technologies are not only shaping industries and economies but also transforming the nature of warfare. With improved efficiency and precision, AI is now being weaponized in ways that challenge traditional warfare norms and raise significant ethical concerns. One of the most pressing issues is the potential use of autonomous weapon systems by terrorist and extremist organizations.
The proliferation of such technology among terrorist organizations raises significant concerns. Employing AI-driven autonomous weapons for precise and lethal attacks may usher in an unprecedented era of asymmetric warfare, granting non-state actors unparalleled capabilities to disrupt the global security paradigm.
Fourth Industrial Revolution and Autonomous Weapons:
The increased use of autonomous and artificial intelligence-driven weapon systems marks the transition between the third and fourth industrial revolutions. Autonomous weapons systems are progressing from science fiction to designer drawing boards and, subsequently, to the battlefield. However, no universally accepted definition of autonomous weapon systems exists.
A group of governmental experts on LAWS defines them as follows: "Autonomous Weapons Systems (AWS) are systems that, upon activation by a human user(s), use the processing of sensor data to select and engage a target(s) with force without human intervention."
Autonomous weapon systems have developed significantly since World War II. Advances in automation led to the emergence of defensive autonomous systems worldwide. An early example is the Phalanx CIWS, developed in the United States during the 1970s. Similar technologies later appeared in other countries, including the Russian Arena, the Israeli Trophy, and the German AMAP-ADS active protection systems. Missile defense systems such as Iron Dome, Iron Beam, and David's Sling likewise demonstrate precise autonomous interception capabilities.
Beyond defensive systems, the integration of artificial intelligence has significantly enhanced the lethality of autonomous offensive weapon systems. Israel acknowledged in 2017 that it was developing military robots as tiny as flies. Other examples include the US Navy's "ghost fleet" of unmanned ships and the British Army's deployment of unmanned vehicles and robots in 2019. Turkey's use of Kargu-2 drones in Libya reportedly marked the first battlefield use of so-called killer robots. Similarly, Israel has used AI-driven targeting systems known as Lavender and The Gospel to hunt down Hamas targets in Gaza.
The presence of autonomous and potentially lethal weapon systems poses significant ethical and moral dilemmas regarding their deployment. Furthermore, it raises concerns about potential abuse by non-state actors, including extremist and terrorist groups, who may exploit these sophisticated and deadly capabilities for malicious purposes.
Strategic Implications of Autonomous Weapon Systems:
The strategic implications of autonomous weapons are particularly alarming in the context of terrorism, where their proliferation could significantly alter the global security landscape. In January 2024, the Iran-aligned Kataib Hezbollah group detonated an explosive-laden drone against Tower 22, an American outpost in Jordan. Similar tactics were seen when Hamas used drones on October 7 to strike observation towers and facilitate its incursions into Israel. ISIS, too, has used commercial drone systems to attack Peshmerga soldiers in northern Iraq and the Iraqi army in Mosul in 2017.

These examples highlight the readiness of terrorist organizations to adopt new technologies and the strategic challenge such technologies pose to the global security landscape. Moving beyond traditional tactics of suicide bombings and hijackings, artificial intelligence would allow these groups to carry out sophisticated and lethal operations with minimal risk to their personnel and resources. The integration of artificial intelligence into weapon systems could spark an arms race and usher in a new era of asymmetric warfare in which terrorist organizations possess unprecedented capabilities to conduct sophisticated operations and disrupt global security.
Ethical Concerns:
The absence of human judgment in critical life-and-death situations raises ethical concerns about the use of autonomous weapons. Designed to function on pre-programmed algorithms, these systems lack the capacity for nuanced understanding and moral reasoning. The ethical dilemma is whether machines devoid of empathy and moral discernment should be trusted with decisions that determine human survival. If an autonomous weapon commits a war crime, responsibility becomes blurred: does it fall on the programmer, the manufacturer, or the military commander? This lack of accountability calls into question the moral permissibility of autonomous weapons in modern warfare. The risk is illustrated by a 2020 UN report on the use of the Turkish-made Kargu-2 drone in Libya, which allegedly "hunted down" retreating soldiers without human intervention. Israel's use of the Lavender and The Gospel AI systems to identify Hamas targets during its military campaign in Gaza, in which nearly 15,000 suspected targets were killed during the first two months, further raises questions about the ethics of deploying such technology in warfare.
Navigating the Threat:
Several measures can be taken to navigate the threat posed by the proliferation of autonomous weapon systems into the hands of extremist and terrorist organizations.
Firstly, concerted efforts to strengthen international regulations are necessary. Framing a universally accepted definition and prioritizing the development of binding treaties to govern the use of such weapons could provide accountability and ethical standards.
Secondly, enhancing detection and prevention measures is crucial. Intelligence agencies must invest in advanced technologies capable of identifying and neutralizing autonomous weapons before they fall into the hands of terrorist organizations.
Thirdly, promoting ethical AI development is essential. AI researchers and designers must be encouraged to incorporate ethical considerations into the design of autonomous weapon systems, and fail-safes and safeguards must be built in to ensure their ethical use.
Lastly, because the threat posed by autonomous weapons is a global challenge, fostering global cooperation for a unified response is essential. Countries must collaborate on intelligence sharing, joint research initiatives, and coordinated efforts to prevent the proliferation of these technologies to terrorist groups.
Conclusion:
The emergence of autonomous weapons represents a double-edged sword in modern warfare. While these technologies offer significant benefits in precision and efficiency, they also pose grave risks of misuse by extremist and terrorist organizations. Addressing this situation requires an urgent regulatory framework and sustained international cooperation. As the world stands on the brink of a new era in warfare, it is imperative to mitigate the risks of misuse while harnessing the potential benefits of autonomous weapons for global security.
Disclaimer: This paper is the author’s individual scholastic contribution and does not necessarily reflect the organisation’s viewpoint.
Utkarsh Ajay Ingale is a post-graduate student of politics and international relations at Pondicherry University, with a specialized focus on terrorism and conflict studies.