Regulating Lethal Autonomous Weapons Systems (LAWS) in a Fractured Multipolar Order

Analysis

By Sharath Kumar Kolipaka

By late December 2025, international efforts to regulate Lethal Autonomous Weapons Systems (LAWS) had reached a familiar deadlock, this time in Geneva. At the heart of the stalemate is the Group of Governmental Experts (GGE) operating under the Convention on Certain Conventional Weapons (CCW). Over the past year, the GGE made what initially appeared to be meaningful progress. Delegations worked through a “rolling text,” a draft framework that marked a shift from abstract debate toward actual legal language. The proposal introduced a two-tier approach: a complete prohibition on autonomous systems incapable of distinguishing between civilians and combatants, alongside strict regulatory controls on other systems to ensure what diplomats refer to as “meaningful human control.”

Yet despite this forward momentum, the process has effectively stalled. The CCW’s consensus-based decision-making rules have once again proven to be its greatest constraint. In practice, they enable a small number of highly developed nations to obstruct or indefinitely postpone any progress toward a legally binding agreement. As a result, even proposals with broad support struggle to move beyond discussion.

That frustration became evident in November 2025, when the UN General Assembly's First Committee took unexpected action. Departing from Geneva's glacial pace, it passed a historic resolution calling for the negotiation of a legally binding LAWS agreement by the Seventh Review Conference in 2026. The resolution passed overwhelmingly, with 156 nations in support, signalling that a large portion of the international community is no longer prepared to wait for a consensus that may never be reached.

The vote also exposed deep geopolitical divides. Only five nations voted against the resolution, most notably the United States and Russia. Their resistance sends a clear message: leading military powers remain unwilling to allow international law to constrain the rapid integration of artificial intelligence into their armed forces. For these states, strategic and technological advantage appears to outweigh concerns about legal or ethical limits.

Often labelled “killer robots,” LAWS represent a profound and deeply controversial shift in how wars may be fought. The distinction lies in agency. In conventional drone operations, a human operator remains “in the loop,” retaining the final authority over the use of lethal force. Fully autonomous weapons, by contrast, place humans “out of the loop.” Using sensor inputs, pattern recognition, and algorithmic decision-making, these systems can identify and engage targets independently, often in fractions of a second. Whether deployed as swarms of micro-drones or as automated weapons platforms, the entire kill chain unfolds without direct human intervention.

For many ethicists and legal scholars, this marks the arrival of what they describe as “algorithmic warfare.” It is a transformation in which human judgment, moral responsibility, and contextual interpretation of international humanitarian law risk being replaced by probabilistic calculations and binary code. The concern is not merely technological, but fundamentally human: once machines decide who lives and who dies, accountability becomes blurred and the moral foundations of warfare itself are called into question.

We are currently in what experts call the "pre-proliferation window," the final moment in history before these weapons become as common and unmanageable as small arms. The 2026 deadline is increasingly seen as the "finish line" for global diplomacy; if a treaty is not reached by then, the speed of innovation in military AI, driven by the very powers currently blocking the UN's progress, will likely make any future regulation obsolete before the ink is even dry.

The divide at the United Nations is not merely a legal disagreement but a reflection of how different nations view the future of their national security. The voting patterns of 2025 suggest that for the world's major military powers, the decision to support or oppose regulation is closely tied to their current stage of technological development and their specific strategic goals. While the majority of nations seek a preemptive ban to avoid a new arms race, the "Big Three" (the United States, Russia, and China) have adopted positions that protect their ability to innovate and deploy these systems on their own timelines.

The United States and Russia have positioned themselves as the primary opponents of a binding treaty because they have already integrated autonomous capabilities into their long-term defence doctrines. For these nations, refusing to support a ban is a matter of protecting massive financial and structural investments. The United States' "Replicator" effort, like Russia's development of large autonomous platforms such as the S-70 Okhotnik, reflects the conviction that artificial intelligence is needed to overcome traditional battlefield limits such as human error and communication jamming. China, by contrast, chose to abstain, reflecting a posture of "strategic ambiguity." By speaking in favour of regulating LAWS while declining to vote for the resolution, Beijing positions itself as a responsible global actor, avoids the diplomatic fallout of a "No" vote, and preserves its freedom to continue rapid advances in mass-produced drone technology as it works to close the technological gap with the West.

The gap between diplomatic rhetoric and military reality is most visible in the specialised budgets and secret testing grounds of the major powers. For the U.S., Russia, and China, the decision to stall or abstain from UN regulation is not just a policy preference; it is a defensive move to protect the billions of dollars already sunk into hardware and software now being prepared for the field. As we move into 2026, the financial scale of this arms race has reached a level at which a total ban would impose significant economic and strategic losses on the frontrunners.

The United States leads the world in the development of sophisticated, high-intelligence autonomous systems. For the 2026 fiscal year, the Pentagon has requested a record $14.2 billion for AI and autonomy research. A core priority remains the "Replicator" program, which received $1 billion in 2025 to fast-track the deployment of thousands of expendable autonomous drones and surface vessels. The U.S. focus is on superior software and "black box" algorithms that allow weapons to operate in environments where GPS and communications are completely severed, a capability that currently depends on high-end semiconductors that the U.S. and its allies closely control.

Russia’s strategy is driven by the immediate need for survivable technology on the front lines. Its primary autonomous platform is the S-70 Okhotnik-B ("Hunter"), a heavy stealth drone designed for autonomous deep-strike missions. While Russia does not release specific AI budget figures, its total military spending for 2025 reached a record $145 billion. Russia views autonomy as a necessity for countering Western electronic warfare; by removing the human pilot from the loop, it creates a weapon that cannot be "disconnected" by jamming.

China is often perceived as lagging behind the U.S. in high-end semiconductors, the "brains" of these weapons, but it has compensated by leveraging its status as the world’s industrial hub. China’s state-backed military AI investment is estimated at $15 billion annually, focused on "massive autonomy." In 2025, it unveiled the "Jiu Tian," a massive drone carrier designed to launch hundreds of autonomous units simultaneously. China abstains at the UN because doing so lets it claim the moral high ground while its factories continue to bridge the technical gap through sheer volume and rapid iteration.

Turkey, the most notable nation to join China in abstention, is known for manufacturing UAVs that performed well in recent conflicts and attracted global attention. Its abstention appears to be a calculated move: it avoids diplomatic fallout with the countries voting for a ban while securing Turkey's future in the autonomous weapons market.

For New Delhi, the focus is on strategic autonomy. India manages some of the world’s most contested and high-altitude borders, where maintaining a massive human presence is logistically difficult. In February 2025, India launched the Autonomous Systems Industry Alliance (ASIA) in partnership with the United States. This alliance, which pairs Indian giants like Mahindra with U.S. leaders like Anduril, is designed to fast-track India’s capacity to build its own "sovereign" AI. By abstaining at the UN, India protects its right to develop these tools to monitor its frontiers and to compete as a modern global power without depending on foreign hardware.

Similarly, for Israel, technology has always been the "great equaliser" that offsets its small geographic size and population. Israel’s defence doctrine is built on maintaining a Qualitative Military Edge (QME). In late 2025, it accelerated the deployment of the "Iron Beam," a laser system that uses autonomous targeting to neutralise incoming threats at a speed no human operator could match. Israel opposes a total ban because its national survival is increasingly tied to these defensive, "human-out-of-the-loop" systems. To Israel, a restrictive treaty is not just a legal hurdle; it is a potential threat to its ability to defend its citizens from high-speed, modern attacks. Accordingly, Israel voted against the resolution.

The 2026 Seventh Review Conference in Geneva represents a decisive "moment of truth" for international diplomacy. The "pre-proliferation window" is no longer a distant warning but a narrow gap that is rapidly being closed by the sheer velocity of military AI development. The massive capital already committed by the U.S. and Russia, combined with China’s industrial ability to flood the market with autonomous hardware, has created a momentum that is difficult for any non-binding UN resolution to slow. 

Ultimately, the choice facing the international community is no longer about whether these weapons will exist, but whether they will be governed by the moral principles of human oversight or left to the indifferent efficiency of algorithmic warfare. If the 2026 deadline passes without a legally binding treaty, the "rolling text" currently being debated will likely serve as a historical record of a missed opportunity rather than a functional barrier. As the window for regulation nearly shuts, the world stands at a crossroads: we must either establish meaningful human control now or accept a future where the laws of war are written in machine code rather than human conscience.

Disclaimer: This paper is the author's individual scholastic contribution and does not necessarily reflect the organization's viewpoint.

Sharath Kumar Kolipaka completed his master’s in Diplomacy, Law and Business from Jindal School of International Affairs, specializing in peace and conflict studies. He is a Research Fellow at The New Global Order.