
AI, Autonomous Weapons, and International Humanitarian Law: Can Law Keep Up with Machines?

Author: Dóra Fekete


War has always been one of the great accelerators of technological development. From gunpowder to nuclear weapons, every era has equipped states with new tools, while international law has typically responded belatedly, adapting only after the fact. By the third decade of the twenty-first century, however, a qualitative shift has occurred. With the emergence of artificial intelligence (AI), it has become conceivable for the first time that decisions concerning the use of lethal force may be made without direct human judgment. This development lies at the centre of the contemporary debate on autonomous weapon systems. The issue in 2026 is no longer whether such systems exist, but whether international humanitarian law (IHL), a body of law fundamentally grounded in human judgment and responsibility, can meaningfully regulate a form of warfare increasingly shaped by algorithmic decision-making.

The core principles of IHL, namely distinction, proportionality, and precaution, presuppose moral and legal assessment. A military commander must evaluate not only whether an attack is militarily necessary, but also whether the anticipated incidental civilian harm would be excessive in relation to the concrete and direct military advantage anticipated. The challenge posed by AI-driven systems is whether such context-sensitive evaluations can be translated into computational processes, or whether they inevitably require human judgment that resists formalisation.


Accountability, Control, and the Problem of Responsibility



Among the most pressing legal challenges posed by autonomous weapons is the problem of accountability. If an autonomous system causes civilian harm, the question of who bears responsibility becomes increasingly complex. Traditional doctrines of state responsibility and individual criminal responsibility rely on identifiable human decision-makers. When key decisions are delegated to self-learning algorithms, the chain of responsibility becomes obscured.

The doctrine of command responsibility, a cornerstone of both international humanitarian and international criminal law, presupposes the existence of effective human control over military operations. Yet the notion of control becomes deeply problematic when applied to adaptive systems whose behaviour may not be fully predictable even to their operators. This creates what many commentators describe as an accountability gap, in which violations may occur without a clear subject of legal responsibility. Institutions such as the International Committee of the Red Cross (ICRC) and various United Nations expert bodies have repeatedly warned that this erosion of accountability threatens the normative foundations of humanitarian law. If responsibility is diffused among software developers, military planners, and political authorities, the capacity of IHL to function as a system of restraint and accountability is significantly weakened.


Regulation, Human Control, and the Future of IHL


It is therefore unsurprising that autonomous weapons have been the subject of sustained debate within the United Nations, particularly under the framework of the Convention on Certain Conventional Weapons (CCW). For many years, discussions were dominated by the claim that existing IHL is technology-neutral and therefore sufficient to regulate emerging weapons systems. By 2026, however, this position appears increasingly inadequate.

In response, growing attention has been directed toward the concept of “meaningful human control.” Rather than advocating an outright ban, this approach seeks to ensure that humans retain a decisive role in the selection of targets, the timing of attacks, and the use of force. Meaningful human control requires not merely human presence, but understanding, anticipatory judgment, and a genuine capacity to intervene.

Yet this concept is not without its critics. The inherent vagueness of what constitutes “meaningful” control leaves room for divergent interpretations and political compromise. There is a real risk that the concept may function less as a constraint on autonomy and more as a rhetorical device that legitimises increasingly automated forms of warfare.

The challenge extends beyond weapons themselves. AI-based surveillance and targeting systems, including facial recognition and predictive analytics, are reshaping how individuals are identified as threats. These practices place significant strain on the principle of distinction, particularly where probabilistic assessments substitute for direct human evaluation. Moreover, documented algorithmic biases raise concerns that such systems may disproportionately affect certain civilian populations. At this intersection, the boundaries between international humanitarian law and international human rights law become increasingly blurred. As several UN reports have noted, the deployment of AI in armed conflict raises not only legal but also deeply ethical questions about fairness, dignity, and the delegation of coercive power to machines.


Conclusion

By 2026, it is evident that the debate over autonomous weapons is not merely about technological innovation, but about the credibility of international humanitarian law itself. If the law is unable to articulate clear limits on the delegation of lethal decision-making to machines, it risks losing its normative authority. Ultimately, the regulation of autonomous weapons will serve as a test case for how the international legal order confronts algorithmic power more broadly. The central question is not whether machines will be used in warfare, but whether the international community is prepared to affirm, in legal terms, that responsibility for life-and-death decisions cannot be automated.

