The growing deployment of AI-supported Autonomous Weapon Systems (AWSs) on the battlefield forces a structural re-examination of military strategy and international law. The development and battlefield use of these technologies threaten basic principles of the classical law of war, such as human control, the chain of responsibility and proportionality, and the gap between legal regulation and technological advance is therefore widening dramatically. Today, the legitimacy of AWSs is tied not only to questions of technical safety but also to the boundaries of human rights law, international humanitarian law and international criminal law.
AWSs can perform target detection, decision-making and engagement without human intervention. Such systems are being tested in prototype aerial platforms (e.g. loitering "kamikaze" drones), land systems and naval platforms, and many countries, most notably the United States (US), China, Russia, Israel and South Korea, are investing in R&D in this field. However, the increasing autonomy of AWSs raises concerns about the controllability of armed conflict, the prevention of civilian casualties and military ethics. In particular, removing the human from the control loop blurs moral and legal responsibility for the conduct of war.
One of the most debated issues regarding AWSs in international humanitarian law is how these systems can satisfy the principles of proportionality and distinction. The Geneva Conventions and their Additional Protocols stipulate that a clear distinction must be made between civilians and combatants and that disproportionate force must not be used against military targets. Entrusting these distinctions to algorithms, however, remains unreliable owing to both technical limitations and data bias. For example, whether an autonomous system can distinguish a child from a combatant, or correctly assess potential civilian presence near a military target, is still a serious area of uncertainty. This creates a crisis that is not merely technical but bears directly on legal liability and the applicability of the rules.
Another issue is the question of who will be responsible for the damage caused by AWSs. Under the traditional law of war, a violation by a soldier or commander triggers individual criminal responsibility and state liability. When the decision-making mechanism is an algorithm, however, that responsibility cannot easily be imposed on the developer company, the commanders or the state. This situation opens a new debate in international criminal law about the validity of the concepts of criminal capacity, intent and fault. Who is responsible for the actions of an autonomous system? Until this question has a clear answer, the use of AWSs will remain a grey area in international law.
Among the first comprehensive initiatives to discuss the compatibility of AWSs with international law were the debates launched under the United Nations Office for Disarmament Affairs (UNODA) in 2013 and the intergovernmental expert meetings on Lethal Autonomous Weapons Systems (LAWS) held in Geneva in 2017. These meetings revealed a sharp division between states demanding a complete ban on AWSs and those defending the systems and open only to their regulation. While countries from the Global South (Brazil, Pakistan, Mexico and Argentina) demanded a ban on AWSs, the US, Israel, Russia and China advocated, at most, regulation within ethical frameworks. This division makes it difficult to adopt a universal legal instrument on AWSs and creates a normative gap in international society.
In this context, some states call for a "preemptive ban". Such a policy would establish the legal framework before the technology matures, limiting potential threats before they emerge. However, this approach conflicts with the strategic interests of the major powers: AWSs have become a preferred tool because they offer low-cost, rapid war-fighting capacity while reducing human and political costs. In addition, since AI-based AWSs are largely developed by private companies, whether such non-state actors can be brought within the framework of international law is a separate question.
The utilisation of AWSs also creates a new paradigm for international relations theory. From the realist perspective, AWSs increase the absolute security and deterrence capacity of major powers; yet this same dynamic opens the door to asymmetric wars and unethical engagements. Liberal theories, by contrast, argue that AWSs can reduce conflict if they are kept under an auditable, transparent legal regime. The constructivist approach highlights that AWSs are not just weapons but also symbolic tools that transform ethical norms and the nature of war. The expansion of AWSs is thus a multidimensional phenomenon that reshapes international norm-making, balances of power and the nature of war itself.
The first regulatory moves towards AWSs seem more likely at the regional than the international level. Some European Union (EU) countries regard these weapons as unethical and demand a ban; Germany and Austria, for example, have openly taken a stance against weapon systems beyond human control. Some members of the African Union are likewise concerned about the uncontrolled spread of AWSs in conflict zones, warning that violations of humanitarian law will increase, especially in fragile regions such as the Sahel. In this respect, the anti-AWS stance is shaped by normative principles such as global justice, equality and the protection of the law of war.
The digitalisation of warfare may make AWSs not merely an option in future conflicts but the norm. At that point, the growing gap between the human-centred law of war and the machine-centred dynamics of engagement becomes both a legal crisis and a source of ethical turbulence. The challenge facing the international community is neither simply to regulate these weapons nor simply to ban them; the real task is to limit the technology while preserving the human role, responsibility and moral position in war. Otherwise, a war order that excludes human responsibility could bring about a dystopia threatening both the law of war and the international order.
In conclusion, the rise of AI-supported autonomous weapon systems is not merely a technological advance but a multidimensional challenge for international law, strategy, ethics and human rights. How these weapons will be harmonised with international law, within which normative framework states will act and who will determine the new order of war are questions to which lawyers, diplomats, strategists and the conscience of humanity must seek common answers. In this context, establishing a global normative framework for AWSs will be one of the fundamental tasks shaping the international order of the next decade.