Autonomous Weapon Systems and Their Fundamental Flaw
DOI:
https://doi.org/10.60935/mrm2025.30.2.30

Keywords:
Autonomous Weapon Systems, Artificial Intelligence, Human Rights, Accountability, Regulation

Abstract
Highly autonomous weapons can make split-second life-and-death decisions without any human involvement, thereby removing human accountability from the decision-making process. Accountability is an essential component of the proper functioning of the law, and all law is premised on human agency; human agency is therefore essential to accountability. Its absence poses a fundamental challenge to the regulation of artificial intelligence.
Using Autonomous Weapon Systems (AWS) as an example, this paper explores the challenge of regulating highly “intelligent” and “autonomous” AI-incorporated weapons, applying a sociolegal methodology that combines doctrinal, theoretical and comparative methods of research. While incorporating AI into weapons is not inherently harmful, the paper concludes that it is impossible to regulate “fully” autonomous AWS (those that incorporate sophisticated AI into the weapon system) because human agency is absent from the “decision” to apply “lethal force”, which undermines accountability. Furthermore, even when human involvement is present, it occurs at different stages of the process and does not necessarily include the decision-making phase. It is therefore submitted that AWS carry a fundamental flaw: they cannot be regulated by law.
License
Copyright (c) 2026 Ruvini Katugaha

This work is licensed under a Creative Commons Attribution 4.0 International License.