Autonomous Weapon Systems and Their Fundamental Flaw

Authors

  • Ruvini Katugaha, University of the West of England

DOI:

https://doi.org/10.60935/mrm2025.30.2.30

Keywords:

Autonomous Weapon Systems, Artificial Intelligence, Human Rights, Accountability, Regulation

Abstract

Highly autonomous weapons can make split-second decisions about life and death without any human involvement, thereby removing human accountability from the decision-making process. Accountability is an essential component of the proper functioning of the law, and all law is premised on human agency. Thus, human agency is essential to accountability, and its absence poses a challenge to the regulation of artificial intelligence.

Using Autonomous Weapon Systems (AWS) as an example, this research paper explores the challenge of regulating highly "intelligent" and "autonomous" AI-incorporated weapons, adopting a sociolegal methodology that employs doctrinal, theoretical and comparative methods of research. While incorporating AI into weapons is not inherently harmful, the paper concludes that it is impossible to regulate "fully" autonomous AWS (which incorporate sophisticated AI into the weapon system) because human agency is absent from the "decision" to apply "lethal force", which undermines accountability. Furthermore, even when human involvement is present, it occurs at different stages of the process and does not necessarily include the decision-making phase. Thus, it is submitted that AWS carry with them the fundamental flaw that they cannot be regulated by law.

Author Biography

Ruvini Katugaha, University of the West of England

The author is a doctoral student at the University of the West of England, UK, with a research interest in International Humanitarian Law. Her Ph.D. research is focused on regulating autonomous weapon systems in armed conflicts.

Published

2026-02-17

Section

Contributions