University of the West of England
DOI: https://doi.org/10.60935/mrm2025.30.2.30
Highly autonomous weapons can make split-second decisions about life and death without any human involvement, thereby removing human accountability from the decision-making process. Accountability is an essential component of the proper functioning of the law. All law is premised on human agency; thus, human agency is essential to accountability. The lack of human agency poses a challenge to the regulation of artificial intelligence.
Using Autonomous Weapon Systems (AWS) as an example, this research paper will explore the challenge of regulating highly “intelligent” and “autonomous” AI-incorporated weapons using a sociolegal methodology employing doctrinal, theoretical and comparative methods of research. While incorporating AI into weapons is not inherently harmful, the paper concludes that it is impossible to regulate “fully” AWS (which incorporate sophisticated AI into the weapon system) because human agency is absent in the “decision” to apply “lethal force”, which undermines accountability. Furthermore, even when human involvement is present, it occurs at different stages of the process and does not necessarily include the decision-making phase. Thus, it is submitted that a fully AWS carries with it the fundamental flaw that it cannot be regulated by law.
Keywords: Autonomous Weapon Systems, Artificial Intelligence, Human Rights, Accountability, Regulation
Citation: Katugaha Ruvini, Autonomous Weapon Systems and its Fundamental Flaw, in: MRM 30 (2025) 2, S. 87–96. https://doi.org/10.60935/mrm2025.30.2.30.
Permissions: The copyright remains with the authors. Copyright year 2026.
Unless otherwise indicated, this work is licensed under a Creative Commons License Attribution 4.0 International. This does not apply to quoted content and works based on other permissions.
Received: 2025-09-02 | Accepted: 2026-01-05 | Published: 2026-02-17
Artificial intelligence (AI) has come a long way in the last decade. Its integration into our day-to-day lives has reached a point where AI has an impact on human rights. Academics are widely divided on the nature of that impact, though all acknowledge both its benefits and its challenges. All contend, however, that, as with everything else in our daily lives, AI-related technology requires some form of regulation. The key question is whether it can be regulated. While most AI-related technology can be regulated (e.g., during manufacturing, procurement, use, etc.), there appears to be an emerging category of AI-incorporated technology that may prove difficult, if not impossible, to regulate due to its nature. One such category is the incorporation of AI in the context of weapons, particularly Autonomous Weapon Systems (AWS).
There is no universally agreed-upon definition of AWS, as States disagree on what constitutes an AWS. For this research paper, the approach will be to outline the elements that constitute an AWS rather than relying on an existing definition. These elements are: a high level of autonomy, a lack of or minimal human control, the ability to make lethal decisions, and the unpredictability of those decisions from a human perspective.1 Thus, AWS are weapon systems capable of acting autonomously with minimal or no human control in the lethal decision-making process. They mainly comprise two types: semi-autonomous weapon systems (SAWS) and fully autonomous weapon systems (fully AWS). SAWS are weapon systems that maintain some form of meaningful human control in the lethal decision-making process, whereas fully AWS lack such control. The reason for emphasizing meaningful human control is that its absence entails a lack of human judgement in the decision to employ lethal force. This paper focuses solely on fully AWS, as almost all of them incorporate AI, thereby granting the weapon system the capacity to be ‘highly intelligent’ (depending on the type of AI) and thus allowing it to be deemed an ‘intelligent’ weapon system. In this paper, ‘intelligent’ refers to AI that enables the weapon to perform certain high-level cognitive functions (typically performed by humans) without human involvement or control. It should be emphasized that there is a growing trend in most AWS (though not all) to incorporate machine learning, allowing the AWS not only to operate independently of a human but also to ‘think’, act and ‘learn’ on its own. AI incorporated into everyday objects like cars and phones may have positive implications, and using AI for military purposes is not necessarily negative per se. It is important to note that this paper does not take the position that incorporating AI into weapon systems is inherently negative, nor does it argue that the use of highly ‘intelligent’ AWS is detrimental. As Scharre states, “many military applications of AI are uncontroversial—improved logistics, cyber defences, and robots for medical evacuation, resupply, or surveillance—however, the introduction of AI into weapons raises challenging questions”.2 One challenge is that as AI becomes more sophisticated (or ‘intelligent’), it also becomes more autonomous. While greater autonomy may seem acceptable and beneficial in most cases, it could be disastrous on a battlefield.
Therefore, the key question is not only how AI-integrated fully AWS can be regulated but, more fundamentally, whether such weapons can be effectively regulated at all. The need for regulation arises because AI-integrated fully AWS, unlike other weapon systems or technologies, can independently decide to apply lethal force against humans. It is argued that, without proper regulation, this capability could lead to arbitrary loss of life (thereby violating Human Rights Law and International Humanitarian Law (IHL)). While IHL remains lex specialis in armed conflicts, Human Rights Law applies concurrently.3 Accordingly, this paper focuses on the use of fully AWS and their impact on human rights in the context of an armed conflict.4 Furthermore, it will essentially, though not exclusively, focus on non-derogable human rights, as other human rights can be derogated from, subject to conditions,5 during an armed conflict.
This paper explores the impact of AI-integrated fully autonomous weapons on human rights and concludes that incorporating such AI into a weapon system (in a manner that renders it a ‘fully’ AWS) makes the system, by its very nature, impossible to regulate. This is because human agency is absent in the ‘decision’ to apply ‘lethal force’, thereby eliminating the element of accountability.
This paper will first discuss the current debate on AWS by briefly highlighting the arguments brought forth by those for and against the use of AWS. Second, it will discuss the impact of using AWS on selected human rights. Third, it will bring into focus the current positions on regulating AWS. Fourth, it will turn to the challenges of regulating AWS, exposing their fundamental flaw. The paper will conclude that fully AWS have a fundamental flaw that renders them incapable of being regulated by law.
The current debate on AWS warrants a brief discussion to highlight the broad spectrum of opinions regarding its regulation, notably that of fully AWS.6 Those opposed to the use and development of AWS7 fear that humans will delegate the decision to use lethal force (thereby transferring the power to take lives) to a weapon system that has no feelings or remorse.8 In a sense, they worry that these systems will be unable to exercise human judgement on the battlefield in determining what is right or wrong, or what is lawful or not. This absence of human judgement may lead to a serious breakdown of the law itself, as AWS cannot be held accountable for their actions. The foundation of law rests on the principle that those governed by it will adhere to its rules and be held accountable for failing to do so.
The opposing view is that a complete prohibition of fully AWS cannot be achieved. Several reasons9 are invoked to support this stance, including the right to self-defence as outlined in the UN Charter.10 These proponents argue that AI-driven AWS should be available for use in self-defence in the event of an attack with similar weapons; they believe that only AI-driven weapons can successfully counter other AI-driven weapon systems. On this view, banning AWS would leave States vulnerable and unable to defend themselves. Moreover, proponents argue that employing AWS on the battlefield could significantly reduce the loss of combatant and civilian lives, improve objectivity and accuracy, and even allow war to be waged ethically.11
While some may argue that the use of AWS falls exclusively within the domain of IHL as it is lex specialis,12 the human rights law perspective cannot be ignored. For example, the 2013 Report of the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions highlighted the importance of regulating AWS.13 Moreover, regional instruments, such as the Belén Communiqué of the Latin American and the Caribbean Conference of Social and Humanitarian Impact of Autonomous Weapons,14 and the Caribbean Community (CARICOM) Declaration on Autonomous Weapons Systems adopted at the CARICOM Conference,15 have recognized the impact of the deployment of AWS on human rights.
To illustrate the impact of the use of AWS on human rights, this paper examines the concept of human dignity, which is fundamental to all human rights; the principle of non-discrimination, which applies in conjunction with all human rights; and three specific human rights: the right to life, the prohibition of torture, and the right to privacy.
Every single human right is based on the underlying concept of human dignity.16 As the Universal Declaration of Human Rights (UDHR) emphasizes in its preamble,17 the “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world”. Human rights are inherent in every human being by virtue of simply being human, whether they are criminals or law-abiding citizens. Again, Art. 1 of the UDHR reminds us that all “human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood”.
Undoubtedly, human dignity, the building block of all other human rights, is seriously affected in times of armed conflict. Granting AWS the power to make life-and-death decisions in armed conflict presents a fundamental challenge to the concept of human dignity. First, these weapons lack the capacity to comprehend and respect human dignity. To an AWS, a human or a human combatant is merely digits and numbers in a mechanical process of calculating and selecting whether to kill. This, in a way, dehumanizes and reduces the value of a human being. As a Human Rights Watch report clearly points out, fully AWS are inanimate machines that “could truly comprehend neither the value of individual life nor the significance of its loss. Allowing them to make determinations to take life away would thus conflict with the principle of dignity”.18 Whilst it is true that IHL mandates the categorization of people under the principle of distinction, thus reducing humans to lawful (e.g. combatants) and unlawful (e.g. civilians) targets, one of the key principles of IHL is also the principle of humanity. For example, wounded or incapacitated combatants, as well as those who surrender, cannot be targeted. But how would an AWS recognize that a combatant is unwell beyond what is physically apparent? Would it be able to show compassion and treat a combatant rendered hors de combat in a humane manner?
Second, in the words of the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, the use of AWS means that “in addition to being physically removed from the kinetic action, humans would also become more detached from decisions to kill – and their execution”.19 In turn, this detachment may lead to an increased willingness to kill ‘the enemy’ (i.e. launch an AI-integrated AWS) without the burden of conscience or accountability. The premise is that only a human can recognize the intrinsic value of another human.20 Machines, at their current stage of technological development, cannot fathom the concept of human dignity or the value of human life. Therefore, as it is impossible to train machines to ‘value’ human lives, AI-integrated AWS cannot be reliably regulated to ensure they respect human dignity.
Like the concept of human dignity, the principle of non-discrimination is fundamental in human rights law. The ICCPR, like many human rights treaties and declarations,21 emphasizes in its Art. 26 that “the law shall prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status”. While some may argue that an AWS does not have biases like humans do (as it is just a machine and is therefore neutral), algorithms reflect the biases of the programmers or parties manufacturing the AI or weapon system, as well as the biases embedded in the data on which they are trained. This may lead to racial profiling or discrimination based on various other grounds, sometimes very subtly weaponizing racism or misogyny. It is thus difficult, if not impossible, to ensure that a fully AWS does not violate the principle of non-discrimination under human rights law.
The right to life is a basic and fundamental right that is non-derogable. According to Article 6 of the ICCPR, “every human being has the inherent right to life (...) No one shall be arbitrarily deprived of his life”. This is reaffirmed in other treaties and declarations as well.22 The Human Rights Committee (UNHRC) noted in General Comment No. 36 (2019) that this right is “the supreme right from which no derogation is permitted, even in situations of armed conflict or other public emergencies that threaten the life of the nation”.23
As early as 2013, the attention of the Human Rights Council was drawn to the impact of AWS. As the Special Rapporteur’s report pointed out, “the introduction of such powerful yet controversial new weapons systems has the potential to pose new threats to the right to life”.24 The horror of humans taking the lives of other humans is now further complicated by the arrival of AWS, which lack empathy or the capacity for forgiveness. As the Special Rapporteur correctly points out:
“One of the most difficult issues that the legal, moral and religious codes of the world have grappled with is the killing of one human being by another. The prospect of a future in which fully autonomous robots25 could exercise the power of life and death over human beings raises a host of additional concerns.”26
From a legal viewpoint, the problem with AWS, especially fully AWS, is that, in warfare, they might not be able to comply with the prohibition of arbitrary killing. Indeed, according to human rights law, not every taking of human life is prohibited; only that which is deemed arbitrary.27 When fully AWS engage in warfare, they essentially make decisions about using force without human intervention. The decisions made by AWS in any given situation can be extremely unpredictable for the humans who authorized the use or launched the weapon. This unpredictability can result in arbitrary killings that cannot be prevented, as humans are taken out of the decision-making loop.
The use of AWS also raises concerns regarding the prohibition of torture, a non-derogable right enshrined in Art. 7 of the ICCPR28 in the following manner: “no one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment”. There are two scenarios in which the use of AWS could lead to a violation of the prohibition of torture and other forms of inhuman treatment. First, an AWS programmed to kill might fail to do so and instead inflict immense pain on humans. Second, depending on how they are programmed and manufactured, AWS could potentially be used with the intention of inflicting torture or other types of inhuman treatment. It would be exceedingly difficult for a weapon system to discern what constitutes ‘degrading’ or ‘cruel’ treatment, especially in context-specific situations. Even if AWS were able to gather data on the health of a human, they would likely be unable to assess physical and mental pain accurately and would thus struggle to determine whether they were engaging in torture or cruel, inhuman or degrading treatment or punishment.
Although the right to privacy in Art. 17 ICCPR, which prohibits “arbitrary or unlawful interference with his privacy, family, home or correspondence […]”, is not a non-derogable right, its examination is warranted because it highlights the complex nature of using fully AWS. In fact, an AI-integrated fully AWS might be able to tap into a variety of platforms and databases, mining for information. In today’s technological era, people’s information, such as medical records, national identity cards, and genetic data, is often stored in government and private databases. This information can be accessed and used for profiling individuals as well as for targeted killings. If granted such access, an AWS could easily profile individuals in an armed conflict and deploy force against them. Additionally, there is a risk that such data could fall into the hands of armed non-State actors as well as other States (in times of occupation).
To summarize my position so far, fully AWS (which integrate high-level AI technology) would not be able to comply with human rights law.
Currently, there are three different approaches put forward by States, international organisations and NGOs on the matter of regulating AWS, namely:
i. Total prohibition of AWS (what many States mean by AWS here is fully AWS).
ii. Regulation of AWS (without a complete prohibition) through a treaty or a non-legally binding code of conduct.
iii. Regulation through a two-tier approach: prohibiting fully AWS without any meaningful human control and regulating those with some form of meaningful human control.
Advocates of a ban on fully AWS, such as the Campaign to Stop Killer Robots (including Human Rights Watch) and States such as Canada, believe that fully automated AI-driven AWS could never be used in a manner compliant with IHL. They are of the opinion that the “use of an AWS whose operation, behaviour and effects cannot be limited according to IHL, notably the principles of distinction, proportionality and precaution, would be unlawful”.29 Many NGOs and human rights organizations, such as those that are part of the Stop Killer Robots Campaign, find that “fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians”.30 Those supporting the use of AWS31 conclude that a categorical prohibition on AWS is unjustified. States such as the United States of America advocate for the development of a non-binding code of conduct on the matter.
It must be stressed that States approach regulation from an IHL perspective. As seen in the above-mentioned approaches, since the use of fully AWS has emerged in the context of armed conflict, States automatically consider addressing the matter through IHL. However, they overlook the human rights dimension of the use of fully AWS in armed conflict: human rights law continues to apply in times of armed conflict, and AWS can violate human rights law just as they can violate IHL. Thus, human rights-based approaches can contribute to improving and guiding the understanding of the debate surrounding the use and development of fully AWS.
The preferred form of regulation appears to be the two-tier approach.32 Most States support a ban on fully AWS with no meaningful human control while advocating for a set of rules (preferably through an IHL treaty) for SAWS or non-fully AWS. However, States already engaged in the development and research of AWS (most importantly fully AWS) have managed to evade responsibility because there is no consensus on what constitutes a fully AWS. They can sidestep liability by simply adopting a higher threshold for the definition of fully AWS, thereby arguing that their weapons are not fully AWS and hence not banned. The two-tier approach might thus not be effective unless an international agreement is reached on what constitutes a fully AWS.
One of the fundamental challenges of fully AWS is the lack of accountability. As the Belén Communiqué of the Latin American and the Caribbean Conference of Social and Humanitarian Impact of Autonomous Weapons points out, “it is paramount to maintain meaningful human control to prevent further dehumanization of warfare, as well as to ensure individual accountability and state responsibility”.33
According to international law (to which the human rights law regime belongs), the responsibility of a State is engaged when there is an internationally wrongful act that 1) “is attributable to the State” and 2) “constitutes a breach of an international legal obligation”.34 As military personnel are State agents,35 and thus organs of a State according to Article 4 of the Draft Articles on the Responsibility of States for Internationally Wrongful Acts, their actions are to be scrutinized with a view to determining whether the State has violated human rights law. Yet, where fully AWS are involved, holding the State accountable for human rights breaches can be challenging. In fact, it may only be possible to hold the agent accountable for the launch of the AWS, not for its subsequent actions. At the time of the launch, the agent may have assessed the situation and deemed that deploying the AWS would comply with human rights law. Nonetheless, once deployed, the AWS may have acted in contravention of human rights law.
The key concepts in this context are human agency and human judgement. All humans possess human agency, which is the capacity to make decisions and act on them. Human judgement, composed of moral, ethical, and legal building blocks, is a fundamental aspect of that human agency, as it guides humans in exercising their agency in a morally, ethically, and legally appropriate manner. A person cannot be held accountable for something they could not predict or perceive at the time of the action. With fully AWS, there is no meaningful human control beyond the launch of the system; State agents can therefore only be held responsible for the launch itself, the point at which they were exercising their human agency as constrained by their human judgement.
The Special Rapporteur sums up my argument well in the following words:
“Armed conflict and IHL often require human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people's actions, and understanding of values and anticipation of the direction in which events are unfolding. Decisions over life and death in armed conflict may require compassion and intuition. Humans – while they are fallible – at least might possess these qualities, whereas robots definitely do not…[T]hey have limited abilities to make the qualitative assessments that are often called for when dealing with human life. Machine calculations are rendered difficult by some of the contradictions often underlying battlefield choices.”36
The absence of any human involvement is a distinguishing feature of fully AWS. While States may argue that there is human involvement in the various stages of the AWS lifecycle (such as programming, manufacturing, and authorizing use and deployment), the defining feature of fully AWS is the complete absence of meaningful human involvement or control in the final decision to use lethal force. Therefore, by nature, fully AWS make life-and-death decisions without any human input. Herein lies their fundamental flaw. When meaningful human involvement is eliminated from the equation, human agency, and thereby human judgement, is removed as well. The lack of human judgement makes it impossible to hold anyone accountable for the actions of fully AWS. Indeed, the State agent may deny any responsibility for the violation of international law: although it was the agent who launched the weapon, the agent cannot be held accountable for the attack and the ensuing deaths of civilians, for example. The State can thus deny responsibility on the basis that the act of the AWS cannot be attributed to it; after all, the agent could not predict what the AWS would do. Consequently, it is contended that such AWS cannot, under the current legal framework, be effectively regulated.
In conclusion, the complexities surrounding the regulation of the use of lethal force by AI-integrated fully AWS necessitate a more nuanced understanding of both technological capabilities and legal implications. The absence of human agency in the final decision to use lethal force presents significant regulatory challenges. Even if some form of human involvement is present, such involvement materializes at different stages and not in the final decision to use lethal force as such. Thus, it is submitted that AWS carry with them the fundamental flaw that they cannot be regulated by law.
Indeed, traditional frameworks premised on human oversight, and thus on human responsibility, will struggle to address the rapid advancement of such technology. As conflicts become increasingly influenced by technology, the debate surrounding the compliance of fully AWS with human rights law cannot be ignored. Those advocating for stricter regulation will have to grapple with the potential for misuse, accidental engagement and the legal implications of delegating life-and-death decisions to machines. Moreover, the challenge of establishing accountability where AWS operate in a fully autonomous mode raises further questions about State responsibility. To move towards (effective) regulation, it is crucial to explore innovative frameworks that encompass the unique characteristics of AWS.
The author is a doctoral student at the University of the West of England, UK, with a research interest in International Humanitarian Law. Her Ph.D. research is focused on regulating autonomous weapon systems in armed conflicts.
This paper was presented at the 30th Anniversary Conference on Human Rights and Artificial Intelligence: Addressing challenges, enabling rights held on the 7th and 8th of November 2024 in Potsdam, Germany.
Please note that this definition, extracted from the components that constitute an AWS, is drawn from my PhD thesis, which is yet to be published. The components were identified by extensively analysing definitions adopted by States (submitted to the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems), international and regional organizations, NGOs and legal scholars.
Paul Scharre, Army of None: Autonomous Weapons and the Future of War, 2018, p. 12.
UNHRC, General Comment No. 31: Nature of the General Legal Obligation Imposed on States Parties to the Covenant of 26 May 2004, UN Doc. CCPR/C/21/Rev.1/Add.13, para. 11; Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory, Advisory Opinion, ICJ Reports (2004), p. 136, paras. 101 ff. and 127 ff.
Thus, the use of AWS in law enforcement is not covered. As it stands now, AWS technology could be used to attack demonstrators but has not been used as such by any State (though this could potentially occur in the distant future).
International Covenant on Civil and Political Rights of 16 December 1966, UNTS vol. 999, p. 171 (ICCPR), Art. 4.
It must be noted that when different States use the term ‘AWS’, some include both SAWS and fully AWS. Others simply equate AWS with fully AWS.
An interesting point to note is that many who are opposed to the development and use of AWS are, in fact, against ‘fully’ AWS. Thus, when they call for a ban on AWS, what they actually mean is fully AWS. The confusion in the terminology has indeed not helped matters.
Bonnie Docherty, Losing Humanity: The Case against Killer Robots, 2012, p. 4.
See for the various arguments supporting the use of AWS: Christopher P. Toscano, Friend of Humans: An Argument for Developing Autonomous Weapons Systems, in: Journal of National Security Law and Policy 8 (2015), pp. 189-246.
Charter of the United Nations of 26 June 1945, UNTS vol. 1, p. XVI (UN Charter).
Ronald C. Arkin, The Case for Ethical Autonomy in Unmanned Systems, in: Journal of Military Ethics 9 (2010), pp. 332-341.
Toscano (fn. 9), p. 50.
UNHRC, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions of 9 April 2013, UN Doc A/HRC/23/47.
Latin American and the Caribbean Conference of Social and Humanitarian Impact of Autonomous Weapons, The Belén Communiqué of 24 February 2023, reproduced in: United Nations General Assembly, Lethal Autonomous Weapons Systems - Report of the Secretary-General, UN Doc A/79/88 of 1 July 2024, pp. 40-41.
CARICOM, Declaration on Autonomous Weapons Systems, CARICOM Conference: The Human Impacts of Autonomous Weapons, Port of Spain, Trinidad and Tobago, 5–6 September 2023, available at: https://www.caricom-aws2023.com/_files/ugd/b69acc_c1ffb97ed9024930a3205ae4e34c1b45.pdf (last visited 17 December 2025).
UN Charter, preamble; Universal Declaration of Human Rights of 10 December 1948, UN Doc. A/RES/217 A (III), preamble; ICCPR, preamble; International Covenant on Economic, Social and Cultural Rights of 16 December 1966, UNTS vol. 993, p. 3 (ICESCR), preamble.
See also the preamble of the ICCPR.
Bonnie Docherty, Shaking the Foundations: The Human Rights Implications of Killer Robots, 2014, p. 3.
UNHRC (fn. 13), para. 27.
After World War II, human dignity was conceived as a prerequisite for human coexistence and solidarity (see the use of terminology such as “members of the human family” and “brotherhood” in the UDHR) and can thus only be understood as a concept that operates between humans. It carries with it an idea of shared humanity.
ICESCR, Art. 2 para. 2; UDHR, Art. 7.
UDHR, Art. 3.
UNHRC, General Comment No. 36: Article 6 (Right to Life) of 3 September 2019, UN Doc. CCPR/C/GC/36, para. 2.
UNHRC (fn. 13), para. 30.
He uses the term ‘robots’ as a synonym for AWS.
UNHRC (fn. 13), para. 30.
For example, under General Comment No. 36, the use of lethal force in self-defence does not constitute an arbitrary deprivation of life. See UNHRC (fn. 23), para. 10.
See also Art. 5 of the UDHR.
Vincent Boulanin/Netta Goussac/Laura Bruun, Autonomous Weapon Systems and International Humanitarian Law: Identifying Limits and the Required Type and Degree of Human–Machine Interaction, SIPRI of June 2021, available at: https://www.sipri.org/publications/2021/policy-reports/autonomous-weapon-systems-and-international-humanitarian-law-identifying-limits-and-required-type (last visited 20 September 2024).
Docherty (fn. 8).
Toscano (fn. 9); Kenneth Anderson/Matthew C. Waxman, Debating Autonomous Weapon Systems, their Ethics, and their Regulation under International Law, in: Roger Brownsword/Eloise Scotford/Karen Yeung (eds.), The Oxford Handbook of Law, Regulation and Technology, 2017, pp. 1097-1117.
Supported by States such as Austria and China. See United Nations Office for Disarmament Affairs, The United Nations Disarmament Yearbook 2023, Vol. 48 (2024), available at: https://media-publications.unoda.org/documents/full-en-yb-vol-48-2023.pdf (last visited 17 December 2025), p. 124.
Latin American and the Caribbean Conference of Social and Humanitarian Impact of Autonomous Weapons (fn. 14).
See UN General Assembly, Responsibility of States for internationally wrongful acts, UN Doc. A/RES/56/83 of 28 January 2002, Annex, Arts. 1 and 2.
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) of 8 June 1977, UNTS vol. 1125, p. 3, Art. 91.
UNHRC (fn. 13), para. 55.