Article type: Research article
Authors
1 Assistant Professor, Department of Law, Faculty of Social Sciences and Economics, Alzahra University, Tehran, Iran (first and corresponding author)
2 PhD in International Law, University of Qom
3 Assistant Professor, Department of Islamic Studies, Faculty of Theology and Islamic Studies, Payame Noor University, Tehran, Iran
Introduction
In the past decade, the development of emerging technologies, especially in the field of artificial intelligence (AI), has fundamentally transformed military and security decision-making. One of the most notable manifestations of this transformation is the emergence of lethal autonomous weapon systems (LAWS): systems capable of identifying, selecting, and attacking targets without direct human intervention. Despite their advantages, such as reducing human casualties on the battlefield and increasing operational speed, these technologies pose profound challenges in terms of ethics, accountability, and, most critically, human rights compliance. One of the primary concerns in this domain is the removal of the human element from lethal decision-making and its impact on fundamental human rights. In this context, the concept of "meaningful human control" has gained prominence within the international legal community as a legal imperative to prevent human rights violations and to preserve human dignity.
The central issue of this research is to examine the human rights implications of deploying lethal AI systems in situations where meaningful human control over life-and-death decisions is absent. Specifically, the research aims to demonstrate how the removal or weakening of human oversight can lead to violations of the right to life, human dignity, and other rights guaranteed under the international human rights framework. The study is based on the premise that human control is not merely an ethical or operational requirement but a legal necessity to ensure compliance with human rights norms. Accordingly, the main research question is framed as follows: "What are the consequences of the absence of meaningful human control over decision-making by lethal systems for the protection and guarantee of human rights?"

In addition to these concerns, the rapid acceleration of AI capabilities and their integration into security infrastructures have created a regulatory gap that international legal systems have yet to adequately address. Existing human rights instruments were drafted in an era when autonomous decision-making in warfare was unimaginable, resulting in uncertainty regarding how traditional legal norms apply to machines that operate without human judgment or moral reasoning. This gap not only complicates the interpretation of states' human rights obligations but also raises questions about foreseeability, accountability, and the ability of victims to seek effective remedies. As states increasingly rely on automated systems in high-stakes environments, the urgency of establishing clear legal standards for human oversight becomes even more apparent, reinforcing the necessity of meaningful human control as a foundational safeguard in the international human rights system. The primary innovation of this study lies in presenting a systematic legal framework for analyzing the necessity of human control in relation to each fundamental human right.
Previous literature has largely addressed human control from ethical or military standpoints. This research, for the first time, provides a detailed legal explanation of the connection between the lack of human control and specific threats to human rights, grounded in existing legal rules and documents. It also offers a critical analysis of gaps in the literature concerning legal accountability in the absence of oversight and the challenges of remedying harm to victims. Moreover, the study underscores that the legitimacy of any future regulatory framework depends on states’ willingness to integrate transparency, oversight, and ethical review mechanisms into the development cycle of lethal AI systems.
Methods
This research adopts a descriptive-analytical method with a qualitative approach. Data is collected through library-based research, analysis of international legal instruments such as the International Covenant on Civil and Political Rights (ICCPR), interpretations by human rights bodies like the Human Rights Committee, and expert reports including those of the Group of Governmental Experts under the Convention on Certain Conventional Weapons (CCW). The study also engages in comparative analysis of international practices and national positions regarding the necessity of human control, aiming to provide a more accurate picture of existing challenges and potential legal solutions.
Results and Discussion
The main findings of this study are:

a) The removal or weakening of the human element in lethal decision-making challenges the principle of human dignity, as dignity requires that any decision to take life be made within the framework of human understanding of context, motive, and consequences.

b) In the absence of human control, guaranteeing the right to life based on the principles of necessity and proportionality becomes impossible or severely constrained, as algorithms lack the capacity for human-like assessment of threats, the necessity of force, and proportionality.

c) Legal accountability, whether civil, criminal, or international, faces a vacuum in the absence of human control, since existing legal systems attribute responsibility to human agents, not to machines or algorithms.

d) Fundamental human rights such as the prohibition of discrimination, the right to a fair trial, and the right to access justice are also significantly threatened by autonomous decision-making, as such systems often lack transparency, accountability, and mechanisms for review.
Conclusion
Lethal artificial intelligence represents one of the most complex challenges of the 21st century in the fields of technology and law, necessitating a rethinking of classical human rights concepts. The findings of this study demonstrate that meaningful human control is not only a tool for ensuring the moral legitimacy of lethal decisions but also a legal imperative to safeguard fundamental rights such as the right to life, human dignity, and accountability. The absence of such control disrupts the chain of accountability, increases the risk of human rights violations, and undermines the rule of law. Therefore, the international community must take steps toward adopting binding international norms that mandate effective human control over lethal systems. Only through such action can the coexistence of emerging technologies and human rights in a just and humane world be secured.

The findings of this research have significant implications for international policymaking, technological regulation, and the enhancement of human rights protection mechanisms. From a policymaking perspective, the study emphasizes the need for binding international regulations to ensure the preservation of human control in the design, development, and deployment of lethal systems. These findings can serve as a foundation for treaty negotiations at the United Nations or for reinforcing the Human Rights Council's agenda on AI. Legally, the findings may be invoked in international litigation or in defense of victims of human rights violations caused by autonomous actions. Furthermore, the analysis can assist states currently formulating national AI policies in aligning their frameworks with human rights obligations.