Human Rights Dimensions of Lethal Artificial Intelligence: Meaningful Human Control as a Legal Necessity

Article Type: Research Article

Authors

1 Assistant Professor, Department of Law, Faculty of Social Sciences and Economics, Alzahra University, Tehran, Iran (First Author and Corresponding Author)

2 PhD in International Law, University of Qom

3 Assistant Professor, Department of Islamic Teachings, Faculty of Theology and Islamic Studies, Payame Noor University, Tehran, Iran

DOI: 10.48308/jlr.2025.240735.2928

Abstract

The rapid growth of autonomous weapon systems and lethal artificial intelligence has challenged the traditional boundaries of responsibility and accountability in the international human rights system. By enabling lethal decision-making without human intervention, these technologies raise serious questions about the effectiveness of principles such as the right to life, inherent human dignity, and the guarantee of a fair trial. Using a descriptive-analytical approach, the present study examines the concept of "meaningful human control" as an emerging principle and a legal necessity in confronting lethal artificial intelligence. The central hypothesis is that meaningful human control, owing to its direct connection with the principles of human dignity, the right to life, and state responsibility, holds binding status in contemporary human rights law. The findings show that although the human rights system has conceptual and normative capacities for confronting lethal artificial intelligence, the absence of a precise definition of human control and the failure to distinguish between levels of oversight, accountability, and transparency weaken the system's enforcement guarantees. Accordingly, the formulation of binding standards on meaningful human control is not merely an ethical consideration but a precondition for fulfilling fundamental human rights obligations.

Research Highlights

  • The study shows that the absence of human control in lethal autonomous systems severely threatens fundamental rights such as the right to life, human dignity, and the right to a fair trial, and disrupts the chain of accountability and redress.
  • The analysis establishes that meaningful human control is a binding legal necessity within the human rights system, and that its observance is indispensable for preserving transparency, accountability, and the fulfillment of states' obligations in lethal decisions.
  • Finally, the article stresses the need for the international community to adopt binding or deterrent standards and mechanisms in the field of lethal artificial intelligence, so that human control moves from an ethical concern to a practical and legal requirement.


Article Title [English]

Human Rights Dimensions of Lethal Artificial Intelligence: Meaningful Human Control as a Legal Necessity in the International Human Rights System

Authors [English]

  • Seyedeh Latifeh Hossein 1
  • Narges Hosseini 2
  • Mohammadmahdi Hosseinmardi 3
1 Assistant Professor, Department of Law, Faculty of Social Sciences and Economics, Alzahra University, Tehran, Iran
2 University lecturer and researcher in international law, Tehran, Iran
3 Assistant Professor, Department of Islamic Teachings, Payame Noor University, Tehran, Iran
Abstract [English]

Introduction
In the past decade, the development of emerging technologies, especially in the field of artificial intelligence (AI), has fundamentally transformed military and security decision-making. One of the most notable manifestations of this transformation is the emergence of lethal autonomous weapon systems (LAWS): systems capable of identifying, selecting, and attacking targets without direct human intervention. Despite their advantages, such as reducing human casualties on the battlefield and increasing operational speed, these technologies pose profound challenges in terms of ethics, accountability, and, most critically, human rights compliance. One of the primary concerns in this domain is the removal of the human element from lethal decision-making and its impact on fundamental human rights. In this context, the concept of "meaningful human control" has gained prominence within the international legal community as a legal imperative to prevent human rights violations and to preserve human dignity.
The central issue of this research is to examine the human rights implications of deploying lethal AI systems in situations where meaningful human control over life-and-death decisions is absent. Specifically, the research aims to demonstrate how the removal or weakening of human oversight can lead to violations of the right to life, human dignity, and other rights guaranteed under the international human rights framework. The study is based on the premise that human control is not merely an ethical or operational requirement but a legal necessity to ensure compliance with human rights norms. Accordingly, the main research question is framed as follows: "What are the consequences of the absence of meaningful human control over decision-making by lethal systems for the protection and guarantee of human rights?" In addition to these concerns, the rapid acceleration of AI capabilities and their integration into security infrastructures have created a regulatory gap that international legal systems have yet to adequately address. Existing human rights instruments were drafted in an era when autonomous decision-making in warfare was unimaginable, resulting in uncertainty regarding how traditional legal norms apply to machines that operate without human judgment or moral reasoning. This gap not only complicates the interpretation of states’ human rights obligations but also raises questions about foreseeability, accountability, and the ability of victims to seek effective remedies. As states increasingly rely on automated systems in high-stakes environments, the urgency of establishing clear legal standards for human oversight becomes even more apparent, reinforcing the necessity of meaningful human control as a foundational safeguard in the international human rights system. The primary innovation of this study lies in presenting a systematic legal framework for analyzing the necessity of human control in relation to each fundamental human right.
Previous literature has largely addressed human control from ethical or military standpoints. This research, for the first time, provides a detailed legal explanation of the connection between the lack of human control and specific threats to human rights, grounded in existing legal rules and documents. It also offers a critical analysis of gaps in the literature concerning legal accountability in the absence of oversight and the challenges of remedying harm to victims. Moreover, the study underscores that the legitimacy of any future regulatory framework depends on states’ willingness to integrate transparency, oversight, and ethical review mechanisms into the development cycle of lethal AI systems.
Methods
This research adopts a descriptive-analytical method with a qualitative approach. Data is collected through library-based research, analysis of international legal instruments such as the International Covenant on Civil and Political Rights (ICCPR), interpretations by human rights bodies like the Human Rights Committee, and expert reports including those of the Group of Governmental Experts under the Convention on Certain Conventional Weapons (CCW). The study also engages in comparative analysis of international practices and national positions regarding the necessity of human control, aiming to provide a more accurate picture of existing challenges and potential legal solutions.
Results and Discussion
The main findings of this study are: a) The removal or weakening of the human element in lethal decision-making challenges the principle of human dignity, as dignity requires that any decision to take life be made within the framework of human understanding of context, motive, and consequences. b) In the absence of human control, guaranteeing the right to life based on the principles of necessity and proportionality becomes impossible or severely constrained, as algorithms lack the capacity for human-like assessment of threats, necessity of force, and proportionality. c) Legal accountability—whether civil, criminal, or international—faces a vacuum in the absence of human control, since existing legal systems attribute responsibility to human agents, not to machines or algorithms. d) Fundamental human rights such as the prohibition of discrimination, the right to a fair trial, and the right to access justice are also significantly threatened by autonomous decision-making, as such systems often lack transparency, accountability, and mechanisms for review.
Conclusion
Lethal artificial intelligence represents one of the most complex challenges of the 21st century in the fields of technology and law, necessitating a rethinking of classical human rights concepts. The findings of this study demonstrate that meaningful human control is not only a tool for ensuring the moral legitimacy of lethal decisions but also a legal imperative to safeguard fundamental rights such as the right to life, human dignity, and accountability. The absence of such control disrupts the chain of accountability, increases the risk of human rights violations, and undermines the rule of law. Therefore, the international community must take steps toward adopting binding international norms that mandate effective human control over lethal systems. Only through such action can the coexistence of emerging technologies and human rights in a just and humane world be secured. The findings of this research have significant implications for international policymaking, technological regulation, and the enhancement of human rights protection mechanisms. From a policymaking perspective, the study emphasizes the need for binding international regulations to ensure the preservation of human control in the design, development, and deployment of lethal systems. These findings can serve as a foundation for treaty negotiations at the United Nations or for reinforcing the Human Rights Council’s agenda on AI. Legally, the findings may be invoked in international litigation or in defense of victims of human rights violations caused by autonomous actions. Furthermore, the analysis can assist states currently formulating national AI policies in aligning their frameworks with human rights obligations.

Keywords [English]

  • Human Rights
  • Meaningful Human Control
  • Lethal Artificial Intelligence
  • Accountability
  • Legal Necessity
    References

    Books

    1. Al-Kajbaf, Hossein and Hassan Karimi Mahabadi, The Impact of Artificial Intelligence on Human Rights Norms, Tehran: Jungle, 2024. (in Persian)
    2. Anderson, Kenneth and Matthew Waxman. Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can, Stanford University: Hoover Institution, 2013.
    3. Boulanin, Vincent, Laura Bruun and Netta Goussac. Autonomous Weapon Systems and International Humanitarian Law, Stockholm: Stockholm International Peace Research Institute (SIPRI), 2021.
    4. Farzaneh, Yusef, The Humanization of International Law with an Emphasis on the Responsibility to Protect, Tehran: Mizan, 2015. (in Persian)
    5. Ghāri Sayyed Fatemi, Sayyed Mohammad, Human Rights in the Contemporary World, Volume 1, Tehran: Shahid Beheshti University, 2003. (in Persian)
    6. Quintavalla, Alberto and Jeroen Temperman (eds.). Artificial Intelligence and Human Rights, Oxford University Press, 2023.
    7. Scharre, Paul. Army of None: Autonomous Weapons and the Future of War, New York: W. W. Norton & Company, Second Edition, 2020.

    Articles

    1. Arkin, Ronald C. “The Case for Ethical Autonomy in Unmanned Systems”, Journal of Military Ethics, Volume 9, Issue 4, 2010, pp. 332–341. Doi: 10.1080/15027570.2010.536402
    2. Asaro, P. “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making”, International Review of the Red Cross, Volume 94, Issue 886, 2012, pp. 687–709.
    3. Blanchard, Alexander and Mariarosaria Taddeo. “Jus in bello Necessity, The Requirement of Minimal Force, and Autonomous Weapons Systems”, Journal of Military Ethics, Volume 21, 2022. Doi: 10.1080/15027570.2022.2157952
    4. Heyns, Christof. “Autonomous Weapons in Armed Conflict and the Right to a Dignified Life: An African Perspective”, South African Journal on Human Rights, Volume 33, Issue 1, 2017, pp. 46–71. Doi: 10.1080/02587203.2017.1303903
    5. Hosseini, Sayyed Amirali and Sayyed Alireza Hashemi-Zadeh, “Artificial Intelligence and International Peace and Security”, International Relations Research Quarterly, Volume 13, Issue 49, 2024, pp. 325–345. Doi: 10.22034/irr.2024.427524.2467 (in Persian)
    6. Ja’fari, Afshin, “Sovereignty over Cyberspace from the Perspective of International Law and the Legal System of the Islamic Republic of Iran”, The Islamic Revolution Approach Quarterly, Volume 13, Issue 49, 2019, pp. 109–132. (in Persian)
    7. Karimi, Yashar, Mohammadreza Sharifzadeh and Mehrdad Rayani Makhsoos, “The Relation of Passion in Kierkegaard’s Philosophy and Human Life in the Age of Transhumanism and Artificial Intelligence”, Metaphysics Journal, Volume 14, Issue 34, 2022, pp. 99–114. Doi: 10.22108/mph.2022.133500.1414 (in Persian)
    8. Mahmoudi, Amir Reza and Maryam Bahr Kazemi, “Artificial Intelligence and Its Impact on International Politics”, Political Strategy Quarterly, Volume 8, Issue 2, 2024, pp. 237–256. (in Persian)
    9. Masoudi Lamraski, Ali, “Preliminary Remarks on Lethal Autonomous Weapon Systems from an IHL Perspective”, Asia Pacific Journal of International Humanitarian Law, Volume 2, Issue 1, ICRC, 2021, pp. 8–30.
    10. Moshrefian, Mohammadreza, “Translation of the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, Vilnius, 5 September 2024”, International Legal Journal, Volume 42, Issue 77, 2025, pp. 137–151. (in Persian)
    11. Mostafavi Ardebili, Sayyed Mohammad Mahdi, Mostafa Taghizadeh Ansari and Samaneh Rahmati-Far, “The Impact of Artificial Intelligence on the International Human Rights System”, Journal of New Technologies Law, Volume 4, Issue 8, 2023, pp. 85–100. http://doi.org/10.22133/ijtcs.2022.145651 (in Persian)
    12. Santoni de Sio, Filippo and Jeroen van den Hoven. “Meaningful human control over autonomous systems: A philosophical account”, Frontiers in Robotics and AI, Volume 5, Issue 15, 2018. Doi:10.3389/frobt.2018.00015
    13. Sharkey, Amanda. “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology, Volume 21, Issue 2, 2019, pp. 75–87.
    14. Sparrow, R. “Killer Robots”, Journal of Applied Philosophy, Volume 24, Issue 1, 2007.

    Documents and Cases

    1. Amnesty International. Autonomous Weapons Systems: Five Key Human Rights Issues for Consideration. London: Amnesty International Publications, May 2015, ACT 30/1401/2015. Available at: https://www.amnesty.org/en/wp-content/uploads/2023/05/ACT3014012015ENGLISH.pdf/
    2. Amnesty International. This Is Real Life, Not Science Fiction: Why We Need a Treaty to Stop Killer Robots. Amnesty International USA, 2 November 2021. Retrieved from https://www.amnestyusa.org/blog/this-is-real-life-not-science-fiction-why-we-need-a-treaty-to-stop-killer-robots
    3. Amnesty International. This Is Real Life, Not Science Fiction: Why We Need a Treaty to Stop Killer Robots. Amnesty International USA, 9 May 2025. Retrieved from https://www.amnestyusa.org/blog/this-is-real-life-not-science-fiction-why-we-need-a-treaty-to-stop-killer-robots/
    4. Campaign to Stop Killer Robots. Key Elements of Meaningful Human Control. Geneva, 2021.
    5. Docherty, Bonnie. “Lethal Autonomous Weapons and the Accountability Gap”, International Review of the Red Cross, Volume 102, Issue 914, 2020, pp. 507–534.
    6. Ekelhof, Merel. “Autonomous Weapons: Operationalizing Meaningful Human Control.” Humanitarian Law & Policy Blog (ICRC), 15 August 2018. Available at: https://blogs.icrc.org/law-and-policy/2018/08/15/autonomous-weapons-operationalizing-meaningful-human-control/
    7. Gaeta, Paola. Autonomous Weapon Systems and the Alleged Responsibility Gap. In Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons, expert meeting, Versoix, Switzerland, 15–16 March 2016. Report. International Committee of the Red Cross.
    8. Human Rights Committee (United Nations). General Comment No. 36 on Article 6 of the ICCPR: The Right to Life. UN Doc. CCPR/C/GC/36, 30 October 2018.
    9. Human Rights Watch & International Human Rights Clinic (Harvard Law School). Heed the Call: A Moral and Legal Imperative to Ban Killer Robots. New York: HRW, 2020.
    10. Human Rights Watch & International Human Rights Clinic (Harvard Law School). Killer Robots and the Concept of Meaningful Human Control. 11 April 2016. Available at: https://www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control
    11. Human Rights Watch & International Human Rights Clinic (Harvard Law School). A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making. New York: HRW, 28 April 2025. Available at: https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making/
    12. Human Rights Watch & International Human Rights Clinic (Harvard Law School). Mind the Gap: The Lack of Accountability for Killer Robots. New York: HRW, 9 April 2015. Available at: https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots/
    13. Human Rights Watch & International Human Rights Clinic (Harvard Law School). Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban. November 2016. Available at: https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban/
    14. Human Rights Watch. Losing Humanity: The Case Against Killer Robots. New York: HRW, 2012. Available at: https://searchworks.stanford.edu/view/9943181/
    15. Human Rights Watch. Statement on Meaningful Human Control, CCW Meeting on LAWS, 11 April 2018. Available at: https://www.hrw.org/news/2018/04/11/statement-meaningful-human-control-ccw-meeting-lethal-autonomous-weapons-systems/
    16. International Committee of the Red Cross (ICRC). Views of the ICRC on Autonomous Weapon Systems. Statement to CCW Meeting of Experts on LAWS, Geneva, April 2015.
    17. International Covenant on Civil and Political Rights (ICCPR). UN General Assembly Resolution 2200A (XXI), 16 December 1966, Article 6.
    18. Report on Lethal Autonomous Weapons Systems. A/HRC/48/22, 2021.
    19. United Nations Human Rights Council. Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions (Christof Heyns). UN Doc. A/HRC/23/47, 9 April 2013. Available at: https://undocs.org/A/HRC/23/47/
    20. United Nations Office for Disarmament Affairs (UNODA). Autonomous Weapons Systems and International Law. No. 26, 2020.
    21. United Nations Office for Disarmament Affairs (UNODA). Background on Lethal Autonomous Weapons Systems (LAWS): Convention on Certain Conventional Weapons (CCW). Geneva: UNODA, 2022.