Journal of Philosophical Investigations

Article Type: Original Research Article

On the Relation of Emotion and Moral Capacity in Artificial Intelligence Technologies

Author

  • Zahra Zargar

Assistant Professor, Institute for Science and Technology Studies, Shahid Beheshti University, Tehran, Iran

Abstract

Compared with earlier artifacts, artificial intelligence technologies possess novel capabilities that raise distinctive ethical issues. Among these is the debate over the moral agency of AI. The problem of artificial moral agency raises a range of theoretical and practical questions: What are the necessary and sufficient conditions for moral agency? How can we examine whether artifacts satisfy those conditions? What levels and degrees of moral agency are possible, and what kinds of tasks may appropriately be delegated at each level? One recurring topic in the debate over artificial moral agency is emotion: various philosophers have pointed to emotions as a factor bearing on whether, and to what degree, artifacts have moral capacity. This paper addresses the relation between emotions and the moral capacity of artificial intelligence. The central question is whether emotions play a positive or a negative role in enhancing the moral capacity of AI technologies. Four arguments are extracted and articulated on behalf of proponents of a positive role for emotions in moral capacity: the arguments from moral sensitivity, bounded rationality, risk assessment, and culpability. Four arguments are then presented on behalf of opponents of a positive role for emotions in AI moral capacity: the arguments from emotional hijacking, deceptive emotions, the paradox of simultaneous anthropomorphism and dehumanization, and moral deskilling. Finally, by clarifying the points of contention between the two camps, these views are analyzed and the ethical challenges of implementing emotions in AI are discussed.


Keywords

  • emotions
  • emotions and morality
  • artificial moral agency
  • artificial intelligence ethics
  • philosophy of artificial intelligence

References

Behdadi, D. & Munthe, C. (2020). A Normative Approach to Artificial Moral Agency. Minds and Machines, 30(2), 195-218. https://doi.org/10.1007/s11023-020-09525-8
Brey, P. (2014). From Moral Agents to Moral Factors: The Structural Ethics Approach. In P. Kroes & P. P. Verbeek (Eds.), The Moral Status of Technical Artefacts. Springer.
Bringsjord, S. (2007). Ethical Robots: The Future Can Heed Us. AI and Society, 22(4), 539-550. https://doi.org/10.1007/s00146-007-0090-9
Butkus, M. A. (2020). The Human Side of Artificial Intelligence. Science and Engineering Ethics, 26(5), 2427-2437. https://doi.org/10.1007/s11948-020-00239-9
Cappuccio, M.; Peeters, A. & McDonald, W. (2019). Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Philosophy & Technology, 33(1), 9-31. https://doi.org/10.1007/s13347-019-0341-y
Coeckelbergh, M. (2007). Violent Computer Games, Empathy, and Cosmopolitanism. Ethics and Information Technology, 9(3), 219-231. https://doi.org/10.1007/s10676-007-9145-3
Farisco, M.; Evers, K. & Salles, A. (2020). Towards Establishing Criteria for the Ethical Analysis of Artificial Intelligence. Science and Engineering Ethics, 26(5), 2413-2425. https://doi.org/10.1007/s11948-020-00238-w
Floridi, L. & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349-379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Franklin, S. & Graesser, A. (1996). Is it an Agent, or just a Program? A Taxonomy for Autonomous Agents. Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. Springer-Verlag.
Govindarajulu, N. S.; Bringsjord, S.; Ghosh, R. & Vasanth, S. (2019). Toward the Engineering of Virtuous Machines. In AAAI/ACM Conference on AI, Ethics, and Society (AIES '19), January 27-28, 2019, Honolulu.
Himma, K. E. (2009). Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to Be a Moral Agent? Ethics and Information Technology, 11(1), 19-29. https://doi.org/10.1007/s10676-008-9167-5
Johnson, D. G. & Noorman, M. (2014). Artefactual Agency and Artefactual Moral Agency. In P. Kroes & P. P. Verbeek (Eds.), The Moral Status of Technical Artefacts. Springer.
Moor, J. H. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), 18-21. https://doi.org/10.1109/MIS.2006.80
Nyholm, S. (2019). Other Minds, Other Intelligences: The Problem of Attributing Agency to Machines. Cambridge Quarterly of Healthcare Ethics, 28(4), 592-598. https://doi.org/10.1017/S0963180119000537
Ortony, A.; Clore, G. & Collins, A. (2022). The Cognitive Structure of Emotions. Cambridge University Press.
Pitt, J. (2014). Guns Don't Kill, People Kill: Values in and/or Around Technologies. In P. Kroes & P. P. Verbeek (Eds.), The Moral Status of Technical Artefacts. Springer.
Powers, T. M. (2013). On the Moral Agency of Computers. Topoi, 32(2), 227-236. https://doi.org/10.1007/s11245-012-9149-4
Prinz, J. (2004). Gut Reactions: A Perceptual Theory of Emotion. Oxford University Press.
Roeser, S. (2009). The Relation between Cognition and Affect in Moral Judgments about Risks. In L. Asveld & S. Roeser (Eds.), The Ethics of Technological Risk. Earthscan.
Roeser, S. (2010). Emotional Reflection about Risks. In S. Roeser (Ed.), Emotions and Risky Technologies. Springer.
Roeser, S. (2012). Emotional Engineers: Toward Morally Responsible Design. Science and Engineering Ethics, 18(1), 103-115. https://doi.org/10.1007/s11948-010-9236-0
Scheutz, M. & Crowell, C. (2007). The Burden of Embodied Autonomy: Some Reflections on the Social and Ethical Implications of Autonomous Robots. Paper presented at the Workshop on Roboethics at the International Conference on Robotics and Automation, Rome.
Simon, H. A. (1967). Motivation and Emotional Controls of Cognition. Psychological Review, 74(1), 29-39. https://doi.org/10.1037/h0024127
Sparrow, R. (2016). Kicking a Robot Dog. ACM/IEEE 11th International Conference on Human-Robot Interaction, 229.
Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in the Age of Machine Thinking. Oxford University Press.
Verbeek, P. P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. University of Chicago Press.
Véliz, C. (2021). Moral Zombies: Why Algorithms Are Not Moral Agents. AI and Society, 36(2), 487-497. https://doi.org/10.1007/s00146-021-01189-x
Winner, L. (1980). Do Artefacts Have Politics? Daedalus, 109(1), 121-136.
Wong, P. (2019). Rituals and Machines: A Confucian Response to Technology-Driven Moral Deskilling. Philosophies, 4(4), 59. https://doi.org/10.3390/philosophies4040059