Thesis Information

Chinese Title:

 Research on Ethical Risk Prevention Strategies for Artificial Agents (人工智能体伦理风险防范策略研究)

Name:

 Zheng Chunlin (郑春林)

Confidentiality Level:

 Public

Thesis Language:

 Chinese (chi)

Discipline Code:

 010105

Discipline:

 Ethics

Student Type:

 Doctoral

Degree:

 Doctor of Philosophy

Degree Type:

 Academic degree

Degree Year:

 2024

Campus:

 Beijing Campus

School:

 School of Philosophy

Research Direction:

 Ethics of Artificial Intelligence

Primary Supervisor:

 Tian Haiping (田海平)

Supervisor's Affiliation:

 School of Philosophy

Submission Date:

 2024-01-10

Defense Date:

 2023-12-12

English Title:

 RESEARCH ON ETHICAL RISK PREVENTION STRATEGIES FOR ARTIFICIAL INTELLIGENCE AGENT    

Chinese Keywords:

 Artificial agents ; Ethical risk ; Moral subject ; Moral decision-making ; Moral responsibility (人工智能体；伦理风险；道德主体；道德决策；道德责任)

English Keywords:

 Artificial agent ; Ethical risk ; Moral subject ; Moral decision-making ; Moral responsibility    

Chinese Abstract (translated):

The development of artificial intelligence has achieved major breakthroughs through algorithm optimization, massive data input, and increased computing power, but it has also given rise to ethical risks such as privacy violations, algorithmic bias, black-box decision-making, and job displacement. In particular, recent applications such as generative pre-trained large language models (e.g., ChatGPT) have allowed people to perceive directly the objectified existence of artificial agents and their influence on human action, raising unavoidable ethical challenges. Strengthening risk assessment and the formulation of prevention strategies is therefore an important task for ensuring that artificial intelligence develops in a safe, reliable, and controllable direction, as well as a necessary measure for safeguarding public interests and national security. Overall, although strategies for preventing the ethical risks of artificial agents have been formulated in current practice, no complete framework has yet taken shape. It must be noted that the mainstream international risk prevention strategies focus mainly on practical problems that may arise from the development of AI technology, such as privacy and security issues. These strategies generally lack recognition of the quasi-subject status of artificial agents. As a result, the strategies themselves lack structural stability; the parties involved in their formulation have divergent demands, and the provisions are complex, making coordinated implementation difficult.

These shortcomings stem in part from a theoretical framework insufficient to support strategy formulation. This dissertation argues that, when discussing strategies for preventing the ethical risks of artificial agents, we should first re-examine and redefine the core goals of such strategies and the problem of recognizing artificial agents as joint actors. Furthermore, this dissertation aims to bring artificial agents into a discussion among multiple subjects and, from the perspective of "intersubjectivity", to support the innovation and integrity of ethical risk prevention strategies. This shift in perspective first requires that we avoid treating artificial agents simply as mere tools or systems, and instead find an appropriate position for them within complex ethical relationships. By including artificial agents in joint action and regarding them as a kind of "quasi-subject", we can further confirm their moral responsibility and decision-making capacity in a practical sense. This not only provides a more complete and reasonable ethical foundation for risk prevention strategies, but also offers a defense of the ethical rationality and practical feasibility of artificial agents.

This dissertation first outlines the current state of development of artificial agents and the accompanying ethical challenges, focusing on their development from an ethical perspective and laying the groundwork for subsequent chapters by describing their internal representations and environments of existence. Chapter 2 centers on the ethical risks of artificial agents, framing the problem as a whole by discussing the types, sources, and nature of those risks. Chapter 3 argues for the necessity of granting artificial agents the moral status of "quasi-subjects", preparing for their inclusion in ethical discussion; it interprets this status through the moral relations of "intersubjectivity" and establishes a thinking framework through the metaphor of the master-slave dialectic and recognition. Chapter 4 presents the challenges artificial agents face in moral decision-making through the nature of moral decision-making, decision-making mechanisms, and decision-risk assessment, thereby explaining moral decision-making as a cause of the ethical risks of artificial agents. Chapter 5 focuses on explicating the forms of moral responsibility of artificial agents in the ethical domain, covering their moral responsibility toward humans and toward their own kind; through a discussion of "suffering", it conceptually extends the motivations of moral responsibility and the influence of virtual and real environments upon it. Finally, Chapter 6 draws on the moral Turing test for artificial agents to show intuitively how they might enter into ethical relationships, and, through a discussion of approaches to ethical risk prevention strategies from the perspective of normative ethics, proposes a practical path from actor action to joint action.

Overall, this study attempts to explore the ethical issues of artificial intelligence from multiple dimensions, covering moral status, moral responsibility, and moral decision-making models, and seeks to build a more comprehensive perspective between theory and practice. First, in terms of application domains, it attempts to provide a more integrated ethical analysis by examining the performance of artificial intelligence in multiple scenarios. Second, regarding the structural framework, this study proposes a preliminary "theory-principle-subject-action" model for ethical risk prevention strategies. The model spans basic theory to the application level, including ethical principles, value orientations, and action-guiding strategies applicable to the internal technical logic of artificial agents. Third, in terms of combining theory and practice, this study attempts to provide a systematic method through the mutual support of "theoretical-methodological research" and "practical-problem research". Taking "suffering" as one motivation for actor action, this dissertation shifts the mechanism of risk prevention strategies from a division of responsibility between designers and users toward recognizing the legitimacy of the action of artificial agents as quasi-subjects, moving the focus from action to the actor (quasi-subject), and then from actor action to joint action, in the hope of providing a preliminary, accessible, and more focused path of thought for risk prevention strategy mechanisms. In sum, this research hopes to bring together, to some extent, the strengths of existing research and to offer a preliminary analytical framework for the ethical issues of artificial intelligence.

English Abstract:

The development of artificial intelligence technology has made major breakthroughs through algorithm optimization, massive data input, and increased computing power. However, it also brings ethical risks such as privacy invasion, algorithmic bias, black-box decision-making, and job displacement. In particular, recent applications such as generative pre-trained large language models (e.g., ChatGPT) allow people to perceive intuitively the objectified existence of artificial agents and their impact on human action, triggering unavoidable ethical challenges. Strengthening risk analysis and the formulation of prevention strategies is an important task for ensuring the safe, reliable, and controllable development of artificial intelligence, as well as a necessary measure to protect people's interests and national security. Overall, although strategies for preventing the ethical risks of artificial agents have been formulated in current practice, a complete framework has not yet been formed. It must be pointed out that current mainstream international risk prevention strategies focus mainly on practical problems that may be caused by the development of AI technology, such as privacy and security issues. These strategies generally lack recognition of the quasi-subject status of artificial agents. As a result, the structure of the strategies themselves lacks stability; the parties involved in their formulation have divergent demands, and the provisions are complex, making them difficult to coordinate and implement.

These shortcomings are partly caused by a theoretical framework insufficient to support strategy formulation. This dissertation argues that, when discussing ethical risk prevention strategies for artificial agents, we should first re-examine and redefine the core goals of such strategies and the problem of recognizing artificial agents as joint actors. Furthermore, this dissertation aims to bring artificial agents into the discussion of ethical relationships among multiple subjects, based on the perspective of "intersubjectivity", to support the innovation and integrity of ethical risk prevention strategies. This change of perspective first requires that we avoid viewing artificial agents simply as mere tools or systems, and instead find a suitable position for them within complex ethical relationships. By including artificial agents in joint action and treating them as a kind of "quasi-subject", their moral responsibility and decision-making capacity in a practical sense can be further confirmed. This not only provides a more complete and reasonable ethical foundation for risk prevention strategies, but also offers a defense of the ethical rationality and practical feasibility of artificial agents.

This dissertation first outlines the current state of development of artificial agents and the accompanying ethical challenges, focusing on their development from an ethical perspective and laying the groundwork for subsequent chapters by elaborating on their internal representations and environments of existence. Chapter 2 centers on the ethical risks of artificial agents, framing the problem as a whole by discussing the types, sources, and nature of those risks. Chapter 3 argues for the necessity of granting artificial agents the moral status of "quasi-subjects", preparing for their inclusion in ethical discussion; it interprets this status through the moral relations of "intersubjectivity" and establishes a thinking framework through the metaphor of the master-slave dialectic and recognition. Chapter 4 describes the challenges artificial agents face in moral decision-making through the nature of moral decision-making, decision-making mechanisms, and decision-risk assessment, thereby explaining moral decision-making as a cause of the ethical risks of artificial agents. Chapter 5 focuses on explicating the forms of moral responsibility of artificial agents in the ethical domain, covering their moral responsibility toward humans and toward their own kind; through a discussion of "suffering", it conceptually extends the motivations of moral responsibility and the influence of virtual and real environments upon it. Finally, Chapter 6 draws on the moral Turing test for artificial agents to show intuitively how they might enter into ethical relationships, and, through a discussion of approaches to ethical risk prevention strategies from the perspective of normative ethics, proposes a practical path from actor action to joint action.

Overall, this study attempts to explore the ethical issues of artificial intelligence from multiple dimensions, covering moral status, moral responsibility, and moral decision-making models, in an attempt to build a more comprehensive perspective between theory and practice. First, in terms of application domains, it attempts to provide a more comprehensive ethical analysis by examining the performance of artificial intelligence in multiple scenarios. Second, regarding the structural framework, this study proposes a preliminary "theory-principle-subject-action" model for preventing ethical risks. The model spans basic theory to the application level, including ethical principles, value orientations, and action-guiding strategies applicable to the internal technical logic of artificial agents. Third, in terms of combining theory and practice, this study attempts to provide a systematic method through the mutual support of "theoretical-methodological research" and "practical-problem research". In sum, this study hopes to bring together, to some extent, the merits of existing research and to provide a preliminary analytical framework for the ethical issues of artificial intelligence.

Total References:

 175    

Library Location:

 Dissertation Reading Area, Library (Main Building South Wing, 3rd Floor, Zones B and C)

Call Number:

 博010105/24003    

Open Access Date:

 2025-01-09    

