Chinese title: | 深度学习认识论中的不透明性问题研究 (Research on Opacity in Deep Learning's Epistemology) |
Name: | |
Confidentiality level: | Public |
Thesis language: | Chinese |
Discipline code: | 0101Z1 |
Discipline: | |
Student type: | Doctoral |
Degree: | Doctor of Philosophy |
Degree type: | |
Degree year: | 2022 |
Campus: | |
School: | |
Research direction: | Philosophy of Science and Technology |
First supervisor: | |
First supervisor's institution: | |
Submission date: | 2022-01-19 |
Defense date: | 2021-12-21 |
English title: | RESEARCH ON OPACITY IN DEEP LEARNING’S EPISTEMOLOGY |
Chinese keywords: | |
English keywords: | Deep learning ; Epistemology ; Opacity ; Interpretability ; Dark knowledge |
Chinese abstract: |
Since its birth, artificial intelligence has encountered setbacks in the course of its flourishing development, yet it has never stopped advancing, and has thereby achieved a series of remarkable research outcomes. In its continuous development, artificial intelligence has formed three dominant schools: symbolism, behaviorism, and connectionism. The deep learning discussed in this dissertation belongs to the artificial intelligence models proposed by the connectionist school, whose key idea is to simulate the human brain by building artificial neural networks so that machines may possess human intelligence. More importantly, this dissertation studies the problem of opacity in deep learning's epistemology, and it is divided into three parts:

Chapter 1 is the introduction, which mainly discusses three aspects: first, a brief review of early thought on artificial intelligence up to the formal coining of the term "artificial intelligence"; second, a concise summary of the three mainstream schools of artificial intelligence, with emphasis on deep learning within the connectionist school; third, the introduction of the problem this dissertation investigates, namely opacity in deep learning's epistemology.

Chapter 2 analyzes the concept of epistemic opacity. First, before clarifying the meaning of epistemic opacity, we need to understand what epistemic transparency means. Second, the philosophy of computational science has long been filled with fierce debate between computationalism and anti-computationalism: anti-computationalism criticizes computationalism's reductionist worldview, according to which everything in the world arises from algorithms. In other words, computationalism's understanding of the world is mixed with opacity, and discussion of this epistemic opacity can be traced back to the study of the process of computer simulation. Finally, the chapter elaborates on and evaluates the epistemic opacity in the process of computer simulation.

Chapters 3-6 form the core of the dissertation and examine the problem of opacity in deep learning's epistemology in detail. Epistemic opacity is no longer confined to the process of computer simulation; it has entered the field of deep learning. Through the study of typical cases, we find that in explaining the failure of deep learning models, humans have not figured out the causal relations hidden behind adversarial examples, which indirectly reflects humans' opaque understanding of model failure. Next, the specific manifestations of epistemic opacity in deep learning are set out along three dimensions: the data used to train deep learning models, the algorithms, and the entire agent built on deep learning algorithms. The dissertation then discusses the interpretability problem closely related to opacity in deep learning's epistemology, analyzing current research on deep learning interpretability from outside the model, inside the model, and the model itself. At the same time, if humans are to realize the interpretability of deep learning completely, they must also face the difficulties posed by the disputes between correlation and causation and between reductionism and holism. In this situation, epistemic opacity will remain in deep learning for a long time to come, and humans cannot achieve the goal of making deep learning fully interpretable. It is precisely because of deep learning's uninterpretability that dark knowledge finally emerges. Dark knowledge is a new type of knowledge produced with the machine as the epistemic subject, and it differs greatly from traditional knowledge. Moreover, with the improvement of computing power, the surge in data, and continuous breakthroughs in algorithms, dark knowledge has gradually risen and shows an explosive growth trend. Although dark knowledge is difficult for humans to understand, it is still rational, and we discuss its rationality from both theoretical and practical perspectives. On the whole, dark knowledge poses a great challenge to traditional epistemology, and its emergence is of extraordinary significance. |
English abstract: |
Since the term "artificial intelligence" was coined, the field has encountered ups and downs in the course of its flourishing development, yet it has never stopped advancing and has obtained a series of remarkable research outcomes. In its continuous development, artificial intelligence has formed three dominant schools: Symbolism, Behaviorism, and Connectionism. The deep learning discussed in this paper belongs to the artificial intelligence models proposed by Connectionism, whose key idea is to simulate the human brain by building artificial neural networks and thereby make machines possess human intelligence. More importantly, the paper focuses on the issue of opacity in deep learning’s epistemology and is divided into three parts: Chapter 1 is the introduction, which mainly discusses the following three aspects: 1. a brief review of early thought on artificial intelligence up to John McCarthy’s formal proposal of the term "artificial intelligence"; 2. a brief summary of the three schools of artificial intelligence, with emphasis on deep learning within Connectionism; 3. the introduction of the problem this paper investigates, namely opacity in deep learning’s epistemology. Chapter 2 analyzes the concept of epistemic opacity. Firstly, before we clarify the meaning of epistemic opacity, it is necessary to understand what epistemic transparency means. Secondly, there is a fierce dispute between computationalism and anti-computationalism in the philosophy of computational science: anti-computationalism criticizes computationalism’s reductionist worldview, according to which everything in the world comes into being through algorithms. In other words, computationalism’s understanding of the world is mixed with opacity, and this kind of epistemic opacity can be traced back to the study of the computer simulation process. Finally, epistemic opacity in the computer simulation process is elaborated and evaluated.
Chapters 3-6 are the core of the paper and explore the opacity problem in deep learning’s epistemology in detail. Epistemic opacity is no longer limited to the computer simulation process; it has also entered the field of deep learning. Through studying typical cases, we find that when explaining the failure of deep learning models, humans have not figured out the causality hidden behind adversarial examples, which indirectly reflects humans’ epistemic opacity about model failure. Next, we analyze the specific manifestations of epistemic opacity in deep learning along three dimensions: the data used to train deep learning models, the algorithms, and the entire agent based on deep learning algorithms. Then, we discuss the interpretability problem closely related to opacity in deep learning’s epistemology and analyze current research on deep learning interpretability from outside the model, inside the model, and the model itself. At the same time, if humans want to realize the interpretability of deep learning thoroughly, we must also face the difficulties posed by the disputes between correlation and causation and between reductionism and holism. In this situation, epistemic opacity will remain in deep learning for a long time, and humans cannot achieve the goal of making deep learning fully interpretable. Because the interpretability problem of deep learning cannot be solved, dark knowledge eventually emerges. Dark knowledge is a new type of knowledge generated with the machine as the epistemic subject, and it differs greatly from traditional knowledge. In addition, with the increase in computing power, the surge in data, and continuous breakthroughs in algorithms, dark knowledge has gradually risen and shows an explosive growth trend. Although it is difficult for humans to understand dark knowledge, it is still rational.
We discuss the rationality of dark knowledge from both theoretical and practical perspectives. On the whole, dark knowledge has had a huge impact on traditional epistemology, and its emergence is of extraordinary significance. |
Total references: | 184 |
Author biography: | 郭艳娜 successively received a Bachelor of Science degree and a Master of Philosophy degree. During her doctoral studies in philosophy, she published 2 academic papers, one of which was reprinted by both 人大复印资料 and 社会科学文摘 |
Library location: | Dissertation reading area of the library (Zones B-C, 3rd floor, south section of the main building) |
Open date: | 2023-01-19 |