Chinese title: | BP神经网络中的数学原理 |
Author: | |
Confidentiality level: | Public |
Thesis language: | Chinese |
Discipline code: | 070101 |
Discipline: | |
Student type: | Bachelor |
Degree: | Bachelor of Science |
Degree year: | 2021 |
University: | Beijing Normal University |
Campus: | |
School: | |
First supervisor: | |
First supervisor's affiliation: | |
Submission date: | 2021-06-25 |
Defense date: | 2021-05-19 |
English title: | Mathematical Principles in BP Neural Network |
Chinese keywords: | |
English keywords: | Back-propagation; Gradient descent method; Vanishing gradient; Zero-centered |
Chinese abstract: |
Artificial neural networks imitate the information-processing mechanism of biological neural networks and are widely used in fields such as pattern recognition and signal processing. The single-hidden-layer feedforward neural network has good approximation ability and is a focus of neural network research. The BP (Back-propagation) algorithm, based on gradient descent, is the most commonly used algorithm for training single-hidden-layer feedforward networks. Starting from the fundamentals of artificial neural networks, this thesis introduces three neuron activation functions and three network structures, including the single-hidden-layer feedforward network; taking that network as an example, it derives the forward propagation of information and the backward propagation of error in the BP algorithm, and explains the mathematical principles behind the algorithm. The thesis also analyzes the advantages and disadvantages of choosing the Sigmoid function as the activation function, and discusses several current improvements that address its vanishing-gradient problem and its non-zero-centered output.
|
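For orientation, the quantities at the heart of the derivation summarized above can be written out in standard form; the following equations are a textbook sketch, not quoted from the thesis:

```latex
% Sigmoid activation and its derivative. The bound 1/4 is the root of the
% vanishing-gradient problem: each layer of back-propagation multiplies the
% error signal by sigma'(x) <= 1/4, so the gradient shrinks with depth.
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr) \le \frac{1}{4}

% Gradient-descent update of a weight w with learning rate \eta on error E,
% which is the update rule underlying the BP algorithm.
w \leftarrow w - \eta \, \frac{\partial E}{\partial w}

% Since sigma(x) lies in (0, 1) for every x, its outputs are always
% positive, i.e. not zero-centered.
```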
English abstract: |
Artificial neural networks imitate the information-processing mechanism of biological neural networks and are widely applied in fields such as pattern recognition and signal processing. The single-hidden-layer feedforward neural network has good approximation ability and is a focus of neural network research. The BP (Back-propagation) algorithm, which is based on gradient descent, is the most commonly used method for training single-hidden-layer feedforward neural networks. Starting from the fundamentals of artificial neural networks, this thesis introduces three neuron activation functions and three network structures, including the single-hidden-layer feedforward neural network. Taking this network as an example, it derives the forward propagation of information and the backward propagation of error in the BP algorithm and explains the mathematical principles behind the algorithm. The thesis also analyzes the advantages and disadvantages of using the Sigmoid function as the activation function, and discusses several current improvements that address its vanishing-gradient problem and its non-zero-centered output. |
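To make the forward and backward passes described in the abstract concrete, here is a minimal runnable sketch of BP training for a single-hidden-layer network with Sigmoid activations. The XOR data, layer sizes, learning rate, and epoch count are illustrative assumptions, not taken from the thesis:

```python
# A minimal sketch (not the thesis's own code) of BP training for a
# single-hidden-layer feedforward network with Sigmoid activations,
# using plain gradient descent on the squared error.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task (assumed): learn XOR with 2 inputs, 3 hidden units, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 3))   # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(0, 1, (3, 1))   # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5                       # learning rate (assumed value)

for epoch in range(10000):
    # Forward propagation of information.
    H = sigmoid(X @ W1 + b1)    # hidden activations
    Y = sigmoid(H @ W2 + b2)    # network output

    # Backward propagation of error for E = 1/2 * sum((Y - T)^2).
    # Note sigmoid'(z) = y * (1 - y) when y = sigmoid(z).
    delta2 = (Y - T) * Y * (1 - Y)          # output-layer error term
    delta1 = (delta2 @ W2.T) * H * (1 - H)  # hidden-layer error term

    # Gradient-descent weight updates.
    W2 -= eta * H.T @ delta2
    b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * X.T @ delta1
    b1 -= eta * delta1.sum(axis=0)

print(np.round(Y, 3))  # outputs should approach [0, 1, 1, 0]
```

The two delta terms are the back-propagated error signals the thesis derives; the repeated factor of the Sigmoid derivative, `Y * (1 - Y)` and `H * (1 - H)`, is what shrinks the gradient layer by layer in deeper networks.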
Total references: | 10 |
Total figures: | 0 |
Total tables: | 0 |
Call number: | 本070101/21051 |
Open access date: | 2022-06-25 |