Title (Chinese): | Stochastic First-order Algorithms for Convex Finite-sum Optimization Problems |
Name: | |
Confidentiality level: | Public |
Thesis language: | chi (Chinese) |
Subject code: | 070101 |
Discipline: | |
Student type: | Bachelor's |
Degree: | Bachelor of Science |
Degree year: | 2024 |
Campus: | |
School: | |
Research direction: | Optimization theory and algorithms |
First supervisor name: | |
First supervisor affiliation: | |
Submission date: | 2024-05-26 |
Defense date: | 2024-05-07 |
Title (English): | Stochastic First-order Methods for Convex Finite-sum Optimization Problems |
Keywords (Chinese): | |
Keywords (English): | convex optimization ; finite-sum problem ; stochastic optimization ; iteration complexity |
Abstract (Chinese): |
Many models in machine learning can be reduced to solving a convex finite-sum optimization problem, so how to exploit the structure of such problems to design efficient algorithms is a question of interest. This thesis surveys stochastic first-order optimization algorithms for convex finite-sum problems and analyzes their iteration complexity: the stochastic gradient descent (SGD) method and the stochastic accelerated gradient descent (SAGD) method exploit the problem structure and therefore require fewer gradient computations than the corresponding deterministic algorithms, but their iteration complexity is affected by the variance of the stochastic gradient; SAG, SVRG, and SAGA improve on the iteration complexity of SGD and SAGD by reducing the variance of the stochastic gradient; RPDG and RGEM improve the iteration complexity further, attaining the optimal iteration complexity for a class of randomized incremental gradient methods. |
Abstract (English): |
Many models in machine learning can be reduced to solving a convex finite-sum optimization problem. How to design efficient algorithms by exploiting the structure of the problem is a question of interest. In this thesis, we introduce stochastic first-order optimization algorithms for convex finite-sum problems and analyze their iteration complexity. The stochastic gradient descent (SGD) method and the stochastic accelerated gradient descent (SAGD) method take advantage of the structure of the problem and hence reduce the amount of gradient computation compared with the corresponding deterministic algorithms, but their iteration complexity is affected by the variance of the stochastic gradient. SAG, SVRG, and SAGA improve the iteration complexity by reducing the variance of the stochastic gradient. RPDG and RGEM further improve the iteration complexity and attain the optimal iteration complexity for a class of randomized incremental gradient methods. |
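For reference, the finite-sum setting and the plain SGD step that both abstracts refer to take the following standard form (a minimal sketch; the step size $\gamma_k$ and the sampled index $i_k$ are generic notation, not taken from the thesis itself):

\[
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x), \qquad x_{k+1} = x_k - \gamma_k \nabla f_{i_k}(x_k), \quad i_k \sim \mathrm{Uniform}\{1, \dots, n\}.
\]

Each iteration evaluates only one component gradient $\nabla f_{i_k}$ rather than the full gradient $\nabla f$, which is the source of both the reduced per-iteration cost and the stochastic-gradient variance mentioned above.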
Total references: | 18 |
Total figures: | 0 |
Total tables: | 0 |
Call number: | 本070101/24129 |
Open access date: | 2025-06-10 |