Thesis Information

Title (Chinese):

 基于可解释人工智能的学业数据洞察研究 (Research on Academic Data Insights Based on Explainable Artificial Intelligence)

Name:

 毛月恒

Confidentiality level:

 Public

Language of thesis:

 Chinese

Discipline code:

 085212

Discipline:

 Software Engineering

Student type:

 Master's

Degree:

 Master of Engineering

Degree type:

 Professional degree

Degree year:

 2021

Campus:

 Beijing campus

School:

 School of Artificial Intelligence

Research direction:

 Software Engineering

First supervisor:

 徐鹏飞

First supervisor's affiliation:

 School of Artificial Intelligence, Beijing Normal University

Submission date:

 2021-06-09

Defense date:

 2021-06-01

Title (English):

 Insights into Academic Data Based on Explainable Artificial Intelligence

Keywords (Chinese):

 explainable artificial intelligence; online education; academic data; machine learning

Keywords (English):

 explainable artificial intelligence; online education; academic data; machine learning

Abstract (Chinese):

Explainable Artificial Intelligence (XAI) is expected to endow AI systems with the ability to explain themselves when interacting with humans, so that the results of data analysis can be conveyed in a way people can understand. XAI can also play an important role in education.

Taking the open dataset released by the Open University (UK) as an example, this study focuses on the interpretability of academic-outcome prediction models. In view of the characteristics of online learning behavior, several typical behavioral features of learners are first analyzed, and prediction experiments on learners' academic outcomes are carried out with several machine learning models, including logistic regression, XGBoost, LightGBM, and artificial neural networks. Three explainable-AI methods, SHAP, LIME, and counterfactual explanations, are then used to interpret the trained prediction models at both the global and local levels. Based on these interpretations, the learning features and behavioral patterns of positive and negative samples are analyzed and compared, and targeted instructional-intervention suggestions are finally given.

The results show that, in the virtual learning environment, there are clear differences between the learning behaviors of the failing group and the passing group. Content learning, forum communication, interface interaction, and assignment participation in online learning jointly determine the final academic outcome, whereas demographic background information and pre-course preparation have almost no effect on it. Explainable AI plays an important role in the scenario of academic data insight, and post-hoc interpretation methods for machine learning models show particular strength in the analysis of individual samples. This research effectively addresses the interpretability of predictive models in teaching contexts, provides a reference on the factors that influence academic achievement in online learning, and offers instructors new insights into online course development, so that the teaching process can be adjusted to meet the needs of future learners; it also helps provide timely academic warnings and interventions, which is valuable for personalized teaching.

Abstract (English):

Explainable Artificial Intelligence (XAI) is expected to give AI systems the ability to explain themselves when interacting with humans and to communicate the results of data analysis in a way that humans can understand. In addition, XAI can play an important role in the field of education.

This study focuses on the interpretability of academic-outcome prediction models, using the open dataset of the Open University (UK) as an example. To address the characteristics of online learning behavior, several typical behavioral features of learners are first analyzed, and experiments are conducted to predict learners' academic outcomes with various machine learning models, including logistic regression, XGBoost, LightGBM, and artificial neural networks. The trained models are then interpreted at both the global and local levels using three explainable-AI methods: SHAP, LIME, and counterfactual explanations. Based on these interpretations, the learning characteristics and behavioral patterns of positive and negative samples are analyzed and compared, and targeted instructional interventions are suggested.

The results show that the learning behaviors of the failing and passing groups differ markedly in the virtual learning environment, and that content learning, forum communication, interface interaction, and assignment participation jointly determine the final academic outcome, while demographic background information and pre-course preparation have little effect. XAI plays an important role in the scenario of academic data insight, and the post-hoc interpretation methods for machine learning models prove particularly useful for analyzing individual samples. This study addresses the interpretability of predictive models in instructional contexts, identifies factors that influence academic achievement in online learning, and provides instructors with new insights for online course development, so that the teaching and learning process can be adapted to the needs of future learners; it also supports timely academic alerts and interventions, which is helpful for personalized instruction.
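
The record itself contains no code, but the workflow the abstracts describe (fit a gradient-boosting classifier on learner-activity features, then explain it at the global and local level) can be illustrated with a minimal Python sketch. The snippet below shows only the SHAP part of such a pipeline; the file name oulad_features.csv, the feature names, and the "passed" label column are hypothetical placeholders for illustration, not the thesis's actual preprocessing.

    # Minimal sketch (not the thesis code): train a gradient-boosted classifier on
    # assumed learner-activity features, then explain it with SHAP.
    import pandas as pd
    import shap
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    # Hypothetical preprocessed table: one row per learner, aggregated VLE click counts.
    df = pd.read_csv("oulad_features.csv")
    feature_cols = ["content_clicks", "forum_clicks", "homepage_clicks", "quiz_clicks",
                    "assessments_submitted", "avg_assessment_score"]
    X, y = df[feature_cols], df["passed"]          # passed: 1 = pass, 0 = fail (assumed)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)
    model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                          eval_metric="logloss")
    model.fit(X_train, y_train)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Global explanation: which features drive predictions across all test learners.
    shap.summary_plot(shap_values, X_test)

    # Local explanation: how each feature pushes one learner toward pass or fail.
    shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0],
                    matplotlib=True)

The other two methods mentioned in the abstracts would be queried against the same fitted model in a similar per-sample fashion, for example LIME via lime.lime_tabular.LimeTabularExplainer, or a counterfactual search that perturbs a learner's feature values until the predicted outcome flips; the sketch above only shows the general fit-once, explain-globally-and-locally pattern.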

Total number of references:

 105

Author profile:

 毛月恒

Call number:

 硕085212/21011

Open access date:

 2022-06-09
