Title (Chinese): | Design and Application Research of Explainable User Interfaces for Intelligent Tutoring Systems |
Name: | |
Confidentiality Level: | Public |
Thesis Language: | chi |
Subject Code: | 078401 |
Discipline: | |
Student Type: | Master's |
Degree: | Master of Education |
Degree Type: | |
Degree Year: | 2023 |
Campus: | |
School: | |
Research Direction: | Educational Applications of Artificial Intelligence |
First Supervisor's Name: | |
First Supervisor's Affiliation: | |
Submission Date: | 2023-06-14 |
Defense Date: | 2023-05-24 |
Title (English): | DESIGN AND APPLICATION RESEARCH OF EXPLAINABLE USER INTERFACES FOR INTELLIGENT TUTORING SYSTEMS |
Keywords (Chinese): | |
Keywords (English): | Intelligent tutoring system; Explainable artificial intelligence; Explainable user interface; Deep neural networks |
Abstract (Chinese): |
An intelligent tutoring system is an information system that uses techniques from information science to provide learners with immediate teaching guidance and real-time feedback. As artificial intelligence advances, complex models based on deep learning are being introduced into such systems. However, the decision-making processes of these models are opaque. Their use increases the complexity of service functions, so that the feedback these functions give users lacks key explanatory information, such as the basis for a decision. For users, this lack of explanatory information hampers their understanding of the system and prevents them from forming a mental model consistent with the system's working logic. Surveys also show that a lack of explanatory information undermines users' trust in the system. This study therefore applies explainable artificial intelligence techniques to explain the complex models in the system and designs explainable user interfaces for the related service functions, so as to provide users with explanatory information.
Specifically, the study takes the existing "Radar Math" system as its research object and selects two explanation methods, Layer-wise Relevance Propagation (LRP) and Local Interpretable Model-Agnostic Explanations (LIME), to explain the Deep Knowledge Tracing (DKT) model behind the "Cognitive Map" service function and the attention-based automatic grading (Att-Grader) model behind the "Automatic Grading" service function, respectively. Experimental results show that LRP and LIME can effectively reveal the decision logic of the DKT and Att-Grader models: the DKT model evaluates a user's mastery of a knowledge point from the user's recent practice performance on that knowledge point, and the Att-Grader model predicts an answer's score from whether the answer includes the content of the reference answer. Both logics are consistent with general principles in education.
Based on the model explanation results, the study designed explainable user interfaces. For the "Cognitive Map" service function, the DKT model's explanation results were transformed into the basis for evaluating a user's knowledge mastery status and presented through a "floating window" interface, giving ordinary learners the key explanatory information behind the function's mastery evaluations. For the "Automatic Grading" service function, the Att-Grader model's explanation results were transformed into the basis for judging a user's answer score and presented through a "feedback bar" interface, giving ordinary learners the key explanatory information behind the function's score predictions.
Finally, the study conducted an educational experiment to test the application effects of the explainable user interfaces. The results show that the explainable user interface of the "Cognitive Map" function helps users build a mental model consistent with the function's working mechanism and raises users' trust in its mastery evaluations, without lowering users' technology acceptance of the function or adding cognitive load to its use. The explainable user interface of the "Automatic Grading" function raises users' trust in its answer scores and improves users' technology acceptance of the function, without adding cognitive load to its use. The experiment demonstrates that explainable user interfaces can promote users' understanding of and trust in an intelligent tutoring system and improve the experience of using it. |
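For illustration, the following is a minimal Python sketch of LRP's epsilon rule on a single fully connected layer, showing the method's core step of redistributing output relevance to inputs in proportion to their contributions. It is a generic sketch only, not the thesis's DKT implementation (a recurrent model, which requires additional propagation rules), and all variable names are hypothetical.

```python
# Minimal sketch of LRP's epsilon rule for one fully connected layer.
# Illustrative only: applying LRP to a recurrent DKT model needs extra
# rules for gates and time steps that are not shown here.
import numpy as np

def lrp_epsilon(x, W, b, relevance_out, eps=1e-6):
    """Redistribute output relevance to inputs in proportion to each
    input's contribution x_i * w_ij to the pre-activation z_j."""
    z = x @ W + b                              # (d_out,) pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser
    s = relevance_out / z                      # relevance per unit of z
    return x * (W @ s)                         # R_i = x_i * sum_j w_ij * s_j

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # toy layer input
W = rng.normal(size=(4, 3))                    # toy weights
b = np.zeros(3)
R_out = np.maximum(x @ W + b, 0.0)             # toy output relevance
R_in = lrp_epsilon(x, W, b, R_out)
print(R_in, R_in.sum())                        # per-input relevance scores
```

With a zero bias and a small epsilon, the input relevances approximately sum to the output relevance, which is the conservation property that makes LRP scores interpretable as shares of the model's decision.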
Abstract (English): |
An intelligent tutoring system is an information system that uses techniques from information science to provide learners with real-time teaching guidance and feedback. As artificial intelligence continues to evolve, complex models based on deep learning are being introduced into such systems. However, these models' decision-making processes are opaque. Their use increases the complexity of service functions and leaves the feedback given to users without critical explanatory information, such as the basis for a decision. For users, a missing decision basis hampers their understanding of the system and their ability to build a mental model that matches the system's working logic. Surveys have also shown that a lack of explanatory information reduces users' trust in the system. This study therefore used explainable artificial intelligence techniques to explain the complex models in the system and designed explainable user interfaces for the related service functions to provide users with explanatory information.
Specifically, the study took the existing "Radar-Math" system as its research object and selected two explanation methods, Layer-wise Relevance Propagation (LRP) and Local Interpretable Model-Agnostic Explanations (LIME), to explain the Deep Knowledge Tracing (DKT) model of the "Cognitive Map" service function and the attention-based automatic short-answer grading (Att-Grader) model of the "Automatic Grading" service function. The experiments showed that LRP and LIME could effectively expose the decision logic of the two models: the DKT model evaluates a user's mastery of a knowledge point from the user's recent practice performance on that knowledge point, and the Att-Grader model predicts an answer's score from whether the answer includes the content of the reference answer. Both decision logics are consistent with general principles in education.
Based on the models' explanation results, the study designed explainable user interfaces. For the "Cognitive Map" service function, the DKT model's explanation results were transformed into the basis for evaluating a user's knowledge mastery status and presented in a "floating window" interface, allowing ordinary learners to obtain the critical explanatory information behind the function's mastery evaluations. For the "Automatic Grading" service function, the Att-Grader model's explanation results were transformed into the basis for judging a user's answer score and presented in a "feedback bar" interface, allowing ordinary learners to obtain the critical explanatory information behind the function's score predictions.
Finally, the study conducted educational experiments to test the application effects of the explainable user interfaces. The results showed that the "Cognitive Map" interface helped users build a mental model matching the working logic of the service function and increased users' trust in its mastery evaluations, without reducing users' technology acceptance of the function or adding cognitive load to its use. The "Automatic Grading" interface increased users' trust in its answer scores and improved users' technology acceptance of the function, without adding cognitive load to its use. The experiments demonstrated that explainable user interfaces can promote users' understanding of and trust in an intelligent tutoring system and improve the experience of using it. |
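For illustration, the sketch below shows how LIME's standard text explainer can surface which answer tokens push a grader's prediction up or down, which is the kind of explanation result the "feedback bar" interface presents. The `predict_proba` scorer here is a hypothetical token-overlap stand-in for the thesis's Att-Grader model, and the reference answer is invented.

```python
# Minimal sketch of explaining a short-answer scorer with LIME.
# The scorer is a hypothetical stand-in, not the thesis's Att-Grader model.
import numpy as np
from lime.lime_text import LimeTextExplainer

REFERENCE = set("photosynthesis converts light energy into chemical energy".split())

def predict_proba(texts):
    """Toy scorer: probability of 'correct' grows with the overlap between
    the answer's tokens and the reference answer's tokens."""
    probs = []
    for t in texts:
        overlap = len(set(t.lower().split()) & REFERENCE) / len(REFERENCE)
        probs.append([1.0 - overlap, overlap])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=["incorrect", "correct"])
answer = "plants use photosynthesis to turn light energy into chemical energy"
exp = explainer.explain_instance(answer, predict_proba, num_features=6)
print(exp.as_list())  # (token, weight) pairs; positive weights raise the score
```

LIME perturbs the answer by dropping tokens, re-scores each variant, and fits a local linear model, so the printed weights indicate which tokens the scorer relied on, exactly the per-token evidence an explainable user interface can display.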
Total References: | 88 |
Call Number: | 硕078401/23005 |
Open Access Date: | 2024-06-14 |