Thesis Information

Chinese Title:

 基于机器学习即时反馈的中学生科学解释能力形成性评价研究——以高中化学“原电池的应用”内容为例

Name:

 张美娜

Confidentiality Level:

 Public

Thesis Language:

 Chinese (chi)

Discipline Code:

 045106

Discipline/Major:

 Subject Teaching (Chemistry)

Student Type:

 Master's

Degree:

 Master of Education

Degree Type:

 Professional degree

Degree Year:

 2024

Campus:

 Beijing campus

School/Faculty:

 Faculty of Education

Research Direction:

 Assessment research

First Supervisor:

 王磊

First Supervisor's Affiliation:

 College of Chemistry

Submission Date:

 2024-06-09

Defense Date:

 2024-05-26

English Title:

 A Study on the Formative Assessment of Secondary School Students' Scientific Explanation Ability Based on Machine Learning Instant Feedback: An Example of "Applications of Primary Cells" in High School Chemistry

Chinese Keywords:

 高中化学 ; 科学解释能力 ; 自动评分 ; 即时反馈 ; 形成性评价    

English Keywords:

 High School Chemistry ; Scientific Explanation Ability ; Automated Scoring ; Instant Feedback ; Formative Assessment

Abstract:

Scientific explanation is an important scientific practice, but cultivating students' scientific explanation ability in classroom teaching is highly challenging. Assessing and developing this complex ability requires a series of open-ended tasks set in authentic contexts; as students work on these tasks, they need timely guidance from the teacher to refine their ideas and thereby improve their ability to construct scientific explanations. In real classrooms, however, practical constraints usually make it impossible to give immediate, individualized feedback on every student response, so students lose the opportunity to revise and improve, and their scientific explanation ability develops less well. Rapidly advancing machine learning technology can provide automated scoring and instant feedback for open-ended questions. Many researchers in China and abroad have applied machine learning to automated scoring of open-ended questions in education, but research on the effects of assessment based on machine-learning instant feedback is still in its infancy. This study therefore takes the high school chemistry topic "Applications of Primary Cells" as the content carrier and investigates secondary school students' performance in constructing scientific explanations in chemistry, and how that performance changes, under formative assessment based on machine-learning automated scoring and instant feedback.

The study comprises three main tasks: (1) designing a scientific explanation assessment instrument based on machine-learning instant feedback; (2) constructing an automated scoring model for scientific explanations and implementing the instant-feedback function; and (3) implementing formative assessment of scientific explanation ability based on machine-learning instant feedback, and examining the performance and changes of Grade 11 students' scientific explanation ability in chemistry in this environment.

In Task 1, a scientific explanation model was first identified from the existing literature and its performance levels were provisionally defined; these preliminary levels were then revised in light of an analysis of students' actual responses to establish the performance levels of scientific explanation used in this study. Based on these performance levels, the key scoring standard of the study, an analytic scoring rubric, was constructed. "Applications of Primary Cells" was then chosen as the assessment carrier, and specific assessment items were developed under this rubric according to the relevant design principles; during development, the instrument was revised using student interviews and reviews by in-service teachers. Finally, the quality of the revised instrument was examined with the Rasch model, and the scientific explanation assessment instrument based on machine-learning instant feedback was finalized.
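For reference, the Rasch analysis mentioned above models the probability of a response as a function of person ability and item difficulty. In its simplest dichotomous form (a minimal illustration; rubrics with several performance levels are usually analyzed with a polytomous extension such as the partial credit model), the probability that student n with ability θ_n succeeds on item i with difficulty δ_i is

 P(X_ni = 1) = exp(θ_n − δ_i) / (1 + exp(θ_n − δ_i))

Fit statistics derived from this model (for example, infit and outfit mean squares) are what typically support the kind of item-quality check described here.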

In Task 2, several of the most widely used large language models were first compared, and the BERT model was chosen for its combination of performance and data security. The 380 collected student responses were used to fine-tune BERT into an automated scoring model, whose scoring performance was then tested: the fine-tuned model reached an accuracy of 0.993 and an average kappa of 0.973. Feedback was then designed separately for the pre-/post-test items and the formative assessment items according to the knowledge involved in the feedback content. Finally, a platform for formative assessment of scientific explanation ability based on machine-learning instant feedback was built, and its functional design is presented.
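As an illustration of the modelling step described above, automated scoring with BERT can be framed as ordinary sequence classification over the rubric levels. The sketch below uses the Hugging Face transformers and datasets libraries; the checkpoint name (bert-base-chinese), the four-level label scheme, the placeholder responses, and the hyperparameters are assumptions for illustration, not details taken from the thesis.

# Minimal sketch (assumptions, not the thesis implementation): fine-tune a
# Chinese BERT checkpoint to classify student responses into rubric levels 0-3.
from datasets import Dataset
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

# Placeholder data; in practice this would be the scored student responses.
data = Dataset.from_dict({
    "text": ["锌电极失去电子被氧化,作负极……", "铜电极上有气泡产生……"],
    "label": [3, 1],
})

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained("bert-base-chinese",
                                                      num_labels=4)

def tokenize(batch):
    # Truncate or pad each response to a fixed length so items can be batched.
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="scoring_model", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=data).train()

# At inference time, model(**inputs).logits.argmax(-1) yields the predicted
# level, which a feedback platform can map to pre-written feedback text.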

In Task 3, the formative assessment of scientific explanation ability based on machine-learning instant feedback was carried out, and Grade 11 students' performance and its changes under this assessment were observed. Analysis of the test results, combined with student interviews, showed that: (1) students' overall level of scientific explanation ability was moderate: 35.48% of the participants could state a claim explicitly and find correct and sufficient information or theory, 22.58% could state a claim explicitly and find correct and sufficient information and theory, and only 16.13% could, while stating a claim explicitly, construct complete reasoning based on correct and sufficient information and theory to support the claim; (2) students performed differently across the dimensions of scientific explanation ability, with ability estimates on the four dimensions, from highest to lowest, of information, claim, theory, and reasoning; overall, performance on the reasoning and theory dimensions was relatively weak; (3) formative assessment based on machine-learning instant feedback can significantly improve students' scientific explanation ability as a whole, as well as their levels on the four dimensions of claim, information, theory, and reasoning.

English Abstract:

Scientific explanation is an important scientific practice, but cultivating students' scientific explanation ability in teaching practice is very challenging. Assessing and developing this complex ability requires a series of open-ended tasks set in authentic situations, and students improve their ideas, and with them their scientific explanation ability, through the teacher's immediate guidance while solving these tasks. In real teaching, however, objective constraints often make it impossible to give immediate, personalized feedback on students' answers, so students lose the opportunity to revise and improve and cannot develop their scientific explanation ability as well. Rapidly developing machine learning technology can provide automated scoring and instant feedback for open-ended questions. Many scholars at home and abroad have tried to apply machine learning to automated scoring of open-ended questions in education, but research on the effects of assessment based on machine-learning instant feedback is still in its infancy. Therefore, this study selected the high school chemistry content "Applications of Primary Cells" as a carrier to explore secondary school students' performance in scientific explanation in chemistry, and its changes, under formative assessment based on machine-learning automated scoring and instant feedback.

The study consists of three main tasks: (1) designing a scientific explanation assessment tool based on machine-learning instant feedback; (2) constructing an automated scoring model for scientific explanations and implementing the instant-feedback function; and (3) implementing formative assessment of scientific explanation ability based on machine-learning instant feedback to investigate the performance and changes of Grade 11 students' scientific explanation ability in chemistry in this environment.

In Task 1, the scientific explanation model was first identified and its performance levels were preset on the basis of the existing literature; the preliminary levels were then revised and refined by analyzing students' actual responses, yielding the performance levels of scientific explanation used in this study. These performance levels were used to construct the analytic scoring rubric, the key scoring standard of this study. "Applications of Primary Cells" was then selected as the assessment carrier, and specific assessment items were developed according to the relevant principles under the criteria determined above; during development, the tool was revised on the basis of student interviews and assessments by frontline teachers. Finally, the quality of the revised tool was tested with the Rasch model, and the scientific explanation assessment tool based on machine-learning instant feedback was finalized.

In Task 2, several of the most widely used large language models were first compared, and the BERT model, which offers both strong performance and security, was chosen. The 380 collected student responses were used to fine-tune BERT into an automated scoring model, and its scoring performance was tested: the fine-tuned model reached an accuracy of 0.993 and an average kappa of 0.973. Feedback was then designed for the pre-/post-test items and the formative assessment items according to the knowledge involved in the feedback content. Finally, a platform for formative assessment of scientific explanation ability based on machine-learning instant feedback was built, and its functional design is demonstrated.
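For context, the reported accuracy and kappa are standard agreement metrics between the model's scores and human raters' scores on the same responses. A minimal sketch with scikit-learn (the score lists below are placeholders, not data from the study):

# Agreement between machine-assigned and human-assigned rubric levels.
from sklearn.metrics import accuracy_score, cohen_kappa_score

human_scores   = [3, 2, 2, 0, 1, 3, 2]   # levels assigned by human raters
machine_scores = [3, 2, 1, 0, 1, 3, 2]   # levels predicted by the model

print("accuracy:", accuracy_score(human_scores, machine_scores))
print("kappa:", cohen_kappa_score(human_scores, machine_scores))
# A weighted variant penalizes larger disagreements more heavily:
print("quadratic weighted kappa:",
      cohen_kappa_score(human_scores, machine_scores, weights="quadratic"))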

In Task 3, the formative assessment of scientific explanation ability based on machine-learning instant feedback was implemented, and the performance and changes of Grade 11 students' scientific explanation ability under this assessment were observed. Analysis of the test results, together with student interviews, revealed that: (1) students' overall level of scientific explanation ability was moderate, with 35.48% of the students able to state a claim explicitly and find correct and sufficient information or theory, 22.58% able to state a claim explicitly and find correct and sufficient information and theory, and only 16.13% able, while clearly stating a claim, to construct complete reasoning based on correct and sufficient information and theory to support the claim; (2) students' performance varied across the dimensions of scientific explanation ability, with ability values on the four dimensions, from highest to lowest, of information, claim, theory, and reasoning; overall, performance on the reasoning and theory dimensions was relatively weak; (3) formative assessment based on machine-learning instant feedback can significantly improve students' scientific explanation ability as well as their levels on the four dimensions of claim, information, theory, and reasoning.

Total References:

 68

Call Number:

 硕045106/24004

Open Access Date:

 2025-06-09    
