Chinese title: | 多模态情绪识别能力测验的开发与验证 |
Name: | |
Confidentiality level: | Public |
Thesis language: | Chinese (chi) |
Discipline code: | 04020005 |
Discipline: | |
Student type: | Master's |
Degree: | Master of Education |
Degree type: | |
Degree year: | 2024 |
Campus: | |
School: | |
Research direction: | Psychometrics |
First supervisor: | |
First supervisor's affiliation: | |
Submission date: | 2024-05-30 |
Defense date: | 2024-05-25 |
English title: | DEVELOPMENT AND VERIFICATION OF MULTIMODAL EMOTION RECOGNITION ABILITY TEST |
Chinese keywords: | 多模态情绪识别能力 ; 多模态情绪表达数据库 ; 测验开发 |
English keywords: | Multi-Modal Emotion Recognition Ability ; Multi-Modal Emotional Expression Database ; Test Development |
Chinese abstract: |
Emotion recognition ability is of vital importance to an individual's social interaction and mental health. Early research on emotion recognition mostly examined single forms or modalities of emotion in isolation, yet in daily life emotional expression is conveyed jointly through dynamic facial, vocal, and bodily cues, which together constitute the most effective means of communication; this motivated research on more comprehensive assessment of multimodal emotion recognition ability. However, existing multimodal emotion recognition studies are inconsistent in how they define and weight multimodal emotion recognition ability, and the multimodal emotional materials available for testing are limited and lack natural ecological validity, which undermines test effectiveness. This study therefore focuses on what it means to recognize interacting multimodal emotional information, designs a multimodal emotional stimulus database accordingly, explores an improved design for a multimodal emotion recognition ability test, and examines the test's unique associations with individual psychosocial functioning.

Study 1 constructed a localized multimodal emotional expression database, filling the domestic gap in dynamic, multimodal emotional stimulus materials and addressing the absence of graded expression intensity. Because real-life emotional expression takes many forms, the database includes natural expression under both "speaking" and "not speaking" conditions as the basis for differences in modality information. Expressions were elicited through scenario-guided natural expression: 90 standardized emotional-scenario text prompts guided 6 professional actors in expressing 6 basic emotions at 5 intensity levels, yielding 360 basic expression videos; combined with 3 camera positions and 2 types of processing, this produced 2,160 valid videos in total. To meet the needs of subsequent research, 540 videos from the fully voiced, voiced-muted, and silent-muted conditions were selected for validation of the performances and a preliminary assessment. Expert ratings showed good inter-rater agreement (Kendall's W between 0.56 and 0.82) and high agreement with the intended performances (Kendall's W between 0.78 and 0.92). With the expert ratings as the scoring key, 90 participants then completed a grouped assessment. Preliminary results showed that the materials were on the easy side and that their discrimination could still be improved; scores on each part were approximately normally distributed, with clear individual differences, and recognition condition and emotion type significantly affected participants' scores. This demonstrates that the database is feasible for assessing emotion recognition ability.

Building on the Study 1 assessment, Study 2 selected 90 video materials across the three recognition conditions by difficulty, so that test difficulty followed a normal distribution and the test could better differentiate ability levels. Emotion-category and intensity recognition items were combined into a difficulty-weighted composite score, and test quality was analyzed. After 61 items were retained, reliability was good (Cronbach's α of 0.86, with all subscale coefficients above 0.7); criterion-related validity was supported by significant negative correlations with depression and autistic traits; and structural validity was supported by both a second-order factor model of recognition conditions and a second-order factor model of emotion categories, the latter fitting best (χ² = 135.62, df = 113, p = 0.146, RMSEA = 0.029, CFI = 0.965, TLI = 0.957, SRMR = 0.046). The multimodal emotion recognition ability test developed here is therefore valid and reliable and can assess emotion recognition ability fairly comprehensively.

Overall, this study constructed a localized multimodal emotional expression database, developed a reliable multimodal emotion recognition ability test, and demonstrated the database's potential for applied research through the test's successful application. These results provide valuable materials for future research on emotion recognition ability and strong support for a deeper understanding of its relationship with individual mental health. |
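As a point of reference for the two coefficients cited above, the following minimal Python sketch shows how Kendall's W (used here for expert rating agreement) and Cronbach's α (used in Study 2 for test reliability) are conventionally computed. This is not the thesis's actual analysis code; the matrix shapes, variable names, and the illustrative call at the end are assumptions.

import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    # Kendall's coefficient of concordance for an (m raters x n clips) matrix.
    # Each rater's scores are converted to ranks across clips; W measures how
    # consistently the raters order the clips (0 = no agreement, 1 = perfect).
    # Textbook formula without the correction term for ties.
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # average ranks for ties
    rank_sums = ranks.sum(axis=0)                      # per-clip rank sums R_j
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # spread of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

def cronbach_alpha(items: np.ndarray) -> float:
    # Cronbach's alpha for an (n respondents x k items) score matrix:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative call: 5 hypothetical experts rating 30 clips on a 1-5 scale.
rng = np.random.default_rng(0)
print(kendalls_w(rng.integers(1, 6, size=(5, 30)).astype(float)))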
English abstract: |
Emotion recognition ability is of crucial importance to individuals' social interaction and mental health. Early research on emotion recognition mostly examined emotions in single forms or modalities, yet in daily life emotional expression is conveyed through dynamic facial, vocal, and bodily cues that together constitute the most effective means of communication; research on comprehensive assessment of multi-modal emotion recognition ability has emerged as a result. However, existing multi-modal emotion recognition studies are inconsistent in how they define and emphasize multi-modal emotion recognition ability, and the multi-modal emotional materials available for testing are limited and lack natural ecological validity, which undermines the effectiveness of the assessments. This study therefore focuses on the interactive recognition of multi-modal emotional information and designs a multi-modal emotional stimulus database, in order to explore an improved design for a multi-modal emotion recognition ability test and to investigate the test's unique correlations with individual psychosocial functioning.

Study 1 aimed to construct a localized multi-modal emotional expression database, filling the gap in dynamic, multi-modal emotional stimulus materials in China and addressing the absence of graded expression intensity. In designing the database, the varied possibilities of real-life emotional expression were taken into account by including two natural expression conditions, "speaking" and "not speaking", as the basis for differences in modality information. Expressions were elicited through scenario-guided natural expression: 90 standardized emotional-scenario text prompts guided six professional actors in expressing six basic emotions at five intensity levels, yielding 360 basic expression videos; combined with three camera positions and two types of processing, a total of 2,160 valid videos were generated. Based on the needs of subsequent research, 540 videos from the fully voiced, voiced-muted (speaking clips with the audio removed), and silent-muted (non-speaking clips) conditions were selected for performance validation and preliminary assessment. The results showed good inter-rater agreement among experts (Kendall's W ranging from 0.56 to 0.82) and high agreement with the intended performances (Kendall's W ranging from 0.78 to 0.92). Subsequently, with the expert ratings as the scoring key, 90 participants completed a grouped assessment. Preliminary results indicated that the materials were relatively easy and that their discrimination could still be improved; participants' scores in each section were approximately normally distributed, with clear individual differences, and recognition condition and emotion type significantly affected participants' scores. This demonstrates the feasibility of using the stimulus database to assess emotion recognition ability.

Building on the Study 1 assessment, Study 2 selected 90 video materials across the three recognition conditions according to difficulty, so that test difficulty was normally distributed and the test could better differentiate participants' ability levels. Emotion-category and intensity recognition items were combined into a difficulty-weighted composite score, and a quality analysis showed that after 61 items were retained, the test demonstrated good reliability (Cronbach's α of 0.86, with all subscale coefficients above 0.7); criterion-related validity was supported by significant negative correlations with depression level and autistic traits; and structural validity was supported by both a second-order factor model of recognition conditions and a second-order factor model of emotion categories, with the latter exhibiting the best fit (χ² = 135.62, df = 113, p = 0.146, RMSEA = 0.029, CFI = 0.965, TLI = 0.957, SRMR = 0.046). This indicates that the multi-modal emotion recognition ability test developed in this study is valid, reliable, and able to assess participants' emotion recognition ability comprehensively.

In summary, this study constructed a localized multi-modal emotional expression database, developed a reliable multi-modal emotion recognition ability test, and demonstrated the database's potential for applied research through the successful application of the test. These achievements provide valuable materials for subsequent research on emotion recognition ability and strong support for a deeper understanding of the relationship between emotion recognition ability and individual mental health. |
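For readers who want to reproduce the structural-validity analysis, the sketch below shows one way to fit a second-order factor model of emotion categories and obtain the global fit indices reported above (χ², df, RMSEA, CFI, TLI) in Python with the semopy package. The factor and item names are illustrative placeholders, and the thesis does not specify its software, so this is only a sketch of the technique (lavaan in R is an equally common choice).

import pandas as pd
import semopy

# lavaan-style model description: six first-order factors (one per basic
# emotion) loading on a single second-order ability factor. Item names
# (hap1, sad1, ...) are hypothetical stand-ins for the 61 retained items.
MODEL_DESC = """
Happiness =~ hap1 + hap2 + hap3
Sadness   =~ sad1 + sad2 + sad3
Anger     =~ ang1 + ang2 + ang3
Fear      =~ fea1 + fea2 + fea3
Disgust   =~ dis1 + dis2 + dis3
Surprise  =~ sur1 + sur2 + sur3
ERA =~ Happiness + Sadness + Anger + Fear + Disgust + Surprise
"""

def fit_emotion_category_cfa(scores: pd.DataFrame) -> pd.DataFrame:
    # `scores` holds one column per observed item, one row per participant;
    # column names must match the item names in MODEL_DESC.
    model = semopy.Model(MODEL_DESC)
    model.fit(scores)                # semopy's default estimator
    return semopy.calc_stats(model)  # fit statistics incl. chi2, DoF, RMSEA, CFI, TLI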
Total references: | 82 |
Catalog number: | 硕040200-05/24003 |
Open-access date: | 2025-05-31 |