Thesis Information

Chinese Title:

基于图信号小波变换的大脑多尺度特征融合算法研究 (Research on Multiscale Feature Fusion Algorithm of Brain Based on Graph Signal Wavelet Transform)

Name:

徐文岩 (Xu Wenyan)

Confidentiality Level:

Public

Thesis Language:

Chinese

Discipline Code:

081002

Discipline:

Signal and Information Processing

Student Type:

Master's

Degree:

Master of Engineering

Degree Type:

Academic degree

Degree Year:

2020

Campus:

Beijing campus

School:

School of Artificial Intelligence

Research Direction:

Neuroimaging

First Supervisor:

邬霞 (Wu Xia)

First Supervisor's Institution:

School of Artificial Intelligence, Beijing Normal University

Submission Date:

2020-06-15

Defense Date:

2020-05-28

English Title:

 Research on Multiscale Feature Fusion Algorithm of Brain Based on Graph Signal Wavelet Transform    

Chinese Keywords:

Multiscale; Graph signal wavelet transform; Feature fusion; fMRI; Brain state decoding

English Keywords:

Multiscale; Graph signal wavelet transform; Feature fusion; fMRI; Brain state decoding

Chinese Abstract:

As cognitive neuroscience research has deepened, the spatial multiscale nature of the brain has been demonstrated and applied to the interpretation of cognitive function and the prediction of disease. Previous studies have used the concept of multiple scales to extract brain features for brain state decoding and disease diagnosis, and have made some progress. However, current approaches to extracting and fusing multiscale features still extract information insufficiently and cannot characterize within-scale and between-scale information at the same time. This study introduces the graph signal wavelet transform to extract information from the brain at multiple scales while accounting for its irregular structure, and proposes two multiscale feature fusion frameworks based on the Scale Invariant Feature Transform (SIFT) and on graph convolution. Within each framework, multiscale feature fusion is realized using either anatomical adjacency or functional adjacency; the cognitive meaning carried by the multiscale features is explored, and the features are applied to brain state classification and to regression-based prediction of behavioral data. The main work and findings are as follows:
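The abstract describes the graph signal wavelet transform only at a high level. Below is a minimal Python/NumPy sketch of a spectral graph wavelet decomposition of a region-level brain signal at several scales, in the spirit of standard spectral graph wavelet constructions; the connectivity matrix `W`, the signal `x`, the band-pass kernel, and the scale values are illustrative assumptions rather than the choices made in the thesis.

```python
import numpy as np

def graph_wavelet_coefficients(W, x, scales=(1.0, 2.5, 5.0, 10.0)):
    """Wavelet coefficients of graph signal x at each scale, shape (n_scales, n_regions)."""
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian of the brain graph
    lam, U = np.linalg.eigh(L)              # spectral decomposition L = U diag(lam) U^T
    x_hat = U.T @ x                         # graph Fourier transform of the signal
    rows = []
    for s in scales:
        g = (s * lam) * np.exp(-s * lam)    # an assumed band-pass wavelet kernel g(s * lambda)
        rows.append(U @ (g * x_hat))        # filter in the spectral domain, project back
    return np.vstack(rows)

# Toy example: a random symmetric connectivity matrix over 90 regions
rng = np.random.default_rng(0)
W = rng.random((90, 90))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
x = rng.standard_normal(90)                 # one value per region (e.g. an fMRI-derived measure)
coeffs = graph_wavelet_coefficients(W, x)   # shape (4, 90): one row per scale
```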

1. A SIFT-based multiscale feature fusion framework (Graph Signal Wavelet Multiscale-Scale Invariant Feature Transform, GSWM-SIFT) is proposed to fuse multiscale information from the brain. The graph signal wavelet transform is used to obtain information at multiple scales from the brain's irregular spatial structure, and features are then constructed from the information gradients between brain regions defined by either anatomical or functional adjacency. Specifically, local key brain regions are first identified; the information gradients between each key region and its anatomically or functionally adjacent regions are then computed at multiple scales, fusing the multiscale information into features; finally, the multiscale features are applied to the HCP (Human Connectome Project) dataset for brain state classification and behavioral index prediction. The results show that, except when a linear SVM (Support Vector Machine) is used to classify dataset 2, both multiscale fusion features match or exceed the best performance of single-scale wavelet features in classification and regression, demonstrating the effectiveness of the multiscale features. In addition, the anatomical-adjacency feature GSWM-SIFT(I) classifies better than the functional-adjacency feature GSWM-SIFT(II). The distribution of the multiscale features over the brain shows that they capture how the information gradients of key regions change across scales, which helps to study information exchange across scales in the brain and supports a deeper interpretation of its cognitive functions.
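As a purely illustrative reading of this gradient-based fusion step, the sketch below assembles a GSWM-SIFT-style feature by differencing the wavelet coefficients of a key region against those of its assumed anatomical or functional neighbours at every scale and concatenating the results; the function name, key region, and neighbour indices are hypothetical.

```python
import numpy as np

def gswm_sift_like_feature(coeffs, key_region, neighbors):
    """Concatenate coefficient differences (information gradients) between a key region
    and each adjacent region, computed at every scale.

    coeffs: array of shape (n_scales, n_regions) of graph wavelet coefficients.
    """
    diffs = coeffs[:, neighbors] - coeffs[:, [key_region]]   # gradient to each neighbour at each scale
    return diffs.ravel()                                      # fuse all scales into one feature vector

# Toy usage with coefficients like those from the previous sketch
coeffs = np.random.default_rng(1).standard_normal((4, 90))
feature = gswm_sift_like_feature(coeffs, key_region=10, neighbors=[3, 11, 42])
print(feature.shape)                                          # (4 scales * 3 neighbours,) = (12,)
```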

2. A graph-convolution-based feature fusion framework (Graph Signal Wavelet Multiscale-Convolution, GSWM-CONV) is proposed to fuse multiscale brain features. Unlike the GSWM-SIFT framework, after the multiscale graph signal wavelets are obtained, GSWM-CONV fuses the multiscale features by extracting local information sums around key brain regions. Once the key regions are located, the weighted sum of coefficients over each key region and its anatomically or functionally adjacent regions is computed, fusing the multiscale information into features that are applied to the HCP dataset for brain state classification and behavioral index prediction. The results show that both multiscale features can effectively distinguish different brain states, but the features based on functional adjacency generalize better across classification methods. Moreover, for behavioral data the functional-adjacency multiscale features yield better regression predictions. The distribution of the multiscale features over the brain shows that they capture the patterns of information change of key regions across multiple scales, which supports a deeper understanding of the brain's cognitive functions.
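Analogously, a GSWM-CONV-style feature can be read as a weighted sum of coefficients over a key region and its neighbourhood at each scale. The sketch below is an assumed minimal version using uniform placeholder weights; the actual weighting scheme used in the thesis is not specified here.

```python
import numpy as np

def gswm_conv_like_feature(coeffs, key_region, neighbors, weights=None):
    """Weighted sum of wavelet coefficients over a key region and its neighbourhood,
    computed independently at every scale (one fused value per scale).

    coeffs: array of shape (n_scales, n_regions) of graph wavelet coefficients.
    """
    idx = [key_region] + list(neighbors)
    if weights is None:
        weights = np.full(len(idx), 1.0 / len(idx))   # uniform weights as a placeholder
    return coeffs[:, idx] @ weights                    # shape (n_scales,)

# Toy usage: one fused value per scale for a key region and three assumed neighbours
coeffs = np.random.default_rng(2).standard_normal((4, 90))
feature = gswm_conv_like_feature(coeffs, key_region=10, neighbors=[3, 11, 42])
print(feature)                                         # four values, one per scale
```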

Both feature fusion frameworks use the graph signal wavelet transform to extract multiscale information based on the brain's irregular structure, and construct multiscale features from, respectively, local differences and local sums of the multiscale wavelet coefficients. The GSWM-SIFT framework captures the local multiscale information gradients of key brain regions: it extracts the differences in information between key regions and their neighboring regions at multiple scales and characterizes how the information changes at each scale; within this framework, fusion based on anatomical adjacency yields the better-performing multiscale features. The GSWM-CONV framework fuses local information sums of the brain at multiple scales, locating where information concentrates at each scale and how its intensity changes, which makes it easier to understand the distribution of information within each functional system of the brain; within this framework, fusion based on functional adjacency performs better. In summary, the two multiscale feature fusion frameworks proposed in this study both offer a new perspective for studying the brain's cognitive functions at multiple scales.

English Abstract:

With the development of cognitive neuroscience, the spatial multiscale characteristics of the brain have been demonstrated and applied to the interpretation of cognitive function and the prediction of disease. Many studies have used the concept of multiple scales to extract brain features for brain state decoding and disease diagnosis, and some progress has been made. However, the extraction and fusion of multiscale features still suffer from insufficient information extraction and from the inability to describe within-scale and between-scale information at the same time. In this study, the graph signal wavelet transform was introduced to extract multiscale information from the brain based on its irregular characteristics. Based on the Scale Invariant Feature Transform (SIFT) and on graph convolution, two multiscale feature fusion frameworks were proposed. Within these frameworks, multiscale feature fusion algorithms were realized based on anatomical adjacency and on functional adjacency. We applied the multiscale features to brain state classification and behavioral data regression prediction, and discussed the cognitive meaning of the multiscale features. The main contributions are as follows:

1. A SIFT-based multiscale feature fusion framework (Graph Signal Wavelet Multiscale-Scale Invariant Feature Transform, GSWM-SIFT) was proposed to realize multiscale information fusion for the brain. Using the graph signal wavelet transform, information at multiple scales was obtained, and the information gradients between brain regions at multiple scales were then extracted based on either the anatomical or the functional adjacency of the brain. Specifically, we first located the key regions of the brain, then calculated the information gradients between the key regions and their anatomically or functionally adjacent regions at multiple scales so as to integrate the multiscale information into features, and finally applied the multiscale features to the HCP (Human Connectome Project) dataset for brain state classification and behavioral index prediction. The results showed that, except when the linear SVM (Support Vector Machine) was used to classify dataset 2, the two multiscale features matched or exceeded the best performance of single-scale wavelet features in both classification and regression, which demonstrated the effectiveness of the multiscale features. In addition, the classification performance of GSWM-SIFT(I), based on anatomical adjacency, was better than that of GSWM-SIFT(II), based on functional adjacency. The distribution of the multiscale features over the brain showed that they capture the multiscale change patterns of the information gradients of key regions, which helps to study the information exchange between scales in the brain and promotes an in-depth interpretation of its cognitive functions.
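As a hedged illustration of how such fused features could feed the reported evaluation, the following scikit-learn sketch runs a linear SVM for brain state classification and a ridge regressor for behavioral index prediction on synthetic data; the feature matrix, labels, regressor choice, and cross-validation setup are assumptions, not the protocol used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical fused multiscale features: one row per scan, one column per feature
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 120))          # e.g. 200 scans, 120 fused multiscale features
y_state = rng.integers(0, 2, size=200)       # binary brain-state labels (e.g. two task conditions)
y_behav = rng.standard_normal(200)           # a continuous behavioural index

# Brain state classification with a linear SVM, as in the evaluation described above
clf_scores = cross_val_score(SVC(kernel="linear", C=1.0), X, y_state, cv=5)
print("classification accuracy:", clf_scores.mean())

# Behavioural index prediction with ridge regression (a stand-in regressor)
reg_scores = cross_val_score(Ridge(alpha=1.0), X, y_behav, cv=5, scoring="r2")
print("regression R^2:", reg_scores.mean())
```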

2. A graph-convolution-based multiscale fusion framework (Graph Signal Wavelet Multiscale-Convolution, GSWM-CONV) was proposed to realize multiscale feature fusion for the brain. Unlike the GSWM-SIFT framework, GSWM-CONV fused the multiscale features by extracting sums of local information around key regions. After the key regions of the brain were located, the weighted sums of coefficients over the key regions and their adjacent regions were calculated according to the anatomical or functional adjacency, fusing the multiscale information into features, which were then applied to the HCP dataset for brain state classification and behavioral index prediction. The results showed that both multiscale fusion features could effectively distinguish different brain states, but the features based on functional adjacency generalized better across classification methods. In addition, the prediction results on behavioral data showed that regression with the functional-adjacency multiscale features performed better. The distribution of the multiscale features over the brain showed that they extract the patterns of information change of key regions across multiple scales, which is helpful for a deeper understanding of brain cognitive function.

The two feature fusion frameworks used the graph signal wavelet transform to extract multiscale information based on the irregular characteristics of the brain, and constructed multiscale features from the local differences and local sums, respectively, of the multiscale wavelet coefficients. The GSWM-SIFT framework captured the information gradients of key brain regions at multiple scales, extracting the information differences between key regions and their adjacent regions and characterizing how the information changes at each scale; within this framework, the multiscale features obtained from anatomical adjacency performed better. The GSWM-CONV framework integrated sums of local information at multiple scales, locating where information concentrates and how its intensity changes across scales, which makes it easier to understand the distribution of information within each functional system of the brain; within this framework, fusion based on functional adjacency performed better. In summary, the two multiscale feature fusion frameworks proposed in this study provide a new perspective for the study of brain cognitive function at multiple scales.

Total References:

107

Call Number:

硕081002/20005

Open Access Date:

2021-06-15
