
深度学习在磁共振影像脑疾病诊断中的应用

朱健椿, 魏嘉昕, 毛浚彬, 刘坤, 何鸿宇, 刘锦

引用本文: 朱健椿, 魏嘉昕, 毛浚彬, 刘坤, 何鸿宇, 刘锦. 深度学习在磁共振影像脑疾病诊断中的应用[J]. 工程科学学报, 2024, 46(2): 306-316. DOI: 10.13374/j.issn2095-9389.2023.02.04.002
Citation: ZHU Jianchun, WEI Jiaxin, MAO Junbin, LIU Kun, HE Hongyu, LIU Jin. Applications of deep learning in magnetic resonance imaging–based diagnosis of brain diseases[J]. Chinese Journal of Engineering, 2024, 46(2): 306-316. DOI: 10.13374/j.issn2095-9389.2023.02.04.002


基金项目: 国家自然科学基金资助项目(62172444);湖南省自然科学基金资助项目(2022JJ30753);中南大学创新驱动计划资助项目(2023CXQD018)
    通信作者:

    刘锦: E-mail: liujin06@csu.edu.cn

  • 分类号: TP391

Applications of deep learning in magnetic resonance imaging–based diagnosis of brain diseases

  • 摘要:

由于脑疾病的发生会对社会产生严重危害,脑疾病诊断研究的重要性日益显著. 中国“脑计划”列入“十三五”规划与国务院《“健康中国2030”规划纲要》的印发,表明国家对脑疾病诊疗问题的高度重视. 磁共振影像具有高分辨率及非侵入性等优势,使其成为脑疾病研究与临床检查的主要技术手段,为脑疾病诊断提供丰富的数据基础. 深度学习由于其可拓展性与灵活性在各个领域得到广泛应用,展现出巨大的发展潜力. 本文针对深度学习在典型脑疾病诊断中的应用进行综述,结构组织如下:首先对深度学习在自闭症、精神分裂症、阿尔兹海默症三种典型脑疾病诊断上的应用进行了阐述;然后对用于三种脑疾病研究的数据集和已有的开源工具进行了汇总;最后对深度学习在磁共振影像脑疾病诊断应用中的局限性及未来发展方向进行总结与展望.

    Abstract:

As brain diseases can severely affect society, studies on the diagnosis of brain diseases are gaining importance. China is focused on counteracting the issues in brain disease diagnosis and treatment. Magnetic resonance imaging (MRI) has the advantages of high resolution and a noninvasive nature, making it a preferred technique for brain disease research and clinical examination and providing a rich data foundation for brain disease diagnosis. Deep learning is used in various fields due to its scalability and flexibility, and it has shown great potential for further development. Owing to recent developments, deep learning has made impressive achievements in various fields, such as computer vision and natural language processing, exhibiting great potential for brain disease diagnosis. Deep learning is being increasingly used for the diagnosis of brain disorders. We categorized studies reporting the use of deep learning for brain disease diagnosis by the type of disease to provide insights into the latest developments in this field. We cover the following aspects in this review. First, we reviewed and summarized the application of deep learning in the diagnosis of three typical brain disorders: autism spectrum disorder (ASD), schizophrenia (SZ), and Alzheimer’s disease (AD). Second, we reviewed commonly used datasets and available open-source tools for diagnosing these three brain disorders. Finally, we summarized the application of deep learning in the diagnosis of brain disorders and discussed its future directions. The review focused on the diagnosis of the aforementioned brain disorders. ASD is a neurodevelopmental disorder that occurs in early childhood. SZ is a psychiatric disorder that occurs in young adulthood. AD is a brain disorder that commonly occurs in old age. We illustrated the application of deep learning in the diagnosis of these brain disorders based on the characteristics of their different inputs. When MRI was used as the input source, convolutional neural networks were mostly used as backbone networks to design feature extraction methods. When working with data containing sequence information from many time points, recurrent neural networks were used to extract key information from the sequences. Apart from directly processing images as input, many studies extracted manual features, constructed graphs from manual features, and used graph neural networks for analysis. This approach yielded remarkable results. Moreover, our findings indicated that graph neural network–based analysis methods are being commonly used to diagnose brain disorders.

  • 脑疾病的发生往往会影响人们的日常生活,且由于大脑的工作机制十分复杂,人类很难清楚地认知脑疾病的成因,许多脑疾病的诊断依靠人类的主观判断[1],加重了脑疾病给人类所带来的风险. 随着生物技术、信息技术的不断突破,脑疾病研究迅猛发展,也越来越得到国家层面的重视:“中国脑计划”作为重大科技创新项目被列入“十三五”规划;中共中央、国务院发布的《“健康中国2030”规划纲要》和科技部发布的《关于支持建设新一代人工智能示范场景的通知》均明确指出我国要大力发展医疗大数据的应用体系建设,大规模利用人工智能解决包括脑疾病在内的常见疾病的诊疗需求. 在我国,包含脑疾病研究在内的脑科学研究已被列为重大科技项目之一. 磁共振影像可分为结构性磁共振影像和功能性磁共振影像:结构性磁共振影像包括T1磁共振影像、T2磁共振影像、弥散磁共振影像等;功能性磁共振影像包括静息态功能磁共振影像、任务态功能磁共振影像等. 磁共振影像可以从横断面、矢状面、冠状面等多方位成像,对大脑组织结构有较高的分辨率,能够获得大脑准确的细节与丰富的组织脉络特征[2],进而探索大脑结构组织以及功能连接的变化,因此磁共振影像成为许多脑疾病临床检查、预测诊断的主要手段,广泛应用于脑疾病诊断领域[3–5]. 随着人工智能技术的不断发展,其在脑疾病诊疗领域的影响不断加深,而深度学习是人工智能技术的一个重要代表. 深度学习是一种基于神经网络的算法,通过不断的非线性变换来自动学习复杂的特征,并且可以在大规模数据集上进行训练,这使其在图像识别、语音识别、自然语言处理等领域具有很大的优势.

    本文的框架分为以下三个部分:首先综述了深度学习在三种典型的脑疾病诊断中的应用,其次汇总了在三种疾病中常用的数据集与开源工具,最后进行了总结与展望.

    脑疾病会影响人们的生活,其出现的时期也各有不同. 在青少年时期易于出现自闭症这样的神经发育障碍性疾病;在中青年时期,精神分裂症这种精神障碍疾病容易被诊断出来;在老年时期,阿尔茨海默病这种退行性大脑疾病会影响到患者的正常生活. 本章汇总近年来深度学习在青少年、中青年、老年三个阶段代表性脑疾病上的应用.

    自闭症谱系障碍是一种常见的大脑神经发育障碍,患者主要特点是社交困难、重复行为、兴趣受限和认知问题[6]. 根据国际发病率估算,中国约有300万~500万的自闭症谱系障碍儿童[7]. 当前自闭症病因尚不明确,且诊断标准在不同国家与地区之间存在差异,这些因素使得自闭症的准确诊断仍是一个具有挑战性的问题.

    功能连接可以反映个体在认知和行为功能上的重要差异,在自闭症谱系障碍的预测中发挥着重要作用. 由于目前研究大多使用来自同一成像中心、单个模板的数据,忽略了多个模板间的互补信息,Huang等[8]提出了一种多模板多中心学习模型:基于不同的预定义模板,使用基于Pearson相关性的稀疏低秩表征为每个受试者构建多个功能连接脑网络,实现自闭症谱系障碍的自动诊断. 考虑到以往研究较少关注静息态功能磁共振影像全局网络结构随时间的演变,Wang等[9]提出了一种时间动态学习方法,可以同时挖掘全局网络结构的动态模式,并对每个时间戳的特定网络特征进行建模. 针对多站点采集以及不同预处理方法造成的数据差异导致现有方法识别性能较低的问题,Wang等[10]提出了一种连接组景观建模方法,挖掘跨站点一致的连接组景观,并提取功能连接网络表征用于自闭症谱系障碍的识别. 为了对重要脑区间的连接进行进一步研究,Li等[11]提出了一种图神经网络框架,用于挖掘与分类任务有关的区域和跨区域功能激活模式,并在图卷积层中设计了一种新的基于聚类的嵌入方法,解决了在所有节点上应用相同嵌入的局限性. 针对当前研究可能忽略非影像学信息和受试者之间的关系、难以识别与疾病相关的局部脑区和生物标志物的问题,Zhang等[12]提出了一种由局部到全局的图神经网络用于自动识别自闭症谱系障碍:通过局部感兴趣区域图神经网络生成的特征嵌入学习多个受试者之间的关系,并通过自适应权重聚合块生成每个受试者的多尺度特征嵌入. 由于将不同模态的数据互补地整合到一个统一模型以提高自闭症的诊断效果具有挑战性,Huang和Chung[13]提出了一种新的图卷积学习框架,构建了一个具有变分边的自适应人群图模型,以补充基于人群的疾病预测所需的多模态数据. 本节在表1中展示近三年自闭症谱系障碍诊断的研究进展.

    表  1  基于深度学习方法的自闭症诊断概述
    Table  1.  Overview of using deep learning-based methods to diagnose autism spectrum disorder
    Reference | Method | Database | Performance criteria
    [8] | Adaptive learning | ABIDE I: NYU: 73 ASD vs 98 HC, UCLA_1: 28 ASD vs 27 HC, UM_1: 36 ASD vs 46 HC, Yale: 22 ASD vs 26 HC | NYU: Acc: 77.63%, AUC: 77.67%; UCLA_1: Acc: 82.73%, AUC: 78.67%; UM_1: Acc: 78.11%, AUC: 75.44%; YALE: Acc: 89.13%, AUC: 87.33%
    [9] | Dynamics learning | ABIDE: 403 ASD vs 468 HC | Acc: 72.67%, AUC: 77.26%
    [10] | CLM | ABIDE: NYU: 71 ASD vs 93 HC, UM: 48 ASD vs 65 HC, UCLA: 36 ASD vs 38 HC | NYU: Acc: 81.25%; UM: Acc: 80%; UCLA: Acc: 76.19%
    [11] | GNN | Biopoint Dataset: 115 subjects | Acc: 79.80%
    [12] | GNN | ABIDE: 403 ASD vs 468 HC | Acc: 81.75%, AUC: 85.22%
    [13] | GCN | ABIDE: 468 ASD vs 403 HC | Acc: 82.20%, AUC: 84.95%
    [14] | CNN | ABIDE I: 500 ASD vs 500 HC | Acc: 73%
    [15] | CNN | ABIDE I+II: 620 ASD vs 542 HC | Acc: 64%, F1: 66%
    [16] | CNN | NDAR: 33 ASD vs 33 HC | Acc: 77.2%, AUC: 77.3%
    [17] | GCN | ABIDE: 485 ASD vs 544 HC | Acc: 66.7%, AUC: 66.3%
    [18] | CNN | ABIDE I: 463 ASD vs 471 HC; ABIDE II: 410 ASD vs 382 HC | ABIDE I: Acc: 68.89%; ABIDE II: Acc: 68.20%
    [19] | Domain adaptation | ABIDE: 505 ASD vs 530 HC | Acc: 73%, AUC: 78%
    [20] | Clustering | ABIDE: 280 ASD vs 329 HC | Acc: 68.42%, AUC: 69.31%
    [21] | GNN | ABIDE I: 481 ASD vs 526 HC | Acc: 74.7%
    [22] | GCN | ABIDE: 485 ASD vs 544 HC | Acc: 63.7%, AUC: 63.6%
    [23] | GNN | ABIDE: 403 ASD vs 468 HC | Acc: 89.77%, AUC: 89.81%
    Note: ABIDE=Autism Brain Imaging Data Exchange; NDAR=National Database for Autism Research; ASD=Autism spectrum disorder; HC=Healthy control; NYU=NYU Langone Medical Center; UCLA=University of California, Los Angeles; UM=University of Michigan; YALE=Yale Child Study Center; CLM=Connectome landscape modeling; CNN=Convolutional neural networks; GNN=Graph neural networks; GCN=Graph convolutional networks; Acc=Accuracy; AUC=Area under ROC curve.
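
    上述研究大多以脑区时间序列构建的功能连接网络作为输入,并常以图神经网络作为骨干进行分析. 下面给出一个最小示意(作者自拟的演示代码,并非上述任一文献的官方实现,其中时间点数、脑区数与阈值均为假设参数),展示如何用Pearson相关构建功能连接矩阵,并将其送入一个简化的单层图卷积分类器:

```python
# 最小示意: 由 ROI 时间序列构建功能连接矩阵, 再用简化的单层图卷积做 ASD/HC 二分类.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
time_series = rng.standard_normal((200, 90))          # 假设: 200 个时间点, 90 个脑区

fc = np.corrcoef(time_series.T)                       # (90, 90) Pearson 功能连接矩阵
adj = (np.abs(fc) > 0.3).astype(np.float32)           # 阈值化得到邻接矩阵(0.3 为假设阈值)
np.fill_diagonal(adj, 1.0)                            # 加自环

deg = adj.sum(axis=1)
d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
adj_norm = torch.tensor(d_inv_sqrt @ adj @ d_inv_sqrt, dtype=torch.float32)  # 对称归一化邻接矩阵

class SimpleGCN(nn.Module):
    """单层图卷积 + 全局平均池化 + 线性分类头."""
    def __init__(self, in_dim, hid_dim, n_cls=2):
        super().__init__()
        self.w = nn.Linear(in_dim, hid_dim)
        self.cls = nn.Linear(hid_dim, n_cls)

    def forward(self, a_norm, x):
        h = torch.relu(a_norm @ self.w(x))            # 图卷积: A_norm X W
        return self.cls(h.mean(dim=0))                # 图级表征 -> 分类 logits

x = torch.tensor(fc, dtype=torch.float32)             # 以每个脑区的连接谱作为节点特征
logits = SimpleGCN(in_dim=90, hid_dim=32)(adj_norm, x)
print(logits.shape)                                   # torch.Size([2])
```

    实际研究中还需结合具体脑区模板、阈值选择策略以及完整的训练与验证流程,此处仅用于说明从功能连接到图模型的基本数据流.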

    精神分裂症是一种复杂的伴有感知、行为等多方面障碍的精神疾病,会严重影响患者的日常生活[24]. 精神分裂症尚无明确的诊断标准,其诊断方法依赖于对具有明显精神症状的人的定性检查和患者的自述[25].

    当前对精神分裂症进行研究的方法大多只考虑了大脑静态网络的功能连通性,而没有考虑动态功能连通性,如何将静态功能连通性与动态功能连通性联合分析是一个挑战. Huang等[26]提出了一种使用两种类型的扩散连接来促进静态路径和动态路径之间信息传递的卷积神经网络,用于分析静态–动态功能脑网络. 由于大多数构建动态功能连接网络的方法不能很好地聚合脑拓扑结构和与脑区功能相关的变化信息,Zhu等[27]提出了一种利用块结构和稀疏局部结构对动态功能连接进行构造和表示的方法,并将其应用于脑部疾病的诊断. 当前研究较少将随时间动态变化的大脑活动状况与功能连接网络进行联合分析,Zhao等[28]提出了一种将卷积递归神经网络和深度神经网络相结合的混合深度学习框架,旨在同时提高分类精度和可解释性. 随着神经影像学的发展,人们对精神分裂症早期发病时大脑结构的改变进行了不同程度的探究. SupriyaPatro等[29]提出了一种轻量级三维卷积神经网络框架,能够从三维体积磁共振影像中提取空间和光谱特征,并使用带有集成策略的分类器进行分类,用于基于磁共振影像的精神分裂症诊断. 随着数据量的不断增长,训练和测试数据之间存在特征不匹配的问题;同时,数据采集时不同地点的人群、仪器以及采集协议之间的不统一进一步限制了算法的临床应用. 针对领域内适应以及领域间泛化的问题,Wang等[30]提出了一个领域适应框架,通过预训练的模型在两种范式中适应新的成像条件,以克服领域内适应和领域间泛化的问题. 考虑到较少有研究利用复值功能磁共振数据,而从复值功能磁共振影像数据衍生的空间源相位图噪声更小,对精神障碍引起的空间激活变化更敏感,Lin等[31]构建了一个带有两个卷积层的3D-CNN框架,以充分探索空间源相位图中3D结构和体素之间的关系. 通过前面的描述可以了解近年来精神分裂症诊断的研究方向,本节将其整理成表2以供查阅.

    表  2  基于深度学习方法的精神分裂症诊断概述
    Table  2.  Overview of using deep learning–based methods to diagnose schizophrenia
    References | Method | Database | Performance criteria
    [25] | CNN | COBRE: 60 SZ vs 71 HC | Acc: 82.42%
    [26] | CNN | In-House: 178 SZ vs 180 HC | Acc: 82.05%
    [27] | KDA | Department of Psychiatry, NBH: 24 SZ vs 21 HC | Acc: 91.33%, AUC: 90.95%
    [28] | C-RNN | In-House: 558 SZ vs 542 HC | Acc: 85.3%
    [29] | CNN | MCICShare, COBRE, and FBIRN Phase-II: 300 SZ vs 300 HC | Acc: 92.22%
    [30] | CNN | PHENOM: Penn: 96 SZ vs 131 HC, Munich: 145 SZ vs 157 HC, China: 66 SZ vs 76 HC | Penn: Acc: 73.12%; Munich: Acc: 64.22%; China: Acc: 78.94%
    [31] | CNN | 42 SZ vs 40 HC | Acc: 98.39%
    [32] | CNN | 335 SZ vs 380 ASD | Acc: 87%
    [33] | CNN | BrainGluSchi, COBRE, MCICShare, NMorphCH, and NUSDAST: 443 SZ vs 423 HC | AUC: 95.9%
    [34] | CNN | COBRE: 69 SZ vs 72 HC | Acc: 77.8%
    [35] | VAE | Kaggle: 40 SZ vs 46 HC | Acc: 84%
    [36] | CNN | FBIRN: 98 SZ vs 112 HC | Acc: 75.3%
    [37] | RNN | B-SNIP: 229 HC vs 176 SZ vs 140 BDP vs 129 SAD | Acc: 78.5%
    [38] | CNN | IMH: 148 SZ vs 76 HC | Acc: 81.02%, AUC: 84%
    [39] | GCN | Affiliated Brain Hospital of Guangzhou Medical University and the local community: 140 SZ vs 205 HC | Acc: 92.47%, AUC: 95.36%
    Note: SZ=Schizophrenia; HC=Healthy control; BDP=Bipolar disorder with psychosis; SAD=Schizoaffective disorder; COBRE=Center of Biomedical Research Excellence; MCIC=MIND Clinical Imaging Consortium; NUSDAST=Northwestern University Schizophrenia Data and Software Tool; NBH=Nanjing Brain Hospital; PHENOM=PHENOM consortium; FBIRN=Function biomedical informatics research network; B-SNIP=Bipolar-schizophrenia consortium on intermediate phenotypes; IMH=Institute of Mental Health; KDA=Kernel discriminant analysis; RNN=Recurrent neural network; C-RNN=Convolutional recurrent neural network; VAE=Variational auto-encoders.
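
    表2中多项工作涉及静态与动态功能连接的联合分析,其中动态功能连接常通过滑动窗口相关性来估计. 下面给出一个最小示意(演示代码,窗口长度与步长均为假设参数,并非任一文献的原始设定),展示动态与静态功能连接的计算方式:

```python
# 最小示意: 用滑动窗口从 ROI 时间序列估计动态功能连接, 并同时计算静态功能连接.
import numpy as np

def sliding_window_dfc(ts, win_len=40, step=5):
    """ts: (T, R) 的 ROI 时间序列; 返回 (窗口数, R, R) 的动态功能连接序列."""
    T, _ = ts.shape
    mats = []
    for start in range(0, T - win_len + 1, step):
        seg = ts[start:start + win_len]               # 截取一个时间窗
        mats.append(np.corrcoef(seg.T))               # 窗口内 Pearson 相关
    return np.stack(mats)

rng = np.random.default_rng(0)
ts = rng.standard_normal((150, 116))                  # 假设: 150 个时间点, 116 个脑区
dfc = sliding_window_dfc(ts)                          # 动态功能连接: (23, 116, 116)
static_fc = np.corrcoef(ts.T)                         # 静态功能连接: (116, 116)
print(dfc.shape, static_fc.shape)
```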

    阿尔兹海默症是一种神经退行性疾病,会逐渐破坏脑细胞,影响记忆、行为以及推理能力,并逐渐影响日常生活[40]. 据报道,每年新增病例约1000万;根据世界卫生组织报告,预计到2050年阿尔兹海默症患者将达到1.52亿[41],因此阿尔兹海默症的诊断具有重要的现实意义. 本文将分别从单模态数据和多模态数据两个方面进行概述.

    在当前利用功能磁共振影像数据的研究中,由于数据中存在噪声、受试者间存在异质性,且前人的方法专注于从单个功能连接网络进行分析,导致疾病诊断性能不佳,于是Gan等[42]研究了一种多图融合方法来探索两个功能连接网络之间的共同和互补信息,对静息态功能磁共振成像数据进行脑部疾病诊断. 现有的功能连接网络构建方法大多忽略了网络构建中的高阶网络特征,Jie等[43]定义了一种新的加权相关核来测量脑区间的相关性,通过数据驱动的方式学习加权因子以表征不同时间点的贡献;此外,他们构建了一个基于加权相关核的卷积神经网络框架,利用功能磁共振数据学习疾病诊断的分层特征. 仅从功能连接网络的角度来分析疾病可能会忽略非结构化的拓扑信息,而目前的图构建技术通常将分析限制在单一的空间尺度上,只关注感兴趣区域之间的成对关系,忽略了受试者之间的信息关联. 为解决此问题,Yao等[44]提出了一个多尺度三元组图卷积网络来分析大脑的功能和结构连通性,用以诊断阿尔兹海默症. 针对站点间数据的异质性问题,Guan等[45]为多站点磁共振一致性分析提出了一种注意力引导的深度域适应框架,并将其应用于多站点磁共振成像的脑疾病自动识别. 除了单独使用磁共振成像数据进行研究外,多模态数据可以提高疾病的诊断性能. 当前利用多模态数据的研究大多采用简单的策略来联合分析不同来源的特征,效果并不令人满意. 针对这类问题,Ko等[46]提出了一种新的深度生成和判别学习框架,联合分析表型和基因型数据,用于脑疾病诊断和认知评分预测. 在将不同模态数据处理为功能连接网络的方法中,有构建静态功能连接网络和动态功能连接网络两种方式,但目前动态功能连接网络的建模方法大多采用滑动窗口的方式提取动态交互信息,其性能对窗口参数异常敏感[47–48]. 由于很少有研究能够提供具有足够说服力的窗口参数最佳组合,基于滑动窗口相关性的分析方法可能并非捕获大脑活动时间变化信息的最佳方法,因此Li等[49]提出了一种新的基于静息态功能磁共振成像和弥散张量成像数据的自适应动态功能连接估计模型,并进一步提出了一种深度时空特征融合方法,以实现更全面的多域表示. 目前大多数基于图的方法使用单一模态数据手动定义图,再加入其他模态信息后进行图表征学习,导致模态之间复杂的相关性被忽略. 为解决这个问题,Zheng等[23]提出了模态感知表示学习,利用模态之间的相关性和互补性来聚合每个模态的特征;同时设计了一种轻量级自适应图学习方法,为下游任务构建最优图结构用于疾病预测. 本文将在表3中展示基于深度学习的阿尔兹海默症诊断的研究近况.

    表  3  基于深度学习方法的阿尔兹海默症诊断概述
    Table  3.  An overview of the application of deep learning–based methods to diagnose Alzheimer’s disease
    References | Method | Database | Performance criteria
    [43] | CNN | ADNI: 48 NC vs 50 eMCI vs 45 lMCI vs 31 AD | eMCI vs HC: Acc: 84.6%; AD vs HC: Acc: 88.0%; AD vs lMCI vs eMCI vs HC: Acc: 57.0%
    [49] | AE | ADNI: 37 NC vs 36 MCI | Acc: 87.7%, AUC: 88.9%
    [50] | Adaptive sparse learning | ADNI: 220 NC vs 192 AD vs 402 MCI, 402 MCI vs 146 lMCI vs 256 sMCI | NC vs AD vs MCI: Acc: 77.48%; NC vs AD vs lMCI vs sMCI: Acc: 64.97%
    [42] | Multi-graph fusion | ADNI: 59 AD vs 48 NC | Acc: 88.84%, AUC: 90.22%
    [44] | GCN | ADNI: 191 MCI vs 179 NC | Acc: 86%, AUC: 90.3%
    [45] | CNN | ADNI1: 205 AD vs 231 NC vs 165 pMCI vs 147 sMCI; ADNI2: 162 AD vs 205 NC vs 88 pMCI vs 253 sMCI; ADNI3: 60 AD vs 329 NC vs 178 MCI; AIBL: 71 AD vs 447 NC vs 11 pMCI vs 20 sMCI | AD vs NC: Acc: 93.57%, AUC: 94.98%
    [51] | GCN | ADNI: 116 NC vs 98 MCI | Acc: 92.7%
    [52] | GRU | ADNI: 164 AD vs 346 MCI vs 198 NC | Acc: 70.9%
    [53] | GCN | ADNI: 44 SMC vs 44 eMCI vs 38 lMCI vs 44 NC | NC vs SMC: Acc: 84.9%; NC vs MCI: Acc: 85.2%; NC vs lMCI: Acc: 89.0%; SMC vs eMCI: Acc: 88.6%; SMC vs lMCI: Acc: 87.8%; eMCI vs lMCI: Acc: 85.5%
    [54] | Self-expressive network | ADNI: 160 AD vs 82 SMC vs 273 eMCI vs 187 lMCI vs 211 NC | AD vs NC: Acc: 93.76%, AUC: 95%; eMCI vs lMCI: Acc: 73.85%, AUC: 70%
    [46] | GAN | ADNI: 211 NC vs 350 MCI vs 173 AD | NC vs AD: AUC: 92.31%; NC vs MCI: AUC: 69.73%; sMCI vs pMCI: AUC: 73.51%; NC vs MCI vs AD: AUC: 71.33%; NC vs sMCI vs pMCI vs AD: AUC: 69.31%
    [23] | GNN | ADNI: 211 NC vs 275 sMCI vs 45 pMCI vs 72 AD | AD vs sMCI vs NC: Acc: 92.31%, AUC: 93.91%; sMCI vs pMCI: Acc: 92.30%, AUC: 92.38%
    [55] | Transfer learning | ADNI: 85 AD vs 185 MCI vs 90 NC | Acc: 90.14%, AUC: 96%
    [56] | GCN | ADNI2, ADNI3, In-House: 163 NC vs 44 SMC vs 86 eMCI vs 166 lMCI | NC vs SMC: Acc: 93.2%; NC vs eMCI: Acc: 91.1%; NC vs lMCI: Acc: 94.2%; SMC vs eMCI: Acc: 91.5%; SMC vs lMCI: Acc: 95.7%; eMCI vs lMCI: Acc: 92.4%
    Note: ADNI=Alzheimer's Disease Neuroimaging Initiative; AIBL=Australian Imaging Biomarkers and Lifestyle Study of Aging database; AD=Alzheimer's Disease; NC=Normal control; MCI=Mild cognitive impairment; eMCI=Early mild cognitive impairment; lMCI=Late mild cognitive impairment; pMCI=Progressive MCI; sMCI=Stable MCI; SMC=Significant memory concern; AE=Auto encoder; GRU=Gate recurrent unit; GAN=Generative adversarial network.
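
    以文献[43]中“加权相关核”的思想为例,其核心是在计算脑区间相关性时为不同时间点学习权重,从而以数据驱动的方式刻画各时间点的贡献. 下面给出一个概念性示意(作者自拟的简化实现,权重的参数化方式与训练流程均为假设,并非原文方法):

```python
# 概念性示意: 带可学习时间点权重的加权相关矩阵计算.
import torch
import torch.nn as nn

class WeightedCorrelation(nn.Module):
    """对各时间点学习非负权重, 以加权均值/协方差计算脑区间的加权相关矩阵."""
    def __init__(self, n_timepoints):
        super().__init__()
        self.logit_w = nn.Parameter(torch.zeros(n_timepoints))

    def forward(self, ts):                             # ts: (T, R) 的 ROI 时间序列
        w = torch.softmax(self.logit_w, dim=0)         # 归一化的时间点权重
        mean = (w[:, None] * ts).sum(dim=0, keepdim=True)
        centered = ts - mean
        cov = centered.T @ (w[:, None] * centered)     # 加权协方差矩阵 (R, R)
        std = torch.sqrt(torch.clamp(torch.diag(cov), min=1e-8))
        return cov / (std[:, None] * std[None, :])     # 归一化为加权相关矩阵

ts = torch.randn(130, 90)                              # 假设: 130 个时间点, 90 个脑区
corr = WeightedCorrelation(n_timepoints=130)(ts)       # 权重可与下游网络一起端到端训练
print(corr.shape)                                      # torch.Size([90, 90])
```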

    在脑疾病诊断的研究中使用了大量的数据集,常见的数据集如表4所示.

    表  4  公开数据集
    Table  4.  Open databases
    Database | Disease | Link | Modality | Access
    ABIDE | ASD | http://preprocessed-connectomes-project.org/abide/ | fMRI | Free
    SFARI | ASD | https://www.sfari.org/resource/sfari-base/ | Phenotypic, Genetic, Imaging Data | Register
    SPARK | ASD | https://www.sfari.org/resource/spark | Phenotypic Data, Genomic Data | Register
    Kaggle Autism Facial Dataset | ASD | https://drive.google.com/drive/folders/1XQU0pluL0m3TIlXqntano12d68peMb8A?usp=sharing | Facial Images | Free
    CANDI | SZ | https://www.nitrc.org/projects/cs_schizbull08/ | MRI | Register
    COBRE | SZ | http://fcon_1000.projects.nitrc.org/indi/retro/cobre.html | rs-fMRI, sMRI, Phenotypic Data | Register
    ADNI | AD | https://adni.loni.usc.edu/ | Clinical, Genetic, MRI, PET, Biospecimen | Register
    OASIS | AD | https://www.oasis-brains.org/ | T1w, T2w, FLAIR, ASL, DTI | Free
    AIBL | AD | https://aibl.csiro.au/ | PET, T1w, PDW, T2w, DWI, FLAIR, SWI | Register
    HCP | AD | https://www.humanconnectome.org/study/hcp-young-adult/data-releases | MRI, MEG | Register
    Note: PET=Positron emission tomography, T1w=T1-weighted, T2w=T2-weighted, FLAIR=Fluid attenuated inversion recovery, ASL=Arterial spin labeling, DTI=Diffusion tensor imaging, PDW=Proton density weighted, DWI=Diffusion weighted imaging, SWI=Susceptibility weighted imaging, MEG=Magnetoencephalography.

    深度学习在磁共振影像脑疾病诊断中的应用是一项复杂的工程,实现这些深度学习方法需要花费研究人员大量的时间. 为了方便研究人员继续深入研究,促进深度学习的应用,本文所收集的开源工具如表5所示. 同时在不同研究中会使用相应的数据预处理工具,一并在表5中进行展示.

    本文调研了近三年深度学习在磁共振影像脑疾病诊断上的应用. 深度学习在磁共振影像脑疾病诊断研究中的发展历史虽短,却已展现出强大的性能,这证明了深度学习有着巨大的发展潜力,但同时也存在众多挑战:深度学习对超参数的设置十分敏感,其性能可能受到不同超参数设置的巨大影响;深度学习模型需要海量数据进行训练,而多站点数据采集、不同的采集标准可能导致数据存在较大的异质性,进而影响深度模型的表征. 本文总结了深度学习在磁共振影像脑疾病诊断应用中的几个未来发展方向.

    (1) 小样本问题:在当前的研究中,由于隐私性等原因,可以公开获得的数据量十分稀少,而深度学习模型需要大量数据进行训练才能达到令人满意的性能,所以小样本问题十分突出. 针对此类问题,Ali等[63]提出了一种神经扩散模型来合成图像数据;Godasu等[64]提出了一种多阶段迁移学习方法,以缓解数据有限的问题;Dhinagar等[65]提出了一种站点不可知元学习方法,来解决训练数据少的问题. 扩散模型虽然可以通过生成新数据来增加数据量、提高模型性能,但存在生成速度慢、训练成本高、在医学图像领域研究尚未成熟等问题. 迁移学习可以先在已有的大规模数据集上进行预训练,再将训练好的模型应用到小样本数据集上,但效果并不理想. 元学习可以帮助模型在少量样本下快速学习,但需要多个不同且相关的任务支持;当任务间差异较大或任务数量过少时,模型可能过于依赖先前所学知识,从而导致对新任务的泛化性降低. 针对小样本问题,未来可以从生成对抗网络、元学习、迁移学习等思路出发设计更合适的方法,以获得更多有用的特征,从而更好地分析脑疾病的成因.
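
    以迁移学习为例,其基本流程是先在大规模源域数据上预训练骨干网络,再在小样本目标任务上冻结骨干、仅微调分类头. 下面给出一个最小示意(纯PyTorch玩具模型,网络结构与各项参数均为假设,不代表文献[64]的具体方法):

```python
# 最小示意: 迁移学习的"预训练骨干 + 冻结微调"流程.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """极简 3D CNN 骨干 + 线性分类头, 输入假设为 (N, 1, 32, 32, 32) 的脑影像块."""
    def __init__(self, n_cls=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(8, n_cls)

    def forward(self, x):
        return self.head(self.backbone(x))

model = Tiny3DCNN()
# 假设 model 已在大规模源域数据上完成预训练(此处省略), 现迁移到小样本目标任务:
for p in model.backbone.parameters():
    p.requires_grad = False                            # 冻结骨干网络
model.head = nn.Linear(8, 2)                           # 换上针对目标任务的新分类头
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

x = torch.randn(4, 1, 32, 32, 32)                      # 4 个目标域小样本
loss = nn.CrossEntropyLoss()(model(x), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()                                       # 只有分类头的参数被更新
print(float(loss))
```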

    (2) 多模态融合:在脑疾病诊断的研究中,很多研究者认为不同模态的数据包含有利于脑疾病诊断的信息,所以越来越多的研究者将多模态数据应用于脑疾病的分析中,但多模态数据如何有效融合是一个具有挑战性的问题. 当前研究中,多模态融合方法集中在数据融合、特征融合以及决策融合三个方面. Xu等[66]提出了一种无监督增强医学图像融合网络,以缓解常见的多模态数据融合方法导致信息失真、进而限制融合性能的问题;Liu等[67]提出了一种多模态多视图图表征知识嵌入框架来诊断轻度认知障碍患者,并提出了一种多步决策融合方法来提高诊断性能;Bi等[68]利用多模态数据的互补性进行表征融合以提高模型性能. 数据融合可以整合多个数据源的信息,但在面对异构数据时有很大的局限性;决策融合能够融合多个决策以降低单一决策的风险和错误,进而提升决策的可靠性,但不能直接利用多模态数据进行联合学习;特征融合可以将深度学习模型中不同层次的抽象表征合并在一起,提升特征的表达能力,进一步提高模型的鲁棒性,但不同的融合方法适用于不同的特征和任务. 因此,不同模态特征的有效融合方法是一个值得研究的方向.
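
    以特征级融合为例,常见做法是先用各模态独立的编码器提取表征,再将表征拼接后送入分类器. 下面给出一个最小示意(演示代码,模态类型与特征维度均为假设):

```python
# 最小示意: 两模态特征级融合(编码 -> 拼接 -> 分类).
import torch
import torch.nn as nn

class LateFeatureFusion(nn.Module):
    """两个模态各自编码后在特征层拼接, 再进行分类."""
    def __init__(self, dim_fc=4005, dim_smri=90, hid=64, n_cls=2):
        super().__init__()
        self.enc_fc = nn.Sequential(nn.Linear(dim_fc, hid), nn.ReLU())      # 模态一编码器
        self.enc_smri = nn.Sequential(nn.Linear(dim_smri, hid), nn.ReLU())  # 模态二编码器
        self.cls = nn.Linear(2 * hid, n_cls)           # 拼接后的联合表征 -> 分类

    def forward(self, x_fc, x_smri):
        z = torch.cat([self.enc_fc(x_fc), self.enc_smri(x_smri)], dim=1)
        return self.cls(z)

x_fc = torch.randn(8, 4005)                            # 假设: 功能连接矩阵上三角展开 (90*89/2=4005)
x_smri = torch.randn(8, 90)                            # 假设: 90 个脑区的形态学特征(如皮层厚度)
logits = LateFeatureFusion()(x_fc, x_smri)
print(logits.shape)                                    # torch.Size([8, 2])
```

    在此基础上,可将简单拼接替换为加权融合、注意力融合等更复杂的策略,以适配不同特征与任务.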

    (3) 可解释性:传统机器学习算法的特征提取基于医学专家的先验知识,具有较好的可解释性,而深度学习通过多层叠加的非线性变换构成的复杂网络来获得原始数据的新表征,这种深度变换导致模型缺乏可解释性. Eslami等[69]提出了一种结合支持向量机与深度学习的混合方法,用于解释基于功能磁共振成像的自闭症谱系障碍检测;Nigri等[70]提出了一种专门为大脑扫描任务设计的可解释方法,该方法描绘了大脑中最能区分阿尔兹海默症的区域,以临床医生可以理解的方式为模型的决策提供了可解释性;Shojaei等[71]将基于遗传算法的遮挡图方法与一组基于反向传播的可解释性方法相结合,为阿尔兹海默症患者找到了一个具有可解释性的脑掩模. 因为医生需要对模型的输出结果进行验证和解释,以便更好地进行诊断和治疗,所以可解释性是一个非常重要的问题. 目前本领域内的可解释性研究主要包括模型的可解释性、数据的可解释性以及结果的可视化解释等,但仍存在一些不足:模型的复杂性导致其决策过程难以解释;在某些情况下,模型的输出结果可能受到干扰因素的影响而存在不确定性,这使可解释性分析变得更加复杂;同时,本领域内也缺乏标准的解释方法. 在未来的研究中,需要进一步探索如何提高磁共振影像脑疾病诊断领域的可解释性,并加强对模型输出结果的验证和解释;深度学习在该领域的可解释性是一个具有探索价值的方向,有助于更好地将深度学习方法应用于临床实践.
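
    以遮挡图(occlusion map)为例,其思路是依次遮挡输入影像的局部区域,观察目标类别预测概率的下降幅度,下降越大说明该区域对模型决策越重要. 下面给出一个最小示意(玩具分类器与随机输入,仅说明计算流程,与文献[71]中基于遗传算法的搜索无关):

```python
# 最小示意: 遮挡图计算 -- 逐块遮挡输入, 记录目标类别概率的下降量.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))   # 玩具分类器, 代替训练好的诊断模型
model.eval()
image = torch.randn(1, 1, 64, 64)                      # 假设的一张 2D 脑影像切片
target, patch, stride = 1, 8, 8

with torch.no_grad():
    base = torch.softmax(model(image), dim=1)[0, target]      # 未遮挡时的目标类别概率
    heatmap = torch.zeros(64 // stride, 64 // stride)
    for i in range(0, 64, stride):
        for j in range(0, 64, stride):
            occluded = image.clone()
            occluded[..., i:i + patch, j:j + patch] = 0        # 遮挡一个局部小块
            prob = torch.softmax(model(occluded), dim=1)[0, target]
            heatmap[i // stride, j // stride] = base - prob    # 概率下降量作为重要性得分
print(heatmap.shape)                                   # torch.Size([8, 8])
```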

  • [1]

    Quaak M, van de Mortel L, Thomas R M, et al. Deep learning applications for the classification of psychiatric disorders using neuroimaging data: Systematic review and meta-analysis. NeuroImage, 2021, 30: 102584 doi: 10.1016/j.nicl.2021.102584

    [2]

    Yi P, Jin L, Xu T, et al. Hippocampal segmentation in brain MRI images using machine learning methods: A survey. Chin J Electron, 2021, 30(5): 793 doi: 10.1049/cje.2021.06.002

    [3]

    Feng W B, Liu G Y, Zeng K L, et al. A review of methods for classification and recognition of ASD using fMRI data. J Neurosci Meth, 2022, 368: 109456 doi: 10.1016/j.jneumeth.2021.109456

    [4]

    Liu J, Wang J X, Tang Z J, et al. Improving Alzheimer’s disease classification by combining multiple measures. IEEE/ACM Trans Comput Biol Bioinform, 2018, 15(5): 1649 doi: 10.1109/TCBB.2017.2731849

    [5]

    Liu J, Pan Y, Li M, et al. Applications of deep learning to MRI images: A survey. Big Data Min Anal, 2018, 1(1): 1 doi: 10.26599/BDMA.2018.9020001

    [6]

    Rapin I, Tuchman R F. Autism: Definition, neurobiology, screening, diagnosis. Pediatr Clin N Am, 2008, 55(5): 1129 doi: 10.1016/j.pcl.2008.07.005

    [7]

    Zhou R Y, Ma B X, Wang J J. Difficulties in the diagnosis and treatment of children with autism spectrum disorder in China. J Autism Dev Disord, 2022, 52(2): 959 doi: 10.1007/s10803-021-04997-8

    [8]

    Huang F L, Tan E L, Yang P, et al. Self-weighted adaptive structure learning for ASD diagnosis via multi-template multi-center representation. Med Image Anal, 2020, 63: 101662 doi: 10.1016/j.media.2020.101662

    [9]

    Wang M L, Huang J S, Liu M X, et al. Modeling dynamic characteristics of brain functional connectivity networks using resting-state functional MRI. Med Image Anal, 2021, 71: 102063 doi: 10.1016/j.media.2021.102063

    [10]

    Wang M L, Zhang D Q, Huang J S, et al. Consistent connectome landscape mining for cross-site brain disease identification using functional MRI. Med Image Anal, 2022, 82: 102591 doi: 10.1016/j.media.2022.102591

    [11]

    Li X X, Zhou Y, Dvornek N, et al. BrainGNN: Interpretable brain graph neural network for fMRI analysis. Med Image Anal, 2021, 74: 102233 doi: 10.1016/j.media.2021.102233

    [12]

    Zhang H, Song R, Wang L P, et al. Classification of brain disorders in rs-fMRI via local-to-global graph neural networks. IEEE Trans Med Imaging, 2023, 42(2): 444 doi: 10.1109/TMI.2022.3219260

    [13]

    Huang Y X, Chung A C S. Disease prediction with edge-variational graph convolutional networks. Med Image Anal, 2022, 77: 102375 doi: 10.1016/j.media.2022.102375

    [14]

    Shahamat H, Saniee Abadeh M. Brain MRI analysis using a deep learning based evolutionary approach. Neural Netw, 2020, 126: 218 doi: 10.1016/j.neunet.2020.03.017

    [15]

    Thomas R M, Gallo S, Cerliani L, et al. Classifying autism spectrum disorder using the temporal statistics of resting-state functional MRI data with 3D convolutional neural networks. Front Psychiatry, 2020, 11: 440

    [16]

    Haweel R, Shalaby A, Mahmoud A, et al. A novel dwt-based discriminant features extraction from task-based fmri: An asd diagnosis study using cnn // 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). Nice, 2021: 196

    [17]

    Peng L, Wang N, Dvornek N, et al. FedNI: Federated graph learning with network inpainting for population-based disease prediction. IEEE Trans Med Imag, 2023, 42(7): 2032 doi: 10.1109/TMI.2022.3188728

    [18]

    Ji J Z, Zhang Y Q. Functional brain network classification based on deep graph hashing learning. IEEE Trans Med Imag, 2022, 41(10): 2891 doi: 10.1109/TMI.2022.3173428

    [19]

    Kunda M, Zhou S, Gong G L, et al. Improving multi-site autism classification via site-dependence minimization and second-order functional connectivity. IEEE Trans Med Imag, 2023, 42(1): 55 doi: 10.1109/TMI.2022.3203899

    [20]

    Wang N, Yao D R, Ma L Z, et al. Multi-site clustering and nested feature extraction for identifying autism spectrum disorder with resting-state fMRI. Med Image Anal, 2022, 75: 102279 doi: 10.1016/j.media.2021.102279

    [21]

Chen Y Z, Yan J D, Jiang M X, et al. Adversarial learning based node-edge graph attention networks for autism spectrum disorder identification. IEEE Trans Neural Netw Learn Syst. doi: 10.1109/TNNLS.2022.3154755

    [22]

    Peng L, Wang N, Xu J, et al. GATE: Graph CCA for temporal self-supervised learning for label-efficient fMRI analysis. IEEE Trans Med Imag, 2023, 42(2): 391 doi: 10.1109/TMI.2022.3201974

    [23]

    Zheng S, Zhu Z F, Liu Z Z, et al. Multi-modal graph learning for disease prediction. IEEE Trans Med Imag, 2022, 41(9): 2207 doi: 10.1109/TMI.2022.3159264

    [24]

    Tost H, Meyer-Lindenberg A. Puzzling over schizophrenia: Schizophrenia, social environment and the brain. Nat Med, 2012, 18(2): 211 doi: 10.1038/nm.2671

    [25]

    Wang T, Bezerianos A, Cichocki A, et al. Multikernel capsule network for schizophrenia identification. IEEE Trans Cybern, 2022, 52(6): 4741 doi: 10.1109/TCYB.2020.3035282

    [26]

    Huang J S, Wang M L, Ju H R, et al. SD-CNN: A static-dynamic convolutional neural network for functional brain networks. Med Image Anal, 2023, 83: 102679 doi: 10.1016/j.media.2022.102679

    [27]

    Zhu Q, Xu R T, Wang R, et al. Stacked topological preserving dynamic brain networks representation and classification. IEEE Trans Med Imag, 2022, 41(11): 3473 doi: 10.1109/TMI.2022.3186797

    [28]

    Zhao M, Yan W Z, Luo N, et al. An attention-based hybrid deep learning framework integrating brain connectivity and activity of resting-state functional MRI data. Med Image Anal, 2022, 78: 102413 doi: 10.1016/j.media.2022.102413

    [29]

    SupriyaPatro P, Goel T, VaraPrasad S A, et al. Lightweight 3D convolutional neural network for schizophrenia diagnosis using MRI images and ensemble bagging classifier. Cogn Comput, 2022: 1

    [30]

    Wang R G, Chaudhari P, Davatzikos C. Embracing the disharmony in medical imaging: A Simple and effective framework for domain adaptation. Med Image Anal, 2022, 76: 102309 doi: 10.1016/j.media.2021.102309

    [31]

    Lin Q H, Niu Y W, Sui J, et al. SSPNet: An interpretable 3D-CNN for classification of schizophrenia using phase maps of resting-state complex-valued fMRI data. Med Image Anal, 2022, 79: 102430 doi: 10.1016/j.media.2022.102430

    [32]

    Du Y H, Li B, Hou Y L, et al. A deep learning fusion model for brain disorder classification: Application to distinguishing schizophrenia and autism spectrum disorder // Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics. New York, 2020: 1

    [33]

    Oh J, Oh B L, Lee K U, et al. Identifying schizophrenia using structural MRI with a deep learning algorithm. Front Psychiatry, 2020, 11: 16

    [34]

    Hashimoto Y, Ogata Y, Honda M, et al. Deep feature extraction for resting-state functional MRI by self-supervised learning and application to schizophrenia diagnosis. Front Neurosci, 2021, 15: 696853 doi: 10.3389/fnins.2021.696853

    [35]

    Huang Q, Qiao C, Jing K L, et al. Biomarkers identification for Schizophrenia via VAE and GSDAE-based data augmentation. Comput Biol Med, 2022, 146: 105603 doi: 10.1016/j.compbiomed.2022.105603

    [36]

    Yu H K, Florian T, Calhoun V, et al. Deep learning from imaging genetics for schizophrenia classification // 2022 IEEE International Conference on Image Processing (ICIP). Bordeaux, 2022: 3291

    [37]

    Yan W Z, Zhao M, Fu Z N, et al. Mapping relationships among schizophrenia, bipolar and schizoaffective disorders: A deep classification and clustering framework using fMRI time series. Schizophr Res, 2022, 245: 141 doi: 10.1016/j.schres.2021.02.007

    [38]

    Hu M J, Qian X, Liu S W, et al. Structural and diffusion MRI based schizophrenia classification using 2D pretrained and 3D naive Convolutional Neural Networks. Schizophr Res, 2022, 243: 330 doi: 10.1016/j.schres.2021.06.011

    [39]

    Chen X Y, Zhou J, Ke P F, et al. Classification of schizophrenia patients using a graph convolutional network: A combined functional MRI and connectomics analysis. Biomed Signal Process Contr, 2023, 80: 104293 doi: 10.1016/j.bspc.2022.104293

    [40]

    Liu J, Li M, Lan W, et al. Classification of Alzheimer’s disease using whole brain hierarchical network. IEEE/ACM Trans Comput Biol Bioinform, 2016, 15(2): 624

    [41]

GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: An analysis for the Global Burden of Disease Study 2019. Lancet Public Health, 2022, 7(2): e105 doi: 10.1016/S2468-2667(21)00249-8

    [42]

    Gan J Z, Peng Z W, Zhu X F, et al. Brain functional connectivity analysis based on multi-graph fusion. Med Image Anal, 2021, 71: 102057 doi: 10.1016/j.media.2021.102057

    [43]

    Jie B, Liu M X, Lian C F, et al. Designing weighted correlation kernels in convolutional neural networks for functional connectivity based brain disease diagnosis. Med Image Anal, 2020, 63: 101709 doi: 10.1016/j.media.2020.101709

    [44]

    Yao D R, Sui J, Wang M L, et al. A mutual multi-scale triplet graph convolutional network for classification of brain disorders using functional or structural connectivity. IEEE Trans Med Imag, 2021, 40(4): 1279 doi: 10.1109/TMI.2021.3051604

    [45]

    Guan H, Liu Y B, Yang E K, et al. Multi-site MRI harmonization via attention-guided deep domain adaptation for brain disorder identification. Med Image Anal, 2021, 71: 102076 doi: 10.1016/j.media.2021.102076

    [46]

    Ko W, Jung W, Jeon E, et al. A deep generative–discriminative learning for multimodal representation in imaging genetics. IEEE Trans Med Imag, 2022, 41(9): 2348 doi: 10.1109/TMI.2022.3162870

    [47]

    Hindriks R, Adhikari M H, Murayama Y, et al. Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI? NeuroImage, 2016, 127: 242

    [48]

    Shakil S, Lee C H, Keilholz S D. Evaluation of sliding window correlation performance for characterizing dynamic functional connectivity and brain states. NeuroImage, 2016, 133: 111 doi: 10.1016/j.neuroimage.2016.02.074

    [49]

    Li Y, Liu J Y, Tang Z Y, et al. Deep spatial-temporal feature fusion from adaptive dynamic functional connectivity for MCI identification. IEEE Trans Med Imag, 2020, 39(9): 2818 doi: 10.1109/TMI.2020.2976825

    [50]

    Lei B Y, Zhao Y J, Huang Z W, et al. Adaptive sparse learning using multi-template for neurodegenerative disease diagnosis. Med Image Anal, 2020, 61: 101632 doi: 10.1016/j.media.2019.101632

    [51]

    Zhang L, Wang L, Gao J, et al. Deep fusion of brain structure-function in mild cognitive impairment. Med Image Anal, 2021, 72: 102082 doi: 10.1016/j.media.2021.102082

    [52]

    Huang M Y, Lai H R, Yu Y W, et al. Deep-gated recurrent unit and diet network-based genome-wide association analysis for detecting the biomarkers of Alzheimer’s disease. Med Image Anal, 2021, 73: 102189 doi: 10.1016/j.media.2021.102189

    [53]

    Song X G, Zhou F, Frangi A F, et al. Graph convolution network with similarity awareness and adaptive calibration for disease-induced deterioration prediction. Med Image Anal, 2021, 69: 101947 doi: 10.1016/j.media.2020.101947

    [54]

    Wang M L, Shao W, Hao X K, et al. Identify complex imaging genetic patterns via fusion self-expressive network analysis. IEEE Trans Med Imag, 2021, 40(6): 1673 doi: 10.1109/TMI.2021.3063785

    [55]

    Han X M, Fei X Y, Wang J, et al. Doubly supervised transfer classifier for computer-aided diagnosis with imbalanced modalities. IEEE Trans Med Imag, 2022, 41(8): 2009 doi: 10.1109/TMI.2022.3152157

    [56]

    Song X G, Zhou F, Frangi A F, et al. Multicenter and multichannel pooling GCN for early AD diagnosis based on dual-modality fused brain network. IEEE Trans Med Imag, 2022, 42(2): 354

    [57]

Yan C G, Zang Y F. DPARSF: A MATLAB toolbox for “pipeline” data analysis of resting-state fMRI. Front Syst Neurosci, 2010, 4: 13

    [58]

    Song X W, Dong Z Y, Long X Y, et al. REST: A toolkit for resting-state functional magnetic resonance imaging data processing. PLoS One, 2011, 6(9): e25031 doi: 10.1371/journal.pone.0025031

    [59]

    Ashburner J. SPM: A history. NeuroImage, 2012, 62(2): 791 doi: 10.1016/j.neuroimage.2011.10.025

    [60]

    Fischl B. FreeSurfer. NeuroImage, 2012, 62(2): 774 doi: 10.1016/j.neuroimage.2012.01.021

    [61]

    Yan C G, Wang X D, Lu B. DPABISurf: Data processing & analysis for brain imaging on surface. Sci Bull, 2021, 66(24): 2453 doi: 10.1016/j.scib.2021.09.016

    [62]

    Wen J H, Varol E, Sotiras A, et al. Multi-scale semi-supervised clustering of brain images: Deriving disease subtypes. Med Image Anal, 2022, 75: 102304 doi: 10.1016/j.media.2021.102304

    [63]

    Ali H, Murad S, Shah Z. Spot the fake lungs: Generating synthetic medical images using neural diffusion models // Irish Conference on Artificial Intelligence and Cognitive Science. Cham, 2023: 32

    [64]

    Godasu R, El-Gayar O, Sutrave K. Multi-stage transfer learning system with light-weight architectures in medical image classification // 26th Americas Conference on Information Systems(AMCIS 2020). Virtual Conference, 2020: 1

    [65]

Dhinagar N J, Santhalingam V, Lawrence K E, et al. Few-shot classification of autism spectrum disorder using site-agnostic meta-learning and brain MRI [J/OL]. arXiv preprint (2023-03-14) [2023-08-22]. https://arxiv.org/abs/2303.08224

    [66]

    Xu H, Ma J Y. EMFusion: An unsupervised enhanced medical image fusion network. Inf Fusion, 2021, 76: 177 doi: 10.1016/j.inffus.2021.06.001

    [67]

Liu J, Du H, Guo R, et al. MMGK: Multimodality multiview graph representations and knowledge embedding for mild cognitive impairment diagnosis. IEEE Trans Comput Soc Syst. doi: 10.1109/TCSS.2022.3216483

    [68]

    Bi X A, Hu X, Xie Y M, et al. A novel CERNNE approach for predicting Parkinson’s Disease-associated genes and brain regions based on multimodal imaging genetics data. Med Image Anal, 2021, 67: 101830 doi: 10.1016/j.media.2020.101830

    [69]

    Eslami T, Raiker J S, Saeed F. Explainable and Scalable Machine Learning Algorithms for Detection of Autism Spectrum Disorder Using fMRI Data. San Diego, Academic Press, 2021

    [70]

    Nigri E, Ziviani N, Cappabianco F, et al. Explainable deep CNNs for MRI-based diagnosis of Alzheimer’s disease // 2020 International Joint Conference on Neural Networks (IJCNN). Glasgow, 2020: 1

    [71]

    Shojaei S, Saniee Abadeh M, Momeni Z. An evolutionary explainable deep learning approach for Alzheimer’s MRI classification. Expert Syst Appl, 2023, 220: 119709 doi: 10.1016/j.eswa.2023.119709


出版历程
  • 收稿日期:  2023-02-03
  • 网络出版日期:  2023-09-28
  • 刊出日期:  2024-02-24
