Abstract
Background
Alzheimer's disease (AD) is a neurodegenerative disorder. No drug or treatment currently cures AD, but early intervention can slow its progression; the early diagnosis of AD and of mild cognitive impairment (MCI) is therefore important. Structural magnetic resonance imaging (sMRI) is widely used to reveal structural changes in a subject's brain tissue. However, the structural changes in the brain associated with MCI are relatively subtle, which makes MCI conversion prediction an ongoing challenge. Moreover, many multimodal AD diagnostic models proposed in recent years ignore the potential relationships between multimodal information.
Objective
To solve these problems, we propose a multimodal fine-grained classification model based on deep metric learning for AD diagnosis (DML-MFCM), which can fully exploit the fine-grained feature information of sMRI and learn the potential relationships between multimodal feature information.
Methods
First, we propose a fine-grained feature extraction module that effectively captures fine-grained feature information in lesion areas. Then, we introduce a multimodal cross-attention module to learn the potential relationships between multimodal data. In addition, we design a hybrid loss function based on deep metric learning, which guides the model to learn discriminative feature representations across samples and improves its performance in disease diagnosis.
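The two core ideas above can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, the scaled dot-product form of the cross-attention, and the choice of a cross-entropy-plus-triplet hybrid loss are illustrative assumptions; the actual DML-MFCM modules and loss weighting are defined in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(a, b):
    # a: (n_a, d) features from one modality (e.g. sMRI patch features)
    # b: (n_b, d) features from another modality
    # Scaled dot-product affinities from modality A to modality B.
    scores = a @ b.T / np.sqrt(a.shape[-1])   # (n_a, n_b)
    weights = softmax(scores, axis=-1)        # each A token attends over B
    return weights @ b                        # B-informed A features, (n_a, d)

def hybrid_loss(logits, label, anchor, pos, neg, margin=1.0, lam=0.5):
    # Hypothetical hybrid objective: classification cross-entropy plus a
    # triplet metric-learning term that pulls same-class embeddings together
    # and pushes different-class embeddings apart.
    ce = -np.log(softmax(logits)[label])
    d_ap = np.linalg.norm(anchor - pos)       # anchor-positive distance
    d_an = np.linalg.norm(anchor - neg)       # anchor-negative distance
    triplet = max(0.0, d_ap - d_an + margin)
    return ce + lam * triplet
```

In this sketch the metric term is zero once negatives are farther than positives by the margin, so training is then driven by the classification term alone; a real model would weight and schedule the two terms as the paper's hybrid loss prescribes.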
Results
We extensively evaluated the proposed model on the ADNI and AIBL datasets. The accuracies (ACC) on the AD vs. NC, MCI vs. NC, and sMCI vs. pMCI tasks in the ADNI dataset are 98.75%, 95.88%, and 88.00%, respectively. The ACC on the AD vs. NC and MCI vs. NC tasks in the AIBL dataset are 94.33% and 91.67%, respectively.
Conclusions
The results demonstrate that our method has excellent performance in AD diagnosis.
