Abstract

Resting-state functional magnetic resonance imaging (rs-fMRI) helps characterize the regional neural activity of the human brain. Currently, supervised deep learning methods that rely on a large amount of fMRI data have shown good performance in diagnosing specific brain diseases. However, there are significant differences in the structure and function of brain connectivity networks among patients with different brain diseases. This makes it difficult for the model to achieve satisfactory diagnostic performance when facing new diseases with limited data, thus severely hindering their application in clinical practice. In this work, we propose a self-supervised learning framework based on graph contrastive learning for cross-dataset brain disorder diagnosis. Specifically, we develop a graph structure learner that adaptively characterizes general brain connectivity networks for various brain disorders. We further develop a multi-state brain network encoder that can effectively enhance the representation of brain networks with functional information related to different brain diseases. We finally evaluate our model on different brain disorders and demonstrate advantages compared to other state-of-the-art methods.

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/0719_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/0719_supp.pdf

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Che_Selfsupervised_MICCAI2024,
        author = { Chen, Dongdong and Yao, Linlin and Liu, Mengjun and Shen, Zhenrong and Hu, Yuqi and Song, Zhiyun and Wang, Qian and Zhang, Lichi},
        title = { { Self-supervised Learning with Adaptive Graph Structure and Function Representation For Cross-Dataset Brain Disorder Diagnosis } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15011},
        month = {October},
        pages = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposes a self-supervised brain network learning framework with contrastive learning to predict unseen brain diseases. In addition, it designs a graph structure learner to generate different views of brain networks and a multi-state brain network encoder to adaptively extract brain network representations for multiple brain disorders.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1. The paper is well organized and its key motivations are clearly described, making the work easy to follow.
    2. The framework of the proposed method appears reasonable and effective.
    3. The paper considers rich functional features in multi-state brain networks and proposes a multi-state brain network encoder to learn comprehensive representations, which is a new insight for brain disorder prediction.
    4. The proposed method obtains a clear improvement over other baselines, and the ablation study and qualitative analysis demonstrate the effectiveness of the model.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1. There are some unclear expressions in this paper. For example, what are the BOLD signals in Fig. 1, and how do they differ from the original ROI-based fMRI signals? The explanations of the matrices P and Hp are confusing.

    2. The paper selects only three contrastive baselines, which cannot fully support the effectiveness of the proposed method. Moreover, there are similar existing works, such as: [1] Tang, Haoteng, et al. “Contrastive brain network learning via hierarchical signed graph pooling model.” IEEE Transactions on Neural Networks and Learning Systems (2022). [2] Luo, Xuexiong, et al. “An Interpretable Brain Graph Contrastive Learning Framework for Brain Disorder Analysis.” Conference on Web Search and Data Mining (WSDM’24). 2024. [3] Yang, Yi, et al. “Data-efficient brain connectome analysis via multi-task meta-learning.” Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.
    3. Some parts of the paper lack detailed explanations. For example, how were the baseline experiments performed, given that some baselines are not cross-dataset training methods? How is the number of brain states defined, and what is the reasoning behind this setting?
    4. The method is only suitable for settings in which the training and target datasets share the same neuroimaging modality and brain network size.
    5. The adaptive brain network representation learning for multiple disorders is highlighted as an important contribution, but the method is not explained in more detail.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    No

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    1. The authors should add more related baselines for performance comparison. 2. The authors need to give more details about the multi-state brain network encoder. 3. The authors should give more explanation of the experimental settings (see weaknesses) and discuss the limitations of the work. 4. Further suggestions are given in the weaknesses above.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    There are some drawbacks to this paper in method details and experiment analysis.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • [Post rebuttal] Please justify your decision

    Thanks for the authors’ replies. I keep the initial score.



Review #2

  • Please describe the contribution of the paper

    The authors propose a self-supervised learning framework based on graph contrastive learning for cross-dataset brain disorder diagnosis. The graph structure learner included in the framework is designed to generate adjacency matrices, and the multi-state brain network encoder is designed to consider multiple brain states simultaneously. Furthermore, a contrastive learning mechanism is used to pre-train the framework on large datasets. The results demonstrate the effectiveness of the proposed method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Cross-dataset brain disorder diagnosis is of clinical potential, so the motivation of the study is attractive.
    2. Multiple brain states are considered, and the strategy may enrich the information extracted by the proposed brain network encoder.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) The section “Multi-State Brain Network Encoder”, which is of essential importance to the paper, is not clear enough (please see also the comments to Question 8). 2) The rationale behind each design choice is unclear. For instance, the authors force each row of the matrix P (representing the probability of allocating each feature dimension to different brain states) to be “close to a one-hot vector”. This practice is not well justified, as overlapping connections are common among different brain states (Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, Calhoun VD. Tracking whole-brain connectivity dynamics in the resting state. Cerebral cortex. 2014 Mar 1;24(3):663-76.).

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    It would be better if the authors released their code. At present, I cannot fully work out the “Multi-State Brain Network Encoder” based on the paper alone.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    1. Introduction: The second challenge is not straightforward enough (“How do we fully explore the characteristic information of brain networks to obtain the optimal representation for downstream disorder diagnosis tasks?”)
    2. Method: It’s not clear how Wb in Eq. (1) is obtained.
    3. Method: Though mathematically applicable, the functional significance of “the assignment probability of each dimension of functional features in distinct brain states” is difficult to understand.
    4. Method: ThetaL for calculating Hc is not explained.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    1) Cross-dataset brain disorder diagnosis is valuable for clinical practice, and considering multiple brain states is interesting; 2) It is possible for the authors to improve their section “Multi-State Brain Network Encoder” through providing strong reasons for their design.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    The authors have addressed my other concerns, except the one regarding the practice of forcing each row of P to be close to one-hot. Even though I still find this strategy questionable, this limited shortcoming does not outweigh the merits of the paper, so the paper is acceptable to me.



Review #3

  • Please describe the contribution of the paper

    This manuscript proposes a novel approach using self-supervision to enhance the generalizability and performance of brain disease classifiers based on resting-state functional magnetic resonance imaging. The authors employ a specific self-supervised learning technique that leverages unlabeled data to better capture the complex patterns of brain connectivity associated with various disorders. The methods were evaluated on two distinct datasets, ABIDE for autism spectrum disorders and ADHD for attention deficit hyperactivity disorder, demonstrating statistically significant improvements over both supervised and existing self-supervised baselines.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Strengths:

    • The idea is interesting and the use of self-supervised learning seems to boost the performance on detecting brain disorders.
    • The application of the proposed techniques on two distinct and well-recognized datasets—ABIDE for autism spectrum disorders and ADHD for attention deficit hyperactivity disorder—demonstrates the robustness and effectiveness of the method across different brain disorders.
    • The manuscript documents statistically significant enhancements in diagnostic performance compared to both supervised and other self-supervised methods.
    • The inclusion of ablation studies provides a clear understanding of the importance and impact of each component of the model.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Comments:

    • The manuscript notes that optimal results are achieved only when all proposed components are integrated, and omitting any component seems to result in worse performance than the baseline. Can the authors elaborate on this? Could any of the employed components be applied to [22] as well?
    • There is a need for clearer differentiation from prior work, especially since self-supervised learning in this context has been explored previously. (see contribution 1))
    • How do simple classifiers that operate directly on the ROIs’ BOLD signals perform? They could serve as a simple and informative baseline.
    • Can the localization of disease be quantified as well? The authors briefly mentioned the amygdala in ADHD. I was wondering whether such spatial correlations could be evaluated more systematically.
    • Adding connectivity network visualizations for healthy subjects could provide a useful benchmark and enrich the overall discussion by illustrating the contrast with disease-specific networks.
    • The paper could be improved by discussing the sensitivity of the model’s performance to the parameters (alpha, beta, and delta). The figure in the supplementary materials is difficult to read; could the authors provide a table instead?
    • While the introduction discusses the diagnostic challenges of rare diseases, these are not directly addressed in the experiments. Refocusing on challenges that the paper tackles, such as disease heterogeneity, might make the objectives clearer and more aligned with the presented results.
    • For readers not familiar with the key findings of references ([4,13]), including a concise summary within the manuscript would help make the discussion more self-contained and accessible.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    See above.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper presents interesting applications of self-supervised learning for disease detection using data derived from resting-state functional brain magnetic resonance imaging. The evaluation is comprehensive, and the authors provide statistical significance tests to prove meaningful improvements. However, the method seems to depend on all the introduced components to outperform related self-supervised approaches, and the omission of any component causes significant drops in performance. This might raise questions about the sensitivity of the performance with respect to the chosen hyper-parameters for optimization.

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    The authors addressed my concerns during the rebuttal.




Author Feedback

We thank all reviewers for their insightful comments. Our responses to the major concerns are itemized as follows. 1) Design rationale for the “Multi-state brain network encoder” (R1): Existing methods directly encode brain networks into a single feature vector, which entangles different brain functional characteristics. Thus, we design the encoder to separate distinct functional brain states and capture a high-level representation of brain networks.

2) Why force each row of P to be close to one-hot (R1)? P is a probability matrix in which each column corresponds to a specific brain state and each row represents the probability distribution of one feature dimension across the brain states. Each feature characterizes a specific property of the brain and should ideally be clustered into a single brain state. Thus, we apply the one-hot constraint in the feature space; it does not concern the overlap of brain connections among different brain states.
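The one-hot constraint on the rows of P is commonly implemented as an entropy penalty, which is minimized when each row concentrates on a single brain state. The sketch below is a hypothetical illustration of that idea, not the authors' exact loss; the function names and the way P is derived from HP via a softmax are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def one_hot_penalty(hp):
    """Mean row entropy of P = softmax(HP); near zero when rows are near one-hot.

    hp: (num_feature_dims, num_brain_states) embedding matrix HP.
    """
    p = softmax(hp, axis=1)                        # rows: feature dims, cols: brain states
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1) # entropy per feature dimension
    return entropy.mean()

# A row pushed toward one state incurs a smaller penalty than a uniform row.
sharp = np.array([[10.0, 0.0, 0.0]])
flat = np.array([[1.0, 1.0, 1.0]])
assert one_hot_penalty(sharp) < one_hot_penalty(flat)
```

Adding this term to the training objective pushes each feature dimension toward a single state, which matches the authors' stated intent that the constraint acts on the feature space rather than on brain connections.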

3) Unclear points in the method (e.g., what is the difference between BOLD signals and fMRI? Explain the matrices P and HP) (R4): Our method uses original fMRI data as input, which is voxel-based 4D (X×Y×Z×T) imaging data. After preprocessing, we obtain BOLD signals, which are ROI-based 2D (ROI_num×T) sequence data. We generate a probability embedding matrix HP, a hidden representation used for learnable clustering to obtain the probability matrix P. Each element of P represents the probability that a feature is assigned to a particular brain state.
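The 4D-to-2D preprocessing step described above is typically an atlas-based averaging of voxel time series into ROI time series. A minimal sketch of that standard step, assuming an integer-labeled atlas volume (function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def extract_roi_signals(fmri_4d, atlas):
    """Average the voxel time series within each atlas ROI.

    fmri_4d: (X, Y, Z, T) voxel-based fMRI volume
    atlas:   (X, Y, Z) integer labels, 0 = background, 1..R = ROIs
    returns: (R, T) ROI-based BOLD signal matrix (ROI_num x T)
    """
    n_rois = int(atlas.max())
    T = fmri_4d.shape[-1]
    signals = np.zeros((n_rois, T))
    for r in range(1, n_rois + 1):
        mask = atlas == r
        signals[r - 1] = fmri_4d[mask].mean(axis=0)  # mean over all voxels in ROI r
    return signals

# Toy example: 4x4x4 volume, 10 time points, 3 ROIs assigned slab-wise.
rng = np.random.default_rng(0)
vol = rng.standard_normal((4, 4, 4, 10))
atlas = np.zeros((4, 4, 4), dtype=int)
atlas[0], atlas[1], atlas[2] = 1, 2, 3    # slab 3 stays background
bold = extract_roi_signals(vol, atlas)
assert bold.shape == (3, 10)
```

The resulting (ROI_num × T) matrix is what the rebuttal calls the BOLD signals, i.e., the sequence data fed into the model after preprocessing.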

4) Unclear points in the experiments (e.g., how are non-cross-dataset methods evaluated? How is the number of brain states defined?) (R4): We train them from scratch on the target dataset and compare them with our w/o-pretext model to ensure fairness. The number of brain states is a hyper-parameter obtained through parameter tuning; the detailed parameter analysis is given in Appendix A.

5) Missing any of the components leads to poorer results (R5): Each loss term is specifically designed for its corresponding module, so they need to work together to be effective. Furthermore, compared to the baseline (i.e., the method without the proposed modules), individual ablation of each module shows improvements, and the modules can be applied to other methods to improve performance.

Our responses to the reviewers’ specific comments. R1: 1) Explain the second challenge: The challenge is how to improve the representational ability of the model to handle the complex functional characteristics of the brain network. 2) How to obtain W^b: It is a learnable weight obtained through model training. 3) The functional significance of P: P is used to compute the representation of each brain state. 4) Explain Theta^L: It is the learnable weight of the l-th GIN layer.
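For readers unfamiliar with GIN, the layer parameterized by Theta^l follows the standard Graph Isomorphism Network update h' = MLP((1 + eps) h + A h). The sketch below illustrates that update with a single linear map standing in for the MLP; it is a generic GIN illustration under assumed shapes, not the authors' encoder.

```python
import numpy as np

def gin_layer(h, adj, theta, eps=0.0):
    """One GIN-style update: sum-aggregate neighbors, then apply a learnable
    linear map (a stand-in for the MLP parameterized by Theta^l).

    h:     (N, d) node features      adj: (N, N) adjacency matrix
    theta: (d, d_out) learnable weights of this layer
    """
    agg = (1.0 + eps) * h + adj @ h    # GIN sum aggregation over neighbors
    return np.maximum(agg @ theta, 0)  # ReLU non-linearity

# Toy graph: 4 nodes on a ring, 5-dim features mapped to 8 dims.
rng = np.random.default_rng(1)
adj = np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1)
h = rng.standard_normal((4, 5))
theta = rng.standard_normal((5, 8))
out = gin_layer(h, adj, theta)
assert out.shape == (4, 8)
```

In the paper's setting, adj would be the learned brain connectivity matrix from the graph structure learner, and stacking such layers yields the node representations used downstream.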

R4: 1) Compare more baselines: We will consider them in future work, in line with the rebuttal guidelines. 2) The method only applies to the same neuroimaging modality and brain network size: Our work focuses on the challenges posed by various brain diseases and complex brain functions; multi-modality and multi-atlas tasks are out of scope. 3) Explain the adaptive brain network: The structure and representation of brain networks are continuously optimized during training to adapt to different brain diseases. 4) Discuss limitations: Our method is built on a pre-training and fine-tuning framework and thus relies on a large amount of unlabeled data to improve model performance.

R5: 1) Differentiation from prior work: We propose a learning-based contrastive framework that differs from traditional handcrafted methods. 2) Why not use simple classifiers as baselines? They are traditional methods that have already been outperformed by SOTA methods (e.g., [7,24]). 3) Can the localization of disease be quantified? Yes, we can use gradients to quantify it. 4) Summary of references [4,13]: Cognitive dysfunction is associated with disrupted global and local brain network connectivity.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    A majority of the reviewers were already leaning towards an accept or have modified their scores to recommend acceptance. The objections raised by Rev. 4 were in regard to requiring additional baselines, which is beyond the scope of the rebuttal. Beyond this contention, the responses seem to have clarified a majority of the concerns raised in the original round of reviews.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).




Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A


