Abstract
Recent developments in Graph Neural Networks~(GNNs) have shed light on understanding brain networks through innovative approaches. Despite these innovations, the significant costs associated with data collection and the challenges posed by data drift in real-world scenarios present substantial hurdles for models dependent on large datasets to capture brain activity features.
To address these issues, we introduce the Distributionally-Adaptive Variational Meta Learning (DAML) framework,
designed to equip the model with rapid adaptability to varying distributions by meta-learning-driven minimization of discrepancies between subject sets. Initially, we employ a graph encoder with a message-passing strategy to generate precise brain graph representations. Subsequently, we implement a distributionally-adaptive variational meta learning approach to functionally simulate data drift across subject sets, utilizing variational layers for parameterization and adaptive alignment methods to reduce discrepancies. Through comprehensive experiments on three real-world datasets with both few-shot and standard settings against various baselines, our DAML model demonstrates state-of-the-art performance across all metrics, underscoring its efficiency and potential with limited data.
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2429_paper.pdf
SharedIt Link: https://rdcu.be/dV54i
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72117-5_22
Supplementary Material: N/A
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Du_DistributionallyAdaptive_MICCAI2024,
author = { Du, Jing and Dong, Guangwei and Ma, Congbo and Xue, Shan and Wu, Jia and Yang, Jian and Beheshti, Amin and Sheng, Quan Z. and Giral, Alexis},
title = { { Distributionally-Adaptive Variational Meta Learning for Brain Graph Classification } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15010},
month = {October},
pages = {229--239}
}
Reviews
Review #1
- Please describe the contribution of the paper
• The paper proposes the Distributionally-Adaptive Variational Meta Learning (DAML) framework, designed to equip the model with rapid adaptability to varying distributions by meta learning-driven minimization of discrepancies between subject sets.
• DAML employs a graph encoder with a message-passing strategy to generate precise brain graph representations.
• Through comprehensive experiments on three real-world datasets with both few-shot and standard settings against various baselines, the DAML model demonstrates state-of-the-art performance across all metrics, underscoring its efficiency and potential within limited data.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
• Novel application of Neural Processes (NP) and meta-learning in brain graph classification: The paper explores the integration of NP and meta-learning techniques to address the challenges in brain graph classification, such as data drift and limited data availability. This innovative approach demonstrates the potential of applying these advanced machine learning techniques to the field of brain graph analysis, opening up new possibilities for future research.
• Clear and understandable methodology: The paper presents a clear and well-structured explanation of the proposed DAML framework. The mathematical formulations and equations used to describe the graph representation learning and variational meta learning components are easy to understand, making the methodology accessible to a broader audience. The clarity in the method section enables readers to grasp the key concepts and reproduce the work if needed.
• Addressing the critical problem of data drift: The paper tackles the important issue of data drift, which refers to the discrepancy between the training and real-world data distributions. In real-world applications, the data encountered during deployment often differs from the data used for training the model. By formulating the data drift problem as a functional discrepancy between distributions and proposing adaptive alignment techniques in the latent space, the DAML framework effectively addresses this challenge. The ability to adapt to varying data distributions is crucial for the practical applicability and generalizability of brain graph classification models.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
• Insufficient details on the experimental setup: The paper lacks important information about the experiments, particularly regarding the definition of the context set and target set during training. Without clear explanations of how these sets are constructed and used, it becomes difficult for readers to understand how the proposed DAML model leverages Neural Processes (NP) to address the data drift problem.
• Inadequate explanation of the few-shot learning setting: Although the paper mentions conducting experiments in a few-shot learning setting, it does not provide sufficient details on how this setting is implemented.
• Questionable effectiveness in addressing data drift: The paper states that the context set (C) belongs to the target set (T) and that they are from the same dataset. This raises concerns about the model’s ability to effectively solve the data drift problem. If both sets come from the same dataset, it is unclear how the model can capture and adapt to the discrepancies between the training and real-world data distributions. The paper does not provide a convincing explanation of how the proposed approach addresses data drift when the context and target sets are derived from the same source.
• Dataset limitation: The experiments are conducted on three datasets: ABIDE, HIV, and PPMI. However, the ABIDE dataset is known to be heterogeneous, which can introduce additional challenges and variability in the results. Moreover, the HIV and PPMI datasets have very limited sample sizes, which may not be representative of the true data distribution and can limit the generalizability of the findings. The paper does not adequately discuss the potential limitations and biases introduced by the dataset characteristics.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not provide sufficient information for reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
It would be best to provide the open-source code for reproduction purposes.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
• Experimental setup clarity: To enhance the reproducibility and interpretability of your work, please provide more details on the experimental setup, particularly regarding the definition of the context set and target set during training. Clearly explain how these sets are constructed and used within the DAML framework to address the data drift problem. This additional information will help readers better understand the novelty and effectiveness of your approach.
• Few-shot learning details: Since you mention conducting experiments in a few-shot learning setting, it is important to provide more details on how this setting is implemented. Explain how the few-shot learning scenario is simulated, including the number of labeled samples used for training and any specific techniques employed to handle limited data. This will allow readers to assess the model’s performance and generalizability in this challenging context.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper lacks detailed settings in the Experiments section.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
This paper proposes a distributionally-adaptive variational meta learning framework, designed to equip the model with rapid adaptability to varying distributions by meta learning-driven minimization of discrepancies between subject sets.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This paper employs a graph encoder to aggregate both node and edge information, generating comprehensive graph-level brain network representations.
- This paper regards data drift as the functional discrepancy between continuous functions, and encodes distributions using variational methods to reconstruct distributional functions before and after observation.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Sensitivity and specificity are important for binary classification. However, the authors did not mention them in the experimental results.
- The description of the variational distribution encoder is not clear. For example, the context set and the target set simulate different sets of experimental and real-world scenarios. How did the authors define these sets, and why does this work?
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- The experimental results do not include an analysis of sensitivity and specificity, hindering further assessment of the proposed method’s ability to discriminate between positive and negative samples, such as ASD and NC.
- The authors need to provide more detailed information on the Distributionally-Adaptive Variational Meta Learning module, particularly the Variational Distribution Encoder.
- Figure 2 presents two subjects from the HIV dataset. Although the authors demonstrate that DAML accurately classifies these cases, their analysis of the reasons is overly simplistic.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The authors propose a DAML framework to equip the model with rapid adaptability to varying distributions by meta learning-driven minimization of discrepancies between subject sets. The idea is interesting, and has the potential to improve the diagnostic performance of ASD or AD. However, the experimental results do not adequately support their claims, and a more detailed description of the VDE module and experimental analysis is required.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
This work proposes Distributionally-Adaptive Variational Meta Learning for brain graph classification, named DAML. This method outperforms existing benchmarks under standard and few-shot scenarios, providing a novel framework for brain graph classification.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The method encodes distributions using variational methods to reconstruct distributional functions before and after observation.
- The convergence speed is faster than some baselines.
- The classification performance of the proposed method outperforms all baselines on three clinical datasets.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- How do you tune the different terms in the loss function? Using a different weight for the adaptive alignment loss might yield a better result.
- There is no sensitivity analysis for hyperparameters.
- Fig. 2 lists two HIV subjects to analyze misclassification. What about the group-level result? What about the results on the other two datasets?
- Is it possible to add interpretability inside the model? Some baselines offer interpretability, but this method does not.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
There is no public link to the code.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
see above
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
More experiments are needed to show the superior and robust performance of the proposed method.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
Reviewer #1
Q1: In each batch, we form the context set (C) by selecting a subset of samples. The target set (T) includes C’s samples plus additional ones from the same batch. During training, we compute their Gaussian distribution parameters and measure the distributional divergence between C’s and T’s distributions. This metric is incorporated into the loss function. This process simulates differences between training and real-world data, enabling rapid adaptation of DAML. See Reviewer #2’s Q2 for more details.
Q2: Thanks for pointing it out. A small-data regime may be more suitable for our setting. We will correct it in the final version.
Q3: Although C and T originate from the same dataset, the small sample sizes and inherent randomness introduce statistical biases between them. By minimizing differences between C’s and T’s distributions, we force DAML to achieve fast adaptation among various distributions, thus addressing drifted statistical properties.
Q4: We note that the limitations raised by the reviewer impact all methods fairly. DAML and the comparison methods share the same datasets and preprocessing techniques. Moreover, to guarantee thorough coverage of all subjects, we employ five-fold cross-validation, affirming the fairness of our empirical study. Our analysis was conducted without favoring any dataset. Issues related to dataset size and generalizability are important but beyond this study’s scope.
Reviewer #2
Q1: While sensitivity and specificity metrics are not included, we did report the F1 score, which is the harmonic mean of precision and recall. The F1 score offers a balanced evaluation of these two metrics. Given that brain classification tasks often involve class imbalance, the F1 score is widely used in prior studies.
Q2: The Variational Distribution Encoder manages distributional discrepancies using a latent variable (z) that follows a Gaussian distribution. Data is first categorized into context and target sets to simulate experimental and real-world conditions.
Brain graph representations (x_i) and labels (y_i) are concatenated to capture both individual features and their correlations. The encoder parameterizes Gaussian parameters for both the context and target sets and minimizes the discrepancy between them, measured by JS divergence.
Q3: Fig. 2(a) shows prominent connections between the right hippocampus (HIP.R) and the right fusiform gyrus (FFG.R) in healthy controls. In contrast, Fig. 2(b) shows a weaker connection in HIV patients at the same sparsity level. Patients exhibit atypical connections involving the left hippocampus (HIP.L), the left middle occipital gyrus (MOG.L), the right postcentral gyrus (PoCG.R), and the right angular gyrus (ANG.R)—connections not found in healthy controls. These findings align with prior medical research and can be identified by DAML.
Reviewer #3
Q1: Thanks for your suggestion. During training, we did not observe significant impacts of this parameter; therefore, we chose not to include further discussion on this aspect.
Q2: Due to page limits, we omit the detailed analysis and prioritize emphasizing the superior adaptation ability. The optimal hyperparameters (as shown in Section 4.1) are used for the results reported in the Overall Performance Analysis. While we are eager to provide these results, the rebuttal policy prohibits us from reporting additional results. We are happy to include this in the open-source version upon acceptance.
Q3: Due to page limits, we primarily present visualization for HIV. For ABIDE and PPMI, positive samples often misclassified as negative by other methods are correctly recognized by DAML. On a group-level analysis, DAML minimizes distribution discrepancies between context and target sets, addressing the data drift unaddressed by previous work.
Q4: Since DAML models the Gaussian for both the context and target sets, it allows us to quantify the likelihood of test samples under the modeled distributions, serving as a measure of uncertainty and enhancing DAML’s interpretability.
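The context/target construction and divergence penalty described in the rebuttal (Reviewer #1 Q1 and Reviewer #2 Q2) can be sketched as follows. This is a minimal illustration, not the authors' released code: the function names, the diagonal-Gaussian parameterization of each set's embeddings, and the moment-matched midpoint approximation of the JS divergence are assumptions made for the sketch.

```python
import numpy as np

def gaussian_params(embeddings):
    # Mean and diagonal variance of a set of sample embeddings (rows = samples).
    mu = embeddings.mean(axis=0)
    var = embeddings.var(axis=0) + 1e-6  # small floor for numerical stability
    return mu, var

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    # KL(p || q) for diagonal Gaussians, summed over dimensions.
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def js_divergence(mu_c, var_c, mu_t, var_t):
    # Symmetrized divergence via the 50/50 mixture midpoint, with the mixture
    # approximated by a moment-matched Gaussian (an assumption of this sketch).
    mu_m = 0.5 * (mu_c + mu_t)
    var_m = 0.5 * (var_c + var_t) + 0.25 * (mu_c - mu_t) ** 2
    return 0.5 * kl_diag_gaussians(mu_c, var_c, mu_m, var_m) \
         + 0.5 * kl_diag_gaussians(mu_t, var_t, mu_m, var_m)

def context_target_divergence(batch, context_size, rng):
    # Context set C: a random subset of the batch.
    # Target set T: C's samples plus the remaining ones (here, the whole batch).
    idx = rng.permutation(len(batch))
    context = batch[idx[:context_size]]
    target = batch
    mu_c, var_c = gaussian_params(context)
    mu_t, var_t = gaussian_params(target)
    return js_divergence(mu_c, var_c, mu_t, var_t)
```

In training, a term like `context_target_divergence(...)` would be added to the classification loss, so that minimizing it pushes the model's context-set statistics toward the target-set statistics, which is how the rebuttal describes simulating and adapting to drift between training and real-world data.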
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The paper was reviewed by three experts in the field of brain graph classification. The reviewers generally appreciate the idea of using meta-learning to address data drifts in this context. In the initial reviewing stage, some reviewers pointed out that the method and experiment design were vague. I think the authors have addressed most of these issues, although we did not receive responses from the reviewers after several attempts. I incline to recommend acceptance of this paper. However, I highly recommend that the authors clarify the experimental setup as suggested by the reviewers.
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
This is a borderline paper. Although the reviewers have raised several issues, it seems the authors were able to address most of them in the rebuttal. The authors are encouraged to include all details in the next version of the paper.
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A