Abstract

Brain functional connectivity analysis plays a crucial role in the computer-aided diagnosis of brain disorders. The brain is organized as a heterogeneous structure with distinct functional divisions. However, current heterogeneous algorithms often introduce excessive parameters while characterizing heterogeneous relationships, leading to redundancy and overfitting. To address these issues, we propose the Heterogeneous Masked Attention-Guided Path Convolution (HM-AGPC) for functional brain network analysis. HM-AGPC introduces a heterogeneous masked attention generation mechanism that preserves valuable heterogeneous relationships while minimizing redundant interactions and highlighting crucial functional connections. Moreover, the framework incorporates an attention-guided path convolution strategy, which leverages attention weights to guide the convolution kernel toward the most salient features and pathways. This approach improves model performance without directly introducing extra parameters, thereby enhancing feature-learning efficiency. We evaluate HM-AGPC on the ABIDE dataset using ten-fold cross-validation, where it outperforms state-of-the-art methods on the disease-diagnosis task. Additionally, the framework demonstrates high interpretability, making it a promising tool for computer-aided diagnosis and the identification of potential biomarkers.
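To make the two ideas in the abstract concrete, here is a minimal NumPy sketch of the general pattern: a binary mask built from a functional-division prior restricts attention to heterogeneous (cross-division) connections, and the resulting attention weights steer a single shared kernel, adding no parameters beyond that kernel. This is an illustrative simplification, not the authors' implementation; the mask construction, attention form, and all names (`hetero_mask`, `masked_attention`, `attention_guided_conv`, the toy group labels) are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def hetero_mask(groups):
    """Binary mask keeping cross-group (heterogeneous) edges plus self-loops.
    groups[i] is the functional-division label of region i (toy prior)."""
    g = np.asarray(groups)
    mask = (g[:, None] != g[None, :]).astype(float)
    np.fill_diagonal(mask, 1.0)  # always keep each region's self-connection
    return mask

def masked_attention(X, mask):
    """Scaled dot-product attention restricted by the mask: masked entries
    are set to -inf before the softmax, so they get exactly zero weight."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    scores = np.where(mask > 0, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def attention_guided_conv(X, A, W):
    """One propagation step: attention A reweights the aggregation while a
    single shared kernel W does the feature transform (no extra parameters)."""
    return np.tanh(A @ X @ W)

N, F = 8, 4                        # 8 regions, 4 features (toy sizes)
X = rng.standard_normal((N, F))    # region features
groups = [0, 0, 1, 1, 2, 2, 3, 3]  # toy functional-division labels
W = rng.standard_normal((F, F))    # the only learnable kernel in this sketch

A = masked_attention(X, hetero_mask(groups))
H = attention_guided_conv(X, A, W)
```

The design point this illustrates is that the heterogeneity prior enters only through the mask (zero extra parameters), and the attention matrix modulates how the fixed kernel aggregates, rather than instantiating a separate kernel per relation type.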

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/1523_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/SCUT-Xinlab/HM-AGPC

Link to the Dataset(s)

ABIDE dataset: https://fcon_1000.projects.nitrc.org/indi/abide/abide_I.html

BibTex

@InProceedings{XuJia_Heterogeneous_MICCAI2025,
        author = { Xu, Jiakun and Zhang, Xin and Xiong, Tong and Chen, Shengxian and Xing, Xiaofen and Hao, Jindou and Xu, Xiangmin},
        title = { { Heterogeneous Masked Attention-Guided Path Convolution for Functional Brain Network Analysis } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15971},
        month = {September},
        pages = {384--393}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposes a novel Heterogeneous Masked Attention-Guided Path Convolution (HM-AGPC) method for functional brain network analysis.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Heterogeneous Masked Attention Mechanism: A new mechanism is introduced to model the heterogeneity in functional brain networks. It selectively emphasizes valuable heterogeneous relationships while minimizing redundant interactions, thereby enhancing the model’s ability to highlight critical functional connections.
    2. Attention-Guided Path Convolution: A concise and effective convolution strategy is employed, using attention weights to guide the convolution kernels to focus on the most salient pathways and features. This design does not introduce extra parameters, yet improves the model’s ability to capture dynamic relationships between brain regions.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. Lack of Comparative Visualization for Interpretability: Although the paper claims interpretability advantages, the gradient back-tracking visualizations are not compared with those of other methods. To substantiate this claim, the authors should include comparative visual analyses and discuss whether their method provides superior interpretability.
    2. Inconsistent Results in Reproduction: The reproduced performance of some baseline methods deviates significantly from the originally reported results. For instance, the ALTER method reportedly achieves 82.8% on ABIDE, while in this paper, it only reaches 73.34%. The authors should provide detailed implementation and training settings to ensure fairness and reproducibility.
    3. Lack of Parameter Efficiency Comparison: The paper argues that many heterogeneous graph methods suffer from excessive parameters and overfitting. To support this claim, the authors should quantitatively compare the parameter counts and model complexity with other methods, thereby reinforcing the efficiency and structural advantage of their approach.
  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (3) Weak Reject — could be rejected, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper claims two major innovations: improved model interpretability and reduced parameter complexity. However, it lacks supporting comparative results or quantitative analyses to substantiate these claims, which undermines the credibility of the stated contributions.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    The author provided a clear response to my question and addressed some of the concerns I had about the paper.



Review #2

  • Please describe the contribution of the paper

    The paper proposes a heterogeneity-masking attention mechanism that emphasizes valuable heterogeneous relationships while minimizing redundant interactions, and an attention-guided path convolution strategy for functional brain network analysis.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Proposes a heterogeneity-masking attention mechanism combined with an attention-guided path convolution.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. The experimental analysis is not sufficiently in-depth; in particular, how the heterogeneous mask affects model performance is not analyzed.
    2. There is an error in Equation (8).
    3. Validation on only one dataset is not convincing enough.
    4. The ablation experiments show that the performance improvement from each module is not obvious.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The motivation of the paper is interesting, but the experimental support is relatively weak and not fully persuasive.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #3

  • Please describe the contribution of the paper

    This paper proposes HM-AGPC for functional brain network analysis, addressing the limitations of existing heterogeneous graph algorithms, such as parameter redundancy and overfitting. HM-AGPC introduces two key mechanisms: heterogeneous masked attention generation and attention-guided path convolution.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The heterogeneous masked attention mechanism integrates brain region priors to model functional heterogeneity, reducing noise from redundant connections. The attention-guided convolution optimizes kernels without increasing parameters, enhancing efficiency.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    Validation is confined to the ABIDE dataset and 6 ROIs (frontal, parietal, temporal, occipital, insular, cerebellum). Testing on other datasets and other ROIs is needed to assess generalizability.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Hardware requirements and algorithm efficiency should be added. Training parameters (learning rate, batch size) are clear, but initialization details for the linear layers in the masking thresholds (Eqs. 6–7) are unspecified, potentially affecting replication.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A




Author Feedback

We thank all reviewers for their valuable feedback and acknowledgement of our contributions. Below is our response.

1. Code release and implementation details [R1, R2, R3]. We initialize weights with a standard normal distribution and train models on a GeForce RTX 3090 with 24 GB of memory. Upon acceptance, we will release the full code, covering our model, the baselines, the training scripts, and all parameter settings.

2. Limited datasets and the 6-ROI concern [R1-3, R2-1]. We fully acknowledge the importance of testing across multiple datasets and sincerely appreciate this recommendation. Our method can also be evaluated on other public datasets such as ADHD200 and ADNI; in our experience, methods that perform strongly on ABIDE typically generalize well to these datasets. Moreover, ABIDE is a widely adopted dataset on which many studies evaluate exclusively, so our competitive performance on ABIDE alone already provides strong evidence of HM-AGPC's effectiveness. In future work, we will validate our method on additional datasets such as ADHD200 and ADNI. [R2-1] Regarding the remark on "6 ROIs", there may be a misunderstanding: as described in Section 3, we use the AAL atlas (116 ROIs) and divide the 116 ROIs into 6 functional groups as our prior. Other priors, such as functional subnetworks, do not map directly onto the AAL atlas and therefore cannot be used in our current framework. In the future, we will investigate how different atlas–prior mappings affect our method.

3. ALTER results mismatch [R3-2]. The mismatch between our ALTER results and those in the original paper arises from different data and experimental setups: ALTER used 1,012 ABIDE subjects with the Craddock-200 atlas and a fixed train/test split, whereas we used 871 QC-filtered ABIDE subjects with the AAL atlas and 10-fold cross-validation. As for training settings, we ran the authors' released code, strictly followed their experimental setup, and fine-tuned the hyperparameters on our data to reach the best performance and ensure fairness. We will include the training settings of all baselines in the final version and release our code, covering our model and all baselines.

4. Ablation study details [R1-1,4]. Table 2 reports our two ablations. In the module ablation, w/o heterogeneity drops ACC by 1.26%, w/o mask drops ACC by 1.84%, and w/o attention guidance (BC-GCN) drops ACC by 2.30% (shown in Table 1), confirming the value of our three core mechanisms: attention guidance, masking, and heterogeneity. In the hetero-prior ablation, we evaluated the impact of priors on the heterogeneous mask: both a left–right partition and random priors lead to performance drops, indicating that the heterogeneous mask requires an appropriate prior to be effective.

5. Visualization analysis [R3-1]. We compared visualizations of our method against several baselines. The crucial connections are consistent across methods (which is why we did not include the comparison), but our method more effectively leverages heterogeneous links; this supports our interpretability advantage and explains why we outperform the baselines. We will include more comparative visual analyses in the final version.

6. Model size and efficiency [R2, R3-3]. Our model has 2.165M parameters, comparable to BC-GCN's 2.082M (our most direct baseline) and fewer than PH-BTN's 2.345M (a heterogeneous graph method). Despite the smaller size, we observe a stable training-set accuracy of around 87% versus PH-BTN's 98%. Under identical hyperparameter settings, our test-set performance still surpasses PH-BTN's while our training-set accuracy remains moderate; moreover, both the training and test losses decrease steadily without rebound, indicating minimal overfitting. We will include a detailed discussion of computational efficiency and model size in the final version.

7. Equation typo [R1-2]. We apologize for the error in Eq. (8) and will correct it in the final version.

We hope these responses address all concerns and strengthen our paper.




Meta-Review

Meta-review #1

  • Your recommendation

    Invite for Rebuttal

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Reviewers appreciated the proposed approach, but had several concerns about the experiments. The rebuttal clarified some misunderstandings and offered explanations to address many questions. In the end, the reviewers weighed the strengths to be greater than the weaknesses, and all lean toward acceptance of the work.



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A


