Abstract
The functional brain network exhibits a hierarchically organized structure, balancing localized specialization with global integration through multi-scale hierarchical connectivity. While graph-based methods have advanced brain network analysis, conventional graph neural networks (GNNs) face interpretability limitations when modeling functional connectivity (FC) that encodes excitatory/inhibitory distinctions, often resorting to oversimplified edge-weight transformations.
Existing methods also tend to represent the brain’s hierarchical organization inadequately, potentially missing critical information about multi-scale feature interactions. To address these limitations, we propose a novel brain network generation and analysis approach, the Dynamic Hierarchical Graph Transformer (DHGFormer). Specifically, our method introduces an FC-driven dynamic attention mechanism that adaptively encodes brain excitatory/inhibitory connectivity patterns into transformer-based representations, enabling dynamic adjustment of the functional brain network. Furthermore, we design hierarchical GNNs that incorporate prior functional subnetwork knowledge to capture intra-subnetwork homogeneity and inter-subnetwork heterogeneity, thereby enhancing GNN performance in brain disease diagnosis tasks. Extensive experiments on the ABIDE and ADNI datasets demonstrate that DHGFormer consistently outperforms state-of-the-art methods in diagnosing neurological disorders.
The code is available at https://anonymous.4open.science/r/MICCAI-DHGFormer.
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/1576_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
https://github.com/iMoonLab/DHGFormer
Link to the Dataset(s)
http://fcon_1000.projects.nitrc.org/indi/abide/
BibTex
@InProceedings{XueRun_DHGFormer_MICCAI2025,
author = { Xue, Rundong and Hu, Hao and Zhang, Zeyu and Han, Xiangmin and Wang, Juan and Gao, Yue and Du, Shaoyi},
title = { { DHGFormer: Dynamic Hierarchical Graph Transformer for Disorder Brain Disease Diagnosis } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15971},
month = {September},
pages = {269 -- 279}
}
Reviews
Review #1
- Please describe the contribution of the paper
This paper presents DHGFormer, a novel framework for functional brain network analysis that addresses key limitations in existing GNN-based methods. The approach introduces a dynamic attention mechanism inspired by functional connectivity (FC), enabling adaptive modeling of excitatory and inhibitory interactions, which are typically oversimplified in prior work. Additionally, it incorporates a hierarchical GNN architecture that leverages prior knowledge of brain functional subnetworks to capture both intra- and inter-subnetwork dynamics. Extensive experiments on ABIDE and ADNI datasets show that DHGFormer achieves state-of-the-art performance in neurological disease diagnosis. The work is motivated by biological plausibility and emphasizes multi-scale hierarchical structure, offering a promising direction for improving interpretability and performance in brain network modeling.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Dynamic Graph Adaptation: The proposed FC-inspired dynamic attention mechanism allows for adaptive, task-aware adjustment of the brain network structure, improving the representation of functional interactions.
- Hierarchical Modeling: The integration of prior knowledge about brain functional subnetworks into a hierarchical GNN framework effectively captures intra-subnetwork homogeneity and inter-subnetwork heterogeneity, aligning well with the brain’s multi-scale organization.
- The proposed framework has been evaluated on two distinct brain disorders.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- Insufficient Justification for Model Complexity: The paper combines two graph-based paradigms—Graph Transformers and GNNs—into a relatively complex framework. However, the necessity of integrating both architectures is not sufficiently discussed. For example, in the context of Alzheimer’s disease (AD), many studies have shown that GNNs alone can achieve comparable or even superior classification performance compared to the results reported in this work. For relevant literature, please refer to the following reviews:
- Zhang, L. et al., 2024. Exploring Alzheimer’s disease: a comprehensive brain connectome-based survey. Psychoradiology, 4, p.kkad033.
- Tanveer, M. et al., 2020. Machine learning techniques for the diagnosis of Alzheimer’s disease: A review. ACM TOMM, 16(1s), pp.1–35.
- Terminological Inaccuracy Regarding Brain Region Scales: The paper refers to “microscopic brain regions” when contrasting with “macroscopic functional subnetworks.” This terminology is misleading, as “microscopic” in neuroscience typically refers to structures at the cellular or subcellular level (e.g., neurons, dendrites, axons). In contrast, the ROIs used in fMRI-based functional brain network analysis fall under the macroscopic scale.
- Similarity to Prior Work: Section 2.2 on Intra-Subnetwork Graph Computation shares significant conceptual and methodological similarities with the following prior work:
- Bannadabhavi, A., Lee, S., Deng, W., Ying, R., & Li, X. (2023). Community-aware transformer for autism prediction in fMRI connectome. In MICCAI, pp. 287–297.
- For example, the current paper states:
- “Given M functional subnetworks (G₁, …, G_M) and the membership of ROIs in the Yeo [16], we rearrange the rows and columns of the node feature matrix X and the adjacency matrix A of the output of Dynamic Brain Transformer, resulting in X′ and A′.”
- This is conceptually very similar to the formulation in the 2023 MICCAI paper:
- “Given K functional communities and the membership of ROIs, we rearrange the rows and columns of the FC matrix, resulting in K input matrices.” Moreover, the Brain Hierarchical GNNs component of the proposed method also resembles the Local-Global Transformer Encoder in 2023 MICCAI paper.
While the implementation details may differ, these similarities should be explicitly acknowledged and carefully discussed to clarify the novelty and contributions of the proposed method relative to existing work.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
My overall score reflects a balance between the paper’s novel contributions and some notable areas needing improvement.
On the positive side, the paper addresses key limitations in conventional GNN-based brain network modeling by proposing DHGFormer, a hybrid framework that integrates a dynamic attention mechanism with hierarchical representations. The approach is biologically motivated, offering a meaningful way to incorporate excitatory/inhibitory functional connectivity patterns and multi-scale brain structure. The empirical evaluation on two well-known datasets (ABIDE and ADNI) demonstrates promising results, suggesting the framework’s generalizability across different neurological disorders.
However, there are a few concerns that moderate my enthusiasm:
- Model Complexity vs. Necessity: The combination of Graph Transformers and GNNs increases the model’s complexity, but the necessity of integrating both components is not sufficiently justified, especially when prior work using standalone GNNs has achieved competitive or better performance on similar tasks.
- Incremental Novelty: While the method is well executed, certain components, such as using biologically informed connectivity, have already been explored in the existing literature, and this overlap is not adequately acknowledged or contrasted.
- Similarity to Existing Work: Certain parts of the methodology, especially the Intra-Subnetwork Graph Computation module, show strong resemblance to prior studies such as Bannadabhavi et al. (MICCAI 2023), which also reorganizes the FC matrix based on community structure for subnetwork-level modeling. Moreover, the hierarchical GNN component bears similarity to the Local-Global Transformer Encoder architecture. These overlaps are not adequately acknowledged or differentiated, raising concerns about the novelty of the proposed approach.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #2
- Please describe the contribution of the paper
The paper presents the Dynamic Hierarchical Graph Transformer (DHGFormer), a novel framework that synergizes dynamic graph adaptation with hierarchical representation learning for enhanced brain network analysis.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
1) The paper proposes a hierarchical GNN that models intra- and inter-subnetwork relationships, allowing for a more comprehensive understanding of brain network dynamics. This framework enables cross-scale message passing, which propagates information from microscopic brain regions to macroscopic functional subnetworks. This hierarchical structure ensures that the model captures the multi-scale architecture of brain networks, making the method both flexible and effective for complex brain disease diagnosis.
2) By incorporating prior knowledge about functional subnetworks, the method captures both the homogeneity within subnetworks and the heterogeneity between them. This incorporation of domain-specific knowledge significantly enhances the model’s ability to learn meaningful connectivity patterns and improves its performance on brain disease tasks, providing an additional layer of interpretability and clinical relevance.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
1) The paper does not provide a detailed comparison with other advanced or state-of-the-art graph-based methods that incorporate similar dynamic adjustments or hierarchical structures. A more comprehensive evaluation against these methods would better establish the novelty and effectiveness of DHGFormer in a broader context.
2) The paper lacks a deeper exploration of the model’s interpretability in the context of brain disease diagnosis. Although it mentions prioritizing biologically meaningful connections, a more detailed analysis of how specific brain regions or subnetworks contribute to the diagnosis would help clinicians better trust the model’s results.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper presents a novel framework, DHGFormer, that integrates dynamic graph adaptation with hierarchical representation learning to enhance brain network analysis. Its innovative use of an FC-inspired dynamic brain transformer captures task-specific connectivity patterns.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #3
- Please describe the contribution of the paper
This article proposes a method called DHGFormer, which addresses the issue that traditional Graph Neural Networks (GNNs) are unable to perform multi-level and multi-scale interactions on brain structures. It achieves excellent results on public datasets.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The most interesting aspect of this article lies in using functional connectivity (FC) as an attention-guiding factor to regulate the attention distribution. This approach is not only feasible and interpretable, but also adopts a multi-scale, multi-level method to enable interactions among brain subnetworks from local to global. From the perspective of the overall architecture, it can better integrate information from the brain network.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
The author adopted FC as an attention-guiding factor to regulate the attention distribution. This method addresses the interpretability limitations that traditional GNNs face when encoding FC that differentiates between excitation and inhibition. The non-negative weights in traditional GNNs ensure, to a certain extent, that the information a node receives from its neighbors accumulates positively, preventing positive and negative values from canceling each other out. First, following the author’s line of reasoning: although introducing positive and negative weights can indeed capture the complex dynamic characteristics of the network, have the authors considered that this may cause some positive and negative values to cancel each other out, resulting in information loss? Second, the author integrated the FC information in Step 1, but Step 2 still uses a multi-scale interactive GNN, so the non-negativity of the edge weights seems to reappear there. Will this also lead to the loss of the excitatory and inhibitory information in FC?
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(5) Accept — should be accepted, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The author’s writing logic is very clear, enabling readers to clearly understand what problem the author intends to solve, which demonstrates a strong writing motivation. Moreover, the author utilized a public dataset and achieved excellent results.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Author Feedback
Thanks for all the comments. The main concerns are addressed below: [R1] Existing GNNs require non-negative adjacency matrices, and existing methods discard activation/inhibition information by taking absolute values of FC. To address this, we introduce positive and negative weights to capture network dynamics, then transform them into a non-negative adjacency matrix via Eq. 2 for GNN compatibility. Crucially, while the adjacency matrix itself becomes non-negative, we preserve FC information in two ways: (1) the graph generation strengthens connectivity between nodes sharing similar states, embedding activation/inhibition patterns into the graph; (2) we encode activation/inhibition signatures as node features, enabling their propagation and enhancement in GCNConv. This strategy ensures FC information is retained and amplified without requiring negative edge weights in GNN operations.
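The rebuttal's general strategy (a non-negative adjacency for message passing, with sign information carried by node features) can be sketched as follows. Eq. 2 is not reproduced in this page, so the thresholding step and the `split_sign_features` helper below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def split_sign_features(fc, tau=0.2):
    """Sketch: derive a non-negative adjacency for GNN message passing
    while keeping excitatory/inhibitory FC information as node features.
    `tau` is a hypothetical sparsification threshold."""
    # Non-negative adjacency: magnitude of FC (sign removed for GNN compatibility)
    adj = np.abs(fc)
    adj[adj < tau] = 0.0
    # Sign information preserved separately as per-node features:
    # total excitatory (positive) and inhibitory (negative) coupling of each ROI
    excit = np.clip(fc, 0, None).sum(axis=1)
    inhib = -np.clip(fc, None, 0).sum(axis=1)
    node_feats = np.stack([excit, inhib], axis=1)
    return adj, node_feats
```

The key point mirrored here is that taking `abs(fc)` alone would discard the excitatory/inhibitory distinction; summarizing signed coupling into node features lets a standard graph convolution propagate that information without negative edge weights.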
[R2] Q1 Compared Methods: The experiment section already includes two types of SOTA baselines: ALTER (NeurIPS 2024), which implements dynamic feature evolution through biased random walks, and HHGF (IEEE Access 2024), the latest hierarchical GNN. Q2 Interpretability Analysis: Our interpretability analysis operates at two levels. At the node level, Fig. 2(b) shows that the top discriminative ROIs (MFG, PCG) mainly reside in the DMN, aligning with prior studies in which reduced DMN FC reflects cognitive decline (Marco et al., 2011). The group-level visualization in Fig. 2(a) shows pronounced DMN-FPN and DMN-DAN connectivity, aligning with abnormal executive-function regulation in ASD and with prior reports of DMN abnormality and aberrant connectivity with the DAN/FPN (Padmanabhan et al., 2017).
[R3] Q1 Graph Transformers and GNNs: DHGFormer operates as a Graph Transformer rather than a Graph Transformer + GNN. The FC-Inspired Dynamic Transformer encodes FC information to construct learnable graph structures, while the Brain Hierarchical GNNs perform hierarchical learning on the learned graph, guided by prior subnetworks. This design was motivated by the fact that GNNs alone discard critical excitatory/inhibitory patterns when processing FC matrices through absolute-value transformations; we therefore propose a graph transformer framework that encodes brain excitatory/inhibitory patterns. Moreover, the comparisons (Tab. 1: MVS-GCN, BrainIB) and the ablation (Tab. 2: DHGFormer w/o dynamic) demonstrate the superiority of the graph transformer over plain GNNs. For the works cited by the reviewer (Zhang, L. et al., 2024; Tanveer, M. et al., 2020), the code is not released and the results are not comparable due to different data splits. [R3] Q2 Related Work: Due to space limitations, the discussion of hierarchical approaches requires further elaboration (which will be revised). Tab. 1 demonstrates advantages over two representative hierarchical methods (Com-TF, HHGF). [R3] Q3 Differences from Com-TF (Bannadabhavi et al.): Because both methods use the same Yeo prior template, some similar formulations arise, as seen in other hierarchical methods (Guo J et al., 2024; Yang Y et al., 2024), and we also compare against Com-TF in Tab. 1. Compared to Com-TF, the innovation of our Brain Hierarchical GNNs lies in separating the adjacency matrix into its diagonal (intra-subnetwork) and off-diagonal (inter-subnetwork) parts to design a hierarchical graph that integrates both levels of relationships, rather than mere matrix reorganization. While Com-TF employs M parallel encoders, one per subnetwork, our intra-subnetwork computation achieves efficient feature extraction via a single graph convolution on the diagonal part of the adjacency matrix, reducing parameters while improving accuracy, as shown in Tab. 1.
For inter-subnetwork correlations, we introduce a feature mapping that projects node-level relationships into group-level representations, enabling inter-subnetwork computations at the group level, coupled with cross-scale message passing that enables bidirectional fusion between node- and group-level features, a critical capability absent in Com-TF. These distinctions are shown in our anonymous code repository.
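The diagonal/off-diagonal separation and the node-to-group projection described in this rebuttal can be illustrated with a minimal sketch. The helper names `split_intra_inter` and `group_level`, and the use of a simple mean for the projection, are assumptions for illustration, not the authors' code:

```python
import numpy as np

def split_intra_inter(adj, labels):
    """Sketch: split an ROI-level adjacency into intra-subnetwork
    (block-diagonal) and inter-subnetwork (off-block) parts, given
    per-ROI subnetwork labels (e.g., Yeo assignments)."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]  # True where two ROIs share a subnetwork
    intra = np.where(same, adj, 0.0)           # block-diagonal part
    inter = np.where(same, 0.0, adj)           # off-block part
    return intra, inter

def group_level(adj, labels):
    """Sketch: project node-level edge weights to a group-level matrix
    by averaging the edges between each pair of subnetworks."""
    labels = np.asarray(labels)
    groups = np.unique(labels)
    G = np.zeros((len(groups), len(groups)))
    for i, a in enumerate(groups):
        for j, b in enumerate(groups):
            G[i, j] = adj[np.ix_(labels == a, labels == b)].mean()
    return G
```

In this sketch, a single convolution over `intra` covers all subnetwork blocks at once (rather than M parallel encoders), while `group_level` gives the coarse matrix on which inter-subnetwork computation could proceed.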
Meta-Review
Meta-review #1
- Your recommendation
Invite for Rebuttal
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
I would encourage the authors to carefully address the comment raised by reviewers about the similarities and differences with related work in the introduction.