Abstract
Functional magnetic resonance imaging (fMRI) can assess an individual’s cognitive abilities by measuring the blood-oxygen-level-dependent (BOLD) signal. Due to the complexity of brain structure and function, exploring the relationship between cognitive ability and brain functional connectivity is extremely challenging. Recently, graph neural networks have been employed to extract functional connectivity features for predicting cognitive scores. Nevertheless, these methods have two main limitations: 1) they ignore the hierarchical nature of the brain, discarding fine-grained information within each brain region and overlooking the complementary information that the brain’s functional hierarchy provides at multiple scales; 2) they ignore the small-world nature of the brain, as current methods for generating functional connectivity produce regular networks with relatively low information-transmission efficiency. To address these issues, we propose a Hierarchical Graph Learning with Small-World Brain Connectomes (SW-HGL) framework for cognitive prediction. This framework consists of three modules: the pyramid information extraction module (PIE), the small-world brain connectomes construction module (SW-BCC), and the hierarchical graph learning module (HGL). Specifically, PIE identifies representative vertices at both the micro scale (community level) and the macro scale (region level) through community clustering and graph pooling. SW-BCC simulates the small-world nature of the brain by rewiring regular networks and establishes functional connections at both the region and community levels. HGL is a dual-branch network used to extract and fuse micro-scale and macro-scale features for cognitive score prediction. Compared to state-of-the-art methods, our SW-HGL consistently achieves outstanding performance on the HCP dataset.
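The rewiring of regular networks that the abstract attributes to SW-BCC follows the classic Watts–Strogatz construction: start from a ring lattice and rewire each edge to a random target with probability p, trading a little local clustering for much shorter average paths. The paper's actual implementation is in its code repository; the sketch below is only an illustrative, dependency-free rendition of the idea (function names and parameters are ours, not the authors').

```python
import random

def ring_lattice(n, k):
    """Regular ring lattice: each of n nodes links to its k nearest
    neighbours on each side, giving n*k undirected edges in total."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            u, v = i, (i + j) % n
            edges.add((min(u, v), max(u, v)))
    return edges

def watts_strogatz_rewire(n, k, p, seed=0):
    """Rewire each lattice edge with probability p to a uniformly random
    new endpoint (no self-loops, no duplicate edges). Returns an
    adjacency dict of the resulting small-world graph."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    edges = ring_lattice(n, k)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for u, v in sorted(edges):
        if rng.random() < p:
            candidates = [w for w in range(n) if w != u and w not in adj[u]]
            if candidates:
                w = rng.choice(candidates)
                adj[u].discard(v)
                adj[v].discard(u)
                adj[u].add(w)
                adj[w].add(u)
    return adj

# Each rewiring step removes one edge and adds one, so edge count is preserved.
adj = watts_strogatz_rewire(90, 4, 0.2)
m = sum(len(nbrs) for nbrs in adj.values()) // 2
print(m)  # 360 edges, the same as the initial 90 * 4 ring lattice
```

Small p keeps the graph close to the regular lattice; p near 1 approaches a random graph. The small-world regime sits in between, which matches the rebuttal's description of the rewired network as "an intermediate state between regular and random networks".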
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2707_paper.pdf
SharedIt Link: https://rdcu.be/dV17I
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72086-4_29
Supplementary Material: N/A
Link to the Code Repository
https://github.com/CUHK-AIM-Group/SW-HGL
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Jia_Hierarchical_MICCAI2024,
author = { Jiang, Yu and He, Zhibin and Peng, Zhihao and Yuan, Yixuan},
title = { { Hierarchical Graph Learning with Small-World Brain Connectomes for Cognitive Prediction } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15005},
month = {October},
pages = {306 -- 316}
}
Reviews
Review #1
- Please describe the contribution of the paper
This paper proposes a Hierarchical Graph Learning with Small-World Brain Connectomes framework for cognitive prediction from brain networks. The method leverages the hierarchical information processing and the small-world nature of brain connectomes, both of which are crucial for the association analysis between cognitive scores and brain networks.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The incorporation of both hierarchical and small-world models into the brain connectome analysis is innovative and well-justified. This dual approach potentially offers a more robust and efficient prediction model than existing single-scale methods.
- The paper provides a clear and thorough explanation of the SW-HGL framework, including the pyramid information extraction module, small-world brain connectomes construction module, and hierarchical graph learning module.
- The manuscript presents comprehensive experimental results, demonstrating the superior performance of the SW-HGL framework over several state-of-the-art methods in terms of RMSE, MAPE, and PCC metrics.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1. This paper proposes building small-world connectomes into a GNN using conventional methods from graph theory. This approach might be constrained by the limitations inherent in the established knowledge of constructing small-world networks. A graph neural network (GNN), as a data-driven method, could feasibly construct small-world networks attentively by itself. In this regard, the proposed method may not be optimally constructed.
2. Moreover, the sensitivity analysis of the hyperparameter p demonstrates that its value significantly affects performance, indicating that the framework heavily relies on prior knowledge of the small-world model, which could limit its adaptability.
3. The range of cognitive scores is not provided, which is crucial for evaluating performance improvements.
4. The methods compared are outdated. State-of-the-art methods, such as BrainNetTransformer, should also be included in the comparison.
5. Additionally, multiple metrics in graph theory play a significant role in brain network analysis. I wonder why only the small-world metric is implemented. How about other metrics, and is it possible to model them adaptively within the framework?
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
The framework is well-documented, making it easy to follow and reproduce.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
The authors should thoroughly address the concerns highlighted in the weaknesses section. It is essential for the method to articulate its unique contributions more clearly, rather than merely integrating existing traditional methods into a GNN framework. The style and approach of this paper have the potential to inspire numerous related works. In any case, if the authors provided a clearer explanation of why the small-world property was chosen over other measures, such as the rich club, I would consider accepting the paper.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
If the authors provided a clearer explanation of why the small-world property was chosen over other measures, such as the rich club, I would consider accepting the paper.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
The authors propose a framework that applies a hierarchical graph neural network to brain functional networks at different scopes while emphasizing the small-worldness of brain functional networks for cognitive score prediction. Essentially, the framework conducts clustering at two different levels of the functional network to extract hierarchical features, and uses message passing on rewired networks at each level to learn a better representation. Validation was conducted on cognitive scores from the HCP dataset, where the authors demonstrate that the proposed network outperforms previous works in this evaluation setting.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The biggest strength of the paper is its consideration of finer-grained regions of brain functional networks in order to learn a better representation of the network. This approach is novel, given that previous works utilize hierarchical graph networks but rarely consider hierarchical network features. Moreover, the subsequent rewiring component reveals that considering longer-range connections is beneficial to the analysis of cognitive scores with functional networks.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
First, we hope the authors could elaborate on the computational complexity of the proposed framework, especially the PIE module. Second, aside from the proposed micro/macro-level feature extraction, the novelty of the other two proposed modules seems limited: as shown in the authors' Table 1, the performance gains from the SW module and the HGL module could be interpreted as marginal. Furthermore, some design choices of the framework are not straightforward. For example, the paper lacks an explanation of why random rewiring is preferable in the SW module (there also seems to be a notation error for an edge). Figure 3 shows that the long-range connections resulting from random reconnection indeed boost performance, but why not use more sophisticated rewiring methods that could also yield model interpretability?
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not provide sufficient information for reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
This paper is well written and easy to follow. We wish the authors could provide more context behind some of the design choices. And since the network extracts finer communities and connections and further uses rewiring, it would help if the authors could provide comparisons of computational complexity. For future work, I would humbly recommend: 1) exploring the possibility of a more interpretable rewiring module; 2) considering a lighter-weight framework, possibly by computing the finer communities in a preprocessing step.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Based on the comments above.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
The paper is about hierarchical graph learning, which is an interesting topic. The authors propose a Hierarchical Graph Learning with Small-World Brain Connectomes (SW-HGL) framework for cognitive prediction. The paper is well written and well organized. However, there are several concerns with the current version that, if addressed, would increase the quality of the paper.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Reasonable writing logic; interesting research ideas; cutting-edge research questions.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1. In the introduction section, the authors should clearly summarize the contribution of the paper to make it easier for readers to understand.
2. In Table 1, could the authors explain why different methods perform very differently on different datasets? Is this related to data distribution or model design?
3. Have the authors considered how to process dynamic brain data? Or, what datasets are there that take dynamic scenarios into consideration? This is a very common but important problem in the real world.
4. Some related works on graph learning could be discussed further [1,2]. [1] Deep Temporal Graph Clustering. ICLR 2024. [2] Transferable Graph Auto-Encoders for Cross-network Node Classification. Pattern Recognition.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
Reasonable writing logic; interesting research ideas; cutting-edge research questions.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
1. In the introduction section, the authors should clearly summarize the contribution of the paper to make it easier for readers to understand.
2. In Table 1, could the authors explain why different methods perform very differently on different datasets? Is this related to data distribution or model design?
3. Have the authors considered how to process dynamic brain data? Or, what datasets are there that take dynamic scenarios into consideration? This is a very common but important problem in the real world.
4. Some related works on graph learning could be discussed further [1,2]. [1] Deep Temporal Graph Clustering. ICLR 2024. [2] Transferable Graph Auto-Encoders for Cross-network Node Classification. Pattern Recognition.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
We sincerely thank all reviewers for their invaluable comments. The code will be published for reproducibility.

R4(Q1): Adaptive modeling
A: We employed a GNN for adaptive brain-network modeling. The RMSE on the PicSeq, Flanker, ProcSpeed, and ReadEng cognitive scores increased by 3.24, 4.19, 6.35, and 3.62, respectively. The learned brain network exhibited an average path length of 0.53 and a clustering coefficient of 0.67; the rewired brain network presented 0.42 and 0.76, suggesting strong small-world characteristics. This demonstrates that constructing a small-world network can enhance model performance and that the ability of a GNN to learn small-world characteristics automatically is limited.

R4(Q2): Hyperparameter p
A: The choice of p indeed affects the model's performance, but the increase in RMSE remains around 2, indicating no unacceptable degradation. To address the issue of manually selecting p, we incorporate a grid search for p to accommodate different datasets or tasks.

R4(Q3): Range of cognitive scores
A: The range for PicSeq is [76.42, 135.55], Flanker [84.9, 142.11], ProcSpeed [51.62, 154.69], and ReadEng [84.2, 150.71].

R4(Q4): More comparisons
A: BrainNetTransformer's performance on the PicSeq (RMSE: 21.11, MAPE: 18.06, PCC: 0.14), Flanker (RMSE: 12.19, MAPE: 10.26, PCC: 0.24), ProcSpeed (RMSE: 15.39, MAPE: 12.15, PCC: 0.32), and ReadEng scores (RMSE: 14.92, MAPE: 11.94, PCC: 0.33) will be listed in the final draft.

R4(Q5) & R1(Q3): Clarifying the small-world network
A: We constructed a small-world network due to its rapid information processing and transmission capabilities, mirroring the brain's high-level cognitive functions. The rewiring method aligns with the small-world network's original definition as an intermediate state between regular and random networks.

R4(Q5) & R1(Q3): Other metrics
A: We implemented the modularity and heterogeneity metrics of the brain network, using clustering coefficients and degree centrality to guide the construction of brain networks. The RMSE of the PicSeq, Flanker, ProcSpeed, and ReadEng scores increased by (1.26, 1.34, 2.07, 0.92) and (0.52, 0.64, 1.64, 0.03), respectively, verifying the effectiveness of the proposed small-world-metric-based model.

R1(Q1): Computational complexity
A: The computational complexity of the PIE, SW-BCC, and HGL modules is O(N log N), O(EF), and O(nH), respectively, where N is the number of vertices, E the number of edges, F the feature dimension, n the number of concatenated features, and H the dimension of the features.

R1(Q2) & R3(Q1): Novelty clarification
A: Our approach differs from previous methods in three ways: a) PIE identifies representative vertices at micro and macro scales; b) we simulate the information-processing style of the human brain and, for the first time, use the small-world property to guide the construction of functional connectivity; c) we design a cognitive-score prediction paradigm using the HGL module.

R3(Q2): Experimental results
A: Model performance is closely tied to model design. Models like RegGNN and Meta-RegGNN, trained on important samples, lack stability and are highly influenced by the data distribution. The BrainGB model might underperform due to edge-weight handling in self-attention, while BrainGNN, which uses extra regional information, yields subpar results and is difficult to train. Our model integrates multi-level brain information and uses a small-world network for faster GNN information processing.

R3(Q3 & Q4): Dynamic brain data
A: We use rs-fMRI to predict cognitive scores, which yields a static brain network. Recognizing brain dynamics is vital for tasks like attention shifts, emotional changes, and disease progression. Our framework can adapt to dynamic feature extraction by segmenting continuous brain data into time windows and detecting dynamic community changes. We plan to further explore dynamic brain data processing.
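The two small-world indicators the rebuttal quotes (average path length and clustering coefficient; the cited figures appear to be normalized) have standard graph-theoretic definitions that are easy to compute on an adjacency-dict graph. The sketch below is a dependency-free illustration of those definitions, not the authors' implementation; `ring_adj` is our own helper for building a test lattice.

```python
from collections import deque

def ring_adj(n, k):
    """Adjacency dict of a ring lattice (each node linked to its k
    nearest neighbours on each side); used here only for testing."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0}
            for i in range(n)}

def clustering_coefficient(adj):
    """Average local clustering: the fraction of each node's neighbour
    pairs that are themselves connected, averaged over all nodes."""
    total = 0.0
    for u, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all reachable ordered node pairs,
    computed with one BFS per source node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs if pairs else float("inf")

adj = ring_adj(20, 2)
print(round(clustering_coefficient(adj), 2))  # 0.5 for a k=2 ring lattice
```

A small-world network combines a clustering coefficient far above that of a random graph with an average path length close to a random graph's, which is the property the SW-BCC module is designed to exploit.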
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The reviewers were mixed in their assessment of this work. They agreed upon common strengths including interest in the proposed hierarchical graph learning method and incorporation of small-world models, demonstrated performance in experiments, and good writing of the paper. However, there were concerns regarding clarifying motivation of the method and missing comparisons to more recent work. While the authors responded to these in the rebuttal, unfortunately it was with the presentation of new empirical results, rather than explanation of motivations/choices. Still, due to the potential interest in the method, I follow the majority of the reviewers and recommend accept.
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
Two reviewers gave "weak accept" and only one gave "weak reject", with the latter requiring only further clarification of the motivation for choosing the small-world property. The rebuttal has addressed the comments made by the reviewers.