Abstract

A brain network is defined by wiring anatomical regions of the brain through structural and functional relationships. Its intricate topology carries a handful of early features/biomarkers of neurodegenerative diseases, which underscores the importance of analyzing connectomic features alongside region-wise assessments. Various graph neural network (GNN) approaches have been developed for brain network analysis; however, they mainly focus on node-centric analyses, often treating edge features as auxiliary information (i.e., the adjacency matrix) to enhance node representations. In response, we propose a method that explicitly learns both node and edge embeddings for brain network analysis. Introducing a dual aggregation framework, our model incorporates a novel spatial graph convolution layer built on an incidence matrix. By enabling concurrent node-wise and edge-wise information aggregation for both nodes and edges, this framework captures the intricate node-edge relationships within the brain. Demonstrating superior performance on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, our model effectively handles the complex topology of brain networks. Furthermore, it yields interpretable results with Grad-CAM, selectively identifying brain Regions of Interest (ROIs) and connectivities associated with AD that align with prior AD literature.
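
The dual aggregation described above can be illustrated with a minimal sketch, assuming a PyTorch setting with node features X, edge features E, a node (graph) Laplacian L0, a Hodge 1-Laplacian L1, and a node-edge incidence matrix B. The layer below is a simplification for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class DualAggregationLayer(nn.Module):
    """Illustrative sketch of a dual node/edge aggregation layer (assumed form,
    not the paper's exact layer). Nodes aggregate from neighboring nodes (L0)
    and incident edges (B); edges aggregate from neighboring edges (L1)
    and their endpoint nodes (B^T)."""

    def __init__(self, node_dim: int, edge_dim: int, out_dim: int):
        super().__init__()
        self.w_nn = nn.Linear(node_dim, out_dim)  # node -> node
        self.w_en = nn.Linear(edge_dim, out_dim)  # edge -> node
        self.w_ee = nn.Linear(edge_dim, out_dim)  # edge -> edge
        self.w_ne = nn.Linear(node_dim, out_dim)  # node -> edge

    def forward(self, X, E, L0, L1, B):
        # X: (n_nodes, node_dim), E: (n_edges, edge_dim)
        # L0: (n_nodes, n_nodes), L1: (n_edges, n_edges), B: (n_nodes, n_edges)
        X_new = torch.relu(L0 @ self.w_nn(X) + B @ self.w_en(E))
        E_new = torch.relu(L1 @ self.w_ee(E) + B.t() @ self.w_ne(X))
        return X_new, E_new
```

The point the abstract emphasizes is that X and E are updated concurrently in every layer, each mixing same-order neighbors with incident simplices of the other order.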

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2758_paper.pdf

SharedIt Link: https://rdcu.be/dV18T

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72086-4_50

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2758_supp.pdf

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Hwa_Multiorder_MICCAI2024,
        author = { Hwang, Yechan and Hwang, Soojin and Wu, Guorong and Kim, Won Hwa},
        title = { { Multi-order Simplex-based Graph Neural Network for Brain Network Analysis } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15005},
        month = {October},
        pages = {532 -- 541}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The authors propose a method that models not only node but also edge representations of a graph by leveraging a topological framework. The method is evaluated on the ADNI dataset in terms of quantitative results and interpretability aspects.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1) The formulation of the method is well presented and the preliminary section is helpful in setting the stage. 2) The quantitative comparison involves both baselines and competing methods. 3) The results are placed within a clinical context and the extensive discussion provides further affirmation of the validity of the method.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) Interpretability is an aspect that is highlighted across the paper and is notably referred to as one of the paper’s contributions. I feel that this claim is a bit misleading as, at a first glance, the reader is expecting to see a method that is inherently interpretable. However, the experimental section reveals that any form of interpretability occurs from deploying Grad-CAM.

    2) While competing methods are used to provide a quantitative evaluation, there is no discussion that provides insights into why the proposed method is potentially better than the other methods or tries to explain where the discrepancy in performance might be stemming from. Additionally, there is no comparison with competing methods on the qualitative/interpretability aspect.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    The authors claim in the rebuttal that their code will be made publicly available upon acceptance.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • The contribution regarding interpretability should be rephrased so that it clearly states the proposed method is not inherently interpretable.
    • Additional insights in the evaluation comparison should be provided that explain why the proposed approach outperforms the competing methods. It would also be interesting to see the interpretability framework deployed on the competing methods as well.
    • The comparison methods that the authors consider are either simple baselines or other graph-based methods. It would be nice if the authors could provide a comment on where their approach sits within the entire literature on ADNI classification.
    • What are the total numbers of features for each type? How many subjects are used for each experiment (since not all types of features are available for all subjects)? Does each experiment contain all subjects that have the corresponding features or only subjects with all feature types used in all of the experiments?
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The evaluation of the method

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    The authors have addressed my main comments in the rebuttal. They mention that the contribution on interpretability will be toned down in the manuscript. Additionally, in terms of the qualitative comparisons, the authors claim that these were not shown due to space limitations. I feel that the presence of such a comparison, even in the supplementary, would be more informative compared to, e.g., Figure 1 of the supplementary, which shows the same visualization as in the main paper but along a different axis. Given that the other reviewers view the contributions of this work favorably, I am inclined to increase my rating to weak accept as well.



Review #2

  • Please describe the contribution of the paper

    The paper introduces a dual node- and edge-based aggregation in GNNs with application to brain connectivity classification.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Novel dual edge and node-based aggregation in message passing in GNNs.
    2. Modeling the graph as a set of simplices.
    3. Providing a clinical interpretation of the GNN classification results using GradCam.
    4. Clear writing and easy to follow.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The extent of novelty is to be further justified with respect to the existing GNN/aggregation/message passing literature
    2. Limited comparison against variants of hybrid node/edge aggregation methods in GNNs
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    1. Novelty justification. Did the authors search all hybrid edge/node aggregation methods in GNNs? The following reviews may be helpful:
       • Bessadok et al. “Graph neural networks in network neuroscience.” IEEE Transactions on Pattern Analysis and Machine Intelligence 45.5 (2022): 5833-5848.
       • Zhou et al. “Graph neural networks: A review of methods and applications.” AI Open 1 (2020): 57-81.
       • Wu et al. “A comprehensive survey on graph neural networks.” IEEE Transactions on Neural Networks and Learning Systems 32.1 (2020): 4-24.

    2. Justification of the comparison methods. The authors can add a few lines to justify the choice of the comparison methods such as CensNet. For instance, why include SVM and exclude GNN models with varying node/edge aggregation methods? The comparison seems to lack rigorous design and fairness.

    3. Lack of rootedness and appropriate references. The Method subsections should include references to the prime papers introducing the key components used, such as multi-hop aggregation. No references were cited.

    4. Poor rationale/motivation. The choice of mixing L and B is not well justified; a vague line was dropped without explaining the rationale behind such choices or rooting them in the literature with references.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Good paper with clear writing and satisfactory claimed novelty.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Reject — should be rejected, independent of rebuttal (2)

  • [Post rebuttal] Please justify your decision

    The rebuttal did not address the issues of novelty, nor the fairness of the selected benchmarks, with sufficient clarity. A few fundamental responses were quite vague.



Review #3

  • Please describe the contribution of the paper

    The paper proposes a new technique via a spatial graph convolution layer to aggregate node and edge-embeddings during learning to better represent complex, brain network topology. The layer implements dual aggregation that uses an incidence matrix for modeling inter-simplex relationships.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • A topological approach to node-edge structural brain network analysis via spatial graph convolution and simplices to capture richer, geometric information compared to conventional GNNs that focus mainly on nodes.
    • Explicit learning of both node and edge representations, in contrast to prior models that treat edge features as auxiliary information. The model jointly trains embeddings for ROIs and connectivity measures.
    • Strong performance on using varied structural brain connectivity measures to predict AD progression—outperforms benchmarked models with high accuracy, precision, recall and F1-scores.
    • Interpretable and consistent results that align with clinical literature. Using Grad-CAM, the model identifies top-10 ROIs and connectomes with the highest activation to classify AD. Additional visualizations provided to visualize activation strength between ROIs and connectomes.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • Implementation details are very scarce, other than the number of layers and K. How many trials were the experiments run for? How many epochs? Were the metrics aggregated/weighted, especially given the multi-class setting?
    • Might be helpful to directly include in the paper differences in the individual contribution of the dual aggregation vs. the multi-hop aggregation. Even if pushed to the supplementary materials, you could briefly summarize those findings.
    • Curious about class imbalance in the SMC cohort and how that was addressed. There is also a mention of this technique being useful for cases with a low number of samples, but the dataset used is large, so it is not clear how that connects.
    • How scalable would this computation be for higher-resolution connectomes, i.e., brain parcellations with more nodes and edges? Briefly discuss computational efficiency and cost/impact.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?
    • There wasn’t a mention of the code being available currently/post-acceptance. Strongly suggest including this.
    • Hyperparameters for the primary 1-hop/2-hop models were not included e.g. learning rate, dropout, etc.
    • Could also add a url/citation to the ADNI dataset.
    • Might want to reference “Convolving Directed Graph Edges via Hodge Laplacian for Brain Network Analysis” by Park et al., 2023.
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • The approach is novel and interesting, particularly the dual aggregation framework and use of an incidence matrix within a simplicial complex. The authors could better position their contributions in the context of related methods e.g. hypergraphs, spectral convolution.
    • The methods are well described, and the experiments conducted on the ADNI dataset are strong. The interpretability analysis aligns with clinical findings. There could be a bit more discussion of any gaps/areas for improvement found, e.g., scalability to higher-dimensional connectomes. Also suggest including statistical testing of the significance of performance differences between your models and others (if any, e.g., a Wilcoxon test).
    • The paper is well written, organized, and clear, with good visualizations, though it contains a few syntactical errors. Would suggest proofreading for these. Additionally, for denoting matrices, bold variables like X, E, and A.
    • The work makes a valuable contribution to brain network analysis and has potential for clinical impact in understanding AD and related diseases. Releasing the code and models would facilitate reproducibility and adoption.
    • Discussing future directions, such as extension to other diseases, types of brain connectivity, and integration with other imaging modalities, could also be helpful to mention.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, the paper proposes a strong approach rooted in TDA for structural brain network analysis. The methods are described well mathematically, and the results show demonstrative improvements over some existing, related works. Interpretable findings grounded in clinical literature were also helpful to have alongside the explanations. However, I have major concerns that the technical/implementation details are almost entirely missing; this makes the work not reproducible and hinders rigor in research.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    The authors acknowledged my concerns in the rebuttal. They revised their stance on open-source code and sharing their experimental setup. They also moved up some important results on experiment comparisons and emphasized their contributions against suggested similar works.

    I would suggest excluding the claim of a low-sample-size advantage: n=2000 is not a low sample size, especially for a medical task (even if other datasets are larger, which is also rare). I am also not satisfied with the response to SMC class imbalance that simply points to the results; again, for the sake of reproducibility and generalization, it is not clear what can be done to address this for different datasets/tasks (beyond the exact pipeline).

    Overall, I think with the intentions to revise as stated by the authors, this should be accepted as it could serve as a useful technique for related works.




Author Feedback

We appreciate the reviewers’ constructive feedback. We will address all concerns and revise the text accordingly. For reproducibility, our code will be released upon acceptance.

[R3,R4] Novelty justification / Evaluation comparison Our approach represents a graph as a simplicial complex. It utilizes the incidence matrix and Hodge Laplacians for message passing, facilitating explicit joint learning of node and edge embeddings. Unlike other methods, it preserves edge information and captures inter-simplex relationships, leading to a deeper understanding of topological properties.
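
As a rough illustration of the operators named above, a simplicial-complex view of a brain graph can be built as follows; the unweighted, arbitrarily oriented construction is an assumption made for clarity, and the paper's exact weighting/normalization may differ.

```python
import numpy as np

def simplex_operators(adj: np.ndarray):
    """Build the node-edge incidence matrix B and Hodge Laplacians from a
    symmetric (n_rois, n_rois) connectivity matrix. Illustrative only."""
    n = adj.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j] != 0]
    B = np.zeros((n, len(edges)))
    for k, (i, j) in enumerate(edges):
        B[i, k], B[j, k] = -1.0, 1.0  # arbitrary orientation of edge (i, j)
    L0 = B @ B.T                      # graph (Hodge 0-) Laplacian on nodes
    L1 = B.T @ B                      # lower part of the Hodge 1-Laplacian on edges
    return B, L0, L1, edges
```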

[R3] Baseline justification SVM and GCN are standard conventional methods. CensNet and EGNN are the most commonly referenced and publicly available graph classification methods with node-and-edge aggregation.

[R3,R5] Lack of references / Syntax errors We will cite suggested references and correct syntactical errors.

[R3] Rationale / Motivation NENN [Yang et al., ACML 2020] aggregated information from neighboring nodes and edges, showing improved performance on typical graph tasks. With this rationale, we obtain richer inter-simplex embeddings by mixing L and B.

[R4] Misleading contribution for interpretability We will adjust the contributions to avoid any potential misunderstanding, stating that our model yields interpretable results ‘with Grad-CAM’.

[R4] No qualitative comparison Qualitative results from other methods were not included due to the page limit. The most influential ROIs differ across methods as they derive different representations, but several key ROIs, e.g., the Putamen, show up in common. Moreover, only ROI-wise Grad-CAMs are available for the other baselines, whereas our method derives them for both ROIs and edges. Lastly, our results demonstrate symmetric and concentrated ROIs/edges (Fig. 3), while the results from other methods show more scattered patterns.
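
For context on the ROI- and edge-wise Grad-CAMs mentioned here, one common adaptation of Grad-CAM to GNN embeddings is sketched below; the function name and normalization are assumptions, not the paper's exact procedure.

```python
import torch

def gradcam_scores(embeddings: torch.Tensor, class_score: torch.Tensor) -> torch.Tensor:
    """Grad-CAM-style activation per simplex (ROI or connection). Assumes
    `embeddings` is the (num_simplices, num_channels) output of the last
    convolution layer and is part of the autograd graph of `class_score`."""
    grads = torch.autograd.grad(class_score, embeddings, retain_graph=True)[0]
    weights = grads.mean(dim=0)               # channel-wise importance weights
    cam = torch.relu(embeddings @ weights)    # one score per ROI / connection
    return cam / (cam.max() + 1e-8)           # normalize to [0, 1]
```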

[R4] Position within the entire ADNI classification There exist various ADNI analyses, and they usually cannot be compared one-to-one. Most studies focus on binary classification, i.e., AD vs. control. [Sheng et al., IEEE JBHI 2022] report an accuracy of ~0.90 for binary classification and [Kolahkaj et al., Neuroscience Informatics 2023] report an accuracy of ~0.92 for 3-way classification, whereas our method demonstrates an even higher accuracy of 0.93 for 5-way classification.

[R4] Number of features and subjects for each experiment Three types of features (CT, Amyloid and FDG) for four experiments (three individual analyses and a combined one) are used, and each experiment encompassed all subjects possessing the respective features (See supplementary).

[R5] Implementation details Our model was trained with the Adam optimizer (LR 0.001) for 200 epochs on an NVIDIA RTX A6000 GPU. Model performance was evaluated using average accuracy, macro-precision, macro-recall, and macro-F1-score.
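
A minimal sketch of this stated setup (Adam, LR 0.001, 200 epochs, macro-averaged metrics) is given below; the batch structure (X, E, L0, L1, B, y) and the function signature are assumptions, not the authors' released code.

```python
import torch
from sklearn.metrics import f1_score, precision_score, recall_score

def train_and_evaluate(model, train_loader, val_loader, epochs=200, lr=1e-3):
    """Hedged sketch of the reported training/evaluation protocol."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for X, E, L0, L1, B, y in train_loader:  # assumed batch structure
            optimizer.zero_grad()
            loss = criterion(model(X, E, L0, L1, B), y)
            loss.backward()
            optimizer.step()
    # Macro-averaged metrics, as reported for the multi-class diagnostic task.
    model.eval()
    preds, truth = [], []
    with torch.no_grad():
        for X, E, L0, L1, B, y in val_loader:
            preds.append(model(X, E, L0, L1, B).argmax(dim=1).cpu())
            truth.append(y.cpu())
    preds, truth = torch.cat(preds).numpy(), torch.cat(truth).numpy()
    return {
        "accuracy": float((preds == truth).mean()),
        "macro_precision": precision_score(truth, preds, average="macro"),
        "macro_recall": recall_score(truth, preds, average="macro"),
        "macro_f1": f1_score(truth, preds, average="macro"),
    }
```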

[R5] Dual vs. multi-hop aggregation We will move the result without dual aggregation from Appendix Table 3 into the main manuscript (Table 1) and discuss it as the reviewer recommended.

[R5] Class imbalance for SMC? As seen in Table 1, the high precision, recall, and F1-scores indicate that class imbalance was not an issue.

[R5] Handling low sample-size? Indeed, ADNI is the largest dataset for Alzheimer’s analysis; however, its current sample size of ~2,000 is far smaller than that of typical graph benchmark datasets such as Tox21 (N=7,831) and Lipophilicity (N=4,200) used in CensNet and EGNN.

[R5] Contributions in the context of related methods While methods in [Jo et al., NeurIPS 2021] and [Huang et al., IPMI 2023] learn edge embeddings via hypergraph transformation and spectral filtering, respectively, they do not leverage relationships between nodes and edges as we do.

[R5] Computational efficiency / future directions The computational cost scales quadratically with the number of edges, and addressing this challenge is part of our future work. We will also apply our method to other brain disorders, with statistical experiments, in a journal extension.
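
To put the quadratic scaling in perspective, the short calculation below counts edges and the size of the edge-level operator for a hypothetical fully connected 116-ROI parcellation; the atlas size is only an example, not necessarily the one used in the paper.

```python
# Hypothetical example: fully connected connectome on a 116-ROI parcellation.
n_rois = 116
n_edges = n_rois * (n_rois - 1) // 2   # 6,670 undirected edges
hodge1_entries = n_edges ** 2          # 44,488,900 entries in the edge-level operator L1
print(n_edges, hodge1_entries)
```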




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Reviewers were mixed in their assessments of this paper. Many agree that strengths of the paper include that the presented method was interesting and novel, the results showed good performance, the interpretation analysis/discussion provided context of the results, and the paper was well-written. However, there were also serious concerns still remaining after rebuttal regarding placing the proposed method in context of related work, missing comparisons to other more related and recent baselines, and misconstruing interpretability of the model (authors say will address this in final version). Weighing these concerns, I decide to follow the majority of the reviewers’ recommendations, as I believe the presented method would be of interest, and the presented experimental results and analysis still demonstrate the potential of the approach.




Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Reject

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    The reviews of this paper were mixed before and after the rebuttal, making it a borderline case. Although some concerns of R4 and R5 could be addressed in the rebuttal, R3 pointed towards issues with the selection of benchmark methods and justification of novelty that remains unclear after the rebuttal. Overall, although the paper already provides some merit, I think it would need another revision before it is ready for publication.




Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    While a borderline case, the paper is generally interesting to the audience and has certain technical merits.



