Abstract

Brain networks are an important tool for understanding the brain, offering insights for scientific research and clinical diagnosis. Existing models for brain networks typically focus only on brain regions or overlook the complexity of brain connectivity. MRI-derived brain network data are commonly susceptible to connectivity noise, underscoring the necessity of incorporating connectivity into the modeling of brain networks. To address this gap, we introduce a differentiable module for refining brain connectivity. We develop a multivariate optimization based on information bottleneck theory to address the complexity of the brain network and filter noisy or redundant connections. Moreover, our method functions as a flexible plugin that is adaptable to most graph neural networks. Our extensive experimental results show that the proposed method significantly improves the performance of various baseline models and outperforms other state-of-the-art methods, indicating its effectiveness and generalizability in refining brain network connectivity. The code is available at https://github.com/Fighting-HHY/D-CoRP.
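For readers unfamiliar with information bottleneck (IB) theory, the generic IB objective can be written as below. This is only the standard textbook form, not necessarily the exact multivariate formulation developed in the paper: learn a compressed representation Z of the input network X that remains predictive of the target Y.

```latex
% Generic information bottleneck objective (standard form; the paper's
% multivariate variant may differ). I(.;.) denotes mutual information and
% beta > 0 trades off compression against predictive power.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```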

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2022_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2022_supp.pdf

Link to the Code Repository

https://github.com/Fighting-HHY/D-CoRP

Link to the Dataset(s)

http://umcd.humanconnectomeproject.org/umcd/default/index

BibTex

@InProceedings{Hu_DCoRP_MICCAI2024,
        author = { Hu, Haoyu and Zhang, Hongrun and Li, Chao},
        title = { { D-CoRP: Differentiable Connectivity Refinement for Functional Brain Networks } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15002},
        month = {October},
        page = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    A new deep-learning-based edge-denoising method is proposed for refining brain connectivity.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The strength of this approach lies in its innovative methods for enhancing the learnability and optimization of the network for refining brain connectivity.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Figure 2 shows a visualization of the refined brain network, but how the refinement of functional connectivity is evaluated is not clear.

    2. The Experiments and Results section is poorly written. Table 2 should be discussed in more detail to highlight the efficacy of the methodology.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    1. It is written that brain age is associated with brain network features and therefore serves as an ideal benchmark for model evaluation. Is the proposed model used for brain age prediction? If yes, Table 1 shows high MAE and RMSE values, which is not a good result for brain age prediction.

    2. Figure 2 shows a visualization of the refined brain network; however, how the refinement is evaluated is not clear.

    3. Address the comments listed in the Weaknesses section.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The Results and Discussion sections need further revision to make the study objective clear and to justify the inferences drawn from the study.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This paper proposes a method for refining brain connectivity based on information bottleneck theory, which appears useful and effective, compared with other existing methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The proposed method addresses the complexity of brain networks, which was not adequately considered in previous methods that mostly considered only the nodes in the network. The experimental results show that the proposed method outperformed other methods and can improve the performance of a number of baseline models included in this work.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The presentation needs improvement:

    1. The first line on page 2, “However, these models only focus on node features without considering edge connections”, is unclear about what exactly it refers to; readers would expect a more informative description.

    2. The introduction mentions multiple previous works/projects that should be described in more detail. For example, how exactly does GSL denoise a graph structure? How does IB theory actually focus on “model-processed features of the input graph”? Brief one- or two-sentence descriptions of these methods are necessary before demonstrating their shortcomings.

    3. The concept of a “plugin” should be clearly defined in the context of this work, as readers may think of various types of plugins, while here it seems to be just an additional processing step.

    4. The method section needs to improve its clarity. The current version is written for expert readers who already know the work well, but readers interested in the work yet outside closely related fields would be puzzled. For example, concretely describe the “node features” (list a few instances so readers can conceptualize the idea), and explain what “learnable” means in “learnable edge”. The caption of Figure 1 should also be expanded to explain the flow/arrows.

    5. On page 4, the content concerning Equation 4 is inadequately presented. For example, in “Differentiable sampling for masking matrix”, why the relaxation of the Bernoulli distribution is used is not explained; likewise, the behavior as tau approaches 0 is stated without a clear explanation. Section 2.3 has similar issues.

    6. The Bernoulli distribution is based on binary experiments, while the connectivity handled in this work appears to be non-binary. How these are compatible is not clear.

    7. In Figure 2, what the authors aim to demonstrate is not exactly clear. Please expand the caption and highlight the relevant results to guide reading.

  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Do you have any additional comments regarding the paper’s reproducibility?

    Please rewrite the method section to make the presentation clearer.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    As this is a method paper, the presentation of the proposed method should be very clear. Please refer to the comments in the weaknesses portion and clarify the points accordingly. Given the length limit of the paper, the main text may briefly describe the idea and leave detailed information to the Supplementary Material to facilitate understanding and reproducibility of the work. In particular, the authors may want to expand the method section to explicitly elaborate on the terms unique to the proposed method and not found in other methods. The current version is somewhat oversimplified, relying on readers to work out the details themselves, although the experiments show a positive outcome over the other methods.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed method appears reasonable, and the experimental results appear effective in improving the performance of a number of baseline models and outperforming other methods. However, the method will be convincing only if the clarity of the presentation, and especially of the technical details of the method, is improved.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    The authors’ plan to revise and improve the manuscript, as described in the rebuttal, addresses the concerns raised by the reviewers. I therefore recommend “accept”.



Review #3

  • Please describe the contribution of the paper

    The paper proposes a functional brain network refinement method to remove noisy or irrelevant connections, which achieves better and more stable results compared with other SOTA methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper is overall nicely written, the method is solid, and the topic is interesting.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The description of the ground truth is not very clear. Are the ground-truth connections manually defined, or are they calculated by certain methods? How can you make sure that the ground-truth connections are really the “real” connections?
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Do you have any additional comments regarding the paper’s reproducibility?

    No.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Major comments: see the weaknesses section.

    Minor suggestions:

    • In Figure 3, it might be better not to overlap the legend with the charts.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall the paper is nicely written, but there are still some things that need to be improved.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

We appreciate all the reviewers for their constructive comments.

[Overall presentation and important concepts (R3)]: We appreciate this comment and will clarify key concepts throughout the paper, including:
1) The description ‘previous models focusing on node features’: we intended to identify the gap that existing GNNs typically process node features by aggregating connected nodes without updating edge weights. This could challenge learning schemes based on brain networks with substantial edge noise.
2) Conventional denoising in graph structure learning (GSL): GSL typically denoises the graph according to specific properties, e.g., the relationships within a community when optimizing a community network, where edges are optimized by masking irrelevant or noisy edges.
3) Information bottleneck (IB): IB minimizes the mutual information between the processed features and the input to optimize information compression, while maximizing the mutual information between the features and the output to improve the model’s prediction performance. Since calculating mutual information only involves the data distribution and does not impose specific restrictions on graph types, IB is selected as the theoretical basis for developing our method to denoise brain connectivity effectively.
4) The concept of ‘plugin’: we appreciate this comment and will clarify it.
5) Node features: vectors describing the properties of brain regions, including rows of the connectivity matrix and time-series recordings of brain activity. We will expand our explanations in both the main text and Figure 1.
6) Learnable edge mask: the masking matrix is learnable, i.e., the masks in the matrix are updated during model training for better model performance.
7) We have introduced in 2.1 that edge refinement is achieved by a masking process; in 2.2, we mentioned that a Bernoulli distribution is used to create a binary masking matrix to mask the original connectivity matrix; as Bernoulli sampling is not differentiable, we need a relaxation of it.
We appreciate the reviewer’s constructive comments and will expand all the above explanations to improve the paper’s presentation.

[Figure captions (R3, R4, R7)]: We thank the reviewers for this comment. Figure 2 presents exemplar refined networks produced by different approaches for visual comparison. For a comprehensive comparison, we compared our method with widely used traditional filtering methods and other SOTA GSL methods, such as VIB-GSL. We used brain age prediction as an indirect evaluation of model performance. Our experiments show that our method outperforms other methods in quantitative evaluations. For a qualitative evaluation, we present an example in 3.4, where the refined network demonstrates network modularity, a vital property of the human brain network. This result implies that the proposed approach can effectively refine network connections to further characterize the brain. In the final version, we will expand all these explanations in the main text and the captions of Figs. 1-3.

[Model performance (R4)]: Due to the heterogeneity of the utilized dataset, it is acknowledged that the age prediction task is challenging. However, our experiments show that adding the proposed method significantly improves model performance, supporting the usefulness of the proposed edge refinement module.

[Model efficacy (R4)]: In 2.3, we have described the computational complexity of the efficient version of D-CoRP. In the final version, we will add metrics on model efficiency, including inference time, to Table 2.

[Refinement evaluation (R7)]: Since brain age reflects vital properties of the human brain and has been widely studied, we chose brain age prediction to evaluate model performance. Within the model framework, we only replaced our method with the comparison methods at the network refinement step. Combining the above, the better performance on the age prediction task in our experimental results indicates better refinement by our method.
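To complement points 6) and 7) of the rebuttal above, the following is a minimal PyTorch sketch of a learnable edge mask obtained through a relaxed (differentiable) Bernoulli sampling step. It only illustrates the general technique; it is not the authors’ D-CoRP implementation, and the class name, shapes, temperature value, and symmetrization step are all assumptions.

```python
# Illustrative sketch (not the authors' code): a learnable edge mask using the
# binary-concrete relaxation of the Bernoulli distribution, applied to a
# weighted functional connectivity matrix.
import torch
import torch.nn as nn


class RelaxedBernoulliEdgeMask(nn.Module):
    def __init__(self, num_nodes: int, tau: float = 0.5):
        super().__init__()
        # One learnable logit per edge; sigmoid(logit) is the keep-probability.
        self.logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.tau = tau  # temperature; as tau -> 0 the soft mask approaches hard 0/1

    def forward(self, adjacency: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Logistic reparameterization: a differentiable "soft" Bernoulli sample.
            u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            mask = torch.sigmoid((self.logits + noise) / self.tau)
        else:
            # Deterministic hard mask at inference time.
            mask = (torch.sigmoid(self.logits) > 0.5).float()
        mask = 0.5 * (mask + mask.t())  # keep the mask symmetric for undirected networks
        return adjacency * mask         # masked (refined) connectivity matrix


# Minimal usage: refine a random 90-node connectivity matrix before feeding a GNN.
refiner = RelaxedBernoulliEdgeMask(num_nodes=90)
fc = torch.rand(90, 90)
refined_fc = refiner(fc)
```

As the temperature tau approaches 0, the soft mask values concentrate near 0 or 1, which is the limiting behavior the reviewers asked to have explained; the mask multiplies the weighted (non-binary) connectivity matrix, so only the mask itself is (relaxed) binary.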




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Post rebuttal, two of three reviewers are leaning towards an accept (one weak, one clear). Although Rev. 4 has not modified their score post rebuttal from weak reject, their major concerns regarding the performance of the D-CoRP module (as an add on to existing GNNs), the qualitative utility of including this module, and the efficiency of the framework seem to have been addressed adequately in the rebuttal.

    Overall, after considering the reviews, paper and rebuttal, I would like to accept this work. I think the contribution is interesting to the community and that the provided baselines/comparisons and experiments make a compelling argument supporting the utility of the methodology.

    However, I would recommend the authors pay close heed to the following in the final version if accepted:

    (1) Provide clarity on the node features, how the initial graph connectivity is generated from the FC matrices, and the edge features used in the manuscript. (2) Mention the parcellations used for each dataset. (3) Provide a clearer distinction between the regular and efficient versions (and their computational complexity). (4) Please fix the heading in Table 2, where the first column suggests that higher MAE is better (also in the supplementary). (5) Please fix the inline references.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A


