Abstract

Recent advancements in deep learning have significantly improved brain tumour segmentation; however, the results still lack confidence and robustness because models rely on image data alone, without biophysical priors or pathological information. Integrating biophysics-informed regularisation is one effective way to change this situation, as it provides a prior for automated end-to-end learning. In this paper, we propose a novel approach that uses brain tumour growth Partial Differential Equation (PDE) models as a regularisation for deep learning, operational with any network model. Our method introduces tumour growth PDE models directly into the segmentation process, improving accuracy and robustness, especially in data-scarce scenarios. The system estimates tumour cell density using a periodic activation function and, by integrating this estimate with biophysical models, better captures tumour characteristics. This approach not only aligns the segmentation more closely with actual biological behaviour but also strengthens the model's performance under limited-data conditions. We demonstrate the effectiveness of our framework through extensive experiments on the BraTS 2023 dataset, showing significant improvements in both the precision and the reliability of tumour segmentation.
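
For context, a standard choice of such a growth model is the Fisher-Kolmogorov reaction-diffusion equation for the normalised tumour cell density u; the exact PDE and boundary condition used in the paper may differ, so the following is only a representative form:

    \frac{\partial u}{\partial t} \;=\; \nabla \cdot \bigl( D \,\nabla u \bigr) \;+\; \rho \, u \,(1 - u) \quad \text{in } \Omega,
    \qquad D \,\nabla u \cdot \mathbf{n} \;=\; 0 \quad \text{on } \partial\Omega,

where D is the diffusion tensor (infiltration), \rho the proliferation rate, and the zero-flux (Neumann) condition prevents tumour cells from diffusing outside the brain domain \Omega.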

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2262_paper.pdf

SharedIt Link: https://rdcu.be/dY6fu

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72390-2_1

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2262_supp.pdf

Link to the Code Repository

https://github.com/uceclz0/biophy_brats

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Zha_Biophysics_MICCAI2024,
        author = { Zhang, Lipei and Cheng, Yanqi and Liu, Lihao and Schönlieb, Carola-Bibiane and Aviles-Rivero, Angelica I},
        title = { { Biophysics Informed Pathological Regularisation for Brain Tumour Segmentation } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15012},
        month = {October},
        pages = {3 -- 13}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The authors propose a new regularization strategy for tumour segmentation on MRI data. This new strategy is “biophysics informed”, meaning it takes in priors during training through the integration of boundary conditions, an assumed temporal dimension (treating each segmentation result from each optimization step as a temporal progression), and partial differential equations. The goal is to make the segmentation output behave similarly to tumour growth itself.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The results presentation: it is clear that the proposed method benefits segmentation performance both quantitatively and qualitatively. The presentation is clear, and the proposed strategy is novel to the best of my knowledge.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The use of a single evaluation dataset. The improvement in DICE score was small, but it was observed in every experiment. Showing a similar improvement on another dataset would benefit the paper, especially in the health domain, where scans and pathologies are heterogeneous across cohorts and sites.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Do you have any additional comments regarding the paper’s reproducibility?

    Only the code is needed, as the dataset is publicly available.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Overall I enjoyed the read. However, please note that part C of Figure 1 needs further revision, as it may confuse the reader regarding du/dt and how it is acquired (it is not a product of the neural network, as currently understood).

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Benefits of the proposed method are clear on the results. I wish there were additional datasets given the improvement was very minor. Method proposed is novel to the best of my knowledge.

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    The issue of a single random seed brought up by Reviewer #6 was confirmed, which is a concern for the significance of the results, especially when only one dataset is used. Other reviewers share the concern about a single evaluation dataset. These concerns raise questions regarding the generalizability of the proposed framework. The Weak Accept score is kept.



Review #2

  • Please describe the contribution of the paper

    This paper proposes a regularization term based on PDE that models tumor growth.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • Five baseline networks are compared with and without the regularization term.
    • The potential advantageousness of the proposed method is measured in different conditions: with fewer training data, with different loss functions, with different MRI modalities
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • Only one dataset was used
    • It seems that each experiment was run only once, thus, not accounting for the potential differences in performance due to the random neural network initialization.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    Code is not provided or promised. Since many MICCAI readers may not be familiar with this research area, the method might not be straightforward to implement. Therefore, sharing the code would increase the impact of the paper.

    Additionally, not all details of the training were shared; for example, the exact architecture of the networks, which could be provided, e.g., in the Appendix.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Method

    • Specify the details regarding the exact architectures used. For example, how many layers or downsampling blocks does nnUNet have? The architecture of the multilayer perceptron that is a central part of the proposed method is not described at all.
    • I’m not sure I understood the input size/shape of the multilayer perceptron. Does the time dimension mean that the flattened feature maps (Fig. 1, A) are concatenated over time? In that case, can this regularization term be used at the beginning of the optimization (where we only have one set of feature maps)?

    Experiments

    • The difference in performance after using the proposed regularization method is very small (e.g., 0.8728 vs. 0.8746, or 0.9248 vs. 0.9269). I recommend that the authors, if not for this paper then for future work, run experiments with different random seeds to account for the differences in performance due to the random initialization of the neural networks.

    Other

    • As described in the paper, one of the advantages of regularization terms is that they are plug-and-play and can be used easily. To help researchers do this, I strongly suggest sharing the source code of the method.

    Minor

    I found several typos throughout the paper:

    • Page 3, “The parameters \theta are obtained”
    • Page 3, “loss functional” -> “loss function”
    • Page 4, “incorporates an assumed”
    • Page 4, “feature maps of shape with shape”

    Writing suggestions:

    • Page 4, “This equations describes” -> “Equations X, Y, Z.. describe”
    • Figure 1. For consistency with the rest of the paper, I suggest writing “biophysics” instead of “bio-physics”. Also, in this figure it is written that this method is a loss function, but elsewhere in the paper the method is referred to as a regularization term.

    • Citations missing on page 6: Dice loss and Ranger optimizer
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    • Although the experiments were not run multiple times with different seeds, I believe that the slight increase in performance across the diverse set of conditions (different architecture, loss function, training set size) signals that the proposed regularization term is responsible for the performance gain.
  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    Post-rebuttal paper stack: 1 (n=4).

    After reading the authors’ response and the reviewers’ comments, I decided to keep my score at Weak Accept. Many of my concerns were clarified, but my biggest concern, the lack of multiple runs with different random seeds, was confirmed, and it weakens the paper. I still think that the main strengths outweigh this (very big) weakness.



Review #3

  • Please describe the contribution of the paper

    The authors of this study present a novel method for improving the brain tumor segmentation accuracy of (deep) image segmentation models. They suggest incorporating a biophysical model of tumor growth into the deep learning segmentation loss as an additional regularization term. The authors validate their method on the BraTS 2023 dataset, demonstrating that this additional regularization significantly boosts both the accuracy and consistency of tumour segmentation across various deep learning architectures such as UNet and SegResNet. Such deep learning models, with an inductive bias informing the model about the biological characteristics of the disease, would be especially beneficial for data-scarce clinical scenarios.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This study has several strengths. First, it formulates a biophysical model of tumor growth as a regularization term that can easily be plugged into any deep learning image segmentation architecture. The ideas are clearly presented, and the experiments sufficiently demonstrate the benefit of using the method.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The study does not explain the rationale behind equation (5) or why it really helps to embed the assumed time t with the input x. The benefit of using the temporal dimension t is not clear.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Please explain the rationale behind adding the temporal dimension t to the tumour cell density estimator network.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The authors formulate biological knowledge about the target task as an inductive bias that can be integrated with any image segmentation model as an additional regularization term. The mathematically sound formulation, the validation experiments, and the clear presentation of the ideas in this paper warrant acceptance. Furthermore, the proposed solution addresses a crucial issue: training deep learning models on limited data, which is common for medical image datasets.

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

We sincerely thank the reviewers for all their comments. [Reviewer #4] Q1. Equation (5)

Thanks for the comment. This kernel is essential for preserving spatial information when computing the second derivative with respect to location. We reshape the predicted cell-density vectors into a 3D format (B, C, H, W, D) and apply a 3D Laplacian kernel instead of the traditional central finite-difference method. This allows a more efficient implementation via F.conv3d, reducing the computational budget. We will add a clarifying note.
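
As a rough, hypothetical sketch of this detail (not the authors' released code; the kernel weights, shapes, and helper name are assumptions), a 3D Laplacian stencil can be applied channel-wise to a reshaped (B, C, H, W, D) tensor with F.conv3d:

    import torch
    import torch.nn.functional as F

    def laplacian_3d(u: torch.Tensor) -> torch.Tensor:
        """Approximate the spatial second derivative of u (shape (B, C, H, W, D))
        with a 7-point Laplacian stencil applied channel-wise via F.conv3d."""
        k = torch.zeros(1, 1, 3, 3, 3, dtype=u.dtype, device=u.device)
        k[0, 0, 1, 1, 1] = -6.0                      # centre of the stencil
        k[0, 0, 0, 1, 1] = k[0, 0, 2, 1, 1] = 1.0    # neighbours along the first axis
        k[0, 0, 1, 0, 1] = k[0, 0, 1, 2, 1] = 1.0    # neighbours along the second axis
        k[0, 0, 1, 1, 0] = k[0, 0, 1, 1, 2] = 1.0    # neighbours along the third axis
        c = u.shape[1]
        weight = k.repeat(c, 1, 1, 1, 1)             # one identical stencil per channel
        return F.conv3d(u, weight, padding=1, groups=c)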

Q2. Embed time t with the input x? And the benefit of t.

We clarify that the 2nd paragraph of page 4 conceptualises each segmentation optimisation step as a progression in time. The time embedding is crucial for simulating tumour dynamics during training, ensuring the model's output aligns with the expected growth dynamics. It also facilitates the calculation of the first-order time derivative, integrating temporal dynamics into our model. This will be made clearer in the camera-ready version.

[Reviewer #5 & Reviewer #6] Q1. Single evaluation dataset and heterogeneity across cohorts and sites

The BraTS 2023 dataset comprises diverse, multi-institutional, pre-operative structural MRI scans reflecting various clinical settings. This dataset is recognised for its heterogeneity in glioblastoma sub-regions, providing a robust model of variability across different cohorts and sites. It is widely used as a standard protocol for evaluating models in brain disease research. As future work, we plan to test on functional MRI as well.

Q2. The improvement in DICE

It is important to highlight the consistent improvement across all experiments. We conducted a Wilcoxon test on all results generated with and without the biophysics regularisation. The results demonstrated statistically significant improvements in DICE scores for each evaluated region and for the mean of the three regions, with a confidence level exceeding 95%.
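
A minimal sketch of such a paired test, assuming per-case DICE scores are available for both settings (the listed values are purely illustrative, not the paper's results):

    from scipy.stats import wilcoxon

    # Hypothetical per-case DICE scores for one region, with and without the regularisation.
    dice_baseline    = [0.871, 0.902, 0.845, 0.918, 0.880, 0.893]
    dice_regularised = [0.876, 0.905, 0.851, 0.920, 0.884, 0.897]

    # One-sided paired Wilcoxon signed-rank test: is the regularised model better?
    stat, p_value = wilcoxon(dice_regularised, dice_baseline, alternative="greater")
    print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")  # p < 0.05 -> significant at the 95% level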

[Reviewer #6] Q1. Detail of estimator’s architecture

The feature maps (B, C, H, W, D) are initially flattened into vectors (B, C, H × W × D) with respect to C. These are then concatenated with a time matrix T of the same size, resulting in an input of size (B, 2C, H × W × D). The estimator comprises three fully connected layers with two sine activation functions (nn.Linear + sine + nn.Linear + sine + nn.Linear). The outputs are (B, C, H × W × D), used to compute the first-order time derivative, and are finally reshaped back to (B, C, H, W, D) for the subsequent calculations of the second derivative with respect to location and the boundary conditions.
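
A minimal sketch of an estimator with this shape handling, under the assumption that the linear layers act along the channel dimension; the hidden width and class names are illustrative, not the authors' exact implementation:

    import torch
    import torch.nn as nn

    class Sine(nn.Module):
        """Periodic activation, as in SIREN-style networks."""
        def forward(self, x):
            return torch.sin(x)

    class DensityEstimator(nn.Module):
        """Hypothetical estimator: three nn.Linear layers with two sine activations,
        applied to flattened feature maps concatenated with a time matrix."""
        def __init__(self, channels: int, hidden: int = 128):  # hidden width is an assumption
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * channels, hidden), Sine(),
                nn.Linear(hidden, hidden), Sine(),
                nn.Linear(hidden, channels),
            )

        def forward(self, feats: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
            b, c, h, w, d = feats.shape
            x = feats.flatten(2)                 # (B, C, H*W*D)
            y = torch.cat([x, t], dim=1)         # (B, 2C, H*W*D); t has the same shape as x
            y = y.transpose(1, 2)                # (B, H*W*D, 2C): MLP acts on the channel axis
            u = self.net(y).transpose(1, 2)      # (B, C, H*W*D), used for the time derivative
            return u.reshape(b, c, h, w, d)      # back to (B, C, H, W, D) for spatial terms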

Q2. Detail of the other compared networks

For our study, the nnU-Net model was reimplemented using MONAI's DynUNet, adhering to the original paper with a kernel size of [3, 3, 3], five downsampling and upsampling steps, instance normalisation, and no deep supervision, matching the configurations used in the BraTS challenges. The UNETR setup follows the original design with a feature size of 16, a hidden size of 768, an MLP dimension of 3072, 12 heads, perceptron-based positional embedding, and instance normalisation. For SegResNet and SegResNet-VAE, while based on the default MONAI settings, we adopted instance normalisation for consistency across all networks.
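
A hedged reconstruction of this nnU-Net configuration using MONAI's DynUNet; the input/output channel counts and the per-stage kernel/stride lists are assumptions inferred from the rebuttal, not the authors' released settings:

    from monai.networks.nets import DynUNet

    model = DynUNet(
        spatial_dims=3,
        in_channels=4,                            # four BraTS MRI modalities (assumed)
        out_channels=3,                           # three tumour sub-regions (assumed)
        kernel_size=[[3, 3, 3]] * 6,              # kernel size [3, 3, 3] at every resolution
        strides=[[1, 1, 1]] + [[2, 2, 2]] * 5,    # five downsampling steps
        upsample_kernel_size=[[2, 2, 2]] * 5,     # matching upsampling steps
        norm_name="instance",                     # instance normalisation
        deep_supervision=False,                   # no deep supervision, per the rebuttal
    )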

Q3. Regularisation be used during the beginning of the optimization?

Early in training, lower-level features such as texture are more prevalent. We therefore suggest applying the regularisation to smaller feature maps at the middle stages of training, where the feature maps contain higher-level, more pathological representations.

Q4. Random seed and neural network initialization

Thank you for your suggestion. We are taking your recommendation to include multiple runs with different random seeds in our future work to better assess our model’s performance.

[Other minor comments]

All typos will be corrected in the camera-ready version. More details of the networks will be provided in the supplementary file. The code will be released upon acceptance.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Reject

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    All reviewers’ confidence levels are “Somewhat confident (2)”. I think there are two main reasons: the paper is difficult to follow, and the reviewers are not familiar with the physics of tumor growth. As I have published papers on both tumor growth personalization and tumor segmentation, I would like to provide more detail on why I suggest rejection for this paper.

    I am happy to see the combination of the physics of tumor growth and tumor segmentation. Nevertheless, this paper is very difficult to follow, and its scientific rigor is low. The diffusion-reaction equation is used when the initial shape and location of the tumor are available, and they are crucial. As the shape and location are unavailable before segmentation, the authors need to explain and justify their approach carefully and in detail. However, the appropriate rationales and explanations are missing, and eq. (3) is incomprehensible. As mentioned by the reviewers, for y = (x_1, …, x_C, t_1, …, t_C), the time vectors t_i are not clearly defined. Are they identical for all channels? Do they change with the learning steps? Furthermore, which layer is X from? Why is there a sin function in (3)? These all remain unclear, even after the rebuttal.

    In the implementation, the spatial domain for eqs. (3) and (4) is 16x16x16, which is too small for reaction-diffusion. In my view, with the specific approach in this paper, the physics-informed part is unnecessary. This is probably why the differences between the proposed and existing frameworks are very small (less than 1%), especially in Figure 2(c) for different training sizes.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    All reviewers recommend acceptance. It is acknowledged that one of the ACs raises valid concerns, which should be discussed in the final version.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A


