Abstract

Grading of prostate cancer plays an important role in the planning of surgery and prognosis. Multi-parametric magnetic resonance imaging (mp-MRI) of the prostate can facilitate the detection, localization and grading of prostate cancer. In mp-MRI, Diffusion-Weighted Imaging (DWI) can distinguish a malignant neoplasm from benign prostate tissue due to a significant difference in apparent diffusion, controlled by the diffusion sensitization factor (b-value). DWI using a high b-value is preferred for prostate cancer grading, providing high accuracy despite a decreased signal-to-noise ratio and increased image distortion. On the other hand, a low b-value could avoid confounding pseudo-perfusion effects, but the normal prostate parenchyma then shows very high signal intensity, making it difficult to distinguish from prostate cancer foci. To fully capitalize on the advantages and information of DWIs with different b-values, we formulate prostate cancer grading as a multi-view classification problem, treating DWIs with different b-values as distinct views. Multi-view classification aims to integrate views into a unified and comprehensive representation. However, existing multi-view methods cannot quantify the uncertainty of views and lack an interpretable and reliable fusion rule. To tackle this problem, we propose uncertainty-aware multi-view classification with uncertainty-aware belief integration. We measure the uncertainty of DWI based on Evidential Deep Learning and propose a novel strategy of uncertainty-aware belief integration to fuse multiple DWIs based on uncertainty measurements. Results demonstrate that our method outperforms current multi-view learning methods.
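The abstract's mention of measuring DWI uncertainty "based on Evidential Deep Learning" follows the standard subjective-logic construction from the EDL literature. The following minimal Python sketch shows that generic recipe (per-class evidence mapped to a Dirichlet, then to belief masses and a scalar uncertainty); it is a textbook illustration, not the paper's actual network or exact formulation.

```python
# Generic EDL / subjective-logic mapping: per-class evidence e_k is turned into
# Dirichlet parameters alpha_k = e_k + 1, from which belief masses and a scalar
# uncertainty are derived. Illustrative sketch only, not the paper's model.

def edl_opinion(evidence):
    K = len(evidence)                      # number of classes (e.g. cancer grades)
    alpha = [e + 1.0 for e in evidence]    # Dirichlet concentration parameters
    S = sum(alpha)                         # Dirichlet strength
    belief = [e / S for e in evidence]     # per-class belief mass
    u = K / S                              # uncertainty mass
    return belief, u

# A confident view: strong evidence for class 0 gives low uncertainty.
b, u = edl_opinion([8.0, 1.0, 0.0])
# belief masses and uncertainty always sum to 1: sum(b) + u == 1
```

Little or no evidence (e.g. for a noisy, low-quality view) drives `u` toward 1, which is what allows the fusion step to down-weight such views.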

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2652_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2652_supp.pdf

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Don_UncertaintyAware_MICCAI2024,
        author = { Dong, Zhicheng and Yue, Xiaodong and Chen, Yufei and Zhou, Xujing and Liang, Jiye},
        title = { { Uncertainty-Aware Multi-View Learning for Prostate Cancer Grading with DWI } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15010},
        month = {October},
        page = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The study treats DWIs acquired at different b-values as distinct views in a multi-view classification problem.

    An “uncertainty-aware belief integration” method is proposed, which integrates the views based on their measured uncertainties.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper proposes a new and interesting method for integrating several EDL predictions.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The proof of Proposition 1 is confusing and unclear. Additionally, the proposed cross-view conflict regulation needs further discussion.
    2. The formulation of the “integrated” alpha and “integrated” Dirichlet distribution is not introduced clearly, which adds to the confusion surrounding the overall loss function.
    3. Other methods that conduct uncertainty estimation should also be included in the baselines. Moreover, widely used metrics of uncertainty estimation, such as confidence calibration, Brier score, and OOD detection, are missing.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    1. The formulation of “integrated” Dirichlet distribution should be introduced more clearly. The proof of Proposition 1 is unclear.
    2. The proposed cross-view conflict regulation needs to be discussed more.
    3. Experiments estimating the performance of uncertainty prediction should be added.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Although this paper proposes an interesting method to integrate several EDL predictions, the formulation of that method is not discussed clearly. The paper also lacks some important experiments regarding the performance of uncertainty prediction.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    The paper proposes a method based on evidential deep learning for multi-view prostate cancer grading (classification).

    • It empirically demonstrates on a prostate cancer dataset that the proposed method outperforms multiple baselines in terms of multiple metrics.
  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The authors propose a fusion (belief integration) method for multi-view evidential classification.
    • They provide 2 theoretical guarantees for the proposed fusion rule.
    • They empirically evaluate their method by comparing it with multiple baselines and show superiority through multiple metrics.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The idea of combining evidential deep learning with multi-view classification is not new, and the only novelty of the proposed method is the introduction of a belief combination rule. This is not a significant contribution, as one may propose a new combination rule merely by ensuring that the resulting beliefs sum to 1. A combination rule is valuable only if it adds a desirable property over existing rules.
    • The paper does not compare its results with an important relevant paper: https://arxiv.org/pdf/2204.11423.
    • The comparison made in Fig. 3(a) of the paper is not fair, as it compares the accuracy of different single-view models with the proposed multi-view method. In this figure, the authors should compare their results with other state-of-the-art multi-view classification methods.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • In Proposition 1, the \epsilon must be replaced by \delta.
    • Eq. 10: Aren’t all the entries of the gradient vector g_i^v, for all values of i and v, zero except for the T-th entry, with T being the index of the true class of sample i? If so, isn’t the value of the consistency loss in Eq. 11 always zero? And if so, why does adding this loss change the performance in the ablation studies? Please clarify.
    • The claim of Proposition 1 is that the method “can not significantly impact the classification accuracy”, but the proof only shows that the resulting uncertainty does not change significantly under the stated conditions. Please fix.
    • Compare with other relevant methods, e.g., the one mentioned above.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The novelty of the proposed method is not significant enough to grant direct acceptance. However, the paper could potentially be accepted on the strength of its empirical results, depending on the authors’ answers to the questions above, which determine the validity of the experiments and results. Also, there are many conceptual errors that prohibit an early acceptance.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    This paper presents a method for the grading (classification) of prostate cancer from multi-b-value DWI MRI. The method uses multi-view learning with uncertainty-based integration and quantifies uncertainty using evidential deep learning. The method was evaluated on a private dataset (N=134), where each subject was imaged at 3 b-values, for a total of 2144 views. Ablation studies and comparisons to alternative multi-view techniques were presented with promising results.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This is a clinically significant problem.

    The formulation is very solid, using evidential deep learning and uncertainty weighting.

    The Uncertainty-Aware Belief Integration is novel, well-designed and appears to have good performance.

    The evaluation is well-executed and convincing.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The evaluation of uncertainty estimation is qualitative; the results are plausible but do not support the claim that the approach can “accurately measure uncertainty”.

    Results on a separate test set and/or public dataset would be helpful.

    Image “corruption” in Figure 3b not explained.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    More comprehensive evaluation would make this paper stronger.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Although there is room for improvement, this is an interesting method for a clinically significant problem with a good evaluation.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    I found the other reviews and the rebuttal helpful. I remain of the opinion that this paper has the potential to make a contribution due to its formulation and results, and will stay at my score.




Author Feedback

Q1: Novelty (R3). Thank you for your insight. While we acknowledge that evidential deep learning combined with multi-view classification has been explored, we believe our method offers a novel and valuable approach. The currently popular method, the Dempster-Shafer (D-S) fusion rule, completely ignores conflict, attributing any mass associated with conflict to the empty set. As shown in the following example for a three-class classification problem, if one view conflicts highly with another, D-S fusion may incorrectly indicate a high belief for the second class. In contrast, our fusion method adheres to common sense and effectively reduces the likelihood of misclassification.

    Mass | View 1 | View 2 | D-S  | Ours
    b1   | 0.99   | 0.00   | 0.00 | 0.495
    b2   | 0.01   | 0.01   | 1.00 | 0.010
    b3   | 0.00   | 0.99   | 0.00 | 0.495
    u    | 0.00   | 0.00   | 0.00 | 0.000

Q2: Performance and statistical analysis (R1, R2, R3). We compare the ECE values of various models in Table 1; ECE is a commonly used metric for measuring model calibration. Additionally, we compare our results with other commonly used multi-view methods, such as ETMC (2204.11423), TMDL-OA (AAAI 2022), and MMD (CVPR 2023). Taking ETMC as an example, we outperform it in accuracy, ECE, and F1-score by 3.7%, 0.012, and 3.78%, respectively. Furthermore, we conducted experiments on OOD detection. As we cannot present the experimental results in the rebuttal, we will release the results and the code on GitHub after acceptance.

Q3: Clarification of Proposition 1 (R2). The role of a view in multi-view fusion is determined by its uncertainty level: high uncertainty results in a weaker impact on the fusion. For instance, if a DWI has high uncertainty due to low image quality, its integration will not significantly alter the overall uncertainty. Even in the presence of severe image corruption such as Gaussian noise, the high uncertainty of the DWI will have a negligible effect on the fusion, thereby ensuring the robustness of our fusion method.
Q4: Evaluation of uncertainty estimation (R1). The accuracy of our uncertainty measurement has two aspects: 1. Our method clearly measures the quality of DWI at different b-values, indicating higher uncertainty for both high-b-value and low-b-value images. 2. The uncertainty measurement results after fusion align with Proposition 2. We will provide a more detailed explanation of this in the final version.

Q5: Discussion and revision of cross-view conflict regulation (R2, R3). We apologize for the misstatement in Eq. 11. We use the Manhattan distance as our cross-view conflict regulation. By measuring the loss in this way, the gradients of the same sample under different views tend to approximate each other, ensuring view consistency. We will revise Eq. 11 to the Manhattan-distance form in the final version.

Q6: Description of Fig. 3(a) (R3). The purpose of Fig. 3(a) is to compare the effectiveness of using single-view DWI with our fusion method. It demonstrates the improvement in classification accuracy achieved by our fusion method, while also indicating the limitations in model performance when using DWI with particularly high or low b-values, due to their low image quality.

Q7: Writing misunderstandings and mistakes (R1, R2, R3). We will address and correct these issues in the final version. 1. “\epsilon” -> “\delta” in Proposition 1. 2. “cannot significantly impact the classification accuracy” -> “cannot significantly impact the integrated uncertainty” in Proposition 1. 3. In Fig. 3(b), we indicate in the legend that the data corruption method we used is Gaussian noise; the specific intensity of the Gaussian noise is $\epsilon = 0.5$. 4. The form of the integrated alpha is similar to that of the belief and uncertainty; it is a weighted average of the alpha values from each view.

Q8: Reproducibility of our work (R1). We will make our code publicly available after acceptance.
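The conflict example in Q1 can be reproduced numerically. The sketch below implements Dempster's rule for two subjective-logic opinions and, for contrast, a simple uncertainty-weighted average; the latter is an illustrative assumption based on the rebuttal's description of the integrated quantities as "a weighted average ... from each view", not the paper's exact integration rule.

```python
# Two-view fusion of subjective-logic opinions (belief masses b plus an
# uncertainty mass u that together sum to 1). ds_combine is Dempster's rule;
# weighted_average is an assumed certainty-weighted rule for illustration.

def ds_combine(b1, u1, b2, u2):
    """Dempster's rule: conflicting mass is discarded and renormalized away."""
    conflict = sum(b1[i] * b2[j]
                   for i in range(len(b1))
                   for j in range(len(b2)) if i != j)
    scale = 1.0 - conflict
    b = [(b1[k] * b2[k] + b1[k] * u2 + b2[k] * u1) / scale
         for k in range(len(b1))]
    u = u1 * u2 / scale
    return b, u

def weighted_average(b1, u1, b2, u2):
    """Assumed rule: weight each view by its certainty (1 - u), then average."""
    w1, w2 = 1.0 - u1, 1.0 - u2
    z = w1 + w2
    b = [(w1 * b1[k] + w2 * b2[k]) / z for k in range(len(b1))]
    u = (w1 * u1 + w2 * u2) / z
    return b, u

# The two maximally conflicting views from the rebuttal's Q1 table (u = 0).
view1 = ([0.99, 0.01, 0.00], 0.00)
view2 = ([0.00, 0.01, 0.99], 0.00)

ds_b, ds_u = ds_combine(view1[0], view1[1], view2[0], view2[1])
avg_b, avg_u = weighted_average(view1[0], view1[1], view2[0], view2[1])
# ds_b  ≈ [0, 1, 0]           -- all belief piles onto the weakly supported class
# avg_b ≈ [0.495, 0.01, 0.495] -- conflict is preserved, matching the table
```

Running this reproduces the table's counter-intuitive D-S outcome, where nearly all mass lands on the class neither view strongly supports, versus the averaged result that keeps the conflict visible.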




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Reject

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Two reviewers who initially recommended weak rejection did not update their reviews. However, the rebuttal indicates that the authors know what they are doing. If this application is of significant interest, I would green light its acceptance.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).




Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    The paper addresses a relevant problem with a solid formulation. There is also some novelty, namely Uncertainty-Aware Belief Integration, in the method design. The results are interesting and promising. Although there are some weaknesses, considering the page limit and the authors’ rebuttal for clarification, I believe the strengths warrant acceptance.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).




Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Paper presents a method for the grading of multi b value DWI MR for prostate cancer using multi-view learning with uncertainty-based integration. Reviewer concerns included qualitative measurement of uncertainty, lack of separate test set, confusion in methodological conceptualization. The authors have reasonably clarified the questions posed. While borderline, I would lean towards accept.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).



