Abstract

In the field of medical imaging, particularly in tasks related to early disease detection and prognosis, understanding the reasoning behind AI model predictions is imperative for assessing their reliability. Conventional explanation methods struggle to identify the decisive features in medical image classifications, especially when the discriminative features are subtle or not immediately evident. To address this limitation, we propose an agent model capable of generating counterfactual images that prompt different decisions when fed into a black box model. By employing this agent model, we can uncover the influential image patterns that drive the black box model’s final predictions, efficiently identifying the features that shape its decisions. We validated our approach in the rigorous domain of medical prognosis tasks, showcasing its efficacy and its potential to enhance the reliability of deep learning models in medical image classification compared to existing interpretation methods. The code is available at: https://github.com/ayanglab/DiffExplainer.
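
For concreteness, the following is a minimal sketch of how such a counterfactual search could work in principle, written in PyTorch. It is illustrative only, not the released implementation (see the repository above for that): it assumes a pretrained diffusion-autoencoder-style encoder/decoder pair and a differentiable agent model that predicts the black box’s output from the semantic latent, and all names (counterfactual_search, agent, lam) are hypothetical.

import torch
import torch.nn.functional as F

def counterfactual_search(x, encoder, decoder, agent, target_class,
                          steps=200, lr=0.05, lam=0.1):
    # Encode the input image into the semantic latent (encoder is frozen).
    z = encoder(x).detach()
    z_cf = z.clone().requires_grad_(True)
    opt = torch.optim.Adam([z_cf], lr=lr)
    target = torch.full((x.size(0),), target_class,
                        dtype=torch.long, device=x.device)
    for _ in range(steps):
        # Push the agent's prediction toward the target class, while a
        # proximity penalty keeps the latent close to the original.
        loss = (F.cross_entropy(agent(z_cf), target)
                + lam * (z_cf - z).pow(2).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Decode once at the end; the expensive diffusion decoding need not be
    # differentiable because gradients flow through the agent instead.
    return decoder(z_cf.detach())

Comparing the decoded counterfactual with the original image then highlights which patterns had to change to flip the decision.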



Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/0078_paper.pdf

SharedIt Link: https://rdcu.be/dV53Y

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72117-5_20

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/0078_supp.pdf

Link to the Code Repository

https://github.com/ayanglab/DiffExplainer

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Fan_DiffExplainer_MICCAI2024,
        author = { Fang, Yingying and Wu, Shuang and Jin, Zihao and Wang, Shiyi and Xu, Caiwen and Walsh, Simon and Yang, Guang},
        title = { { DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15010},
        month = {October},
        pages = {208--218}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper presents a post-hoc explainability method for DNN-based classifiers on CT images. It does not contain any key new strategy, but instead shows how to combine teacher-student learning and Diffusion Autoencoders to generate counterfactual images that explain the DNN predictions. For instance, an explanation can be ‘what would the image look like if the predicted survival rate score were higher?’. To the authors’ and my knowledge, this paper is the first to show this kind of method on CT images.
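
    For readers unfamiliar with the distillation side of this combination, here is a minimal sketch of what such a teacher-student step might look like in PyTorch. This is illustrative only, not code from the paper: the component names (distill_step, encoder, black_box, student) and the temperature T are hypothetical stand-ins.

    import torch
    import torch.nn.functional as F

    def distill_step(x, encoder, black_box, student, optimizer, T=2.0):
        # The frozen black box (teacher) scores the image, while the frozen
        # diffusion autoencoder's encoder produces its semantic latent.
        with torch.no_grad():
            teacher_logits = black_box(x)
            z = encoder(x)
        # The trainable student learns to reproduce the teacher's predictions
        # from the latent alone, so it can later stand in for the black box
        # during a differentiable counterfactual search.
        student_logits = student(z)
        loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()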

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • This kind of application is of high interest for the MICCAI community.
    • The paper is well motivated and very well written.
    • The results look convincing.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    To further assess the explanatory power of the generated counterfactuals, it would have been particularly interesting to have clinicians evaluate their plausibility.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    As mentioned above, I believe that assessing the method’s explanatory power with clinicians would be a plus for the paper.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Although the paper does not present key novel methodology, it is, to my knowledge, the first to show how to use diffusion autoencoders to explain the decisions of DNN classifiers on CT images.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    The main contribution of this paper is a method called DiffExplainer that generates counterfactual images of CT data with a diffusion-based generation framework. Counterfactuals are critically important for building trustworthiness and interpretability into black-box AI models, especially for highly sensitive tasks like medical image diagnosis or prognosis.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of the paper lies in the formulation of the novel combination of teacher-student learning and Diffusion Autoencoders. The paper is clearly written, with sufficient results from experiments on OSCI data. It was also shown that these methods can locate the important features more accurately than general post-hoc XAI methods like GradCAM.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1. The paper claims it is the first to work on counterfactuals for CT. I am not sure that is completely accurate, as there exist papers that deal with counterfactuals and experiment on CT, e.g., “ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging”. It would be good to have a discussion/citation of this paper.
    2. There are some inherent issues with explaining a black-box model (see “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead”). Can the authors add a short discussion of this, and of whether counterfactuals can provide a better alternative?
    3. There is one minor typo in the introduction: “computing attribution mapsinclude backpropaga-” should be “computing attribution maps include backpropaga-”.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Please see the strength and weakness comments above.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper is overall well written and very much relevant to the MICCAI community. I recommend weak accept, as the authors might want to address the comments from the weakness section.

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    The work presents a novel approach to address the challenge of identifying decisive features in medical image classifications, particularly when discriminative features are subtle or not immediately evident. The authors propose an agent model that generates counterfactual images, which, when fed into a black box model, prompt different decisions. This method allows for the uncovering of influential image patterns that impact the black box model’s final predictions. By employing this agent model, the paper effectively identifies features that influence the decisions of deep learning models. The approach is validated in the context of medical prognosis tasks, demonstrating its efficacy and potential to enhance the reliability of deep learning models in medical image classification compared to existing interpretation methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Overall, the paper is well written and easy to follow; however, a few portions of the methods section need clarification.
    2. The experiments section is clear, and most of the required details are provided.
    3. The authors perform plenty of ablation experiments to demonstrate the effectiveness of the proposed components.
    4. There are enough comparisons provided from the literature.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The discussion of the current research status in related fields is lacking.
    2. It would be good to use multiple backbone models, such as ViT, and also compare with XAI methods like LIME and SHAP.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Do you have any additional comments regarding the paper’s reproducibility?

    The authors provide the implementation code, and the method is clearly described. Therefore, the proposed method is highly reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Please reference the comments of weakness.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper proposes a novel idea that is beneficial for practical use. The method is presented clearly, and ample evidence supports its effectiveness through extensive results.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

We appreciate the time and effort the reviewers have devoted to providing their valuable feedback. We are encouraged by the reviewers’ recognition of our work’s motivation, novelty, clear presentation, and high clinical significance. Below, we present our replies to the reviewers’ comments.

Clinical Evaluation (R1 & R4): Thank you for your suggestions. We are currently collaborating closely with experts in lung diseases on an initial clinical evaluation of the proposed method. In the near future, we plan to conduct a more comprehensive evaluation of the proposed work, extending beyond the CT modality to various tasks and assessing its efficacy in explaining different backbones. Given the constraints on new experiments this year, we will also include comparisons with XAI methods like LIME and SHAP in our future work.

Related Work (R3 & R4): We will address some overlooked related work and supplement the citations and discussion in the second paragraph. Reference [1] presents a novel approach that leverages an existing counterfactual generation method to produce an enhanced saliency map, aiming to improve the classification model and its attention map. This showcases another utility of counterfactual images. However, the authors did not report the outcomes of the counterfactual images they generated, their efficacy in achieving the desired classification decisions, or the quality of these generated images. We will refine our claim to being the first to “develop” this kind of method for CT images to more accurately reflect our contribution and will cite this work to highlight different benefits that can be derived from counterfactual images. [1] ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging.

Comparison to Explainable Models (R3): Thank you for your question. We are keen to compare our method to the interpretable models proposed in [2] across the modalities they use, and would like to conduct a comprehensive comparison of the two approaches in terms of their classification abilities, explainability, and the features utilized for decision-making. In our past work, the classification performance of explainable models has been somewhat inferior to that of deep, end-to-end, but unexplainable models. This is often because explainable models are constrained to using features within the domain knowledge, sacrificing some performance for increased explainability. A significant advantage of counterfactual explanations is the ability to achieve optimal performance and explainability simultaneously. Another advantage is that explaining deep models can potentially uncover novel features that are overlooked by humans but leveraged by these models to achieve high performance. We will elaborate on this discussion in the final version of our paper. [2] Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.

Minor Issue (R3): Thank you for pointing out the minor error. We will correct it and conduct thorough proofreading before submitting the final version.




Meta-Review

Meta-review not available, early accepted paper.


