Abstract

In this paper, we introduce IHRRB-DINO, an advanced model designed to assist radiologists in effectively detecting breast masses in mammogram images. This tool is specifically engineered to highlight high-risk regions, enhancing the capability of radiologists in identifying breast masses for more accurate and efficient assessments. Our approach incorporates a novel technique that employs Data-Driven Instance Noise (DINO) for object localization, which significantly improves breast mass localization. This method is complemented by data augmentation using instance-level noise during the training phase, focusing on refining the model’s proficiency in precisely localizing breast masses in mammographic images. Rigorous testing and validation conducted on the BI-RADS dataset using our model, especially with the Swin-L backbone, have demonstrated promising results. We achieved an Average Precision (AP) of 46.96, indicating a substantial improvement in the accuracy and consistency of breast cancer (BC) detection and localization. These results underscore the potential of IHRRB-DINO in contributing to advancements in computer-aided diagnosis systems for breast cancer, marking a significant stride in the field of medical imaging technology.

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/3797_paper.pdf

SharedIt Link: https://rdcu.be/dVZec

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72378-0_11

Supplementary Material: N/A

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Kas_IHRRBDINO_MICCAI2024,
        author = { Kasem, Mahmoud SalahEldin and Abdallah, Abdelrahman and Abdelhalim, Ibrahim and Alghamdi, Norah Saleh and Contractor, Sohail and El-Baz, Ayman},
        title = { { IHRRB-DINO: Identifying High-Risk Regions of Breast Masses in Mammogram Images Using Data-Driven Instance Noise (DINO) } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15001},
        month = {October},
        pages = {113--122}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper presents an object detection framework for mammographic images.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper tackles a very important problem that is relevant to the MICCAI community.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The description of one of the main components, DINO, is unclear. Also, the name DINO usually refers to a well-known self-supervised learning method; I’m not sure that using this name here is ideal.
    2. Why did the authors not compare to any other baseline with the same backbone?
  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    The paper would benefit from significant revision to clearly convey each of its intended contributions.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Reject — should be rejected, independent of rebuttal (2)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    It is hard for me to see any useful insights from the paper (“ViT is an effective backbone” is well known).

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Reject — should be rejected, independent of rebuttal (2)

  • [Post rebuttal] Please justify your decision

    I have read the authors’ rebuttal and the comments from other reviewers. My evaluation of the contribution of the current draft remains unchanged.



Review #2

  • Please describe the contribution of the paper

    This manuscript introduces a vision transformer model for breast mass detection in mammographic images.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1) Highly significant research topic. 2) Use of vision transformers enhances novelty.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1) Potential data leakage in model evaluation: it is unclear whether data splits were made at the patient, image, or mass level. Given that each mammographic exam contains multiple views, a mass may be visible in more than one view, and an exam may contain more than one mass, the way the data was split is critical.
    2) Limited reproducibility, given that code is not and will not be provided and in-house data was used.
    3) Poor description of the in-house data and the respective data curation steps. There are various publicly available datasets in TCIA and beyond that could be used for this detection task.
    4) Without standard deviations or confidence intervals, and p-values for comparisons, the statistical significance of the comparisons shown in Tables 1-2 is unclear.
    5) Although several tools for breast mass detection are currently available, no comparisons with previous tools are reported.
    6) The use of 2D versus 3D mammograms is a limitation that needs to be acknowledged.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    There are several major points to be addressed before this manuscript can be considered for publication. Please see detailed comments above.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The lack of clarity in dataset description and the lack of comparisons with other relevant methods limit the Reviewer’s ability to evaluate the novelty/impact of this work.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Reject — should be rejected, independent of rebuttal (2)

  • [Post rebuttal] Please justify your decision

    Dataset descriptions remain poor, limiting the ability to fully evaluate the reported results, as well as the study’s reproducibility.
    Standard deviations or confidence intervals are essential for the reported comparisons (Tables 1-2). Given the abundance of available tools for breast mass detection, the lack of comparisons with any of them is a major limitation.



Review #3

  • Please describe the contribution of the paper

    The authors proposed a transformer-based architecture that helps to detect breast cancer. They evaluated the proposed method on a curated dataset of 12,476 mammograms.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of the paper is the introduction of the Contrastive Denoising (CDN) training approach, which improves the model’s discrimination. They use the look-forward-twice method to update a layer based not only on the loss at that layer but also on the loss at the subsequent layer. The experiments are conducted on a curated dataset.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    In Section 2.2, CDN was not explained thoroughly: no formula or figure illustrates the integration of the two hyper-parameters lambda_1 and lambda_2. In Section 3.1, there was no information on how the noise is added or how lambda_1 and lambda_2 are set. The authors did not prove that introducing their DINO improved model performance.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    The authors should supplement results proving that their DINO improves the model performance.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The authors’ introduction of CDN.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Reject — should be rejected, independent of rebuttal (2)

  • [Post rebuttal] Please justify your decision

    Actually, no t-statistic or p-value comparing IHRRB-DINO against the other methods was listed in the manuscript. The authors did not prove the efficacy of introducing DINO against a no-DINO baseline.




Author Feedback

Response to Reviewer #1

Concern on Data Leakage: Reviewer #1 highlighted the criticality of data splits due to the multiple views and masses in mammographic exams. We confirm that our dataset was split at the patient level, ensuring that all images, views, and masses associated with a single patient were contained within the same subset, thus effectively preventing data leakage and maintaining the integrity of the model’s evaluation.

Reproducibility and Data Access: Our study utilises in-house data, and we are committed to providing access to this data and the computational code upon reasonable request. This approach maintains confidentiality while upholding the transparency necessary for scientific validation.

Data Description: We curated our dataset to ensure high quality and control, processed according to BI-RADS standards by expert radiologists. A more detailed account of our data collection, cleaning, and ROI extraction was included in the paper to clarify our data management.

Statistical Analysis Clarity: We performed t-tests to assess the statistical significance of our model comparisons, with p-values such as 1.22×10^−12 for IHRRB-DINO vs. CAM with ResNet50 indicating significant differences. These details underscore our commitment to rigorous statistical evaluation.

Limitations in Technology Used: We acknowledge the limitation of using 2D rather than 3D mammograms; due to page constraints, we will address this discussion more thoroughly in future work.

Combined Response to Reviewers #4 and #3

Explanation of CDN and Noise Addition: Both reviewers noted the need for a clearer explanation of CDN and our noise-addition strategy. CDN is defined by two hyper-parameters, lambda_1 and lambda_2, which control the noise scale for positive and negative queries, enhancing the model’s accuracy. This bifurcation is critical, as positive queries aim to reconstruct ground-truth boxes, while negative queries predict “no object,” helping to refine model training against irrelevant anchors.

Evidence of DINO’s Effectiveness: Questions regarding the proof of DINO’s impact on model performance were addressed by detailed comparisons in our manuscript. Results from our tests, such as a t-test yielding a p-value of 1.59×10^−17 comparing IHRRB-DINO vs. ACOL with Inception v3, demonstrate significant improvements and validate DINO’s efficacy.

Response to Reviewer #4

Clarity and Novelty in Methodology: Reviewer #4 raised concerns about clarity around the DINO component and its naming, which might be confused with a known self-supervised learning method. DINO, in our context, stands for Data-Driven Instance Noise, a unique adaptation for object localization in breast mass detection. It enhances the model by introducing instance-level noise during the training phase, significantly improving detection accuracy.

Baseline Comparisons: Our study’s primary objective was to explore the impact of integrating various pretrained backbones (Swin-L, etc.) with our novel approach. We believe this focus allows us to contribute valuable insights into the potential enhancements these models can bring to breast mass detection methodologies. While we recognise the merit in comparing with existing tools, our intent was not to establish superiority but to highlight how pre-trained models can be effectively adapted within our framework to improve detection capabilities.

Conclusion

The comments from all reviewers have been invaluable in refining our presentation and explanations of complex methodologies. We have addressed each point raised, ensuring that our research contributions are clear and well-supported by empirical evidence. The discussions on CDN, data handling, and statistical validations aim to provide a comprehensive understanding of our work’s robustness and innovation.
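The two-threshold scheme described in the rebuttal (lambda_1 bounding the noise for positive queries, lambda_2 for negative ones) can be sketched as box jitter of the kind used in contrastive denoising. The snippet below is an illustrative reconstruction from the rebuttal's description, not the authors' code; the function name, default values, and box convention (cx, cy, w, h) are all assumptions.

```python
import random

def make_denoising_queries(gt_box, lambda1=0.4, lambda2=0.8):
    """Sketch of contrastive denoising: jitter a ground-truth box
    (cx, cy, w, h) with small noise for a positive query (which should
    reconstruct the box) and larger noise for a negative query (which
    should be classified as "no object"). Hyper-parameter roles follow
    the rebuttal's description; exact noise model is hypothetical."""
    cx, cy, w, h = gt_box

    def jitter(box, lo, hi):
        bcx, bcy, bw, bh = box
        # Noise magnitude drawn from [lo, hi), random sign per component.
        def d(scale):
            return random.uniform(lo, hi) * scale * random.choice([-1, 1])
        return (bcx + d(bw), bcy + d(bh), bw * (1 + d(1)), bh * (1 + d(1)))

    positive = jitter(gt_box, 0.0, lambda1)      # small perturbation
    negative = jitter(gt_box, lambda1, lambda2)  # perturbation beyond lambda1
    return positive, negative
```

With this construction, a positive query's centre stays within lambda_1 of the box size while a negative query's centre is displaced by between lambda_1 and lambda_2 of it, which is the bifurcation the rebuttal describes.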




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Reject

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    I appreciate the authors for submitting a well-written rebuttal. The reviewers have reviewed the rebuttal and felt that it does not address their major concerns, and I agree with them. Due to this, I recommend reject.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).




Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Reject

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    All three reviewers, after the rebuttal, rated the paper with low scores.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).




Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    This paper studies an important topic, mammogram image diagnosis, which has clear clinical need and is interesting to the MICCAI community. The dataset used is large-scale. Major concerns relate to the paper's writing, implementation details, and statistical testing. In the rebuttal, the authors clarified that they used in-house data and committed to providing access to this data and the computational code upon reasonable request. The authors also provided reasonable feedback to the other comments. This meta-reviewer considers that the major concerns could mostly be addressed through editing of the manuscript. The value of the work relies on the dataset being released upon request from the scientific community.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).



