Abstract

To acquire information from unlabeled data, current semi-supervised methods are mainly developed based on the mean-teacher or co-training paradigm, with non-conflicting optimization objectives that regularize the discrepancy in learning towards consistency. However, these methods suffer from the consensus issue, where the learning process may degenerate into vanilla self-training because both branches pursue identical learning targets. To address this issue, we propose a novel \textbf{Re}ciprocal \textbf{Co}llaboration model (ReCo) for semi-supervised medical image classification. ReCo is composed of a main network and an auxiliary network, which are constrained by distinct yet latently consistent objectives. On labeled data, the main network learns from the ground truth as usual, while simultaneously generating auxiliary labels that serve as supervision for the auxiliary network. Specifically, given a labeled image, the auxiliary label is defined as the category with the second-highest classification score predicted by the main network, i.e., the most likely misclassification. Hence, the auxiliary network is specifically designed to discern \emph{which category the image should \textbf{NOT} belong to}. On unlabeled data, cross pseudo supervision is applied using reversed predictions. Furthermore, feature embeddings are purposefully regularized under the guidance of contrary predictions, with the aim of differentiating between categories susceptible to misclassification. We evaluate our approach on two public benchmarks. Our results demonstrate the superiority of ReCo, which consistently outperforms popular competitors and sets a new state of the art.
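
To make the mechanism above concrete, the following is a minimal PyTorch-style sketch of the auxiliary-label construction and one plausible form of the cross pseudo supervision. All function and variable names are illustrative assumptions made for this page, not the authors' released code, and the exact loss formulation in the paper may differ.

    import torch
    import torch.nn.functional as F

    def auxiliary_label(main_logits):
        # Auxiliary label: the category with the second-highest score
        # predicted by the main network, i.e. the most likely
        # misclassification for a labeled image.
        return main_logits.topk(2, dim=1).indices[:, 1]

    def cross_pseudo_loss(main_logits_u, aux_logits_u):
        # Hedged reading of "cross pseudo supervision with reversed
        # predictions" on unlabeled data: the class the auxiliary network
        # scores lowest (the one it least believes should be excluded)
        # pseudo-labels the main network, while the main network's
        # second-highest class pseudo-labels the auxiliary network.
        pseudo_for_main = aux_logits_u.argmin(dim=1).detach()
        pseudo_for_aux = main_logits_u.topk(2, dim=1).indices[:, 1].detach()
        return F.cross_entropy(main_logits_u, pseudo_for_main) + \
               F.cross_entropy(aux_logits_u, pseudo_for_aux)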

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1844_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/1844_supp.pdf

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Zen_Reciprocal_MICCAI2024,
        author = { Zeng, Qingjie and Lu, Zilin and Xie, Yutong and Lu, Mengkang and Ma, Xinke and Xia, Yong},
        title = { { Reciprocal Collaboration for Semi-supervised Medical Image Classification } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15011},
        month = {October},
        page = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper presents a novel approach for semi-supervised medical image classification. The main idea involves utilizing the second-highest classification score to align the prediction score distributions of the main and auxiliary networks. Additionally, the authors apply a contrastive loss term to decrease the distance between intra-class instances while increasing the distance between dissimilar classes.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Through the auxiliary loss, the authors aim to increase the reliability of the model’s predictions on unlabeled data by defining a loss term over the highest-scoring and second-highest-scoring categories.
    2. By applying a contrastive learning approach, the authors intended to obtain clearer decision boundaries under the semi-supervised learning scenario.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Need to compare with other contrastive learning methods
      • Semi-supervised contrastive learning with similarity co-calibration, Zhang et al., 2022.
      • Class-aware contrastive semi-supervised learning, Yang et al., 2022.
      • Comatch: Semi-supervised learning with contrastive graph regularization, Li et al., 2021.
    2. The idea of utilizing the second-highest prediction score is interesting. Why do the authors use the “second-highest” prediction score rather than the “third-highest” and so on?
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Applying semi-supervised learning to medical imaging is a promising research direction. The proposed method appears to possess originality compared to existing research. To further improve the paper, the authors need to provide insight into why they utilized the second-highest predicted class. It would be helpful to understand whether this decision stems from the difficulty difference between classes or from some other reason.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The absolute performance evaluation on the utilized dataset significantly influenced my decision. However, if the issues related to the mentioned shortcomings are addressed, I am willing to improve my score.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    A novel approach to learning with dual, but indirectly consistent objectives. The introduction of contrary predictions to reduce misclassification and maintain diversity in the learning process. Validation of the model’s effectiveness through extensive testing, showing marked improvements over existing SSL methods.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Innovative Approach: The reciprocal learning concept is innovative and well-suited to addressing common pitfalls in semi-supervised learning, such as overfitting to noisy labels.
    2. Significant Improvements: The reported results show significant performance improvements over existing methods, which is compelling for advancing the field.
    3. Relevance: The application to medical image classification is highly relevant and timely given the growing demand for robust medical imaging technologies.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Experimental Details: The paper could benefit from more detailed descriptions of the datasets used and the statistical methods for validating the results.
    2. Comparison to State-of-the-Art: While the paper compares the proposed method with other state-of-the-art techniques, a more detailed discussion on why ReCo outperforms these methods could provide deeper insights into the effectiveness of the approach.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    1. Broader Impact Discussion: It would be beneficial to include a discussion on the broader impact of the method, including potential clinical implications and limitations.
    2. Additional Validation: Consider additional validation of the model robustness across other medical imaging datasets or in a real-world clinical setting to further establish the utility of the framework.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This submission is a strong candidate for presentation at MICCAI 2024. It presents a novel and well-executed study that is likely to contribute significantly to the field of medical image analysis and generate discussion among conference participants.

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    This paper presents a novel approach to semi-supervised learning (SSL) in medical image classification, introducing a new model architecture and loss functions tailored to this task. The authors propose integrating an auxiliary network alongside the main network, which identifies the class that a given sample does not belong to, complementing the main network’s focus on learning the ground truth target. This auxiliary network provides additional self-supervision by pinpointing the category with the second-highest class score, thereby offering a more robust signal for training.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. By incorporating this auxiliary information, the main network is better equipped to resist becoming a weakly self-supervised network, maintaining its robustness throughout training. Specifically, the auxiliary label, which signifies the class most susceptible to misclassification, is utilized for logit-level alignment. This process supervises the training on unlabeled data by maximizing the agreement between the class with the lowest score in the auxiliary prediction and the class with the maximum score in the main prediction.

    2. Furthermore, the authors introduce a feature-level contrastive loss that leverages the learned auxiliary label information. This loss function aims to maximize or minimize the feature distance using a contrastive approach, further enhancing the network’s ability to learn discriminative features and improve classification performance (a minimal sketch of this idea appears after this list).
    3. The authors suggest a simple framework with sound reasoning to bolster the performance of image classification using self supervision.
    4. The proposed framework is flexible for multi-class classification and can be readily extended to other domains as well.
    5. No extra computational overhead from the proposed architecture changes, as the model is very similar to existing networks that use student-teacher frameworks.
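
    A minimal sketch of how the contrary-prediction-guided feature regularization described in point 2 could look is given below. The pairing rule and all names are illustrative assumptions made for this page, not the authors' implementation.

        import torch
        import torch.nn.functional as F

        def contrary_guided_contrastive(features, main_logits, margin=1.0):
            # features: (B, D) embeddings; main_logits: (B, C) main-network predictions.
            # Assumption: pull an embedding towards samples whose top-1 class matches
            # its own, and push it away (up to a margin) from samples whose top-1
            # class equals its contrary (second-highest) class.
            feats = F.normalize(features, dim=1)
            top2 = main_logits.topk(2, dim=1).indices                       # (B, 2)
            top1, contrary = top2[:, 0], top2[:, 1]
            dist = torch.cdist(feats, feats)                                 # pairwise L2 distances
            pos = (top1.unsqueeze(1) == top1.unsqueeze(0)).float()           # same predicted class
            neg = (contrary.unsqueeze(1) == top1.unsqueeze(0)).float()       # easily confused pairs
            pos_loss = (dist * pos).sum() / pos.sum().clamp(min=1)
            neg_loss = (F.relu(margin - dist) * neg).sum() / neg.sum().clamp(min=1)
            # Self-pairs contribute zero distance and are left in for simplicity.
            return pos_loss + neg_loss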
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    If there are any hyperparameters that weight the various loss functions, they are not mentioned, so the sensitivity of the method to the proposed loss terms is unclear.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    A good set of comprehensive ablation experiments that analyze the robustness of the model with respect to the training set size and the effect of each individual proposed loss function separately.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    1. Overall, this is a well-written paper with a comprehensive set of experiments.
    2. The idea is very simple and straightforward, and the results clearly indicate that the proposed approach outperforms the existing SSL frameworks.
  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

We sincerely thank all reviewers for their recognition of ReCo’s novelty. Here are responses to their invaluable suggestions.

Reviewer #1 Q1: Enhance the paper by offering more elaborate descriptions of the datasets and the methods employed for comparison. Additionally, discuss the potential clinical implications and limitations of ReCo, along with supplementary validation. A: We plan to supplement our journal version with a more comprehensive analysis of ReCo and extend its application to additional datasets to further establish its utility.

Reviewer #3 Q1: The weighting of the loss functions. A: For simplicity, we assigned equal weight to all loss functions in the experiments.

Reviewer #4 Q1: More comparisons with SsCL, CCSSL and CoMatch. A: We have compared with CoMatch in this paper, and we will compare with SsCL and CCSSL in our journal version. Q2: Why use the “second-highest” prediction score rather than the “third-highest” and so on? A: The goal of ReCo is to identify the category most susceptible to confusion, and the class with the “second-highest” prediction score naturally represents that most confusable category. Hence, we employed the “second-highest” prediction in our experiments. We will discuss the impact of using other ranks in our journal version.




Meta-Review

Meta-review not available, early accepted paper.


