Abstract

This paper investigates source-free domain adaptation for cross-modality cardiac image segmentation. Source-free domain adaptation (SFDA) leverages a model pretrained on the source domain and adapts it using target-domain data to predict target image labels. While existing SFDA methods have demonstrated strong performance on various medical segmentation tasks, cross-modality cardiac segmentation remains challenging due to the significant domain discrepancy between MRI and CT, which hinders effective knowledge transfer. Current SFDA approaches primarily focus on pseudo-label denoising through image-level and feature-level alignment, often overlooking class-level information derived from classifier outputs. This paper proposes a novel framework that constructs two class relationship matrices from the predictions of a teacher-student model. These matrices are integrated into a contrastive learning framework through intra-view and inter-view pairs. The teacher-student architecture processes both original samples and their augmented counterparts, enforcing prediction consistency for robust adaptation. Simultaneously, our class-aware contrastive learning enhances discriminative capability for cardiac structures. Experimental results demonstrate that our method outperforms state-of-the-art approaches by significant margins, particularly on the challenging CT $\rightarrow$ MR adaptation task.
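This page does not reproduce the paper's equations, so the following is only a minimal PyTorch-style sketch of one plausible reading of the abstract: the class relationship matrices are assumed to be co-activation matrices built from the softmax outputs of the student (original view) and teacher (augmented view), matching class rows form inter-view positive pairs, the remaining rows act as intra-view and inter-view negatives, and a simple consistency term ties the two predictions together. All function names, the matrix construction, and the temperature tau are assumptions, not the authors' implementation.

# Hypothetical sketch (assumed names, not the authors' code): class-relationship
# contrastive loss and teacher-student consistency built from softmax predictions.
import torch
import torch.nn.functional as F

def class_relationship_matrix(logits):
    # logits: (B, C, H, W) raw decoder outputs.
    # R[b, i, j] measures how strongly classes i and j co-activate across the image,
    # computed from softmax probabilities (an assumed construction).
    b, c, h, w = logits.shape
    p = torch.softmax(logits, dim=1).reshape(b, c, h * w)   # (B, C, N)
    r = torch.bmm(p, p.transpose(1, 2))                     # (B, C, C) co-activation
    return F.normalize(r, dim=-1)                           # unit-norm class rows

def relationship_contrastive_loss(student_logits, teacher_logits, tau=0.1):
    # Inter-view positives: row k of the student matrix vs. row k of the teacher matrix.
    # Negatives: the other teacher rows (inter-view) and the other student rows (intra-view).
    rs = class_relationship_matrix(student_logits)            # (B, C, C)
    rt = class_relationship_matrix(teacher_logits).detach()   # teacher not updated by this loss
    b, c, _ = rs.shape
    loss = 0.0
    for i in range(b):
        sim_inter = rs[i] @ rt[i].t() / tau                   # (C, C) student vs. teacher rows
        sim_intra = rs[i] @ rs[i].t() / tau                   # (C, C) student vs. student rows
        sim_intra = sim_intra - 1e9 * torch.eye(c, device=sim_intra.device)  # drop self-pairs
        logits_all = torch.cat([sim_inter, sim_intra], dim=1)  # (C, 2C)
        target = torch.arange(c, device=logits_all.device)     # positives on inter-view diagonal
        loss = loss + F.cross_entropy(logits_all, target)
    return loss / b

def prediction_consistency(student_logits, teacher_logits):
    # Consistency between the student's prediction on the augmented view and the
    # teacher's prediction on the original view (mean-squared error on softmax maps).
    return F.mse_loss(torch.softmax(student_logits, dim=1),
                      torch.softmax(teacher_logits, dim=1).detach())

In a full adaptation loop these two terms would presumably be combined with a pseudo-label segmentation loss, with the teacher updated as an exponential moving average of the student; those details are not stated on this page.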

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/0496_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/aoge1993/SFDA-CCRC

Link to the Dataset(s)

N/A

BibTex

@InProceedings{MaAo_SourceFree_MICCAI2025,
        author = { Ma, Ao and Zhu, Qingpeng and Li, Jingjing and Nielsen, Mads and Chen, Xu},
        title = { { Source-Free Domain Adaptation for Cross-Modality Cardiac Image Segmentation with Contrastive Class Relationship Consistency } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15964},
        month = {September},
        pages = {576 -- 585}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposes a contrastive learning framework for source-free cross-modality cardiac image segmentation.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Integration of class-aware contrastive loss and classifier consistency loss to boost adaptation without source data.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    1) Limited Methodological Novelty: The proposed framework lacks substantial innovation. While it integrates existing techniques such as teacher-student learning and contrastive learning, the contributions appear to be incremental rather than groundbreaking.

    2) Insufficient Evaluation on Diverse Datasets: The experiments are conducted solely on the MMWHS dataset. This limited evaluation raises concerns about the generalizability and robustness of the proposed method.

    3) Underexplored Data Augmentation Strategy: Data augmentation is a critical component of the proposed framework. However, the authors rely only on standard augmentation, without providing justification for their choices or evaluating alternative augmentation strategies.

    4) Marginal Improvement in ASSD: As shown in Table 1, the improvement in ASSD over existing methods is relatively minor. This suggests that the proposed approach may offer limited benefits.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (2) Reject — should be rejected, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    1) Limited Methodological Novelty: The proposed framework lacks substantial innovation. While it integrates existing techniques such as teacher-student learning and contrastive learning, the contributions appear to be incremental rather than groundbreaking.

    2) Insufficient Evaluation on Diverse Datasets: The experiments are conducted solely on the MMWHS dataset. This limited evaluation raises concerns about the generalizability and robustness of the proposed method.

    3) Underexplored Data Augmentation Strategy: Data augmentation is a critical component of the proposed framework. However, the authors rely only on standard augmentation, without providing justification for their choices or evaluating alternative augmentation strategies.

    4) Marginal Improvement in ASSD: As shown in Table 1, the improvement in ASSD over existing methods is relatively minor. This suggests that the proposed approach may offer limited benefits.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #2

  • Please describe the contribution of the paper

    For source-free unsupervised domain adaptation, this paper proposes a class-aware contrastive learning strategy. Instead of the latent feature vectors used in conventional contrastive learning, the authors generate feature vectors from the decoder’s output. These vectors are designed to quantify class distributions, and a contrastive loss is then applied to these distribution-based features.
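Review #2's summary suggests that the contrasted vectors are distribution-based features computed from the decoder's softmax output rather than latent embeddings. The snippet below is a hypothetical illustration of one way such class-distribution vectors could be extracted; the function name, the argmax-based grouping, and eps are assumptions, not the paper's definition.

import torch

def class_distribution_vectors(probs, eps=1e-6):
    # probs: (C, H, W) softmax output of the decoder for one image.
    # Row k of the result is the mean predicted class distribution over the pixels
    # argmax-assigned to class k; classes with no assigned pixels give a near-zero row.
    c = probs.shape[0]
    flat = probs.reshape(c, -1)                  # (C, N)
    hard = flat.argmax(dim=0)                    # (N,) pixel-wise predicted class
    rows = []
    for k in range(c):
        mask = (hard == k).float()               # pixels currently assigned to class k
        weight = mask / (mask.sum() + eps)
        rows.append((flat * weight).sum(dim=1))  # average distribution over those pixels
    return torch.stack(rows)                     # (C, C) distribution-based feature vectors

A contrastive loss such as the one sketched after the abstract could then be applied to the matching rows of the student and teacher vectors, which is how we read the reviewer's phrase "a contrastive loss is then applied to these distribution-based features".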

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The incorporation of class-aware contrastive learning into a source-free unsupervised domain adaptation framework is novel and very interesting. It demonstrates solid improvements over the comparison methods.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    • The citation [22] in the paper (Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning) is mentioned as a source of motivation for applying the contrastive learning approach and has publicly available source code. Why were results not compared with this approach? It would probably be the closest one among the comparison methods.
    • The evaluation is limited to a single cardiac dataset that contains a small number of data points. It would be beneficial to demonstrate the advantage of the proposed method on other anatomical regions, for example the AMOS abdomen dataset.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed methodology is novel and solid. However, the evaluation misses a comparison with the closest method that has public code available (citation [22] in the paper).

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    After careful consideration of the authors’ rebuttal and the comments from other reviewers, I am leaning towards accepting this paper.

    While the paper has weaknesses regarding its limited evaluation, the presented idea of class-aware contrastive learning is interesting. It’s a promising concept, even if its full potential isn’t yet entirely demonstrated.



Review #3

  • Please describe the contribution of the paper

    The main contribution of this paper is the introduction of a novel source-free domain adaptation (SFDA) framework for cross-modality cardiac image segmentation, specifically addressing the challenging CT→MR task.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper introduces a novel class-aware contrastive learning strategy for source-free domain adaptation, which innovatively uses class relationship matrices from a teacher-student model to construct intra- and inter-view positive and negative pairs. This formulation avoids reliance on source data or prototypes, improving model discriminability. The method shows strong empirical results, especially on the difficult CT→MR cardiac segmentation task, outperforming state-of-the-art SFDA methods by notable margins. The integration of data augmentation for stability and the use of soft predictions instead of discarding uncertain labels further enhance robustness and label efficiency.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    While the proposed contrastive learning strategy is effective, it builds upon established SFDA frameworks like CBMT [17] and contrastive methods such as PAFA [22], limiting the novelty of the overall pipeline. The class relationship matrices, although insightful, are derived from softmax outputs and may be sensitive to initial prediction noise—particularly early in adaptation. Additionally, the method lacks clinical validation or expert analysis to assess its practical utility. Finally, the performance on more anatomically complex structures like MYO remains relatively weak, indicating challenges in handling fine-grained or overlapping regions.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper presents a thoughtful and well-executed approach to source-free domain adaptation in cardiac image segmentation, a task with significant clinical relevance and technical difficulty. The use of class relationship matrices in contrastive learning is a creative way to enhance discriminability without relying on source data or prototypes. The method achieves strong performance, especially on the challenging CT→MR task, and shows robustness through effective use of teacher-student consistency and soft label refinement. However, the approach builds on existing techniques and lacks clinical evaluation or deeper analysis of its performance on complex structures like MYO, limiting its overall impact. Nonetheless, the clear improvements over prior SFDA methods merit acceptance.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    I already weak accepted the paper, conditional on the rebuttal from the authors, which satisfactorily replies to the questions raised during the review phase.




Author Feedback

We deeply appreciate the thoughtful and insightful comments (C) provided by the reviewers (R). They find our work novel and interesting (R2) and note that it outperforms SOTA (R3). We address their comments below:

R1C1/R3C1: Incremental or novel: Honestly, the teacher-student framework and contrastive learning (CL) have been proposed before. The teacher-student framework in our work serves as the backbone network, just like U-Net or ResNet. Our key innovation is applying CL to prediction vectors at the class level, whereas other works apply CL at the sample level. To our knowledge, ours is the first work emphasizing class-level CL among recent domain adaptation (including UDA and SFDA) methods that use CL for medical imaging.

R1C2/R2C2: Why only one dataset: The cardiac dataset is very challenging. Many strong works focus solely on the MMWHS dataset with the same cross-modality domain adaptation tasks (MR-to-CT and CT-to-MR) as we do [a,b,c]. Therefore, evaluation on one dataset is acceptable for a conference paper. Many SFDA methods only conduct the MR-to-CT experiment and avoid testing on the CT-to-MR task [21,1]. A few works try the CT-to-MR direction and obtain acceptable but not outstanding results [19]. Clearly, the current experimental results in this field are unsatisfactory, making further research highly worthwhile, which motivated our dataset selection.

R2C1: Comparison with PAFA [22]: We ran dozens of SFDA methods on the cardiac dataset whenever open-source code was available. Limited by space, our primary criterion for choosing comparison methods was that the original paper reported experimental results on the cardiac dataset. On this basis, we first chose SIFA [3], FSM [21], AdaMI [1] and FVP [19]. CBMT [17] has a framework similar to ours. DPL [4] was the pioneering work for SFDA. CCG [9] was the most recently published journal paper. As for PAFA [22], it was originally on our comparison list, but we decided not to report it after running PAFA on the cardiac dataset and obtaining abnormal results on the CT-to-MR task. We see two possible reasons: first, the domain shift makes it difficult to select representative prototypes in cardiac images; second, one can hardly obtain a good initialization (theta0) on cardiac images with a vanilla U-Net (which is why we chose the teacher-student framework with dual networks). Based on our trials and open issues in the PAFA GitHub repository, we concluded that it performs best on abdominal datasets. For a possible future journal version of our work, we will extend the set of compared datasets, and PAFA will definitely be included in the table.

R3C2: Lack of clinical validation or expert analysis: Our work aims at improving segmentation performance through a deep learning algorithm. Our primary objective is to establish superior segmentation performance over existing methods [21,1,19,17,4,9] experimentally, and then to consider clinical feasibility through expert validation and retrospective case analysis.

R1C3: Data augmentation: We use the same data augmentation tricks as CBMT [17]. Our work focuses on designing a novel algorithm; using common tricks ensures that the experimental improvement comes from the method itself, not from new data augmentation techniques.

[a] Zhuotong Cai et al. Unsupervised Domain Adaptation by Cross-Prototype Contrastive Learning for Medical Image Segmentation. JBHI, 2023.
[b] Wei Feng et al. Unsupervised Domain Adaptation for Medical Image Segmentation by Selective Entropy Constraints and Adaptive Semantic Alignment. AAAI, 2023.
[c] Cheng Chen et al. Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation. AAAI, 2019.




Meta-Review

Meta-review #1

  • Your recommendation

    Invite for Rebuttal

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A


