Abstract
Parameter-Efficient Fine-Tuning (PEFT) methods have been widely used to adapt foundation models like the Segment Anything Model (SAM) for better generalization in unseen domains. Despite their widespread use, PEFT methods often overfit to the source training domain, which limits their generalization performance. To address this limitation, we propose a novel subspace regularization (SR) method for robust fine-tuning. Our approach iteratively removes the knowledge of task-specific directions, as identified by LoRA parameters learned from the source domain, from the subspace of pre-trained weights. This strategy effectively encourages the LoRA parameters to acquire a more diverse range of knowledge. In addition, we introduce an exponential moving average (EMA) LoRA module that aggregates historical updates of the LoRA parameters throughout the fine-tuning process. This aggregation enhances the stability and generalizability of the learned features by smoothing the trajectory of parameter updates. Our enhanced framework, SR-SAM, incorporates both subspace regularization and the EMA LoRA module to fine-tune the popular SAM model effectively. Experimental results on two widely used domain generalization benchmarks demonstrate that SR-SAM outperforms existing state-of-the-art methods, underscoring the effectiveness of our method. The source code is available at https://github.com/xjiangmed/SR-SAM.
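As a rough illustration of the two components described in the abstract, the sketch below (PyTorch) shows one way subspace regularization and an EMA of LoRA parameters could be realized. The function names, the rank cutoff k, and the decay value are illustrative assumptions rather than details taken from the paper; the authors' actual implementation is in the repository linked below.

import torch

def project_out_task_directions(W_pre, A, B, k=4):
    # Illustrative sketch: identify task-specific directions from the
    # LoRA update delta_W = B @ A and remove them from the subspace of
    # the pre-trained weight matrix W_pre. The rank cutoff k is assumed.
    delta_W = B @ A                              # (d_out, d_in) low-rank update
    U, S, Vh = torch.linalg.svd(delta_W, full_matrices=False)
    U_k = U[:, :k]                               # top-k left singular vectors
    # Project W_pre onto the orthogonal complement of the task-specific
    # subspace: W <- (I - U_k U_k^T) W.
    return W_pre - U_k @ (U_k.T @ W_pre)

def ema_update(ema_lora, live_lora, decay=0.999):
    # Exponential moving average of LoRA parameters across training steps,
    # smoothing the trajectory of parameter updates.
    with torch.no_grad():
        for p_ema, p in zip(ema_lora.parameters(), live_lora.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

In such a sketch, the projection step would be applied periodically during fine-tuning and the EMA copy of the LoRA parameters would be used at inference time; both choices are assumptions, not details confirmed by the paper.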
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/2210_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
https://github.com/xjiangmed/SR-SAM
Link to the Dataset(s)
N/A
BibTex
@InProceedings{JiaXix_SRSAM_MICCAI2025,
author = { Jiang, Xixi and Yang, Chen and Zhang, Liang and Cheng, Tim Kwang-Ting and Yang, Xin},
title = { { SR-SAM: Subspace Regularization for Domain Generalization of Segment Anything Model } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15969},
month = {September},
pages = {509 -- 519}
}
Reviews
Review #1
- Please describe the contribution of the paper
This paper introduces a subspace regularization technique, SR-SAM, aimed at robust fine-tuning by managing the subspace of the pre-trained model. By leveraging the task-specific update directions from the source domain identified by the LoRA module, it systematically removes the corresponding knowledge from the pre-trained weights. This method successfully increases the variety and range of LoRA's update directions, helping to reduce overfitting to the source domain.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
A novel subspace regularization (SR) strategy, specifically designed to reduce overfitting and bias during training on in-distribution source data. This approach entails identifying the task-specific directions (TSDs) in the source training data and progressively eliminating these directions from the pre-trained model.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
Validation using a single metric is inadequate; IoU and HD experiments would illustrate the efficacy of the method from additional perspectives.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
(1) Why are metrics such as IoU and HD not used to assess the model's performance? (2) References to preprints should be replaced with their officially published versions (if any). (3) The experimental details are inadequate; providing comprehensive details about the experiments would make the model easier to understand and reproduce.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I am uncertain whether the model would consistently demonstrate superior performance across a broader range of metrics.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #2
- Please describe the contribution of the paper
The main contribution of this paper is the introduction of SR-SAM, a fine-tuning method that utilizes subspace regularization (SR) with the Segment Anything Model (SAM) for improved domain generalization. The authors first show that there is a large overlap in the directions of the Low-Rank Adaptation (LoRA) subspace's top singular vectors when fine-tuning the model on the same domain, while the overlap is substantially reduced across domains. Building on this, the authors propose an approach to iteratively remove these strong (task-specific) directions from the pre-trained model to limit weight updates in these directions, reducing overfitting and bias. The paper demonstrates strong results on a polyp and a prostate dataset, achieving competitive performance compared to state-of-the-art methods.
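For readers unfamiliar with the pre-analysis described above, the following sketch shows how such a subspace overlap could be measured: take the top singular subspaces of two LoRA updates and compare them via the cosines of their principal angles, the quantity underlying a Grassmann distance. This is a generic illustration with assumed function names and rank cutoff, not the paper's exact metric.

import torch

def topk_left_subspace(delta_W, k=8):
    # Top-k left singular vectors of a LoRA update delta_W = B @ A.
    return torch.linalg.svd(delta_W, full_matrices=False).U[:, :k]

def subspace_overlap(delta_W1, delta_W2, k=8):
    # Overlap in [0, 1] between the top-k subspaces of two LoRA updates.
    # The singular values of U1^T U2 are the cosines of the principal
    # angles between the two subspaces; 1 means identical subspaces.
    U1, U2 = topk_left_subspace(delta_W1, k), topk_left_subspace(delta_W2, k)
    cos_angles = torch.linalg.svdvals(U1.T @ U2)
    return (cos_angles ** 2).mean().item()

An overlap near 1 for two LoRA updates trained on the same domain, and a smaller value across domains, would match the trend the reviewers describe.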
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Motivation: The authors provide an insightful pre-analysis, analyzing the subspace similarity of LoRA parameters, which helps to justify the use of SR. This analysis, based on singular vector directions, provides a clear motivation for the proposed approach.
- Novel approach: The proposed approach for SR with SAM is theoretically strong and well-justified, making this paper a significant contribution to research.
- Clear explanation of the approach: The authors managed to explain the concept very clearly, motivated by a pre-analysis of the limitations of traditional LoRA.
- Strong empirical evaluation: The paper presents a comprehensive evaluation on the polyp and prostate datasets, comparing SR-SAM against a range of state-of-the-art methods. The results demonstrate that SR-SAM LoRA achieves competitive performance and almost always outperforms state-of-the-art methods, as presented in Tables 1-4.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- Experiments section: The current Experiments section is a major weakness, missing crucial information needed to understand the results. It is not immediately clear what the task of the experiments was; given that SAM is used and the evaluation metric is Dice, the task is likely segmentation, but this is nowhere stated. In addition, it is unclear how (non-predictive) data augmentation strategies, such as CutMix and Mixup, are considered and evaluated for the task.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
- The Introduction section is too long and could be simplified.
- The complete Methods section is very clear, providing a clear motivation for the work and description of the proposed method. The authors cover a technical topic, but manage to explain the steps well for a reader with knowledge of traditional LoRA.
- The theory and experimental setup are sound. The figures and tables help understand the paper.
- Clarifying the Experiments section would substantially improve the readability of this paper. It is not immediately clear what the tables present; for example, including “Dice” in the tables' captions would already help. Also, “segmentation” is nowhere mentioned in the paper, leaving the reader unsure what the evaluated task was; it could, for instance, have been subspace similarity rather than segmentation.
- The paper uses two different notations for the same definition: “SRSAM” and “SR-SAM”. It would be neater to keep one of the two notations.
- Writing: Page 7: Change “It’s” to “It is”.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The proposed method has great potential, but the writing, and hence the understanding of the paper, could be improved. With clearer writing, this work could be very good.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #3
- Please describe the contribution of the paper
This paper introduces a novel enhancement to LoRA, named SR-SAM. On one hand, it regularizes the pre-trained subspace by identifying and removing task-specific directions; on the other hand, it incorporates an exponential moving average mechanism to stabilize updates and enhance robustness across out-of-distribution domains. The resulting SR-SAM framework outperforms state-of-the-art methods on two domain generalization benchmarks.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Through analysis based on Grassmann distance, the authors made two key observations: (1) top singular vectors within the same domain show high overlap, capturing key task-specific knowledge; (2) across domains, this overlap decreases, indicating that domain shifts reduce the alignment of essential directions.
- Experiments conducted on two public datasets demonstrate state-of-the-art performance, and the ablation study further validates the effectiveness of each submodule.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- Please provide additional experimental details for Section 2.1, including how the experiments were conducted and the observed phenomena: for example, which datasets were used, how the experiments were designed, and whether the observations were based on sufficiently large-scale data. These details can be included in the supplementary material.
- Please clarify the rationale for choosing Grassmann distance in Section 2.1. What are its advantages, and would alternative similarity metrics affect the conclusions?
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(5) Accept — should be accepted, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Inspired by its statistical analysis, this paper proposes an effective improvement to LoRA and achieves strong results. The writing is clear and well-structured, and I therefore give it an Accept rating.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Author Feedback
N/A
Meta-Review
Meta-review #1
- Your recommendation
Provisional Accept
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A