Abstract
Parameter-efficient fine-tuning (PEFT) of pre-trained foundation models is attracting increasing interest in medical imaging due to its effectiveness and computational efficiency. Among these methods, Low-Rank Adaptation (LoRA) is a notable approach based on the assumption that adaptation inherently occurs in a low-dimensional subspace. While it has shown good performance, its implementation requires a fixed, unalterable rank, which can be challenging to select given the unique complexities and requirements of each medical imaging downstream task. Inspired by advancements in natural image processing, we introduce a novel approach for medical image segmentation that dynamically adjusts the intrinsic rank during adaptation. Viewing the low-rank representation of the trainable weight matrices as a singular value decomposition, we add an l1 sparsity regularizer to the loss function and tackle it with a proximal optimizer. The regularizer can be viewed as a penalty on the decomposition rank, so minimizing it finds task-adapted ranks automatically. Our method is evaluated in a realistic few-shot fine-tuning setting, where we compare it first to standard LoRA and then to several other PEFT methods across two distinct tasks: base organs and novel organs. Our extensive experiments demonstrate the significant performance improvements driven by our method, highlighting its efficiency and robustness against suboptimal rank initialization. Our code is publicly available: https://github.com/ghassenbaklouti/ARENA.
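The low-rank parameterization described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: all shapes, names, and the initialization are assumptions for demonstration. The adapted weight takes the form W + B·diag(v)·A, where v plays the role of the singular values; the l1 penalty on v lets entries shrink to exactly zero, reducing the effective rank.

```python
import numpy as np

# Hypothetical layer shape and initial rank bound (illustrative values only).
m, n, r = 16, 12, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(m, n))          # frozen pre-trained weight
B = np.zeros((m, r))                 # LoRA factor, initialized to zero
A = rng.normal(size=(r, n))          # LoRA factor, random init
v = rng.uniform(-1.0, 1.0, size=r)   # "singular value" vector

def adapted_weight(W, B, v, A):
    """Effective weight after low-rank adaptation: W + B diag(v) A."""
    return W + B @ np.diag(v) @ A

def effective_rank(v, tol=1e-8):
    """Learned rank = number of non-zero entries of v."""
    return int(np.sum(np.abs(v) > tol))
```

Because B starts at zero, the adapted weight initially equals the frozen W, as in standard LoRA; training then shapes v, and the l1 regularizer prunes components it does not need.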
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/4888_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
https://github.com/ghassenbaklouti/ARENA
Link to the Dataset(s)
Dataset details are provided in the code repository: https://github.com/ghassenbaklouti/ARENA
BibTex
@InProceedings{BakGha_Regularized_MICCAI2025,
author = { Baklouti, Ghassen and Silva-Rodríguez, Julio and Dolz, Jose and Bahig, Houda and Ben Ayed, Ismail},
title = { { Regularized Low-Rank Adaptation for Few-Shot Organ Segmentation } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15966},
month = {September},
}
Reviews
Review #1
- Please describe the contribution of the paper
This paper introduces a new LoRA variant, called Adaptive Rank Segmentation (ARENA), for medical image segmentation that dynamically adjusts the intrinsic rank during adaptation. The authors propose to view the low-rank representation of the trainable weight matrices as an SVD and add an L1 sparsity regularization term to the loss function, which enables automatically searching for task-adapted ranks during the adaptation process.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is generally clear and well-written.
- The proposed method is comprehensively evaluated on multiple datasets.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- The proposed LoRA variant appears to be fairly straightforward and bears a strong resemblance to other adaptive rank LoRA methods based on singular value decomposition, such as AdaLoRA [A1] and DyLoRA [A2]. Could the authors elaborate on how their approach relates to and differs from these existing LoRA variants?
- I am curious about how the matrices B, A, and v are initialized. In standard LoRA, B and A are typically initialized to zero. Is the same initialization strategy used in the proposed method?
- How is the initial rank r determined? The authors appear to set the initial rank to 8. Since the initial rank only serves as an upper bound and the proposed optimization can adaptively reduce it, is there a reason not to choose a much larger initial rank (such as min(m,n)) to allow for greater flexibility? Would selecting a larger initial rank result in increased computational cost and more trainable parameters? If so, does this mean that the adaptive rank search strategy does not actually reduce computational cost, and that the number of trainable parameters ultimately depends on the initial rank chosen?
[A1] Zhang, Qingru, et al. “Adalora: Adaptive budget allocation for parameter-efficient fine-tuning.” ICLR 2023. [A2] Valipour, Mojtaba, et al. “Dylora: Parameter efficient tuning of pre-trained models using dynamic search-free low-rank adaptation.” arXiv preprint arXiv:2210.07558 (2022).
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper proposes a well-evaluated LoRA variant for volumetric medical image segmentation. However, several concerns remain about the originality and practicality of the proposed method. I would be willing to change my score if the authors address my concerns.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #2
- Please describe the contribution of the paper
This paper proposes a novel method for few-shot organ segmentation by enhancing LoRA with an adaptive rank determination strategy. The method, named ARENA, is evaluated on both novel and base tasks. The experimental results demonstrate promising improvements on the novel task and good parameter efficiency.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
1) The paper introduces a novel LoRA-based method for medical image segmentation. Given the widespread adoption of LoRA across tasks and domains, the proposed improvement is of particular interest for parameter-efficient fine-tuning.
2) The writing is clear and easy to follow.
3) The proposed method demonstrates robustness and promising performance, especially on the novel task.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
1) Although LoRA is a widely adopted baseline, many variants and improvements have been proposed. These related methods should be discussed in greater detail, particularly in terms of how they relate to or differ from the proposed approach.
2) In Table 1, the full fine-tuning (FFT) baseline performs better than the linear probe in the 5-shot setting but worse in the 10-shot setting. This is counter-intuitive, as FFT typically suffers from overfitting in extremely low-data regimes but is expected to generalize better with more examples. Since FFT is a key baseline in evaluating parameter-efficient fine-tuning, this observation raises concerns about the reliability or consistency of the experimental results.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper presents a promising and efficient method for few-shot medical image segmentation with an adaptive-rank LoRA variant. While the results are encouraging and the approach is relevant, the paper lacks a thorough comparison with existing LoRA variants and shows some inconsistencies in baseline behavior. These issues limit the clarity of the contribution and the strength of the experimental validation.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
Authors’ rebuttal addressed my concerns.
Review #3
- Please describe the contribution of the paper
The paper addresses the challenge of adapting pre-trained volumetric segmentation models to new tasks in scenarios where only a few annotated examples are available (few-shot learning). In particular, this paper targets the problem of the variability of the optimal rank of the update matrix in a LoRA-framework (e.g. 8 for kidney and stomach, 32 for aorta): the paper introduces ARENA which dynamically adjusts the intrinsic rank by treating the low-rank update as a singular value decomposition (SVD) and applying an l1 sparsity regularizer on the singular value vector. The authors use a block-coordinate descent strategy to optimize the parameters: standard gradient updates for the low-rank factors (matrices A and B) and proximal updates (via soft-thresholding) for the singular values.
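The block-coordinate scheme described above can be sketched as follows. This is an assumed form based on the review's description, not the authors' implementation: the proximal operator of η·λ·||v||₁ is elementwise soft-thresholding, which can set entries of the singular-value vector exactly to zero while A and B take ordinary gradient steps.

```python
import numpy as np

def soft_threshold(x, tau):
    """Prox of tau * ||.||_1: shrink each entry toward zero, clipping at zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_step_v(v, grad_v, eta, lam):
    """One proximal gradient update on the singular-value vector v:
    gradient step on the data loss, then soft-thresholding for the l1 term."""
    return soft_threshold(v - eta * grad_v, eta * lam)
```

Entries of v whose post-gradient magnitude falls below η·λ are zeroed exactly, which is what drives the effective rank down during adaptation.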
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The authors contribute with the introduction of an l1 sparsity regularizer on the singular value vector. Experiments and comparisons are conducted in realistic few-shot settings using both base and novel organ segmentation tasks. The method is well-motivated by clinical constraints.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
While the method aims to be adaptive, it introduces additional hyperparameters (e.g., the regularization weight lambda, and the learning rate for the proximal step) that might need careful tuning in different contexts. It would be valuable to assess if the adaptive low-rank approach generalizes to other modalities (e.g., MRI, ultrasound).
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not provide sufficient information for reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Application of a traditional method (block-coordinate descent strategy) in an AI setting for few-shot adaptation seems to produce better results. More discussion on the hyper-parameters is beneficial.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
The authors replied convincingly regarding the raised issues.
Author Feedback
We thank the reviewers for the insightful comments and appreciate their acknowledgment of the clarity (R5), relevance (R1) and novelty (R1) of the well-motivated (R2) adaptive LoRA variant that we propose for segmentation, as well as the promising performance/efficiency (R1) and comprehensive evaluation (R5). Our detailed answers are below.
- General comment: comparison with other LoRA variants (R1,R5). Given the space constraints (and no appendices), we decided to focus on the PEFT baselines established in the segmentation work in [27], since these have been validated in our context, i.e., volumetric segmentation with dedicated architectures (not the very different NLP tasks/architectures on which most LoRA variants are evaluated). However, we acknowledge the relevance of discussing other LoRA variants proposed in different contexts. While our method shares the objective of rank adaptation with SVD-based methods like AdaLoRA and DyLoRA, it differs in how adaptation is achieved. AdaLoRA reallocates low-rank capacity across layers using several heuristics and a scheduling policy. DyLoRA samples a rank at each training step and truncates the LoRA matrices accordingly to support a predefined rank range. In contrast, we directly integrate rank adaptation into the training objective via L1 regularization, allowing the effective rank to be learned in an entirely data-driven and end-to-end fashion, without sampling, truncation, manual scheduling, or extensive hyperparameters. Note that such a design choice is of special relevance in the explored realistic validation-free few-shot adaptation scenario we address. Furthermore, these adaptive LoRA variants were developed and evaluated for NLP tasks/architectures. Our method is specifically designed and tested in a few-shot medical image segmentation scenario. We also note that we experimented with AdaLoRA in our setting and found that our method consistently outperforms it. We agree that a more detailed comparison with related LoRA variants is important. We can accommodate such discussion/comparisons in the final version and/or in the project GitHub page and paper preprint with more allowed space.
- Specific comments.
- R1: LP vs. FFT for new organs (Tab. 1). As noted in the “PEFT implementation details”, our setup keeps the decoder frozen when adapting to known tasks but fully updates it for novel organs, following the exploratory analysis in [27]. Accordingly, the “linear probe” baseline in Tab. 1 also involves decoder tuning, which explains why its performance may appear stronger than expected. We apologize for not clarifying this in Tab. 1 (we will revise).
- R2: Hyperparameters. While our method introduces additional hyperparameters, we highlight that the same fixed values were used across all the experiments, which aligns with the proposed validation-free few-shot scenario. Hence, robustness to such values has already been assessed in our experiments.
- R2: Other modalities. Given the increasing number of recently introduced open foundation models focused on CT and space constraints, we focused on CT. However, our approach is modality-agnostic (we will add results with other modalities to the project’s GitHub/preprint).
- R5: Parameter initialization: We follow the standard LoRA initialization for matrices A and B. Vector v is initialized with a uniform distribution in [−1,1].
- R5: Initial rank. Following common practice in the LoRA literature and to maintain parameter efficiency, we chose an initial rank of 8. As the reviewer points out, increasing this rank would increase computational cost, as in vanilla LoRA. Our primary goal is to improve LoRA's performance and model selection in validation-free scenarios through adaptive rank control, not to reduce the training cost. Also, note that a larger rank could be detrimental in the explored application (see Fig. 1.a). However, as showcased in that experiment, our method brings enhanced stability against suboptimal rank initialization.
Meta-Review
Meta-review #1
- Your recommendation
Invite for Rebuttal
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A