Abstract

Deep learning-based Magnetic Resonance (MR) reconstruction methods have focused on generating high-quality images but often overlook the impact on downstream tasks (e.g., segmentation) that utilize the reconstructed images. Cascading a separately trained reconstruction network and a downstream task network has been shown to introduce performance degradation due to error propagation and the domain gaps between training datasets. To mitigate this issue, downstream task-oriented reconstruction optimization has been proposed for a single downstream task. In this work, we extend the optimization to handle multiple downstream tasks that are introduced sequentially via continual learning. The proposed method integrates techniques from replay-based continual learning and image-guided loss to overcome catastrophic forgetting. Comparative experiments demonstrated that our method outperformed a reconstruction network without finetuning, a reconstruction network with naïve finetuning, and conventional continual learning methods. The source code is available at: https://github.com/SNU-LIST/MOST.
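The replay strategy described in the abstract can be illustrated with a minimal sketch (the class and method names below are hypothetical, not from the released code): a fixed-capacity buffer keeps a balanced share of samples for each previously seen task and serves them as separate per-task mini-batches, so that each task's own label format and loss can be handled by its own head.

```python
class ReplayBuffer:
    """Fixed-size buffer keeping an equal share of samples per seen task.

    Illustrative sketch of task-balanced replay; not the authors' exact
    implementation (see https://github.com/SNU-LIST/MOST for the real code).
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.per_task = {}  # task_id -> list of stored samples

    def add_task(self, task_id, samples):
        # Register a newly finished task, then shrink shares to fit capacity.
        self.per_task[task_id] = list(samples)
        self._rebalance()

    def _rebalance(self):
        # Each of the T seen tasks keeps at most capacity // T samples,
        # so the buffer can never drop a task entirely.
        share = max(1, self.capacity // len(self.per_task))
        for task_id in self.per_task:
            self.per_task[task_id] = self.per_task[task_id][:share]

    def sample_batches(self):
        # One mini-batch per past task, kept separate (not mixed), because
        # label types differ across tasks (e.g., masks vs. class labels).
        return {task_id: list(s) for task_id, s in self.per_task.items()}
```

Note that this per-task allocation is why the buffer size cannot fall below the number of downstream tasks, a point raised in the rebuttal.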

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/2788_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/SNU-LIST/MOST

Link to the Dataset(s)

N/A

BibTex

@InProceedings{JeoHwi_MOST_MICCAI2025,
        author = { Jeong, Hwihun and Chun, Se Young and Lee, Jongho},
        title = { { MOST: MR reconstruction Optimization for multiple downStream Tasks via continual learning } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15962},
        month = {September},
        pages = {400--410}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes a continual-learning-based framework for learning MR reconstruction neural networks optimized for multiple downstream tasks. During continual learning of multiple downstream tasks, replay mechanisms are designed for different data structures in different tasks to overcome catastrophic forgetting. While existing works have focused on single-task optimized MR reconstruction, this work successfully extends MR reconstruction optimization to multiple tasks. Experiments are conducted on various datasets, and the results suggest that the proposed methods are promising compared with baseline methods.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    (1) This paper proposes a novel continual learning framework to optimize MR reconstruction for multiple downstream tasks, extending the single downstream task setting to a multiple-task setting, which clearly positions its contributions within the literature. (2) Methods and experiments are well explained. Results show that the proposed method achieves better performance than various continual learning baselines.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    (1) My major concern is whether the continual learning setting is suitable for extending single-downstream-task-optimized MR reconstruction to multi-task cases. Additionally, what would be the advantages of using continual learning compared to training one MRI reconstruction model optimized for multiple tasks simultaneously? Specifically, why not solve Eqn. (3) simultaneously instead of sequentially? The authors are encouraged to provide strong motivations for addressing the problem within continual learning settings and provide clear evidence of its advantages. Furthermore, the performance of models optimized for each individual downstream task should be provided as the upper-bound performance. (2) The variance of the performance, such as standard deviation, should be reported in all tables. Statistical tests should be conducted to clearly demonstrate whether the minor improvements are significant.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    The authors are encouraged to increase the font size in figure 1 for better readability.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (3) Weak Reject — could be rejected, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, I believe this paper makes great contributions toward continual optimization for MR reconstruction across multiple downstream tasks. However, the motivation and advantages of addressing this multi-task problem within a continual learning setting lack strong explanation and evidence.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    My concerns have been addressed.



Review #2

  • Please describe the contribution of the paper

    This paper proposes an MRI reconstruction optimization framework based on continual learning, which enables a single reconstruction network to sequentially adapt to multiple downstream tasks, rather than being limited to achieving optimality on a single task. By introducing a replay strategy and an image-guided loss, the proposed method solves the performance degradation problem in multi-task scenarios and provides a novel and efficient solution for applying MRI reconstruction to multiple tasks.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Novel Problem Setting: This paper introduces a new continual learning problem setup by incorporating continual learning methods into MRI reconstruction, enabling a single reconstruction network to sequentially adapt to multiple downstream tasks. Such a problem formulation is relatively rare in the medical imaging field and offers fresh perspectives and solutions for knowledge transfer between tasks and the challenge of catastrophic forgetting.
    2. Comprehensive Experimental Design: The authors evaluate their method on a variety of downstream tasks—including segmentation and classification—thereby thoroughly demonstrating its applicability and robustness across different scenarios. This extensive experimental setup strengthens the credibility of the proposed approach.
    3. Effective Comparisons: The paper not only compares against classical algorithms such as EWC and MAS, but also benchmarks other replay-based continual learning methods. This comprehensive comparative analysis validates the superiority of the proposed method in preventing catastrophic forgetting and improving downstream task performance.
    4. Strong Clinical Applicability: The method has high practical value, especially in multi-center data environments. When MRI reconstruction for a single downstream task is limited, leveraging continual learning to enhance reconstruction performance can better support clinical applications, facilitating data sharing and model extension in diagnosis and treatment. This provides robust support for cross-institutional applications in future medical imaging.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. Limited Methodological Innovation. The overall framework consists of two modules: replay-based continual learning and an image-guided loss. For the former, although the authors adapt the sampling strategy to handle inconsistent data formats across downstream tasks, this approach is essentially an intuitive tweak rather than a deep, proactive solution to catastrophic forgetting. In other words, while mismatched data formats and label types are indeed challenging, the core issue lies in forgetting, not merely in sampling. Thus, if the authors only mitigate forgetting by changing the sampling scheme—without thoroughly discussing or empirically validating the advantages of this change—the strategy appears passive and lacks a significant technical breakthrough.
    2. Questionable Role of the Image-Guided Loss. The image-guided loss seems more like a necessary component for MRI reconstruction in a continual learning setting than a novel mechanism specifically designed to alleviate catastrophic forgetting. Since the task must balance image reconstruction quality with downstream performance, including a reconstruction loss is mandatory. If the authors’ motivation for this loss is to prevent forgetting, the argument is unconvincing: the loss primarily ensures reconstruction fidelity rather than directly addressing forgetting. Compared to existing methods that incorporate task-driven losses in segmentation or classification, this strategy does not demonstrate sufficient novelty.
    3. Baseline Selection and Ablation Study Issues. There are concerns about the baselines used. For example, in Table 2, the finetune baseline appears not to include the image-guided reconstruction loss. If this loss is omitted, the performance drop is predictable; conversely, in the ablation study of Table 3(b), the second row (likely representing the baseline) shows better results, which biases the comparison toward existing methods. The rationale for these baseline settings needs clarification, or additional controlled experiments are required to prove that the proposed improvements offer genuine advantages.
    4. Inconsistencies in Figures / Mechanism Clarification. In Figure 1(c), the schematic shows that Task 1 uses the image-guided loss, but the final task does not. This could be a drawing error or might reflect a special mechanism in the model. The paper must explain this; otherwise, readers will be confused about the internal workflow.
    5. Language and Grammar Issues. There are noticeable grammatical errors in the Abstract—for instance, “In this work, we extended this optimization to sequentially introduced multiple downstream tasks and demonstrated that a single MR reconstruction network can be optimized for multiple downstream tasks by deploying continual learning (MOST).” The phrasing and syntax are not sufficiently precise, which undermines the paper’s academic rigor and readability.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (3) Weak Reject — could be rejected, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    My recommendation is based on two key factors. First, in terms of methodological innovation, although the paper introduces a continual learning framework to address forgetting in a multi-task environment, the overall strategy is merely a simple combination of traditional replay-based methods and an image-guided loss. For the replay-based component, the authors only separate data sampling instead of mixing samples in the same mini-batch—a relatively intuitive and passive change that lacks fundamentally new ideas for resolving data inconsistency across tasks. As for the image-guided loss, it is more of an essential component for MRI reconstruction than an innovative mechanism specifically designed to mitigate catastrophic forgetting. Therefore, I believe the method does not offer sufficient novelty at its core.

    Second, regarding experimental design, the selection of baselines is unreasonable. The baseline does not fully leverage the image-guided loss for MRI reconstruction, resulting in significantly degraded performance and thus exaggerating the advantages of the proposed method in comparisons. This unfair comparative design undermines the credibility of the experimental results and the paper’s conclusions.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Reject

  • [Post rebuttal] Please justify your final decision from above.

    The authors once again emphasized the novelty of their problem setting, but failed to address my concerns about methodological innovation. I suggest that the authors refine their method and results and resubmit to another venue, because I believe the problem setting is important and valuable for the community.



Review #3

  • Please describe the contribution of the paper

    This paper proposes a continual learning method to optimize an MR reconstruction model for multiple downstream tasks, successfully outperforming previous continual learning methods.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The problem being addressed is unique – optimizing only the reconstruction network is not enough, as reconstruction is typically followed by downstream tasks. This paper proposes an MR reconstruction Optimization for multiple downStream Tasks (MOST) method to optimize the reconstruction network for various segmentation and classification downstream tasks. The proposed replay-based continual learning and image-guided loss for sequential finetuning help to overcome catastrophic forgetting and outperform previous finetuning and continual learning approaches. The authors provide extensive experiments and ablation studies to show the capability of the method.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    There are a few modifications and clarifications needed before publication:

    1. The assumption of limited access to the training datasets of previously trained tasks is vague. In Section 2.2, it is mentioned that “Our scenario assumes limited access to previously trained datasets due to privacy concerns and computational costs”. Then how did you choose the small subset of datasets to be used for finetuning?
    2. In Section 2.2, the claim that “such multi-task learning is difficult to be optimized and does not adapt well to real-world applications” needs more explanation.
    3. Have you tested whether the image-guided loss would help non-image-generation tasks such as classification?
    4. In Table 1, why does naïve finetuning perform worse than no finetuning? I understand the reconstruction task is subject to catastrophic forgetting because it is not finetuned with a reconstruction loss, but why does finetuning not yield better performance on the downstream tasks?
    5. The authors used U-Net for segmentation and CNN for classification, which are very basic models. Why not use more advanced SOTA-performance models?
    6. In Section 3.3, for task order experiments, give some intuition why starting with segmentation is beneficial.
    7. In Table 3(a), it is better to add results for a buffer size of 2 or even 0 to show whether the buffer is dispensable.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    Please make suggested changes in the weakness section to improve the quality of manuscript.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Meaningful problem setting and innovative methodology but needs further clarifications in methods and experiments.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    The rebuttal addresses my concerns - I recommend acceptance of this paper.




Author Feedback

We thank the reviewers for their feedback.

  1. Clarification for experimental setting (R2-3,4, R3-3) We will revise the results to ensure fair comparisons under the same loss setting. This also relates to the decision to apply the IG loss only to the segmentation task. As mentioned in 2.3, we used the IG loss only for segmentation to reflect a realistic scenario. Segmentation often uses high-quality aliasing-free images for label generation, so leveraging that information is reasonable. In contrast, classification tasks do not typically rely on such images for label generation, making the use of the IG loss less justified in that context. Nevertheless, in more conservative scenarios, such high-quality images may not be available even for segmentation (i.e., segmentation labeling with a low-quality reconstruction such as GRAPPA). So, we will revise the comparison to present results without the IG loss as the default and describe the IG loss separately as an optional enhancement when additional image information is accessible. Although the effect size slightly decreases without the IG loss, the overall trend remains consistent. (e.g., the last row of Table 1 is replaced with the second row of Table 3b: 0.971, 0.006, 0.929, 0.004, 0.622, 0.000, 0.981, 0.002, 0.809, –)
  2. Motivation of continual learning (R1-1, R3-1,2) We believe that sequential finetuning is meaningful because new downstream tasks continue to emerge, and the task networks also evolve. For example, higher resolution imaging may enable finer segmentation, or new image contrasts may alter lesion appearances, requiring updates to task networks. While joint multi-task learning would be ideal when a new task arrives, it is often impractical due to the need to store and retrain all previous data, which is costly in terms of computation and storage. This issue is more critical in the medical domain, where long-term data storage is often restricted by privacy and regulatory constraints. Moreover, MRI data is large, often reaching hundreds of megabytes per scan, and general hospitals may perform over 100,000 scans annually. Given the need for multi-scanner and multi-task datasets, storage demands can quickly reach hundreds of terabytes, making data management challenging. As a result, continual learning becomes a practical alternative, particularly in environments with limited computing resources such as hospitals in low-resource regions.
  3. Novelty (R2-1,2) The main novelty of this work lies in proposing a novel setup to address challenging scenarios in low-budget clinics. Although the methodological component may seem incremental, we demonstrate that introducing intermediate losses within a continual learning framework yields benefits. As shown in Table 3b, IG reduces forgetting measures (FM). We believe this is because IG encourages the learning of lower-level features, which helps reduce forgetting.
  4. We will carefully proofread the entire manuscript to improve its grammatical accuracy. For example, the sentence is revised as: “In this work, we extend the optimization framework to handle multiple downstream tasks that are introduced sequentially. We also demonstrate that a single MR reconstruction network can be effectively optimized for these tasks through continual learning.” (R2-5)
  5. The training set was randomly selected. (R3-1)
  6. Table 1 shows the results after all tasks have been completed. The naïve finetuning method performs poorly due to catastrophic forgetting. (R3-4)
  7. We did not adopt SOTA models due to additional preprocessing requirements or dataset limitations. (See Discussion) (R3-5)
  8. Classification labels contain less detailed information than segmentation labels, which leads to less effective finetuning. (See Discussion) (R3-6)
  9. Tests against the upper bound and paired t-tests were conducted and will be included in future versions. (R1-1,2)
  10. The minimum buffer size must be 4 because the number of downstream tasks is 4. (R3-7)
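Rebuttal items 1 and 3 describe the IG loss as an optional reconstruction-fidelity term added on top of the downstream task loss when aliasing-free reference images are available. A minimal numpy sketch of such a combined objective (the L1 penalty and the weighting `lam` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def image_guided_loss(recon, reference):
    # L1 fidelity between the reconstruction and the aliasing-free reference.
    return float(np.mean(np.abs(recon - reference)))

def combined_objective(task_loss, recon, reference=None, lam=0.1):
    # The IG term is optional: per the rebuttal, it is applied only to tasks
    # (e.g., segmentation) whose labels were drawn on high-quality reference
    # images; classification uses the task loss alone.
    total = task_loss
    if reference is not None:
        total = total + lam * image_guided_loss(recon, reference)
    return total
```

For example, with a segmentation task loss of 1.0, a zero reconstruction, an all-ones reference, and `lam=0.5`, the objective evaluates to 1.5; without a reference it stays at 1.0.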




Meta-Review

Meta-review #1

  • Your recommendation

    Invite for Rebuttal

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A


