Abstract
Quantitative magnetic resonance imaging (qMRI) requires multi-phase acquisition, often relying on reduced data sampling and reconstruction algorithms to accelerate scans, which inherently poses an ill-posed inverse problem. While many studies focus on measuring uncertainty during this process, few explore how to leverage it to enhance reconstruction performance. In this paper, we introduce PUQ, a novel approach that pioneers the use of uncertainty information for qMRI reconstruction. PUQ employs a two-stage reconstruction and parameter fitting framework, where phase-wise uncertainty is estimated during reconstruction and utilized in the fitting stage. This design allows uncertainty to reflect the reliability of different phases and guide information integration during parameter fitting. We evaluated PUQ on in vivo T1 and T2 mapping datasets from healthy subjects. Compared to existing qMRI reconstruction methods, PUQ achieved state-of-the-art performance in parameter mapping, demonstrating the effectiveness of uncertainty guidance. Our code is available at https://github.com/Haozhoong/PUQ.
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/0858_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
https://github.com/Haozhoong/PUQ
Link to the Dataset(s)
N/A
BibTex
@InProceedings{SunHao_Guiding_MICCAI2025,
author = { Sun, Haozhong and Li, Zhongsen and Du, Chenlin and Li, Haokun and Wang, Yajie and Chen, Huijun},
title = { { Guiding Quantitative MRI Reconstruction with Phase-wise Uncertainty } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15975},
month = {September},
pages = {248--257}
}
Reviews
Review #1
- Please describe the contribution of the paper
The authors validated the effectiveness of under-sampled reconstruction uncertainty maps for T1 and T2 mapping in quantitative MRI (qMRI). Comparative experiments with other deep learning-based qMRI benchmarks, along with ablation studies using both MLP and NLL parameter estimation methods, demonstrated the utility of the k-space reconstruction uncertainty maps.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This is the first work to investigate the effectiveness of MC Dropout uncertainty on downstream qMRI parameter mapping.
- Both the comparative experiments and ablation studies using MLP and NLL fitting methods are comprehensive.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- The major concern is the limited novelty of this work. It primarily serves as an experimental validation of MC Dropout uncertainty for downstream qMRI parameter estimation. The work would be more impactful if it provided a theoretical justification for how k-space reconstruction uncertainty improves parameter fitting, followed by experimental validation as presented here. However, this theoretical component is missing.
- Based on the presented experiments, it appears that the unrolled reconstruction has a more decisive impact on parameter map accuracy than the uncertainty maps. The improvement gained by incorporating uncertainty maps is incremental compared to the baseline without them.
- Dropout rate appears to be a critical hyperparameter, as demonstrated in Figure 4. It significantly influences performance and can sometimes degrade it compared to models without dropout. However, the paper lacks a clear strategy for tuning this hyperparameter.
- Following my previous comment, if the dropout rate is critical and highly sensitive, it may hinder the generalizability of the proposed approach to different sequence parameters or scanners, thereby compromising its robustness and applicability.
- It is unclear why the authors investigated the effects of dropout rate and sampling times on two different datasets. Why not perform both experiments on both datasets for consistency and stronger validation?
- The provided GitHub link is not accessible.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not provide sufficient information for reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper attempts to address the important problem of qMRI parameter mapping using uncertainty maps and presents thorough experiments, including comparisons and ablation studies. However, the contribution is largely empirical, with limited novelty and theoretical insight. Key aspects such as dropout rate tuning and generalizability across different datasets are not sufficiently explored. Moreover, the improvements from uncertainty modeling appear marginal.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #2
- Please describe the contribution of the paper
The paper presents a two-stage quantitative mapping technique by introducing uncertainty in accelerated MRI reconstruction to the parameter fitting stage.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- An interesting study on the effects of incorporating reconstruction uncertainty in quantitative mapping.
- Improved empirical results compared to other methods in comparison.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- A deeper analysis/discussion on why incorporating uncertainty could have caused the improvement is missing. It is unclear why incorporating uncertainty (variance) could improve the accuracy (bias) because they are rather two orthogonal concepts. If the aim is to improve accuracy, one should strive to eliminate the bias. In Fig. 1, the signal intensity ground truth is consistently lower than the reconstruction for all 8 acquisitions. This behaviour looks more like a systematic bias in estimation instead of “uncertainty”. Moreover, the reported “uncertainty” seems to align closely with the magnitude of the reconstruction error, rather than reflecting stochastic variability in the predictions. True uncertainty would typically manifest as random fluctuations around the mean, rather than a consistent offset. Thus, while the estimated uncertainty may provide some indirect indication of regions with larger bias, using it to correct for systematic errors conflates bias and variance in a way that is conceptually problematic. A more rigorous separation between bias correction and uncertainty estimation would strengthen the methodology and its interpretation.
- Marginal performance boost from incorporating uncertainty (Tab. 1). This raises concerns about whether incorporating uncertainty yields a significant improvement. A significance test would better support the conclusions.
- Inconsistent reconstruction methods in comparison. The current results cannot exclude the possibility that the improvement is attributable to a better reconstruction stage rather than the innovations in the parameter fitting stage. It would be better to compare the two-stage methods also in terms of reconstruction metrics of the baseline images, i.e., the multi-phase readouts, in addition to quantitative mapping accuracy.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This paper presents an interesting approach to quantitative MRI reconstruction by incorporating uncertainty estimates in a clear and straightforward manner. However, it currently lacks a solid theoretical foundation explaining how and why these uncertainty measures lead to the reported improvements. Providing a conceptual framework or analytical justification for the link between uncertainty modeling and enhanced reconstruction accuracy would greatly strengthen the manuscript.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
The authors resolved my concerns about the rationale of using uncertainty for parameter fitting in qMRI. Their argument of weighted least square makes sense. However, the method shows marginal improvement (p>0.05) in T2 mapping. In general, I find the merits outweigh the weakness after authors’ rebuttal.
Review #3
- Please describe the contribution of the paper
(1). The manuscript is the first approach to leverage uncertainty for improving MRI reconstruction accuracy.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
(1). The manuscript presents an interesting work by leveraging uncertainty for MRI reconstruction.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
(1). Because there is Monte Carlo dropout, is the training time longer than that of the model without Monte Carlo dropout?
(2). What does “hidden” mean in “The hidden channel for denoiser is 64”?
(3). What is the time consumption for each method presented in Table 1?
(4). The manuscript used NRMSE and SSIM to measure reconstruction performance. In T1 and T2 mapping, noise significantly influences the accuracy of the estimated values. It would be better to add the results of SNR measurements.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(5) Accept — should be accepted, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The manuscript is innovative because it studies how uncertainty can be used to improve MRI reconstruction.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
Thank you for addressing my concerns.
Author Feedback
R1
1. Training time: Thank you for your insightful comment. In the reconstruction model, MC dropout adds minimal overhead. In the fitting model, repeated MC sampling increases preprocessing time, but it is done once and has a negligible overall impact.
2. Hidden channels: “Hidden” refers to the number of channels in the hidden layer of the denoising network. We will clarify this.
3. Time consumption: The average inference time per image (ms) is: MANTIS: 5.43; Dopamine: 174.85; Deep T1: 122.64; PUQ: 2386.7. The increased time for PUQ is due to 100 Monte Carlo samples. We will elaborate on this in the Discussion.
4. SNR: Since the maps are fit-based, background regions lack signal and their fittings are meaningless, making SNR calculation unreliable; thus, we do not report it.
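To make the Monte Carlo sampling step discussed above concrete, here is a minimal numpy sketch of MC dropout uncertainty estimation. This is a toy illustration under assumed shapes and a made-up one-layer "network", not the authors' implementation: T stochastic forward passes are run with independent dropout masks, and the per-pixel standard deviation across samples serves as the uncertainty map.

```python
import numpy as np

# Toy sketch (assumptions, not the paper's code): MC dropout uncertainty
# via T stochastic forward passes with independent Bernoulli masks.
rng = np.random.default_rng(1)
T = 100                       # number of MC samples (the rebuttal uses 100)
p = 0.3                       # dropout rate (a hypothetical value)
x = rng.random((8, 8))        # stand-in for one reconstructed phase image
W = rng.random((8, 8))        # stand-in for a learned layer's weights

samples = []
for _ in range(T):
    mask = rng.random(W.shape) > p       # keep each weight with prob. 1 - p
    y = (W * mask / (1.0 - p)) * x       # inverted-dropout forward pass
    samples.append(y)
samples = np.stack(samples)

mean_recon = samples.mean(axis=0)        # MC estimate of the output
uncertainty = samples.std(axis=0)        # per-pixel uncertainty map
```

Each phase image would get its own uncertainty map this way, which is what makes the uncertainty "phase-wise"; the cost is T forward passes, consistent with the reported inference-time overhead.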
R2
1. Theoretical component: Thank you for this insightful comment. Pixel-wise qMRI fitting can be viewed as a nonlinear regression problem. For linear regression, if the measurement uncertainties are known, weighted least squares is optimal. Because of the varied sampling patterns in qMRI acquisition, different phases produce anisotropic uncertainties, so incorporating them into fitting is both natural and feasible. We plan to include more theoretical context. MC dropout here is one practical method to quantify uncertainty. We also experimented with an NLL method in the ablation study, but found it degraded reconstruction performance.
2. Unrolled reconstruction: We appreciate the recognition of unrolled reconstruction’s impact; it is a widely adopted method for MRI reconstruction. In the relatively niche qMRI domain, many existing methods are early-stage and underperform. Our work demonstrates that uncertainty guidance yields consistent gains on top of a strong baseline. We also validated the effectiveness of uncertainty guidance on the weaker DeepT1 backbone: NRMSE at 6× acceleration: 0.3085 without vs. 0.3054 with, showing improvement.
3. Dropout rate tuning: We tuned the dropout rate over [0.2, 0.3, 0.4, 0.5]. In low-data regimes like MRI, dropout often performs well, and tuning over this range is sufficient. We will clarify this.
4. Generalizability: While dropout needs tuning, MC-based least-squares estimators often enhance model robustness across varied conditions. We will discuss this point.
5. Hyperparameter validation on two datasets: We report the full results (NRMSE):
T2, dropout rate — 0.2: w/o 0.3148, w/ 0.3061; 0.3: w/o 0.3151, w/ 0.3128; 0.4: w/o 0.3271, w/ 0.3227.
T1, sampling times — 10: w/o 0.0758, w/ 0.0762; 20: w/o 0.0757, w/ 0.0744; 50: w/o 0.0748, w/ 0.0751; 100: w/o 0.0748, w/ 0.0739; 200: w/o 0.0748, w/ 0.0740.
In nearly all settings, uncertainty guidance reduced error and consistently achieved the minimum.
6. GitHub link: We apologize for this. It was caused by the default hyphen after “anony”.
R4
1. Uncertainty and bias: Thank you for this insightful comment. PUQ is a two-stage framework that first estimates uncertainty from multi-phase images, then uses it to reduce fitting error. The uncertainty and the reduced error are distinct and not directly tied to the classical bias-variance trade-off. Parameter fitting is a nonlinear regression task, where knowing the uncertainty of each point allows us to better estimate the function parameters—similar to how weighted least squares improves on ordinary least squares in classical linear regression by weighting points according to their uncertainty. We plan to include more theoretical context.
2. Significance test: Our ablation (five runs) shows uncertainty reduces NRMSE. Paired tests yield p-values of 0.0079 (T1) and 0.098 (T2); the small sample size (5) limits power but supports the trend.
3. Reconstruction methods: We acknowledge PUQ uses a strong backbone. Unrolled networks are not a novel choice in MRI reconstruction, while qMRI remains a niche field with many early-stage methods. Our focus is on how uncertainty contributes to parameter estimation on top of a strong reconstruction baseline. We also validated uncertainty guidance on the weaker DeepT1 backbone: NRMSE at 6× acceleration: 0.3085 without vs. 0.3054 with, showing improvement.
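The weighted-least-squares argument in the rebuttals above can be illustrated with a small numpy sketch. This is a toy example under assumed echo times, parameters, and noise levels (not the authors' fitting code): a mono-exponential T2 decay is linearized in the log domain and fitted with and without per-echo weights, where later echoes are deliberately made noisier to mimic anisotropic phase-wise uncertainty.

```python
import numpy as np

# Toy illustration (assumptions throughout): mono-exponential T2 decay
# S(TE) = S0 * exp(-TE / T2), fitted by ordinary vs. weighted least squares.
rng = np.random.default_rng(0)
TE = np.array([10., 20., 40., 60., 80., 120., 160., 200.])  # echo times (ms)
S0_true, T2_true = 1000.0, 80.0
signal = S0_true * np.exp(-TE / T2_true)

# Per-echo noise levels: later echoes are noisier, standing in for
# anisotropic reconstruction uncertainty across phases.
sigma = np.array([2., 2., 2., 2., 20., 20., 20., 20.])
noisy = signal + rng.normal(0.0, sigma)

def fit_t2(te, s, w=None):
    """Fit T2 via the linearization log S = log S0 - TE / T2."""
    y = np.log(np.clip(s, 1e-6, None))
    A = np.stack([np.ones_like(te), -te], axis=1)
    if w is not None:               # weighted LS: scale each row by its weight
        A = A * w[:, None]
        y = y * w
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 1.0 / coef[1]            # slope is 1/T2

t2_ols = fit_t2(TE, noisy)                    # all echoes weighted equally
t2_wls = fit_t2(TE, noisy, w=1.0 / sigma)     # down-weight uncertain echoes
```

Down-weighting the noisier echoes is exactly the mechanism the rebuttal appeals to: phases with higher estimated uncertainty contribute less to the parameter estimate, so the fit is dominated by the reliable phases.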
Meta-Review
Meta-review #1
- Your recommendation
Invite for Rebuttal
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
Reviewers commented positively on the rebuttal; the criticism has been resolved.
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A