Abstract
Interpolating missing data in k-space is essential for accelerating magnetic resonance imaging (MRI). However, existing methods, including convolutional neural network-based deep learning, primarily exploit local predictability while overlooking the inherent global dependencies in k-space. Recently, Transformers have demonstrated remarkable success in natural language processing and image analysis due to their ability to capture long-range dependencies. This inspires the use of Transformers for k-space interpolation to better exploit its global structure. However, their lack of interpretability raises concerns regarding the reliability of interpolated data. To address this limitation, we propose GPI-WT, a white-box Transformer framework based on Globally Predictable Interpolation (GPI) for k-space. Specifically, we formulate GPI from the perspective of annihilation as a novel k-space structured low-rank (SLR) model. The global annihilation filters in the SLR model are treated as learnable parameters, and the subgradients of the SLR model naturally induce a learnable attention mechanism. By unfolding the subgradient-based optimization algorithm of SLR into a cascaded network, we construct the first white-box Transformer specifically designed for accelerated MRI. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in k-space interpolation accuracy while providing superior interpretability.
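The annihilation idea underlying the SLR model in the abstract can be illustrated with a toy example (not the paper's actual model): when the underlying signal is a sum of a few complex exponentials, its k-space samples are linearly predictable, so a Hankel matrix built from them is low-rank and admits an annihilating filter. A minimal numpy sketch, with hypothetical parameters (`N`, `K`, the pole frequencies) chosen for illustration:

```python
import numpy as np

# Hypothetical toy signal: k-space samples of a sum of K complex exponentials.
N, K = 64, 2
n = np.arange(N)
poles = np.exp(2j * np.pi * np.array([0.11, 0.37]))   # distinct poles
coeffs = np.array([1.0, 0.5])
x = (coeffs[None, :] * poles[None, :] ** n[:, None]).sum(axis=1)

# Hankelization: rows are sliding windows over the k-space samples.
L = 8
H = np.stack([x[i:i + L] for i in range(N - L + 1)])  # (N-L+1, L)
rank = np.linalg.matrix_rank(H)                       # low rank: equals K

# An annihilating filter lives in the null space of the (K+1)-wide Hankel
# matrix; the last right singular vector annihilates every window.
Hs = np.stack([x[i:i + K + 1] for i in range(N - K)])
_, _, Vh = np.linalg.svd(Hs)
h = Vh[-1].conj()
resid = np.max(np.abs(Hs @ h))                        # approximately zero
```

In this toy setting the filter `h` could be estimated from calibration data; the paper's contribution is instead to treat such (global) annihilation filters as learnable parameters of an unfolded network.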
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/1585_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{LuoChe_Towards_MICCAI2025,
author = { Luo, Chen and Jin, Qiyu and Xie, Taofeng and Wang, Xuemei and Wang, Huayu and Liu, Congcong and Tang, Liming and Chen, Guoqing and Cui, Zhuo-Xu and Liang, Dong},
title = {{Towards Globally Predictable k-Space Interpolation: A White-box Transformer Approach}},
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15967},
month = {September},
pages = {567--577}
}
Reviews
Review #1
- Please describe the contribution of the paper
The authors present a cascaded Transformer-based k-space interpolation method for MRI reconstruction. The Transformer design builds on Swin-style shifted windows, using linear and square windows rather than patches. Theoretically, the work builds on structured low-rank (SLR) modeling, the Hankelization operation, and annihilation filters that would usually be estimated from calibration data. Because this is a large space, the proposed work instead learns these as parameters from large datasets. They also employ a white-box Transformer with multi-head subspace self-attention (MSSA).
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Good comparisons and suitable ablation studies
- Citations are OK
- Work appears to be somewhat novel
- Reconstruction images and tables look good
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- Authors use 31 patients for training and 3 patients for testing, no cross validation
- No limitations or future work highlighted
- Only 1 dataset used/shown – fastMRI
- Experiments only on 4x and 6x acceleration
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The work is novel and the results look good, but the evidence rests on a single dataset and a small number of test patients (with no cross-validation), so the results could be fortuitous.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #2
- Please describe the contribution of the paper
This paper proposes a novel white-box transformer-based approach for k-space interpolation in MRI.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The proposed transformer method is grounded in a solid theoretical foundation, making it inherently interpretable—a desirable quality for clinical applications. It also achieves state-of-the-art performance in experimental evaluations.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
It would be helpful if the authors provided details on how hyperparameters were selected for their implementation—for instance, the rationale behind choosing ten iterations as sufficient.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(5) Accept — should be accepted, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This paper proposes a novel approach for k-space interpolation, with experiments demonstrating significant improvements across evaluation metrics.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #3
- Please describe the contribution of the paper
This paper proposes a method for globally predictable interpolation for k-space based on the interpretable Transformer framework (white-box Transformer framework). The proposed framework formulates globally predictable interpolation (GPI) from the perspective of annihilation as a novel k-space structured low-rank (SLR) model. The global annihilation filters in the SLR model are treated as learnable parameters, and the subgradients of the SLR model naturally induce a learnable attention mechanism. Compared with the traditional CNN-based k-space interpolation method, the proposed method can effectively capture the long-range dependencies of k-space. Experiments on knee MRI data illustrate the effectiveness of the proposed method.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is structured well, and the application background and aim are stated clearly.
- The novelty of this paper lies in being the first to propose an interpretable white-box Transformer framework for MRI reconstruction; the proposed framework represents the globally predictable interpolation of k-space as a new structured low-rank (SLR) model. The authors also unfold the subgradient-based optimization algorithm of SLR into a cascaded network. This paper presents a novel idea that will be interesting for the MICCAI community.
- The authors provided a good evaluation of the proposed method.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- The comparative experiments do not include recent state-of-the-art MRI reconstruction methods, such as those referenced in [7] and [34], which may limit the effectiveness of the performance evaluation of the proposed method.
- The ablation experiment is only briefly described in terms of its implementation, without a comprehensive analysis of the results, which limits insight into the effectiveness of individual components.
- In formula (7) and the proposed white-box Transformer framework, the configurations of the modeling parameters $\lambda_1$, $\lambda_2$, $\mu$, $\gamma$ should be given (for example, whether they are learnable parameters or manually set a priori).
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(6) Strong Accept — must be accepted due to excellence
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- This is a very well-written article.
- This work proposes a novel method for interpretable and globally predictable k-space interpolation based on a Transformer framework. It is a good example of applying globally predictable, learnable k-space interpolation to MRI imaging problems, which will be interesting for the MICCAI community.
- I think the proposed white-box Transformer could be extended to other learning-based imaging applications and analysis methods such as CT and EIT.
- The authors verified that the proposed method significantly reduces aliasing artifacts and delivers more stable reconstructions, underscoring its enhanced capability to exploit global structural information. The results show that the proposed method performs better than the compared state-of-the-art MRI reconstruction methods.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Author Feedback
We sincerely thank the reviewers for their valuable feedback and constructive suggestions. We have carefully revised the manuscript to address the raised concerns. Our point-by-point responses are presented below.
[R1-Q1] Comparison with recent works [7], [34]. We appreciate the reviewer's suggestion. Reference [7] is an image-domain Transformer, and [34] focuses on single-coil k-space modeling. These two methods target different problem settings and are not directly comparable to ours. For the sake of fairness, we did not conduct a comparison. Instead, we compared against Swin Transformer, a representative global modeling method, to demonstrate the advantages of our approach.
[R1-Q2] Lack of ablation analysis. Thank you for your valuable suggestion. We have revised the manuscript to provide a clearer analysis of the ablation results, highlighting the contribution and interplay of key components. The ablation results confirm that each component contributes meaningfully to the overall performance of the model.
[R1-Q3] Unclear parameter settings. Thank you for pointing this out. We have clarified in the revised manuscript that $\lambda_1$, $\lambda_2$, and $\mu$ are implemented as learnable scalar parameters and are automatically optimized during training. This design aims to balance flexibility and stability during optimization.
[R2-Q1] Testing on only 3 subjects without cross-validation. We thank the reviewer for this observation. The training set includes 840 images from 31 subjects, and testing is conducted on 96 images from 3 different subjects. This setup forms a subject-wise split, enabling a meaningful assessment of generalization performance.
[R2-Q2] No discussion of limitations or future work. We appreciate this suggestion and have added a paragraph to the Conclusion section discussing current limitations and outlining future directions, including broader applications and domain adaptation.
[R2-Q3] Only one dataset used. We chose the fastMRI dataset due to its public availability, multi-coil structure, and widespread use in the community. We also conducted additional experiments on a brain MRI dataset, which showed consistent improvements. Due to space constraints, these results were not included in the current version.
[R2-Q4] Only 4× and 6× acceleration tested. We focused on 4× and 6× as they are commonly adopted and clinically feasible acceleration factors. Higher acceleration often leads to aliasing artifacts and limited diagnostic value. We plan to explore higher acceleration settings in future work.
[R3-Q1] Lack of justification for parameters and the iteration number. Thank you for the suggestion. We have added explanations for the parameter settings in the revised manuscript. The iteration number was empirically chosen based on stable performance and computational efficiency. This setting is also consistent with prior unfolded models such as ISTA-Net.
We again sincerely thank all reviewers for their thoughtful feedback and support, which greatly helped us improve the clarity and quality of our work.
Reference: J. Zhang and B. Ghanem, "ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing," CVPR, pp. 1828–1837, 2018.
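For readers unfamiliar with the unrolling design pattern the rebuttal cites (ISTA-Net), the core idea can be sketched in a few lines: each iteration of an iterative optimizer becomes one network "layer", and quantities such as the step size and threshold become per-layer learnable parameters. A minimal, hypothetical numpy sketch of plain (unlearned) ISTA unrolling, not the paper's subgradient algorithm:

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unrolled_ista(A, y, n_layers=10, lam=0.1):
    """Run n_layers ISTA iterations for min_x 0.5||Ax - y||^2 + lam||x||_1.
    In a learned unrolled network, each loop body becomes a layer with its
    own trainable step size and threshold (and learned transforms)."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                # one loop body = one "layer"
        x = soft(x - t * A.T @ (A @ x - y), t * lam)
    return x

# Tiny demo: sparse recovery from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -2.0]
y = A @ x_true
x_hat = unrolled_ista(A, y, n_layers=50, lam=0.05)
obj = lambda z: 0.5 * np.sum((A @ z - y) ** 2) + 0.05 * np.sum(np.abs(z))
# The objective at the output is lower than at the all-zero initialization.
```

The paper applies the same unfolding recipe to the subgradient iterations of its SLR model, which is how the learnable attention mechanism arises.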
Meta-Review
Meta-review #1
- Your recommendation
Provisional Accept
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A