Abstract

Spatial resolution, signal-to-noise ratio (SNR), and motion artifacts are critical factors in any Magnetic Resonance Imaging (MRI) practice. Unfortunately, trading off these factors is difficult: scans with increased spatial resolution require prolonged scan times and suffer drastically reduced SNR, and longer scan times in turn increase the likelihood of subject motion. Recently, end-to-end deep learning techniques have emerged as a post-acquisition remedy, reconstructing high-quality MRI images from various sources of degradation such as motion, noise, and low resolution. However, these methods focus on a single known source of degradation, whereas multiple unknown sources of degradation commonly co-occur in a single scan. We aimed to develop a new methodology that enables high-quality MRI reconstruction from scans corrupted by a mixture of multiple unknown sources of degradation. We proposed a unified reconstruction framework based on explanation-driven cyclic learning. We designed an interpretation strategy for neural networks, the Cross-Attention-Gradient (CAG), which generates pixel-level explanations from degraded images to enhance reconstruction with degradation-specific knowledge. We developed a cyclic learning scheme that comprises a front-end classification task and a back-end image reconstruction task, circularly shares knowledge between the two tasks, and benefits from multi-task learning. We assessed our method on three public datasets, including real, clean MRI scans from 140 subjects with simulated degradation and real, motion-degraded MRI scans from 10 subjects. We identified five sources of degradation for the simulated data. Experimental results demonstrated that our approach achieved superior reconstructions in motion correction, SNR improvement, and resolution enhancement compared with state-of-the-art methods.

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2315_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2315_supp.pdf

Link to the Code Repository

N/A

Link to the Dataset(s)

https://camcan-archive.mrc-cbu.cam.ac.uk/dataaccess

https://www.openfmri.org/dataset/ds000030/

BibTex

@InProceedings{Jia_Explanationdriven_MICCAI2024,
        author = { Jiang, Ning and Huang, Zhengyong and Sui, Yao},
        title = { { Explanation-driven Cyclic Learning for High-Quality Brain MRI Reconstruction from Unknown Degradation } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15007},
        month = {October},
        pages = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes a unified reconstruction framework based on explanation-driven cyclic learning to address MRI reconstruction from unknown degradation. A Cross-Attention-Gradient (CAG) strategy is proposed to generate pixel-level degradation knowledge. Experiments on both simulated degradation and real motion-degraded images demonstrate the effectiveness of the proposed method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper aims to solve the challenging problem of MRI reconstruction from unknown degradation. By designing a cyclic learning framework, the authors incorporate degradation information into the reconstruction process and achieve satisfactory results.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    All the comparison methods are designed for single-type known degradation; they do not include specific modules for handling multi-type unknown degradation. From this point of view, the comparisons in this paper are unfair.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    No.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Perhaps the authors should compare their method with all-in-one restoration methods; I think that would be a fair comparison. [1] Potlapalli et al. PromptIR: Prompting for all-in-one image restoration. NeurIPS 2023. [2] Xin et al. Fill the k-space and refine the image: Prompting for dynamic and multi-contrast MRI reconstruction. 2023.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, the model of this paper is technically feasible. However, the comparison experiments seem unfair. It would be better to compare the proposed method with those all-in-one restoration methods.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    The authors have addressed some of my concerns, so I have decided to change my rating to weak accept.



Review #2

  • Please describe the contribution of the paper

    The contribution of this paper is a method that augments the straightforward DL approach to artifact correction (learning a mapping between corrupted and high-quality images) by incorporating information about the degradation, both semantic and at the pixel level, into the training process through a degradation classifier and attention maps showing which pixels contributed most to the degradation classification.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of this paper is the effective way that semantic and pixel-level information about the degradation is integrated into the artifact correction. As far as I know, this is a novel way (at least in MR artifact correction) to incorporate information about the degradation.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    I did not find any major weaknesses, but several small questions arose, which I lay out in the comments.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    No

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    I hesitate to call what is done in this paper image reconstruction, since it does not involve any k-space undersampling, or indeed any raw measurement data other than to simulate motion. I would rather emphasize that this is artifact correction.

    It is not clear whether the datasets used actually contained raw k-space data. Did the authors simply take the Fourier transform of image data to generate pseudo-k-space data? If so, potential problems with how realistic this is should be discussed (https://arxiv.org/abs/2109.08237).

    To evaluate perceptual quality quantitatively, the authors may consider using a perceptual metric such as LPIPS.

    From the supplementary material, it is not clear that pixels with more artifacts have higher values; e.g., I would expect the band artifacts in the cortices (clear evidence of motion) to be more strongly highlighted. Could the authors comment on this?

    From the ablation study, it seems that Restormer alone, with none of the proposed additions, is already competitive with, or beats, the other methods used for comparison, at least quantitatively. How does Restormer alone compare qualitatively?

    Finally, there are many papers on retrospective denoising, motion correction, and super-resolution for MR images. It would strengthen the message of the paper if the proposed method were better at any of these tasks than a method specifically designed for it. However, the authors did not compare to any methods specific to these degradations in MR. Can the authors comment on this?

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I am inclined to accept this paper given that its main contribution is novel and seems to significantly improve artifact correction. However, I believe there are some unanswered questions which should be addressed in the rebuttal.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    As stated in the review, the fact that the data used were “pseudo” k-space data obtained from the Fourier transform of a magnitude image raises questions about how the method would work in a realistic scenario.

    Furthermore, I agree with the other reviewers in their concerns about the validation, particularly regarding comparison to more MR-specific works.

    However, the paper still has significant merit. Therefore, I will maintain my score at weak accept.



Review #3

  • Please describe the contribution of the paper

    This paper introduces a unified framework that leverages explanation-driven cyclic learning for high-quality MRI reconstruction from scans corrupted by a mixture of multiple unknown sources of degradation. The method uses the Cross-Attention-Gradient (CAG) to enhance reconstruction with degradation-specific knowledge. Experimental results on simulated and real datasets show the effectiveness of the proposed method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1. This paper addresses a common issue in clinical settings, where multiple unknown sources of degradation frequently occur in a single scan.
    2. By integrating a cyclic learning scheme that includes a front-end classification task and a back-end image reconstruction task with the Cross-Attention-Gradient (CAG) strategy, this paper implements task-agnostic reconstruction within a unified framework.
    3. The effectiveness of the proposed method has been validated on in-vivo data.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. This paper proposes a unified image reconstruction framework but does not compare it with existing SOTA methods, e.g., PromptIR [1]. Although PromptIR was introduced for natural image restoration problems, it is closely related to the method proposed in this paper. [1] Vaishnav Potlapalli, Syed Waqas Zamir, Salman Khan, and Fahad Shahbaz Khan. PromptIR: Prompting for all-in-one blind image restoration. NeurIPS 2023.
    2. The writing of this paper is not sufficiently clear and can be perplexing. The authors should consider adding a section, or markings in Fig. 1, to outline the entire data flow during the testing phase.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    1. Add comparative experiments with [1] to verify whether this method has advantages over SOTA methods. [1] Vaishnav Potlapalli, Syed Waqas Zamir, Salman Khan, and Fahad Shahbaz Khan. PromptIR: Prompting for all-in-one blind image restoration. NeurIPS 2023.
    2. Add a section, or annotations in Figure 1, describing the entire data flow during the testing phase, to give readers a clearer understanding.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The problem addressed by this paper is both practical and interesting, and the effectiveness of the proposed method has been validated on real data. This method could potentially offer effective assistance in real clinical settings.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

We thank the reviewers for their invaluable comments. We have carefully revised our paper as per the reviews and reply below:

Q1 (R1): Concern about the use of raw k-space data. A1: The public datasets used do not contain raw k-space data. We converted the image data into pseudo-k-space data using the Fourier transform. All degradation simulations (down-sampling, motion simulation, and noise addition) were performed in the pseudo-k-space (supplementary material). A1.1: Only a few databases offer access to raw k-space data; the majority of current DL methods therefore employ pseudo-k-space data for degradation simulation. A1.2: The model trained on pseudo-k-space simulated data was evaluated quantitatively and qualitatively on real data, and the results (Table 2 and supplementary material Fig. 3) demonstrated the superior performance of our method in real clinical scenarios.
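The pseudo-k-space degradation pipeline the authors describe (Fourier transform of image data, then degradation applied in k-space) can be sketched as follows. This is a minimal NumPy illustration of the general idea only, not the authors' actual pipeline; the function name and parameters are our own, and the authors' motion simulation is more involved than what is shown here:

```python
import numpy as np

def simulate_degradation(image, noise_std=0.02, downsample=2):
    """Illustrative pseudo-k-space degradation: FFT a magnitude image,
    add complex Gaussian noise, keep only the central low frequencies
    (simulating low resolution), and return the degraded magnitude image."""
    k = np.fft.fftshift(np.fft.fft2(image))  # pseudo-k-space, DC at center
    # Noise addition in k-space
    k = k + noise_std * (np.random.randn(*k.shape) + 1j * np.random.randn(*k.shape))
    # Down-sampling: crop the k-space periphery, keeping low frequencies
    h, w = k.shape
    ch, cw = h // (2 * downsample), w // (2 * downsample)
    center = k[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(center)))

img = np.random.rand(128, 128)          # stand-in for a clean MRI slice
degraded = simulate_degradation(img)
print(degraded.shape)                   # (64, 64)
```

A motion simulation would additionally perturb the phase of k-space lines (e.g., applying rigid-body transforms to subsets of phase-encoding lines) before the inverse transform.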

Q2 (R1): Explainability. A2: In the supplementary material, some points with higher explanation values overlap with pixels containing more artifacts; they do not fully cover the band artifacts in the cortices. This may be caused by two factors: 1) in addition to the band artifacts, other pixels in the image are affected by motion, which makes it challenging to fully localize the band artifacts; the classical explainable method (Grad-CAM) devotes a comparable degree of attention to the band artifacts as to the whole brain [1, 2]; 2) the samples are also degraded by low resolution/noise, so the focus on motion evidence is less prominent in multi-label classification.

Q3 (R1): Backbone. A3: The quality of images restored by Restormer alone was comparable to those restored by ResViT, with noise and artifacts eliminated. However, the grey-white matter contrast and the preservation of image details were noticeably inferior to those achieved with the addition of the proposed components (cyclic learning and CAG).

Q4 (R1): Comparison with methods designed for a single source of degradation. A4: Since methods designed for a single source of degradation suffer significant performance decline when confronted with other sources of degradation or mixed degradations, we used general baselines for a fair comparison. In future work, we will compare our approach with other SOTA methods on each specific task separately.

Q5 (R3, R4): Lack of comparison with PromptIR. A5: We did not include PromptIR as a competing method for two main reasons: (1) PromptIR was only tested on images corrupted by a single source of degradation (rain/haze/noise) and did not report any restoration performance against mixed degradation (e.g., joint rain and haze removal). In the context of this paper, MRI images may be affected by multiple degradations simultaneously, such as motion with low resolution, and PromptIR is therefore not suitable for comparison. (2) ResViT is a unified image-to-image translation framework designed for MRI images that achieved SOTA performance in a variety of tasks. Given the considerable heterogeneity between natural and MRI images, we considered ResViT the better competing counterpart. As recommended by the reviewers, we nevertheless compared our method with PromptIR; its scores on the two datasets were Cam-CAN (PSNR: 30.52, SSIM: 0.905, RMSE: 0.063) and UCLA (PSNR: 29.90, SSIM: 0.882, RMSE: 0.067), which were worse than our proposed method and offered no advantage over ResViT.

Due to the space limit, we could not report additional experiments in the current conference version. We will add these results in a future extended version as per the reviewers' suggestions. We thank the reviewers again for the constructive suggestions.

[1] Automated detection of motion artifacts in brain MR images using deep learning and explainable artificial intelligence. [2] Classifying MRI motion severity using a stacked ensemble approach.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Given the unanimous consensus among reviewers recognising the value and contributions of this work, the AC concurs with the majority's positive assessment. The approach detailed in the paper presents a solid application and offers a potentially impactful tool for further research. The positive feedback from all reviewers highlights the clear presentation, methodological clarity, and potential impact of the findings.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).




Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A


