Abstract
Motion artifacts caused by prolonged acquisition time are a significant challenge in Magnetic Resonance Imaging (MRI), hindering accurate tissue segmentation. These artifacts appear as blurred images that mimic tissue-like appearances, making segmentation difficult. This study proposes a novel deep learning framework that demonstrates superior performance in both motion correction and robust brain tissue segmentation in the presence of artifacts. The core concept lies in a complementary process: a disentanglement learning network progressively removes artifacts, leading to cleaner images and, consequently, more accurate segmentation by a jointly trained motion estimation and segmentation network. This network generates three outputs: a motion-corrected image, a motion deformation map that identifies artifact-affected regions, and a brain tissue segmentation mask. The deformation map serves as a guidance mechanism for the disentanglement process, aiding the model in recovering lost information or removing artificial structures introduced by the artifacts. Extensive in-vivo experiments on pediatric motion data demonstrate that our proposed framework outperforms state-of-the-art methods in segmenting motion-corrupted MRI scans. The code is available at https://github.com/SunYJ-hxppy/Multi-Net.
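For orientation only, the sketch below illustrates the three-output design mentioned in the abstract (motion-corrected image, motion deformation map, tissue segmentation mask) as a minimal PyTorch-style module. All names, channel sizes, and layer choices (JointMotionSegNet, the shared encoder, the three heads) are hypothetical placeholders and not the authors' implementation; refer to the linked repository for the actual code.

# Minimal, hypothetical sketch of a joint motion-estimation / segmentation
# network with three output heads, loosely following the abstract.
# Not the authors' implementation; see the GitHub repository above.
import torch
import torch.nn as nn

class JointMotionSegNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=4, feat=32):
        super().__init__()
        # Shared encoder over a 2.5D input (three neighbouring slices as channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Three task-specific heads.
        self.correction_head = nn.Conv2d(feat, 1, 3, padding=1)            # motion-corrected centre slice
        self.deformation_head = nn.Conv2d(feat, 2, 3, padding=1)           # 2D deformation field (dx, dy)
        self.segmentation_head = nn.Conv2d(feat, n_classes, 3, padding=1)  # tissue class logits

    def forward(self, x):
        z = self.encoder(x)
        return self.correction_head(z), self.deformation_head(z), self.segmentation_head(z)

# Example: one 2.5D input of three 256x256 slices.
net = JointMotionSegNet()
corrected, deformation, segmentation = net(torch.randn(1, 3, 256, 256))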
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1643_paper.pdf
SharedIt Link: https://rdcu.be/dV508
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72114-4_21
Supplementary Material: N/A
Link to the Code Repository
https://github.com/SunYJ-hxppy/Multi-Net
Link to the Dataset(s)
https://openneuro.org/datasets/ds004173/versions/1.0.2
BibTex
@InProceedings{Jun_DeformationAware_MICCAI2024,
author = { Jung, Sunyoung and Choi, Yoonseok and Al-masni, Mohammed A. and Jung, Minyoung and Kim, Dong-Hyun},
title = { { Deformation-Aware Segmentation Network Robust to Motion Artifacts for Brain Tissue Segmentation using Disentanglement Learning } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15009},
month = {October},
pages = {213 -- 222}
}
Reviews
Review #1
- Please describe the contribution of the paper
This paper presents a deep learning framework tailored for motion correction and segmentation of brain tissue in MRI scans affected by motion artifacts. The authors propose a disentanglement learning network that iteratively eliminates artifacts through a jointly trained motion estimation and segmentation network. The network yields three outputs: a motion-corrected image, a motion deformation map highlighting artifact-affected areas, and a brain tissue segmentation mask. The efficacy of the method is assessed through in-vivo experiments conducted on pediatric motion data.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The authors propose an end-to-end network for motion correction and deformation-aware segmentation. This network is designed to produce both motion-corrected images and brain tissue segmentation predictions.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The evaluation of the methodology fails to include comparisons with current state-of-the-art motion correction and segmentation techniques, despite the existence of public datasets suitable for benchmarking such comparisons. This absence of comparison complicates the accurate assessment of the efficacy of the proposed method in relation to leading techniques within the field. Moreover, the paper lacks a comprehensive statistical analysis, notably omitting tests for statistical significance such as p-values. This absence hinders the ability to determine the statistical significance of any observed differences between the proposed method and alternative approaches, which is a critical aspect of medical image research. Lastly, the paper lacks essential details such as standardized image resolution, which is crucial for comprehending and replicating the study’s findings.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Do you have any additional comments regarding the paper’s reproducibility?
The results reported in the paper are reproducible thanks to the accessibility of publicly available code and the presence of public datasets.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
In the weaknesses section, I have identified several critical areas warranting attention. Addressing these concerns has the potential to significantly enhance both the quality and the impact of the paper. Firstly, it is imperative to compare the proposed end-to-end model with a simple solution that involves segmenting brain tissue post-motion correction. Secondly, clarification is needed regarding the rationale behind the selection of a 2.5D approach over a 3D approach. Lastly, the methodology for setting the parameters governing the weighting of various metrics within the loss functions must be discussed. Understanding the impact of these parameters on the outcomes is important.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The evaluation lacks robustness.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
The paper presents interesting points for discussion at the conference. Please ensure the completion of the necessary revisions before the final camera-ready submission.
Review #2
- Please describe the contribution of the paper
The authors presented a segmentation framework for MRI images with motion artifacts (blurring/ringing). They treat artifacts as part of the image ‘style’ and leverage disentanglement learning for artifact removal on unpaired data. The difference between the corrected image and the original image is explicitly incorporated into the segmentation pipeline (deformation-aware), which can improve segmentation accuracy.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The authors propose a novel artifact removal scheme that is formed as a disentanglement learning problem, which requires clear images and corrupted images but they do not need to be paired.
- An explicit artifact detection step is proposed by estimating the deformation field from the motion-corrected image to the original corrupted image. This explicit artifact detection could be quite informative for the downstream segmentation task.
- The authors show that joint training of artifact removal and deformation-aware segmentation is beneficial to both image recovery quality and segmentation accuracy.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Sec. 2.1: the authors used datasets that contain both clean and corrupted scans, but they also used a motion simulator to generate motion-corrupted data. I wonder what the purpose of simulating motion is, given that the public datasets already contain such images. Additionally, by simulating motion, it is possible to obtain paired clean-corrupted images. Could these paired data be used for artifact removal training?
- It would strengthen the paper if the deformation fields are placed on top of the image in qualitative results. This might provide some insights into how the deformation-awareness can contribute to a better segmentation accuracy. I wonder if the deformation field shows some pattern that could be helpful to the segmentation.
- Evaluation metrics are provided without STD.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
The authors need to elaborate on how the simulated data was used. It would also provide more insight to visualize the deformation fields and the segmentation maps on both uncorrected and corrected images.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This work presents a joint motion correction (MOCO) and segmentation pipeline for MRI images with artifacts. The explicit incorporation of the difference between the corrected and uncorrected images is novel, but the connection between deformation awareness and improved segmentation accuracy should be discussed more deeply.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Accept — should be accepted, independent of rebuttal (5)
- [Post rebuttal] Please justify your decision
The authors have resolved almost all my concerns.
Review #3
- Please describe the contribution of the paper
The authors propose a novel 2.5D multi-network fusion for motion correction to improve brain tissue segmentation in the presence of motion artifacts.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper addresses a significant issue regarding motion artifacts in pediatric MRI scans (since it is challenging for kids to remain still for prolonged periods of time).
- The authors will be sharing the code, which will facilitate method adoption and usage within research communities.
- Good and thoughtful use of figures to support the main paper claims.
- An interesting multi-network fusion method is proposed for motion correction, combining a motion correction disentanglement learning network with a joint motion estimation and segmentation network.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The authors’ proposal to utilize a 2.5D approach by employing two neighboring slices as input raises questions about whether the integrity of the 3D MRI volume would be preserved after concatenating denoised 2D slices back into a 3D volume. Clarification on this is crucial for understanding the potential implications of the method on the overall volumetric data structure and the accuracy of subsequent segmentation results.
- Please describe the MRI image characteristics of the internal dataset (echo, spin, resolution, scanner details) and how they compare to those of the public dataset.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Do you have any additional comments regarding the paper’s reproducibility?
- Details are needed regarding the number of unique subjects included in both training and testing sets for both private and public datasets.
- Insufficient details are provided regarding data preprocessing, particularly regarding whether scans were uniformly resampled, registered to a template, or normalized.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- Are the results reported in Table 1 for the private dataset, the OpenNeuro dataset, or both?
- Formula 10: can the authors please elaborate more on what Ladv(Dis) is?
- Please address the reproducibility comments outlined above, thanks.
For the journal extension (not required for paper revision):
- The authors should consider including ablation studies for each component, such as Cycle-Translational Mapping.
- Additionally, it would be interesting to see how their method compares in vivo with the use of vNavs (Volumetric Navigators), and how it compares to nnU-Net with respect to computational resources used during training and inference.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper addresses a significant issue regarding motion artifacts in pediatric MRI scans and proposes an interesting, novel multi-network model with open-source code released. Additional details regarding data preprocessing, and elaboration on whether a 3D volume can be reconstructed from 2D denoised slices, would enhance reproducibility and method adoption.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Accept — should be accepted, independent of rebuttal (5)
- [Post rebuttal] Please justify your decision
I would like to thank the authors for taking the time to address the comments. I agree that a 2.5D model is a reasonable alternative to a 3D model given limited computational resources and a complex multi-task architecture. The further clarification on data processing also makes the paper's organization cleaner. I would suggest that, instead of calculating MS-SSIM to determine whether the 3D volume is maintained, a downstream volume segmentation calculation (such as SynthSeg from FreeSurfer) be used.
Author Feedback
(1) Model with a 2.5D rather than a 3D approach (R3, R6): Building a network that simultaneously performs three tasks (segmentation, motion estimation, and artifact correction) resulted in a complex and resource-intensive architecture. Due to limitations in our hardware memory allocation, we opted for a 2.5D approach. Incorporating adjacent slices assists the segmentation process by providing additional information to the model; therefore, we combined three consecutive 2D slices to segment the center slice. To determine whether the 3D volume is maintained, we computed the MS-SSIM value (0.9658) between the motion-clean 3D image and the motion-corrected 3D image derived from combining the 2.5D outputs. While a full 3D model would ideally capture even richer contextual information, the 2.5D approach offers a practical solution with good performance given our hardware constraints.
(2) Scanning details about the dataset (R3, R6): The private dataset was acquired on a 3T MRI scanner (MAGNETOM Skyra, SIEMENS, Germany) with the following parameters: echo time (TE) of 2.3 ms, repetition time (TR) of 2400 ms, flip angle of 8°, and FOV of 230 x 230 mm. The axial image resolution was 1 x 1 mm for the public dataset and 0.7 x 0.7 mm for the private dataset. To ensure uniform resolution across both datasets, we standardized the image size to 256 x 256 pixels during preprocessing. We will include these details either within the main manuscript or as supplementary material upon acceptance.
(3) The impact of the deformation field and its visualization (only R5): The deformation field measures the difference between the motion-clean image and the motion-corrupted image; it has higher intensity in regions where obvious differences occur, particularly at the boundaries of the brain. Due to space limitations in the current manuscript, we omitted the deformation field image, but we will include it in the final version for publication. Our multi-task learning strategy trains the joint motion estimation and segmentation network simultaneously, optimizing the loss functions together, and the cross-stitch module facilitates the exchange of feature maps between the tasks, resulting in improved performance.
(4) Purpose of the motion simulator and the usage of a paired dataset (only R5): The private dataset consisted of unpaired motion-clean and motion-corrupted images. We employed the motion simulator to produce motion-distorted images for the private dataset. The objective of the motion simulation is to construct a dataset that has been intentionally altered so that detailed information about the motion parameters is available.
(5) Comprehensive statistical analysis (p-value, STD) (R3, R5): A p-value of less than 0.05 was deemed statistically significant. The standard deviations of our proposed network were as follows: 0.101 (CSF, Dice), 0.102 (Gray Matter, Dice), 0.0658 (White Matter, Dice), 0.0466 (MS-SSIM), and 4.172 (MSE). We will include these STD measurements in the revised version upon acceptance.
(6) Dataset utilized for Table 1 (only R6): The quantitative evaluation involved both the private and public datasets. We will make this clearer in the revised version.
(7) Explanation of formula 10 (Ladv(Dis)) (only R6): The formula represents the procedure for training with the adversarial loss, which is utilized to distinguish real from fake images in the translational mapping. “Dis(.)” denotes the discriminators, whereas “E” represents the expectation operation applied to each distribution.
(8) Comparison with other models (only R3): We have obtained additional comparison results using a model that segments brain tissue after motion correction. We will include this additional comparison in the supplementary material.
(9) Reproducibility (only R3): Our code is available on the GitHub website.
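Rebuttal point (1) describes feeding three consecutive 2D slices to process the centre slice and then recombining the corrected centre slices into a 3D volume that is compared against the motion-clean volume with MS-SSIM. The NumPy sketch below illustrates only that slice-stacking and reassembly step; the function names and the edge-handling choice (clamping neighbour indices) are assumptions, not the authors' code.

# Hypothetical sketch of the 2.5D slice-stacking and volume reassembly
# described in rebuttal point (1). Edge slices are handled by clamping
# neighbour indices (an assumption; the paper may handle borders differently).
import numpy as np

def make_25d_inputs(volume):
    """volume: (n_slices, H, W) -> (n_slices, 3, H, W) stacks of neighbouring slices."""
    n = volume.shape[0]
    stacks = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        stacks.append(np.stack([volume[lo], volume[i], volume[hi]], axis=0))
    return np.stack(stacks, axis=0)

def reassemble_volume(center_slices):
    """center_slices: (n_slices, H, W) predicted centre slices -> 3D volume."""
    return np.stack(list(center_slices), axis=0)

# Usage: the reassembled corrected volume could then be compared against the
# motion-clean volume with an MS-SSIM implementation (e.g. pytorch_msssim.ms_ssim),
# which is how the 0.9658 figure quoted above would be obtained.
clean = np.random.rand(32, 256, 256).astype(np.float32)
inputs_25d = make_25d_inputs(clean)                      # (32, 3, 256, 256)
corrected_volume = reassemble_volume(inputs_25d[:, 1])   # identity pass-through in this sketch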
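Rebuttal point (7) explains that formula 10 is an adversarial loss over the translational mapping, with Dis(.) the discriminator and E the expectation. The paper's exact formula is not reproduced on this page; for orientation only, a standard adversarial loss of this kind is commonly written as below, where the generator symbol G and the distribution labels are assumptions rather than the paper's notation.

\[
\mathcal{L}_{\mathrm{adv}}(\mathrm{Dis}) =
\mathbb{E}_{x \sim p_{\mathrm{clean}}}\big[\log \mathrm{Dis}(x)\big]
+ \mathbb{E}_{y \sim p_{\mathrm{corrupted}}}\big[\log\big(1 - \mathrm{Dis}(G(y))\big)\big]
\]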
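Rebuttal point (5) reports significance at p < 0.05 together with per-metric standard deviations but does not state which test was used. A common choice for paired per-subject Dice scores is the Wilcoxon signed-rank test, sketched below purely as an assumed illustration, with placeholder values rather than the paper's results.

# Illustrative significance check on paired per-subject Dice scores.
# The Wilcoxon signed-rank test is an assumed choice, not stated in the paper,
# and the score arrays below are placeholders, not reported results.
import numpy as np
from scipy.stats import wilcoxon

dice_proposed = np.array([0.91, 0.88, 0.90, 0.87, 0.92, 0.89])  # placeholder values
dice_baseline = np.array([0.86, 0.84, 0.88, 0.83, 0.90, 0.85])  # placeholder values

stat, p_value = wilcoxon(dice_proposed, dice_baseline)
print(f"Wilcoxon statistic={stat:.3f}, p={p_value:.4f}, significant={p_value < 0.05}")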
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
All reviewers agree to accept this paper.
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
All reviewers agree to accept this paper.
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The authors did a great job with the rebuttal and convinced the reviewers. The paper should be accepted.
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
The authors did a great job with the rebuttal and convinced the reviewers. The paper should be accepted.