Abstract

Within-subject multimodal groupwise registration aims to align a group of multimodal images into a common structural space. Existing groupwise registration methods often rely on intensity-based similarity measures but can be computationally expensive for large sets of images. Some methods build statistical relationships between image intensities and anatomical structures, which may be misleading when the assumption of consistent intensity-class correspondences does not hold. Additionally, these methods can be unstable in batch group registration when the number of anatomical structures varies across image groups. To tackle these issues, we propose GMM-CoRegNet, a weakly supervised deep learning framework for multimodal groupwise registration. A prior Gaussian Mixture Model (GMM) consolidating image intensities and anatomical structures is constructed from the label of the reference image; we then derive a novel similarity measure for groupwise registration based on the GMM and iteratively optimize the GMM throughout training. Notably, GMM-CoRegNet can register an arbitrary number of images to a reference image simultaneously, requiring only the label of the reference image. We compared GMM-CoRegNet with state-of-the-art groupwise registration methods on two carotid datasets and the public BrainWeb dataset, demonstrating superior registration performance even in scenarios with inconsistent intensity-class mappings.

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1178_paper.pdf

SharedIt Link: https://rdcu.be/dV1P5

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72069-7_59

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/1178_supp.pdf

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Li_GMMCoRegNet_MICCAI2024,
        author = { Li, Zhenyu and Yu, Fan and Lu, Jie and Qian, Zhen},
        title = { { GMM-CoRegNet: A Multimodal Groupwise Registration Framework Based on Gaussian Mixture Model } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15002},
        month = {October},
        pages = {629 -- 639}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    A neural network algorithm for within-subject nonlinear registration of a group of scans of varying MRI contrasts to a reference image. During training, the intensity distributions of the group of scans are partially modeled by a Gaussian mixture model in order to derive a suitable loss function. This involves a predefined label map and the log-posterior probabilities.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • First steps towards a training loss based on GMMs
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The manuscript is not well written.

    • Validation was mostly on simulated images.

  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    I found no mention of code availability in the manuscript.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • Clarify that the objective is actually within-subject registration and explain why this is not done using just a rigid-body transform.

    • If there is a reference image for each subject, then is it really a groupwise registration?

    • Equation 6 looks like there is a single spatial transformation estimated for all images other than the reference image, but then when I read to section 2.3, I see there are $N-1$ deformation fields estimated.

    • Is the T1 scan (reference image) actually used in the registration, or is it simply the image that was labelled?

    • It should not matter that “not every subject has K anatomical structures”. Surely the mixing proportions ($\pi_k$) would fall to zero and the algorithm still work, even if the means and variances are undefined.

    • I would suggest looking into how mutual information could be computed from a GMM, and then generalizing this to N dimensions. This could lead to a more powerful method for multi-modal registration of many images.

    • The GMM described in the manuscript is not really a proper GMM because it assumes that the latent variables are actually observed (the labels on the T1).

    • Bottom of page 5 has “$\gamma_1$, $\gamma_1$, $\gamma_1$”, as does page 7.
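The suggestion above about computing mutual information from a GMM can be sketched as follows. This is a hypothetical illustration, not part of the paper under review: entropies of a Gaussian mixture have no closed form, so this sketch approximates them by Monte Carlo sampling from the fitted joint model.

```python
# Sketch: estimating mutual information between two paired image channels
# from fitted Gaussian mixtures. MI(X, Y) = H(X) + H(Y) - H(X, Y); each
# entropy is approximated by averaging negative log-densities over samples
# drawn from the fitted joint mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_mutual_information(pairs, n_components=3, n_samples=20000, seed=0):
    """pairs: (n_voxels, 2) array of paired intensities from two modalities."""
    joint = GaussianMixture(n_components, random_state=seed).fit(pairs)
    gx = GaussianMixture(n_components, random_state=seed).fit(pairs[:, :1])
    gy = GaussianMixture(n_components, random_state=seed).fit(pairs[:, 1:])
    s, _ = joint.sample(n_samples)             # draw from the fitted joint model
    h_joint = -joint.score_samples(s).mean()   # Monte Carlo estimate of H(X, Y)
    h_x = -gx.score_samples(s[:, :1]).mean()   # cross-entropy estimate of H(X)
    h_y = -gy.score_samples(s[:, 1:]).mean()   # cross-entropy estimate of H(Y)
    return h_x + h_y - h_joint
```

Generalizing the joint model to N intensity channels would give a groupwise similarity measure, which is presumably what the suggestion has in mind.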

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    While there are some things I like in the manuscript, I thought it was not clearly written. There are also a few avenues for potential improvements that could be made, which means that publishing the proposed framework may be slightly premature. My opinion of the work is on the edge between accept and reject.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    Provided the authors can clarify the issues I raised, I’d be happy for the work to be accepted. Because the work is within-subject, I think it would be possible to do slightly better, and would suggest looking into how mutual information could be computed from a GMM. For this reason, and because the proposed method does not optimize a full GMM (i.e., the latent variables are assumed to be known), I still have some slight reservations.



Review #2

  • Please describe the contribution of the paper

    The authors propose a deep-learning framework for the within-subject registration of multiple modalities using a Gaussian mixture model (GMM). The registration is groupwise in the sense that all modalities are used simultaneously. A multimodal similarity measure based on the GMM is introduced. The reference modality needs to be segmented into tissues for weak supervision. The method is validated against existing traditional and deep-learning registration methods on real and simulated datasets.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper is well written and organized. The proposed method includes significant methodological contributions. The proposed method makes sense and is relatively well explained. An ablation study has been performed to evaluate the benefit of each auxiliary loss.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    Standard deviations are never displayed in the results. There is no mention of the alternative strategy for this problem: using image translation to bring all modalities to the same contrast, followed by registration with a classical (non-multimodal) similarity metric, typically with tools like SynthSR from FreeSurfer. The evaluation on anatomical brain images is of limited interest given that these modalities are usually assumed distortion-free, so only a rigid registration is needed.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    There is some confusion between spatial transformation T and deformation field D. Those are two different things, related by: T=Id+D (or equivalently: T(x)=x+D(x)).
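The T versus D distinction drawn above can be made concrete with a minimal warping sketch (an illustration only, not the paper's implementation): the deformation field D stores per-voxel displacements, and the transformation T adds them to the identity grid.

```python
# Sketch: applying a spatial transformation T = Id + D to a 2-D image,
# where D is a dense displacement field. Resampling uses linear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, disp):
    """image: (H, W) array; disp: (2, H, W) displacement field D."""
    identity = np.indices(image.shape).astype(float)  # the Id part: voxel coords
    coords = identity + disp                          # T(x) = x + D(x)
    return map_coordinates(image, coords, order=1, mode='nearest')
```

With disp set to zeros, T is the identity and the image is returned unchanged.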

    The notations in sections 2.1 are a bit heavy and some are introduced without being really used after.

    Multimodal registration can mean two things, aligning either: 1) multiple modalities across different subjects assumed to be aligned within each subject, or 2) aligning various modalities of a single subject onto a reference modality. This paper is about the second one. This distinction should be made more clear for disambiguation (in the abstract especially), using for example terms like “within-subject multimodal registration”.

    Looking at Figure 2, it is surprising that ANTs and Voxelmorph perform so poorly for TOF. The white spots in TOF have opposite contrast in T1, but the sharp edges should drive the optimization with similarity metrics like the local squared correlation coefficient or mutual information. The authors should comment on that.

    In Figure 2 of the supplementary material, something looks off with Voxelmorph. It should be at least capable of aligning the contours of the brains.

    For future work, also using local squared correlation coefficient for ANTs and Voxelmorph in the comparison would be interesting.

    It looks like some alterations to the article template have been made, such as for the Section 3.1 title.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    There are significant methodological contributions. The paper is well written and organised. However, the comparison with other methods could be improved.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    There is still no mention of the alternative strategy of intensity normalization followed by regular registration, instead of multimodal registration. Besides that, the authors have answered most of the questions and remarks from the reviewers well. The paper is well written and the methodological contribution is significant.



Review #3

  • Please describe the contribution of the paper

    This paper presents a method for group-to-reference groupwise registration using a GMM-based deep learning approach. Experiments on carotid and brain datasets demonstrate the effectiveness of the method.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The idea of integrating GMM-based groupwise registration into a deep learning framework and its application to carotid segmentation is novel and effective.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Some conventional methods using GMMs are not cited or compared against:
      • MAP MRF joint segmentation and registration, MICCAI 2002
      • Groupwise Combined Segmentation and Registration for Atlas Construction, MICCAI 2007.
    2. Only one real dataset is used for experiments. The other two are synthetic datasets, which may limit its applicability.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    1. The label loss in Eq. (8) is not explained clearly. It seems this loss, based on the predicted one-hot label, is not differentiable with respect to the spatial transformations.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, this paper is well-written and novel in methodology. However, some methodology details need to be clarified.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

We thank the reviewers for their time and positive feedback on our paper’s technical novelty and effectiveness (e.g., “significant methodological contributions” by R1, “First steps towards a training loss based on GMMs” by R3, “novel in methodology” by R4). We appreciate the detailed comments and suggestions for improvement. Below, we address the concerns raised by the reviewers:

Q1: Clarify the objective is within-subject registration (R1, R3). A1: Thanks for the constructive comments! This paper’s multimodal registration aims to align N modality images of the same subject to one modality (reference image T1) simultaneously. For each subject, N-1 spatial transformations need to be computed excluding T1. We will clarify our method as “within-subject multimodality registration” in the revised version.

Q2: Validation was mostly on simulated datasets (R3, R4). A2: Simulated datasets are crucial for directly validating our method. In clinical datasets, obtaining ground truth labels for each imaging modality is very challenging, especially when specific structures (such as plaques) are not visible in some modalities. In simulated datasets, before applying nonrigid perturbations, all modalities were initially aligned and shared the same ground truth labels, enabling direct performance evaluation. However, in real-world data, e.g., the carotid dataset, only T1 has corresponding manual labels. Thus, we must indirectly evaluate the registration performance by calculating the compositional segmentation DSC.

Q3: Why not done using just a rigid registration (R1, R3)? A3: Rigid transformation cannot align the subtle deformations of vessel wall and plaques caused by blood pulsation. Therefore, we applied nonrigid deformation perturbations to the simulated dataset and employed nonrigid registration in both simulated and real-world datasets.

Q4: ANTs and VoxelMorph (VM) exhibit poor performance in the visualization results (R1). A4: In the clinical carotid dataset, ANTs and VM with MI calculated similarity in both background and foreground regions, which may affect registration performance in the foreground ROI. Our method uses a GMM to model the intensity profile of the vessel wall and plaques, enabling it to implicitly focus on the ROI and reduce interference from the background without the need for ROI extraction.

Q5: The setting of the mixing proportions and the optimization of latent variables (R3). A5: As pointed out by R3, when there are fewer than K anatomical structures present in the subject, we set the mixing proportion for the missing structures to zero, and the algorithm still operates effectively. The GMM parameters were not fixed. As spatial transformations iteratively update, we recalculate the parameters in Equation (4) every 10 training epochs, ensuring the compositional intensity information represented by GMM keeps improving.
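The update described in A5 can be sketched as follows. This is a hedged illustration under stated assumptions, not the authors' code: the reference labels are fixed, and the per-class GMM parameters (Equation (4) in the paper) are recomputed from the currently warped intensities, with the mixing proportion of any absent structure falling to zero.

```python
# Sketch: recomputing GMM parameters from fixed reference labels and the
# currently warped intensities. A structure absent from this subject simply
# gets mixing proportion zero; its mean/variance are placeholders and unused.
import numpy as np

def update_gmm_params(intensities, labels, n_classes):
    """intensities: flat array of warped voxel values; labels: same-shape ints."""
    pi = np.zeros(n_classes)
    mu = np.zeros(n_classes)
    var = np.ones(n_classes)              # placeholder for missing classes
    for k in range(n_classes):
        mask = labels == k
        pi[k] = mask.mean()               # falls to zero when class k is absent
        if mask.any():
            mu[k] = intensities[mask].mean()
            var[k] = intensities[mask].var() + 1e-6   # avoid zero variance
    return pi, mu, var
```

In the rebuttal's scheme, such an update would run every 10 training epochs as the spatial transformations evolve.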

Q6: Label loss in Eq. (8) may not be differentiable with respect to the spatial transformations (R4). A6: The label loss in Eq. (8) is designed to encourage smoothness in the predicted labels, ensuring that neighboring pixels are more likely to share the same label. The predicted labels are derived from the maximum predicted probabilities of the GMM, so the gradients can be backpropagated through the Gaussian functions, allowing the spatial transformation parameters to be updated.
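One way to make the differentiability point in A6 fully explicit is to penalize smoothness on the soft GMM posteriors rather than on a hard argmax. The sketch below is a hypothetical variant, not the paper's Eq. (8): gradients flow through the softmax and the Gaussian log-densities back to the warped intensities, and hence to the transformation parameters.

```python
# Sketch: a differentiable label-smoothness penalty on soft GMM posteriors.
# intensity is the warped image (carries gradients to the transformation);
# mu, var, pi are the per-class GMM parameters, shape (K,).
import torch

def soft_label_smoothness(intensity, mu, var, pi):
    x = intensity.unsqueeze(-1)                        # (H, W, 1)
    # unnormalized Gaussian log-densities plus log mixing proportions
    log_prob = -0.5 * (x - mu) ** 2 / var - 0.5 * torch.log(var) + torch.log(pi)
    post = torch.softmax(log_prob, dim=-1)             # soft class posteriors
    # penalize posterior differences between vertical / horizontal neighbors
    dv = (post[1:, :, :] - post[:-1, :, :]).abs().mean()
    dh = (post[:, 1:, :] - post[:, :-1, :]).abs().mean()
    return dv + dh
```

Replacing argmax with posteriors keeps the loss end-to-end differentiable while still encouraging neighboring pixels toward the same label.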

Q7: Lack of comparison with some conventional methods (R1, R4). A7: Our paper focuses on within-subject groupwise registration and offers a thorough comparison with both traditional and deep learning groupwise registration methods. The two MICCAI papers noted by R4 addressed joint segmentation and registration tasks, making direct comparison less suitable. Nonetheless, we’ll consider adding comparisons in the revised version if space allows.

Q8: Notations redundancy and typos (R1, R3). A8: Thanks for pointing these out! In the revised version, we will fix typos and refine notations for clarity and brevity.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    The rebuttal process helped the reviewers and the paper, and I think it has sufficient support to be accepted.

    There are lingering concerns, however, and I strongly encourage the authors to try to clarify these issues at CR and for the presentation at the conference.

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A


