Abstract
Despite substantial progress in utilizing deep learning methods for clinical diagnosis, their efficacy depends on sufficient annotated data, which is often of limited availability owing to the extensive manual effort required for labeling. Although prevalent data synthesis techniques can mitigate such data scarcity, they risk generating outputs with distorted anatomy that poorly represent real-world data. We address this challenge through a novel integration of anatomically constrained synthesis with registration uncertainty-based refinement, termed Anatomic-Constrained medical image Synthesis (ACIS). Specifically, we (1) generate pseudo-masks via physiological density estimation and Voronoi tessellation to represent spatial anatomical information as the image synthesis prior; (2) synthesize diverse yet realistic image-annotation pairs guided by the pseudo-masks; and (3) refine the outputs by registration uncertainty estimation to encourage anatomical consistency between synthesized and real-world images. We validate ACIS for improving performance in both segmentation and image reconstruction tasks for few-shot learning. Experiments across diverse datasets demonstrate that ACIS outperforms state-of-the-art image synthesis techniques and enables models trained on only 10% or less of the total training data to achieve performance comparable or superior to that of models trained on complete datasets. The source code is publicly available at https://github.com/Arturia-Pendragon-Iris/VonoroiGeneration.
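For orientation, the pseudo-mask step in (1) can be pictured with a short sketch. The code below is a minimal, hypothetical illustration by the editor, assuming an edge-based density map (Canny edge detection, as the reviews below indicate) and nearest-seed labelling for the Voronoi tessellation; it is not the authors' released implementation, and the function name and parameters are invented for illustration.

# Minimal sketch (an editor's illustration, not the authors' released code) of an
# edge-density-driven Voronoi pseudo-mask: (1) estimate a "physiological density"
# map from image edges, (2) sample Voronoi seeds in proportion to that density,
# (3) label every pixel by its nearest seed (the Voronoi tessellation).
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter
from skimage import feature

def voronoi_pseudo_mask(ct_slice, n_seeds=64, sigma=5.0, rng=None):
    rng = rng or np.random.default_rng()
    # 1) Edge map as a proxy for anatomical detail (the reviews indicate Canny is used).
    edges = feature.canny(ct_slice.astype(float))
    # 2) Smooth the edge map into a sampling density; detailed regions get more seeds.
    density = gaussian_filter(edges.astype(float), sigma=sigma) + 1e-6
    prob = (density / density.sum()).ravel()
    seed_idx = rng.choice(prob.size, size=n_seeds, replace=False, p=prob)
    seeds = np.zeros(ct_slice.shape, dtype=bool)
    seeds.ravel()[seed_idx] = True
    # 3) Nearest-seed labelling via a Euclidean distance transform.
    _, nearest = distance_transform_edt(~seeds, return_indices=True)
    flat_seed_index = np.ravel_multi_index(tuple(nearest), ct_slice.shape)
    _, pseudo_mask = np.unique(flat_seed_index, return_inverse=True)
    return pseudo_mask.reshape(ct_slice.shape)

Under these assumptions, a denser edge response yields more, smaller Voronoi cells, which is one way to encode the "physiological density" prior the abstract refers to.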
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1746_paper.pdf
SharedIt Link: pending
SpringerLink (DOI): pending
Supplementary Material: https://papers.miccai.org/miccai-2024/supp/1746_supp.pdf
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Chu_Anatomicconstrained_MICCAI2024,
author = { Chu, Yuetan and Yang, Changchun and Luo, Gongning and Qiu, Zhaowen and Gao, Xin},
title = { { Anatomic-constrained Medical Image Synthesis via Physiological Density Sampling } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15011},
month = {October},
page = {pending}
}
Reviews
Review #1
- Please describe the contribution of the paper
To address the limitation of insufficient annotated data when utilizing deep learning methods, the authors propose integrating synthesized data that are anatomically constrained and refined by an uncertainty estimator. They validate their approach on two downstream tasks, segmentation and reconstruction, for few-shot learning.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The overall architecture is novel. The idea of generating an initial pseudo-mask based on anatomy and using it to train the generator network is interesting. Additionally, the refinement via reinforcement learning based on the registration between real and synthesized images is intriguing.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The whole paper is centered around the claim that the method is guided by anatomy, while the underlying signal is the edges of the image, and no real knowledge of anatomy is incorporated. The overall architecture seems too complex, overengineered, and hard to follow. Moreover, the results show small improvements, and only on one dataset per task, making it questionable whether such overhead is necessary.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- The authors claim their method is anatomically constrained, but it is in fact constrained by the edges of a CT slice. There is no knowledge or understanding of the underlying anatomy; the method simply applies a Canny edge detector followed by postprocessing steps. Instead, anatomical information could be acquired from other modalities or simulations. The whole pipeline for pseudo-mask generation seems very engineered, and it is not clear whether it can be used for CTs of other anatomies.
- In Section 2.2, more details are needed regarding the architecture. It is not clear how many generators and discriminators are trained in total and how they are optimized. Additionally, for the pix2pix model, it is not clear why the authors needed to train it on real-world images and what the pseudo-masks were in that case.
- In general, the whole pipeline is hard to follow, and it is questionable whether this architectural complexity is needed, as no significant improvements are shown, in particular for the reconstruction downstream task.
- The segmentation and reconstruction evaluation tasks were each assessed on a single dataset, rather than on both datasets.
- The term “anatomic-constrained” is incorrect; it should instead be replaced by either “anatomy-constrained” or “anatomically-constrained”.
- The authors state that the hyperparameter selection is described in the supplementary material, but it is missing.
Minor comments:
- In the Introduction section, the authors refer to Fig. 1a), b), c), but these letters are absent from the figure.
- In Section 3.1, the number of CT scans in each dataset is not stated.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
While the paper presents some innovative ideas, the discrepancy between the claimed anatomy-guided methodology and its actual reliance on image edges undermines the novelty of the approach. Additionally, the complex architecture of the system makes it difficult to follow and potentially impractical for broader application.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Reject — could be rejected, dependent on rebuttal (3)
- [Post rebuttal] Please justify your decision
Thank you for the rebuttal, but I am still skeptical whether the complexity of the proposed architecture is needed for this task. Hence, I maintain my decision.
Review #2
- Please describe the contribution of the paper
In their work, the authors introduce ACIS, a methodology for medical image synthesis with anatomical constraints and uncertainty-based refinement. The method consists of several key components: (1) generating a pseudo-mask through physiological density estimation and Voronoi tessellation to encode anatomical information as an image synthesis prior, (2) synthesizing diverse yet realistic images guided by image annotations and the pseudo-mask, and (3) refining the outputs with a registration loss to ensure anatomical consistency between synthesized and real-world images. They validate the effectiveness of ACIS on a chest CT dataset for two tasks, segmentation and image enhancement (e.g., super-resolution, denoising, deblurring), demonstrating promising results.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Overall, the paper is well-written and easy to follow.
- The incorporation of anatomical constraints and uncertainty in the framework is an interesting idea that can enhance segmentation performance.
- The methods are well-presented, supported by clear and informative figures.
- The experiments are well conducted and presented with clarity, aiding in the comprehension of results.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Lack of citations to key methodologies in medical image segmentation that generate synthetic data in scenarios with minimal annotations, potentially overlooking significant advancements in enhancing segmentation performance.
- The use of Bayesian uncertainty is mentioned but not clearly demonstrated in the proposed method in Section 2.3.
- The application of reinforcement learning towards the end of Section 2.3 lacks clarity regarding the algorithm used, the state or value function, and the rationale behind using reinforcement learning.
- Lack of information about the data augmentations used in training for the baselines (Table 1), which is crucial for enhancing baseline segmentation performance.
- The absence of statistical tests like Wilcoxon or signed rank tests to evaluate the significance of gains compared to the best-performing method. Also, the standard deviations are not provided for both segmentation and registration tasks.
More details on these points are provided in response to question 10.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
Overall, the described method appears straightforward to implement, and the authors mention that they plan to release the code upon acceptance.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- The paper lacks citations to important approaches in medical image segmentation that generate synthetic data (images and annotations) in low-annotation settings to enhance segmentation performance. These methods involve a combination of a few labeled and some unlabeled images, with some demonstrating effectiveness even with as few as 1 or 3 labeled images and some unlabeled images. Examples from this extensive list include: [1] “Semi-supervised and Task-Driven Data Augmentation”, IPMI 2019. [2] “Realistic Adversarial Data Augmentation for MR Image Segmentation”, MICCAI 2020. [3] “Data augmentation using learned transformations for one-shot medical image segmentation”, CVPR 2019. [4] “Cut out the Annotator, Keep the cutout: Better Segmentation with weak supervision”, ICLR 2021.
- The abstract mentions leveraging Bayesian uncertainty, yet it’s unclear how this is utilized in the proposed method. Section 2.3 references Bayesian uncertainty, but Equation 3 does not demonstrate any uncertainty modeling or estimation. The equation presents an L2 loss between the transformed image after applying the deformation field and the original/synthetic image at T steps. The current text lacks clarity on how this uncertainty is modeled. Can the authors please clarify this? (See the illustrative sketch after this comment list.)
- Towards the end of Section 2.3, the authors mention the use of reinforcement learning, but it’s ambiguous which reinforcement learning algorithm is applied, what the state or value function is, and how it is utilized in this context. The rationale behind using reinforcement learning for this specific problem is also unclear. Can the authors elaborate on this and provide clarity?
- The baselines presented in Table 1 do not mention what set of data augmentations was used during their training. Past studies have shown that data augmentations are crucial for enhancing baseline segmentation performance. Could the authors provide more information on the data augmentations used?
- It would be beneficial for the authors to perform statistical tests, such as the Wilcoxon signed-rank test, to compare the proposed method against the best-performing compared method. This would help determine if the observed gains are statistically significant for the segmentation task. Additionally, providing standard deviations for both the segmentation and registration tasks would enhance the robustness of the reported results. (A minimal example of such a test is sketched after this comment list.)
- From the text, it is unclear what “ASIC-” signifies in Table 1. It would be beneficial if the authors could provide an explanation or define this term for better understanding.
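The two sketches below illustrate the comments above on Equation 3 and on statistical testing. Both are hedged reconstructions by the editor, not material from the paper: the loss and uncertainty terms are one plausible reading of the reviewer's description and the authors' rebuttal (an L2 term between the warped synthetic image and the real image over T registration steps, with appearance uncertainty taken as the variance of the warped intensities), and the statistical test uses made-up per-case Dice scores purely for illustration.

% Hypothetical reconstruction of the kind of loss and appearance-uncertainty
% term described in the review and rebuttal; not the paper's actual Eq. (3).
\mathcal{L}_{\mathrm{reg}} = \frac{1}{T} \sum_{t=1}^{T}
    \left\lVert I_{\mathrm{syn}} \circ \phi_{t} - I_{\mathrm{real}} \right\rVert_{2}^{2},
\qquad
u(\mathbf{x}) = \operatorname{Var}_{t=1,\dots,T}\!\left[ \left( I_{\mathrm{syn}} \circ \phi_{t} \right)(\mathbf{x}) \right]

# Sketch of the requested significance test: a paired Wilcoxon signed-rank test
# on per-case Dice scores of the proposed method versus the best baseline.
# All values and variable names are illustrative, not results from the paper.
import numpy as np
from scipy.stats import wilcoxon

dice_proposed = np.array([0.91, 0.88, 0.93, 0.90, 0.87, 0.92])
dice_baseline = np.array([0.89, 0.85, 0.92, 0.88, 0.83, 0.91])

stat, p_value = wilcoxon(dice_proposed, dice_baseline, alternative="greater")
print(f"Wilcoxon statistic = {stat:.3f}, p = {p_value:.4f}")
print(f"proposed: {dice_proposed.mean():.3f} ± {dice_proposed.std(ddof=1):.3f}")
print(f"baseline: {dice_baseline.mean():.3f} ± {dice_baseline.std(ddof=1):.3f}")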
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper is well-written and clear in its presentation, making it easy to follow. The incorporation of anatomical constraints and uncertainty into the framework is intriguing and has the potential to enhance segmentation and registration performance. However, there are areas that need improvement, and the paper has notable gaps with respect to citing relevant works. Firstly, it lacks crucial citations of data synthesis methods in medical image segmentation. Secondly, the use of Bayesian uncertainty and reinforcement learning needs further explanation and demonstration in the proposed method. Additionally, details about data augmentations for baselines and statistical tests for significance are lacking. Despite these weaknesses, the paper presents promising methods with potential advancements in segmentation. Further refinement and clarity in these areas could greatly enhance its impact and contribution. Based on the strengths and weaknesses observed in the paper, a Weak Accept recommendation is suggested.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
After reviewing the author’s rebuttal and other reviewers’ comments, I am happy with the Weak Accept rating for this paper.
The paper is well-written, clearly presented, and incorporates intriguing concepts such as anatomical constraints and uncertainty into the framework, which could enhance segmentation and registration performance. However, it lacks crucial citations of data synthesis methods in medical image segmentation, and the use of Bayesian uncertainty and reinforcement learning requires further explanation. Additionally, details about data augmentations for baselines and statistical tests for significance are missing. Despite these weaknesses, the paper’s promising methods and potential advancements in segmentation justify a Weak Accept recommendation.
Review #3
- Please describe the contribution of the paper
The authors utilize the Physiological Density Sampling approach to generate a prior pseudo-mask for medical image synthesis; then, Bayesian uncertainty estimation-based reinforcement learning is applied to refine the synthesized medical images.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Use of the Canny edge detection approach to generate pseudo-masks from real images.
- Use of Bayesian uncertainty estimation as the reinforcement learning reward to make the synthesized images more realistic.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The figure is not fully clear (see the detailed comments on Fig. 1).
- The training of the annotation generator is not clear.
- For the conditional GAN, the way of combining the pseudo-mask and annotation as the condition is not clear.
- Lack of ablation study.
- This paper is only compared with other GAN approaches.
- Need visualizations of image synthesis.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- Step 2 in Fig. 1 is not correct. The pseudo-mask is not generated from noise (it is generated from the randomness of sampling), which will confuse readers.
- The training of the annotation generator is not clear.
- For the conditional GAN, please clarify the way of combining the pseudo-mask and annotation as the condition of the image generator.
- Lack of an ablation study: conditional GAN with the pseudo-mask only vs. conditional GAN with both the pseudo-mask and annotations.
- The paper is only compared with other GAN approaches. Could the authors compare their proposed approach to some diffusion models (such as a latent diffusion model, which can be trained and tested on an RTX A6000)?
- Visualizations of image synthesis are needed, for both the other image synthesis approaches and the proposed approach.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Need more description of methods. Need ablation study and more SOTA comparison.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
In the rebuttal, the authors promised that all of the unclear figures and experiments will be added on GitHub (due to the page limitation). Most of the concerns are resolved. The explanation of how the annotation generator is trained is still unclear.
Therefore, I will keep weak accept.
Author Feedback
We sincerely thank the reviewers for their thorough and insightful feedback, as well as their positive comments, including “clear presentation and easy to follow (R1)”, “promising methods with potential advancements (R1)”, “novel overall architecture (R3)”, and “interesting and innovative ideas (R3)”. We appreciate that R1 and R4 accept the paper directly, while R3 gives a weak reject due to some misunderstanding. Below, we address the concerns raised by the reviewers.
R1&R3&R4: Technical and experimental details. Due to page constraints, we cannot include further technical and experimental details in the manuscript. However, such information, along with datasets, hyper-parameter settings, and visualized results, will be available in the open-source code on GitHub. We also try our best to explain some technical details at the end of this rebuttal.
R3: Anatomy constraint. Edges identify significant transitions in intensity, often corresponding to anatomical boundaries. Many applications use edge detection and filtering to present anatomy in medical images [A-C]. In our study, edge detection presents the spatial density of anatomical structures: regions with significant fluctuations contain abundant information, and vice versa. The method is generalizable to CT scans of other anatomies, such as the abdomen [C]. While we believe edge detection can sufficiently represent the anatomy constraint, we appreciate the suggested simulation and segmentation masks, which we will consider for future work.
R3: Overall architecture and improvement. The overall architecture consists of three stages, following the motivations of “anatomy-constrained sampling” (Section 2.1), “sampling-guided image synthesis” (Section 2.2), and “anatomy refinement” (Section 2.3), respectively. Each stage is clearly designed without redundant components. Our image reconstruction improvement is significant: we achieve performance comparable or superior to the upper bounds with only 10% of the training data. We believe this improvement is useful for real-world applications.
R3: Training complexity. For the framework training, there are existing public models to follow, without training from scratch. We will make our code public to facilitate replication. The framework is trained on an NVIDIA A6000 in about two days, indicating small overhead in the training phase.
R3&R4: Details of the generators and discriminators. We use three generators and three discriminators (Stage 2). The first generator synthesizes pseudo-masks from noise, with its discriminator distinguishing these from real-world samples. The second generator produces annotations from pseudo-masks, with its discriminator differentiating the generated pseudo-mask/annotation pairs from natural ones; it follows the Conditional GAN formulation and loss functions. The third generator (the image generator) synthesizes images from given pseudo-masks and annotations, which are concatenated along the channel dimension as the conditional input. It is first trained on real-world sampled pseudo-masks and annotations to make the generated images more realistic, and then fine-tuned on generated pseudo-masks and annotations.
[A] Sharpness-aware low-dose CT denoising using conditional generative adversarial network
[B] Topology-Preserving Computed Tomography Super-Resolution Based on Dual-Stream Diffusion Model
[C] Low-dose CT image blind denoising with graph convolutional networks
Bayesian uncertainty. Bayesian uncertainty gauges the reliability of registration, aiding anatomical refinement. There are two types of registration uncertainty: transformation and appearance uncertainty. We focus on the latter, which measures the intensity uncertainty of registered voxels or organ volumes. Details are in [23].
Reinforcement learning. We use reinforcement learning to avoid directly back-propagating the uncertainty as a loss function, reducing computation and memory demands. The reinforcement learning treats the appearance uncertainty as the reward and uses policy gradient techniques [26].
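To make the last two rebuttal points more concrete, here is a minimal sketch of one plausible reading of them, written by the editor and not taken from the authors' code: the image generator's conditional input is the pseudo-mask and annotation concatenated along the channel dimension, and refinement treats negative appearance uncertainty as a reward in a REINFORCE-style policy-gradient update, so the uncertainty estimator is never back-propagated through. The function names, the reward definition, and the log-probability term are assumptions.

# Hypothetical sketch (not the authors' implementation) of two rebuttal points:
# (a) conditional input = pseudo-mask and annotation concatenated on the channel
#     axis; (b) refinement via a REINFORCE-style policy gradient with negative
#     appearance uncertainty as the reward, so the (frozen) registration
#     uncertainty estimator never needs to be differentiated through.
import torch

def conditional_input(pseudo_mask, annotation):
    # pseudo_mask, annotation: tensors of shape (B, 1, H, W)
    return torch.cat([pseudo_mask, annotation], dim=1)  # -> (B, 2, H, W)

def refinement_step(optimizer, appearance_uncertainty, log_prob):
    # optimizer: optimizer over the image generator's parameters
    # appearance_uncertainty: per-sample scalar from the registration
    #     uncertainty estimator (e.g., variance of registered intensities)
    # log_prob: log-probability of the generator's sampled stochastic output,
    #     differentiable w.r.t. the generator's parameters
    reward = -appearance_uncertainty.detach()        # lower uncertainty -> higher reward
    baseline = reward.mean()                         # simple baseline for variance reduction
    loss = -((reward - baseline) * log_prob).mean()  # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()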
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The incorporation of anatomical constraints and uncertainty in the framework is interesting and able to enhance segmentation performance. The performance is promising. The authors have addressed the major concerns raised by the reviewers. The authors should revise the paper and release the code as promised. Overall, positive reviews outweigh negative opinions, and the paper should be accepted.
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Reject
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The ACIS methodology, while ambitious in integrating anatomical constraints and uncertainty-based refinement for medical image synthesis, falls short in several critical aspects that are fundamental to its acceptance. Firstly, the paper lacks a robust theoretical foundation and a clear explanation of the integration and impact of Bayesian uncertainty and reinforcement learning, making its approach difficult to assess and validate. Moreover, the paper fails to engage with recent advancements in the field, neglecting necessary comparisons with state-of-the-art methods that could contextualise its contributions. The absence of rigorous statistical validation, such as the inclusion of standard deviations or statistical significance tests, further weakens the reliability of the reported improvements. After careful consideration and weighing of the reviewers’ feedback, I conclude that the paper needs further improvement before publication.
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The idea of incorporating anatomical constraints and uncertainty in medical image synthesis is novel. However, the reviewers also raised a few concerns, e.g., missing citations and comparisons with some state-of-the-art methods. Overall, I think the merits outweigh the shortcomings; therefore, I recommend accepting it.