Abstract

Selecting an optimal standard plane in prenatal ultrasound is crucial for improving the accuracy of AI-assisted diagnosis. Existing approaches, typically dependent on detecting the presence of anatomical structures as defined by clinical protocols, have been constrained by a lack of consideration for image perceptual quality. Although supervised training with manually labeled quality scores seems feasible, the subjective nature and unclear definition of these scores make such learning error-prone and manual labeling excessively time-consuming. In this paper, we present an unsupervised ultrasound image quality assessment method with score consistency and relativity co-learning (CRL-UIQA). Our approach generates pseudo-labels by calculating feature distribution distances between ultrasound images and high-quality standard planes, leveraging consistency and relativity for training regression networks in quality prediction. Extensive experiments on the dataset demonstrate the impressive performance of the proposed CRL-UIQA, showcasing excellent generalization across diverse plane images.
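To make the pseudo-labeling idea concrete, here is a minimal sketch, assuming a generic feature extractor and a distance-to-score mapping; all names (quality_pseudo_label, standard_feats) and the exponential mapping are illustrative assumptions, not the authors' implementation.

    import torch

    def quality_pseudo_label(image_feat: torch.Tensor,
                             standard_feats: torch.Tensor) -> torch.Tensor:
        # image_feat:     (D,) feature vector of the query image.
        # standard_feats: (N, D) features of high-quality standard planes.
        # Mean feature distance to the standard-plane set; a smaller
        # distance means the image is closer to the high-quality planes.
        dist = torch.cdist(image_feat[None], standard_feats).mean()
        # Map the distance to a (0, 1] quality score; this particular
        # mapping is an assumption, not the paper's formula.
        return torch.exp(-dist)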

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/3238_paper.pdf

SharedIt Link: https://rdcu.be/dV19q

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72086-4_69

Supplementary Material: N/A

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Guo_Unsupervised_MICCAI2024,
        author = { Guo, Juncheng and Lin, Jianxin and Tan, Guanghua and Lu, Yuhuan and Gao, Zhan and Li, Shengli and Li, Kenli},
        title = { { Unsupervised Ultrasound Image Quality Assessment with Score Consistency and Relativity Co-learning } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15005},
        month = {October},
        pages = {734--743}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes an unsupervised ultrasound image quality assessment method based on score consistency and relativity co-learning.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The experimental results on the dataset show the effectiveness of the proposed method compared with other existing models.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Only one dataset is used for validation, and some failure cases would be helpful for understanding the framework.
    2. In the experiments, the compared algorithms are out of date (all from before 2021), and none of them is designed for medical images. More literature should be included, such as "RTN: Reinforced Transformer Network for Coronary CT Angiography Vessel-Level Image Quality Assessment."
    3. The ablation study is insufficient; for example, the parameters assigned to the losses and the effect of removing each component should be examined.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    See the weakness.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Mainly due to the insufficient experiments.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This paper presents an unsupervised ultrasound image quality assessment method with score consistency and relativity co-learning.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Proposed an unsupervised image quality assessment method for ultrasound images by employing score consistency and relativity co-learning.
    2. Weakly augmented views are employed to enforce the score consistency loss, and strongly augmented views are employed to enforce the relativity loss (see the sketch after this list).
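    A minimal sketch of the two losses as summarized above, assuming a score regressor f that maps an image batch to scalar quality scores; the MSE and margin-ranking forms are assumptions, not the authors' exact formulations.

        import torch.nn.functional as F

        def consistency_loss(f, x_weak_a, x_weak_b):
            # Two weakly augmented views of the same image should
            # receive (nearly) identical predicted quality scores.
            return F.mse_loss(f(x_weak_a), f(x_weak_b))

        def relativity_loss(f, x, x_strong, margin=0.1):
            # A strongly corrupted view should score lower than the
            # original; a margin ranking loss encodes this relative order.
            return F.relu(margin - (f(x) - f(x_strong))).mean()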
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. How closely the image dataset should match the standard-plane image set, and how to obtain this set for different image datasets to be evaluated, are not clearly explained in the paper.
    2. The SOTA methods used for comparison are not up to date or well suited. Some more recent references on image quality assessment should be discussed.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    Reproducing the paper might not be straightforward, since some detailed information is missing, e.g., how to select the standard-plane image set. How the parameters in the loss function are to be selected is also not discussed.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    1) How well should the image dataset match the standard-plane image set, and how can such a set be obtained for different images?
    2) How many image pairs are finally labeled by the sonographers for testing? Please be clear on this.
    3) Why is only accuracy used as the evaluation metric? Is the number of high- and low-quality samples balanced? Why not report other metrics, e.g., precision, AUPRC, etc.?
    4) The ablation results in Table 1 show that adding the L_r loss alone does not seem to improve performance; why does adding it together with the L_c loss improve performance? Please give some reasons.
    5) Please compare against some recent medical image quality assessment methods to discuss the advantages of the proposed method, e.g.: (a) Q. Chen et al., "MUIQA: Image Quality Assessment Database and Algorithm for Medical Ultrasound Images," IEEE ICIP 2021, pp. 2958-2962; (b) H. Yang et al., "A Minimally Supervised Approach for Medical Image Quality Assessment in Domain Shift Settings," IEEE ICASSP 2022, pp. 1286-1290.
    6) Why do the parameters used in the implementation not sum to 1? How sensitive is the method's performance to the choice of these parameters on different datasets?

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, the paper proposes a method with some novel losses incorporated, and the experimental results appear to support the claims.

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    This paper describes a method to train an ultrasound quality assessment model without having to directly annotate the image quality of any of the images. This works through three components: feature distances to known high-quality standard-plane images used as pseudo-labels, a score consistency loss on weakly augmented views, and a score relativity loss on strongly augmented views.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper is clearly written and easy to follow. The method is simple and very practical as it requires very little human labelling effort to implement. Although the authors have applied it to ultrasound, it is probably generally applicable to other imaging modalities. The experimental comparison is thorough and shows significantly better performance than baselines.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The method is implemented and evaluated entirely on private datasets.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    I only have minor comments; the paper is good in its current form:

    • I suggest changing the word “enhancement” (which suggests improving image quality) to “corruption”.
    • The caption of Table 1 should state what the metric is.
    • An appropriate missing comparison would be to simply use the quality pseudo-label as a "model" and see how its performance compares to that of the trained model.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    A practical and effective method with a good experimental validation. Clearly written manuscript.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    The paper proposes an unsupervised method to compute the image quality score for ultrasound data. The authors build on an existing YOLOv5 model pretrained for the detection of key anatomical structures (KASs).

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strengths of the paper are:

    • Most of the existing quality assessment methods depend on ground-truth (GT) annotation by experts, but since quality is a highly relative term, the GT from experts usually varies considerably. An unsupervised approach to this assessment is an excellent strategy.
    • Measuring score consistency using weakly augmented images, and using heavily augmented images to strengthen the model's ability to correlate different image quality features with quality prediction scores, is an interesting method.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The paper is well organised and presented. Here are a few suggestions to improve the quality of the work:

    • In Section 3.1, the dataset is said to be obtained from 262 videos, but it would be worth mentioning the number of individuals scanned to obtain these data. This would clarify whether the training and test data were mutually exclusive.
    • Section 3.2 mentions that the network is built on an existing one trained for KAS detection, but it would be worth doing an ablation study to ensure that quality scoring is indeed influenced by the actual presence of KASs in the image.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    The paper is very well organised and written. The methodology presented is simple yet appears effective in solving an issue that many researchers face while dealing with ultrasound data. The suggestions provided are minor but, if implemented, would improve the quality of the work.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    • The clarity of the work is excellent.
    • The use of a fully unsupervised method to solve a problem that is highly relative is also commendable.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

N/A




Meta-Review

Meta-review not available; early accepted paper.


