Abstract

High myopia significantly increases the risk of irreversible vision loss. Traditional perimetry-based visual field (VF) assessment provides systematic quantification of visual loss but it is subjective and time-consuming. Consequently, machine learning models utilizing fundus photographs to estimate VF have emerged as promising alternatives. However, due to the high variability and the limited availability of VF data, existing VF estimation models fail to generalize well, particularly when facing out-of-distribution data across diverse centers and populations. To tackle this challenge, we propose a novel, parameter-efficient framework to enhance the generalized robustness of VF estimation on both in- and out-of-distribution data. Specifically, we design a Refinement-by-Denoising (RED) module for feature refinement and adaptation from pretrained vision models, aiming to learn high-entropy feature representations and to mitigate the domain gap effectively and efficiently. Through independent validation on two distinct real-world datasets from separate centers, our method significantly outperforms existing approaches in RMSE, MAE and correlation coefficient for both internal and external validation. Our proposed framework benefits both in- and out-of-distribution VF estimation, offering significant clinical implications and potential utility in real-world ophthalmic practices.

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1834_paper.pdf

SharedIt Link: https://rdcu.be/dVZi4

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72378-0_65

Supplementary Material: N/A

Link to the Code Repository

https://github.com/yanzipei/VF_RED

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Yan_Generalized_MICCAI2024,
        author = { Yan, Zipei and Liang, Zhile and Liu, Zhengji and Wang, Shuai and Chun, Rachel Ka-Man and Li, Jizhou and Kee, Chea-su and Liang, Dong},
        title = { { Generalized Robust Fundus Photography-based Vision Loss Estimation for High Myopia } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15001},
        month = {October},
        pages = {700 -- 710}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper introduces a novel framework dubbed refinement-by-denoising (RED), designed for visual field (VF) estimation from retinal images. The proposed pipeline encompasses three key components: pre-trained feature extraction, a newly proposed feature denoising module, and a regression module. Comparative experiments demonstrate that this approach outperforms existing methods in terms of both efficiency and effectiveness, while ablation studies highlight the significant contribution of the proposed modules to the enhancement of results.
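
    As a rough illustration of this three-component pipeline, a minimal sketch is given below; the class name, module sizes, and the 512-feature/52-point dimensions are illustrative assumptions rather than the paper's exact implementation.

    ```python
    import torch
    import torch.nn as nn


    class VFEstimator(nn.Module):
        """Sketch of the three-stage pipeline: a frozen pretrained feature
        extractor, a RED-style feature-denoising module, and a regression head.
        All sizes (feat_dim=512, num_vf_points=52) are illustrative assumptions."""

        def __init__(self, backbone: nn.Module, feat_dim: int = 512, num_vf_points: int = 52):
            super().__init__()
            self.backbone = backbone                      # pretrained on natural images
            for p in self.backbone.parameters():          # kept frozen (parameter-efficient)
                p.requires_grad = False
            # hypothetical denoiser: a shallow MLP with one hidden layer
            self.denoiser = nn.Sequential(
                nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
            )
            self.regressor = nn.Linear(feat_dim, num_vf_points)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            with torch.no_grad():
                z = self.backbone(x)        # "noisy" features from the pretrained model
            z_hat = self.denoiser(z)        # refined (denoised) features
            return self.regressor(z_hat)    # point-wise VF estimates
    ```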

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The comparison results surpass other SOTA methods, evidencing the effectiveness of the proposed method on this task. The significant improvements in the ablation experiments affirm the role of the proposed modules in enhancing accuracy.
    2. The method is lightweight, requiring significantly fewer parameters than fine-tuning and fewer than most comparative methods, making it hardware-friendly for both training and inference deployment.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. The manuscript contains an excess of secondary content that is not a primary contribution.
      • Extensive formula derivations that are not original contributions of the paper would be more appropriately placed in the supplementary materials, such as the derivations from Equation 4 to Equation 7 and from Equation 10 to Equation 12.
      • The decomposition of e in Equation 11 is redundant, as it is not utilized in subsequent derivations nor coupled with any specific clinical significance analysis.
    2. Some explanations related to the main themes and conclusions are too brief, leading to a proliferation of confusing descriptions:
      • My greatest concern is why the authors assert that OOD problems can be modeled as zero-mean additive white Gaussian noise. I briefly reviewed the referenced paper [15], where the method is applied to domain shifts of different styles but similar semantics. However, the manuscript deals with the substantial gap between natural and medical images. To my knowledge, such OOD cannot be simply modeled as additive noise, and the authors should provide a more detailed explanation for this conclusion.
      • In the Ablation Study section, what exactly do ‘Mean,’ ‘Median,’ and ‘kernel’ represent?
  • Please rate the clarity and organization of this paper

    Poor

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    Is the optimization method for Equation 12 gradient descent or least squares?

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Please see the weakness section.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Despite the promising results achieved by the proposed method, the theoretical explanation and modeling process are too vague to be convincing.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • [Post rebuttal] Please justify your decision

    The clarity and organization of this paper are poor.



Review #2

  • Please describe the contribution of the paper

    The paper introduces a framework for Vision Loss Estimation in High Myopia using fundus photographs. Traditional visual field assessment methods are subjective and time-consuming, prompting the development of machine learning models for more efficient estimation. The proposed Refinement-by-Denoising (RED) module enhances the generalizability of visual field estimation by refining features and adapting pretrained vision models. This framework outperforms existing approaches in accuracy metrics for both internal and external validation on real-world datasets.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    1. The Refinement-by-Denoising (RED) module enhances the generalizability of visual field estimation by refining features and adapting pretrained vision models.
    2. The framework outperforms existing approaches in terms of accuracy metrics such as RMSE, MAE, and correlation coefficient for both internal and external validation on real-world datasets.
    3. The study systematically assesses the robustness of visual field estimation models across datasets from different centers and populations, offering valuable insights for future ophthalmic practices.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    1. To address high myopia, there are approaches such as ICL implantation [1][2][3]. Can the vision loss estimation assist ICL implantation?
    2. Does the proposed RED reduce the inference efficiency of vision loss estimation?

    [1] Zhang, Zhe, et al. “Primary observations of EVO ICL implantation for high myopia with concave iris.” Eye and Vision 10.1 (2023): 18.
    [2] ICL in Treatment of Myopia (ITM) Study Group. “United States Food and Drug Administration clinical trial of the Implantable Collamer Lens (ICL) for moderate to high myopia: three-year follow-up.” Ophthalmology 111.9 (2004): 1683-1692.
    [3] Fang, Huihui, et al. “Purely Image-based Vault Prediction with Domain Prior Supervision for Intraocular Lens Implantation.” Proceedings of the 2022 International Conference on Intelligent Medicine and Health. 2022.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    None.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Please refer to the main weakness part.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    In this paper, an important problem of visual loss estimation from fundus images is addressed. The proposed method is well motivated. However, the theory behind MC-SURE is not simple; it is recommended to move the mathematical formulas to the supplementary material. To sum up, I suggest that the paper be rated as weak accept.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    This paper proposes a novel, parameter efficient framework to enhance the generalized robustness of visual field (VF) estimation on both in- and out-of-distribution data.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The paper is well-written and easy to follow.
    • The experiments support the claimed novelty.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • A private dataset is used for evaluating the method, making it hard for others to compare their numbers against those reported in the paper.
    • Lack of validation on public datasets.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?
    • The code is not available.
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • Utilize the public data to verify the generalization ability of the proposed method.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
    • Well written and logical.
    • Lack of experiments on public dataset.
  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    It is a novel application study, and the paper is well organized.




Author Feedback

We sincerely appreciate all comments from the three reviewers: [#1], [#3] and [#4]. While we acknowledge their constructive feedback, we would like to clarify the following points:

To [#1]: One of our main contributions is the parameter-efficient framework RED for robust VF estimation from fundus photos. We introduce these equations for the optimization of RED, as they provide a theoretical guarantee for optimizing RED in an unsupervised manner.
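
The unsupervised optimization referred to here is identified in the reviews with MC-SURE; as a hedged orientation, a generic Monte-Carlo SURE objective for a feature denoiser under the AWGN assumption might look like the sketch below (the function name, `sigma`, and `eps` are assumptions, and this is a generic form rather than the paper's exact Eq.(10)-(12)).

```python
import torch


def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    """Monte-Carlo SURE: an unbiased estimate of the MSE between the denoised
    output and the unknown clean features, computable without clean targets.
    Assumes y = x + n with n ~ N(0, sigma^2 I), i.e., an AWGN model.
    This is a generic MC-SURE sketch, not the paper's exact objective."""
    n, d = y.shape
    f_y = denoiser(y)
    fidelity = ((y - f_y) ** 2).sum(dim=1).mean()            # ||y - f(y)||^2 term
    b = torch.randn_like(y)                                  # random probe vector
    div = (b * (denoiser(y + eps * b) - f_y)).sum(dim=1).mean() / eps  # MC divergence
    return fidelity - d * sigma ** 2 + 2 * sigma ** 2 * div
```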

Besides, we include the detailed decomposition of e in Eq.(11) to depict its composition for better understanding. We will omit it from Eq.(11) and keep the description of its decomposition below Eq.(11).

As for the noise modeling in our problem, to the best of our knowledge, existing domain adaptation methods usually model the domain shift/gap with a Gaussian model, such as the referenced paper [15], where the encoded features are modeled as Gaussian with potential uncertainties, i.e., additive white Gaussian noise (AWGN), as illustrated in Eq.(5) and Eq.(6) of their paper. Inspired by this, we model the domain gap between the natural image domain and the fundus photo domain in the same way; here, the domain gap resides in the features extracted from fundus photos by natural-image pre-trained models, and we assume these extracted features contain AWGN. We will clarify the above modeling more accurately. It is worth mentioning that there is no perfect model for the domain gap. Our experimental results in Table 2 and Table 3, as well as the Ablation Study, demonstrate that this modeling, together with the proposed RED, can appropriately address the above domain gap, as illustrated by the improved performance on both internal and external validation data.
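
Stated compactly, the assumption described above is that an extracted feature decomposes into an unobserved clean feature plus AWGN; a sketch of this modeling (with the noise level sigma left unspecified) is:

```latex
z = z^{\ast} + n, \qquad n \sim \mathcal{N}(\mathbf{0},\, \sigma^{2} I)
```

where z^{\ast} denotes the noise-free feature that the RED module is trained to recover from z.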

Besides, in the Ablation Study of “Effectiveness of RED”, we aim to examine the effectiveness of the proposed RED; therefore, we compare it with alternative denoising methods, and due to the lack of ground truth, we choose the mean and median kernels/filters as the baselines. The experimental results are reported in Table 3. We will clarify these terms and include an additional description for better readability.
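
As a hedged sketch of such mean/median-kernel baselines applied along the feature dimension (the 1-D treatment, kernel size, and function names are assumptions; the paper's exact configuration is not specified here):

```python
import torch
import torch.nn.functional as F


def mean_filter(features: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    """Mean-kernel baseline: 1-D average filtering along the feature dimension."""
    pad = kernel_size // 2
    x = F.pad(features.unsqueeze(1), (pad, pad), mode="replicate")  # (B, 1, D + 2*pad)
    return F.avg_pool1d(x, kernel_size, stride=1).squeeze(1)        # (B, D)


def median_filter(features: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    """Median-kernel baseline: sliding-window median along the feature dimension."""
    pad = kernel_size // 2
    x = F.pad(features.unsqueeze(1), (pad, pad), mode="replicate").squeeze(1)  # (B, D + 2*pad)
    windows = x.unfold(dimension=1, size=kernel_size, step=1)                  # (B, D, k)
    return windows.median(dim=-1).values
```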

To [#3]: This study aims to estimate vision loss from fundus photographs, driven by the hypothesis of a “structure-function” relationship, whereby we believe that structural changes captured by fundus photographs are associated with the vision loss detected by the VF test. The ICL has become an important technique for myopia correction. Some studies [1] have reported that ICL implantation in highly myopic eyes leads to significant changes in retinal structure, such as retinal thickness and vessel density, which can be captured by fundus photography. Therefore, our proposed method could be applied to evaluate visual improvement in highly myopic eyes after ICL implantation.

[1] Xu Y, et al. Analysis of Microcirculation Changes in the Macular Area and Para-Optic Disk Region After Implantable Collamer Lens Implantation in Patients With High Myopia. Front Neurosci. 2022 May 19;16:867463. doi: 10.3389/fnins.2022.867463.

Besides, our method is parameter efficient, introducing only a shallow ReLU-based MLP with one hidden layer. We calculated the execution time (on a single NVIDIA V100) and the number of floating-point operations (FLOPs), as follows:
Regression: 5.65x10^(-5) s / 26624.00 FLOPs
Regression + RED: 8.98x10^(-5) s / 288768.00 FLOPs
Therefore, the inference efficiency is only slightly reduced.
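
A minimal sketch of how such timing and FLOP figures could be measured, assuming hypothetical 512-dimensional features, 52 VF test points, and a Linear-ReLU-Linear RED module (these dimensions and the one-MAC-per-FLOP counting convention are illustrative assumptions, not the paper's reported configuration):

```python
import time

import torch
import torch.nn as nn


def linear_flops(layer: nn.Linear) -> int:
    """Multiply-accumulate count of a fully connected layer (one MAC = one FLOP here;
    counting conventions vary, so treat the absolute numbers as illustrative)."""
    return layer.in_features * layer.out_features


@torch.no_grad()
def avg_forward_time(module: nn.Module, x: torch.Tensor, repeats: int = 1000) -> float:
    """Average forward-pass wall time in seconds."""
    module.eval()
    if x.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        module(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats


# Hypothetical configuration: 512-d features, 52 VF test points.
feat = torch.randn(1, 512)
regression = nn.Linear(512, 52)
red_plus_regression = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 52))

print("Regression:", avg_forward_time(regression, feat), "s /",
      linear_flops(regression), "FLOPs")
print("Regression + RED:", avg_forward_time(red_plus_regression, feat), "s /",
      sum(linear_flops(m) for m in red_plus_regression if isinstance(m, nn.Linear)), "FLOPs")
```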

To [#4]: We fully agree that validating our method on public datasets is important. However, to the best of our knowledge, no existing public dataset of high myopia with pairs of point-wise VF and fundus photographs is available. Therefore, we have collected real-world data from two clinical centers for external validation in this study. We will also release our code publicly for better reproducibility.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A


