Abstract
Early screening and classification of Age-related Macular Degeneration (AMD) are crucial for precise clinical treatment. Currently, most automated methods focus solely on dry and wet AMD classification. However, the classification of wet AMD into the more specific type 1 choroidal neovascularization (CNV) and type 2 CNV has rarely been explored, despite its significance for intravitreal injection treatment. Furthermore, previous methods predominantly utilized single-modal images to distinguish AMD types, whereas multi-modal images can provide a more comprehensive representation of pathological changes for accurate diagnosis.
In this paper, we propose a Modal Prior Mutual-support Network (MPMNet), which for the first time combines OCTA images and OCT sequences for the classification of normal, dry AMD, type 1 CNV, and type 2 CNV. Specifically, we first employ a multi-branch encoder to extract modality-specific features.
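The multi-branch encoder is not specified in detail here (the reviews suggest a CNN/ViT hybrid backbone). The snippet below is only a minimal illustrative sketch of the idea of one encoder branch per modality, assuming a 2-D branch for the OCTA en-face image and a 3-D branch for the OCT sequence; all layer choices and names are ours, not the authors'.

```python
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    """Minimal sketch of a multi-branch encoder: one branch per modality."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # 2-D branch for the OCTA en-face image.
        self.octa_branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        # 3-D branch for the OCT B-scan sequence (depth x height x width).
        self.oct_branch = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, feat_dim))

    def forward(self, octa_image, oct_volume):
        # A separate encoder per modality preserves modality-specific cues.
        return self.octa_branch(octa_image), self.oct_branch(oct_volume)
```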
A novel modal prior mutual-support mechanism is proposed, which determines the primary and auxiliary modalities based on the sensitivity of different modalities to lesions and makes joint decisions. In this mechanism, a distillation loss is employed to enforce consistency between single-modal decisions and joint decisions, encouraging the network to focus on the specific pathological information within each modality.
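The exact form of the distillation loss is not given here; a common way to enforce this kind of consistency is a temperature-softened KL divergence between each single-modal prediction and the (detached) joint prediction, sketched below. Function and parameter names are illustrative, not the authors' implementation.

```python
import torch.nn.functional as F

def consistency_distillation_loss(single_logits, joint_logits, temperature=2.0):
    # Soften the joint (fused) prediction and treat it as the teacher.
    teacher = F.softmax(joint_logits.detach() / temperature, dim=1)
    # The single-modality branch acts as the student.
    student = F.log_softmax(single_logits / temperature, dim=1)
    # Hinton-style T^2 scaling keeps gradient magnitudes comparable.
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
```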
Furthermore, we propose a mutual information-guided feature dynamic adjustment strategy.
This strategy adjusts the channel weights of the two modalities by computing the mutual information between OCTA and OCT, thereby improving the network’s robustness to low-quality modal features.
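Neither the MI estimator nor the mapping from MI to channel weights is specified in the abstract. The sketch below is one plausible reading, assuming a histogram-based MI estimate between co-registered OCTA and OCT projections and a simple sigmoid gate on the resulting score; all names and the gating formula are illustrative assumptions.

```python
import numpy as np

def histogram_mutual_information(img_a, img_b, bins=32):
    """Histogram estimate of MI between two co-registered 2-D images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of img_b
    nonzero = pxy > 0
    return float((pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])).sum())

def mi_channel_gate(oct_features, octa_features, oct_proj, octa_img, k=4.0, mi_ref=0.5):
    """Attenuate both modality branches when cross-modal MI is low (toy gating only)."""
    mi = histogram_mutual_information(oct_proj, octa_img)
    gate = 1.0 / (1.0 + np.exp(-k * (mi - mi_ref)))  # low MI -> gate near 0
    return gate * oct_features, gate * octa_features
```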
Experiments on private and public datasets have demonstrated that the proposed MPMNet outperforms existing state-of-the-art methods.
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/0967_paper.pdf
SharedIt Link: https://rdcu.be/dVZi7
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72378-0_68
Supplementary Material: N/A
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Li_MPMNet_MICCAI2024,
author = { Li, Yuanyuan and Hao, Huaying and Zhang, Dan and Fu, Huazhu and Liu, Mengting and Shan, Caifeng and Zhao, Yitian and Zhang, Jiong},
title = { { MPMNet: Modal Prior Mutual-support Network for Age-related Macular Degeneration Classification } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15001},
month = {October},
pages = {733--742}
}
Reviews
Review #1
- Please describe the contribution of the paper
- The authors proposed a Modal Prior Mutual-support Network (MPMNet) to classify normal, dry AMD, and type 1 and type 2 wet AMD cases based on paired OCT and OCTA. In the experiment, multiple existing approaches are compared with the proposed model. Both quantitative and qualitative results are provided in the manuscript. The authors stated that this is the first study to classify dry AMD and type 1 and type 2 wet AMD from normal cases using both OCT B-scans and OCTA images. The proposed approach includes two new designs: 1) the modal prior mutual-support mechanism and 2) the mutual information-guided feature dynamic adjustment strategy (FDAS).
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- OCT and OCTA are two of the most popular ophthalmic image modalities. The proposed MPMNet takes advantage of both image modalities, leading to a higher AMD classification accuracy rate than other existing single-modality approaches.
- The research goal of identifying dry and two types of wet AMD is clinically meaningful.
- The authors provided a well-organized introduction, research goals, related studies, and main contributions.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- It is unclear how the authors selected their training, validation, and test set.
- The terms “modal prior ‘mutual’-support mechanism” and “modal prior ‘self’-support mechanism” are confusing.
- Although Fig. 2 seems to be an informative flowchart, many details are not intuitive or are inconsistent. For example, the yellow dashed lines in 2(a) are supposed to help readers understand how the CNN and ViT are connected in different CNN stages, but I still find the correspondence between 2(a) and the main flowchart confusing. Also, labels are not clearly matched, e.g., “Mat” and “MatMul”.
- The description of the feature dynamic adjustment strategy is vague in both the method section and Fig. 2(b).
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not provide sufficient information for reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
- There are not sufficient details to reimplement the exact proposed model, although the proposed model sounds reasonable and should work conceptually.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- The authors did a good job introducing the study purpose and proposed model, but the organization of the method section can be improved.
- The proposed model sounds novel and promising, but reimplementation would be challenging for someone not in the same group.
- I recommend adding the notations of the equations in the main flowchart.
- Using distillation loss is a good idea. More details can be helpful.
- The missing description of how the training, validation, and test datasets were split is one main limitation of this study. The authors need to state clearly that their data was divided by subjects.
- The authors stated that the dataset “MMC-AMD” is publicly available. However, from their GitHub repository (https://github.com/li-xirong/mmc-amd), it seems only the color fundus photographs and OCT B-scans are available. Although the authors could access the images, it is unclear whether the public can access the paired OCT B-scan sequences and OCTA.
- The authors stated that the OCTA field of view is 3 × 3 mm², but the OCT dimensions are unclear. In addition, what is the protocol for the OCT acquisition? How many B-scans are in one OCT volume?
- It is unclear how the proposed model performs for each classification category. How good is it for separating Type 1 and Type 2 AMD cases? How good is it for separating the normal and Dry AMD cases?
- Fig. 3 is a nicely designed figure that illustrates meaningful qualitative results. It would be even better if the sub-panel labeling were more explicit so that the figure is self-explanatory.
- A few typos: The caption of Fig. 1 – OCTA (top row) and OCTA (second row); Modal prior mutual-support mechanism? Modal prior self-support mechanism?
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The method is novel, but the authors need to provide clear information about how they divided the training, validation, and test datasets.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
This paper presents MPMNet, a pioneering network that leverages both OCT and OCTA images to enhance the classification of age-related macular degeneration (AMD) and its subtypes. Key contributions include a multi-branch encoder for comprehensive feature extraction, a modal prior mutual-support mechanism that optimizes the use of modality-specific information, and a mutual information-guided feature dynamic adjustment strategy to improve network robustness against image quality variations. MPMNet demonstrates superior performance over existing state-of-the-art methods, offering significant advancements in the precision of AMD diagnosis and treatment strategies, underscoring its potential clinical impact.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The main strengths of the paper are as follows:
- Novelty in Integration of Modalities: The MPMNet introduces a unique approach by combining OCT and OCTA images for the classification of AMD and its subtypes. This dual-modality integration is novel as it leverages the complementary strengths of each imaging technique. OCTA provides detailed visualization of blood flow, which is crucial for identifying CNV, while OCT offers depth-resolved images of retinal layers, important for assessing structural changes. This integration allows for a more comprehensive analysis than using either modality alone.
- Modal Prior Mutual-support Mechanism: Another innovative aspect of MPMNet is the modal prior mutual-support mechanism. This component enhances the network’s ability to focus on the most informative features from each modality based on their pathological relevance. By dynamically determining the primary and auxiliary modalities, the network optimizes the processing of complex data, potentially leading to more accurate and reliable diagnostic outcomes.
- Mutual Information-guided Feature Dynamic Adjustment: The introduction of a feature adjustment strategy guided by mutual information is a significant advancement. This strategy adjusts the influence of each modality based on the quality of the images, thus mitigating the effects of poorer-quality images on the diagnostic results. This not only improves the robustness of the network but also ensures that classification accuracy is maintained even with variable-quality data.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The paper could improve by providing a more detailed error analysis, identifying cases where the model fails or is less accurate. Understanding these scenarios could help in refining the model further and ensuring more reliable performance.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
No
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
Your submission detailing the Modal Prior Mutual-support Network (MPMNet) for classifying age-related macular degeneration (AMD) presents a promising method that leverages multimodal imaging data. Below, I offer some constructive feedback aimed at refining your manuscript.
The formulation of MPMNet and its components (multi-branch encoder, MPSM, and feature adjustment strategy) is well articulated. However, the manuscript would benefit from a more detailed explanation of the underlying mathematical models, particularly how the modal prior mutual-support mechanism quantitatively enhances feature selection from multimodal data. Consider elaborating on any preprocessing steps, data normalization techniques, or specific transformations applied to the OCT and OCTA data before input into your network, as this will help in replicating your results and understanding the model’s input sensitivity.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The recommendation to accept this paper is primarily driven by the following major factors:
- Innovative Integration of Multimodal Imaging Data: The introduction of MPMNet, which effectively combines OCT and OCTA images for AMD classification, represents a significant technical advancement. The method not only uses these modalities in tandem but also introduces a novel way to prioritize their influence dynamically based on their pathological significance. This approach is both innovative and apt for tackling the nuanced challenges of AMD classification.
- Robust Evaluation and Performance: The paper provides comprehensive experimental results demonstrating that MPMNet outperforms existing state-of-the-art methods. The use of both private and public datasets for evaluation adds to the credibility of the results. The detailed performance metrics and comparative analysis underscore the efficacy of the proposed model.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
[1] The paper proposes a novel Modal Prior Mutual-support Network (MPMNet) for the classification of different types of AMD using both OCT sequences and OCTA images. [2] The paper proposes a new modal prior mutual-support mechanism to direct the network’s attention to the pathological information that each modality is sensitive to. [3] A dynamic adjustment strategy guided by mutual information is designed to reduce the influence of low-quality images on the network.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
[1] This article is the first to use both OCT sequences and OCTA images to diagnose different types of AMD. [2] The structure of the article is clear and the logic is smooth. [3] Because different modalities reveal different disease presentations, the modal prior self-support mechanism proposed in this paper can enhance the feature representation of specific modalities. [4] This paper proposes a novel feature dynamic adjustment strategy (FDAS) that uses mutual information to evaluate the importance of images.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
[1] The experimental results of the proposed method are strong on the private dataset, but the improvement over MMC-AMD on the public dataset is not obvious. [2] The formulas require further explanation; for example, what do the Ws and Wv subscripts in Section 2.1 mean? [3] In Fig. 1, the caption states that the top row is OCT and the second row is OCTA, which is inconsistent with what is shown in the figure.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
Hopefully, the author will publish the source code after accepting the submission.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
[1] Fig. 1 needs to be further modified so that it corresponds to the description. [2] The results could further explain why classification performance on low-quality images is improved. [3] The formulas require further explanation, such as what the Ws and Wv subscripts in Section 2.1 mean.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The article is well organized, the method is well motivated, and the results are presented clearly. Some of the language could be further polished.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
We thank the AC and the reviewers for their efforts and constructive comments. We will incorporate the revisions in the camera-ready and journal versions of the paper and make further improvements in our future work.
Meta-Review
Meta-review not available, early accepted paper.