Abstract

Cerebrovascular diseases can occur suddenly and unpredictably, making it crucial to identify high-risk individuals through screening to prevent or mitigate their impact. However, digital subtraction angiography (DSA), the current gold standard, is difficult to apply in large-scale screening or primary healthcare settings due to its high cost, complex operation, and invasive nature. In contrast, Color Fundus Photography (CFP) can reflect related cerebrovascular diseases through retinal microvascular changes while remaining low-cost and risk-free. Nevertheless, current CFP-based methods for predicting cerebrovascular disease mostly focus on pixel-level image features only, ignoring the correlation between arteriovenous morphology, optic disc structure, and disease risk. To address this gap, we propose CVGB-Net, a method that integrates a cross-view encoder to fuse high-level semantic features, primarily capturing vascular abnormalities in the retinal vasculature caused by cerebrovascular diseases, with low-level pixel features extracted by RetFound, a foundation model designed for ocular tasks. The fused cross-view features for each sample are then processed by a graph-based discriminator, which uses a graph adapter to link disease-related features across the entire dataset. This further enhances the model’s ability to differentiate between diseased and healthy cases. To validate our approach, we present a tailored CFP-Cerebrovascular diseases Screening (CCS) dataset with 2,338 expert-diagnosed cases. Experimental results demonstrate the effectiveness of our approach, highlighting its potential for cost-effective, large-scale cerebrovascular disease screening. https://github.com/glodxy/CVGB_net

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/1503_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/glodxy/CVGB_net

Link to the Dataset(s)

N/A

BibTex

@InProceedings{TiaCon_Cerebrovascular_MICCAI2025,
        author = { Tian, Congyu and Zou, Shihao and Liao, Xiangyun and Chen, Cheng and Ou, Chubin and Lv, Jianping and Wang, Shanshan and Si, Weixin},
        title = { { Cerebrovascular Diseases Screening from Color Fundus Photography via Cross-View Fusion and Graph-Based Discrimination } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15971},
        month = {September},
        pages = {195--204}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes a pipeline that uses CFP images to predict cerebrovascular disease, with CVGB-Net and two newly proposed modules. The first module is the cross-view encoder, which utilizes the pretrained AV/OD segmentors and RETFound to obtain better features, processing them with a Mamba structure and aligning them via a contrastive loss. The second module is a graph adapter structure borrowed from a NeurIPS 24’ paper ([8] in the paper references). The code is not publicly available, and no related information was mentioned. The authors propose a new dataset called CCS, but no release information is provided.
    The authors conducted experiments on the dataset and performed moderate ablation studies to show the effectiveness of the proposed method.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The design of the two modules looks reasonable and interesting, although the graph adapter part has some ambiguity that prevents a complete understanding. The results look good, especially the recall. The overall writing structure is satisfactory.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    This paper has some major and some minor weaknesses. Major weaknesses:

    1. Reproducibility: The paper proposes a new dataset but gives no information about when it will be released. The code, with two complex structures, is also not available, with no information mentioned. If the authors cannot properly address these problems, this would become a grave error.
    2. The reported results are not paired with details of model selection and variance, which undermines their reliability and makes it impossible to rule out cherry-picking. This is moderate compared to the limited code/data availability.
    3. Some details in the method description: I don’t quite understand the need for the category node in the graph adapter. I also don’t understand the detail: is it a single node initialized with an all-ones vector? Or are there two nodes, one each for diseased and healthy? Since you already have the disease - healthy class-specific graph, I don’t quite understand the need for this reference node.

    Also, does the whole graph consist of 2 subgraphs with 6 nodes in total? Can I interpret the graph in Fig. 1 as exactly the final graph structure? This part is really vague. I would need the authors to disclose more about the design here.

    4. Ablation studies do not fully uncover the method's effectiveness: The authors only compared RETFound with its original implementation. It would be important (not requesting the experiment due to review policy; it is optional) to ablate their method by removing the AV/OD part and using only the RETFound feature. The authors could also consider an ablation in which the RETFound feature is removed from their method.

    Minor weakness:

    1. Sec. 3.2: Implementation Detail -> Implementation Details
    2. In Fig. 2, classify head -> classification head.
    3. Do the authors want CVGB-Net as the main name? If so, they can consider replacing Ours with the name.
    4. Cost-effective is a reasonable term for this task, but it might not be appropriate in the title; it may wrongly lead readers to think the method itself is cost-effective, when in fact there is no related analysis.
    5. In the graph adapter part of Fig. 1, the caption for the nodes is missing, which makes it hard to understand. The lock icon is also a bit tricky; I thought the method should include multi-stage training, and the authors should think about how to better convey this information.
    6. In Fig. 2, why does a continuous metric use CE loss? This is problematic, or are the authors actually predicting disease labels? This part needs further clarification.
    7. It is unclear which head's results were used in Fig. 5. This needs clarification.
    8. Fig. 4 largely overlaps with Table 1.
    9. Why not report specificity or precision in Table 1? The space should be enough.
    10. I feel the pixel-level attention in Fig. 5 does not uncover very useful information.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    I think if the authors mentioned the CCS dataset as one of the contribution, they should provide enough information to demonstrate when the dataset is publicly available.

    Also, the code, including implementation of two modules, the multi-stage training code, should be released if there is no reasonable difficulties. Otherwise, given the fact that the proposed module is complicated, I will doubt the wider usage of the proposed modules.

    These two parts should be appropriately addressed by the authors, and I will be happy to adjust my score based on that.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (3) Weak Reject — could be rejected, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The design of the two proposed modules looks novel and reasonable, although I see no code publicly available.

    The dataset looks valuable, but nothing is mentioned about making it public.

    The implementation details omit the multi-stage training; the OD and AV segmentor implementations are not disclosed.

    The results look improved, but variance quantification is missing. Also, some of the ablation studies are missing; in particular, since RETFound is used as part of the method, this component should be ablated in more detail.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    I still have some problems with the details, such as the dimension of the f_ref, and the scale of the GCN used.

    However, if the authors claim that the code and model will be released, and also the test set, then, together with their explanations on the method, my concern on methodology implementation can be addressed, by checking their implementations.

    I agree with the other two reviewers that this is a work with good clinical applicability. I like the way they disclose the release of the dataset. The problem is usually that we cannot control the release of such an interesting dataset. I hope the authors will commit to their claims.



Review #2

  • Please describe the contribution of the paper

    The authors target the early prediction of cerebrovascular diseases using Color Fundus Photography (CFP). They propose a cross-view encoder that is able to combine medically informed features with pixel-level features. The medically informed path includes two segmentation systems, for the optic disc and the arteriovenous vasculature. The pixel-level features are derived from the foundation model RetFound. A Mamba-based final architecture fuses the results of these pathways. A graph adapter is used to improve the differentiation of control participants and participants who are positive for cerebrovascular issues. The authors introduce their own custom dataset for this work that they name the CFP-Cerebrovascular diseases Screening (CCS) dataset.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The authors present a novel framework to predict cerebrovascular diseases from Color Fundus Photography. The paper proposes a unique system to ensure that both pixel-based information (learned from data) and disease-based information (hand-selected) are captured. They combine this information in a cross-view encoder with a bidirectional Mamba that fuses the features. It includes two different segmentors of clinical features that each account for distinct possible cerebrovascular diseases. The overall system seems very intentional for the goal task.

    2. The graph adapter is a unique system that helps classify individuals as healthy or as having a cerebrovascular disease. The graph is composed of sub-graphs that present learnable embeddings for each class alongside features from the data. The graph convolutional network gives a much stronger representation of the final data than would be possible with a simple linear system.

    3. They collect a custom dataset specifically for this classification task. This could possibly further benefit the field if the authors someday wish to make the dataset public (not mentioned in the paper).

    4. The results on their dataset are very favorable using their method compared to five other public methods. The authors also include ablation results without the cross-view encoder or without the graph adapter. Four different evaluation metrics are provided.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. The information on the dataset is slightly lacking. We know that the data came from the same institution, the original data size is 2576 x 1934, the diseases that are classified as diseased, and the ratio of diseased and healthy. However, we are still missing a lot of information such as average age, gender ratio, and comorbidities.

    2. The data imbalance is concerning with 2205 healthy and 133 diseased. The testing data is only 26 diseased. Will the algorithm still work on different diseased data? Would the results be different with an even split of healthy / disease (133 each)?

    3. The results are only on their own dataset. The authors should consider testing on a different dataset as well in their future extensions to their work, particularly given that they only have 26 testing data. Maybe UKBiobank since it has eye and heart data. This would help show feasibility on a replicable problem.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    Explanation for my reproducibility comment: The implementation details are good enough. The algorithm is described well enough to understand the concept of what is done, but it is not described enough where someone could code it. The dataset description is extremely vague for a paper that states that one of its strengths is introducing a new dataset.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (5) Accept — should be accepted, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I think that the methodology is very thoughtfully crafted for the problem of diagnosing cerebrovascular disease from Color Fundus Photography. They fuse many different parts that each tackle important sub-goals of the task, coming together to create a unique full pipeline. My only real issue with the paper is the lack of reproducibility details, particularly for the dataset.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    I already believed that the paper should be accepted prior to the rebuttal. I think the paper will be interesting to have at the conference, and the authors answered our questions and suggestions well. I just hope that the code and dataset will indeed be made public.



Review #3

  • Please describe the contribution of the paper

    The paper makes several key contributions to cerebrovascular disease screening using Color Fundus Photography (CFP). One of the most significant contributions is the introduction of the CCS Dataset, a comprehensive collection of 2,338 expert-diagnosed cases aimed at predicting cerebrovascular diseases from CFP images. This dataset is an invaluable resource for training and evaluating models, offering diverse examples that can help improve the generalization and robustness of screening tools in real-world clinical settings. Additionally, the paper proposes CVGB-Net, a novel framework that integrates a Cross-View Encoder (CVE) and a Graph Adapter (GA). The CVE is designed to capture low-level pixel features from raw CFP images and high-level semantic features related to anatomical structures, such as arteriovenous (AV) morphology and optic disc (OD) characteristics. This dual feature extraction strategy enhances the model’s ability to discriminate between healthy and diseased cases by incorporating detailed pixel-level information and the broader context of vascular abnormalities. The Graph Adapter (GA) further refines the model by establishing relationships between classification features, which improves its capacity to link relevant clinical category features across the dataset. The framework emphasizes the importance of morphological relationships in the retinal vasculature, which are critical for assessing cerebrovascular risk. Including these relationships enables a more holistic risk assessment beyond the pixel-based analysis typically seen in traditional approaches. This added depth allows for more accurate and interpretable predictions, particularly in the context of early detection of cerebrovascular diseases. Experimental results show that this approach consistently outperforms state-of-the-art methods, further validating its efficacy and highlighting its potential for real-world application. 
The model’s performance on the CCS Dataset demonstrates its capability to detect cerebrovascular diseases with greater precision than previous techniques, suggesting that it can significantly improve diagnostic outcomes. Notably, the framework’s design emphasizes cost-effectiveness, making it suitable for large-scale screening in primary healthcare settings where resources may be limited. Its reliance on CFP, a low-cost and widely available imaging modality, ensures that the method can be implemented at scale, offering an accessible solution for early disease detection and improving healthcare delivery.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Novel Dataset: The introduction of the CCS Dataset, comprising 2,338 expert-diagnosed cases, provides a comprehensive and diverse resource for studying cerebrovascular diseases through CFP images. This dataset enhances research opportunities by offering a substantial amount of data for model training and evaluation.

    2. Innovative Methodology: The development of CVGB-Net, which integrates a Cross-View Encoder (CVE) and a Graph Adapter (GA), represents a novel and effective approach. It merges low-level pixel features with high-level semantic features, significantly improving disease classification accuracy by capturing both fine-grained details and broader contextual information.

    3. Focus on Morphological Relationships: The paper highlights the importance of morphological relationships in retinal vasculature, addressing a significant gap in existing methods. This focus enables a deeper understanding of the key disease risk factors, such as vascular abnormalities, which traditional methods may overlook.

    4. Cost-Effectiveness: The use of Color Fundus Photography (CFP) as a low-cost and risk-free imaging modality makes the proposed screening method highly accessible. This design is particularly valuable for large-scale applications, ensuring the method can be integrated into primary healthcare settings with limited resources.

    5. Performance Improvement: Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art techniques across multiple evaluation metrics, showcasing its enhanced ability to detect cerebrovascular diseases with greater accuracy and reliability.

    6. Comprehensive Analysis: The paper includes a thorough interpretability analysis, shedding light on how the model utilizes various features from CFP images to make predictions. This enhances transparency and helps validate the model’s decision-making process, making it easier for clinicians to trust and adopt the technology.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    1. Imbalance in Disease Cases: The CCS Dataset contains a low number of diseased cases (133) compared to healthy cases (2,205), which may lead to challenges in training the model effectively and could impact the generalizability of the results.

    2. Limited Comparison with Other Modalities: While the paper emphasizes the advantages of CFP, it may not sufficiently compare its method with performance outcomes from other imaging modalities like Digital Subtraction Angiography (DSA) or Magnetic Resonance Angiography (MRA), limiting the context of its effectiveness.

    3. Dependence on Image Quality: The effectiveness of the proposed method is inherently tied to the quality of the input CFP images. Image quality variation due to different equipment or patient conditions could affect the model’s performance.

    4. Specificity and Sensitivity Metrics: Although the paper reports performance improvements, it might not fully explore or explain the model’s specificity and sensitivity in various clinical scenarios, which are crucial for understanding its practical application.

    5. Lack of External Validation: The experimental results presented in the paper may benefit from external validation using independent datasets or real-world clinical data to assess generalizability and robustness.

    6. Complexity of Model: Introducing multiple components (CVE and GA) may add complexity to the model, creating challenges in training, debugging, and interpretation, making it less feasible in resource-limited settings.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    The paper presents a valuable contribution to cerebrovascular disease screening, offering an innovative and comprehensive approach to analyzing Color Fundus Photography (CFP) images. The introduction of the CCS Dataset and the novel CVGB-Net methodology are both commendable and show strong potential for large-scale, cost-effective screening in primary healthcare settings. Combining low-level pixel features and high-level semantic features, mainly through the Cross-View Encoder and Graph Adapter, is a novel approach that enhances disease classification accuracy and addresses key gaps in previous methods. However, one area where the paper could be improved is the accessibility of the dataset and code. While the paper provides a detailed description of the methodology and experimental setup, providing direct access to the dataset and source code would strengthen the research’s reproducibility and facilitate its adoption by other researchers in the field. Overall, the paper is well-structured and clear and demonstrates a strong understanding of the technical and clinical aspects of cerebrovascular disease screening. Keep up the excellent work, and I look forward to seeing the continued development of this research.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (5) Accept — should be accepted, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Justification:
    The paper provides a substantial contribution to the field of cerebrovascular disease screening using Color Fundus Photography (CFP). The introduction of the CCS Dataset, the novel CVGB-Net framework, and the effective integration of both low-level pixel features and high-level semantic features are significant advancements. These elements demonstrate a deep understanding of the clinical context and provide a practical solution that could be scaled for large-scale, cost-effective screening in primary healthcare settings.

    Key strengths include:

    Innovative Approach: The methodology combines a Cross-View Encoder and Graph Adapter, effectively addressing a gap in previous work by emphasizing the importance of retinal vascular morphology in detecting cerebrovascular diseases. Clinical Relevance: The proposed solution is highly relevant for real-world applications, considering the cost-effectiveness and accessibility of Color Fundus Photography in primary healthcare settings. Performance: The experimental results show that the proposed method outperforms existing techniques, adding credibility to the approach. Dataset: The CCS Dataset, containing 2,338 expert-diagnosed cases, provides a valuable resource for the research community and enhances credibility.

    The primary factor preventing a perfect score is the lack of immediate access to the dataset and code. Although the authors have indicated plans to release these upon acceptance, providing access sooner would have further strengthened the reproducibility and impact of the research. Nevertheless, the paper’s overall quality and potential for real-world application in cerebrovascular disease screening make it deserving of acceptance.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A




Author Feedback

Thank you for the constructive feedback and positive remarks. We appreciate the reviewers’ recognition of our work as “novel” (R2), “valuable” (R3), and “interesting” (R4). We also note the acknowledgment that our constructed datasets “benefit the field” (R2 & R3) and that our results are “favorable and good” (R2, R3 & R4). Below, we address the specific concerns raised.

Common Questions: Code: We will release our code for re-implementation upon acceptance. Dataset: We will release the test set for reproducibility. We are continuously collecting data and plan to share the expanded dataset and provide a deeper analysis in the future extension work under necessary approvals and de-identification.

(R2 W1) Thank you for the suggestion. In our dataset, the average age is 50.4 (healthy) and 68 (diseased); the male/female ratio is 1017/1188 (healthy) and 57/76 (diseased). Common comorbidities include hypertension and diabetes. (R2 W2) Our goal is disease screening, where diseased cases are rare in large populations. Thus, our dataset typically reflects this real-world imbalance to better assess our method’s ability. On different diseased data, we performed five-fold cross-validation, achieving an average sensitivity of 0.6735 (STD 0.0592). On an even split of dataset, sensitivity was 0.6551. (R2 W3) Please refer to the common questions.

(R3 W1) Please refer to (R2 W2). (R3 W2) As noted in the Introduction, while DSA and MRA offer higher accuracy than CFP, they are impractical for large-scale, cost-effective screening. Our work focuses on utilizing CFP for this purpose, accepting some performance trade-offs for broader applicability. (R3 W3,4&5) Our dataset, sourced from a single hospital to ensure high image quality, demonstrates the preliminary feasibility of using CFP images for large-scale, cost-effective cerebrovascular screening. A detailed confusion matrix comparison is provided in Fig. 4, where specificity and sensitivity are reported. We are continuously collecting data from additional hospitals and will provide broader validation across clinics and image qualities in the extension journal version. (R3 W6) We will release our code to support re-implementation.

(R4 W1) Please refer to the common questions. (R4 W2) We followed standard practice by selecting the best model based on validation performance and evaluating on a separate test set. As stated in Sec. 3.1, the dataset was split into training, validation, and testing sets at a 3:1:1 ratio. We performed five-fold validation and obtained average sensitivity of 0.6735 (STD 0.0592). (R4 W3) There are two learnable category nodes (embeddings), initialized using the average features of all healthy and diseased samples, respectively. They form a subgraph alongside another subgraph containing two groups of healthy and diseased sample nodes. Each category node is concatenated with its corresponding group of sample nodes (i.e., healthy/diseased category node with healthy/diseased sample nodes). A GCN is then applied to the graph to capture category-specific contextual information for enhanced discrimination. (R4 W4) When using only the RETFound feature, sensitivity drops from 0.6389 to 0.5833; when removing it, sensitivity drops to 0.6250. (R4 Minor) We will remove “cost-effective” from the title as suggested. The multi-stage training process is detailed step by step in Sec. 2 and will be further highlighted. We will clarify the graph adapter nodes and the lock icon in Fig. 1’s caption. In Fig. 2, “Prob” denotes predicted probabilities for healthy and diseased classes used in the CE loss. Fig. 5 shows activation maps from the Semantic Encoder and RetFound. Fig. 4 presents a comprehensive confusion matrix comparison, while Table 1 highlights key quantitative results to showcase our model’s ability in the screening task. Removing the pixel feature reduces sensitivity from 0.6389 to 0.6250. All points will be addressed in the final version.
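The graph adapter structure described in the (R4 W3) answer above can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the feature dimension, node counts, random weights, and the single symmetric-normalized GCN layer are all placeholders; only the overall structure (two category nodes initialized from class-mean features, each connected to its own class's sample nodes, followed by a GCN) follows the rebuttal's description.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # feature dimension (placeholder)
healthy = rng.normal(size=(4, d))        # healthy sample features (stand-ins)
diseased = rng.normal(size=(4, d))       # diseased sample features (stand-ins)

# Two learnable category nodes, initialized as the class-mean features,
# as described in the rebuttal; they occupy indices 0 and 1.
cat_healthy = healthy.mean(axis=0)
cat_diseased = diseased.mean(axis=0)
X = np.vstack([cat_healthy, cat_diseased, healthy, diseased])  # (10, d)

# Adjacency: each category node links only to its own class's sample
# nodes, forming two class-specific subgraphs.
n = X.shape[0]
A = np.zeros((n, n))
for i in range(2, 6):       # healthy samples <-> healthy category node
    A[0, i] = A[i, 0] = 1.0
for i in range(6, 10):      # diseased samples <-> diseased category node
    A[1, i] = A[i, 1] = 1.0

# One symmetric-normalized GCN layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)
A_hat = A + np.eye(n)                    # add self-loops
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
W = rng.normal(size=(d, d)) * 0.1        # learnable weight (random here)
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
print(H.shape)  # (10, 8): refined embeddings for all nodes
```

In this sketch, the category nodes aggregate category-specific context from their sample nodes through the GCN, which is the mechanism the rebuttal credits with enhancing discrimination between the healthy and diseased classes.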




Meta-Review

Meta-review #1

  • Your recommendation

    Invite for Rebuttal

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A


