Abstract

Cardiac Magnetic Resonance (CMR) imaging is a vital non-invasive tool for diagnosing heart diseases and evaluating cardiac health. However, the limited availability of large-scale, high-quality CMR datasets poses a major challenge to the effective application of artificial intelligence (AI) in this domain. Even the available unlabeled data, and the range of cardiac health statuses it covers, fall short of the needs of model pretraining, which hinders the performance of AI models on downstream tasks. In this study, we present Cardiac Phenotype-Guided CMR Generation (CPGG), a novel approach for generating diverse CMR data that covers a wide spectrum of cardiac health statuses. The CPGG framework consists of two stages: in the first stage, a generative model is trained using cardiac phenotypes derived from CMR data; in the second stage, a masked autoregressive diffusion model, conditioned on these phenotypes, generates high-fidelity CMR cine sequences that capture both the structural and functional features of the heart in a fine-grained manner. We synthesized a massive amount of CMR data to expand the pretraining set. Experimental results show that CPGG generates high-quality synthetic CMR data, significantly improving performance on various downstream tasks, including diagnosis and cardiac phenotype prediction. These gains are demonstrated across both public and private datasets, highlighting the effectiveness of our approach. Code is available at https://github.com/Markaeov/CPGG.

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/1849_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/Markaeov/CPGG

Link to the Dataset(s)

N/A

BibTex

@InProceedings{LiZiy_PhenotypeGuided_MICCAI2025,
        author = { Li, Ziyu and Hu, Yujian and Ding, Zhengyao and Mao, Yiheng and Li, Haitao and Yi, Fan and Zhang, Hongkun and Huang, Zhengxing},
        title = { { Phenotype-Guided Generative Model for High-Fidelity Cardiac MRI Synthesis: Advancing Pretraining and Clinical Applications } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15961},
        month = {September},
        pages = {483 -- 493}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper addresses the challenge of limited cardiac MRI (CMR) data availability for AI development by introducing Cardiac Phenotype-Guided CMR Generation (CPGG). The researchers present a two-stage approach: first generating cardiac phenotypes (clinical measurements like ejection fraction and ventricle volumes), then using these as conditions for a masked autoregressive diffusion model to synthesize realistic CMR sequences. Unlike previous methods that use simple class labels or require first-frame images, this approach leverages clinically relevant measurements for fine-grained control of the generation process. The method produces high-quality images while requiring significantly less computation than traditional techniques, enabling large-scale data synthesis. Experiments on public and private datasets demonstrate that using the synthetic data for model pretraining improves performance on important clinical tasks including disease classification and cardiac phenotype prediction, validating the approach’s practical utility.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper makes notable methodological contributions through its innovative use of cardiac phenotypes as conditioning variables for CMR image generation. Particularly commendable is the implementation of an autoregressive scheme that significantly accelerates CMR sequence generation compared to conventional 3D diffusion methods. These advancements effectively address critical challenges in medical imaging data acquisition—namely, the prohibitive costs, time-intensive nature, and privacy concerns associated with collecting large-scale CMR sequence data. Although the problem formulation could be more explicitly defined and the evaluation metrics could more directly demonstrate the discriminative advantages of the approach, the technical innovations represent valuable progress in medical image synthesis.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    [Introduction]

    • “However, challenges related to data acquisition and privacy concerns have resulted in a scarcity of such datasets, thereby limiting the potential of AI in CMR image analysis.” There appears to be a methodological contradiction between the authors’ premise of data scarcity and their utilization of 32,444 CMR sequences from UK Biobank for model training. This inconsistency raises questions about the clinical necessity driving this research. The evaluation metrics selected do not sufficiently demonstrate the discriminative value of synthetic CMR images in clinical or research applications. A more targeted articulation of the clinical problem and corresponding methodological development would strengthen the paper’s contribution.

    • “Cardiac phenotypes encompasses a set of clinically relevant measurements extracted from CMR imaging, including key metrics such as LVEF and Left Ventricular End-Diastolic Volume (LVEDV), which together provide a comprehensive characterization of the heart’s functional and structural properties.” The paper would benefit from specifying which 82 cardiac phenotypes were included in this study, either through direct enumeration or citation of a comprehensive source. While LVEF and LVEDV effectively quantify systolic function relevant to heart failure assessment, these metrics alone provide limited information about cardiac anatomical structure. Additional clarification regarding the complete set of phenotypes would enhance reproducibility and clinical relevance.

    [Method - Figure 1]

    • “Fig. 1: Overview of our model. A and C describe a two-stage generation process. B showed the details of the generation of each token.” The figure would be significantly improved by explicitly identifying variables xi and zi to facilitate comprehension of the Masked Autoregressive Model mechanism. There appears to be an inconsistency between the figure and the textual description in Section 2.2, as the figure does not show masking of tokens before the Transformer encoder. Harmonizing the visual representation with the textual description would enhance clarity, particularly regarding the prediction process p(xi|zi).

    [Method - Cardiac Phenotypes Generation]

    • “During inference, a latent vector is sampled from a standard normal distribution and then decoded to generate cardiac phenotypes, effectively creating new phenotypes data.” The methodology would be strengthened by explicitly describing how the phenotype distribution was determined and whether supervised training was necessary for the VAE to learn this distribution. This information is critical for understanding the foundation of the generative process.

    [Method - Masked Autoregressive Model]

    • “These tokens are randomly masked, during training, a dynamic masking ratio is applied, as used in [11][12].” This statement indicates that tokens following the 3D-VAE encoder (presumably before the Transformer encoder) undergo random masking, which appears inconsistent with the model visualization in Figure 1.

    • “Let xi represents a ground-truth token,” The definition of “ground-truth token” requires clarification. Based on context, these presumably represent the unmasked non-overlapping tokens derived from the original CMR data, consistent with the autoencoding approach described.

    • “x_i^t represents the noisy token at time step t, defined as x_i^t = √(ᾱ_t) · x_i + √(1 − ᾱ_t) · ε” The notation x_i^t and its relationship to x_i should be explicitly located within the model overview in Figure 1. Additionally, the variable ε requires definition for reader comprehension.

    • “The process begins with an empty CMR latent representation, where all tokens are masked. The iterative decoding proceeds over K steps, during which the model predicts the remaining masked tokens at each iteration, and the predicted tokens are randomly retained, masking ratio adhering to a cosine schedule.” The role of cardiac phenotypes in the decoding process remains inadequately explained. Without a clear description of how the phenotype distribution was determined, the conditioning mechanism for CMR sequence generation is difficult to evaluate.
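The iterative decoding quoted in this bullet (all tokens start masked; predicted tokens are retained at random so that the masked fraction follows a cosine schedule over K steps) can be sketched in a few lines. This is an illustrative sketch only: the function names are hypothetical, and a real model's token predictions are elided.

```python
import math
import random

def cosine_mask_ratio(step, total_steps):
    """Fraction of tokens still masked after `step` of `total_steps` decoding steps."""
    return math.cos(math.pi / 2 * step / total_steps)

def iterative_decode(num_tokens=16, K=4, seed=0):
    """Sketch of masked iterative decoding: start from a fully masked token
    sequence and reveal (retain) randomly chosen predicted tokens so that
    the masked fraction follows a cosine schedule over K steps."""
    rng = random.Random(seed)
    masked = set(range(num_tokens))  # all tokens masked at the start
    for step in range(1, K + 1):
        # number of tokens that should remain masked after this step
        n_keep_masked = round(cosine_mask_ratio(step, K) * num_tokens)
        # a real model would predict every masked position here; we only
        # decide which predictions to retain (unmask) at random
        n_reveal = len(masked) - n_keep_masked
        for idx in rng.sample(sorted(masked), n_reveal):
            masked.discard(idx)
    return masked  # empty set once fully decoded

print(len(iterative_decode()))  # 0 after all K steps have run
```

Because cos(0) = 1 and cos(π/2) = 0, the schedule guarantees the sequence is fully masked before the first step and fully revealed after the last one, with most tokens decoded in the final iterations.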

    [Experiments - CMR Generation Quality]

    • “our model achieves better performance on both Fréchet Inception Distance[8] (FID) and Fréchet Video Distance[17] (FVD) metrics.” The manuscript would benefit from a brief explanation of how FID and FVD quantify image and video quality, facilitating better interpretation of the results. Additionally, standard comparative metrics such as PSNR and SSIM should be calculated using available CMR sequences to provide a more comprehensive evaluation.

    • “Table 1: Quantitatively evaluation of our CPGG model.” The abbreviation “vid.” requires clarification—presumably referring to a complete CMR sequence. Explicit definition would enhance readability.

    • “Fig. 2: Examples of generated CMR and their corresponding cardiac phenpotypes using the CPGG framework, ordered by LVEDV from small to large.” This figure should incorporate qualitative comparisons visualizing generated results from VideoGPT, ModelScopeT2V, and CPGG alongside corresponding ground-truth CMR sequences. Such comparative visualization would substantiate the claimed improvements.
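For context on the FID/FVD explanation requested above: both metrics fit a Gaussian to deep features of real and generated samples (Inception features of frames for FID, I3D video features for FVD) and report the Fréchet distance between the two Gaussians. A minimal sketch of the univariate case, where the general trace term collapses to a squared difference of standard deviations; the function name is ours, not from the paper:

```python
import statistics

def frechet_1d(xs, ys):
    """Fréchet distance between two 1-D samples modeled as Gaussians.
    The general formula ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
    reduces in 1-D to (mu1 - mu2)^2 + (sigma1 - sigma2)^2."""
    mu1, mu2 = statistics.fmean(xs), statistics.fmean(ys)
    s1, s2 = statistics.pstdev(xs), statistics.pstdev(ys)
    return (mu1 - mu2) ** 2 + (s1 - s2) ** 2

same = [0.0, 1.0, 2.0, 3.0]
print(frechet_1d(same, same))                     # 0.0: identical distributions
print(frechet_1d(same, [x + 2.0 for x in same]))  # 4.0: means differ by 2, spreads equal
```

Lower is better: a distance of zero means the generated feature distribution matches the real one in mean and spread, which is why the metrics compare distributions rather than paired images (and why PSNR/SSIM, which need a ground-truth counterpart per image, do not directly apply to unconditional generation).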

    [Results - Downstream Tasks]

    • “When the amount of synthetic data reaches five times that of the real data, a significant performance improvement is achieved compared to pretraining with real data alone in all datasets.” A sensitivity analysis examining the factors contributing to classification improvement with increased synthetic data would enhance interpretability of Tables 2 and 3. Such analysis would provide insight into the mechanisms underlying the observed performance gains.

    • “This proves that the CMR data generated by our method has high fidelity and strictly adheres to fine-grained conditions such as cardiac phenotypes.” The reported average R² across 82 phenotypes (Figure 3) appears substantially lower than the specific phenotype R² values reported in Table 2. This discrepancy warrants a more comprehensive presentation of results showing variance or distribution of differences between predicted and ground-truth phenotypes. Further interpretation of factors contributing to prediction accuracy improvement would strengthen the findings.

    • “Table 2: Performance of two downstream tasks on public datasets. The upper part of the table is the disease classification task, and the lower part is the cardiac phenotype regression task. * indicates that synthetic CMR data is used not only for data augmentation in the pretraining stage but also as labeled data for data augmentation during finetuning.” The comparative analysis would be strengthened by including generation results from VideoGPT and ModelScopeT2V. Additionally, the manuscript lacks clarity regarding the sampling mechanism from cardiac phenotypes during iterative decoding for CMR sequence generation, raising concerns about potential data leakage if all phenotypes were utilized for sampling. The finetuning strategy mentioned here was not previously introduced and requires detailed specification.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (3) Weak Reject — could be rejected, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper presents an innovative approach to CMR image generation but falls short in several critical areas warranting weak rejection. First, the authors fail to establish a convincing clinical necessity for their method, presenting a contradiction between their premise of data scarcity and their utilization of over 32,000 CMR sequences for model training. Second, the methodology suffers from significant clarity issues, particularly regarding the phenotype distribution determination, the masking mechanism in the autoregressive model (with inconsistencies between text and figures), and the conditioning process during CMR generation. Third, the evaluation lacks clinically meaningful metrics and comparative visualizations that would demonstrate the discriminative advantages of the CPGG model in practical scenarios. While the technical approach shows promise, these fundamental shortcomings undermine the paper’s potential contribution to the field unless substantially addressed in a rebuttal.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    Final Recommendation: Accept

    Based on the authors’ comprehensive rebuttal, I am changing my recommendation to Accept. The authors have adequately addressed most of my major methodological concerns, particularly clarifying the clinical necessity (data scarcity for effective pretraining), resolving definitional ambiguities in their autoregressive model, and explaining the phenotype conditioning mechanism. However, I maintain concern about the clinical relevance and validation framework—while the technical approach is sound, the paper would benefit from a more structured evaluation plan to assess how faithfully the generated CMR data mimics real-world clinical scenarios and whether the synthetic data preserves clinically meaningful patterns that would be relevant for actual diagnostic applications. Despite this limitation, the methodological contributions and demonstrated improvements in downstream tasks represent valuable progress in medical image synthesis that merits publication.



Review #2

  • Please describe the contribution of the paper

    The authors present a methodology to generate cardiac MRI constrained to a selected phenotype. They build a variational autoencoder to sample phenotypes which are then used in a masked autoregressive generative model to constrain image generation.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The strengths of the paper are that it is based on the UK Biobank and that it uses conditional diffusion synthesis for the generation of the images.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    Phenotypes are not properly described, and it is not clear why an autoencoder is needed for sample generation. There is no validation of the pipeline, nor a check of whether the generated images really represent the phenotype and thus whether the constraints are respected. Image segmentation and classical analysis should be performed on the generated images to show agreement with the phenotype definition. I could not understand what the classifiers were based on, what type of architecture was used, or the exact training procedures. Additionally, it is not clear how performance differs between conventional image augmentation and this approach (with the same number of training cases in both).

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Some aspects are not fully clear, but the approach is interesting and could be further developed

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #3

  • Please describe the contribution of the paper

    This work introduces a model for conditional cMRI video generation aiming to address the problem of limited data availability. A novel architecture is introduced, bringing innovations in designing a 3D VAE, conditioning generation on 82 cMRI phenotypes, and a masked autoregressive design. This type of architecture allows for low inference cost and doesn’t require vector quantization (discrete codebooks). As a result, this approach outperforms VideoGPT and ModelScopeT2V on the cMRI generation task, and the experiments show that this approach can be used as an augmentation technique to improve the performance of various downstream tasks.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • A novel method is introduced, particularly the mechanism to sample 82 cardiac phenotypes from the generative model and then use them to condition a video generation process.
    • Strong evaluation showing the benefit of using a data generation engine as an augmentation technique
    • Well-written paper and technically sound
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    • Code not available. The provided link leads to a repository where the folder structure is visible, but individual files can’t be accessed (a “the requested file is not found” error)

    • Previous work on CMR generation (GANcMRI [1] Table 1) reports FVD of 283.53, whereas you report FVD of 711.17. While your approach remains significant—particularly due to its ability to condition on cardiac phenotypes—it would be good to comment on this prior work and clarify the trade-offs in video quality.

    [1] Vukadinovic, M., Kwan, A.C., Li, D. Ouyang, D. (2023). GANcMRI: Cardiac magnetic resonance video generation and physiologic guidance using latent space prompting. Proceedings of Machine Learning Research 225:594-606 Available from https://proceedings.mlr.press/v225/vukadinovic23a.html

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    Additional concerns I have that would be helpful if addressed: 1) Link to the code is provided but code itself is not accessible, please fix the access to the code repository.

    2) It’s unusual to observe an LVEF MAE of 3.327 with an R² of 0.525. For context, [2] reported an MAE of 6.4 and R² of 0.53 on cMRI data, while [3] reported an MAE of 4.1 and R² of 0.81 on ECHO data. Could you provide a scatterplot comparing predicted and ground truth values to help interpret this result?

    3) Can you clarify whether during training the “phenotype distribution” is just the 82 phenotypes assigned to the real image, rather than a sample from the generative model? i.e., do we sample the phenotype distribution only during inference? Also, does the sampled distribution from the generative model stay constant during iterative inference?

    4) Minor typos to fix: “Disease Classicifation” in Sec. 3.3 Performance of Downstream Tasks after Data Mixing; “we introduce a maskd autoregressive CMR”; “cardiac phenpotypes” in the Figure 2 caption. I am also not sure what you meant by this sentence: “We modify the VAE in stable diffusion[14] to a 3D-VAE to project the input CMR into a compressed latent space by extending 2D convolutions to 3D convolutions, the spatial downsampling factor, denoted as fs, and the temporal downsampling factor, denoted as ft”

    [2] Adhikari A, Wesley GV 3rd, Nguyen MB, Doan TT, Rao MY, Parthiban A, Patterson L, Adhikari K, Ouyang D, Heinle JS, Wadhwa L. Predicting Cardiac Magnetic Resonance-Derived Ejection Fraction from Echocardiogram Via Deep Learning Approach in Tetralogy of Fallot. Pediatr Cardiol. 2025 Mar 4. doi: 10.1007/s00246-025-03802-y. Epub ahead of print. PMID: 40038120.

    [3] Ouyang, D., He, B., Ghorbani, A. et al. Video-based AI for beat-to-beat assessment of cardiac function. Nature 580, 252–256 (2020). https://doi.org/10.1038/s41586-020-2145-8

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (5) Accept — should be accepted, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper introduces an innovative method for cardiac MRI video generation and presents strong experiments. I believe this work offers valuable contributions to the community by addressing the challenge of limited medical data availability and showing potential for applications in pre-operative planning.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    After the rebuttal I still believe that this paper should be accepted. The authors addressed some minor concerns I had.

    I agree with the concern that reviewer #1 had about validation with clinically meaningful metrics. However, employing medical doctors to evaluate the output of this generative model is costly and time consuming, and I think that this paper would be a valuable resource to the community as it stands - primarily because of its novel methodology and its value as a data generation engine for dataset augmentation.




Author Feedback

Thanks for the valuable comments. Here is our reply:

1. Code link has been fixed.

2. Clarity issues (R2): We clarify: xi represents the non-overlapping latent tokens after 3D-VAE encoding of the CMR; zi denotes the “Embedded token” in the legend (Fig. 1). A masked autoencoder strategy is applied with a dynamic masking ratio (0.7–1.0), as written on the arrow pointing to the Transformer encoder.

3. Clarification of phenotypes and VAE (R1&2&3): Phenotypes (Category 157 field of UKB) are low-dimensional continuous variables with complex clinical relationships. We use a VAE for posterior approximation to model their joint distribution, which is essential for generating physiologically plausible samples. This enables diverse and realistic conditioning inputs for CMR generation. Without the VAE, using real phenotypes limits generative diversity, as the model cannot extrapolate beyond the training data. Latent-space sampling in the VAE allows novel yet realistic phenotypes, improving both pretraining and downstream performance.

4. Discriminative value (R2) of generated CMR and consistency with phenotypes (R1): Segmentation and classical analysis will be part of a future expansion. We conducted a qualitative analysis in Fig. 2: synthesized CMRs are sorted by LVEDV, showing a clear increase in heart size. We also mixed large-scale synthesized phenotypes and CMRs as additional supervision to train a phenotype prediction model, outperforming models trained on real data alone (“mix*” in Tab. 2 and Fig. 3). The above findings are effective support.

5. Clinical necessity (R2): While UKB’s 32,444 CMRs form a considerable dataset, it is small compared to the hundreds of thousands or millions typically used for good pretraining, and there is no publicly available CMR dataset larger than UKB. To address this, we augment it with synthesized high-quality CMRs, which improves downstream performance and supports our motivation.

6. The role of phenotypes in the decoding process (R2): Phenotype information is fused through bidirectional attention (Sec. 2.2); this condition is thus embedded into the latent variable zi of the token diffusion model’s objective p(xi|zi), and classifier-free guidance is used for conditional generation.

7. Classifier (R1): We use the MAE-base encoder (ViT-B/16) with a task-specific prediction head.

8. Sensitivity analysis (R2): Due to space limits, we plan to explore it in future work to better understand the performance gains.

9. Phenotype prediction performance (R2&3): To R2: Due to page limits, we only reported performance for 5 key phenotypes. Not all phenotypes are well captured by 4-ch CMRs, resulting in a lower average R² than for specific phenotypes. To R3: MAE and R² measure different statistical traits: MAE is scale-dependent, while R² depends on target variance. Our LVEF has a smaller standard deviation (std = 6.2 vs. 13.5/12.5 in [2]/[3]), so a low MAE can still produce a moderate R².

10. Evaluation of the generation model (R2&3): FID/FVD measure distributional similarity between generated and real data, with lower values indicating better quality. CPGG can be regarded as an unconditional generative model, since phenotypes are generated from noise without ground-truth CMR, making PSNR/SSIM inapplicable. We will visualize generation results of other models in the revision for qualitative evaluation. We cropped CMRs to focus on cardiac regions, highlighting motion areas, while GANcMRI did not crop an ROI, leaving mostly static background that weakens motion signals. This reduces temporal differences in I3D features, resulting in a lower FVD for GANcMRI.

11. Comparative analysis on downstream tasks (R1&2): In our early experiments, traditional augmentation harmed downstream task performance. We are open to expanding the comparative analysis with other generative models, but their slow generation limits large-scale pretraining, highlighting our method’s speed advantage. We trained CPGG only on the training set (Sec. 3.1) to prevent data leakage, and we also have external data to ensure robust evaluation.
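The classifier-free guidance rule named in point 6 of the reply amounts to a one-line blend of the unconditional and phenotype-conditional denoiser outputs. The function below is an illustrative sketch, not the authors' code:

```python
def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: blend the unconditional and conditional
    denoiser outputs, eps = eps_uncond + w * (eps_cond - eps_uncond).
    w = 0 ignores the condition, w = 1 is purely conditional, and w > 1
    extrapolates past the conditional prediction for stronger guidance."""
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

print(cfg_combine([0.0, 0.0], [1.0, -1.0], 2.0))  # [2.0, -2.0]
```

Training drops the condition for a random fraction of samples so a single model provides both predictions; at sampling time the scale w trades off diversity against adherence to the conditioning phenotypes.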




Meta-Review

Meta-review #1

  • Your recommendation

    Invite for Rebuttal

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    This paper presents a phenotype-guided generative framework for synthesising high-fidelity cardiac MRI (CMR) data, aimed at augmenting datasets for pretraining and downstream clinical applications. While Reviewer 1 raised some concerns regarding clinical validation and architectural clarity, Reviewers 2 and 3 supported acceptance, recognising the novelty, technical merit, and practical utility of the method. The rebuttal provides thoughtful clarification on key points, including the role of the phenotype VAE, generation fidelity, and downstream task impact.



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A


