Abstract

Unsupervised domain adaptive segmentation typically relies on self-training using pseudo-labels predicted by a pre-trained network on an unlabeled target dataset. However, noisy pseudo-labels present a major bottleneck in adapting a network to distribution shifts between source and target domains, particularly when data arrives in an online manner and adaptation is constrained to exactly one round of forward and backward passes. In this scenario, relying solely on inaccurate pseudo-labels can degrade segmentation quality, which is detrimental to medical image segmentation where accuracy and precision are of utmost priority. In this paper, we propose an approach to address this issue by incorporating expert guided active learning to enhance online domain adaptation, even without dedicated training data. We call our method \textbf{ODES}: \underline{O}nline \underline{D}omain Adaptation with \underline{E}xpert Guidance for Medical Image \underline{S}egmentation, which adapts to each incoming batch of data in an online setup. However, acquiring annotations through active learning for all images in a batch often results in redundant data annotation and increases temporal overhead in online adaptation. We address this issue by proposing a novel image-pruning strategy that selects the most informative subset of images from the current batch for active learning. We also propose a novel acquisition function that enhances the diversity of the samples selected for annotation. Our approach outperforms existing online adaptation approaches and produces competitive results compared to offline domain adaptive active learning methods. The code can be found at \url{https://github.com/ShazidAraf/ODES}

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/5017_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

N/A

Link to the Dataset(s)

CHAOS dataset: https://chaos.grand-challenge.org/Results_CHAOS/ DUKE dataset: https://pubs.rsna.org/doi/full/10.1148/ryai.220275

BibTex

@InProceedings{IslMd_ODES_MICCAI2025,
        author = { Islam, Md Shazid and Nag, Sayak and Dutta, Arindam and Ahmed, SK Miraj and Niloy, Fahim Faisal and Bera, Shreyangshu and Roy-Chowdhury, Amit K.},
        title = { { ODES: Online Domain Adaptation with Expert Guidance for Medical Image Segmentation } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15963},
        month = {September},

}


Reviews

Review #1

  • Please describe the contribution of the paper

    The work addresses medical image segmentation under an active learning scenario. Moreover, it focuses on how to deal with domain shift given limited annotation effort.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The focused topic is indeed important for medical image research. It is also an interesting extension from the well-studied Test Time Adaptation scenario.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    The overall presentation is acceptable, but the paper lacks sufficient evidence to confirm its value. As claimed by the author(s), it is the first paper to focus on medical image segmentation with online streaming data; some demonstrations of the effectiveness of the proposed method are therefore needed. For instance, we would like to see the proposed method compared to naïve approaches or baseline methods. The random selection in Table 3 is a good example; it should be included in many other tables. In fact, we would like to see figures showing the trends resulting from different choices of the controlling parameters.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    Detailed comments:

    • The setting of receiving target data in an online fashion and seeing a batch of data only once seems inappropriate for real-world medical image segmentation scenarios; it is rare for medical images to be deleted after a while due to storage limitations. Building a pool to store possible candidates for annotation would require some effort, though.
    • The proposed algorithm is not specific to medical image segmentation but applies to general segmentation tasks. One would like to see specific considerations or design choices for medical applications. For instance, a discussion of the segmentation of different organs (liver, kidney, etc.) according to their specific issues could be very valuable.
    • Most comparisons in the experiment section may have little value because most were based on different settings. Various advantages demonstrate the corresponding model’s effectiveness, and that is expected without surprise. The only new information is the difference due to the different settings or scenarios. In this case, a good figure conveys more information; for example, we could learn the relation between different choices of parameters and the model’s effectiveness (e.g., the relation between different K and the effectiveness). Overall, it is hard to judge how valuable the proposed method is given the various empirical studies without theoretical support.
    • (following the previous point) When comparing to TTA, the author(s) should provide a smooth transition from TTA to ODES, perhaps by tuning K down until it becomes very small, to confirm how much ODES, or how much image-pruning-based annotation, is crucial to boosting the final performance. The annotations trade off against the final performance, and one would like to know how much annotation effort must be spent for the model’s effectiveness.
    • Is the BN-based pruning in Eq. 1 considered unsupervised? How might label information, such as the correct annotations, influence the BN-based divergence computation and the pruning result?
    • If domain adaptation is the main focus of this work, then we would be interested in knowing how to choose the right parameter to deal with situations requiring different amounts of adaptation.
    • Why can’t a larger K always lead to better performance in Table 1? It looks closely related to how much domain adaptation is needed. In that case, there should be a metric describing the amount of adaptation needed, compared against the optimal choice of K.
    • How many controlling parameters are there in this work? K and b, and any others? All should be clearly mentioned in the method and experiment parts. Moreover, some necessary ablation studies should be included.
    • Is the term “expert” related to “oracle”? One may get confused with the “expert” used in one of the recent hot topics, “mixture of experts”.
    • Why does Table 3 show results for K=10 rather than for other choices of K? One would prefer to see various choices of K and their results.

    Minor comments:

    • Both “image pruning” and “in-phase” are abbreviated as “IP” which is very confusing.
    • Typo: “The acquisition of is done using uncertainty-guided AL.” (page 2)
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (2) Reject — should be rejected, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    It is unclear how the proposed method could be applied in various scenarios and what insights can be learned from this paper.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #2

  • Please describe the contribution of the paper

    The paper introduces a domain adaptation approach using active learning for image segmentation. The main contribution lies in a novel acquisition strategy that leverages uncertainty while preserving sample diversity and effectively reducing the amount of manual annotation required for the active learning approach. The results on several public datasets show that the model achieves better performance than state-of-the-art methods for medical image segmentation, and the performance is comparable to offline active domain adaptation methods.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The integration of active learning into the domain adaptation framework.
    • A novel diversity-weighting strategy is proposed to enforce sample diversity.
    • The loss function includes a component designed to enforce shape continuity and ensure a smooth transition between successive slices.
    • Results on several public datasets demonstrate significant improvements over six SOTA methods. Additionally, the ablation studies show that the diversity weighting strategy contributes to improving the model’s performance.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    • The impact of active learning in medical images, particularly in the context of online streaming, appears to be limited and uncertain.
    • In the description of image pruning, it would be helpful to explicitly describe the augmentation process used to obtain X_hat.
    • Could the authors clarify whether the classification used for A^c corresponds to the output of the pre-trained model, specifically the argmax(P)?
    • Figure 3 should be thoroughly explained, including the inputs, the autoencoder (AE) model and P. How is this model integrated with the workflow described in Fig.2? Does the AE correspond to the pre-trained model, f_theta, used for pseudo-label inference, or is it a new model?
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
    • The paper is well-written and easy to follow; however, some methodological details are missing.
    • Justifications and citations are provided to support most of the claims.
    • The paper includes information about the dataset and code implementation. Nonetheless, the reproducibility of the reported results does not appear to be straightforward.
    • Acronyms should be introduced before using them, for example, unsupervised domain adaptation (UDA).
    • The results demonstrate significant improvements; however, reporting p-values and 95% confidence intervals would provide a more robust comparative analysis. The authors may consider using bootstrap methods for this purpose in future work.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The proposed method introduces a degree of novelty within the domain adaptation framework and was evaluated on several public datasets, demonstrating significant improvements over six state-of-the-art methods. Furthermore, the ablation study supports the contribution of the proposed innovations.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    Major concerns were addressed



Review #3

  • Please describe the contribution of the paper

    The main contribution of the paper is the first application of an online active learning-driven domain adaptation framework. Specifically, it proposes an image pruning strategy based on BatchNorm statistics, combined with a region sampling mechanism that considers uncertainty, impurity, and diversity.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. First Integration of Active Learning and Source-Free Online Adaptation: This work is the first to integrate active learning with source-free online domain adaptation, forming the ODES framework. Previous studies either relied solely on test-time adaptation (TTA) using pseudo-labels or used offline source-based/source-free adaptive domain adaptation (ADA).
    2. Domain Shift-Aware Image Selection via BatchNorm Statistics: The method leverages the running mean/variance of BatchNorm layers to compute the KL divergence from source domain statistics, thereby quantifying the “degree of domain shift.” This allows automatic selection of the top K% most divergent images for annotation. This strategy concentrates expert effort on the most shifted samples, significantly reducing labeling costs.
    3. Composite Acquisition Function: The acquisition function integrates three metrics in a unified sampling process: (1) pixel-level uncertainty (entropy), (2) local impurity (mixing degree), and (3) dual spatial-feature diversity (two-stage FFT with Gaussian weighting). This approach effectively avoids redundant annotations—selecting not only the hardest-to-segment regions but also ensuring that these regions are spatially and semantically diverse, thus maximizing model performance gain from limited annotations.
    4. Comprehensive and Rigorous Empirical Evaluation: On three cross-domain medical datasets (CHAOS T1->OOP, CHAOS->DUKE, and BMC->RUNMC), only 1% annotation budget achieves significant performance improvements over various state-of-the-art TTA methods. Under the same budget, its performance is only 2–3% lower than offline ADA methods, demonstrating the competitiveness of the online approach. Ablation and Generalization: Detailed ablation studies (analyzing the contributions of image pruning and diversity weighting) and “forgetting analysis” confirm that the method does not suffer from catastrophic forgetting, highlighting the framework’s stability and robustness.
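    The BatchNorm-statistics pruning described in point 2 can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes diagonal-Gaussian per-channel statistics, and the function names (`kl_gaussian`, `prune_batch`) and the direction of the KL divergence are assumptions.

    ```python
    import numpy as np

    def kl_gaussian(mu_p, var_p, mu_q, var_q, eps=1e-6):
        """KL divergence KL(P || Q) between diagonal Gaussians, summed over channels."""
        var_p, var_q = var_p + eps, var_q + eps
        return 0.5 * np.sum(
            np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
        )

    def prune_batch(image_stats, source_stats, k_percent):
        """Score each image in the batch by the divergence of its per-layer
        feature statistics from the source-domain BN running statistics, and
        keep the top-k% most shifted (most divergent) images for annotation."""
        scores = np.array([
            sum(kl_gaussian(mu, var, s_mu, s_var)
                for (mu, var), (s_mu, s_var) in zip(stats, source_stats))
            for stats in image_stats
        ])
        n_keep = max(1, int(round(len(scores) * k_percent / 100)))
        # Indices of the most divergent images, highest divergence first.
        return np.argsort(scores)[::-1][:n_keep]
    ```

    Under this sketch, `image_stats` holds per-image, per-layer (mean, variance) pairs computed on the fly, while `source_stats` holds the frozen BN running statistics of the pretrained model; the actual paper may aggregate statistics differently.
    
    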
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    Some components are not entirely novel.

    1. The image pruning strategy, which uses the KL divergence of BatchNorm statistics to measure domain shift, is inspired by the work of Niloy et al. in “Effective Restoration of Source Knowledge in Continual Test Time Adaptation” [1], where it was originally proposed to detect and restore source domain information. While this paper repurposes the idea for active learning-based selection, the concept itself is not entirely new.
    2. The acquisition function, which combines uncertainty entropy and region impurity, also heavily draws from the “Entropy Impurity” strategy proposed by Xie et al. in RIPU [2]. The main contribution here lies in the integration of a diversity weighting mechanism; however, diversity sampling based on Farthest-First Traversal is itself derived from the Core-Set approach by Sener and Savarese [3], a widely adopted technique in active learning literature.

    References: [1] Niloy, Fahim Faisal, et al. “Effective Restoration of Source Knowledge in Continual Test Time Adaptation.” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024. [2] Xie, Binhui, et al. “Towards Fewer Annotations: Active Learning via Region Impurity and Prediction Uncertainty for Domain Adaptive Semantic Segmentation.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. [3] Sener, Ozan, and Silvio Savarese. “Active Learning for Convolutional Neural Networks: A Core-Set Approach.” International Conference on Learning Representations, 2018.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Key Strengths Supporting Acceptance

    1. A novel paradigm of online active learning: This work is the first to combine active learning with test-time adaptation, bridging a previously unexplored intersection between the two fields.
    2. Rigorous and convincing experimental validation: Extensive evaluations on three cross-domain medical image datasets demonstrate clear advantages over multiple state-of-the-art TTA and offline ADA methods, achieving performance close to offline ADA despite operating under stricter online constraints. The inclusion of ablation studies (on Image Pruning and Diversity weighting) and forgetting analysis further supports the robustness and soundness of the method’s design.

    Concerns and Areas for Improvement

    1. Limited originality in certain components: Some key ideas are adapted from prior work—such as KL-based BatchNorm pruning and entropy–impurity-based region sampling. While the integration under an online learning framework is novel, the individual modules themselves show limited originality.
  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    The author answered my question by listing the differences from the existing literature.



Review #4

  • Please describe the contribution of the paper

    The paper introduces ODES, a novel framework for online domain adaptation in medical image segmentation. ODES combines active learning and batch normalization-based image pruning to adapt in a streaming setting with no source data access or data storage. It proposes a diversity-aware patch acquisition strategy and achieves state-of-the-art performance among test-time adaptation methods, coming close to offline active domain adaptation (ADA) performance despite its online constraints.​

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. First to propose an active learning-based online domain adaptation method for medical segmentation.​
    2. Novel image pruning strategy using batch norm statistics to reduce annotation cost.​
    3. Diversity-weighted acquisition function improves annotation efficiency.​
    4. Robust ablation studies and strong empirical results across three domain shifts.​
    5. Achieves performance close to offline ADA methods with only 1% annotation budget.​
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. Only a single pretraining model and architecture (DeepLabv3) were tested.​
    2. The annotation budget (1%) is fixed; no sensitivity analysis for b% is provided.​
    3. Limited analysis of real-time latency and feasibility under tight clinical constraints.​
    4. The strategy assumes expert annotations are instantly available, which may not always hold.​
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    Suggestion: Table 4 (Forgetting Analysis) would benefit from reporting the mean Dice score across all 20 batches in addition to per-batch values. This would make the trends easier to interpret and strengthen the analysis.​

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (5) Accept — should be accepted, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    ODES presents a novel and well-motivated approach for active learning in an online domain adaptation setting. The problem formulation is timely, and the proposed image pruning and diversity-weighted acquisition strategies are thoughtfully designed. The results are impressive given the online constraint, and ablations support the method’s effectiveness. Minor concerns about generalization beyond DeepLabv3 and annotation delay under clinical workflows reduce the score slightly.​ Additionally, Table 4 could be improved by including the mean Dice score across all 20 batches for each cycle. This would better illustrate the trend and quantify overall improvements in performance.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A




Author Feedback

We thank all four reviewers (R1,R2,R3,R4) for their constructive feedback.

[R4]Novelty: ODES is the first Active Learning (AL) approach used for online domain adaptation in medical image segmentation. (a) Our image pruning (IP) strategy to reduce annotation cost is novel. Although [16] detects domain shift, it does not address the issue of annotation cost. (b) The Gaussian weighting strategy that integrates spatial and feature-wise diversity is novel. Farthest-First Traversal (FFT) has previously been used only for sampling pixels; the overall weighting mechanism, however, is unique.

[R1]Augmentation of X_hat: To obtain X_hat​, we apply augmentations such as random horizontal & vertical flips, small rotations, and brightness adjustments to each image of the batch.

[R1]A^c: Output argmax(P) is converted into a one-hot encoded mask over C classes. A^c corresponds to class c in the one-hot encoding.

[R1]Fig 3: Yes, the Auto-Encoder (AE) denotes the pretrained segmentation model f_theta; the input is X_t (the input image batch), and P is the output predictions. Fig 3 provides a detailed view of the Acquisition block from Fig 2, applied individually to each image selected by the image pruning module (IP). For each selected prediction P_j, we compute an initial acquisition map A_init^(j) using the acquisition function, then calculate W_f and W_d, and combine them as W = upsample(W_f ⊙ W_d). The final acquisition map becomes A^(j) = A_init^(j) ⊙ W. To enhance clarity in the final version, we will revise Fig 3 as follows: (a) mention f_theta alongside AE; (b) insert IP after AE to indicate that only the sampled subset of predictions and their features is used in AL; (c) rename A_init → A_init^(j), P → P_j, A → A_final^(j), W → W_j to indicate that each relates to a single image of the test batch; (d) replace input → X_t and the shape indicator (H × W) → (Height × Width) to avoid confusion.
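The acquisition-map combination described in this response, W = upsample(W_f ⊙ W_d) followed by A^(j) = A_init^(j) ⊙ W, can be sketched as follows. This is a hedged illustration with assumed shapes and nearest-neighbour upsampling; the helper names (`upsample_nearest`, `final_acquisition_map`) are hypothetical, and the computation of W_f and W_d themselves is not shown.

```python
import numpy as np

def upsample_nearest(w, out_h, out_w):
    """Nearest-neighbour upsampling of a low-resolution weight map
    (assumes out_h, out_w are integer multiples of the input shape)."""
    h, w_ = w.shape
    rows = np.repeat(np.arange(h), out_h // h)
    cols = np.repeat(np.arange(w_), out_w // w_)
    return w[np.ix_(rows, cols)]

def final_acquisition_map(a_init, w_f, w_d):
    """Combine feature-diversity (W_f) and spatial-diversity (W_d) weights
    at feature resolution, upsample to image resolution, and modulate the
    initial acquisition map: A = A_init ⊙ upsample(W_f ⊙ W_d)."""
    h, w = a_init.shape
    weight = upsample_nearest(w_f * w_d, h, w)
    return a_init * weight
```

In this sketch, regions down-weighted by either diversity term are suppressed in the final map, so annotation is concentrated on uncertain regions that are also spatially and semantically diverse.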

[R2] ODES designed for medical applications: ODES aligns AL-induced delays with natural turnover periods in clinical workflows (ref: introduction), thereby minimizing the impact of AL’s latency. However, AL is less suitable for applications like online segmentation for autonomous driving, where even minimal lag poses safety risks. Hence, ODES is best suited to medical settings, where minor delays are acceptable within clinical workflows and the benefits of AL outweigh the impact of latency.

[R2] BN-stats: The BN-based pruning in Eq. 1 is fully unsupervised, as it does not rely on labels. While labels are used to update model parameters (BN affine parameters), our pruning relies solely on the BN statistics of the pretrained source model f_theta​ and incoming test batches. Thus, model updates using labeled images and our pruning strategy are totally orthogonal.

[R2]Storage: ODES is suitable for mobile healthcare applications where ample storage is scarce and access to the cloud can be challenging.

[R2,R3]Parameter Impacts: (a) K and b are the only controlling parameters. Ablations on varying K are shown in Tables 1 and 2. When b is in the range 0.1–0.7%, we observe a drastic gain in DSC. Beyond b ≈ 1%, DSC continues to improve but only marginally, showing a saturation trend: further increases in b yield only small gains while significantly increasing annotation effort. Hence we chose b = 1%.

(b) K=10 represents an extreme case that significantly reduces annotation cost and highlights the effectiveness of image pruning. Strong performance at K=10 implies similar or better results for larger K; therefore, we conduct the ablations of Table 3 for K=10. (c) The mean DSC consistently improves with larger K across all adaptations. An outlier scenario with the liver class in Table 1, likely due to stochastic effects, leads to a negligible drop in DSC for K=100.

[R2]Clarity and Typo: We will correct the typo and enhance clarity as suggested. “Expert” refers to Oracle. [R3] Table 4: We will report mean DSC of all batches. [R1] Reproducibility: Additional information regarding reproducibility will be included in the final version.




Meta-Review

Meta-review #1

  • Your recommendation

    Invite for Rebuttal

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    This paper introduces a framework for online domain adaptation in medical image segmentation that integrates active learning with batch normalization-based image pruning and a diversity-aware acquisition strategy.

    The method addresses a timely and underexplored challenge—adapting segmentation models in streaming settings with limited annotation budgets and no access to source data. Reviewers generally praised the originality of combining test-time adaptation and active learning in an online setting, as well as the strong empirical performance across three domain shifts with only 1% annotation effort.

    While one reviewer raised concerns about real-world applicability and experimental clarity, these were adequately addressed in the rebuttal.

    Overall, the paper makes a meaningful and well-validated methodological contribution and should be accepted.



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A


