Abstract
Accurate disease grading is critical for early diagnosis and effective treatment planning. However, class imbalance and subtle inter-class variations in real-world disease grading datasets make it challenging for traditional classification models to differentiate between neighboring disease stages and preserve ordinal label relationships. Existing approaches emphasize inter-class ordinal relationships but fail to distinguish closely related categories effectively. To address these limitations, we consider disease grading as an ordinal regression problem and adopt a supervised contrastive learning approach to design a hybrid supervised contrastive ordinal learning framework. Our framework consists of three basic modules: 1) prototype-based contrastive ordinal learning, 2) weighted sample-based contrastive learning, and 3) disease stage grading using regression. To deal with class imbalance while enhancing intra-class consistency and inter-class separation, we design a distance-based prototype contrastive ordinal loss, which pushes the samples closer to their class centers while maintaining their ordinality. This approach captures subtle differences within closely related disease stages and results in a separable ordinal latent space. Additionally, a per-sample class weighting strategy is integrated into weighted supervised contrastive ordinal learning to prevent class collapse, ensuring balanced gradient contributions and robust inter-class separation. Our approach effectively captures both large-scale and fine-grained variations, enabling precise ordinal classification for disease grading. We validate the framework on diabetic retinopathy and breast cancer datasets, demonstrating its adaptability across medical conditions and potential to enhance diagnostic accuracy in medical imaging applications.
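As a reading aid, the minimal sketch below shows how the three modules named in the abstract could be combined into a single training objective. It assumes PyTorch, equal loss weights, and an RMSE regression term; the combination scheme is an assumption, not stated in the abstract, and the two contrastive component losses are sketched further down this page alongside the reviews that describe them.

```python
import torch
import torch.nn.functional as F

def rmse_loss(pred_grade, true_grade):
    """Regression objective for disease stage grading (module 3)."""
    return torch.sqrt(F.mse_loss(pred_grade, true_grade.float()))

def hybrid_objective(l_pcol, l_scolw, l_rmse, w=(1.0, 1.0, 1.0)):
    """Sum of the three module losses; equal weights are an assumption,
    as the abstract does not state the exact combination scheme."""
    return w[0] * l_pcol + w[1] * l_scolw + w[2] * l_rmse
```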
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/3657_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
https://github.com/AfsahS/A-Hybrid-Contrastive-Ordinal-Regression
Link to the Dataset(s)
Diabetic Retinopathy dataset: https://www.kaggle.com/c/diabetic-retinopathy-detection
BUSI dataset: https://www.kaggle.com/datasets/aryashah2k/breast-ultrasound-images-dataset
BibTex
@InProceedings{SalAfs_AHybrid_MICCAI2025,
author = { Saleem, Afsah and Lewis, Joshua R. and Gilani, Syed Zulqarnain},
title = { { A Hybrid Contrastive Ordinal Regression Method for Advancing Disease Severity Assessment in Imbalanced Medical Datasets } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15972},
month = {September},
pages = {13--22}
}
Reviews
Review #1
- Please describe the contribution of the paper
This paper presents a hybrid supervised contrastive ordinal learning framework for disease severity grading in imbalanced medical image datasets. The approach combines two novel losses—Prototype-based Contrastive Ordinal Loss (PCOL) and Weighted Supervised Contrastive Ordinal Loss (SCOLw)—with an RMSE-based regression head. PCOL aligns sample embeddings with class prototypes while enforcing ordinal separation via a distance-aware penalty. SCOLw applies inverse-frequency weighting to mitigate class collapse and improve minority-class separation. Experiments on Diabetic Retinopathy and BUSI breast ultrasound datasets report improved accuracy and MAE over several baselines.
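To make the PCOL idea summarized above concrete, here is one plausible PyTorch sketch: samples are aligned with in-batch class prototypes, and an additive margin that grows with ordinal distance enforces the distance-aware separation the reviewer describes. The temperature, margin form, and normalization are assumptions, not the paper's actual Eq. 1.

```python
import torch
import torch.nn.functional as F

def pcol_loss(z, y, tau=0.1, margin=0.1):
    """Hypothetical sketch of a prototype-based contrastive ordinal loss (PCOL).

    z: (B, d) embeddings; y: (B,) integer grade labels. In-batch class
    prototypes serve as anchors, and an additive margin proportional to
    ordinal distance pushes prototypes of distant grades further apart.
    """
    z = F.normalize(z, dim=1)
    classes = torch.unique(y)  # sorted grades present in the batch
    protos = F.normalize(
        torch.stack([z[y == k].mean(0) for k in classes]), dim=1)
    # note: a grade with a single sample in the batch yields a prototype
    # equal to that sample's embedding (cf. Review #4's concern below)
    sim = z @ protos.T / tau                      # (B, K) sample-prototype similarities
    dist = (y[:, None] - classes[None, :]).abs().float()
    logits = sim + margin * dist                  # dist is 0 for the true class, so only
                                                  # negative prototypes get the ordinal margin
    target = torch.searchsorted(classes, y)       # column index of each sample's own prototype
    return F.cross_entropy(logits, target)
```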
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Clear problem motivation: Class imbalance and subtle inter-class variations in medical grading tasks are well known; the paper correctly identifies and targets this challenge.
Modular design: The two loss functions (PCOL and SCOLw) plus a regression head form a coherent, single-stage training pipeline that is easy to implement.
Empirical improvements: On the BUSI dataset, the proposed method achieves 91.0% accuracy and 0.10 MAE, outperforming prior contrastive and ordinal methods. Ablation studies demonstrate that each component contributes positively to overall performance.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
Incremental novelty: PCOL is essentially prototype-based metric learning, and SCOLw is a straightforward weighting of an existing SCOL loss. The paper offers minor modifications of SupCon/SCOL rather than a fundamentally new algorithmic innovation.
Limited clinical relevance: All experiments are confined to two public benchmarks. There is no discussion of performance on heterogeneous, real-world clinical data or how a 1–2% gain in MAE would translate into improved patient outcomes.
Baseline reproducibility concerns: Details on hyperparameter tuning and whether competing methods were re-implemented under identical conditions are missing. On the DR dataset, the proposed model even trails Ord2Seq in overall accuracy, casting doubt on fair comparison.
Insufficient technical insight: The intuition behind the distance-aware ordinal penalty in PCOL is not deeply analyzed. Figures fail to illustrate how the latent space ordering is meaningfully improved compared to standard contrastive approaches.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
While the paper tackles a relevant problem and shows some empirical gains, its contributions are largely incremental, and key claims about novelty and real-world impact remain under-supported.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
I have revised my opinion from a weak reject to an accept.
Review #2
- Please describe the contribution of the paper
The paper presents a framework for ordinal disease severity prediction in medical imaging, specifically targeting highly imbalanced datasets, by introducing two contrastive ordinal loss functions and formulating the grading task as a regression problem.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The idea of considering the ordinal disease grading task as a regression task is interesting and is proven to work well.
- The proposed use of PCOL to align sample embeddings with class prototypes while penalizing misalignments based on ordinal distance proves to be effective.
- The authors extend supervised contrastive learning by incorporating class-frequency-based weights to address class imbalance.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- The motivation for using class prototypes in contrastive learning (PCOL) is not well justified. It is unclear why existing approaches such as AdaCon perform poorly on long-tailed datasets and how the introduction of prototypes specifically mitigates this issue.
- Although the method shows strong performance, the contribution appears incremental. The key components, like PCOL and SCOL_w, are not conceptually novel. The use of class prototypes in contrastive learning has already been explored in [1]. Moreover, SCOL_w is essentially a weighted extension of existing supervised contrastive loss functions, and the novelty is limited.
- Regarding the figures, their quality can be improved—consider using vector graphics (e.g., PDF format) to avoid blurriness. Rather than embedding lengthy text within figures, use symbolic representations to improve clarity. Some visual elements, such as Figure 1(b), do not add significant value and could be refined or removed.
There are also minor issues throughout the paper, such as multiple values being highlighted in Table 3, which could be confusing.
[1] Li, J., Zhou, P., Xiong, C., & Hoi, S.C. (2020). Prototypical Contrastive Learning of Unsupervised Representations.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not provide sufficient information for reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Though the method is effective, I am currently rating this work as a weak reject, primarily because the paper lacks a clearly articulated motivation and presents only incremental contributions.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Reject
- [Post rebuttal] Please justify your final decision from above.
After reading the authors’ response, I would like to maintain my original rating. This decision is primarily due to the high similarity between the proposed method (Eq. 1/2) and the well-known PCL baseline (Eq. 11). While the introduction of a weighting mechanism presents some degree of novelty, the overarching idea, modifying a CL function into a weighted version, has been extensively studied, particularly in the context of long-tailed learning. The rebuttal argument that “PCL has not been applied to supervised ordinal regression” is not particularly compelling. Rather than simply stating this, it would be more helpful for the authors to articulate why PCL or similar methods have not been adopted in this context and how this insight motivates the specific design choices made in their method. I also concur with the other reviewer regarding the lack of fundamentally new algorithmic innovation in the proposed approach.
Review #3
- Please describe the contribution of the paper
This paper proposes a hybrid supervised contrastive ordinal learning framework, which mainly includes a prototype-based contrastive ordinal loss (PCOL) and a weighted supervised contrastive ordinal loss (SCOLw). PCOL enhances global structure learning by leveraging class prototypes as representative embeddings, aligning individual samples with their respective prototypes. This promotes intra-class compactness, inter-class separability, and ordinal consistency through a distance-aware regularization term, enabling the model to capture subtle variations in disease severity. SCOLw preserves intra-class compactness while learning fine-grained inter-class variations through bounded ordinal penalties, preventing the over-penalization of underrepresented samples and stabilizing feature representations.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The motivation of using ranking learning from two views to constrain the model's feature extraction is interesting.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- The framework is not well organized; it is hard to align the description with the framework.
- The variables in Eq. 1 are not clearly defined.
- It seems the two-view ranking learning degenerates into contrastive learning, which is less novel.
- Please rate the clarity and organization of this paper
Poor
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The novelty of the method
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #4
- Please describe the contribution of the paper
- The authors propose a hybrid contrastive learning framework that integrates both instance-level and prototype-based objectives.
- They cast the task as a regression problem, aligning the model output with the ordinal nature of the labels.
- This combination of objectives results in a more robust solution, delivering significantly improved performance on long-tail (low-frequency) classes, a major challenge in real-world scenarios, particularly in healthcare.
- Specifically, they use class prototypes computed as average embeddings of in-batch samples per class, and then enforce proximity between each sample and its class prototype while distancing it from others, similar in nature to the objectives in prototypical neural networks.
- Additionally, the authors introduce a weighted variant of the Supervised Contrastive Ordinal Loss (SCOL), where the contribution of each class is scaled inversely to its frequency, reducing the dominance of frequent classes during training (a sketch of this weighting follows this list).
- Empirical evaluations, including stacked bar plots, show not only improved correct predictions but also a favorable misclassification pattern: errors are more often to adjacent classes, indicating that the embedding space better reflects the ordinal structure and reduces severe misclassifications.
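The inverse-frequency weighting described in the bullets above can be illustrated with a short sketch. The snippet below applies per-sample weights, computed from in-batch class frequencies as the rebuttal later confirms, on top of a SupCon-style loss; the bounded ordinal penalty of the full SCOL_w is deliberately omitted, so this shows the weighting idea only, and the normalization chosen here is an assumption.

```python
import torch
import torch.nn.functional as F

def scol_w_loss(z, y, tau=0.1):
    """Hypothetical sketch of the weighted supervised contrastive loss (SCOL_w).

    Per-sample weights are inversely proportional to in-batch class frequency;
    with this normalization, every class present in the batch contributes
    equally on average (mean weight is 1).
    """
    B = z.size(0)
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau
    sim.fill_diagonal_(-1e9)                      # exclude self-similarity
    pos = (y[:, None] == y[None, :]).float()
    pos.fill_diagonal_(0)                         # positives: same-class pairs, excluding self
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # samples whose class appears only once have no positives and contribute 0
    per_sample = -(pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    counts = torch.bincount(y)                    # in-batch class frequencies
    k = len(torch.unique(y))
    w = B / (k * counts[y].float())               # inverse-frequency weights
    return (w * per_sample).mean()
```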
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Strong empirical evidence: Figure 2 effectively illustrates the superiority of the proposed method over related works, particularly by presenting both correct and adjacent class prediction rates. This analysis highlights the model's ability to respect ordinal relationships even when errors occur.
- Robust performance on long-tail classes: The overall performance, especially in terms of Mean Absolute Error (MAE), demonstrates the model's strength in handling class imbalance. The improvement on low-frequency (long-tail) classes is particularly noteworthy, as this remains a major challenge in real-world applications, especially in healthcare.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- Methodological Ambiguities:
  - The definition and computation of the class-specific weights (w_i) are unclear. Figure 1 suggests they are derived from the CNN output, while Section 2.3 claims they are based on the inverse class frequency, computed dynamically. However, the formula or exact mechanism for this computation is not provided; clarifying this would resolve a key ambiguity.
  - Section 2.1 defines a generic feature map (psi), but it is not referenced in later parts of the paper. Specifically, it is unclear how it is projected to the feature embedding, such as f_a, or how it interacts with the gray layer and blue MLP layers. An explicit description of how psi propagates through the network may be needed, or its removal may prevent further ambiguities.
- Implementation Details:
  - The statement "projection heads in contrastive learning blocks have 1280 and 128 neurons" is vague. It is unclear which projection head corresponds to which part of the framework (e.g., is 1280 used for PCOL?).
  - The mini-batch sampling strategy is not discussed. Was sampling random, stratified, or class-balanced? Given the small batch size (24), it is important to consider cases where minority classes are underrepresented, which could affect the reliability of the class prototypes C_i. For example, if a batch contains a single sample of some class, its prototype feature would equal that sample's embedding, since averaging has no effect. Would this cause any problems?
- Results and Statistical Rigor:
  - Quantitative results in Table 1 would benefit from reporting the mean and standard deviation over multiple runs to establish the significance of improvements. Table 2 could include p-values if space is limited. The repeated use of the word "significantly" is not supported by statistical evidence.
  - The ablation results appear to be reported on the test set, which is inappropriate. These analyses should be conducted on the validation set to avoid test data leakage. The authors did mention that they use 10-fold and 5-fold cross-validation for the DR and BUSI datasets, respectively, but the results are not reported as such.
  - In Table 3, shouldn't the 2nd column be called "SCOL"? The weighted version seems to be indicated by the 3rd column, w_i. If that is the case, please update the naming and description in Section 4.1 for consistency.
- Writing & Organization:
  - Overall writing can be improved in terms of efficiency, organization, and thus readability. Please refer to the further details below.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
Aside from the major comments above, the remaining minor problems are listed below:
- In the abstract, "... intra-class consistency and intra-class separation," has a typo. It must be inter-class separation, correct? Just in case, please revisit all instances of inter and intra in the paper.
- In the abstract, "To address these limitations, we consider disease grading as an ordinal regression problem and adopt a supervised contrastive learning approach to design a hybrid supervised contrastive ordinal learning framework" is too wordy and does not read smoothly. Consider refining it, for example by dropping "adopt a supervised contrastive learning approach to".
- The first sentence of the last paragraph of page 2 (4th paragraph of the Introduction), "To address the above challenge...", could be far more efficient. Please reword.
- Paragraphs 4 and 6 of the Introduction have too much overlap and redundancy; they could be condensed to free up space for more content.
- Paragraph 5 is redundant, as it is repeated in Sections 3 and 4. Valuable space is lost this way.
- Does the arrow showing "shared weights" correspond to the whole CNN backbone plus the gray layers after the backbone? Is the CNN backbone frozen, with only the gray heads learnable? Please clarify in the figure if any layer is frozen. Are the two blue MLP blocks also shared?
- In Equation 2, I assume B is the batch size, but it is not defined. This is inconsistent with the mini-batch notation I in Equation 1, Section 2.2.
- Is the reported accuracy the balanced accuracy across classes? How about the MAE: is it averaged across the classes? (A small sketch after this list illustrates the distinction.)
- How many and what type of GPUs were used for training the proposed solution?
- In Table 2, the accuracy of SupCon [7], "95.4%", should be in bold font for the Benign class, unless the number has a typo. Please also revisit and fix "It significantly improves Benign classification with 94.7% ...".
- In Table 3, the Malignant result of "ours" is not supposed to be in bold. However, the text says "For Malignant, both losses yield 0.15 MAE ...". Does that mean "0.11" is a typo? Please fix the issue.
- Please put grouped citations in ascending order; for example, "... to reduce long-term complications and deaths [14,11,9]." should be [9,11,14].
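As a neutral illustration of the metrics question raised above, the snippet below computes macro-averaged (balanced) accuracy and class-averaged MAE, as opposed to the overall micro-averaged versions; the function name and NumPy usage are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def per_class_metrics(y_true, y_pred):
    """Balanced (macro) accuracy and class-averaged MAE over integer grades."""
    classes = np.unique(y_true)
    acc = np.mean([(y_pred[y_true == k] == k).mean() for k in classes])   # per-class recall, averaged
    mae = np.mean([np.abs(y_pred[y_true == k] - k).mean() for k in classes])  # per-class MAE, averaged
    return acc, mae
```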
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
While the paper contains several ambiguities in the methodology and implementation details, its core contributions are meaningful and the empirical results are strong—particularly in addressing class imbalance and preserving ordinal structure. The proposed design choices, such as combining instance-level and prototype-based contrastive learning with a regression objective and a weighted SCOL variant, are well-motivated and effective. However, the current ambiguities should be clarified in the rebuttal, and the overall writing can be improved for efficiency and clarity.
Had the writing been more concise, the authors could have used the space to present additional qualitative insights—such as t-SNE or similar visualizations of the embedding space—to better illustrate the model’s ability to structure ordinal relationships. With these revisions, the paper would be significantly strengthened and I believe it has the potential to be accepted.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
I found the response to the concerns of other reviewers and mine satisfactory. I don’t believe that the contributed equations are too similar to ProtoNCE loss in the paper PCL (cited by R5). The nature of the functionalities are similar, yes, but the context is different, and the behavior of ProtoNCE is suited for its own context. I believe the novelty of this work is ok in the context of Ordinal regression task especially with long tail cases.
Author Feedback
Thank you to the reviewers and ACs for the feedback. As suggested, we will use vector figures, simplify visuals, revise the text for clarity, and correct typos in the camera-ready version.
R1, R3, R5 - Incremental novelty: We respectfully disagree with the concern that the novelty of this work is incremental. The prior work cited by R5 addresses unsupervised contrastive learning (CL) and, to our knowledge, prototype-based CL has not been applied to supervised ordinal regression. Our proposed Prototype-based Contrastive Ordinal Loss (PCOL) introduces a novel distance-aware ordinal penalty that enforces label ordering among the prototypes to better represent minority classes in long-tailed datasets. Additionally, SCOL_w is not a trivial weighted extension: SCOL fails to handle inter-class variations and intra-class compactness for minority classes, so SCOL_w introduces dynamic, per-instance weighting based on batch-level class frequencies. Importantly, the integration of PCOL, SCOL_w, and RMSE loss forms a unified, ordinal-aware contrastive framework that consistently improves medical grading performance across benchmarks, highlighting its novelty and relevance.
R1 - Framework organization and Eq. 1: We will clarify the framework flow and redefine all variables in Eq. 1 with consistent notation in the revised version.
R2 - Methodological ambiguities: The class-specific weights (w_i) are computed for each batch using inverse class frequency, not CNN outputs. We will correct this and add the formula in the revised version, as suggested.
R2 - Implementation details: The feature map ψ is passed through global average pooling (the grey layer) to obtain feature embeddings, then through two separate MLPs, one for PCOL and one for SCOL_w. Each MLP consists of two dense layers with 1280 and 128 neurons. We use class-stratified batch sampling to ensure the stability of class prototypes even with small batch sizes.
R2 - Results and statistical rigor: The p-value is less than 0.001 in all our experiments. We will report mean ± std and p-values in the revised tables.
R2 - Ablation study on the test set: We follow the conventional 10-fold and 5-fold cross-validation protocols used in state-of-the-art works [2,8,17] for the DR and BUSI datasets, and report ablation results averaged across the test sets of each fold, ensuring no data leakage occurs.
R3 - Clinical relevance: It is worth noting that both DR and BUSI are derived from real-world clinical imaging studies with diverse acquisition protocols and patient populations. Even 1–2% MAE gains are clinically meaningful in ordinal grading, as small differences can shift patients across diagnostic thresholds and affect clinical decisions.
R3 - Baseline reproducibility concern: We followed the settings reported in SOTA methods and used a consistent backbone for fair comparison. While slightly trailing Ord2Seq in accuracy, our model significantly improves minority-class performance, which is crucial in ordinal medical grading tasks.
R3 - Insufficient technical insight: We evaluated the effectiveness of the distance-aware ordinal penalty in PCOL through t-SNE visualizations, which showed improved class separation and ordinal alignment over standard contrastive methods. We will include them in the revised version to better illustrate the benefit of our approach.
R5 - Prototype motivation: As acknowledged by R1, R2, and R3, our paper clearly highlights the challenges of class imbalance and subtle inter-class variation in medical grading. In long-tailed ordinal settings, instance-based contrastive learning struggles with sparse tail-class representations, as discussed in Section 3.1. To address this, PCOL introduces class prototypes as stable anchors that enhance intra-class compactness while preserving ordinal consistency, which is critical in skewed and imbalanced datasets. In contrast, existing approaches like AdaCon [3] rely on ECDF-based margins instead of using ordinal labels directly, making them less effective for long-tailed imbalanced datasets.
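The architectural clarification in this rebuttal lends itself to a brief sketch. The module below follows the description above (global average pooling, then two separate two-layer MLPs with 1280 and 128 neurons, one feeding PCOL and one feeding SCOL_w); the backbone channel width and the ReLU activation are assumptions not stated in the rebuttal.

```python
import torch
import torch.nn as nn

class DualProjectionHead(nn.Module):
    """Sketch of the head described in the rebuttal: the backbone feature map
    psi is globally average-pooled (the grey layer), then fed to two separate
    two-layer MLPs (the blue MLP blocks), one per contrastive loss."""

    def __init__(self, in_channels=1280):  # channel width is an assumption
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        def make_mlp():
            return nn.Sequential(
                nn.Flatten(),
                nn.Linear(in_channels, 1280), nn.ReLU(),
                nn.Linear(1280, 128))
        self.proj_pcol = make_mlp()
        self.proj_scolw = make_mlp()

    def forward(self, psi):        # psi: (B, C, H, W) backbone feature map
        pooled = self.gap(psi)     # -> (B, C, 1, 1)
        return self.proj_pcol(pooled), self.proj_scolw(pooled)
```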
Meta-Review
Meta-review #1
- Your recommendation
Invite for Rebuttal
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Reject
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
While the paper demonstrates strong empirical performance, it has several critical shortcomings with respect to the motivation of the work, limited novelty (an incremental extension of prior work), and presentation issues.