Abstract
The precise subtype classification of myeloproliferative neoplasms (MPNs) based on multimodal information, which assists clinicians in diagnosis and long-term treatment planning, is of great clinical significance. However, it remains a challenging task due to the lack of diagnostic representativeness in local patches and the absence of diagnosis-relevant features from any single modality. In this paper, we propose a Dynamic Screening and Clinical-Enhanced Network (DSCENet) for the subtype classification of MPNs based on the multimodal fusion of whole slide images (WSIs) and clinical information. (1) A dynamic screening module is proposed to flexibly adapt the feature learning of local patches, reducing the interference of irrelevant features and enhancing their diagnostic representativeness. (2) A clinical-enhanced fusion module is proposed to integrate clinical indicators and explore complementary features across modalities, providing comprehensive diagnostic information. Our approach has been validated on real clinical data, achieving an increase of 7.91% AUC and 16.89% accuracy over the previous state-of-the-art (SOTA) methods. The code is available at https://github.com/yuanzhang7/DSCENet.
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1741_paper.pdf
SharedIt Link: pending
SpringerLink (DOI): pending
Supplementary Material: N/A
Link to the Code Repository
https://github.com/yuanzhang7/DSCENet
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Zha_DSCENet_MICCAI2024,
author = { Zhang, Yuan and Qi, Yaolei and Qi, Xiaoming and Wei, Yongyue and Yang, Guanyu},
title = { { DSCENet: Dynamic Screening and Clinical-Enhanced Multimodal Fusion for MPNs Subtype Classification } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15004},
month = {October},
pages = {pending}
}
Reviews
Review #1
- Please describe the contribution of the paper
1. A new dynamic screening module is introduced, which adapts the feature extraction process for local patches to decrease the influence of irrelevant features while improving their diagnostic accuracy.
2. A clinical-enhanced fusion module has been developed. This module integrates clinical data to uncover features that are complementary across different modalities, enhancing the overall diagnostic information.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
1. An innovative DS module is proposed that differs from traditional MIL methods. It uses dynamic random encoding rather than relative or absolute position encoding for patches. This encoding method is capable of adapting to the varying numbers of patches in WSIs.
2. Multimodal fusion is used to harness various clinical information in addition to WSIs. This approach has led to improved performance in classification tasks.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1. The private dataset is randomly divided into training, validation, and test sets, so reporting metrics on the test set only once carries a certain degree of randomness.
2. It is unclear whether the clinical data has strong associations with the classification labels, which may lead to significantly higher results than using only WSIs.
3. It is unclear whether all patients in the real world will have access to genetic data, how easily it can be obtained, and what the costs are. If it is only available in research settings, this might hinder the practical application of the method.
4. Agent attention is designed to reduce the computational complexity of standard self-attention without losing too much performance. The claim in the article that using agent tokens can enhance representation seems a bit far-fetched.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
1. Report the mean and standard deviation of the metrics using k-fold cross-validation.
2. Evaluate whether there are methods that can achieve comparable performance using only clinical data.
3. Assess the application value of the method, as the genetic data mentioned may not be available for all patients.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Some questions about experimental methods and data processing.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
The authors explained the issue with the dataset, and I believe the comparison is fair.
Review #2
- Please describe the contribution of the paper
Pathologists rely on whole slide images (WSIs) to diagnose myeloproliferative neoplasms (MPNs). However, existing deep learning solutions face challenges in accurately diagnosing MPNs because patch-level features lack discriminative information, necessitating slide-level analysis. While integrating clinical data is a common approach, it’s typically treated as a post-processing step rather than being integrated into the training process.
In response to these challenges, the authors propose two novel methodological components:
- Dynamic screening (DS) method, which prioritizes certain patches over others to enhance discriminative features.
- Clinical-enhanced fusion (CF) technique, which not only fuses image and clinical inputs (such as age and red cell count) but also balances the features to mitigate disparities between representations, while giving due importance to clinically relevant data.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Engaging Introduction: The introduction is clear, providing a strong motivation and relevant references.
Performance Boost: The method significantly improves performance over similar models.
Fair Model Comparison: The comparison with other public models is fair.
Thorough Evaluation: The qualitative evaluation, including ablation studies, offers detailed insights, emphasizing the importance of both dynamic screening (DS) and clinical-enhanced fusion (CF) for achieving high performance.
Concise Experimental Setup: The experimental settings are described well.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Unclear Method Section: The method section lacks important details crucial for reproducing the model, hindering understanding and replicability.
Lack of Understanding of DS and CF Models: While the motivation behind DS and CF models is clear, their operational mechanisms and reasons for success remain incomprehensible, likely due to the unclear method section.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
Currently, the method isn’t clear enough for replication, and the code isn’t accessible. However, the authors assure that they will release the code. It’s hoped that the method section will be clearer in the rebuttal version.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
Method section:
- Section 2.1 doesn’t say what the ResNet was pre-trained on. I guess it is the same model as in reference [17]? Please mention this explicitly in the paper.
- “Instead of the absolute position encoding”: position encoding was not explained previously; why is it needed and what is it?
- “we dynamically random encode the sequences of patches as the random grid”: sorry, I don’t understand this sentence; please elaborate.
- “…used to realize dynamic enhancement encoding”: what is that? What does it enhance, and how? Is it some kind of attention mechanism?
- Global averaging G can be described in a formula using R^(N×L), introduced in the same section.
- Exactly which clinical data are given to the model? I see that they are mentioned throughout the paper, but having a table would make it easier to read.
- What are “clinical indicators”? The term was not introduced.
- “Clinical-enhanced query C ∈ R^(M×L) from the concatenation of clinical feature and image feature followed by fully connected layers and ReLU activation and scale pooling.” This sentence is missing its verb, so it does not have a clear meaning.
- The whole “Clinical query block” section is not clear and needs to be rewritten.
- Remove “)” at the end of Eq. 4.
Results section:
- “Quantitative evaluation” section: what’s the task? Is it binary classification? What are you discriminating between? All of this is missing. Is this subtype classification? In that case we have 4 classes, if I understand correctly, and thus you should report avg±std for AUC and F1 in the macro-averaged case, or provide the individual values per category.
- Does “w/o DS & w/o CF” correspond to Eq. 2 with x_i only and without epsilon? I’m just curious because I would like to know how much the residual in Eq. 2 contributes relative to the patch feature (the second component of the equation). My suspicion is that the residual does most of the work, but the ablation says that the patch feature is actually crucial. Did I understand correctly?
Figures:
- Fig. 1 is uninformative: I don’t see the visual similarity in (a), and (b) only shows the different modalities but doesn’t explain what the issue is.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I recommend accepting the paper based on its clear motivation, well-justified model, and exhaustive experiments. However, it’s essential to note that the method section needs rewriting to ensure reproducibility. Additionally, it would be beneficial for the authors to publish the code to facilitate replication and further research.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Accept — should be accepted, independent of rebuttal (5)
- [Post rebuttal] Please justify your decision
The authors cleared all my doubts about the method and experiments.
Review #3
- Please describe the contribution of the paper
- A dynamic screening module that improves feature selection from local patches.
- A clinical-enhanced fusion module that integrates clinical data for better diagnostic accuracy.
- Demonstrated improvements in classification performance, achieving higher accuracy and AUC than existing methods.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
DSCENet is pioneering in its approach by integrating multimodal data—combining whole slide images (WSIs) and clinical information—for MPN subtype classification. This fusion is significant as it leverages the strengths of both modalities to improve diagnostic accuracy.
The use of clinical data as an additional input to guide the feature learning from WSIs is very innovative and challenging to successfully incorporate.
The paper presents a robust evaluation with significant improvements in classification metrics, such as a 16.89% increase in accuracy and 7.91% in AUC over previous methods. This not only demonstrates the efficacy of the approach but also its potential clinical applicability and benefit.
Validation on a real clinical dataset underscores the practical feasibility and relevance of the proposed method, suggesting it could be integrated into clinical workflows
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
While the paper presents improvements over state-of-the-art methods, it primarily compares against other multimodal or image-based methods. Including a broader range of benchmarking, such as methods that heavily utilize other types of clinical data or genetic information, would provide a more comprehensive understanding of its performance relative to the entire field.
Additional discussion on how the model might perform with other types of cancers or diseases, or across more varied datasets, would strengthen the case for its generalizability and broader application.
The paper could be strengthened by including methods or metrics that assess and improve the model’s explainability.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
Adequate
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
Please refer to the strengths and weaknesses sections above.
- The paper would benefit from a broader range of benchmark comparisons.
- For future research, consider applying the model to other datasets or diseases to test its generalizability. Discussing or demonstrating how the model adapts to different types of medical imaging or clinical data would be valuable.
- Given the clinical application of the model, enhancing the explainability of the network’s outputs could foster greater trust and acceptance among medical practitioners. Future research could explore incorporating explainable AI techniques that make the decision-making process transparent.
- Discuss the potential for this model to be used in longitudinal studies of MPN patients, tracking progression and response to treatment over time, which could be an interesting direction for future research.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Innovative multimodal approach, robust validation, but lacks broader benchmarking and generalizability.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #4
- Please describe the contribution of the paper
The paper explores the use of additional contextual data for whole slide images and demonstrates its effectiveness, outperforming several state-of-the-art methods.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
+ novel approach
+ thorough evaluation
+ state-of-the-art comparison
+ clinical relevance demonstrated
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Paper could be more mathematical. Yet, the current write-up is convincing.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Do you have any additional comments regarding the paper’s reproducibility?
The link will be provided in the published version.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
The paper does a very good job of proposing and executing a novel method. Comparisons with the state of the art are convincing. The write-up is clean, and this is probably the best paper in my stack. I encourage the meta-reviewers to actually read this paper before they agree to superficial and unfounded criticism.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Strong Accept — must be accepted due to excellence (6)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I really like all parts of the paper from the method, the visuals that make it easy to follow all the way to a convincing evaluation and would hate to see such a good paper rejected at the conference.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
We thank all reviewers for their highly positive appreciation. 1. Excellent novelty (R1: “novel methodological components”, R4: “innovative”, R5: “novel approach”, R6: “very innovative”). 2. Performance boost (R1: “boost performance”, R4: “improved performance”, R5: “SOTA comparison”, R6: “significant improvements”). 3. Clinical significance (R5: “clinical relevance demonstrated”, R6: “clinical applicability and benefit”).
Furthermore, we deeply appreciate R5 for the strong acceptance and R1, R6 for their acceptance.
Q1: About random partition (R4). Our data partition is broadly recognized, validating the superior performance of our method.
- All experiments are conducted under the same data partitioning criteria, and our method obtains the best result.
- We provide thorough evaluations, including qualitative, quantitative, and ablation studies, which are widely recognized by R1, R5, and R6.
- We use a similar partitioning to [1-3]. [1] Wu C, ICCV, 2023. [2] Bontempo G, MICCAI, 2023. [3] Graham MS, MICCAI, 2023.
Q2: Availability of genetic data (R4). Genetic data is clinically accessible.
- Genetic mutations are crucial biomarkers in MPN classification, and genetic tests are typically recommended in clinical practice [4].
- The genes we use (JAK2, MPL, CALR) are common MPN driver mutations, usually tested on peripheral-blood DNA, making them widely available [5].
- Our method pioneers multimodal MPN diagnosis by fusing clinical data and WSIs, aligning with clinical practice and enhancing diagnostic reliability [6].
[4] Thiele J, Am J Hematol, 2023. [5] Cross NC, BJHaem, 2021. [6] Rumi E, Blood, 2017.
Q3: Performance using only clinical data (R4 & R6). Our comparisons already include a model using only clinical data, which does not attain optimal performance.
- In Table 1, the fifth row shows a 13.82% lower AUC with clinical data only, and the fourth row shows a 10.91% lower AUC with WSIs only, compared to our multimodal method.
- This confirms that our multimodal method outperforms single-modality methods, effectively integrating clinical data and WSIs to obtain optimal performance.
Q4: Claim of the CF module (R4). Our CF module advances agent attention by designing a clinical-enhanced query as additional guidance, aiding the exploration of diagnosis-relevant representations across modalities and thus enhancing the model’s representation.
Q5: About the task (R1). Our work focuses on the subtype classification task of MPNs, namely PV, ET, PrePMF, and PMF. The confusion matrices in Fig. 4 provide the individual values per class.
Q6: Potential misunderstanding of the residual (R1). In Eq. 2, the first part, x_i, denotes the residual, while the second part denotes the patch features designed by us. Removing the residual has minimal impact on performance.
Q7: Other details (R1).
- We will release the code upon acceptance to reveal more details.
- We used the pre-trained ResNet-50 with weights from [17].
- “w/o DS & w/o CF” means removing both the DS and CF modules.
(1) Details of the DS module:
- The absolute position encoding refers to encoding the spatial position in a Transformer, which is set to a fixed-length sequence via zero padding.
- Our dynamic random encoding maps the sequence of patches to a grid composed of random numbers, dynamically adapting to changes in patch quantity.
- The random grid is fed into FC layers, adjusting its weights to prioritize important features, thereby enhancing the subsequent selection of patch features.
(2) Details of the clinical query block:
- Clinical indicators refer to demographic characteristics, blood test parameters, and genetic mutation status.
- The enhanced clinical query C is derived from the concatenation of clinical and image features, followed by fully connected layers with ReLU activation and scaled pooling.
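The two mechanisms described above can be sketched in a few lines of numpy. This is an illustrative guess at the rebuttal's descriptions, not the authors' code: the additive combination of patch features with the random grid, the single-FC-layer forms, the reshape standing in for "scaled pooling", and all shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def dynamic_random_encoding(patch_feats, W_enc):
    """Sketch of 'dynamic random encoding': instead of a fixed-length
    absolute position encoding, each patch is paired with a randomly drawn
    code, so the scheme adapts to any number of patches N. The additive
    combination and single FC re-weighting layer are assumptions."""
    n, l = patch_feats.shape
    random_grid = rng.standard_normal((n, l))   # one random code per patch
    encoded = patch_feats + random_grid         # attach codes to features
    return relu(encoded @ W_enc)                # FC layer re-weights features

def clinical_enhanced_query(img_feat, clin_feat, W_q, m):
    """Sketch of the clinical query block: the query C (M x L) is built from
    the concatenation of clinical and image features, followed by a fully
    connected layer with ReLU; the reshape stands in for the paper's
    'scale pooling', whose exact form is not specified."""
    fused = np.concatenate([img_feat, clin_feat])   # joint multimodal vector
    hidden = relu(W_q @ fused)                      # FC + ReLU
    return hidden.reshape(m, -1)                    # M query tokens of length L

# Toy shapes: N=10 patches of dim L=16, 8 clinical indicators, M=4 queries.
N, L, D_clin, M = 10, 16, 8, 4
feats = dynamic_random_encoding(rng.standard_normal((N, L)),
                                rng.standard_normal((L, L)) * 0.1)
C = clinical_enhanced_query(feats.mean(axis=0),     # global-averaged WSI feature
                            rng.standard_normal(D_clin),
                            rng.standard_normal((M * L, L + D_clin)) * 0.1, M)
print(feats.shape, C.shape)   # (10, 16) (4, 16)
```

The key property the sketch preserves is that nothing depends on a fixed N: the random grid is drawn per slide at the observed patch count, which is how the encoding "dynamically adapts" where zero-padded absolute position encoding cannot.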
Q8: Future work (R6). Thanks to R6 for the future-work insights. Due to space constraints, we expect to explore model adaptation to other diseases, explainable AI, and long-term data collection for prognosis in future work.
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
NA
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
NA