Abstract
Deep learning models excel when evaluated on test data that share similar attributes and/or distribution with the training data. However, their ability to generalize may suffer when there are discrepancies between the training and testing distributions, i.e., domain shift. In this work, we utilize meta-learning to introduce MetaStain, a stain-generalizable representation learning framework that performs cell segmentation and classification in histopathology images. Owing to the designed episodic meta-learning paradigm, MetaStain can adapt to unseen stains and/or novel classes through finetuning, even with limited annotated samples. We design a stain-aware triplet loss that clusters stain-agnostic class-specific features and separates intra-stain features extracted from different classes. We also employ a consistency triplet loss to preserve the spatial correspondence between tissues under different stains. During test-time adaptation, a refined class weight generator module is optionally introduced if the unseen testing data also involves novel classes. MetaStain significantly outperforms state-of-the-art segmentation and classification methods on the multi-stain MIST dataset under various experimental settings.
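The stain-aware triplet loss mentioned in the abstract is not specified on this page. The sketch below shows, under stated assumptions, how such a triplet objective could be organized: a class prototype from one stain is pulled toward the same class under another stain and pushed away from a different class under the same stain. The function name, prototype construction, and margin are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation) of a stain-aware triplet
# objective in PyTorch. Feature prototypes, margin, and all names below are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def stain_aware_triplet_loss(anchor, positive, negative, margin=1.0):
    """anchor:   class-c prototype extracted from stain A
    positive: class-c prototype from a different stain B (same class)
    negative: class-c' prototype from the same stain A (different class)

    Pulling the anchor toward the positive clusters stain-agnostic,
    class-specific features; pushing it away from the negative separates
    intra-stain features belonging to different classes.
    """
    d_pos = F.pairwise_distance(anchor.unsqueeze(0), positive.unsqueeze(0))
    d_neg = F.pairwise_distance(anchor.unsqueeze(0), negative.unsqueeze(0))
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random prototype vectors of dimension 256.
if __name__ == "__main__":
    a, p, n = torch.randn(256), torch.randn(256), torch.randn(256)
    print(stain_aware_triplet_loss(a, p, n).item())
```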
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2372_paper.pdf
SharedIt Link: https://rdcu.be/dY6iI
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72083-3_29
Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2372_supp.pdf
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Kon_MetaStain_MICCAI2024,
author = { Konwer, Aishik and Prasanna, Prateek},
title = { { MetaStain: Stain-generalizable Meta-learning for Cell Segmentation and Classification with Limited Exemplars } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15004},
month = {October},
pages = {307--317}
}
Reviews
Review #1
- Please describe the contribution of the paper
The authors focus on a domain-generalization method based on meta-learning. The target application is cell classification and segmentation in pathology images. To address the stain-generalization problem, the authors propose a stain-class triplet loss and a consistency-preserving loss. The experiments demonstrate the effectiveness of the proposed method.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The research topic is important, especially in pathological image processing. To tackle the stain-generalization problem, the use of a stain-class-pair triplet loss and a consistency-preserving loss is persuasive. The experiments demonstrate the effectiveness of the proposed method.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The authors compare the proposed method with state-of-the-art methods in the experiments, but the SOTA domain-generalization methods considered include only meta-learning-based approaches. The authors should also compare against other types of domain-generalization methods. The figures are difficult to understand, e.g., Fig. 1. What do the arrows of the triplets mean in Fig. 1(a) and (b)? What do the blue, red, green, and yellow squares mean in Fig. 1(d)? The range of applications is also unclear. Is it possible to apply the proposed method to the H&E domain-shift problem across hospitals?
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
What algorithm do the authors adopt for image-to-image translation? DeepLIIF? This should be made clear. What are the results when the support samples are 0% in Experiment 1? It would be useful to compare the proposed method with domain-generalization methods, because domain-generalization algorithms do not access the test domain.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
As mentioned above.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
My concerns were addressed, so I set the rating to Weak Accept.
Review #2
- Please describe the contribution of the paper
The paper presents MetaStain, a stain-generalizable representation learning framework for cell segmentation and classification in histopathology images. The framework utilizes meta-learning to adapt to unseen stains and novel classes, even with limited annotated samples. It introduces a stain-aware triplet loss to cluster stain-agnostic class-specific features and separate intra-stain features from different classes. It also employs a consistency triplet loss to preserve the spatial correspondence between tissues under different stains. During test-time adaptation, a refined class weight generator module is optionally introduced for handling novel classes. The paper demonstrates that MetaStain outperforms state-of-the-art segmentation and classification methods on the MIST dataset.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is well written, well-organized for reading.
- The paper addresses the important problem of domain shift in medical imaging by proposing a stain-generalizable representation learning framework.
- The use of meta-learning allows the model to adapt to unseen stains and novel classes, which is particularly useful in scenarios where rich expert-annotated training data is lacking.
- The proposed stain-aware triplet loss and consistency triplet loss effectively capture the stain-specific and stain-agnostic features, respectively, improving the generalization capabilities of the model.
- The experimental results show that MetaStain significantly outperforms state-of-the-art methods on the multi-stain MIST dataset under various experimental settings.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The paper lacks detailed information about the dataset used, such as the resolution of the images, and the distribution of stains and classes. This information is important for understanding the generalizability of the proposed framework.
- The paper does not provide a thorough analysis of the limitations and potential challenges of the proposed framework. It would be beneficial to discuss the scenarios in which the framework may not perform well and potential strategies to mitigate those limitations.
- The results presented in the tables each seem to come from a single run. It is important to use cross-validation, or at least multiple runs with different seeds, and to report the average and standard deviation to show that the presented results are not due to chance. A test of statistical significance is also necessary to establish the superiority of the proposed approach.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- Provide more details about the dataset used, including the number of samples, resolution of the images, and distribution of stains and classes. This information is important for understanding the generalizability of the proposed framework.
- Discuss the limitations and potential challenges of the proposed framework. Consider scenarios in which the framework may not perform well and potential strategies to mitigate those limitations.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper addresses an important problem in medical imaging and proposes a novel framework for stain-generalizable representation learning. The proposed framework shows promising results and outperforms state-of-the-art methods on the multi-stain MIST dataset. However, it could be further improved by addressing the limitations listed above.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
My questions were well answered.
Review #3
- Please describe the contribution of the paper
The paper proposes a meta-learning method for a stain-generalizable representation learning framework for IHC cell segmentation. It uses stain-aware hard triplet sampling that clusters stain-agnostic class-specific features (e.g., bringing IHC+ cells from differently stained images close to each other). The trained model is therefore easier to adapt at test time to an unseen stain, allowing more efficient use of the limited annotations in the testing domain.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is well motivated: a generalizable cell segmentation method for multiple IHC stains is indeed needed in practice.
- Solid and interesting methodology. In addition to the triplet sampling and learning, there are several other highlights: (i) the design of the incremental meta-learning, i.e., using two losses, one on the meta-train set and the other on the meta-test set, is interesting and well justified; (ii) the GAT approach for incremental new-class learning; (iii) the consistency-preserving loss.
- Comprehensive experimental evaluation. The evaluation datasets are of good size, and extensive comparisons have been performed. The design of the second experiment (unseen stain and new class) is very practical and exceeded my expectations.
- The paper is overall well written, with clear motivation, solid methodology, and a comprehensive experimental evaluation with extensive comparison to competitive methods.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
There are many components in the method, and an ablation study would be helpful to show the contribution of each. I understand that, due to the page limit of MICCAI papers, there might be no space for this, so it may not even be fair to call it a weakness; nevertheless, it should be added in the journal version. It is also not clear whether the code will be made available.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
Even though the description of the method in the manuscript is reasonably clear, publishing the code is essential for reproducibility given the complexity of the method.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
See the weaknesses above.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Well-written paper, innovative methodology (although it is somewhat complex), extensive evaluation.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
We thank the reviewers for the valuable and predominantly positive (A, WA, WR) feedback. They appreciate the task's significance, solid methodology, and rigorous experimentation.
Q1: Dataset details (image resolution, stain distribution, classes) (R1)
Image resolution: 0.4661 µm/pixel (20x). Patch size: 1024×1024, non-overlapping. Stain distribution: 4642 HER2, 4153 ER, 4361 Ki67, 4139 PR, and 4000 H&E patches. Patches from 4 biomarkers were used in training (meta-train + meta-test) and the 5th at inference. HER2 patches were extracted from 64 WSIs, patches of the other stains from 56 WSIs. IHC-stained WSIs contain 2 classes (IHC+, IHC-); H&E WSIs contain 5 classes: Neoplastic, Inflammatory, Connective, Necrosis, and Non-neoplastic epithelial.
Q2: Failure scenarios/limitations, mitigation (R1)
1) To use the consistency-preserving (CP) loss, ~20% of the dataset needs co-registered IHC-IHC patches (e.g., ER-PR, ER-Ki67). However, the I2IT models can only synthesize IHC from H&E. By association, we require ~20% IHC-H&E pairs in the original dataset to achieve IHC → H&E → IHC. 2) Instead of expert annotations, we use DeepLIIF & HoVer-Net to obtain ground-truth cell segmentations & classifications. 3) The dataset must exclude low-quality H&E samples to optimize I2IT performance and maximize the potential of the CP loss; proper QC techniques are needed to maintain high-quality image input.
Q3: Multiple runs (avg, std). Statistical significance. (R1)
We re-evaluated the tasks with 10 random runs for each experimental setting (5-20%) involving Ki67 and ER samples at inference. Due to rebuttal restrictions on presenting new results, the average Dice and accuracy with standard deviations will be reported in the final supplementary. Our results are statistically significantly different from SOTA (t-test).
Q4: Range of applications unclear (R3)
1) Reduces pathologists' annotation effort and time for cell segmentation & classification on novel stains. 2) Segmented cells on samples of novel stains may aid better downstream analysis (predictive/prognostic models). 3) Attains generalizable performance for cross-hospital samples (different staining techniques).
Q5: What do the arrows and squares mean in the figure? (R3)
Fig. 1a arrows: maximize the distance between IHC+ and IHC- feature prototypes. Arrows linking Fig. 1(a, b): minimize the distance between IHC- prototypes from ER- and Ki67-stained samples. In Fig. 1d, colored squares represent the spatial arrangement of 4 grids within a patch; two differently stained samples with similar spatial arrangements indicate a positive correspondence.
Q6: Compare with other domain generalization (DG) methods. Results when the support samples are 0% in Experiment 1 -> comparison with no access to the test domain? (R3)
We compared MetaStain with 2 non-meta-learning DG methods, based on feature augmentation [1] and domain-invariant feature learning [2]. Meta-learning offers clear advantages: i) extraction of superior commonalities (meta-knowledge) from diverse training domains, and ii) enhanced optimization by simulating domain shifts episodically. MetaStain outperforms [1, 2] with 0% support samples. Unlike typical DG, we integrated test-time adaptation into MetaStain to enhance feature generalization to unseen stains, a major contribution of our methodology. Even after augmenting [1, 2] with finetuning, they could not surpass MetaStain in the 5-20% support settings. Rebuttal guidelines prevent us from providing these results; they can be shown in the supplementary if the AC/reviewers desire.
Q7: Algorithm for I2IT? (R3)
We apologize for citing DeepLIIF inappropriately, as it was not used for the H&E-to-IHC I2IT task. We adopt the method in MIST [3].
Q8: Ablation study. Code availability. (R4)
An ablation study is already provided in Tab. 2 (right) for IHC+/IHC- segmentation & classification on Ki67-stained samples at inference. Due to space constraints, results for other stains were not included. Code will be made public upon acceptance.
References:
[1] Gu et al., MICCAI 2021.
[2] Hu et al., TMI 2022.
[3] Li et al., MICCAI 2023.
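The grid-based spatial correspondence described in Q5 and the consistency-preserving loss from Q2 are only sketched verbally above. As a non-authoritative illustration, the snippet below shows one way per-grid features from two co-registered, differently stained patches could enter a consistency triplet term; the grid count, pooling, margin, and all function names are hypothetical assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the grid-based consistency idea: co-registered feature
# maps from two stains are split into a 2x2 grid; matching grid positions act as
# positives, and a spatially shifted arrangement acts as negatives.
import torch
import torch.nn.functional as F

def grid_features(fmap, grid=2):
    """Average-pool a (C, H, W) feature map into grid*grid cell vectors."""
    c = fmap.shape[0]
    pooled = F.adaptive_avg_pool2d(fmap.unsqueeze(0), grid)  # (1, C, grid, grid)
    return pooled.view(c, grid * grid).t()                   # (grid*grid, C)

def consistency_triplet(fmap_a, fmap_b, margin=0.5):
    """Encourage same-position grid cells of two stains to be closer than
    spatially mismatched cells (a cyclic shift breaks the arrangement)."""
    ga, gb = grid_features(fmap_a), grid_features(fmap_b)    # (4, C) each
    gb_mismatched = torch.roll(gb, shifts=1, dims=0)         # scrambled arrangement
    d_pos = F.pairwise_distance(ga, gb)                      # same-position cells
    d_neg = F.pairwise_distance(ga, gb_mismatched)           # mismatched cells
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random "feature maps" standing in for two stains of one tissue.
if __name__ == "__main__":
    fa, fb = torch.randn(64, 32, 32), torch.randn(64, 32, 32)
    print(consistency_triplet(fa, fb).item())
```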
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A