Abstract
Weakly supervised nuclei segmentation methods have been proposed to simplify the demanding labeling process by relying primarily on point annotations. These methods generate pseudo labels for training from the given points, but their performance is often limited by the inaccuracy of those pseudo labels. Although there have been attempts to improve performance by leveraging the power of foundation models, e.g., the Segment Anything Model (SAM), these approaches require more precise guidance (e.g., boxes) and lack the ability to distinguish individual nuclei instances. To this end, we propose InstaSAM, a novel weakly supervised nuclei instance segmentation method that uses prediction confidence as a guide while leveraging the powerful representation of SAM. Specifically, we use point prompts to initially generate rough pseudo instance maps and fine-tune the adapter layers in the image encoder. To exclude unreliable instances, we selectively extract segmented cells with high confidence from the pseudo instance segmentation and use them to train the binary segmentation and distance maps. Owing to their shared use of the image encoder, the binary map, distance map, and pseudo instance map benefit from complementary updates. Our experimental results demonstrate that our method significantly outperforms state-of-the-art methods and is robust in few-shot, shifted-point, and cross-domain settings. The code will be made available upon publication.
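The abstract's confidence-based filtering step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: instances in a pseudo instance map whose mean predicted confidence falls below a threshold are dropped before supervising the binary/distance-map heads. The function name `filter_instances` and the threshold `tau` are assumptions for illustration.

```python
import numpy as np

def filter_instances(instance_map: np.ndarray,
                     confidence: np.ndarray,
                     tau: float = 0.8) -> np.ndarray:
    """Keep only instances whose mean confidence exceeds `tau`.

    instance_map: (H, W) int array; 0 = background, k > 0 = instance id.
    confidence:   (H, W) float array of per-pixel prediction confidence.
    Returns a copy of `instance_map` with low-confidence instances zeroed.
    """
    filtered = instance_map.copy()
    for k in np.unique(instance_map):
        if k == 0:
            continue
        mask = instance_map == k
        if confidence[mask].mean() < tau:
            filtered[mask] = 0  # treat the unreliable instance as unlabeled
    return filtered

# Tiny example: two instances, one confident, one not.
inst = np.array([[1, 1, 0],
                 [0, 2, 2]])
conf = np.array([[0.9, 0.95, 0.1],
                 [0.1, 0.3,  0.4]])
print(np.unique(filter_instances(inst, conf)))  # instance 2 is removed
```

The key design point, per the abstract, is that only these surviving high-confidence instances feed the binary segmentation and distance-map training.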
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2694_paper.pdf
SharedIt Link: https://rdcu.be/dY6iz
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72083-3_22
Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2694_supp.pdf
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Nam_InstaSAM_MICCAI2024,
author = { Nam, Siwoo and Namgung, Hyun and Jeong, Jaehoon and Luna, Miguel and Kim, Soopil and Chikontwe, Philip and Park, Sang Hyun},
title = { { InstaSAM: Instance-aware Segment Any Nuclei Model with Point Annotations } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15004},
month = {October},
pages = {232 -- 242}
}
Reviews
Review #1
- Please describe the contribution of the paper
The paper introduces InstaSAM, a novel approach for weakly supervised nuclei instance segmentation that leverages the Segment Anything Model (SAM). InstaSAM enhances the accuracy of segmenting individual nuclei by utilizing a confidence-driven pseudo labeling process that refines pseudo instance maps with point annotations. This method enables improvements in training by focusing only on high-confidence segmented cells, reducing reliance on extensive labeled datasets. The authors evaluate InstaSAM in various settings, including few-shot, shifted point annotations, and cross-domain scenarios.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The method uses a confidence-driven pseudo-labeling process that improves the quality of training data by selectively using high-confidence segments, which addresses common challenges in WSI-based weakly supervised learning.
- InstaSAM has potential implications for the field of digital pathology by reducing the annotation burden and enabling more scalable and adaptable solutions, which could lead to broader adoption and adaptation in clinical settings.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- CellViT [1] is an instance segmentation method that requires cell segment masks for training, while SAC [2] is a binary segmentation method that does not require segmented masks. The proposed InstaSAM integrates the best features of both methods, making the technical novelty incremental. Also, some recent works, such as Guided-Prompting-SAM [3] and PromptNucSeg [4], have also explored similar ideas. The authors could further discuss and compare their model with these concurrent studies.
- The technical descriptions could be more detailed to enhance understanding. The pseudo labeling module is not described clearly, which, in my view, is the most interesting part of InstaSAM.
- The experiments are conducted on only two datasets (MoNuSeg and CPM17), which are insufficient. It would be better to include experiments with more datasets like MoNuSAC, PanNuKe, or Lizard.
- The evaluation metrics used are not sufficiently convincing. The authors should consider reporting PQ (mPQ, bPQ) [5], which is widely adopted by nuclei segmentation methods.
[1] Hörst, Fabian, et al. “CellViT: Vision transformers for precise cell segmentation and classification.” arXiv preprint arXiv:2306.15350 (2023).
[2] Na, Saiyang, et al. “Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for Nuclei Segmentation.” arXiv (2024).
[3] Tyagi, Aayush Kumar, et al. “Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images.” arXiv (2023).
[4] Shui, Zhongyi, et al. “Unleashing the Power of Prompt-driven Nucleus Instance Segmentation.” arXiv preprint arXiv:2311.15939 (2023).
[5] Graham, Simon, et al. “HoVer-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images.” Medical Image Analysis 58 (2019): 101563.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- Point annotations are important elements in this paper. However, there are several issues related to it:
- Information on how to generate a point prompt is lacking. Please describe the method used to obtain a set of K point annotations.
- No reason is given for selecting 4 random negative points. Could you please provide a theoretical or empirical justification for this choice?
- There is no experiment that explains the impact of point annotations.
- Based on the findings in SAC, centroid-based prompt selection outperforms direct probability-based prompt selection. So why did you choose the random selection method?
- The experiments are not sufficient. SOTA methods such as CellViT, SAC, and PromptNucSeg are not compared. It is important to evaluate the performance of the proposed method against these SOTA models for a convincing conclusion.
- The proposed method is not described clearly. Why are there frozen symbols in the fully trainable Nuclei Decoder in Figure 1? Should it be the ‘shared weight’ symbol instead? (i.e., ‘The nuclei decoder shares most parameters for both tasks, except for the input tokens related to each map and the output MLP layers.’) In Figure 1, there are two arrows from S’ to B and D. However, there are no equations describing the relationship between (S’ and B) and (S’ and D). Please provide more information on this part.
- In Equation 2, it should be F_j.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Reject — should be rejected, independent of rebuttal (2)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The contributions and novelty of this paper are not significant. Some sections could be more detailed to enhance clarity such as pseudo labeling, point prompt generation, and the selection of hyperparameters. The experiments are also not comprehensive.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Reject — should be rejected, independent of rebuttal (2)
- [Post rebuttal] Please justify your decision
Thank the authors for the rebuttal. However, I still have concerns about the novelty and performance of the proposed method. The authors claim their method addresses instance segmentation, which is not comparable to the above-mentioned methods, yet the implementation does not significantly differ, as it can easily be converted to binary segmentation tasks. They should compare their method with the SOTA methodologies. Without proper clarifications and comparisons with other methods, it is difficult to recognize the effectiveness of the proposed structure. The current form of the paper can hardly be considered for publication without substantial revisions. Therefore, I still vote for rejection.
Review #2
- Please describe the contribution of the paper
(1) A novel weakly supervised method for precise nuclei instance segmentation using point annotations is proposed; it improves on existing methods by integrating SAM to generate and refine pseudo instance maps from point prompts using prediction confidence.
(2) The method simultaneously addresses the tasks of segmenting individual cells and all cells, and excels in performance and robustness across various scenarios.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
InstaSAM adapts SAM with minimal parameter changes, demonstrating efficient transfer to nuclei segmentation tasks, and can simultaneously segment individual cells and all cells.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
(1) The comparison largely focuses on similar weakly supervised methods. A broader comparison with fully supervised methods such as U-Net or its variants would provide a clearer picture of how much performance is sacrificed for the reduced annotation burden.
(2) The paper could strengthen its claims of robustness in few-shot scenario by more thoroughly investigating how performance declines with fewer annotations, thereby clarifying practical limitations.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
(1) Including a comparison with fully supervised techniques would strengthen the assessment of trade-offs between annotation effort and performance.
(2) Some other typos: Page 2, sentence ‘anuclei decoder’
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
InstaSAM utilizes SAM in a weakly supervised setting for nuclei instance segmentation and excels in performance and robustness across various scenarios.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
The author feedback resolves my concerns to some extent, so I will give the weak accept score finally.
Review #3
- Please describe the contribution of the paper
The authors introduced a weakly supervised nuclei instance segmentation approach called InstaSAM. This method generates pseudo labels using point prompts and fine-tunes SAM, effectively excluding unreliable instances. InstaSAM outperforms existing state-of-the-art methods and exhibits robustness across different settings.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The authors effectively demonstrate the impact of their SAM fine-tuning scheme for cell instance segmentation.
- The proposed method shows significant improvements over existing state-of-the-art methods and proves robust in a variety of experimental environments.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The authors should consider including a discussion of areas for improvement and future research directions. This would provide a more comprehensive understanding of the method’s scope and areas for further improvements.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
It would be beneficial to explore expanding the application of the method to additional datasets, such as those used in gland segmentation. This would help determine the method’s adaptability and effectiveness across different domains.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The well-organized text and clear presentation of experimental results have positively influenced the overall evaluation score of their work.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Accept — should be accepted, independent of rebuttal (5)
- [Post rebuttal] Please justify your decision
I will maintain my decision on this work.
Author Feedback
Summary: R1(R), R3(A), R4(WA). We thank the reviewers for their valuable and encouraging feedback! We are pleased they find our work adaptable (R1), highly improved (R2), and novel (R4). We clarify the main concerns below.

(R1) Novelty: We would like to emphasize that no paper has successfully achieved proper nuclei instance segmentation using SAM with only weak point annotations. All the papers suggested by the reviewer have fundamentally different settings from our method, making direct comparisons difficult. While CellViT replaces the image encoder of HoVer-Net with SAM, it still requires segmentation mask labels. SAC and PromptNucSeg likewise require masks, and only binary segmentation is modeled. In contrast, our framework enables parameter-efficient fine-tuning (PEFT) using a SAM adapter, introduces a pseudo-labeling process for weakly supervised learning, and proposes a point-based instance segmentation method. Aside from supervised learning-based methods, Guided Prompting-SAM uses box annotations for binary segmentation and reports lower performance than All-in-SAM despite having the same settings (Tables 1 & 2). We therefore reiterate the benefit of our proposed method, which requires only weak point annotations and still achieves higher performance than box-based approaches.

(R1) Point annotation scenario setting: We generated annotations by extracting the center points of the masks from the training data, similar to prior art. For the Shift a-b scenario, points were randomly extracted from the region between a and b based on the center point of the mask. As mentioned in the Pseudo Labeling section, to segment the k-th nucleus, we use p_k as the positive point and randomly select 4 points from the remaining points as negatives for the prompt. The method still works if this number is decreased or increased, but with too many negative points the training time increases significantly and the foreground region becomes smaller. Additionally, random negative point selection allows the model to learn from various contexts, resulting in a more robust model. In fact, Guided Prompting-SAM also reported its highest performance when using 4 negative points, so our setting is reasonable and fair.

(R1) Limited experimental data: Due to paper page limits, we focused on experimental results that highlight the benefit of our method across different tasks, i.e., few-shot, cross-domain, and shifted points, rather than adding more datasets. Previous works such as MIDL, MixedAnno, SPN+IEN, and PROnet were also validated on two datasets, including MoNuSeg. While we could not present additional results in the rebuttal due to new guidelines, similar performance gains were observed on other datasets.

(R1) Insufficient evaluation metrics: The proposed method achieves state-of-the-art performance with PQ scores of 68.35 on CPM and 56.45 on MoNuSeg, consistent with the differences observed in the AJI scores. If accepted, we will include this information in the final version of the paper.

(R4) Performance difference according to label: To demonstrate the effectiveness of the pseudo-labeling process, we compared the results in Table 3 using cluster and Voronoi labels (as used in existing methods), pseudo instance maps, and the ground truth. The model trained on our pseudo instance maps showed only a slight decrease in AJI (1.5% on CPM, 2.2% on MoNuSeg) compared to fully supervised learning. This demonstrates that our method significantly reduces the burden of label creation while maintaining the same level of performance.
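The point-annotation protocol described in the rebuttal can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the default annotation is the centroid of each ground-truth mask; in the "Shift a-b" scenario a point is sampled at a random distance in [a, b] from that centroid; and the prompt for the k-th nucleus pairs its point as positive with 4 randomly chosen other points as negatives. All function names here are assumptions.

```python
import numpy as np

def centroid_point(mask):
    """Centroid (row, col) of a binary nucleus mask."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def shifted_point(mask, a, b, rng=None):
    """Sample a point at a random distance in [a, b] from the centroid."""
    rng = np.random.default_rng(rng)
    cy, cx = centroid_point(mask)
    r = rng.uniform(a, b)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    return cy + r * np.sin(theta), cx + r * np.cos(theta)

def build_prompt(points, k, n_neg=4, rng=None):
    """Positive point p_k plus `n_neg` random negatives from the rest."""
    rng = np.random.default_rng(rng)
    others = [p for i, p in enumerate(points) if i != k]
    neg_idx = rng.choice(len(others), size=min(n_neg, len(others)),
                         replace=False)
    return points[k], [others[i] for i in neg_idx]
```

As the rebuttal notes, the negative count trades off robustness against training time and shrinking foreground regions; 4 matches the setting that Guided Prompting-SAM also found best.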
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A