Abstract
Cell counting in microscopy images is vital in medicine and biology but extremely tedious and time-consuming to perform manually. While automated methods have advanced in recent years, state-of-the-art approaches tend toward increasingly complex model designs. In this paper, we propose a conceptually simple yet effective decoupled learning scheme for automated cell counting, consisting of separate counter and localizer networks. In contrast to jointly learning counting and density map estimation, we show that decoupling these objectives surprisingly improves results. The counter operates on intermediate feature maps rather than pixel space to leverage global context and produce count estimates, while also generating coarse density maps. The localizer then reconstructs high-resolution density maps that precisely localize individual cells, conditioned on the original images and the coarse density maps from the counter. In addition, to boost counting accuracy, we introduce a global message passing module to integrate cross-region patterns. Extensive experiments on four datasets demonstrate that our approach, despite its simplicity, challenges common practice and achieves state-of-the-art performance by significant margins. Our key insight is that decoupled learning alleviates the need to learn counting on high-resolution density maps directly, allowing the model to focus on the global features critical for accurate estimates. Code is available at https://github.com/MedAITech/DCL.
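As a reading aid, below is a minimal, hypothetical sketch of the decoupled counter/localizer scheme described in the abstract, written in PyTorch-style Python. The module names, layer widths, and the stand-in encoder/decoder are illustrative assumptions and do not reflect the released code in the repository linked above.

import torch
import torch.nn as nn


class Counter(nn.Module):
    """Predicts a scalar count and a coarse density map from intermediate features."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.features = nn.Sequential(  # stand-in for a VGG-style encoder
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.coarse_head = nn.Conv2d(feat_ch, 1, 1)  # coarse density map head
        self.count_head = nn.Linear(feat_ch, 1)      # scalar count head

    def forward(self, x):
        f = self.features(x)                          # (B, C, H/4, W/4)
        coarse = torch.relu(self.coarse_head(f))      # coarse density map
        count = self.count_head(f.mean(dim=(2, 3)))   # count regressed from globally pooled features
        return count.squeeze(1), coarse


class Localizer(nn.Module):
    """Reconstructs a full-resolution density map conditioned on the image and coarse map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.refine = nn.Sequential(  # stand-in for a U-Net-style decoder
            nn.Conv2d(in_ch + 1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image, coarse):
        coarse_up = nn.functional.interpolate(
            coarse, size=image.shape[-2:], mode="bilinear", align_corners=False)
        return torch.relu(self.refine(torch.cat([image, coarse_up], dim=1)))


# Usage: the counter supplies the count estimate and a coarse map; the localizer
# refines the coarse map into a high-resolution density map for localization.
img = torch.randn(2, 3, 256, 256)
count, coarse = Counter()(img)
fine_density = Localizer()(img, coarse)
print(count.shape, coarse.shape, fine_density.shape)

The point mirrored in this sketch is that the count is regressed from globally pooled intermediate features rather than from a high-resolution density map, while precise localization is handled separately by the localizer.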
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2995_paper.pdf
SharedIt Link: https://rdcu.be/dY6iS
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72083-3_39
Supplementary Material: N/A
Link to the Code Repository
https://github.com/MedAITech/DCL
Link to the Dataset(s)
https://github.com/ieee8023/countception
https://github.com/markmarsden/DublinCellDataset
https://www.robots.ox.ac.uk/~vgg/research/counting/index_org.html
BibTex
@InProceedings{Zhe_Rethinking_MICCAI2024,
author = { Zheng, Zixuan and Shi, Yilei and Li, Chunlei and Hu, Jingliang and Zhu, Xiao Xiang and Mou, Lichao},
title = { { Rethinking Cell Counting Methods: Decoupling Counting and Localization } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15004},
month = {October},
pages = {418--426}
}
Reviews
Review #1
- Please describe the contribution of the paper
The paper proposes a model to count cells by simply decoupling the counting and localization tasks. The features learned for counting are utilized to aid localization. The authors validate their method on four cell counting datasets.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Simplicity: The paper is easy to read and follow and the idea is straightforward.
- The experiment results demonstrate a significant improvement on the counting task.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The motivation is not clear. What’s the main issue with current cell counting methods? The authors note that current methods often have complex designs but do not discuss how this affects counting performance.
- Most of the baselines are for crowd counting instead of cell counting. Is this an apples-to-apples comparison?
- Lack of comparison with cell segmentation models. How does the proposed model perform compared to simply counting the objects found by a segmentation model?
- The global message passing module looks more like an average pooling layer. How is this module different from such a layer? The authors should also provide the definition of V(p).
- The SAM visualization lacks comparisons: how does SAM perform without the prompt? The results do not look very good on MBM and VGG, especially on the connected cells.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
- It seems incorrect to say the counter module doesn’t see any location labels during training. The location labels are seen during back-propagation.
- The dataset size is unclear. Authors should provide a more detailed description of how many images the datasets have and how many cells they have.
- I’m not an expert in cell counting, but from my experience, the main difficulty of counting cells is to separate the connected cells. It would be helpful if the authors could conduct a thorough analysis of what leads to the failure of previous methods and how the proposed method improves those cases. (Related to Weakness.1)
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- It seems incorrect to say the counter module doesn’t see any location labels during training. The location labels are seen during back-propagation.
- The dataset size is unclear. Authors should provide a more detailed description of how many images the datasets have and how many cells they have.
- I’m not an expert in cell counting, but from my experience, the main difficulty of counting cells is to separate the connected cells. It would be helpful if the authors could conduct a thorough analysis of what leads to the failure of previous methods and how the proposed method improves those cases. (Related to Weakness.1)
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- Lack of clear motivation.
- The experiment results show great improvement over the baselines. However, the comparison design is not convincing. (Please refer to Weakness.2&3)
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
The author addressed my concerns regarding the motivation, global message passing clarification, and experiment comparisons. Hence, I would like to change my score to 4.
Review #2
- Please describe the contribution of the paper
This paper presented a model for cell counting in microscopy images by decoupling the cell counting and cell localization tasks, and added a global message passing module. The authors demonstrated that their approach provides more accurate cell counting, validating on four external datasets and comparing against nine different methods.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The main strength of this paper is the new idea of using the feature maps to generate the cell counts. The experimental results showed substantially better performance compared to other methods. Also, the experiments were performed on four different publicly available datasets, demonstrating that this method is generic and potentially transferable to other images for cell counting tasks.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
One concern that I have with this method is also the decoupling part. Although it may achieve an accurate count, the count is disjoint from the cell centroid map. This limits the usability in clinical practice because the qualitative result from the model does not reflect the actual count. One use case is that the pathologist might use the centroid map to verify the final count when they have questions about model-generated counts.

Although the authors provided a comprehensive comparison validation, there are weaknesses in their validation. The sample sizes of each dataset are very small, and the data splitting is 10:9:1 for training, validation, and testing. The results are from single-run experiments without standard deviations/confidence intervals. These make the results less convincing.

The model generates two outputs: a cell count and a centroid map. The authors only validated the cell count but did not validate the cell centroids. The authors only emphasize the importance of counting, which is one task, but ignore another important task. The accuracy of cell localization affects many downstream tasks in clinical and biological applications.

Section 3.4 does not provide much useful information for this paper. I understand that the authors want to demonstrate the use case for accurate centroid prediction. However, the performance of SAM is poor in general. Also, no quantitative comparative results were provided.

There are a few more public cell counting datasets of IHC images, which provide a much more direct clinical use case for cell counting. The authors did not provide at least one experiment on an IHC dataset.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
I would recommend the following:
- Provide the cell counts derived from the cell centroid map and incorporate these results in both Tables 1 and 2.
- Remove section 3.4.
- Provide a table to describe the data. Although the citation should have all the information, it is recommended to have the sample size (number of image patches and number of cells) listed for readability.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Although I raised several points in the weaknesses section, the substantially improved performance in cell counting is important. Recommendation: Accept.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Accept — should be accepted, independent of rebuttal (5)
- [Post rebuttal] Please justify your decision
The authors’ response addressed my concerns. I’d recommend accepting the paper if they provide the corresponding content in their revision.
Review #3
- Please describe the contribution of the paper
The major contribution of this work is to decouple cell counting and cell localization, using VGG-19 for the counting task and a U-Net for the localization task.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Conventionally, cell counting and localization are treated as an integrated task; this work provides a new perspective by separating them into two independent tasks, which significantly improves performance.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Some major content is inconsistent between the figures and the text descriptions. For example, the text states that the counting network leverages the VGG-19 architecture as the model backbone, whereas in Figure 1 the visual explanation states that VGG-16 was used for the counting task.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
The authors provide a decoupling-based approach for cell counting, which separates counting and localization into two independent tasks and improves performance. Detailed mathematical formulas are provided to help the audience understand the concepts. However, the inconsistencies between the figure and text descriptions must be addressed; as a minor suggestion, open-sourcing the implementation would also help improve the work’s reproducibility.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The idea of decoupling the counting and localization tasks is innovative.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
We appreciate the reviewers for their constructive comments.
Code (R1&R3&R4) We promise to make our code publicly available.
Data description (R3&R4) Thanks for the suggestions. We will describe the datasets in more detail in a table in Section 3.1.
Reviewer #1 Q1 Figures and text descriptions are inconsistent. Sorry for the typo. We will change VGG-16 to VGG-19 in Fig. 1.
Reviewer #3 Q1 Motivation is not clear. Traditional cell counting algorithms rely on density maps to simultaneously learn counting and localization, which can be susceptible to errors when cells are connected. The proposed method decouples these two tasks and leverages global feature representations for counting, thereby circumventing this issue. Furthermore, we aim to counteract the trend of increasing complexity in recent state-of-the-art counting algorithms by designing a simple yet effective model.
We will make our motivation clearer in the final version of the paper.
Q2 Crowd counting and cell counting Yes, it is an apples-to-apples comparison. Both tasks involve counting, albeit different objects: one counts people, while the other counts cells (M. Marsden, et al., CVPR’18).
Q3 Cell counting versus cell segmentation Cell counting and cell segmentation are distinct tasks. The former requires only point-level annotations, while the latter necessitates pixel-wise annotations. Consequently, a direct comparison between the two would not be equitable.
Q4 Global message passing module versus pooling The proposed module differs from average pooling. Although average pooling can serve the purpose of information aggregation, it is local, non-adaptive, and alters the size of feature maps. In contrast, our module performs global information aggregation, is adaptive, and preserves the feature map size. Furthermore, V(p) represents a set of sampled positions.
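To make this contrast concrete, here is a rough, hypothetical sketch (not the paper’s module) of a global, adaptive aggregation that preserves the feature map size; the attention-style projections and the random sampling of the position set V(p) are assumptions made purely for illustration.

import torch
import torch.nn as nn


class GlobalAggregation(nn.Module):
    """Adaptively aggregates information from a sampled set of positions V(p)."""
    def __init__(self, channels, num_samples=64):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.num_samples = num_samples  # |V(p)|: number of sampled positions

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.key(x).flatten(2)                     # (B, C, HW)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        # Sample a set of positions shared across all queries (an assumption).
        idx = torch.randperm(h * w)[: self.num_samples]
        attn = torch.softmax(q @ k[:, :, idx] / c ** 0.5, dim=-1)  # input-dependent weights
        out = attn @ v[:, idx, :]                       # (B, HW, C)
        return x + out.transpose(1, 2).reshape(b, c, h, w)  # spatial size preserved


x = torch.randn(1, 32, 16, 16)
print(GlobalAggregation(32)(x).shape)  # torch.Size([1, 32, 16, 16])

Unlike nn.AvgPool2d, which applies fixed, equal weights over a local window and reduces the spatial resolution, the weights here depend on the input and the output retains the original feature map size, which is the distinction drawn in the rebuttal.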
Q5 SAM This section explores using cell localization results as prompts for SAM to segment cells. Please note that the core objective of this work is cell counting, not segmentation using SAM. This content is solely for additional discussion. Our results show SAM’s suboptimal performance on cell segmentation in pathology images, even with point prompts. Following Reviewer 4’s suggestion, we will remove this section.
Q6 The counter module doesn’t see any location labels. Thanks for the comment. Our phrasing in that sentence is imprecise, and we will remove it from the final version of the paper.
Reviewer #4 Q1 Decoupling part In our model, the input to the localizer includes features learned by the counter, which are directly relevant to counting. Therefore, they are not disjoint.
Q2 The sample sizes of each data set are very small. Sorry for the confusion. Our data split ratio is train:test:val=10:9:1, not train:val:test=10:9:1. We observed insignificant result variance (e.g., 0.16 on ADI), which will be discussed.
Q3 Centroid map validation Although a model’s cell count predictions can be close to actual values (e.g., prediction: 100, GT: 102), establishing a one-to-one correspondence between predicted and actual cell locations is challenging, preventing a direct assessment of positional accuracy. Furthermore, individual point coordinate errors are more volatile compared to common bounding box errors. Currently, there is a lack of effective metrics in the community to quantify such errors. Therefore, in line with conventional practices (M. Marsden, et al., CVPR’18; SANet, ECCV’18; DM-Count, NeurIPS’20; OrdinalEntropy, ICLR’23; CLIP-EBC, arXiv, 2024), our study focuses solely on quantitative count comparisons.
Q4 Section 3.4 Thanks for the suggestion. We will remove this section from the final version of the paper.
Q5 IHC images We appreciate the advice. Following prior work (Count-ception, ICCVW’17; Z. Wang, et al., ICLRW’23), experiments were conducted on the four public datasets. As suggested, we will evaluate our model on IHC images in future work.
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The authors have addressed all the reviewers’ comments.
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
While the pre-rebuttal reviews focused on the choice of baseline methods, unclear aspects in the motivation of the proposed approach, and some unclear/ambiguous presentation in the paper, these concerns were addressed in the rebuttal and prompted the more critical reviewers to improve their ratings above the acceptance threshold.
The reviewers mention some inconsistencies in their reviews, which should be corrected in the camera-ready version of the paper.
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).