Abstract
Interactive segmentation techniques are in high demand in medical imaging, where user-machine interactions serve to correct the imperfections of a model and to speed up manual annotation. Recently proposed interactive approaches all keep the segmentation mask at their core, an inefficient trait when complex elongated shapes, such as wires, catheters, or veins, need to be segmented. Herein, we propose a new data structure and a corresponding click encoding scheme for the interactive segmentation of such elongated objects, without the masks. Our data structure is based on a set of centerline points and diameters, providing a good trade-off between filament-free contouring and pixel-wise accuracy of the prediction. Given its simple, intuitive, and interpretable setup, the new data structure can be readily integrated into existing interactive segmentation frameworks.
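For intuition, here is a minimal sketch of how such a centerline-diameters representation could be stored and rasterized back into a pixel mask. The class name CenterlineDiameters, the disk-stamping reconstruction, and all field names are illustrative assumptions, not the authors' implementation.

# A minimal, illustrative sketch of a centerline-diameters structure
# (names and reconstruction details are assumptions, not the authors' code).
from dataclasses import dataclass
import numpy as np

@dataclass
class CenterlineDiameters:
    """An elongated object stored as ordered centerline points plus local diameters."""
    points: np.ndarray     # (N, 2) array of (row, col) centerline coordinates
    diameters: np.ndarray  # (N,) local diameter, in pixels, at each centerline point

    def to_mask(self, shape):
        """Rasterize back to a binary mask by stamping a disk at every centerline point."""
        mask = np.zeros(shape, dtype=bool)
        rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
        for (r, c), d in zip(self.points, self.diameters):
            mask |= (rows - r) ** 2 + (cols - c) ** 2 <= (d / 2.0) ** 2
        return mask

# Usage: a short horizontal segment whose width grows from 3 to 4 pixels.
cl = CenterlineDiameters(
    points=np.array([[32, 10], [32, 11], [32, 12], [32, 13]]),
    diameters=np.array([3.0, 3.0, 4.0, 4.0]),
)
mask = cl.to_mask((64, 64))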
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2874_paper.pdf
SharedIt Link: pending
SpringerLink (DOI): pending
Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2874_supp.pdf
Link to the Code Repository
N/A
Link to the Dataset(s)
https://www.kaggle.com/c/ranzcr-clip-catheter-line-classification
BibTex
@InProceedings{Sir_CenterlineDiameters_MICCAI2024,
author = { Sirazitdinov, Ilyas and Dylov, Dmitry V.},
title = { { Centerline-Diameters Data Structure for Interactive Segmentation of Tube-shaped Objects } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15009},
month = {October},
pages = {pending}
}
Reviews
Review #1
- Please describe the contribution of the paper
The authors propose a method for interactive segmentation that is based on deep learning and two additional forms of annotation: Tip 1 and Tip 2 endpoints (in addition to the standard foreground/positive and background/negative point annotations). The network outputs a mask of the structure as well as images representing the centerline location and the distance from the centerline. So that the method learns from evolving data rather than only the initial few clicks, the networks are trained with a “click simulator” that takes the existing segmentation and generates more “goal-oriented” clicks, such as one correcting the area of highest error.
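For illustration, below is a hedged sketch of how such a goal-oriented correction click could be simulated from the current prediction, placing the next click deep inside the largest error region, in the spirit of RITM-style iterative training. The function name and every detail are assumptions, not the paper's code.

# A hedged sketch of a goal-oriented click simulator: it proposes the next click
# inside the largest error region of the current prediction. This mirrors
# RITM-style click sampling and is not the authors' exact implementation.
import numpy as np
from scipy import ndimage

def simulate_next_click(gt_mask, pred_mask):
    """Return ((row, col), is_positive) for the next simulated correction click."""
    false_neg = gt_mask & ~pred_mask   # missed object pixels  -> positive click
    false_pos = pred_mask & ~gt_mask   # spurious predictions  -> negative click

    best = None
    for error, is_positive in ((false_neg, True), (false_pos, False)):
        labels, n = ndimage.label(error)
        if n == 0:
            continue
        # Pick the largest connected error component.
        sizes = ndimage.sum(error, labels, index=np.arange(1, n + 1))
        comp = labels == (np.argmax(sizes) + 1)
        # Click at the pixel deepest inside the component (farthest from its border).
        dist = ndimage.distance_transform_edt(comp)
        rc = np.unravel_index(np.argmax(dist), dist.shape)
        if best is None or comp.sum() > best[2]:
            best = (rc, is_positive, comp.sum())
    return None if best is None else (best[0], best[1])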
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The use of Number of Clicks (NoC) to reach a certain quality criterion is, in my opinion, a good objective metric for the quantity of interaction and I am glad the authors measured this.
- Figure 3 clearly demonstrates the method’s capability to distinguish between a particular desired tubular structure and similar distractors, showing the benefit of interactivity for segmenting a particular instance. This is further validated by the HD and #ccs metrics in Figure 2, which show that this capability emerges early on.
- The click generator is a good point of novelty especially given the difficulty of training a static network to interpret and react to changing input data.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The comparison to the state of the art mostly involves machine-learning-based interactive segmentation algorithms that are not tube-specific. A comparison to other tube- or curve-specific methods is lacking, even if they are more traditional and path-based rather than learning-based.
- The technical novelty is relatively low as it mostly comes from the click simulator and the post-hoc mask restoration steps rather than the learnt components or types of interaction.
- Some information such as the variability measurements in Table 1 are missing. Given the very large dataset, some statistical testing is warranted.
- Given the values in Table 1, one wonders whether the best method would simply be RITM with Tip 1 and Tip 2 treated as positive clicks, followed by a simple connected-components analysis in post-processing. This would likely be the best-performing method, given that it would reduce the ccs NoC to a minimal value.
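To make the suggestion above concrete, a possible post-processing step (an assumption for illustration, not something evaluated in the paper) would keep only the predicted connected components that contain a Tip 1 or Tip 2 click:

# Hedged sketch of the suggested post-processing: keep only connected components
# of the predicted mask that contain a tip click (illustrative, not from the paper).
import numpy as np
from scipy import ndimage

def keep_components_with_tips(pred_mask, tip_points):
    """Retain only connected components of pred_mask touched by a tip click."""
    labels, _ = ndimage.label(pred_mask)
    keep = {labels[r, c] for r, c in tip_points if labels[r, c] != 0}
    if not keep:
        return np.zeros_like(pred_mask, dtype=bool)
    return np.isin(labels, list(keep))

# Hypothetical usage: cleaned = keep_components_with_tips(pred > 0.5, [(y1, x1), (y2, x2)])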
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Do you have any additional comments regarding the paper’s reproducibility?
The paper is highly reproducible at the conceptual level. It would not be difficult for someone to approximate the method using the information and references given, with relatively little guesswork. (I am marking it as having provided links to the code, considering that the authors say they directly use a particular, cited architecture which does have implementations available.)
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- In future, I would augment the click simulator with some noise to mimic user variability, especially with respect to the Tip 1 and Tip 2 points.
- The discussion should be improved with respect to explaining how the comparative methods achieve very high Dice values while still having high HD errors. Conceptually, this could be because those methods produce near-perfect segmentations of the desired structure that are polluted with small, distant mis-segmented islands, but it would be nice to have some evidence that this is the case.
- With respect to the weakness regarding state-of-the-art selection, maybe look at Liao et al.’s “Progressive minimal path method for segmentation of 2D and 3D line structures” (IEEE PAMI, 2018) as a potential, non-learning based approach.
- Given that the proposed method has a ccs NoC of 2 (i.e., only two clicks are necessary to reach the desired number of connected components), does that mean the simulator starts off with only Tip 1 and Tip 2 and no positive or negative clicks? This should be clarified, as it seems to contradict the text on pg 3 (Click sampling) unless n_pos = n_neg = 0.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
In my mind, the biggest weakness is the validation being biased towards methods that are not designed with tubular structures in mind rather than recent approaches that are. This is compounded by the appearance that simple post-processing could completely change the result for one of the comparative methods which would be easy to perform and conceptually well-motivated for this task. (If the authors overcome this, I will be more than happy to change to accept.) I give this paper only a weak reject because I feel that it is very clear and very highly reproducible, which are definite points in its favour.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Accept — should be accepted, independent of rebuttal (5)
- [Post rebuttal] Please justify your decision
I think the small additions suggested by the authors are reasonable and address my concerns at least at a conceptual level. I strongly believe that the simplicity and conceptual reproducibility of the method is important and should be encouraged.
Review #2
- Please describe the contribution of the paper
The paper presents a centerline-diameters data structure for an interactive segmentation method tailored to tube-shaped objects. This is interesting and of significance.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The task of interactive segmentation for tube-shaped objects is meaningful.
- The method is functional, and the centerline-diameter structure is sensible.
- The method achieves satisfactory performance.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The method is relatively simple, as it directly predicts the centerline landmarks, diameters, and distance transform for segmentation.
- Using the output (points) from the segmentation model as input for further refinement is not a novel approach.
- Iteratively refining the segmentation with one positive/negative example during inference is labor-intensive and time-consuming.
- Dice typically refers to the Dice coefficient, where a higher value indicates better performance. However, in Table 1, Dice values range from 1 to 10. This should be clarified.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
The paper presents a centerline-diameters data structure for an interactive segmentation method tailored to tube-shaped objects. This is interesting and significant. The method is functional, the centerline-diameter structure is sensible, and it achieves decent performance. However, there are some issues that need to be addressed.
Major:
- The method is relatively simple, as it directly predicts the centerline landmarks, diameters, and distance transform for segmentation.
- Using the output (points) from the segmentation model as input for further refinement is not a novel approach.
- Iteratively refining the segmentation with one positive/negative example during inference is labor-intensive and time-consuming.
- Dice typically refers to the Dice coefficient, where a higher value indicates better performance. However, in Table 1, Dice values range from 1 to 10. This should be clarified.
Minor:
- It might be beneficial to incorporate efficiency metrics into the evaluation.
- Baseline methods should be included for comparison, such as the performance of a trained HRNet without interactive segmentation.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- The task is meaningful and the method is sensible.
- The method is relatively simple and not novel.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
This article proposes an interactive segmentation method specific to tubular structures, e.g., catheters, blood vessels, etc. The method uses centreline and radius estimation rather than pixel-wise mask prediction, together with a continuously refined segmentation output driven by endpoint, positive, and negative point inputs.
The method is evaluated on two datasets over a series of challenging interactive tasks and is shown to provide good results efficiently, particularly in terms of the number of clicks required to achieve a good result.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The paper is clear and purports to propose a solution to a vexing problem in interactive segmentation. Thin tubular objects are hard to segment, leading to a dearth of ground-truth annotations, which makes them even harder to segment.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Whereas the training phase is well described, the inference phase could use more detail: it is only described in a single paragraph just before Section 3, which I find unclear.
Reproducibility may be an issue, as no code appears to be provided.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
Reference: RITM was proposed in \cite{Sofiiuk_etal_ICIP_2022}:
@InProceedings{Sofiiuk_etal_ICIP_2022,
author = {Sofiiuk, Konstantin and Petrov, Ilya A. and Konushin, Anton},
title = { { Reviving Iterative Training with Mask Guidance for Interactive Segmentation } },
booktitle = {2022 IEEE International Conference on Image Processing (ICIP)},
year = {2022},
pages = {3141-3145},
doi = {10.1109/ICIP46576.2022.9897365}
}
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper proposes a solution to a difficult interactive segmentation task, but could be more clearly described. Performances are good and constitute an acceptable compromise compared to the state of the art.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
We thank our reviewers for their constructive feedback. Main criticisms are addressed below.
———
R1: “comparison to other tube- or curve-specific methods” We acknowledge that we researched generic interactive segmentation methods rather than specific ones. We found some overlap with Liao et al. (to be referenced), e.g. the start and end points being similar to the tip clicks. However, a direct comparison with our work is challenging due to the use of different datasets and the lack of publicly available code. Liao et al.’s method is limited to two initial clicks, so comparisons can only be made under this constraint. Additionally, their method operates with paths that can only be approximated with a static width, which is not suitable for objects of variable width, such as blood vessels.
R1: “..RITM with Tip1, Tip2 + connected component analysis..explaining how the comparative methods achieve very high Dice values while still having high HD errors..” We agree that connected component analysis could reduce the HD of mask-based methods, especially in cases of distant false positive predictions. However, such an algorithm must be carefully optimized w.r.t. its hyperparameters: should it be applied after each click or only as a final post-processing step? How should disconnected true positive predictions be handled? How should the thresholds be selected? Moreover, connected component analysis will not address false positive predictions within a single component, such as at bifurcation points. Nevertheless, we consider this comment very important and have added a paragraph to the discussion to support mask-based methods and highlight potential areas for improvement.
R1: “does that mean that the simulator starts off with only Tip 1 and Tip 2 with no positive or negative clicks?” Yes. The click simulator is probabilistic. If n_pos=0 and n_neg=0, it starts with only Tip 1 and Tip 2. We have refactored this section to make it more explicit.
R1: “..variability measurements in Table 1 are missing…” Reporting average NoC values is a convention in the interactive segmentation field (works such as RITM, SimpleClick, and FocalClick all do just that). Yet, given the request, we propose to add the statistics to the supplement.
——— R3: “..the inference phase could use more details and is only described.. wrong RITM citation.” We added more details to the inference section and fixed the RITM citation. Thank you.
——— R4: “…in Table 1, Dice values range from 1 to 10.…” In Table 1, the Dice values do not range from 1 to 10. Rather, it is the number of clicks (NoC) required to achieve 0.85-0.9 Dice scores that ranges from 1 to 10 (as stated in the table caption).
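For readers unfamiliar with the convention, the following is a generic sketch of how a NoC@Dice-style metric is typically computed in interactive segmentation. The predict and next_click callables, the 0.90 target, and the 20-click budget are assumptions, not the paper's exact protocol.

# Generic sketch of the NoC@Dice protocol used in interactive segmentation papers
# (the callables, the 0.90 target, and the 20-click cap are assumptions).
import numpy as np

def dice(pred, gt):
    """Dice coefficient of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def noc_at_dice(predict, next_click, image, gt_mask, target=0.90, max_clicks=20):
    """Number of simulated clicks needed to reach the target Dice (capped at max_clicks)."""
    clicks, pred = [], np.zeros_like(gt_mask, dtype=bool)
    for n in range(1, max_clicks + 1):
        clicks.append(next_click(gt_mask, pred))  # place a click on the current error
        pred = predict(image, clicks)             # re-run the interactive model
        if dice(pred, gt_mask) >= target:
            return n
    return max_clicks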
R4: “…method is relatively simple…and not novel…” + R1: “…technical novelty” In the task where powerful modern models fail to segment elongated objects, we believe the simplicity of the proposed method is the main strength of our work: plain fully-convolutional networks without any autoregressive blocks do the job. This simplicity facilitates easy reproduction and reuse of our model, as noted by the other reviewers. Moreover, to the best of our knowledge, the data structure based on the combination of the centerline and the distance transform is novel, and we presented its first use in the interactive segmentation task.
R4: “..Iteratively refining the segmentation with one positive/negative example during inference is labor-intensive and time-consuming..” Indeed, it’s a flaw of all click-based interactive segmentation methods. However, other prompts may have different disadvantages as well. For example, bounding boxes are too coarse for approximating thin and long structures, and text prompts can be too ambiguous for precise segmentation. For future work, scribble-based approaches (coarse mask or centerline) can be considered a good alternative to click-based methods. Additionally, they too can be directly used with our centerline-diameter data structure. We have added these thoughts to the discussion.
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
This paper presents an interactive framework for tubular segmentation with simple deep learning networks. There is enthusiasm for the work, two reviewers tend toward acceptance, and one towards rejection. The latter’s critiques were well rebutted.
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The click-based design of the method and its evaluation is interesting and novel, and the reviewers acknowledge this point. The research design matches clinical situations.