Abstract

Dynamic coronary roadmapping is a technology that overlays vessel maps (the “roadmap”) extracted from an offline X-ray angiography sequence onto a live stream of X-ray fluoroscopy in real time. It aims to offer navigational guidance for interventional procedures without the need for repeated contrast agent injections, thereby reducing the risks associated with radiation exposure and kidney failure. The precision of the roadmaps is contingent upon the accurate alignment of angiographic and fluoroscopic images based on their cardiac phases, as well as precise catheter tip tracking. The former ensures the selection of a roadmap that closely matches the vessel shape in the current frame, while the latter uses the catheter tip as a reference point to compensate for translational motion between the roadmap and the current vessel tree. Training deep learning models for both tasks is challenging and underexplored. However, incorporating catheter features into the models could offer substantial benefits, given that humans rely heavily on catheters to complete these tasks. To this end, we introduce a simple but effective method, auxiliary input in training (AIT), and demonstrate that it enhances model performance across both tasks, outperforming baseline methods in knowledge incorporation and transfer learning.
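The AIT idea described above can be sketched as follows: the catheter mask enters the network as an extra input channel whose weight decays from 1 to 0 over training, so at inference the model needs only the image. This is a minimal illustrative sketch; the linear decay schedule, the noise standard deviation, and the function names are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def ait_weight(step, total_steps):
    """Auxiliary-input weight: 1.0 at the start of training, 0.0 at the end.
    (A linear schedule is assumed here for illustration.)"""
    return max(0.0, 1.0 - step / total_steps)

def make_input(image, catheter_mask, step, total_steps, rng=None):
    """Stack the image with a gradually ablated catheter-mask channel.

    Early in training the mask channel is fully visible; as the weight
    decays, the channel fades toward zeros (optionally perturbed by
    Gaussian noise), so at inference the model runs on the image alone
    with a blank auxiliary channel.
    """
    w = ait_weight(step, total_steps)
    aux = w * catheter_mask
    if rng is not None:
        # Hypothetical noise schedule: noise grows as the mask fades.
        aux = aux + (1.0 - w) * rng.normal(0.0, 0.1, size=catheter_mask.shape)
    return np.stack([image, aux])  # shape (2, H, W): network input channels
```

At step 0 the second channel equals the catheter mask; at the final step it is all zeros, matching the mask-free input used at inference time.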

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1135_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/1135_supp.pdf

Link to the Code Repository

N/A

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Liu_Auxiliary_MICCAI2024,
        author = { Liu, Yikang and Zhao, Lin and Chen, Eric Z. and Chen, Xiao and Chen, Terrence and Sun, Shanhui},
        title = { { Auxiliary Input in Training: Incorporating Catheter Features into Deep Learning Models for ECG-Free Dynamic Coronary Roadmapping } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15006},
        month = {October},
        page = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper introduces an auxiliary input to a model trained for coronary roadmapping.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • Straightforward approach
    • Acceptable but incomplete literature review
    • Ablation study
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • No comparison with other methods; the work compares only variants of the proposed method.
    • The proposed method appears odd to begin with, as the auxiliary input is a big part of what the model is tasked to predict.
    • As the method effectively creates an alternative path in the causal graph that is incrementally taken away, I am not convinced that the actual causal path X -> Y is learned. A robustness test under some domain change would aid the argument of the paper.
    • The method of using an auxiliary input has been around for almost a decade now, so the question is: what is the real contribution of the paper? Its application to coronary roadmapping?
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    Reproducibility is acceptable.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    See comments above.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Interesting paper, but the method is not novel, nor would it serve as a discussion point for the community.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    In the paper “Auxiliary Input in Training: Incorporating Catheter Features into Deep Learning Models for ECG-Free Dynamic Coronary Roadmapping” the authors describe a technique to perform two tasks on coronary angiography data: cardiac phase matching and catheter tip tracking. In a nutshell, they add an additional mask as an input which is faded out during training to increase the performance of their deep-learning-based approaches.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The paper is well written with respect to structure and content.
    • Furthermore, the technique of adding an additional mask as an input is simple but seems to have yielded a quite large improvement.
    • The methods employed reach a convincing performance.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • There are little to no details on the actual network architectures employed, hindering reproducibility.
    • Even though the runtimes of 25 ms and 40 ms, respectively, are impressive, they are calculated for a maximum image size of 624×624, while, depending on the vendor, image sizes up to 2600×1900 are possible, for which this runtime is not realistic. Furthermore, to judge clinical applicability, it is important to assess how fast the model would run with an advanced runtime engine like TensorRT.
    • The impact of adding Gaussian noise to AIT is not evaluated.
  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • Please add more detail about the concrete implementation.
    • Consider benchmarking your algorithms with TensorRT.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    A well-written paper showcasing a simple but effective novel technique, albeit lacking in technical detail.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    My concerns were addressed adequately



Review #3

  • Please describe the contribution of the paper

    This paper describes the implementation and evaluation of a novel method for training deep neural networks for dynamic coronary roadmapping, whereby segmentation maps of the catheter are included as input and gradually ablated to help the model learn more useful features for both catheter tip tracking and cardiac phase matching.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper is very well written. Aside from a few minor typos, the subject matter is clear and very easy to follow. The authors also provide sufficient detail that their implementation and experiments could be reproduced even without source code. The authors also do an excellent job at outlining the clinical problem that they are attempting to solve and highlighting the existing gaps in the current literature such as the reliance on ECG signals to match cardiac phases or the reliance on images with visible vessels.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    • The main weakness of this paper is that the authors don’t provide an indication of the performance of their method compared to the current state-of-the-art approaches (such as those that use ECG signals). This could easily be added as part of the introduction, though, without needing more experiments.
    • I’m not sure if the frame-wise distance is the correct metric to use for the cardiac phase matching task. The authors mention that the videos are either 7.5, 15, or 30 fps; this would suggest that a difference of 2 frames corresponds to a different percentage of the cardiac cycle for each of these frame rates, unlike with the percentage-based metric. Did the authors try standardizing the frame rate during their experiments or evaluation?

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?
    • As previously mentioned, this paper was very well written, and the methodology was clear enough to be reproducible even without open access to source code. The paper would benefit from the authors putting a bit more detail into the description of the networks themselves (or providing source code), but this does not appear to be the novel aspect of the paper so I would consider this optional.
    • The authors appear to have an impressive dataset (2483 videos for cardiac phase matching and 4098 videos for catheter tip tracking) that would benefit the field if they were able to make it open access.
  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    • This paper would benefit from statistical tests comparing each of the training methods. This would allow the authors to comment on the significance of their results. At present, the authors claim that the proposed AIT method “significantly outperformed the vanilla supervised learning method” on page 8 of the manuscript; to make this claim, the authors must perform some type of statistical test (with p-values).
    • The second-to-last paragraph of the introduction seems a bit out of place. It seems to just summarize generic methods that are used to train deep neural networks. It would be good if the authors could better put this paragraph in context with the rest of the introduction. At present this paragraph feels very disjointed, and it is difficult to see how it is relevant until much later in the paper.
    • In the final paragraph of the section titled “Datasets and Evaluation Metrics” on page 6, there is a typo in the final sentence: “… standard deviation for true positives (FP) …”. I think this is meant to be TP, as it is referred to in the remainder of the paper.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I based my decision on the quality of the work and the clarity of the writing. I found this paper extremely well written and easy to follow. The authors do a good job of highlighting the clinical problem that this work addresses and the limitations of previous approaches. The authors also evaluate their results using appropriate metrics, and the dataset is large enough to encompass a wide variety of clinical scenarios. There are a few minor details such as typos and statistical testing that can be cleaned up as suggested above, but overall, this is a high-quality paper.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    I was satisfied with the initial submission. However, the authors provided very few details about how they would address my concerns. My concern about a lack of details about the current state-of-the-art was also expressed by Reviewer 3 and I agree with Reviewer 3 that the alternative methods presented form an ablation study of variants of a similar method, rather than truly alternate approaches. I don’t believe that the authors have adequately addressed this concern.




Author Feedback

We thank all reviewers for their constructive feedback.

A. Innovation. R3 questioned the innovation of our method and its rationality (“the proposed method appears odd to begin with as the auxiliary input is a big part of what the model is tasked to predict”). We would like to clarify that our method facilitates learning a good mapping from x to y by introducing an auxiliary input z at the beginning of training and gradually ablating it during training. During inference, z is no longer needed, but features related to z are incorporated into the network. y and z are different, but z can be inferred from x and used to predict y more easily than x alone. In our case, x is the image, z is the catheter mask, and y is the tip heatmaps or phase feature vectors. To the best of our knowledge, the method has not been published before. The innovation of our work is also acknowledged by R1 and R4.

B. Learning the causal path x -> y. R3 questioned whether the actual causal path x -> y was learned with AIT and suggested including “a robustness test under some domain change”. We want to clarify that the aim of our method was neither to learn an actual causal path nor to learn domain-transferable features. Our goal was to incorporate catheter features into the two tasks in the roadmapping application, and thus accelerate training and achieve better performance (as shown in our experiments). We also explained the intuition behind our method in terms of training convergence and shortcut learning, hoping that researchers in other fields could benefit from the paper.

C. Comparison with other methods. R3 suggested there was “no comparison with other methods and works only variants upon the proposed method”. We want to clarify that we compared our method with three other methods designed to incorporate features/knowledge into a network (FT, MTL, T-S), which were not variants of the proposed method.

D. Implementation details. R1 pointed out that the lack of detail on the network architectures hinders reproducibility. We would like to clarify that we described the network architectures in the methodology but did not plot the networks or specify the hyperparameters. We will include these in the supplementary material of the camera-ready version.

E. Speed in clinical deployment. R1 questioned the inference speed on larger images. We would like to clarify that the pixel spacings were normalized to 0.2 mm before inference (“Datasets and Evaluation Metrics”). For common data in roadmapping, image sizes are around 500-600. Even in cases of larger FOVs, we can crop the image around the catheter tip detected in the first frame and achieve real-time inference with PyTorch on a V100 (doctors do not move catheters during roadmapping). However, we agree that it is important to benchmark our networks with TensorRT, which we will include in our future work.

F. The impact of adding Gaussian noise. R1 pointed out that “the impact of adding Gaussian noise to AIT is not evaluated”. We agree that it is important to evaluate how sensitive AIT is to different noise-adding schedules. We will explore this in our future work.

G. Other issues.
- R4 suggested adding an indication of performance compared to SOTA approaches in the introduction, without needing more experiments. We will add this to the camera-ready version.
- R4 questioned the validity of using frame-wise distance given different frame rates. We want to clarify that we also used the ratio between the frame-wise distance and the cardiac cycle length of the case, which is a normalized metric regardless of frame rate.
- R4 asked about code and data availability. We cannot make the code and data public as they are proprietary.
- R4 asked for statistical tests when comparing methods. We would like to clarify that we did perform paired t-tests in our experiments; the bold numbers are those significantly better than the others (p < 0.05). We will add this clarification to the camera-ready version.
- R4 noted typos and issues with the introduction. We will address these in the camera-ready version.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A

  • What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).

    N/A


