Abstract

The surgical environment imposes unique challenges to the intraoperative registration of organ shapes to their preoperatively-imaged geometry. Biomechanical model-based registration remains popular, while deep learning solutions remain limited due to the sparsity and variability of intraoperative measurements and the limited ground-truth deformation of an organ that can be obtained during surgery. In this paper, we propose a novel hybrid registration approach that leverages a linearized iterative boundary reconstruction (LIBR) method based on linear elastic biomechanics and uses deep neural networks to learn its residual to the ground-truth deformation (LIBR+). We further formulate a dual-branch spline-residual graph convolutional neural network (SR-GCN) to assimilate information from sparse and variable intraoperative measurements and effectively propagate it through the geometry of the 3D organ. Experiments on a large intraoperative liver registration dataset demonstrate the consistent improvements achieved by LIBR+ in comparison to existing rigid, biomechanical model-based non-rigid, and deep learning-based non-rigid approaches to intraoperative liver registration.
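Stated operationally, the hybrid idea is that the biomechanical LIBR solve produces a coarse displacement field and a learned network supplies a correction on top of it. Below is a minimal conceptual sketch of that decomposition in Python; the names `libr_solve` and `residual_net` are placeholders for illustration, not the released code's API.

```python
# Conceptual sketch only (assumed interfaces, not the authors' implementation):
#   final deformation = biomechanics-based LIBR estimate + learned residual.
def libr_plus_register(preop_mesh, sparse_measurements, libr_solve, residual_net):
    # 1) Linearized iterative boundary reconstruction: a linear-elastic model
    #    fit to sparse intraoperative measurements -> per-vertex displacements.
    u_libr = libr_solve(preop_mesh, sparse_measurements)            # shape (N, 3)

    # 2) The dual-branch SR-GCN predicts the residual between this linear
    #    estimate and the (unknown) ground-truth deformation.
    r_hat = residual_net(preop_mesh, u_libr, sparse_measurements)   # shape (N, 3)

    # 3) Hybrid registration: add the learned correction to the linear estimate.
    return u_libr + r_hat
```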

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/3351_paper.pdf

SharedIt Link: https://rdcu.be/dV5xp

SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72089-5_34

Supplementary Material: https://papers.miccai.org/miccai-2024/supp/3351_supp.pdf

Link to the Code Repository

https://github.com/wdr123/splineCNN

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Wan_LIBR_MICCAI2024,
        author = { Wang, Dingrong and Azadvar, Soheil and Heiselman, Jon and Jiang, Xiajun and Miga, Michael and Wang, Linwei},
        title = { { LIBR+: Improving Intraoperative Liver Registration by Learning the Residual of Biomechanics-Based Deformable Registration } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15006},
        month = {October},
        pages = {359 -- 368}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes LIBR+, an improvement over the linearized iterative boundary reconstruction (LIBR) method. The improvement resides in using deep learning to learn the residual of a biomechanical elastic model. The purpose is to register preoperative meshes onto partial intraoperative measurements. Experiments on simulated data demonstrated improvements over state-of-the-art approaches.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The main strength of this paper is that it improves over a previously proposed approach. The method in itself is not novel, as it combines multiple existing building blocks into one registration framework, but the added value is clear in the experiments (although brief). The paper is also well-written and easy to understand.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The main weaknesses of the paper are threefold:

    • The novelty is low, since it improves over an existing approach. The improvement is not a novel formulation; it simply adapts a graph neural network architecture to residual learning, leading to the SR-GCN contribution.
    • The experiments are only shown on simulated data from [8][9]. In contrast, the original LIBR work [8] adds to the simulated data clinical cases from three image-guided open liver resections. This is a major weakness that tempers the strength mentioned above. Please explain clearly whether you used real clinical cases or simulated data, and why data from [9] were added.
    • The organization of the paper is unbalanced; it would have been better to shorten the introduction/background and extend the results section. Some missing experiments would have cemented the significance of the method. See comments below.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • Please consider citing recent work on DNNs and biomechanical modeling for registration: https://doi.org/10.1007/978-3-030-32254-0_16
    • Figure 1 should be explained better; the caption needs to detail all the elements of the method
    • The density of the meshes is not clearly described. How sparse is S w.r.t. I? This is important since the method tries to register while being robust to partial data
    • The imaging modalities are not clear: is it CT/iUS or CT/laparoscopic images? iUS is mentioned in the experiments but the method seems to be agnostic to the modality
    • SR-GCN is poorly described in Figure 2. The caption is again very limited here. I understand the lack of space, but it makes the figures almost unreadable, thus not useful
    • A figure that explains the dataset is needed: why nine cases from [8] and just one from [9]? It is unclear whether the datasets are simulated from real cases or are actual clinical cases
    • The experiments need to clearly specify how sparse the data are; “widely varying levels of sparsity” is unclear. Also, please explain whether the registration is full sparse surface on volume or partial surface with volume
    • The paper could benefit from a re-organization. Since it is an improvement over a previous method, the results section should be extensive. I suggest shortening the intro/background and improving the existing figures, or adding more figures to the results section; adding the supplementary material to the main paper should be considered too.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The paper presents LIBR+, a robust method for preop-to-intraop registration that improves over LIBR. Although the novelty is limited, the results do show improvement w.r.t. LIBR and other similar approaches. The validation on a simulated dataset is a weakness. The paper could benefit from a re-organization to extend the results section.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #2

  • Please describe the contribution of the paper

    This submission presents a novel method for the non-rigid registration of intra-operatively sampled sparse point data of the liver to a pre-operative surface. The key novelty of this work is the integration of a biomechanical formulation (Linearized Iterative Boundary Reconstruction – LIBR) with previously reported Spline Graph CNNs in a novel framework to estimate residual surface deformations of the liver (and blood vessels as well, if I understood correctly). The proposed architecture uses two branches, each with a Spline GCNN encoder-decoder, and uses 3 losses to constrain the residual estimation with the sparse point cloud. The method is validated on 10 simulated deformations from 4 different livers as targets, with 700 different point collections for each, amounting to a total of 7000 registration tests. Comparison is performed with LIBR, a deep learning approach, and an ICP variant.
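To make the two-branch design concrete, a minimal structural sketch using PyTorch Geometric's SplineConv is given below. Layer widths, input features, and the measurement-to-mesh feature propagation are assumptions chosen for illustration, not the architecture or losses reported in the paper.

```python
import torch
from torch_geometric.nn import SplineConv


class SplineBranch(torch.nn.Module):
    """Small spline-based graph encoder (illustrative sizes only)."""
    def __init__(self, in_dim, hid_dim=64, out_dim=32):
        super().__init__()
        self.conv1 = SplineConv(in_dim, hid_dim, dim=3, kernel_size=5)
        self.conv2 = SplineConv(hid_dim, out_dim, dim=3, kernel_size=5)

    def forward(self, x, edge_index, edge_attr):
        # edge_attr holds the normalized pseudo-coordinates required by SplineConv
        x = torch.relu(self.conv1(x, edge_index, edge_attr))
        return self.conv2(x, edge_index, edge_attr)


class DualBranchResidual(torch.nn.Module):
    """One branch over the organ mesh, one over the sparse measurement graph."""
    def __init__(self):
        super().__init__()
        self.mesh_branch = SplineBranch(in_dim=6)   # e.g. vertex position + LIBR displacement
        self.meas_branch = SplineBranch(in_dim=3)   # sparse measurement coordinates
        self.head = torch.nn.Linear(32 + 32, 3)     # per-vertex residual displacement

    def forward(self, mesh, meas, meas_to_mesh):
        h_mesh = self.mesh_branch(mesh.x, mesh.edge_index, mesh.edge_attr)
        h_meas = self.meas_branch(meas.x, meas.edge_index, meas.edge_attr)
        # Propagate measurement features to mesh vertices; here a precomputed
        # (N_vertices x N_measurements) assignment matrix stands in for the
        # paper's propagation scheme.
        h_prop = meas_to_mesh @ h_meas
        return self.head(torch.cat([h_mesh, h_prop], dim=-1))
```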

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The paper is well motivated and addresses a relevant problem in MICCAI.
    • The paper is well written and easy to follow.
    • The formulation is interesting and brings together improvements from geometrical deep learning and biomechanical modelling.
    • The choice of comparison methods is suitable to demonstrate the hybrid principle of the paper.
    • Extensive registration experiments are conducted.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The method has not been seemingly validated on real clinical intra-operative data.
    • The sparse data used for registration is not very clearly described.
    • Accuracy metrics (TRE) would have benefitted from more justification.
    • There are missing details on how the network was trained.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    The algorithm is well described, but there are no details on how training was undertaken.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    I find the paper to be very clear and propose a relevant contribution in the field of biomechanical modelling in liver surgery. I only have concerns on some details of the experiments:

    • Was only simulated data used? Meaning, were only perfect point clouds (no noise) sampled from the deformed surface and then registered back to the undeformed original one? I believe this demonstrates the deformable modelling principles, but there is less clinical translation value (even though I do understand this type of data is very difficult to acquire).
    • The used collections contained surface data, surface data with 1-3 iUS images, and surface data with 16 iUS images. Is this very specific choice of data referring to the previously mentioned challenge? If not, how were the numbers chosen? Also, are iUS planes simply defined as vessel lumen points from the vessel mesh? The results do not show how many cases have iUS simulations, and it is unclear how different types of sparse data were considered during the experiments (data splitting, for example).
    • Deformation notations L/R/W are mentioned, but it is not clear what these mean.
    • TRE and DNN acronyms are not defined. How was TRE measured for each registration? The authors should be clear on the targets used: are the targets the corresponding vertex points in the meshes (i.e., is there an explicit correspondence)? Meaning, is there an explicit correspondence that makes this error not just the residual error between surfaces?
    • There are no details on how the network was trained, just the data used. Authors should have briefly mentioned the used hardware, optimizer, learning rate and lambda hyperparameter at the least.
    • An example of a sparse data cloud and resulting registration would improve the presentation. Also, for this application, inference time should be reported.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The work is novel and presents an interesting contribution to the field of image-guided surgery. My score is conditioned by the fact that validation was restricted to simulation data.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    A novel neural network learns residuals of biomechanical simulations in order to register a preoperative model to sparse intraoperative surface data in liver surgery. In the existing LIBR method, a linear-elastic model of the liver is first used to compute a deformation basis in response to localized perturbations. An optimization framework then finds the global deformation matching intraoperative observations. One limitation of this model is its inability to capture non-linear deformations. A neural network is here proposed to learn the residual errors of the LIBR model. Two branches assimilate the deformation of the liver and of the sparse measurements. Each branch is based on a spline-based graph CNN built over the vertices or the measured points, respectively. The model is trained on 3 livers in 3 deformation patterns each, plus 1 for testing. Compared to the original LIBR and a third-party neural network, TRE errors are reduced from 6-8 mm to 3-4 mm and the model is more robust.
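As a rough numerical illustration of the two-stage principle summarized above, the LIBR step can be thought of as a least-squares fit of precomputed displacement modes to sparse correspondences, with the network then trained on what that linear fit misses. The function and variable names below are hypothetical, and the assumption of known point correspondences is a simplification; the actual LIBR formulation resolves correspondences iteratively.

```python
import numpy as np

def fit_linear_basis(U_basis, surface_pts, sparse_targets, sample_idx):
    # U_basis: (M, N, 3) displacements of N vertices for M perturbation modes
    # surface_pts: (N, 3) undeformed vertex positions
    # sparse_targets: (K, 3) intraoperative point positions
    # sample_idx: indices of the K vertices assumed to correspond to them
    A = U_basis[:, sample_idx, :].reshape(U_basis.shape[0], -1).T   # (3K, M)
    b = (sparse_targets - surface_pts[sample_idx]).reshape(-1)      # (3K,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)                       # mode weights
    return np.tensordot(w, U_basis, axes=1)                         # (N, 3) full-field estimate

# The residual network is then trained on r = u_ground_truth - fit_linear_basis(...),
# and at test time the prediction adds the learned residual back to this linear estimate.
```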

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • the idea of learning residual errors of a linear-elastic model, in comparison to developing a more complex non-linear model, is excellent and, to my knowledge, novel
    • the proposed network is well-designed, with spline-based graph networks on nodes and measurement points
    • the experimental study is sound, with an ablation study for each component of the proposed neural-network
    • results show a clear improvement over the authors’ baseline method and a third-party method from Speidel et al.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • no major weakness, mostly comments
    • from the results of the “Image-to-Physical Liver Registration Sparse Data Challenge” (reference [7]), you already proposed a method with errors in the 3-mm range. Is this method different from the original baseline LIBR used in this paper for comparison? Or did you use different test sets? This is not really clear to me.
  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    No

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • p.3, section 2: Spline-GCNN -> do you mean Spline-GCN [4], as referenced above?
    • Fig.1: notations I, L, S, T from the methodology section could appear on the figure as well
    • the dataset referenced in [8] contains 10 liver models and 7000 configurations of sparse intraoperative measurements, but only 3 livers are used to train your network + 1 to test. I am not sure I understand: does the dataset contain not 10 different livers, but only 3 livers, each in 3 configurations (L/R/W)?
    • Table 1: is the error improvement of LIBR+ statistically significant?
    • could you provide the overall computation time at inference? (LIBR + network)
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Novel approach, sound study, no major weakness.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Author Feedback

N/A




Meta-Review

Meta-review not available, early accepted paper.


