Abstract

Reconstructing 2D freehand Ultrasound (US) frames into 3D space without using a tracker has recently seen advances with deep learning. Predicting good frame-to-frame rigid transformations is often accepted as the learning objective, especially when the ground-truth labels from spatial tracking devices are inherently rigid transformations. Motivated by a) the observed nonrigid deformation due to soft tissue motion during scanning, and b) the highly sensitive prediction of rigid transformation, this study investigates methods for predicting nonrigid transformations for reconstructing 3D US, and their benefits. We propose a novel co-optimisation algorithm for simultaneously estimating rigid transformations among US frames, supervised by ground-truth from a tracker, and a nonrigid deformation, optimised by a regularised registration network. We show that these two objectives can be either optimised using meta-learning or combined by weighting. A fast scattered data interpolation is also developed to enable frequent reconstruction and registration of non-parallel US frames during training. With a new data set containing over 357,000 frames in 720 scans, acquired from 60 subjects, the experiments demonstrate that, due to an expanded, thus easier-to-optimise, solution space, the generalisation is improved with the added deformation estimation, with respect to the rigid ground-truth. The global pixel reconstruction error (assessing accumulative prediction) is lowered from 18.48 to 16.51 mm, compared with baseline rigid-transformation-predicting methods. Using manually identified landmarks, the proposed co-optimisation also shows potential in compensating nonrigid tissue motion at inference, which is not measurable by tracker-provided ground-truth. The code and data used in this paper are made publicly available at https://github.com/QiLi111/NR-Rec-FUS.
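The abstract's two-objective co-optimisation (a tracker-supervised rigid term combined by weighting with a regularised nonrigid term) can be sketched as follows. This is an illustrative assumption only, not the paper's actual implementation: the loss forms, tensor shapes, and the weight `alpha` are hypothetical stand-ins.

```python
import numpy as np

def co_optimisation_loss(rigid_pred, rigid_gt, ddf, alpha=0.1):
    """Sketch of a weighted two-term objective: a supervised rigid
    transformation loss plus a regularised nonrigid deformation term.

    rigid_pred, rigid_gt: (6,) rigid transformation parameter vectors
    ddf: (D, H, W, 3) dense displacement field from a registration network
    alpha: weighting between the two objectives (hypothetical value)
    """
    # Supervised term: distance to the tracker-provided rigid ground truth.
    rigid_loss = np.mean((rigid_pred - rigid_gt) ** 2)
    # Regularisation term: penalise large displacements (L2 of the DDF);
    # a real pipeline would also include an image similarity term.
    deform_reg = np.mean(ddf ** 2)
    return rigid_loss + alpha * deform_reg
```

As the abstract notes, the two objectives may alternatively be optimised via meta-learning; the weighted sum above corresponds to the simpler end-to-end variant.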

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2245_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: N/A

Link to the Code Repository

https://github.com/QiLi111/NR-Rec-FUS

Link to the Dataset(s)

N/A

BibTex

@InProceedings{Li_Nonrigid_MICCAI2024,
        author = { Li, Qi and Shen, Ziyi and Yang, Qianye and Barratt, Dean C. and Clarkson, Matthew J. and Vercauteren, Tom and Hu, Yipeng},
        title = { { Nonrigid Reconstruction of Freehand Ultrasound without a Tracker } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15004},
        month = {October},
        page = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    A deep learning based 3D reconstruction of freehand ultrasound data is proposed, incorporating a new deformation optimization step in the process. Experiments are performed on data from 60 volunteers.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • Tackling an important and challenging problem (considering deformation in 3D ultrasound reconstruction)
    • Large number of volunteers for experiments
    • Properly citing and building on prior art
    • Proposed methods look reasonable from a modeling standpoint
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • You are falling short of what you wanted to do; errors actually increase if you are using the deformation approach. On the other hand, it is arguably very difficult to even come up with an error metric for deformable 3D reconstruction; here, more visual results or discussion would be needed.
    • Section 2.2 basically describes one way to put the problem commonly known as “3D ultrasound compounding” into equations, but by no means describes a novel algorithm for it. In fact, there is a lot of literature on efficient compounding algorithms. So this section could be omitted in my opinion.
    • The distinction between GPE/GLE, as well as LPE/LLE probably does not make sense, if as “landmarks” you merely use image corners - this then just amounts to a slightly different weighting of the error function.
    • Unfortunately I do not understand the “common frames” distinction in Table 1, maybe the description could be improved.
    • Minor remark: The large number of volunteers is great, but one would expect an MSK probe, not a curvilinear one, to be used on forearms, for optimal image quality.
  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    This is a really important avenue of research you are pursuing; however the manuscript is basically a work-in-progress report with no major improvements to be reported (yet). Please keep investigating!

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    As mentioned above - off to a great start, but no superior results achieved yet. It is interesting material to be discussed with other researchers, however does not meet the bar for a MICCAI main conference paper in my opinion.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    I still believe that this is mostly work in progress, but am a bit more confident about the results after the rebuttal (my concerns about the compounding method remain, but this is minor); In order to quickly disseminate this very interesting theme of work for discussion at this year’s MICCAI already (which would certainly be valuable to many researchers), I have bumped up my rating.



Review #2

  • Please describe the contribution of the paper

    While previous tracker-less freehand ultrasound reconstruction only predicts rigid transformations between frames, the authors introduce a new co-optimization method that estimates both the rigid transformation and the nonrigid deformation to reconstruct freehand 3D US, and demonstrate its potential to benefit other tasks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The authors introduce new improvements on freehand US reconstruction by considering non-rigid deformation during the reconstruction. The authors evaluated the methods on a large dataset and compared their methods with three different benchmarks. The experiments are thorough, with ablation study showing the improvement of adding non-rigid correction and statistical analysis to show the significance. The results in Fig. 2(b) show the importance of considering non-rigid deformation in 3D freehand reconstruction to further prove the authors’ claim on model improvements.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The authors claim that the deformation will benefit the spatial transformer network training, but there is no related citation in the paper to support it. Also, even with the registration, the error in the reconstruction is still relatively large, so it is unclear whether the method is ready for clinical use.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    1. Page 2: The motivation of the paper can be better organized. Though the authors mention the difficulty of obtaining ground truth deformation in paragraph 2, the method does not solve this problem. Paragraph 3 mentions that predicting rigid transformation can help image data augmentation/perturbation, but the authors do not provide further citations or discussions to show how the deformable registration can actually help. Also, it is unclear why having non-rigid deformation can help improve the model generalization, and how this is related to the authors’ work.
    2. Page 3, section 2.1: Does the method require US calibration matrix? What is the reason that the authors do not include calibration matrix estimation into the reconstruction pipeline? Also, how to deal with the error introduced by US probe calibration?
    3. Page 4, section 2.2: The time complexity is a bit confusing, since the authors have two definitions of N. I think the N stands for the number of pixels in a scan for O(N). In equation (1), I suggest that the authors use other notation for the number of support data points to avoid confusion. When I was reading this part, I was a bit confused about the difference between authors’ method and average compounding (such as the method used in PLUS: http://perk-software.cs.queensu.ca/plus/doc/nightly/user/AlgorithmVolumeReconstruction.html#VolumeReconstructionConfigSettingsCompoundingMode). I hope the authors can elaborate the difference.
    4. Page 4, section 2.2: Although the time complexity of estimating one query grid point is O(N), the time complexity becomes O(MN) to reconstruct a dense grid, where M is the number of grid points queried. Also, since N is the number of pixels in a scan, the running time can still be fairly long, depending on the number of frames in a sequence. The authors need to compare with other existing methods to show that this is at least an optimal run time.
    5. Section 2.3: from my understanding, the ground-truth reconstruction is created using the tracking data, which only considers the rigid transformation. It is unclear how the non-rigid registration network can capture the tissue deformation, since the model is trained on the difference between the warped volume and the ground-truth volume.
    6. Section 3: the reconstructed volume resolution (1mm x 1mm x 1mm) is pretty low, and in reality an image at this resolution is unlikely to be informative, so I am wondering what the performance is at a finer resolution.
    7. Section 4: I didn’t understand why the result means the deformation regularization reduces the bias. It may be that the rigid model has larger errors and the errors are corrected by the deformable model. I would like the authors to elaborate their explanation.
    8. Section 4: what is the definition of common frames?
    9. Section 4: I like the evaluation on rectifying the rigid reconstruction, but the authors need to report how many landmarks they have labeled, and what the standard deviation is, to show that the improvement is not a single occurrence.

    Minors:

    1. Section 2.4, meta-learning: Are D_train and D_val from different sequences? Why is it unlikely in this application, especially if it has been reported in citation [15]?
    2. Section 2.4, end-to-end training: the authors can discuss how to decide the parameter alpha.
    3. Section 2.5, metrics: though I appreciate that the authors present the pixel and landmark reconstruction error, which is also meaningful for the evaluation, I suggest that authors could also present the error of estimated transformation for benchmarking. I agree that the weighting between translation and rotation is confusing, but I think it is common to report the error separately (such as in [29]) to decouple the error.
    4. Section 3: can report the spatial calibration error (such as the re-projection error), since it is also related to the quality of reconstruction. It can also help decouple the error in US calibration and authors’ reconstruction algorithm.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Though this paper has some issues in organization and evaluation, I think the motivation of the paper is clear, the evaluation is thorough and reproducible, and the idea of introducing non-rigid registration into freehand 3D US reconstruction is novel. Thus, if the authors can address my comments, I am optimistic that this paper will be of good enough quality to be published.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    I would like to thank the authors for their rebuttal. I think the authors addressed my main concern about the motivation for introducing non-rigid transformation into US reconstruction and gave the running speed of the algorithm. However, there are still some open questions that may need further investigation, so I will keep my previous recommendation of weak accept.



Review #3

  • Please describe the contribution of the paper

    The paper makes four independent contributions: a) An approach to optimize at the same time for the non-rigid deformation between 2D US frames and for the reconstruction of a 3D US volume based on these frames. b) A simple O(N) method for interpolation of intensity values at non-uniformly distributed points. I am not sure if this is a contribution in the broader scientific field, but it seems to be new at least in the realm of science I work in. c) Three new metrics for the evaluation of 3D compounding (four are presented, but one is well-established). d) A fairly large dataset of optically tracked ultrasound sweeps of several arms with different trajectories.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper is well explained and proposes an interesting method for 3D compounding of ultrasound sweeps with external tracking information. By attacking the problem simultaneously on the deformation and reconstruction fronts, the method seems to provide better results than the baseline (simply using the pose acquired by the external tracking system) as well as two state-of-the-art deep-learning-based methods. Given the lack of proper metrics to deal with the problem of deformation in a 3D compounding setup, the authors develop three new metrics and also use the established landmark-based one. The metrics themselves are interesting and worth evaluating on other datasets and frames; at a high level they seem appropriate. The proposed interpolation method is also fast and simple, which could potentially enable the method to run in real time or close to it (which was not reported). Making both the code and the dataset publicly available is also a notable strength.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The paper is quite strong, yet as always there are limitations and weaknesses. In my view, the assumption that the tracking information can be used as ground truth is not optimal; yet, as the authors argue themselves, the deformation optimization makes it possible to correct for potential errors. I would have loved to also see the performance on phantoms with known structures, such that other metrics like linearity or volumetry could additionally be used.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Do you have any additional comments regarding the paper’s reproducibility?

    I checked the site and the code and data seem to be there. Yet, my review was not extensive!

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Very nice paper. As usual, there are some aspects that could be improved though: a) The language is quite colloquial, yet it is correct in general. Now and then the sentences are too long. I would recommend running the text through an AI-based correction tool or having it reviewed by a native speaker with a background in the field. b) When reporting statistical significance, it is not clear to me what you are comparing against. Is it the baseline? Is it the methods of [29] or [16]? Please make this explicit. Ideally, also add the statistically significant results in Table 1. c) In the second paragraph of the results section, you discuss that the global reconstruction metric is better while the local one is not. Have you considered that the ground truth also has errors? Since you used an optical tracking system by NDI, you also have a provided error figure. Using this information may be wise in future iterations of the work, but it could already help you figure out whether that played a role. d) I know that space is always a pain in MICCAI papers, but the level of compression is simply huge. The images are tiny. In case the paper gets accepted, please increase the size of the images. e) The scanning protocols should be described in a little more detail. This could, for example, be done with better figures.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Accept — should be accepted, independent of rebuttal (5)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I think that the weaknesses and improvement recommendations do not detract from the quality of the paper. It is simply good. I did not give it a strong accept only because I am a little unsure whether the interpolation method is really new; it is a bold claim that I think may not hold outside the realm I move in, and it is also hard to verify with an internet search. Also, probably due to space limitations, the scan protocols are not so clear in the images or the text.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Accept — should be accepted, independent of rebuttal (5)

  • [Post rebuttal] Please justify your decision

    I keep my previous opinion that the paper should be accepted. I do agree with the downsides mentioned by reviewers 3 and 4, but they do not seem to me to be a reason to “lower my grade”. As for the rebuttal, I think it is acceptable, and I hope the authors implement the modifications they promise.




Author Feedback

We thank the reviewers for their valuable feedback. We appreciate the recognition of: the methodological novelty; the large dataset to be released; and extensive experiments.

To R3:

  1. Clarification of the contributions and interpretation of the results. We propose to incorporate deformation in 3D US reconstruction with the following contributions: 1) an open-sourced, efficient implementation of the proposed pipeline; 2) determination of the best co-optimisation strategy for combining rigid and non-rigid transformations; 3) extensive experiments that show improvement due to the incorporated deformation. The additional optimisation of nonrigid modelling is shown to be effective not only in compensating soft tissue deformation, but also in the optimisation of the rigid component. This is potentially because predicting spatial transformations with a higher degree of freedom (DoF) (unlike the DoFs in network parameters) might lead to reduced sensitivity to learning rate and weight initialisation [8, 10].
    The decreased global errors and increased local errors, reported in the results, may be due to a smaller bias but larger variance associated with local errors, which may cancel out when global errors are considered; this is interesting for further investigation. We thank the reviewer for the suggestion of an MSK probe for forearm applications. We also agree that ground-truth deformation is hard to obtain, as reflected by the careful effort in manually identifying ad hoc landmarks illustrated in Fig. 2(b). More examples will be available in the open-source repository.
  2. Clarification of the differences between US compounding and the described interpolation method. We agree with the reviewer that spatial interpolation is also an important step in US compounding. We would also like to point out that compounding other types of ultrasound data may not require scattered data interpolation (e.g. via predefined polar coordinates) as in our freehand US reconstruction. We have not over-claimed the novelty of the fast interpolation algorithm, but provided an open-sourced PyTorch implementation, and its technical details are included for completeness.
  3. Distinction between pixel error and landmark error. The landmarks used are a surrogate for the application-specific landmarks that may be of interest in individual clinical applications. The reviewer was correct to point out that landmark error is a weighted version of pixel error, regardless of the clinical relevance of the landmarks. It is because of this difference in weighting that landmark error may provide a more direct representation of the application-specific clinical value of the reconstruction.
  4. Definition of common frames. The comparison method in [16] cannot predict the probe trajectory for all frames in a scan. For a fair comparison, we subsample the frames and use the same reference frame for the other methods in Table 1, so that all methods predict the transformation for the same subset of frames, i.e. the “common frames”. This will be further clarified.
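The scattered data interpolation discussed in the reviews and rebuttal can be illustrated with a minimal inverse-distance-weighted sketch. This shows the general technique only, not the authors' released implementation: the function name, the weighting power, and the `eps` stabiliser are illustrative assumptions. The cost per query grid point is O(N) in the number of scattered support pixels, hence O(MN) for an M-point reconstruction grid, as one reviewer notes.

```python
import numpy as np

def idw_interpolate(query, support_xyz, support_val, eps=1e-8, power=2):
    """Inverse-distance-weighted interpolation at one 3D query point.

    query: (3,) grid point; support_xyz: (N, 3) scattered US pixel
    positions in 3D space; support_val: (N,) pixel intensities.
    Cost is O(N) per query point.
    """
    # Distances from every support pixel to the query grid point.
    d = np.linalg.norm(support_xyz - query, axis=1)
    # Closer pixels receive larger weights; eps avoids division by zero.
    w = 1.0 / (d ** power + eps)
    return float(np.sum(w * support_val) / np.sum(w))
```

A query point midway between two support pixels receives the average of their intensities, while a query coinciding with a support pixel essentially returns that pixel's value.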

To R1: We appreciate the positive comments and valuable suggestions. We will further clarify that the novelty of the method is the tested non-rigid deformation, rather than the practical interpolation method.

To R4: References [8, 10] reported that training with rigid transformation is more sensitive to weight initialisation and learning rate, than training with higher-DoF deformation. This is consistent with what is observed in our application. We will further rephrase the Introduction section. We strongly agree with the reviewer that there is potential for improving the reconstruction performance. We thank the reviewer for the positive comments and constructive suggestions. We will further clarify: 1) Calibration matrix is included in our reconstruction pipeline. 2) The interpolation process has an average speed of less than 1 ms over the dataset, with time complexity of O(N). Further technical details can also be found in the open-source repository.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    All reviewers agreed to accept the paper, though minor concerns remain. The discussion raised by the reviewers underscores the significance of the paper’s contributions to the field. Addressing these minor issues in the final version will further enhance the clarity and impact of the work, ensuring it meets the high standards expected by the community.




Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    All reviewers recommend acceptance, and I believe this work is interesting and strong enough for publication at MICCAI. I will go with the reviewers’ universal opinion, and recommend acceptance.



