Abstract

Efficient computation of forward and back projection is key to the scalability of iterative methods for low-dose CT imaging at the resolutions needed in clinical applications. State-of-the-art projectors provide computationally efficient approximations to the X-ray optics calculations in the forward model that strike a balance between speed and accuracy. While the computational performance of these projectors is well studied, their accuracy is often analyzed in idealized settings. When choosing a projector, a key question is whether differences between projectors can impact image reconstruction in realistic settings, where the nonlinearity of the Beer-Lambert law and measurement noise may mask those differences. We present an approach for comparing the accuracy of projectors in practical settings, where the effects of the Beer-Lambert law and measurement noise are captured by a sensitivity analysis of the forward model. Our experiments provide a comparative analysis of state-of-the-art projectors based on the impact of their approximations to the forward model on the reconstruction error. Our experiments suggest that the differences between projectors, measured by reconstruction errors, persist in the presence of noise in low-dose measurements and become significant in few-view imaging configurations.
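
As a concrete illustration of the forward model described above, the following minimal sketch (our own illustration, not code from the paper; the system matrix A, image x, and blank-scan count I0 are hypothetical placeholders) simulates the nonlinear Beer-Lambert measurement model with Poisson photon-counting noise, the two effects that separate this realistic setting from the idealized one:

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_measurements(A, x, I0=1e4):
        """Simulate low-dose CT data under the Beer-Lambert law.

        A  : (m, n) system matrix (a projector's discretized forward model)
        x  : (n,) attenuation image
        I0 : blank-scan photon count (incident intensity per detector bin)
        """
        line_integrals = A @ x                          # ideal sinogram A x
        expected_counts = I0 * np.exp(-line_integrals)  # Beer-Lambert law
        counts = rng.poisson(expected_counts)           # photon-counting noise
        counts = np.maximum(counts, 1)                  # guard against log(0)
        return -np.log(counts / I0)                     # log-transformed data b

In the idealized setting the paper contrasts with, the model reduces to the linear sinogram A x; nonlinearity and noise enter only through the exponential and the Poisson draw above.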

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/1606_paper.pdf

SharedIt Link: pending

SpringerLink (DOI): pending

Supplementary Material: N/A

Link to the Code Repository

https://github.com/ShiyuXie0116/Evaluation-of-Projectors-Noise-Nonlinearity

Link to the Dataset(s)

https://github.com/ashkarin/forbild-gen

BibTex

@InProceedings{Xie_An_MICCAI2024,
        author = { Xie, Shiyu and Zhang, Kai and Entezari, Alireza},
        title = { { An Evaluation of State-of-the-Art Projectors in the Presence of Noise and Nonlinearity in the Beer-Lambert Law } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
        year = {2024},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15007},
        month = {October},
        pages = {pending}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper presents an approach for evaluating the accuracy of fast forward projectors used in iterative CT reconstruction under realistic conditions. The authors describe the sensitivity of the inverse problem to input perturbations, such as the number of views or Poisson noise, to define an upper bound for the impact of perturbations. They then compare the reconstruction errors of three different fast forward projectors against a reference projector, both without and with noise present in the data. The results show that differences between the fast projectors persist in the presence of noise. A high impact of fast projectors on reconstruction quality is mainly observed for a low number of views.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    Defining an upper bound for the impact of approximations of fast projectors on the accuracy of reconstructions under realistic conditions is valuable.

    The comparison of different fast projectors with respect to a reference projector is interesting, especially in realistic conditions.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The findings are as expected from theory: perturbations especially affect the reconstruction quality for low numbers of views.

    Because of the lack of clear definitions of parameters and terms, it is hard to follow what input perturbations are evaluated in which part of the paper.

    A discussion of the results is missing; the paper mainly states findings without a detailed interpretation.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not provide sufficient information for reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • It is not always clear what the tested perturbations are, as the authors frequently switch between the terms “nonlinearity of the Beer’s law”, “noise”, “discretization errors”, “inconsistency”, and “detector blur”. In Figures 1-4, the x-axis then (presumably) marks the number of views (in the presence of noise or not), so I am wondering whether all those perturbations are only defined by the number of views, since the number of views seems to be the measure that all condition-number and RMSE values are evaluated against, according to the figures.
    • The angle theta is an important variable in describing the perturbation as a measure of inconsistency, but it is defined very loosely as “The angle theta that the data b makes with (the span of) A”. Consider defining this value more precisely, mathematically or visually, so that it is easier for the reader to understand (a sketch of one possible computation follows this comment list). Furthermore, the detector measurement vector b is used before it is properly defined in Section 2.
    • The results are not sufficiently discussed. Section 4 shows experiments and results, but the results are not explained in detail. For example, why does SF perform better for a very low number of views (40), but CNSF is better from 60 views on? Why is the LTRI projector performing notably worse compared with the other two methods?
    • The quality of the figures needs improving. Sometimes labels are missing (Fig. 2); Figures 1 and 2 show the condition number kappa on the y-axis, but it is labeled differently. It is unclear to me what the circled ‘x’ in Figures 2 and 4 means and what the color coding of these ‘x’s in Figure 2 stands for. In these two figures, the proportionality of the “box” to the logarithm is also very hard to grasp, especially as the boxes and values do not seem to align with the ticks on the y-axis. As Figs. 3 and 4 show a comparison of the same three methods, the same order and colors should be used for them as well.
    • In Section 2, a resolution of 128 × 128 is announced, but the reconstructions in Section 4.4 have a resolution of 256 × 256; why is that?
    • The error scale in Figure 5 that shows the difference in accuracy between the three compared forward projectors is 10^-4. Can you comment on if this small difference is actually relevant with regards to reconstruction quality? Does the given scale also apply to the reference reconstruction image, or what range are these values in?
    • In Section 4, what does “To cover this FOV, each view has 855 detectors with bin size of 0.78125 mm.” mean? Do you mean detector elements?
    • I am not familiar with the term “blank scan factor” used in Section 4, from the definition I assume it is the initial intensity?
    • There are a few grammatical mistakes: Section 1: “The computation of the forward model is the most expensive computations in MBIR algorithms.” “While the efficiency of fast projectors are assessed in terms of per-iteration computational cost, …” “… on a phantom dataset where effects of the Beer’s law is computable from exact line integrals …” Section 2: “Performance of algorithms on computational problems are generally characterized…” “The vector of detector measurements b are obtained from line-integrals …” Section 4: Why is the headline of section 4 all caps?
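
    To make the comment about theta concrete, one possible computation is sketched below (our own construction under the reviewer's quoted interpretation, not the authors' definition): for measurements b and system matrix A, sin(theta) is the norm of the least-squares residual divided by the norm of b.

        import numpy as np

        def angle_with_range(A, b):
            """Angle theta (radians) between b and the column space of A.

            sin(theta) = ||b - A x*|| / ||b||, where x* is the least-squares
            solution of A x = b, so theta quantifies how inconsistent the
            measurements are with (the span of) the forward model A.
            """
            x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
            residual = b - A @ x_star
            ratio = np.linalg.norm(residual) / np.linalg.norm(b)
            return np.arcsin(np.clip(ratio, 0.0, 1.0))  # clip guards round-off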
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The impact of the findings is not very high, as the outcomes are as expected from the experiment design. The descriptions and definitions are not always clear enough to follow. The figure quality is not good enough.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    I’d like to thank the authors for addressing the raised concerns in a well-structured manner. The added information made the purpose and impact of the presented work clear. This level of clarity and structure, with fewer grammar/notation mistakes, would have been very helpful in the submitted manuscript. I’m changing my vote to a weak accept and, in case of acceptance, encourage the authors to rephrase their descriptions within the allowed range to provide as much clarity as in their rebuttal.



Review #2

  • Please describe the contribution of the paper

    This paper presents a comparative study of several projectors in a scenario that the authors claim to be more realistic.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    This paper is well organized and easy to follow.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • I personally do not see a strong motivation for the comparative study in the proposed scenario. Are the conclusions from the new scenario any different from those in the existing evaluation scenario?
    • The evaluation results and the corresponding conclusions seem to be trivial, e.g., more views will result in a smaller condition number and smaller RMSE.
    • One interesting result is that the area-based approach seems to inherently have a larger error than line-integral-based methods. But the authors did not discuss the potential reasons or intuitions accounting for this observation.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
    • I personally do not see a strong motivation for the comparative study in the proposed scenario. Are the conclusions from the new scenario any different from those in the existing evaluation scenario?
    • The evaluation results and the corresponding conclusions seem to be trivial, e.g., more views will result in a smaller condition number and smaller RMSE.
    • One interesting result is that the area-based approach seems to inherently have a larger error than line-integral-based methods. But the authors did not discuss the potential reasons or intuitions accounting for this observation.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Reject — could be rejected, dependent on rebuttal (3)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Though the paper is well written and organized, the problem is not well motivated, and the analysis of the empirical results is lacking to some extent.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #3

  • Please describe the contribution of the paper

    The authors compare the performance of three forward models for CT reconstruction under different levels of noise and nonlinearity. In this non-idealized setting, they rigorously evaluate the impact of the approximations made by each projector on reconstruction quality. The main finding is that projectors that use line integrals to approximate the image formation model are the most robust to noise and the most performant in low-dose reconstruction tasks.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The paper is very well written. The methods are clearly presented, and the comparison of different projectors is rigorously conducted.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    In Figure 1, does the condition number depend on which projector is used? I am confused why there aren’t three curves, one for each projector in §3.

    Figure 2 is very hard to understand. What is the x-axis (is it the number of views)? What do the numbers at the top and bottom of each box represent? What is the ‘x’ inside of a circle (is it an outlier)? Why does the sensitivity not monotonically decrease with an increasing number of views (unless that is not what the x-axis represents)? This figure needs a lot of clarification.

    Small nitpick: the exponential attenuation law is referred to in the paper as the Beer-Lambert law, the Beer’s law, and the Lambert-Beer law. Please use just one name for consistency.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.

  • Do you have any additional comments regarding the paper’s reproducibility?

    N/A

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html

    Addressing the confusions with the experiments in §4 would improve the quality of the paper.

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The authors present a useful and thorough comparison of many state-of-the-art techniques. A few confusing figures could be clarified in the rebuttal.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    Weak Accept — could be accepted, dependent on rebuttal (4)

  • [Post rebuttal] Please justify your decision

    My opinion of this work remains positive after reading the rebuttal and the other reviewers’ opinions.




Author Feedback

We appreciate the thoughtful examination of our paper and the valuable comments we have received. While R1/R4 considered the study valuable and interesting, R1/R6 point out the need for a more detailed discussion of results. We plan to expand this discussion in the final paper along with the improvements suggested by R4/R1 (apologies for the missing annotation). Regarding R6’s comments, we would like to re-emphasize that the motivation for this study is to understand the differences between projectors in low-dose imaging conditions (and not to show that fewer views lead to larger condition numbers and RMSEs, which is expected from theory). To be specific: while existing studies in idealized settings compare these projectors in forward error, our study shows that when the Beer’s law effects are taken into account, there are differences among projectors (Fig. 5), and these differences become more significant in few-view settings, where MBIR (instead of analytic methods) is actually used. Our study suggests that projectors with more accurate line integrals (as opposed to accurate areas) are more robust in these low-dose imaging conditions.

(R1/R6) Motivation for the Study, Differences from Previous Studies, and Significance of Conclusions: (i) Existing studies of projectors (e.g., LTRI, SF, distance-driven) examine their performance in idealized settings (i.e., linearized Beer’s law, noise free) where accuracy can be measured and projectors can be compared. But the nonlinear physics present in every real-world CT scan, as well as noise in measurements, introduces sources of error in addition to the error introduced by projectors. Therefore, from a practical viewpoint, projector errors need to be put in perspective relative to errors due to nonlinearity and noise. This is precisely what our study provides (by simulating the Beer’s law and Poisson noise). (ii) The most significant conclusion of our study is that for low-dose CT applications, the projectors with the most accurate computation of line integrals provide the most robust imaging results. CNSF calculates the line integral exactly and approximates the detector blur, whereas SF approximates the line integral over the detector cell, leading to differences in the results. Furthermore, LTRI, which approximates the line integrals across the detector by an area calculation, introduces more significant errors that are observable in reconstructed images. We submit that these differences among projectors are not expected from what was previously known (R1).

(R1): We will clarify the perturbations in measurements due to noise versus the perturbations in the forward model due to approximations introduced by (fast) projectors. An explicit formula for theta, a justification of the chosen resolutions, and the other suggestions will be incorporated.

(R4/R1): Figures 2 and 4 will be fixed to show the x-axis (number of views). Further explanation of the boxplots will be provided, quantifying the effects of Poisson noise through the variations it introduces in the condition number. We will unify (and improve the description of) the color coding. As the number of views increases, the sensitivity plateaus at a level that depends on the strength of the Beer’s law effect (detector/pixel sizes).

(R6): Fig. 1 shows the reference (ideal) projector, illustrating the difficulty of the inverse problem from a linear-algebraic viewpoint (kappa with the Beer’s law effects). Fig. 2 shows the deviations from Fig. 1 when noise is added. Figures 1 and 2 show the upper bound of the backward error caused by the perturbations from the fast projectors relative to the reference projector. Figures 3 and 4 show the corresponding plots of the difference between the three fast projectors and the reference projector, providing the actual backward error in this experiment.
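
To illustrate the logic behind these figures, the sketch below (our own illustration, under the simplifying assumption of a consistent, full-rank linear system; the paper's matrices come from its actual CT geometry) computes kappa and the resulting upper bound on the reconstruction error induced by a fast projector's approximation:

    import numpy as np

    def kappa(A):
        """Condition number sigma_max / sigma_min (assumes A has full rank)."""
        s = np.linalg.svd(A, compute_uv=False)
        return s[0] / s[-1]

    def projector_error_bound(A_ref, A_fast, x, b):
        """Upper bound on the relative reconstruction error that a fast
        projector's approximation can induce, via the perturbation bound
            ||dx|| / ||x||  <=  kappa(A) * ||db|| / ||b||,
        where db = (A_fast - A_ref) x is the data perturbation caused by
        replacing the reference projector with the fast one.
        """
        db = (A_fast - A_ref) @ x
        return kappa(A_ref) * np.linalg.norm(db) / np.linalg.norm(b)

In this reading, Figures 1 and 2 report kappa (the amplification factor, without and with noise), while Figures 3 and 4 report the actual perturbation each fast projector introduces.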

We will make all code and experimental results publicly available from GitHub.




Meta-Review

Meta-review #1

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Two reviewers recommend accept, with one reject recommendation that was not updated post-rebuttal. The reject review mainly critiques the motivation for the work and the discussion of the results. These issues are well addressed by the rebuttal.




Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    Thorough comparison of an important ingredient of many (learning-based) iterative reconstruction techniques for CT. Part of the motivation of the work only became apparent in the rebuttal; it would be good if the authors could incorporate these points (and the others raised by the reviewers) in the camera-ready version. Although it does not propose a new (deep learning) method, the work has practical value for CT research. Hence, I think it would be a good addition to the conference.



