Abstract

Detecting slender, overlapping structures remains a challenge in computational microscopy. While recent coordinate-based approaches improve detection, they often produce less precise splines than pixel-based methods. We introduce a training-free differentiable rendering approach to spline refinement, achieving both high reliability and sub-pixel accuracy. Our method improves spline quality, enhances robustness to distribution shifts, and bridges the gap between synthetic and real-world data. Being fully unsupervised, the method is a drop-in replacement for the popular active contour model for spline refinement. Evaluated on C. elegans nematodes, a popular model organism for drug discovery and biomedical research, we demonstrate that our approach combines the strengths of both coordinate- and pixel-based methods.

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/1793_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/kirkegaardlab/splender

Link to the Dataset(s)

SOSB dataset: https://zenodo.org/records/15519588

BibTex

@InProceedings{ZdyFra_Spline_MICCAI2025,
        author = {Zdyb, Frans and Alonso, Albert and Kirkegaard, Julius B.},
        title = {{Spline refinement with differentiable rendering}},
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15961},
        month = {September},
        pages = {557--567}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    In this manuscript, the authors propose an unsupervised, training-free differentiable rendering approach for spline refinement, achieving both high reliability and sub-pixel accuracy in slender structure detection.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The authors propose a penalized estimation approach, conceptually similar to estimation of spline-based methods. However, this approach incorporates additional penalty terms, resulting in a non-convex objective function. The Adam optimization algorithm is employed to perform the estimation.
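The penalized, non-convex estimation the reviewer describes can be illustrated with a toy sketch: a data term plus a smoothness penalty over spline control points, minimized with Adam. Everything here (the loss terms, weights, and the numerical gradient) is an illustrative assumption, not the paper's actual formulation.

```python
import numpy as np

def penalized_loss(points, target):
    """Toy objective: a data term plus a smoothness penalty on control points."""
    data = np.sum((points - target) ** 2)          # reconstruction-style data term
    smooth = np.sum(np.diff(points, axis=0) ** 2)  # penalizes jagged control polygons
    return data + 0.1 * smooth                     # penalty weight is an assumption

def grad(f, x, eps=1e-6):
    """Central-difference numerical gradient; adequate for a toy demonstration."""
    g = np.zeros_like(x)
    flat, gflat = x.ravel(), g.ravel()
    for i in range(flat.size):
        old = flat[i]
        flat[i] = old + eps; hi = f(x)
        flat[i] = old - eps; lo = f(x)
        flat[i] = old
        gflat[i] = (hi - lo) / (2 * eps)
    return g

def adam(f, x, steps=500, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Standard Adam update rule on a generic objective f."""
    m, v = np.zeros_like(x), np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(f, x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        mhat = m / (1 - b1 ** t)
        vhat = v / (1 - b2 ** t)
        x = x - lr * mhat / (np.sqrt(vhat) + eps)
    return x

# Noisy control points of a sine-shaped "centerline" are pulled back to the target.
rng = np.random.default_rng(0)
target = np.stack([np.linspace(0, 1, 8), np.sin(np.linspace(0, 3, 8))], axis=1)
init = target + 0.3 * rng.normal(size=target.shape)
loss_before = penalized_loss(init, target)
refined = adam(lambda p: penalized_loss(p, target), init.copy())
loss_after = penalized_loss(refined, target)
```

Because the penalty is nonzero at the data-term minimum, the optimized loss settles at a small positive value rather than zero, which is the trade-off the reviewer's overfitting concern is about.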

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    I believe the proposed method may be prone to overfitting, a common concern with many penalized estimation approaches. For instance, the unpenalized loss function can be minimized to zero, which may indicate overfitting or data interpolation. The method’s performance is therefore highly sensitive to the choice of penalty parameters. Additional studies are warranted to systematically investigate this issue and assess the robustness of the proposed approach under different parameter settings.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I believe the proposed method may be prone to overfitting, a common concern with many penalized estimation approaches. For instance, the unpenalized loss function can be minimized to zero, which may indicate overfitting or data interpolation. The method’s performance is therefore highly sensitive to the choice of penalty parameters. Additional studies are warranted to systematically investigate this issue and assess the robustness of the proposed approach under different parameter settings.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #2

  • Please describe the contribution of the paper

    The authors present an unsupervised, training-free method for refining spline-based shape predictions—typically obtained from deep learning-based detectors—at the pixel level. The approach leverages differentiable rendering to reconstruct the input image from spline control points by minimizing the reconstruction error between the rendered and original image. To improve rendering fidelity, the method also incorporates parameters for background color and texture modeling.

    The approach is evaluated on 2D C. elegans image datasets, using a synthetic benchmark with overlapping worms. Performance is compared against the classical Active Contours (AC) method using the average dynamic time warping distance as a metric. Across all evaluated scenarios, including varying levels of object overlap and image perturbation, the proposed method consistently outperforms AC.
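The evaluation metric mentioned above, dynamic time warping (DTW) distance between centerlines, follows a standard O(nm) recurrence. A minimal sketch is given below; note it returns the unnormalized accumulated cost, whereas the paper reports an average, whose normalization convention is not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW cost between two polylines a (n, 2) and b (m, 2).

    Uses Euclidean point-to-point costs and the classic recurrence
    D[i, j] = cost[i, j] + min(D[i-1, j], D[i, j-1], D[i-1, j-1]).
    """
    n, m = len(a), len(b)
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]
            )
    return D[n, m]

# A curve aligned with itself has zero cost; a shifted copy does not.
t = np.linspace(0, 1, 20)
curve = np.stack([t, np.sin(2 * t)], axis=1)
```

For worm centerlines, which have no canonical orientation, implementations typically also take the minimum over the reversed curve; that detail is omitted here for brevity.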

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The proposed method is unsupervised and training-free, making it broadly applicable without the need for annotated data or pretraining. Additionally, it refines all visible spline predictions simultaneously, which increases its practicality in dense scenes.
    • Experimental results demonstrate a clear advantage over the classical Active Contours (AC) approach, particularly in scenarios with overlapping objects and image perturbations.
    • The method has promising potential as part of an assisted labeling workflow, where user-provided line annotations could be refined into accurate, pixel-level labels—an especially valuable capability in domains where manual annotation is costly or time-consuming.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    • The paper does not report any information regarding execution times for the proposed method or the baseline (Active Contours). This is a crucial aspect, particularly if the method is intended for use in interactive annotation workflows, where responsiveness is key.

    • The comparison with alternative methods is limited to the classical Active Contours (AC) approach. Including additional baselines—such as recent deep learning-based refinement or contour optimization techniques—would strengthen the evaluation and better contextualize the method’s performance.

  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    Just two minor comments:

    • The paper would benefit from a short and clearly defined section describing the datasets used. As currently written, it’s unclear whether the results shown correspond exclusively to the SOSB dataset or if the DTC dataset—mentioned sporadically but never formally introduced—is also included (e.g., in Table 2).
    • The sentence “We note that supervised deep learning refinement methods [17,29] are always faster and have greater detection accuracy.” is somewhat unclear. Faster compared to which methods? Intuitively, one might expect unsupervised training-free methods to be faster, so this claim could benefit from clarification or justification.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    In my opinion, the proposed method is novel and has the potential to be impactful, particularly for refining annotations in a training-free and unsupervised manner. However, the paper would benefit from reporting execution times—especially relevant for interactive use cases—a clearer and more structured description of the datasets used, and, ideally, comparisons with additional baseline methods to better contextualize the performance of the proposed approach.

  • Reviewer confidence

    Somewhat confident (2)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #3

  • Please describe the contribution of the paper

    Quantitative behavioral analysis is an important and necessary task in several research fields, such as behavioral neuroscience and pharmacological screening. The first step in ethological analysis of C. elegans is detection of the worm's centerline. However, this is extremely difficult in scenarios such as low contrast, multiple animals colliding, or tightly coiled worms. The problem is therefore relevant and needs to be solved.

    The main contribution of the paper is a method that refines initial centerline predictions based on a rendering constraint, i.e., the image rendered from the refined centerline should match the given image. The proposed method reconstructs the image by optimizing background parameters, spline parameters, and a centerline width parameter. In addition to the image reconstruction loss, other regularization losses are added, such as spline length, curvature, and minimum and maximum centerline width.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Overall the method is innovative. Most previous methods use deep learning models [1] or optimization [2] to detect/refine the worm centerline. This method takes a different approach and optimizes the centerline so that the image is reconstructed optimally.

    2. Robustness results: the authors demonstrate that, irrespective of initial perturbations of the spline predictions such as rotation, scale, and translation shifts, the refined centerlines are optimal.

    3. The authors also demonstrate a useful use case where weak labels, such as straight initial centerline predictions, can be refined into worm centerlines, thus enabling assisted labeling.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. Although the paper claims to be unsupervised, this is not entirely true, as the method starts from initial spline predictions made by a pretrained model trained in a supervised manner. Since the method refines those predictions, it is dependent on a supervised model, or at least it requires some initial centerline prediction (and a good one, near the optimal solution) as a starting point.

    2. Many symbols and parameter choices are not clearly explained in the paper. E.g., in Eq. 1, what is "RES"? Similarly, in Eq. 3, how are the values of "w_prime" and "W_prime" set? How is the curvature "C" calculated? How are the hyperparameters "lambda_i" set?

    3. The authors demonstrate some predictions for multi-animal images; however, it is not clear whether, in this case, the method needs initial centerline predictions for all animals, or whether it can work when centerline predictions are available for only a few animals, because I imagine the reconstruction loss would be huge in that case.

    4. The authors claim that the method handles in-vivo distribution shifts. The method seems to work well for the examples provided in Fig. 3, but the initial centerline predictions look good as well.

    5. The authors mention a custom optimization recipe; please report the computational complexity/time of the optimization per image/worm.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The authors claimed to release the source code and/or dataset upon acceptance of the submission.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (5) Accept — should be accepted, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, the method of refining initial centerline predictions by rendering images is novel, but it is not clear how dependent it is on the initial prediction from a deep learning model or manual annotation, and thus how much time is saved using this method, or whether it leads to better performance in downstream tasks such as behavior analysis.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A




Author Feedback

N/A




Meta-Review

Meta-review #1

  • Your recommendation

    Provisional Accept

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A


