Abstract
Many anatomical structures can be described by surface or volume meshes. Machine learning is a promising tool to extract information from these 3D models. However, high-fidelity meshes often contain hundreds of thousands of vertices, which creates unique challenges in building deep neural network architectures. Furthermore, patient-specific meshes may not be canonically aligned, which limits the generalisation of machine learning algorithms. We propose LaB-GATr, a transformer neural network with geometric tokenisation that can effectively learn with large-scale (bio-)medical surface and volume meshes through sequence compression and interpolation. Our method extends the recently proposed geometric algebra transformer (GATr) and thus respects all Euclidean symmetries, i.e. rotation, translation and reflection, effectively mitigating the problem of canonical alignment between patients. LaB-GATr achieves state-of-the-art results on three tasks in cardiovascular hemodynamics modelling and neurodevelopmental phenotype prediction, featuring meshes of up to 200,000 vertices. Our results demonstrate that LaB-GATr is a powerful architecture for learning with high-fidelity meshes, which has the potential to enable interesting downstream applications. Our implementation is publicly available.
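The architecture described in the abstract follows a compress-process-decompress pattern. As a rough illustrative sketch only (not the authors' implementation, whose code repository is linked below), the following PyTorch snippet shows the general idea of pooling a large vertex sequence into a short token sequence, running a transformer on the tokens, and interpolating back to full resolution; the module names, dimensions and the vanilla transformer standing in for the equivariant GATr backbone are all assumptions.

import torch
import torch.nn as nn

class PoolTransformInterpolate(nn.Module):
    """Toy stand-in for the tokenise -> transformer -> interpolate pipeline (not LaB-GATr itself)."""

    def __init__(self, dim: int = 16, num_heads: int = 4):
        super().__init__()
        # A plain transformer encoder stands in for the geometric algebra transformer here.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True),
            num_layers=2,
        )
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, cluster: torch.Tensor) -> torch.Tensor:
        # x: (num_vertices, dim) vertex features; cluster: (num_vertices,) token index per vertex.
        num_tokens = int(cluster.max()) + 1
        # Pooling: average vertex features within each cluster to form one token per cluster.
        tokens = torch.zeros(num_tokens, x.size(1)).index_add_(0, cluster, x)
        counts = torch.zeros(num_tokens).index_add_(0, cluster, torch.ones(len(cluster)))
        tokens = tokens / counts.clamp(min=1).unsqueeze(1)
        # Transformer on the short token sequence (this is where the attention cost is paid).
        tokens = self.encoder(tokens.unsqueeze(0)).squeeze(0)
        # Interpolation: scatter token features back to every original vertex.
        return self.mlp(tokens[cluster])

model = PoolTransformInterpolate()
x = torch.randn(200_000, 16)                    # e.g. a mesh with 200k vertices
cluster = torch.randint(0, 2_000, (200_000,))   # ~1% compression ratio (illustrative)
out = model(x, cluster)                         # (200_000, 16) per-vertex outputs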
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2377_paper.pdf
SharedIt Link: https://rdcu.be/dY6fM
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72390-2_18
Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2377_supp.pdf
Link to the Code Repository
https://github.com/sukjulian/lab-gatr
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Suk_LaBGATr_MICCAI2024,
author = { Suk, Julian and Imre, Baris and Wolterink, Jelmer M.},
title = { { LaB-GATr: geometric algebra transformers for large biomedical surface and volume meshes } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15012},
month = {October},
pages = {185--195}
}
Reviews
Review #1
- Please describe the contribution of the paper
The authors propose a general geometric algebra transformer for large-scale surface and volume meshes. They leverage the existing Geometric Algebra Transformer (GATr) and add a learnable tokenisation module consisting of feature pooling followed by interpolation back to the original mesh resolution. The main contribution is a compression module that reduces GPU memory consumption when learning on large-scale 3D biomedical meshes while maintaining high performance on downstream regression tasks.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
This work addresses the important problem of high computational cost common to graph transformer networks and shows applications to 3D meshes in the biomedical field.
- Overall the paper is well written
- Detailed neural architecture and implementation details for reproducibility purposes
- Well defined proposed pooling and interpolation mechanisms
- Surpassing baselines on 3 regression tasks: Surface-based WSS estimation, Volume-based velocity field estimation and Postmenstrual age prediction from the cortical surface
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- Results section does not read very well
- Limited evaluation: the authors mainly use global measurements (MAE) to support their claims
- Comparison between baselines on memory usage is not provided
- The novelty is limited to a compression module added to an existing geometric transformer network.
- Significance of results is not given
- Lacking qualitative comparison to baselines
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
The authors plan to provide the code, which is good for reproducibility purposes. Moreover, training details are given, as well as hyperparameters for the different experiments performed.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
The paper is overall well written. The methodology is clearly laid out. The paper would gain in value if more evaluation were performed. The results are not accompanied by tests of statistical significance. The ablation experiments could be extended with more investigation into memory usage and the network parameters. For postmenstrual age prediction, the Bland-Altman plot shows good agreement with reference values, but no comparison with other baselines is provided. For the experiments on surface-based WSS, it would be more convincing to see surface maps and a visual comparison with baselines. The paper lacks technical contributions. More work is required to highlight the contributions and the impact of this work in the field.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Reject — could be rejected, dependent on rebuttal (3)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper is well written and tackles a real problem in computer vision: learning from large-scale data with graph neural networks/transformers. However, the technical contribution is limited; the work mainly uses prior work with some modifications. The results section lacks thorough evaluation and comparison to baselines, both qualitatively and quantitatively. Further work is required to establish the benefits and impact of this work.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Weak Accept — could be accepted, dependent on rebuttal (4)
- [Post rebuttal] Please justify your decision
Thank you to the authors for the rebuttal answer.
Review #2
- Please describe the contribution of the paper
This work presents a variation of the GATr model that can handle large 3D meshes (up to 200k vertices) using learned embeddings. In detail, the authors propose three modules: tokenization, interpolation, and an optional class token (for feature extraction).
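For the optional class token mentioned above, a minimal generic sketch could look as follows; it uses a vanilla transformer and invented names rather than the authors' geometric class token. A single learned token is prepended to the compressed sequence, and only its output is read out for a mesh-wide prediction (e.g. postmenstrual age), so no full-resolution decoding is needed.

import torch
import torch.nn as nn

class ClassTokenReadout(nn.Module):
    """Illustrative class-token readout; not the authors' geometric-algebra implementation."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, dim))   # learned class token
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (num_tokens, dim) compressed mesh tokens.
        seq = torch.cat([self.cls, tokens], dim=0).unsqueeze(0)   # prepend the class token
        return self.head(self.encoder(seq)[0, 0])                 # read out the class token only

model = ClassTokenReadout()
prediction = model(torch.randn(2_000, 16))   # single mesh-wide output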
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Novelty: a new approach to handling large numbers of vertices (complex 3D models).
- Transversal application: the method can be used in several domains; the model has been tested on different datasets.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Performance: some unexpectedly poor results that need additional study.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
Minor changes:
- I suppose that MLP stands for multilayer perceptron, but as far as I can tell it is not defined in the manuscript. Please include the definition for better readability.
- Page 4: 'we partition the' should be 'we divide the' or something similar using a verb.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Weak Accept — could be accepted, dependent on rebuttal (4)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The proposal is interesting and theoretically supported, but unfortunately some results are worse than those of previous methods. Perhaps further analysis will provide some evidence about this behaviour.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
The authors propose an extension to Qualcomm's Geometric Algebra Transformer (GATr, 2023). GATr improves over graph neural networks (GNNs), which suffer from poor receptive fields when compressing large amounts of information, but GATr is highly memory-intensive on large 3D meshes like those in many biomedical imaging applications. The authors employ a learned pooling and upscaling approach termed Large-scale (Bio)medical GATr (LaB-GATr). Pooling involves subsampling the mesh and encoding differences via an MLP, which are averaged per cluster. The learned tokens are passed through GATr and the result is upsampled via learned interpolation to the finer mesh. While related to PointNet++-style subgraph pooling for GNNs, the work is novel in its methodology and its application to the transformer architecture. LaB-GATr shows good results on cardiac and brain surface and volume meshes without the need for mesh morphing.
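To make the pooling description above concrete, here is a hedged sketch of PointNet++-style pooling on plain Euclidean features (the paper instead works with geometric-algebra multivectors); the nearest-coarse-vertex cluster assignment and all names are assumptions for illustration.

import torch
import torch.nn as nn

def pool_to_coarse(pos, feat, coarse_idx, mlp):
    # Illustrative only. pos: (N, 3) vertex positions, feat: (N, C) vertex features,
    # coarse_idx: (M,) indices of the subsampled vertices, mlp: maps (C + 3) -> C.
    coarse_pos = pos[coarse_idx]                                   # (M, 3)
    # Assign every fine vertex to its nearest coarse vertex (cluster assignment).
    assign = torch.cdist(pos, coarse_pos).argmin(dim=1)            # (N,)
    # Encode each vertex feature together with its offset to the cluster centre.
    rel = pos - coarse_pos[assign]                                 # (N, 3) relative positions
    encoded = mlp(torch.cat([feat, rel], dim=1))                   # (N, C)
    # Average the encoded features per cluster to obtain one token per coarse vertex.
    tokens = torch.zeros(len(coarse_idx), encoded.size(1)).index_add_(0, assign, encoded)
    counts = torch.zeros(len(coarse_idx)).index_add_(0, assign, torch.ones(len(assign)))
    return tokens / counts.clamp(min=1).unsqueeze(1)               # (M, C)

# Example: 10,000 vertices pooled onto 100 coarse vertices.
pos, feat = torch.randn(10_000, 3), torch.randn(10_000, 8)
coarse_idx = torch.randperm(10_000)[:100]    # stand-in for farthest point sampling
mlp = nn.Sequential(nn.Linear(8 + 3, 8), nn.GELU(), nn.Linear(8, 8))
tokens = pool_to_coarse(pos, feat, coarse_idx, mlp)   # (100, 8)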
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The paper is very well written, and the authors include enough detail on the base GATr framework such that their method is understandable and their contributions are distinguishable. The authors have evaluated their approach in three experiments, which all demonstrate different features of the work, including its ability to reduce the memory overhead compared with GATr. The authors have set their work in context, describing current methods and their limitations. Along with the promised code repository, the authors provide enough detail that one could work to reproduce and adapt their methodology.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
While the authors show results in three experiments, there is little discussion of important hyperparameters in their method, specifically the limitations of vertex decimation (see comments below).
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
The authors have included enough detail that their method may be reproduced with some additional work. When the code is published, this will allow for easier reproducibility.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- It would be interesting, and useful, for the reader to understand more about how the number of subsampled vertices is chosen, given that this is vital for the construction of clusters and directly impacts the learned tokenization. The authors state that the fine mesh is subsampled with farthest point sampling, up to m vertices (see the sketch after this list). But how is m chosen, and how much does it affect the result? We see in the experiments that they chose 10% and 1% of artery surface and volume mesh vertices respectively, and then 2.4% for the cortical surface mesh; how are these fractions chosen?
- It would be beneficial if the authors could expand upon their ablation experiments, which show that the interpolation step plays a bigger part than the pooling step. What does this mean exactly? Could the authors comment more on the trade-off between memory overhead and the parameters they investigated in these modules?
- The authors mention in their discussion that the method did not work for predicting gestational age. Was this just the case for LaB-GATr demonstrating poor performance, or were attempts with other methods also unsuccessful? Greater insight here would be appreciated.
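Since farthest point sampling and the ratio m/N come up in the first point above, here is a minimal, unoptimised sketch of the sampling step in plain PyTorch (O(N·m)); it is an illustration under assumed names, not the authors' implementation.

import torch

def farthest_point_sampling(pos: torch.Tensor, m: int) -> torch.Tensor:
    # pos: (N, 3) vertex positions; returns indices of m approximately well-spread vertices.
    n = pos.size(0)
    selected = torch.empty(m, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    selected[0] = 0  # arbitrary start vertex
    for i in range(1, m):
        # Update each vertex's distance to the closest already-selected vertex ...
        dist = torch.minimum(dist, (pos - pos[selected[i - 1]]).norm(dim=1))
        # ... and pick the vertex that is farthest from all selected ones so far.
        selected[i] = dist.argmax()
    return selected

pos = torch.randn(10_000, 3)
coarse_idx = farthest_point_sampling(pos, m=int(0.01 * pos.size(0)))  # ~1% of the vertices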
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The addition of learned tokenization and interpolation at the ends of the GATr framework allows this useful method to be applied to important medical imaging problems. The results shown in the paper support the idea that this method contributes helpful additions to GATr while maintaining performance.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
Accept — should be accepted, independent of rebuttal (5)
- [Post rebuttal] Please justify your decision
The paper is justifiably novel and will be a good contribution to MICCAI.
Author Feedback
We thank the reviewers for their kind words and valuable feedback on our work. Below, we address the main issues raised by the reviewers.
Novelty
Following the remark of R4 about the technical contribution of our work, we would like to clarify what we consider its novelty: 1) the adaptation of PointNet++ message passing to projective geometric algebra (PGA), in particular, the identification of a suitable element of G(3,0,1) to replace relative-position conditioning, 2) the interpolation module, in which we define a convex combination of multivectors that provably confines the output to the convex hull of its components, a property that is important yet non-trivial in PGA, and 3) the addition of a geometric class token which greatly reduces computational overhead for mesh-wide regression. As the reviewer indicates, these contributions are combined into a general transformer model that we believe can have a substantial impact on the analysis of large (bio)medical meshes.
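As a generic illustration of the convex-combination idea in point (2), on plain tensors rather than PGA multivectors and with invented names: softmax weights are non-negative and sum to one, so the interpolated output necessarily lies in the convex hull of the neighbouring token features. Whether this property carries over to multivectors is exactly the non-trivial part the rebuttal refers to; the sketch only shows the generic mechanism.

import torch

def convex_interpolate(coarse_feat, logits):
    # coarse_feat: (N_fine, k, C) features of the k nearest coarse tokens per fine vertex,
    # logits: (N_fine, k) learned (or distance-based) scores.
    weights = torch.softmax(logits, dim=1)                     # non-negative, rows sum to 1
    return (weights.unsqueeze(-1) * coarse_feat).sum(dim=1)    # (N_fine, C), inside the convex hull

coarse_feat = torch.randn(5, 3, 8)   # 5 fine vertices, 3 neighbours, 8 channels
logits = torch.randn(5, 3)
fine_feat = convex_interpolate(coarse_feat, logits)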
Results section
We recognize the potential added value of metrics beyond mean absolute error (MAE), tests for statistical significance, and Bland-Altman plots for baseline methods (suggestions by R4). For objectivity, we have chosen to use the metrics reported in the baselines’ original publications, rather than re-implementing the models ourselves. This means that we do not have access to sample-wise performance metrics for the baselines and are limited to aggregated metrics, such as MAE. We will address this by including the standard deviation values for our results in the camera-ready version and uploading sample-wise metrics alongside our code. This will enable detailed analysis in future studies by colleagues in the field. R4 questions the lack of vertex-level evaluation in the cardiovascular experiments. We chose to use the same global metrics and visual presentation as the baseline papers (refs [23, 24] in paper) for direct comparison. We found our estimated wall shear stress (WSS) fields to be visually indistinguishable from the ground truth. Thus we agree with R4 that visualisation of local error (via “un-wrapped” surface maps) would be valuable. If this conforms with the rebuttal guidelines, we would like to add it to Figure 2 in the camera-ready version.
Memory efficiency
R4 asks for a comparison of memory usage between GATr (ref [4] in paper) and LaB-GATr. Both “use[…] memory-efficient attention [18] with linear complexity” (p. 2 in paper) proportional to the number of tokens n. LaB-GATr allows us to reduce n arbitrarily. While preparing our paper, we verified the linear scaling experimentally, but did not include it in the submitted manuscript. R1 and R4 correctly point out that the learnable pooling and interpolation introduce additional parameters. This should lead to a trade-off between memory usage (GATr) and training time due to parameter overhead (LaB-GATr). However, we found that computing self-attention dominates runtime and far outweighs the parameter overhead. We will add this information in the camera-ready version.
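As a back-of-the-envelope illustration of the linear-scaling argument above (the token counts are illustrative assumptions, not measurements from the paper): if attention memory grows roughly linearly in the number of tokens n, compressing a full-resolution mesh into a much shorter token sequence shrinks that term by the compression factor.

n_vertices = 200_000      # full-resolution mesh (GATr attends over all vertices)
n_tokens = 2_000          # compressed sequence (LaB-GATr, ~1% of the vertices, assumed)
print(f"linear attention term shrinks by roughly {n_vertices / n_tokens:.0f}x")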
GA estimation
We concur with R3 that the performance on gestational age (GA) estimation that we mention in the Discussion requires additional study. R1 asks if we have tried other models besides LaB-GATr that have also underperformed in GA estimation. We have in fact studied an ablated LaB-GATr without the geometric algebra. Its performance was inferior to LaB-GATr’s. Our current hypothesis is that while postmenstrual age (PMA) can largely be estimated based on the geometry of the brain, GA is better explained by vertex-specific biomarkers such as myelination. In our dataset, these features were provided on sphericalised and subsequently sub-sampled brains. Back-projection to the cortical surface erased some of their spatial context (due to the sub-sampling). We will adapt our Discussion accordingly in the camera-ready version.
Meta-Review
Meta-review #1
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
All three reviewers suggest accepting the paper. The authors did well in responding to concerns. The work will make a valuable contribution to the conference.
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
- What is the rank of this paper among all your rebuttal papers? Use a number between 1/n (best paper in your stack) and n/n (worst paper in your stack of n papers). If this paper is among the bottom 30% of your stack, feel free to use NR (not ranked).
N/A