Abstract
Prostate cancer grading from whole-slide images (WSIs) remains a challenging task due to the large-scale nature of WSIs, the presence of heterogeneous tissue structures, and the difficulty of selecting diagnostically relevant regions. Existing approaches often rely on random or static patch selection, leading to the inclusion of redundant or non-informative regions that degrade performance. To address this, we propose a Graph Laplacian Attention-Based Transformer (GLAT) integrated with an Iterative Refinement Module (IRM) to enhance both feature learning and spatial consistency. The IRM iteratively refines patch selection by leveraging a pretrained ResNet50 for local feature extraction and a foundation model in no-gradient mode for importance scoring, ensuring that only the most relevant tissue regions are preserved. The GLAT models tissue-level connectivity by constructing a graph in which patches serve as nodes, ensuring spatial consistency through graph Laplacian constraints and refining feature representations via a learnable filtering mechanism that enhances discriminative histological structures. Additionally, a convex aggregation mechanism dynamically adjusts patch importance to generate a robust WSI-level representation. Extensive experiments on five public datasets and one private dataset demonstrate that our model outperforms state-of-the-art methods, achieving higher accuracy and spatial consistency while maintaining computational efficiency.
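The graph Laplacian constraint described in the abstract can be pictured with a minimal sketch. This is not the authors' released implementation: the kNN cosine-similarity graph, the fixed smoothing filter (I + αL)⁻¹ standing in for the paper's learnable Laplacian filter, and all parameter values are illustrative assumptions.

```python
import numpy as np

def laplacian_attention(feats, k=4, alpha=0.5):
    """Illustrative sketch of graph-Laplacian-regularized attention.

    feats: (n, d) array of patch embeddings (patches are graph nodes).
    Builds a kNN similarity graph, forms the unnormalised Laplacian
    L = D - A, runs plain self-attention, then smooths the output with
    (I + alpha * L)^-1 so that neighbouring patches receive similar
    features. The paper's learnable filter is replaced here by this
    fixed filter purely for illustration.
    """
    n, d = feats.shape
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-loops
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]      # top-k most similar patches
        A[i, nbrs] = np.clip(sim[i, nbrs], 0.0, None)
    A = np.maximum(A, A.T)                  # symmetrise adjacency
    L = np.diag(A.sum(axis=1)) - A          # unnormalised graph Laplacian
    # plain scaled dot-product self-attention over the patches
    attn = (norm @ norm.T) / np.sqrt(d)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    ctx = attn @ feats
    # Laplacian smoothing: solve (I + alpha*L) z = ctx; L is PSD,
    # so the system is always solvable
    return np.linalg.solve(np.eye(n) + alpha * L, ctx)
```

The smoothing step is what distinguishes this from standard multi-head attention: attention outputs are pulled toward agreement along graph edges, which is the spatial-consistency effect the abstract describes.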
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/5209_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{JunMas_Graph_MICCAI2025,
author = { Junayed, Masum Shah and Van Vessem, John Derek and Wan, Qian and Nam, Gahie and Nabavi, Sheida},
title = { { Graph Laplacian Transformer with Progressive Sampling for Prostate Cancer Grading } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15971},
month = {September},
}
Reviews
Review #1
- Please describe the contribution of the paper
The main contribution of this paper is the proposal of a method that combines a Graph Laplacian Attention-Based Transformer (GLAT) and an Iterative Refinement Module (IRM) for improved prostate cancer grading. The method adaptively selects the most relevant tissue regions and maintains spatial consistency, enhancing both the accuracy and computational efficiency of histopathological image analysis.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Innovative Methodology: The combination of the Graph Laplacian Attention-Based Transformer (GLAT) and Iterative Refinement Module (IRM) is a novel approach for improving prostate cancer grading. It selectively focuses on the most relevant tissue patches while ensuring spatial consistency, which enhances both accuracy and efficiency.
- Solid Experimental Results: The model consistently outperforms existing methods on multiple datasets, showing significant improvements in performance metrics like AUC and Cohen’s Kappa, which demonstrates the effectiveness of the approach.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- Computational Overhead: The model’s complexity, particularly with the use of graph Laplacian and iterative refinement, increases the computational cost significantly. Although the paper mentions computational efficiency improvements, the model still requires a considerable amount of resources, with 83.3M parameters and 32.53 FLOPs. This might limit its practical application in real-time clinical settings, especially when compared to simpler methods with lower resource requirements.
- Ambiguity in Iterative Refinement Process: The paper introduces an iterative patch selection mechanism, but it lacks clear explanations of how the iterative process enhances model performance. Specifically, Equations (1) and (2) discuss patch importance refinement, but the impact of these refinements on final predictions is not clearly demonstrated. Additionally, the choice to freeze the foundation model (FM) during the process (as described in Equation (1)) is not fully justified. It remains unclear how this choice affects model adaptability and whether updating the FM during training could lead to improved performance.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(2) Reject — should be rejected, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper presents a combination of a Graph Laplacian Attention-Based Transformer (GLAT) and an Iterative Refinement Module (IRM) for prostate cancer grading. However, my score is based on the following factors:
- Limited Novelty: While the paper introduces the combination of GLAT and IRM, the overall innovation is not sufficiently clear. The use of graph Laplacian constraints in attention mechanisms, such as in Equation (4) (Graph Laplacian Attention), and the iterative refinement process described in Equations (1) and (2) (patch selection refinement), are based on techniques that have been explored in previous works. The paper does not adequately explain how this specific combination provides a significant breakthrough compared to existing methods. The authors should clarify the novelty of their approach and how it improves upon prior work.
- High Computational Complexity: Despite claims of improved efficiency, the model still requires significant computational resources, with 83.3M parameters and 32.53 FLOPs as shown in Table 2. This may limit its real-world applicability in clinical settings, where computational resources are often constrained. The paper would benefit from a more detailed analysis of how the model’s complexity can be optimized without compromising accuracy. Specifically, the ablation study in Table 2 shows performance degradation when components like the IRM are removed, but the authors should discuss how to balance efficiency and accuracy in practical applications.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Reject
- [Post rebuttal] Please justify your final decision from above.
Although the authors addressed some concerns, key issues remain unresolved. The model still demands high computational resources, limiting its practicality in real clinical settings. The iterative refinement process and the choice to freeze the foundation model are not clearly explained or justified, leaving important design aspects ambiguous. Furthermore, the paper lacks sufficient evidence of real-world applicability and deployment feasibility. For these reasons, I stand by my decision to reject.
Review #2
- Please describe the contribution of the paper
This paper aims to mitigate the influence of redundant or non-informative regions in prostate cancer grading from whole-slide images. To address this issue, the authors propose a graph Laplacian attention-based transformer integrated with an iterative refinement module, which enhances both feature learning and spatial consistency. Additionally, they leverage a pre-trained ResNet-50 model and a foundation model tailored for the medical domain.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This work provides an accurate and comprehensive summary of prior research in the field.
- The effective application of knowledge from large models and pre-trained models to the specialized domain appears to be successful.
- The approach demonstrates certain performance improvements.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
The motivation for using the graph Laplacian attention-based transformer appears somewhat unclear. I see the experimental results and visualizations, but I think it is unclear why this approach can outperform MLA by such a large margin. Additionally, the code referenced in the HTML is not available.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not mention open access to source code or data but provides a clear and detailed description of the algorithm to ensure reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Please see the strengths and weaknesses above.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
The rebuttal addresses my concern, thus I vote for accept.
Review #3
- Please describe the contribution of the paper
WSIs are usually graded via multiple-instance learning frameworks, which fail due to static attention mechanisms, a lack of spatial constraints/reasoning, and poor patch selection/retention.
The paper introduces an iterative refinement process to select and reselect the most important patches from prostate cancer WSIs. Additionally, it uses graph Laplacian-based constraints to enhance feature learning.
It performs extensive experiments on five public datasets and one private dataset.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Addresses an important real-world problem.
- Provides a theoretical basis for the methodological novelty, ablation study appropriately demonstrates the importance of all components of the proposed solution.
- Proposed method is rigorously tested on 6 datasets (5 open access, 1 private) against 8 other prior methods.
TCGA-PRAD, SICAPv2, GLEASON19, PANDA, DiagSet, and a private dataset.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
No major weaknesses.
Code is linked, but the repository is not public. I assume this can be fixed easily.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(6) Strong Accept — must be accepted due to excellence
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Overall the paper is very well presented and formulated. The methods are novel and theoretically backed. Experiments are appropriately conducted and reported.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
The authors have addressed the comments of all reviewers adequately.
Author Feedback
Reviewer - 1: Thank you for acknowledging the strengths of our work. We have now made the code repository fully public to support reproducibility and transparency. The updated link is included in the submission. Code Link: https://github.com/glatt-irm/glat
Reviewer - 2: Thank you for your feedback. The Graph Laplacian Attention-Based Transformer (GLAT) addresses a critical limitation of standard multi-head attention by explicitly enforcing spatial consistency. Unlike MLA, which lacks spatial regularization, GLAT models spatial and morphological relationships by connecting histologically similar patches in a graph structure. As demonstrated in Figure 3 (Attention Maps) and supported by expert pathologist validation, GLAT better captures glandular boundaries and high-frequency morphological structures, which MLA fails to preserve. The quantitative improvements in AUC and Kappa further support this.
We will revise the manuscript to better highlight these theoretical and experimental justifications. Additionally, the complete source code is already publicly available.
Reviewer - 3(1): Computational Overhead: We thank the reviewer for raising this important point. While our model introduces additional modules such as the iterative refinement and graph Laplacian attention, our experiments show that it achieves a practical balance between accuracy and computational cost. Specifically, through IRM’s progressive patch filtering, our model processes only the most informative patches rather than the entire slide, significantly reducing memory and computation requirements. As demonstrated in Table 2, removing IRM nearly triples the FLOPs (from 32.53 to 91.6) and increases parameters (from 83.3M to 130.5M) while reducing performance, highlighting the efficiency gains achieved by progressive sampling.
While the model does have a computational cost, we believe this is justified by the consistent performance improvements demonstrated across six datasets (Table 1) and the spatial consistency achieved through GLAT (Fig. 2). This makes our model practical for batch inference on modern GPUs. We also acknowledge this may limit some real-time or resource-constrained applications, and we plan to explore lighter model variants in future work to further reduce overhead while retaining the benefits of our integrated design.
Reviewer - 3(2): Ambiguity in Iterative Refinement Process: We emphasize that our contribution lies not in the isolated use of IRM or graph-based attention, but in their combined and integrated design, which is novel and specifically tailored for prostate cancer grading from WSIs. Unlike prior methods, our IRM progressively refines patch selection based on global context using a large-scale foundation model (UNI) in no-gradient mode, eliminating non-informative regions early. This reduces unnecessary computation, as validated in Table 2, where removing IRM leads to higher computational cost (130.5M parameters, 91.6 FLOPs) and lower performance.
In addition, our GLAT directly incorporates graph Laplacian constraints into the attention mechanism, combined with learnable Laplacian filtering. This improves spatial coherence and preserves glandular structures, advantages that standard MSA or GNN-based methods lack. Our visualizations (Fig. 2) and ablation results (Table 2) confirm that neither IRM nor GLAT alone can achieve the same performance, highlighting the strength of our integrated approach. This validated and unified design clearly sets our work apart from existing methods.
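The progressive patch filtering described in the rebuttal can be pictured with a small sketch. This is purely illustrative: `score_fn`, the round count, and the keep fraction are assumptions standing in for the paper's actual IRM and its frozen foundation-model (UNI) scorer.

```python
import numpy as np

def iterative_refinement(patch_feats, score_fn, rounds=3, keep_frac=0.5):
    """Illustrative sketch of IRM-style progressive patch selection.

    patch_feats: (n, d) array of patch features.
    score_fn: a frozen (no-gradient) importance scorer, standing in for
    the foundation model; it maps (m, d) features to m scalar scores.
    Each round keeps the top keep_frac of the surviving patches, so
    non-informative regions are discarded early and later stages only
    process the most informative patches.
    """
    idx = np.arange(len(patch_feats))
    for _ in range(rounds):
        scores = score_fn(patch_feats[idx])       # no gradients flow here
        keep = max(1, int(len(idx) * keep_frac))
        order = np.argsort(scores)[::-1][:keep]   # top-k by importance
        idx = idx[order]
    return np.sort(idx)                           # indices of kept patches
```

With a toy scorer such as `lambda f: f.sum(axis=1)`, two rounds at `keep_frac=0.5` on 16 patches retain the 4 highest-scoring ones, mirroring how each refinement round shrinks the working set before the transformer stage.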
Meta-Review
Meta-review #1
- Your recommendation
Invite for Rebuttal
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
The paper presents a GLAT-IRM framework for prostate cancer grading, an interesting approach that combines GLAT and IRM. The improved results across six datasets confirm its effectiveness.
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A