Abstract

The conventional histopathology paradigm, while remaining the gold standard for clinical diagnosis, is inherently constrained by its lengthy processing time. The emergence of virtual staining in computational histopathology has catalyzed significant research efforts toward developing rapid and chemical-free staining techniques. However, current methodologies are primarily applicable to well-prepared thin tissue sections and lack the capability to effectively process section-free thick tissues. In this work, we present a novel approach that utilizes fluorescence light-sheet microscopy to directly image thick tissue samples, followed by image translation to generate virtually stained hematoxylin and eosin (H&E) images. To address the insufficient exploration of pathological features in current methods, we introduce Semantic Contrastive Guidance (SemCG), which enforces morphological consistency between fluorescence inputs and H&E outputs. Additionally, we incorporate subtype-aware classification to enhance the discriminator’s ability to learn domain-specific pathological knowledge. Experimental results demonstrate that our proposed modules improve the quality of the generated images. We anticipate that this sectioning-free virtual staining framework will have significant potential for rapid clinical pathology applications, offering a transformative improvement to current histological workflows. Our code is available at https://github.com/commashy/SemCG-Stain.
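
As a rough illustration of the SemCG idea, the sketch below pairs each fluorescence patch with its virtually stained output in an InfoNCE-style contrastive loss on embeddings from a frozen pathology image encoder (PLIP in the paper). The function name, temperature, and the absence of projection heads or encoder fine-tuning are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a SemCG-style semantic contrastive loss (illustrative only).
# Assumption: `encoder` is a frozen pathology image encoder (e.g., PLIP's vision
# tower) mapping [B, 3, H, W] patches to [B, D] embeddings.
import torch
import torch.nn.functional as F


def semantic_contrastive_loss(encoder: torch.nn.Module,
                              fluo: torch.Tensor,      # [B, 3, H, W] fluorescence inputs
                              fake_he: torch.Tensor,   # [B, 3, H, W] generated H&E outputs
                              temperature: float = 0.07) -> torch.Tensor:
    # Target embeddings from the fluorescence inputs; no gradient needed here.
    with torch.no_grad():
        z_fluo = F.normalize(encoder(fluo), dim=-1)
    # Embeddings of the generated images; gradients flow back to the generator
    # (the encoder's own parameters are assumed to have requires_grad=False).
    z_fake = F.normalize(encoder(fake_he), dim=-1)
    # Cosine-similarity logits: diagonal entries are the matching (positive) pairs,
    # while all other patches in the batch act as negatives.
    logits = z_fake @ z_fluo.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```

In a CycleGAN-style setup, such a term would be added to the generator objective alongside the adversarial and cycle-consistency losses to keep the morphology of each input preserved in its virtual stain.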

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/5132_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/commashy/SemCG-Stain

Link to the Dataset(s)

N/A

BibTex

@InProceedings{OhJin_Pathologyaware_MICCAI2025,
        author = {Oh, Jintaek and Shi, Lulin and Wong, Terence T. W.},
        title = {{Pathology-aware Virtual H\&E Staining of Section-free Thick Tissues with Semantic Contrastive Guidance}},
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15966},
        month = {September},
}


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper proposes a virtual H&E staining method tailored for light-sheet microscopy images. The authors extend the CycleGAN framework by introducing two novel components:

    • Semantic contrastive guidance using a pre-trained pathology foundation model (PLIP), and
    • A pathology-aware discriminator (PathD) that leverages subtype labels to enhance generation quality.

    The approach is evaluated on an in-house lung adenocarcinoma (LUAD) dataset, demonstrating improvements in FID, KID, and Inception Score (IS) compared to baselines.
  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The idea of integrating semantic guidance from a foundation model (PLIP) into a generative framework.
    • Leveraging subtype information in the discriminator is a novel and domain-relevant enhancement.
    • The visual and quantitative results on the LUAD dataset show improvements over baselines.
    • Ablation study attempts to dissect contributions of the proposed modules.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. Limited Dataset & Lack of Generalization Validation:
      • The method is only evaluated on a single, in-house LUAD dataset. While the improvements are promising, there is no validation on external or different tissue types to demonstrate generalizability. Testing on different organs/tumors would significantly strengthen the claims.
    2. Evaluation Metrics:
      • While FID, KID, and IS are standard metrics for generative quality, they primarily assess statistical similarity between image distributions rather than task-specific accuracy.
      • It would be helpful to include some form of pixel-level or semantic-level evaluation (e.g., SSIM, PSNR, segmentation accuracy, or expert pathology ratings) to better assess the clinical validity of the translations.
    3. Comparisons Against Outdated SOTA:
      • The baselines used for comparison appear relatively outdated. Including more recent models (e.g., diffusion-based image translation or attention-guided GANs) would provide a better benchmark for evaluating the proposed method’s contributions.
    4. Ablation Study Baseline Confusion:
      • It is unclear what the “U-Net” baseline in the ablation study refers to. Is it a U-Net trained within a CycleGAN structure or an independently trained image translation model?
      • Also, the scores for CycleGAN and U-Net differ between Table 1 and the ablation section — this discrepancy should be clarified.
      • Ideally, the baseline in the ablation should be CycleGAN, and then progressively add SemCG and PathD to assess each component’s impact.
    5. Incomplete Data Preparation Details:
      • Section 3.1 does not clearly explain how the H&E ground-truth images were acquired. Figure 2 references H&E-stained images, implying access to paired H&E data. In the text, the authors only vaguely describe that the top surface of the tissue is stained because of the sample’s thickness; please provide detailed information about how the H&E-stained images were prepared and used for training.
    6. Justification of the proposed method
      • The authors claim that only a few previous works have addressed virtual staining of thick tissue sections, which appears to be the novel aspect of this work. However, the paper does not clearly articulate what specific technical innovations are introduced to handle thick sections. It remains unclear how the proposed method differs, in this regard, from existing approaches, as no explicit techniques for addressing the challenges of thick-tissue imaging are described.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (3) Weak Reject — could be rejected, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper presents a promising direction by integrating semantic priors and pathology-aware learning into image translation. However, the current submission would benefit from improved clarity in the experimental setup and stronger validation, both quantitatively and across datasets. Addressing the issues above would significantly improve the strength and credibility of the work.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #2

  • Please describe the contribution of the paper

    The paper introduces SemCG-Stain, a virtual staining methodology designed to generate Hematoxylin and Eosin (H&E) images from fluorescence microscopy data. The study addresses the challenge of robust image generation in the context of feature discrepancies between H&E and fluorescence images, arising from the variance in tissue section thickness (i.e., thinner sections for H&E and thicker sections for fluorescence imaging). The proposed methodology employs contrastive learning on features extracted via PLIP to facilitate the learning of domain-invariant features across fluorescence and H&E modalities. Furthermore, the method utilizes pathology-aware discriminators within a Generative Adversarial Network (GAN) framework to discriminate between real and synthetic images and to enable the model to capture the nuanced characteristics of specific pathological subtypes. The method’s effectiveness is demonstrated using lung tissue samples of lung adenocarcinoma (LUAD) subtypes, with results indicating superior performance in image quality and clinical pathological relevance compared to other competing methods.
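
    To make the pathology-aware discriminator concrete, one plausible wiring is an auxiliary-classifier (ACGAN-style) subtype head on a PatchGAN backbone, as sketched below. The layer sizes, subtype count, and class name are assumptions; the paper's actual PathD design may differ.

```python
# Illustrative pathology-aware discriminator with an auxiliary subtype head
# (ACGAN-style). Layer sizes and the number of subtypes are assumptions, not
# the paper's exact PathD specification.
import torch
import torch.nn as nn


class PathologyAwareDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 3, num_subtypes: int = 5):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )

        # PatchGAN-like convolutional backbone shared by both heads.
        self.backbone = nn.Sequential(
            block(in_channels, 64), block(64, 128), block(128, 256), block(256, 512)
        )
        # Adversarial head: per-patch real/fake prediction map.
        self.adv_head = nn.Conv2d(512, 1, kernel_size=4, padding=1)
        # Auxiliary head: pathological-subtype logits (e.g., LUAD growth patterns).
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_subtypes)
        )

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        return self.adv_head(feats), self.cls_head(feats)


# Usage sketch: the subtype logits add a cross-entropy term on labeled real H&E
# patches, and the generator can additionally be rewarded when its virtual
# stains are classified as the correct subtype.
disc = PathologyAwareDiscriminator()
adv_map, subtype_logits = disc(torch.randn(2, 3, 256, 256))
```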

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The primary novelty of this work lies in the application of PLIP to extract domain-invariant pathology features from H&E images and to align them with fine-tuned PLIP features derived from fluorescence images through contrastive learning. The integration of pathology-aware discriminators to capture subtype-specific nuances in image generation is also a notable strength. The authors’ decision to make their source code publicly available is commendable and will likely enhance the reproducibility of this research. Furthermore, the work presents strong potential for accelerating diagnostic workflows in clinical settings, ultimately benefiting patient care. The manuscript is characterized by clear and readily comprehensible writing.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    Firstly, the study’s scope is limited to lung adenocarcinoma (LUAD) subtypes. While the results are promising for this particular application, the generalizability of the method to other tissue types or cancer types is not extensively validated. Secondly, the requirement for retraining the pathology-aware discriminators when incorporating new tissue subtypes presents a limitation to the method’s scalability. Finally, the model’s reliance on PLIP encoders, Semantic Contrastive Guidance, and a Pathology-aware Discriminator introduces a higher degree of complexity in training, optimization, and implementation within clinical settings compared to simpler methodologies. The performance of SemCG-Stain is inherently dependent on the performance of PLIP, which may constitute a potential bottleneck.

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The methodology described in this paper is both innovative and possesses practical significance. By addressing the challenge of virtual image generation from thick fluorescence sections, it offers a promising avenue for accelerating the diagnostic workflow. In consideration of these factors, I recommend the acceptance of this paper after addressing my comments about the weaknesses.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    Accept

  • [Post rebuttal] Please justify your final decision from above.

    Many thanks for addressing my comments. I am happy to suggest acceptance to this paper.



Review #3

  • Please describe the contribution of the paper

    The study is very interesting: the authors propose a “section-free” virtual H&E approach powered by AI learning to potentially overcome some of the clinical and pathological limitations of current routine practice.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The methodology and the “preliminary” virtual H&E predictions look good in the figures. As a pathologist, I believe that if this holds true it will impact the clinical setting. However, I have several points below that I need the authors to clarify.

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    The authors talk about “section-free” imaging, so I would assume it is 3D imaging; however, no figures or links are provided to convince us that they can do it, and Figure 2 looks like a 2D format. The authors also need to draw a diagram showing how the study is designed from clinical and logistical points of view; the current Figure 1 only shows the AI component. Much information is missing, making it almost impossible to evaluate this work, e.g., how thick is the tissue that can be imaged “section-free”?

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (5) Accept — should be accepted, independent of rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    As mentioned above, the authors propose the “section-free” concept, and based on their Figure 2 it looks like they can at least achieve it in 2D.

    However, what about 3D? And there is a lot of missing information.

  • Reviewer confidence

    Very confident (4)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A




Author Feedback

We appreciate all reviewers for their thoughtful and constructive feedback. In the space below, we group our responses into five major themes to address the most critical concerns first.

  1. Dataset Size & Generalization. Reviewer concern: Only a single, private LUAD dataset was used; no validation on other organs or tasks (R1, R3).
    • Public data gap: To our knowledge, there are no publicly available thick‐tissue fluorescence→H&E datasets; most benchmarks focus exclusively on thin‐section autofluorescence. We will explicitly note this in Section 3.1.
    • Future validation: We fully agree on the need for broader evaluation. While we cannot add new experiments post-submission, we will clearly outline plans in the Conclusion for future studies on other organs, tumor types, and modalities.
    • “Section-free” clarification: We apologize for any confusion. “Section-free” indicates that we bypass physical microtomy—imaging unsectioned blocks via light-sheet microscopy—rather than reconstructing 3D volumes. We will clarify this terminology in Section 1 and 3.1.
  2. Evaluation Protocol. Reviewer concern: FID/KID/IS may not fully capture clinical relevance (R1).
    • Metrics choice: Standard alignment-based metrics (SSIM/PSNR) assume pixel-perfect registration, which our thick→surface comparisons cannot guarantee; FID/KID/IS instead compare feature distributions of real and generated image sets and require no paired alignment (see the sketch after this list). We will add a brief note explaining why alignment-based metrics are unsuitable in Section 3.3.
    • Expert scoring: Although new reader studies fall outside rebuttal scope, we will strengthen Section 4 to emphasize that blinded pathologist ratings are the most meaningful clinical validation and discuss our plan to conduct such a study in future work.
  3. Baseline & Ablation Study Clarity. Reviewer concern: Outdated baselines, unclear “U-Net” definition, and score discrepancies (R1).
    • Stronger SOTA discussion: We will augment Section 1 to acknowledge recent diffusion-based and attention-GAN translators and note opportunities to integrate them in future extensions.
    • PLIP choice: We chose PLIP as an example pathology foundation model; our SemCG module remains agnostic to the encoder, and we will explicitly state that stronger models (e.g., CONCH, UNI) can be swapped in without altering the overall design.
    • Generator confusion: We apologize for the ambiguity. In Table 1, CycleGAN uses a ResNet generator; in Section 4.2’s ablation, “U-Net” refers to CycleGAN with U-Net + self-attention generators. We will reorder the ablation results accordingly.
  4. Data Preparation & Method Justification. Reviewer concern: Details on H&E ground-truth acquisition and algorithmic novelties for thick sections (R1).
    • H&E acquisition protocol: We will expand Section 3.1 to specify: (1) formalin fixation at the hospital and transport to the lab; (2) surface staining with DAPI (10 μg/mL, 1–2 min); (3) UV light-sheet imaging of the same surface; (4) standard H&E processing (fixation → embedding → sectioning) of the imaged block.
    • Thick-section robustness: We will clarify that SemCG’s contrastive loss was tuned to emphasize semantic consistency across mismatched thick/thin features. These algorithmic details will be added to Section 2.
  5. Clinical & Logistical Workflow. Reviewer concern: Tissue thickness limits and imaging constraints for surgical margin assessment (R2).
    • Workflow schematic: In the revised manuscript, we will include a new diagram in Section 2.1 illustrating the end-to-end pipeline—from unsectioned block imaging, through virtual H&E translation, to pathologist review—highlighting the “section-free” step.
    • Thickness constraints: Because our clinical target is surface margin assessment, we image only the outermost layer; there is no strict maximum thickness requirement. Avoiding sectioning accelerates turnaround without compromising diagnostic access to the margin. We will state this explicitly in Section 1.

We believe these clarifications directly address the reviewers’ major concerns, improve reproducibility, and sharpen our manuscript’s focus. We deeply appreciate the guidance.
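
For context on the distribution-level metrics discussed in theme 2 above, the following is a minimal, hypothetical example using torchmetrics (random tensors stand in for actual patches; this is not the authors' evaluation code). Because FID, KID, and IS compare Inception feature statistics of two image sets, they require no pixel-aligned ground truth.

```python
# Sketch of distribution-level evaluation with FID/KID/IS.
# Requires: pip install torchmetrics torch-fidelity
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore
from torchmetrics.image.kid import KernelInceptionDistance

# Random uint8 placeholders; in practice, use many real H&E patches and the
# corresponding set of virtually stained patches (no registration required).
real = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)

kid = KernelInceptionDistance(subset_size=16)  # subset_size must not exceed the sample count
kid.update(real, real=True)
kid.update(fake, real=False)

inception = InceptionScore()
inception.update(fake)

print("FID:", fid.compute().item())
kid_mean, kid_std = kid.compute()
print("KID:", kid_mean.item(), "+/-", kid_std.item())
is_mean, is_std = inception.compute()
print("IS:", is_mean.item(), "+/-", is_std.item())
```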




Meta-Review

Meta-review #1

  • Your recommendation

    Invite for Rebuttal

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    This paper proposes a virtual H&E staining method transferred from light-sheet microscopy fluorescence images. The method is an extension of CycleGAN, incorporating semantic contrastive guidance based on the PLIP model and a pathology-aware discriminator (PathD) that leverages subtype labels. While all reviewers appreciate the strong motivation behind the paper, they express concerns about the limited evaluation dataset (a single private dataset, single organ, and single tumor type; see R1 and R3 comments), dependence on PLIP (R3—also, I’m curious why the authors did not use CONCH, UNI, or GigaPath, which have outperformed PLIP in other tasks), evaluation metrics (R1), and confusion around outdated SOTA and ablation baselines (R1). R1 and R2 also raise an important issue regarding the application: the title refers to thick tissue sections, and the authors use light-sheet microscopy, which is a 3D imaging method—so why didn’t the authors leverage 3D fluorescence microscopy to predict 3D virtual H&E staining? Doing so could overcome the physical limitations of standard H&E staining.

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #2

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A



Meta-review #3

  • After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.

    Accept

  • Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’

    N/A


