Abstract

Accurately predicting the survival of cancer patients is crucial for personalized treatment. However, existing studies focus solely on the relationships between samples with known survival risks, ignoring the value of censored samples, which are inevitable in clinical practice. Furthermore, these studies may suffer performance degradation in modality-missing scenarios and may even fail during inference. In this study, we propose CenSurv, a bipartite patient-modality graph learning framework with event-conditional modelling of censoring for cancer survival prediction. Specifically, we first use a graph structure to model multimodal data and obtain representations. Then, to alleviate performance degradation when modalities are missing, we design a bipartite graph that simulates the patient-modality relationship under various modality-missing scenarios and leverage a complete-incomplete alignment strategy to extract modality-agnostic features. Finally, we design a plug-and-play event-conditional modeling of censoring (ECMC) module that selects reliable censored data using dynamic momentum accumulation confidences, assigns more accurate survival times to these censored data, and incorporates them into training as uncensored data. Comprehensive evaluations on 5 public cancer datasets show that CenSurv outperforms the best state-of-the-art method by 3.1% in mean C-index, while also exhibiting excellent robustness under various modality-missing scenarios. In addition, the plug-and-play ECMC module increases the mean C-index of 8 baselines by 1.3% across the 5 datasets. Code of CenSurv is available at https://anonymous.4open.science/r/CenSurv-F767.
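
To illustrate the ECMC mechanism sketched above, here is a minimal, hypothetical Python sketch (not the authors' code): a per-sample confidence is accumulated with momentum, and censored samples whose accumulated confidence crosses a threshold are promoted to uncensored training samples with an adjusted survival time. The default lam follows the λ = 0.4 mentioned for DMAC in Review #2; the threshold tau, the function name, and the time-assignment rule are illustrative assumptions.

    import numpy as np

    def update_censored_labels(conf_prev, conf_now, obs_time, pred_time,
                               event, lam=0.4, tau=0.9):
        """Momentum-accumulated confidence gating for censored samples.
        obs_time: event time if event == 1, censoring time if event == 0."""
        conf = lam * conf_prev + (1.0 - lam) * conf_now        # dynamic momentum accumulation
        promote = (event == 0) & (conf > tau)                  # "reliable" censored samples
        new_event = np.where(promote, 1, event)                # treat them as uncensored
        new_time = np.where(promote,
                            np.maximum(pred_time, obs_time),   # assigned time never precedes censoring
                            obs_time)
        return conf, new_event, new_time

Clamping the assigned time to be no earlier than the observed censoring time respects the only hard fact known about a censored patient: survival at least until censoring.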

Links to Paper and Supplementary Materials

Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/2356_paper.pdf

SharedIt Link: Not yet available

SpringerLink (DOI): Not yet available

Supplementary Material: Not Submitted

Link to the Code Repository

https://github.com/yuehailin/CenSurv

Link to the Dataset(s)

N/A

BibTex

@InProceedings{YueHai_Bipartite_MICCAI2025,
        author = { Yue, Hailin and Kuang, Hulin and Liu, Jin and Li, Junjian and Wang, Lanlan and He, Mengshen and Wang, Jianxin},
        title = { { Bipartite Patient-Modality Graph Learning with Event-Conditional Modelling of Censoring for Cancer Survival Prediction } },
        booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
        year = {2025},
        publisher = {Springer Nature Switzerland},
        volume = {LNCS 15971},
        month = {September},
        pages = {97--107}
}


Reviews

Review #1

  • Please describe the contribution of the paper

    The paper proposes CenSurv, a novel framework for cancer survival prediction with two technical contributions: Event-Conditional Modeling of Censoring (ECMC) and a Bipartite Patient-Modality Graph (BPMG).

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. The ECMC module introduces a principled approach to leverage censored data by dynamically updating survival times based on model confidence, addressing a critical limitation in existing survival prediction methods.
    2. The experiments are thorough, covering unimodal/multimodal baselines, ablation studies, and robustness tests under missing modalities. The plug-and-play nature of ECMC is validated across eight baselines, demonstrating broad applicability.
    3. This study shows good reproducibility, with open-source code provided via an anonymous link.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    1. Computational efficiency, runtime, and scalability (e.g., graph construction for large cohorts) are not analyzed, raising concerns about real-world applicability.
    2. The ablation analysis should be discussed further. For example, why does removing DMAC result in a larger performance degradation than removing the entire ECMC module?
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
    1. Experimental validation should not be listed as the third contribution; it is the validation of the first two contributions.
    2. In the abstract, “ignoring the value of censored samples” should be reworded, as this has been a widely recognized issue in survival prediction.
  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper presents a methodologically sound method for cancer survival prediction. The novel integration of graph learning with censored data modeling addresses the challenges in modality incompleteness and underutilization of censored data. The experimental results are solid, with statistically significant improvements over SOTA methods. I will consider further increasing the score if the authors can address my concerns.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #2

  • Please describe the contribution of the paper

    The main contribution of the paper is the development of CenSurv, a novel framework for cancer survival prediction that integrates multimodal data (pathological images, genomic profiles, and clinical records) using a bipartite patient-modality graph and introduces an event-conditional modeling of censoring (ECMC) module. Specifically, CenSurv leverages graph neural networks (GNNs) to model patient-modality relationships, ensuring robustness in modality-missing scenarios through a complete-incomplete alignment strategy. The ECMC module enhances the utilization of censored data by dynamically selecting reliable samples and updating their survival times, incorporating them as uncensored data into training. Evaluated on five TCGA cancer datasets, CenSurv achieves a mean C-index of 0.708, outperforming state-of-the-art methods by 3.1%, and demonstrates improved performance in modality-missing scenarios. Additionally, ECMC as a plug-and-play module boosts the mean C-index of eight baseline methods by 1.3%, showcasing its generalizability.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The bipartite patient-modality graph learning approach is innovative, as it models the patient-modality relationship explicitly using GNNs. This allows the method to adapt to modality-missing scenarios, a common challenge in clinical settings, by aligning complete and incomplete modality representations. The use of a siamese GNN with a complete-incomplete alignment loss (LCia) is a creative way to extract modality-agnostic features, enhancing robustness and generalizability.

    The plug-and-play ECMC module addresses a critical limitation in survival prediction by leveraging censored data, which is often ignored or underutilized. The dynamic momentum accumulation confidence (DMAC) strategy for selecting reliable censored samples and updating their survival times is novel and practical, reducing the mean absolute error (MAE) of survival time estimates from 14.5 to 5.7 months, as shown in the discussion.

    The paper provides a comprehensive evaluation on five public TCGA datasets (KIRC, LIHC, ESCA, LUSC, UCEC), demonstrating a 3.1% improvement in mean C-index (0.708) over the best state-of-the-art method (SurvPath, 0.677). The ablation studies further validate the contribution of each component (ECMC, BPMG, DMAC), reinforcing the method’s effectiveness.

    CenSurv’s performance under various modality-missing conditions (e.g., missing pathological, genomic, or clinical data) outperforms existing methods like ZeroP, MFM, and HGCN, as shown in Figure 4. This robustness is clinically relevant, given the frequent occurrence of incomplete data in practice.
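
    To make the complete-incomplete alignment idea concrete, a minimal PyTorch sketch follows. It assumes a simple mean-aggregation message-passing step over the patient-modality availability mask, random edge dropout to simulate missing modalities, and an MSE penalty aligning the two patient embeddings; the layer design, dropout rate, and loss form are illustrative assumptions, not the authors' exact architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BipartiteLayer(nn.Module):
            """One message-passing step on a patient-modality bipartite graph."""
            def __init__(self, dim):
                super().__init__()
                self.proj = nn.Linear(dim, dim)

            def forward(self, modality_feats, adj):
                # modality_feats: (P, M, D) per-patient modality embeddings
                # adj: (P, M) float mask, 1 where the modality is available
                deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)             # avoid division by zero
                patient_emb = (adj.unsqueeze(-1) * modality_feats).sum(1) / deg
                return F.relu(self.proj(patient_emb))

        def complete_incomplete_alignment(layer, modality_feats, adj_complete, drop_prob=0.3):
            """Siamese forward passes with shared weights; the incomplete branch
            uses randomly dropped modality edges and is pulled toward the complete one."""
            adj_incomplete = adj_complete * (torch.rand_like(adj_complete) > drop_prob).float()
            z_complete = layer(modality_feats, adj_complete)
            z_incomplete = layer(modality_feats, adj_incomplete)
            return F.mse_loss(z_incomplete, z_complete.detach())

    Detaching the complete-graph embedding is one possible design choice (making the incomplete branch chase the complete one); whether the paper uses a stop-gradient, a cosine penalty, or a symmetric loss is not specified here.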

  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.

    The authors claim that existing studies do not effectively use censored data by ignoring their true survival times (Section 1), and propose ECMC as a novel solution. However, they fail to compare ECMC with prior methods specifically designed for censored data handling, such as pseudo-labeling approaches in survival analysis. For example, Li et al. (“Deep Survival Analysis,” 2016, arXiv:1606.00931) proposed a deep learning framework that imputes survival times for censored samples, and Katzman et al. (“DeepSurv,” 2018, Journal of Machine Learning Research) introduced a neural network-based Cox model that indirectly leverages censored data. Without direct comparisons to such works, the novelty and superiority of ECMC remain insufficiently substantiated.

    The paper specifies key hyperparameters (e.g., α=5, β=1 for loss weighting, λ=0.4 for DMAC) chosen after “extensive experiments” (Section 3.1), but provides no sensitivity analysis or justification for these values. For instance, how does varying λ affect the confidence estimation in DMAC, and how sensitive is the model’s performance to the balance between LCox and LCia? This omission limits insight into the robustness and generalizability of CenSurv across different datasets or scenarios, potentially requiring extensive tuning for new applications.

    While the method achieves strong quantitative results, it lacks qualitative analysis or visualizations to explain how the bipartite graph or ECMC contributes to survival predictions. For example, the paper could include a case study showing how updated survival times from ECMC align with clinical outcomes, or visualize the patient-modality graph to highlight key modality interactions. Interpretability is crucial for clinical adoption, as noted in prior work like Chen et al. (“Pathomic Fusion,” 2021, IEEE Transactions on Medical Imaging), which used attention maps to explain multimodal predictions. Without such elements, the black-box nature of CenSurv may hinder trust from clinicians.

    The evaluation relies heavily on the C-index (Section 3.1), which measures ranking consistency but does not fully capture calibration or discrimination power in survival models. Prior work, such as Uno et al. (“Evaluating Survival Models,” 2011, Statistics in Medicine), recommends complementary metrics like the Integrated Brier Score (IBS) or time-dependent AUC to provide a more holistic assessment. The absence of these metrics limits the ability to judge CenSurv’s performance beyond concordance, especially in clinical contexts where calibration is critical for risk stratification.

    The bipartite patient-modality graph with random edge dropout (Section 2.2) is designed to handle modality-missing scenarios, but the paper does not discuss the risk of overfitting to specific dropout patterns observed in the TCGA datasets. For instance, if the training data over-represents certain modality combinations (e.g., missing genomic data), the model’s robustness to unseen patterns (e.g., missing clinical data only) may be overstated. A cross-dataset validation or synthetic data experiment, as seen in Wang et al. (“MMF: Multimodal Fusion,” 2020, MICCAI), could better validate generalizability.
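
    For reference, the primary metric in question, Harrell's C-index, counts the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed outcomes. The generic sketch below (counting tied risk scores as 0.5) is illustrative only and is not the paper's evaluation code.

        import numpy as np

        def harrell_c_index(time, event, risk):
            """time: observed times; event: 1 if the event was observed, 0 if censored;
            risk: higher value = higher predicted risk (shorter expected survival)."""
            concordant, comparable = 0.0, 0.0
            for i in range(len(time)):
                if event[i] != 1:
                    continue                      # comparable pairs are anchored on observed events
                for j in range(len(time)):
                    if time[j] > time[i]:         # j is known to have outlived i's event time
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1
                        elif risk[i] == risk[j]:
                            concordant += 0.5     # ties in predicted risk count as half
            return concordant / max(comparable, 1.0)

        # harrell_c_index(np.array([5., 8., 3.]), np.array([1, 0, 1]), np.array([0.7, 0.2, 0.9]))  -> 1.0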

  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    First, the methodological innovation of CenSurv, particularly the bipartite patient-modality graph and ECMC module, addresses significant challenges in survival prediction (modality-missing scenarios and censored data utilization) in a novel and effective way. The complete-incomplete alignment strategy and DMAC-based censoring updates are creative contributions that advance the field. Second, the empirical evaluation is rigorous, with a 3.1% improvement in mean C-index over state-of-the-art methods across five TCGA datasets, supported by ablation studies and robustness analyses. Third, the paper’s relevance to MICCAI is clear, as it bridges medical image computing with clinical translation potential. The main weaknesses are the limited comparison with prior methods for handling censored data and the absence of interpretability visualizations. These issues do not undermine the core contributions but could be addressed in a rebuttal or revision to enhance the paper’s impact.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A



Review #3

  • Please describe the contribution of the paper

    The paper presents a bipartite patient-modality graph learning approach for cancer survival prediction. It leverages graph structures to model multimodal data and obtain robust representations, addresses modality-missing scenarios through a complete-incomplete alignment strategy for modality-agnostic features, and introduces a plug-and-play event-conditional modeling of censoring (ECMC) that uses dynamic momentum accumulation confidences to select reliable censored data.

  • Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The proposed method is simple, easy to implement, and innovative, with extensive experimental validation.
    • The manuscript is well-organized, clearly written, and provides the code for reproducibility.
  • Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
    • Some details could be improved to enhance the paper’s quality. For example, the overall images are somewhat blurry (especially Figure 2); increasing the clarity and font size could improve reader comprehension.
    • After selecting high-confidence samples, the paper states that their status is updated from ‘alive’ to ‘death.’ However, in survival analysis, the exact time of death is necessary. It is unclear how the actual death time is determined for subsequent training.
    • The specifics of the five-fold cross-validation process are ambiguous—whether the best validation checkpoint or the final checkpoint is used remains unclear.
    • In Figure 4, including a legend to clearly illustrate the meanings of p, c, and g would improve clarity.
    • The Discussion and Conclusion section contains detailed results, but conventional writing standards suggest that this section should focus on summarizing and discussing the work, with detailed results better placed in the Results section.
  • Please rate the clarity and organization of this paper

    Satisfactory

  • Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.

    The submission has provided an anonymized link to the source code, dataset, or any other dependencies.

  • Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html

    N/A

  • Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.

    (4) Weak Accept — could be accepted, dependent on rebuttal

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The recommendation is based on the strengths and weaknesses outlined above, with the expectation that the authors will address and improve the identified weaknesses.

  • Reviewer confidence

    Confident but not absolutely certain (3)

  • [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.

    N/A

  • [Post rebuttal] Please justify your final decision from above.

    N/A




Author Feedback

N/A




Meta-Review

Meta-review #1

  • Your recommendation

    Provisional Accept

  • If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.

    N/A


