Abstract
Retinal image analysis not only reveals the microscopic structure of the eye but also provides insights into overall health status. Therefore, employing multi-task learning to simultaneously address disease recognition and segmentation in retinal images can improve the accuracy and comprehensiveness of the analysis. Given the need for medical privacy, federated multi-task learning provides an effective solution for retinal image analysis. However, existing federated multi-task learning studies fail to address client resource constraints or knowledge discrepancies between global and local models. To address these challenges, we propose FedBKD, a novel federated multi-task learning framework for retinal image analysis. FedBKD leverages a server-side foundation model and effectively bridges the knowledge discrepancy between the clients and the server. Before local training, the adaptive sub-model extraction module ranks neurons in the global model by their activation values and extracts the most representative sub-model that fits each client's computational resources, thereby facilitating local adaptation of the global model. Additionally, we design a feature consistency optimization strategy to ensure alignment between the local model and the global foundation model's prior knowledge, which reduces error accumulation in the client sub-model during multi-task learning and ensures better adaptation to local tasks. Experimental results on a multi-center retinal image dataset demonstrate that FedBKD achieves state-of-the-art performance. Our code is available at https://github.com/Yjing07/FedBKD.git.
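For illustration, the following minimal sketch shows one way activation-based sub-model extraction could work on a single linear layer: neurons are ranked by their mean activation on a calibration batch, and only a β-fraction of them is kept. The function name, the calibration batch, and the ranking criterion are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

def extract_sub_layer(layer: nn.Linear, calib_x: torch.Tensor, beta: float) -> nn.Linear:
    """Keep the top-beta fraction of output neurons, ranked by mean activation."""
    with torch.no_grad():
        acts = torch.relu(layer(calib_x))            # activations on a calibration batch
        scores = acts.mean(dim=0)                    # one importance score per output neuron
        k = max(1, int(beta * layer.out_features))   # number of neurons the client can afford
        keep = torch.topk(scores, k).indices         # indices of the most active neurons

        sub = nn.Linear(layer.in_features, k, bias=layer.bias is not None)
        sub.weight.copy_(layer.weight[keep])         # copy the selected rows of the global weights
        if layer.bias is not None:
            sub.bias.copy_(layer.bias[keep])
    return sub

# Example: a client with beta = 1/4 keeps a quarter of the global layer's neurons.
global_layer = nn.Linear(512, 256)
calibration_batch = torch.randn(32, 512)
client_layer = extract_sub_layer(global_layer, calibration_batch, beta=0.25)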
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/2799_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
N/A
Link to the Dataset(s)
N/A
BibTex
@InProceedings{YanJin_Bridging_MICCAI2025,
author = { Yang, Jing and Ma, Yuxi and Yu, Jin-Gang and Gao, Feng and Yang, Shuting and Cai, Du and Wang, Jiacheng and Wang, Liansheng},
title = { { Bridging Knowledge Discrepancy in Retinal Image Analysis through Federated Multi-Task Learning } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15973},
month = {September},
pages = {35--44}
}
Reviews
Review #1
- Please describe the contribution of the paper
This paper introduces a novel federated multi-task learning framework for retinal image analysis while protecting medical privacy. The framework ensures effective knowledge transfer and improves task performance on the retinal dataset.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This method addresses resource limits across different clients: the adaptive sub-model extraction provides a way to obtain a sub-model that fully utilizes each client's resources and meets its requirements.
- This method mitigates the knowledge discrepancy between the sub-model and the global model; otherwise, gradient errors accumulate during training and degrade model accuracy.
- This method provides valuable insights on federated multi-task learning, namely how to select sub-models flexibly and efficiently, and how to reduce the discrepancy between the sub-model and the global model.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- On the experiments: regarding the proportion of parameters in the client model, β is {1/2, 1/4, 1/8, 1/16} for the segmentation task but only {1/2} for the classification task. More tests of the classification task with different β values are needed.
- About the applicable scenarios of this method: the paper targets retinal image analysis specifically. Why can this method not be generalized to other medical image classification/segmentation tasks? If the method works, it should also work for other imaging tasks, not just retinal images.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Firstly, the applicable scenario is reasonable and the method achieves SOTA performance on retinal images. It also provides valuable insights on federated multi-task learning.
However, this work focuses only on retinal image analysis; if the method works, it should also perform well on other imaging tasks, so restricting it to retinal images is questionable.
In sum, a weak accept seems appropriate.
- Reviewer confidence
Somewhat confident (2)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #2
- Please describe the contribution of the paper
This paper introduces FedBKD, a novel Federated Multi-Task Learning (FMTL) framework designed for retinal image analysis. The primary goal is to address challenges inherent in existing FMTL methods, specifically the failure to account for varying client resource constraints and the knowledge discrepancy that arises between centralized global models and decentralized local models. The authors propose a novel selection method based on the degree of activation of each neuron. The authors evaluated FedBKD on multi-center retinal image datasets for segmentation and classification tasks, demonstrating SOTA performance compared to several existing federated learning methods.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The paper is easy to follow.
- The proposed method is novel and the experiments are well designed.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- The framework introduces additional components (ASE, FCO), which might add complexity to the implementation and training process compared to simpler FL methods.
- The performance is sensitive to the hyperparameters λ and β; finding the optimal λ and β might require careful tuning.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I have no reason to reject this paper.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #3
- Please describe the contribution of the paper
This paper proposes a federated multi-task learning framework, FedBKD, for retinal image analysis that effectively addresses two critical challenges: client resource constraints and knowledge discrepancies between global and local models. The key innovation lies in integrating a server-side foundation model with two specialized components: an Adaptive Sub-model Extraction (ASE) module that dynamically selects optimal neuron subsets based on activation patterns to create resource-efficient client models, and a Feature Consistency Optimization (FCO) strategy that aligns local and global representations through centered kernel alignment. The framework demonstrates significant performance improvements, achieving state-of-the-art results on multi-center retinal datasets.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
This paper presents a technically elegant solution for federated foundation model updating through two key innovations: (1) the Adaptive Sub-model Extraction (ASE), which dynamically selects neurons through activation-based ranking of existing foundation model weights, making it computationally lightweight for resource-constrained clients; and (2) the Feature Consistency Optimization (FCO), which operates directly on layer-wise feature representations using standard CKA similarity metrics. The framework's strength lies in its seamless integration with existing foundation models: RETFound can be directly deployed without retraining, and the federated updating mechanism maintains the original model's architecture while enabling efficient knowledge transfer.
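For reference, a minimal linear CKA computation in the spirit of Kornblith et al. (2019) is sketched below; the variable names and the use of 1 - CKA as a per-layer consistency penalty are illustrative assumptions, not the paper's exact FCO loss.

import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two feature matrices of shape (n_samples, n_features)."""
    Xc = X - X.mean(axis=0, keepdims=True)    # center each feature dimension
    Yc = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Xc.T @ Yc, ord="fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, ord="fro") * np.linalg.norm(Yc.T @ Yc, ord="fro")
    return float(num / den)

# A layer-wise consistency penalty could then be 1 - CKA between client and server features.
local_feats = np.random.randn(64, 128)        # features from the client sub-model
global_feats = np.random.randn(64, 256)       # features from the server-side foundation model
penalty = 1.0 - linear_cka(local_feats, global_feats)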
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
The Feature Consistency Optimization (FCO) module proposed in this paper shows limited methodological novelty. Its design essentially constitutes a direct application of the existing CKA-based feature alignment approach and closely resembles the core ideas presented in recent work [1][2]. More critically, the paper lacks a systematic theoretical derivation to support the proposed module. [1] Kornblith S., Norouzi M., Lee H., et al. Similarity of neural network representations revisited. International Conference on Machine Learning, PMLR, 2019: 3519-3529. [2] Zhou Z., Shen Y., Shao S., et al. Rethinking centered kernel alignment in knowledge distillation. arXiv preprint arXiv:2401.11824, 2024.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(4) Weak Accept — could be accepted, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
(1) While \mathcal{L}_{local} is referred to as the local loss, its precise definition is not provided. (2) In the descriptions of local training feature consistency and global aggregation feature consistency, the authors state that they are inspired by references [1] and [2], respectively. Does this imply that the formulas are directly adopted from those works? What are the differences between the proposed approach and the original papers? (3) Regarding the segmentation datasets, although the authors cite relevant previous work, it is recommended to explicitly mention the names of the datasets used. For the classification task, APTOS 2019 is employed; are there any additional datasets? If only the APTOS dataset is used, how is the training conducted? Does this setup meet the definition of a multi-center setting?
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Author Feedback
N/A
Meta-Review
Meta-review #1
- Your recommendation
Provisional Accept
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A