Abstract
Continual Learning (CL) is crucial for enabling networks to dynamically adapt as they learn new tasks sequentially, accommodating new data and classes without catastrophic forgetting. Diverging from conventional perspectives on CL, our paper introduces a new perspective wherein forgetting could actually benefit the sequential learning paradigm. Specifically, we present BiasPruner, a CL framework that intentionally forgets spurious correlations in the training data that could lead to shortcut learning. Utilizing a new bias score that measures the contribution of each unit in the network to learning spurious features, BiasPruner prunes the units with the highest bias scores to form a debiased subnetwork preserved for a given task. As BiasPruner learns a new task, it constructs a new debiased subnetwork, potentially incorporating units from previous subnetworks, which improves adaptation and performance on the new task. During inference, BiasPruner employs a simple task-agnostic approach to select the best debiased subnetwork for predictions. We conduct experiments on three medical datasets for skin lesion classification and chest X-ray classification and demonstrate that BiasPruner consistently outperforms SOTA CL methods in terms of classification performance and fairness. Our code is available at: Link.
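As a rough illustration of the pruning idea described in the abstract, the minimal Python sketch below scores each unit by the gap between its mean activation on presumed bias-aligned ("easy") samples and bias-conflicting ("hard") samples, then masks out the most biased units; the helper names, the activation-gap definition of the score, and the fixed pruning ratio are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of unit-level bias scoring and pruning (illustrative, not the
# authors' implementation). Assumptions: "easy" samples are those the biased
# model classifies correctly (presumed bias-aligned), "hard" samples are the
# misclassified ones; a unit's bias score is the gap between its mean
# activation on the two groups; the most biased units are masked out.
import torch

def unit_bias_scores(acts_easy: torch.Tensor, acts_hard: torch.Tensor) -> torch.Tensor:
    """acts_*: (num_samples, num_units) activations of one layer.
    Returns one score per unit: higher = more aligned with the easy/biased group."""
    return acts_easy.mean(dim=0) - acts_hard.mean(dim=0)

def debiased_mask(scores: torch.Tensor, prune_ratio: float = 0.2) -> torch.Tensor:
    """Binary mask that zeroes out the prune_ratio fraction of units with the
    highest bias scores; surviving units form the task's debiased subnetwork."""
    k = int(prune_ratio * scores.numel())
    mask = torch.ones_like(scores)
    if k > 0:
        _, idx = torch.topk(scores, k)  # indices of the most biased units
        mask[idx] = 0.0
    return mask
```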
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2024/paper/2799_paper.pdf
SharedIt Link: https://rdcu.be/dV53N
SpringerLink (DOI): https://doi.org/10.1007/978-3-031-72117-5_9
Supplementary Material: https://papers.miccai.org/miccai-2024/supp/2799_supp.pdf
Link to the Code Repository
https://github.com/nourhanb/BiasPruner
Link to the Dataset(s)
N/A
BibTex
@InProceedings{Bay_BiasPruner_MICCAI2024,
author = { Bayasi, Nourhan and Fayyad, Jamil and Bissoto, Alceu and Hamarneh, Ghassan and Garbi, Rafeef},
title = { { BiasPruner: Debiased Continual Learning for Medical Image Classification } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15010},
month = {October},
pages = {90--101}
}
Reviews
Review #1
- Please describe the contribution of the paper
The manuscript proposes BiasPruner, which identifies and removes biased neurons by calculating the bias of each neuron, thus preventing shortcut learning. BiasPruner can adjust task-specific subnetworks for each task, thereby better mitigating forgetting and improving predictive performance.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This paper is well written and easy to understand.
- The design of the confidence score is quite novel. The authors utilize differences between biased and unbiased data to measure whether each neuron exhibits bias, which is intuitive and computationally convenient.
- The experiments are comprehensive and effectively demonstrate the validity of the method.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- The authors did not discuss the time complexity of the method. Since it involves two-stage training and requires calculating a confidence score for every parameter in the network, could this potentially slow down the continual learning process?
- The method requires storing a subnetwork for each task, which may incur additional storage overhead, yet the authors do not discuss this aspect.
- The shortcuts (or bias) may potentially reappear after pruning and fine-tuning, which is not explored.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
- The authors should explore the issues of time and memory usage associated with this method.
- Further investigation into the potential reemergence of shortcuts after fine-tuning is warranted.
- The absence of discussion on dynamic-path CL methods, which share similarities with pruning, such as [1], should be addressed. Also, paper [2] includes a related method to address bias in CL, which should be cited or discussed.
[1] Conditional Channel Gated Networks for Task-Aware Continual Learning. In CVPR, 2020.
[2] CBA: Improving Online Continual Learning via Continual Bias Adaptor. In ICCV, 2023.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Overall, this paper investigates a novel method and the experimental results are convincing. As such, I prefer to accept it.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
The authors present a CL strategy created to address bias during Continual Incremental Learning. More specifically, given a task t, they first find the highly biased units of the network, prune them, and take the remaining part as the subnetwork for task t. After pruning, the subnetwork is fine-tuned with a weighted loss where the weight is the “bias score” of the samples (easy samples are down-weighted, hard samples are up-weighted) and then frozen. It is worth mentioning that the proposed method does not require bias-annotated samples, as the “bias score” is computed using the predictions made while learning the task (correctly predicted -> easy samples, wrong predictions -> hard samples). The method can be considered an “architectural” CL method whose backbone does not grow as the number of tasks increases.
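As a rough sketch of the sample-weighted fine-tuning step described above: the snippet below assumes that easy samples are exactly those the biased model classifies correctly and uses two fixed illustrative weights rather than the paper's actual continuous bias scores.

```python
# Minimal sketch of sample-weighted fine-tuning (illustrative, not the authors'
# code). Assumption: correctly predicted samples are treated as easy (presumed
# bias-aligned) and down-weighted; misclassified samples are up-weighted.
import torch
import torch.nn.functional as F

def sample_weights(logits_biased: torch.Tensor, targets: torch.Tensor,
                   easy_weight: float = 0.5, hard_weight: float = 2.0) -> torch.Tensor:
    """Assigns a weight to each sample based on the biased model's prediction."""
    correct = logits_biased.argmax(dim=1).eq(targets)
    return torch.where(correct,
                       torch.full_like(targets, easy_weight, dtype=torch.float),
                       torch.full_like(targets, hard_weight, dtype=torch.float))

def weighted_ce_loss(logits_subnet: torch.Tensor, targets: torch.Tensor,
                     weights: torch.Tensor) -> torch.Tensor:
    """Cross-entropy where each sample contributes according to its weight."""
    per_sample = F.cross_entropy(logits_subnet, targets, reduction="none")
    return (weights * per_sample).mean()
```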
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The motivation behind the paper is strong, as CL needs to address more challenges (e.g. bias, fairness) to be applied successfully in the medical domain.
- The method, despite its simplicity, addresses both forgetting and bias in an interesting and novel manner. Even if the concept of pruning and freezing subnetworks is not novel, its application to reduce bias (and forgetting) is really interesting and notable.
- The experiments are conducted in a meaningful and complete manner, showing that reducing shortcuts not only reduces the bias of the model but also reduces forgetting (as stated in other works but not explicitly leveraged), since learned shortcuts are more prone to forgetting. I think that this is the real takeaway and strength of the paper.
- The paper is well written, leaving no doubts to the reader regarding the method and the setting.
- The authors repeated experiments with different task orderings, a choice that I appreciated a lot, as task ordering is a common factor that influences the learning of shortcuts and forgetting.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
- A better comparison with architecture-based CL methods is needed. The authors compare with three architectural CL methods, but I was expecting a more detailed description of the differences between BiasPruner and the competing methods (especially PackNet, which seems the most similar one).
- The paper lacks a forgetting measure in the results, which is a common metric in CL papers: the final accuracy alone is not informative in all settings, whereas forgetting explicitly measures how much knowledge is lost to catastrophic forgetting (one standard definition is sketched after this list).
- The setting (UpperBound, FairDisCo) is not clearly explained. Please rewrite it to be clearer.
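For reference, the forgetting measure alluded to above is usually the average-forgetting metric; the formulation below follows the common CL definition (e.g., Chaudhry et al., 2018) and is not notation taken from this paper.

```latex
% Average forgetting after training on T tasks: a_{l,j} denotes the accuracy on
% task j after training sequentially on tasks 1..l.
F_T = \frac{1}{T-1}\sum_{j=1}^{T-1}\left(\max_{l \in \{1,\dots,T-1\}} a_{l,j} \;-\; a_{T,j}\right)
```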
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
Please see weaknesses and strengths
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Accept — should be accepted, independent of rebuttal (5)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The idea is simple and effective and demonstrates that reducing the learning of shortcuts also reduces forgetting. The research is really well done and well presented.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
The paper presents BiasPruner as a novel approach to address bias in medical imaging with continual learning (CL). The paper achieves a fixed-size network and task-specific debiased subnetworks by employing a bias-aware network trained using the generalized cross-entropy (GCE) loss. Experimental results on skin-tone-biased datasets demonstrate the superiority of BiasPruner in terms of both accuracy and fairness.
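For context, the generalized cross-entropy loss referred to here is the one of Zhang & Sabuncu (2018); the formula below is that standard definition, not notation from this paper. Because its gradient down-weights hard, low-confidence samples relative to ordinary cross-entropy, GCE is commonly used to train an intentionally biased auxiliary model that latches onto easy (spurious) features.

```latex
% Generalized cross-entropy (Zhang & Sabuncu, 2018): p_y is the softmax
% probability assigned to the true class y, and q \in (0,1] is a hyperparameter.
\mathcal{L}_{\mathrm{GCE}}(p, y) = \frac{1 - p_y^{\,q}}{q}
% As q \to 0 this recovers standard cross-entropy (-\log p_y); q = 1 gives 1 - p_y.
```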
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
BiasPruner introduces a novel method to address bias in CL, by forming task-specific debiased subnetworks. The paper conducts thorough experiments on multiple datasets and the comparison with various baselines and state-of-the-art methods enhances the robustness of the evaluation.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
The main limitation of this approach is that the paper assumes the bias is easier to learn, so that training a neural network with GCE reinforces it. There might be cases with multiple bias attributes of different levels of difficulty, and it would be beneficial to discuss them. Also, the method involves several steps, including bias scoring, subnetwork formation, and knowledge transfer, which may increase implementation complexity.
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Do you have any additional comments regarding the paper’s reproducibility?
N/A
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review. Pay specific attention to the different assessment criteria for the different paper categories (MIC, CAI, Clinical Translation of Methodology, Health Equity): https://conferences.miccai.org/2024/en/REVIEWER-GUIDELINES.html
It is important to note that the underlying assumption is that the bias is easy to learn, so that the GCE would be effective. Also, the authors should discuss cases where there might be several bias attributes in the dataset and whether this approach generalizes to them.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making
Strong Accept — must be accepted due to excellence (6)
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper is well written and the motivation is clear. The methodology is novel and described clearly. The evaluations are comprehensive and show the effectiveness of BiasPruner.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Author Feedback
N/A
Meta-Review
Meta-review not available, early accepted paper.