Abstract
Emotion recognition plays a pivotal role in human-computer interaction by enabling machines to perceive and adapt to human affective states. While neuroimaging studies reveal significant functional lateralization between the left and right cerebral hemispheres during emotional processing, existing EEG-based emotion recognition methods face two critical challenges: (1) difficulty in aligning cross-hemispheric semantic features, and (2) limited generalizability across subjects and scenarios. To address these issues, we propose ShareLink, a novel EEG-based framework with Shared Cross-Hemispheric Structures. Our approach introduces three key innovative modules: (1) the Dynamic Shared Hemispheric Structure (DSHS) enforces non-Euclidean hemispheric structure constraints by sharing learnable adjacency matrix parameters across the two hemispheres, thereby effectively aligning semantic representations and extracting more discriminative hemispheric asymmetry features; (2) the Cross-Hemisphere Attention (CHA) shares a similarity matrix between the hemispheres to establish dynamic inter-hemispheric links, enhancing the model’s ability to capture interaction information while reducing parameters and mitigating overfitting risks; (3) the Shared Hemispheres Mixture-of-Experts (SHMoE) leverages multiple expert modules to abstract representations into a finite set of characteristics and employs a shared expert set to map bi-hemispheric features into a unified space, ensuring consistent and generalizable left-right hemisphere representations. Evaluated on the SEED and SEED-IV datasets under cross-subject paradigms, ShareLink achieves accuracies of 80.61% ± 6.16% and 63.33% ± 8.29%, respectively, demonstrating superior cross-domain generalization. This work provides new insights into neurophysiologically inspired computational models for emotion recognition. The code is available at: https://github.com/Huangzx1023/ShareLink.
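To make the two sharing mechanisms concrete, the sketch below illustrates the ideas the abstract describes: a single learnable adjacency matrix applied to both hemispheres (in the spirit of DSHS) and one similarity matrix reused for attention in both directions (in the spirit of CHA). This is a minimal, illustrative PyTorch sketch, not the authors' implementation; the module names, dimensions, and the toy 31-channels-per-hemisphere split are assumptions, and the linked repository is the authoritative source.
```python
# Minimal sketch (not the authors' code) of the sharing ideas in the abstract:
# one learnable adjacency for both hemispheres, one similarity matrix for both
# attention directions. Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class SharedHemisphereGraphConv(nn.Module):
    """One adjacency matrix shared by the left and right hemisphere graphs."""
    def __init__(self, n_channels_per_hemi: int, in_dim: int, out_dim: int):
        super().__init__()
        # Learnable adjacency shared across hemispheres (DSHS-style constraint).
        self.adj = nn.Parameter(torch.randn(n_channels_per_hemi, n_channels_per_hemi) * 0.01)
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, left: torch.Tensor, right: torch.Tensor):
        # left/right: (batch, channels_per_hemi, in_dim)
        a = torch.softmax(self.adj, dim=-1)              # row-normalized adjacency
        return self.proj(a @ left), self.proj(a @ right)


class CrossHemisphereAttention(nn.Module):
    """Similarity matrix computed once and reused for both attention directions."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, left: torch.Tensor, right: torch.Tensor):
        # One left-vs-right similarity matrix shared by both passes (CHA-style).
        sim = self.q(left) @ self.k(right).transpose(-1, -2) / left.size(-1) ** 0.5
        left_out = torch.softmax(sim, dim=-1) @ self.v(right)                      # left attends to right
        right_out = torch.softmax(sim.transpose(-1, -2), dim=-1) @ self.v(left)    # right attends to left
        return left_out, right_out


if __name__ == "__main__":
    # Toy example: a 62-channel montage split into two 31-channel hemispheres,
    # with 5-band differential entropy features per channel.
    left, right = torch.randn(8, 31, 5), torch.randn(8, 31, 5)
    gcn = SharedHemisphereGraphConv(31, 5, 32)
    att = CrossHemisphereAttention(32)
    l, r = att(*gcn(left, right))
    print(l.shape, r.shape)  # torch.Size([8, 31, 32]) torch.Size([8, 31, 32])
```
The point of the sketch is that both hemispheres are processed by the same adjacency and the same similarity matrix, which is what couples their representations and keeps the parameter count down.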
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/5190_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
https://github.com/Huangzx1023/ShareLink
Link to the Dataset(s)
SEED dataset: https://bcmi.sjtu.edu.cn/home/seed/seed.html
SEED-IV dataset: https://bcmi.sjtu.edu.cn/home/seed/seed-iv.html
BibTex
@InProceedings{HuaZix_ShareLink_MICCAI2025,
author = { Huang, Zixuan and Kong, Lingyao and Ao, Licheng and Yao, Shiyi and Xiang, An and Miao, Fen},
title = { { ShareLink: Neuro-Inspired EEG-based Cross-Subject Emotion Recognition via Shared Bi-hemisphere } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15971},
month = {September},
}
Reviews
Review #1
- Please describe the contribution of the paper
- Neuro-Inspired Cross-Hemispheric Modeling: The paper proposes a novel framework (ShareLink) that explicitly leverages the asymmetry and interaction between the left and right cerebral hemispheres for EEG-based emotion recognition. It introduces a Dynamic Shared Hemispheric Structure (DSHS) that shares learnable adjacency matrix parameters to enforce symmetric constraints and align semantic representations across the hemispheres.
- Innovative Attention Mechanism: The work further presents a Cross-Hemisphere Attention (CHA) module that shares a similarity matrix between hemispheres to form dynamic inter-hemispheric links, aimed at enhancing interaction information while reducing overall model parameters.
- Generalization via Mixture-of-Experts: In order to boost cross-subject generalizability, the paper introduces a Shared Hemispheres Mixture-of-Experts (SHMoE). This module leverages multiple expert modules to abstract the representations into a unified latent space that is consistent across subjects.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- The idea of enforcing shared bi-hemispheric structures for semantic alignment and modeling lateralized brain function is interesting and builds on neurophysiological insights.
- By combining DSHS, CHA, and SHMoE, the paper offers a multi-pronged approach to tackle two critical challenges: cross-hemispheric misalignment and limited cross-subject generalization.
- The ablation experiments clearly demonstrate that removing any of the proposed modules causes substantial performance drops, indicating that each module contributes meaningfully to the model’s effectiveness.
- The CHA module emulates the collaborative functioning of the left and right hemispheres by sharing a similarity matrix between them. This sharing establishes dynamic associations among channels across the two hemispheres and enhances the model’s ability to capture inter-hemispheric interactions. Collectively, these modules strengthen the interaction between the hemispheres.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- The experiments compare the proposed method with several older transfer learning methods. Relying on these outdated baselines raises concerns about whether ShareLink would still hold its advantages when evaluated against more recent state-of-the-art EEG or domain adaptation approaches.
- In the abstract, the introduction, and Section 2.4, the authors state that the SHMoE module enhances the generalization of the left- and right-hemisphere representations. However, I have some concerns. First, why does a cosine-similarity-based routing operation increase generalization? Additionally, the authors mention that “For example, an EEG signal corresponding to a happy emotion in one subject is likely more similar to other EEG signals from the same subject than to those from other subjects.” How does the SHMoE module address this issue? These points are not clearly explained in the article.
- The manuscript largely builds upon known ideas in domain adaptation and EEG-based emotion recognition, raising a concern that the novelty is primarily in system integration rather than in fundamentally new techniques. A clearer positioning relative to recent advances would strengthen the paper’s claims.
- Please rate the clarity and organization of this paper
Poor
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
- The paper is grounded in interesting neurophysiological insights and proposes an integrated framework that—through its DSHS, CHA, and SHMoE modules—tackles the challenges of hemispheric misalignment and domain generalization in EEG emotion recognition.
- The ablation studies are effective in demonstrating that each component is essential for the model’s performance.
- The comparison algorithm used in the article appears to be outdated. The evaluation could be made more compelling by comparing against more recent methods rather than older transfer learning techniques.
- The presentation of the article is unclear, and the explanation of its innovations is difficult to comprehend.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Reject
- [Post rebuttal] Please justify your final decision from above.
The paper suffers from three critical shortcomings:
- Incomplete benchmarking and experimental results: the manuscript lacks direct comparisons with recent state-of-the-art methods, and the limited experimental results (only two tables in the entire manuscript) are insufficient to demonstrate its competitiveness.
- Weak methodological justification: no interpretability analyses are provided in the experiments, leaving the model’s cross-subject generalization mechanisms poorly explained.
- Suboptimal performance: the proposed method underperforms the current state-of-the-art approach (Chen et al., 2024) by a substantial margin of 10% in cross-subject accuracy, which poses a significant limitation for practical applicability.
Reference: Chen, Bianna, C. L. Philip Chen, and Tong Zhang. “GDDN: Graph domain disentanglement network for generalizable EEG emotion recognition.” IEEE Trans. Affect. Comput. 15(3), 2024, 1739–1753.
Review #2
- Please describe the contribution of the paper
The paper presents ShareLink, a framework for EEG-based emotion recognition that addresses the challenges of hemispheric misalignment and limited generalizability. It introduces a shared hemispheric structure to align features and extract discriminative asymmetry features. The framework also includes a cross-hemisphere attention mechanism to enhance interaction information capture and reduce overfitting. Finally, it utilizes a mixture-of-experts approach to improve generalization by mapping bilateral representations into a unified space. Experimental results on public datasets SEED and SEED-IV demonstrate the effectiveness of the proposed method.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This paper proposes three modules—DSHS, CHA, and SHMoE—to address the challenges of aligning cross-hemispheric semantic features and limited generalization across subjects and scenarios in EEG-based emotion recognition. The paper features a clear structure, with the methodology section providing detailed descriptions of each module’s construction.
- The paper conducted cross-subject experiments on public datasets SEED and SEED-IV, with ablation studies demonstrating the effectiveness of each proposed module.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
The most significant issue lies in the selection of baseline methods. As shown in Table 1, DANN was originally proposed in 2016, while the other compared methods are even more outdated. In fact, newer models with superior performance for emotion recognition on the SEED and SEED-IV datasets already exist [1-3]. The paper’s failure to benchmark against these state-of-the-art approaches substantially undermines the claimed effectiveness of the proposed method. Whether this constitutes deliberate omission or simply reflects an inadequate review of recent advances in the field, the oversight is unacceptable for academic research.
[1] Xu, Yongling, et al. “AMDET: Attention based multiple dimensions EEG transformer for emotion recognition.” IEEE Transactions on Affective Computing 15.3 (2023): 1067-1077.
[2] Song, Yonghao, et al. “EEG Conformer: Convolutional transformer for EEG decoding and visualization.” IEEE Transactions on Neural Systems and Rehabilitation Engineering 31 (2022): 710-719.
[3] Ding, Yi, et al. “EmT: A novel transformer for generalized cross-subject EEG emotion recognition.” IEEE Transactions on Neural Networks and Learning Systems (2025).
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The authors claimed to release the source code and/or dataset upon acceptance of the submission.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
While the authors propose a multi-module framework to address existing challenges in EEG-based emotion recognition, with detailed descriptions of each module’s architecture, the experimental comparison employs outdated state-of-the-art methods. This likely reflects insufficient literature review of recent advances in the field, which raises serious concerns about the study’s scientific rigor.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Reject
- [Post rebuttal] Please justify your final decision from above.
Although the structure of this paper is relatively complete and the proposed method is innovative, the state-of-the-art baselines compared in the experiments are too old (the most recent is from 2017), which greatly undermines the evaluation of the method’s effectiveness.
Review #3
- Please describe the contribution of the paper
The paper introduces ShareLink, a novel EEG-based emotion recognition framework leveraging cross-hemispheric shared structures to enhance semantic alignment between hemispheres and improve cross-subject generalization. ShareLink incorporates innovative modules such as Dynamic Shared Hemispheric Structure (DSHS), Cross-Hemisphere Attention (CHA), and Shared Hemispheres Mixture-of-Experts (SHMoE).
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
Innovative use of shared adjacency matrix parameters and similarity matrices to align hemispheric semantic representations effectively.
Demonstration of significant performance improvements in cross-subject emotion recognition on widely used SEED and SEED-IV datasets.
Comprehensive ablation studies effectively highlight the contributions of individual modules, underlining the robustness and effectiveness of the proposed model.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
Little information is provided about how the hyperparameters were selected and tuned.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
What is the input of the model? It is unclear whether the classification problem was addressed per time sample or per trial.
Were the state-of-the-art results in Table 1 extracted from the literature or produced by the authors? Some of the references presented in the table do not report results on the SEED and SEED-IV datasets.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(5) Accept — should be accepted, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The proposed methodology is innovative and outperformed the state-of-the-art approaches.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
The authors effectively addressed most of my questions.
Author Feedback
Q1 (R1&R2): Concerns about baselines
To address the concerns of Reviewers 1 and 2 regarding our baseline methods, we surveyed the latest literature (AMDET [IEEE TAC, 2023], EEG Conformer [IEEE TNSRE, 2022], and EmT [IEEE TNNLS, 2025]). Although their data preprocessing and experimental setups differ slightly from ours for the cross-subject task on the SEED dataset, according to the cross-subject results reported in the EmT paper, the accuracies of all three methods are lower than our model’s. This confirms the superiority of our proposed model. In future work, we will compare against more advanced methods.
Q2 (R1): The source of generalization and the role of the cosine router
SHMoE’s generalization arises from aligning its Mixture-of-Experts backbone with the data’s conditional structure rather than from the cosine router alone. Experts form parallel if-then branches, each learning a simple conditional mapping of emotional attributes, so adding experts decomposes a complex task into subtasks. Sparse routing selects only the experts that best match each input, filtering out subject-specific noise and focusing on stable emotion-related patterns. By preserving the data’s inherent conditional logic in its architecture, SHMoE achieves robustness to distribution shifts and cross-subject generalization. A traditional linear router computes dot products without normalization, so routing scores depend on feature magnitudes and certain experts dominate. In contrast, the cosine router projects features into an expert space and applies L2 normalization to both the projected features and each expert embedding before computing dot products on a hypersphere; routing therefore depends only on directional alignment, enabling balanced top-k expert selection for each emotional attribute and the learning of cross-subject invariant emotion patterns. Consequently, the cosine router directs the same emotional attributes from different subjects to the same experts in a shared space, addressing the concern that “a happy EEG signal is more similar within a subject than across subjects.”
Q3 (R1): Novelty of the model
The shared structures project bi-hemispheric features into a common semantic space with consistent attribute meanings. This allows direct, meaningful comparisons and ensures that observed differences reflect true underlying asymmetries without representational bias, thereby revealing clearer asymmetry patterns. Because the left hemisphere tends to handle positive emotions and the right hemisphere negative ones, these asymmetry patterns serve as effective features for emotion classification. Under the bi-hemispheric shared framework for our cross-subject emotion recognition task, the model’s input is the bi-hemispheric differential entropy feature, incorporating both spatial and spectral dimensions. (1) To learn the intrinsic relationships between features along the spatial dimension, the DSHS module spatially aligns the semantic representations of the two hemispheres through a learnable adjacency matrix shared between them. (2) To harmonize the spectral representations of the two hemispheres, the CHA module enables dynamic information interaction through a cross-hemisphere shared similarity matrix. (3) To address the challenge of cross-subject generalization, the SHMoE module employs shared expert sub-networks to map features into a unified expert space, capturing common cross-subject abstract emotional attributes and thereby significantly enhancing the model’s robustness and generalization in cross-subject EEG emotion recognition.
Q4 (R4): Implementation details
- We used grid search to tune the learning rate {1e-3, 1e-4, 1e-5, 1e-6} and embedding dimension {16, 32, 64}.
- We use segmented time windows as input to our model, i.e., the classification problem is addressed for each time sample.
- All baseline methods share the same experimental setup as ours, and their results are all taken from the references.
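A minimal sketch of the cosine routing described in the Q2 response above may be helpful: features are projected into an expert space, both the projections and the expert embeddings are L2-normalized, routing scores are cosine similarities on the hypersphere, and the top-k experts are selected. This is illustrative PyTorch under stated assumptions, not the authors' code; the class and parameter names (CosineRouter, n_experts, top_k) are placeholders.
```python
# Illustrative cosine-router sketch (assumed PyTorch); names are placeholders,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CosineRouter(nn.Module):
    def __init__(self, feat_dim: int, expert_dim: int, n_experts: int, top_k: int = 2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, expert_dim)           # map features into the expert space
        self.expert_emb = nn.Parameter(torch.randn(n_experts, expert_dim))
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (batch, feat_dim)
        h = F.normalize(self.proj(x), dim=-1)                 # L2-normalize projected features
        e = F.normalize(self.expert_emb, dim=-1)              # L2-normalize expert embeddings
        scores = h @ e.t()                                    # cosine similarity on the hypersphere
        top_scores, top_idx = scores.topk(self.top_k, dim=-1) # balanced top-k expert selection
        gates = torch.softmax(top_scores, dim=-1)             # weights over the selected experts
        return gates, top_idx


if __name__ == "__main__":
    router = CosineRouter(feat_dim=64, expert_dim=32, n_experts=8, top_k=2)
    gates, idx = router(torch.randn(4, 64))
    print(gates.shape, idx.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```
Because both the projected features and the expert embeddings are normalized, routing depends only on direction, which is the property the rebuttal credits for steering the same emotional attribute from different subjects to the same experts.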
Meta-Review
Meta-review #1
- Your recommendation
Invite for Rebuttal
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
N/A
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Reject
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
- The paper lacks detailed information about the datasets used, such as SEED and SEED-IV. For example, the number of subjects in SEED-IV is not clearly stated.
- The description of how the training and testing sets are split is unclear and should be elaborated.
- The study lacks evaluations on external datasets to demonstrate the effectiveness and robustness of the proposed method.
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Reject
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
This paper introduces a method for cross-subject emotion recognition from EEG signals. While the motivation behind each module in the framework is reasonable, the contribution of the paper is limited by the lack of evaluation against more state-of-the-art methods and the use of only accuracy as a metric. Incorporating additional evaluation metrics, such as the F1-score and confusion matrix, would strengthen the assessment of the proposed approach.