Abstract
The advancement of personalized cardiac modeling, particularly through digital cardiac twins, enables tailored treatments based on the physiology of the individual patient. Traditional physics-based methods for optimizing the parameters of these cardiac models face challenges in clinical adoption due to their computational cost. Recent shifts towards data-driven approaches offer improved efficiency but struggle with generalization and the integration of core electrophysiological principles. The emerging use of physics-informed neural networks (PINNs) has the potential to combine the advantages of these two approaches, although PINNs still require retraining from scratch for each subject. This paper introduces a novel framework for meta-learning PINNs to overcome these challenges, enabling rapid personalization of a PINN to new subjects’ data via simple feedforward computation.
We instantiate this meta-PINN framework using the Eikonal model as the governing physics, demonstrating its efficacy in significantly reducing computational demands while improving the predictive accuracy of personalized cardiac models.
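For readers unfamiliar with the Eikonal model used here as the governing physics: it relates the activation time T(x) and the local conduction speed F(x) through F(x)·|∇T(x)| = 1. Below is a minimal, generic sketch of how such a residual is typically enforced in a PINN via automatic differentiation; this is an illustrative sketch under assumed network definitions and placeholder collocation points, not the authors' released code.

```python
import torch
import torch.nn as nn

# Generic sketch of an Eikonal-residual PINN loss (not the authors' implementation).
# Assumptions: `activation_net` maps 3D coordinates x -> activation time T(x);
# `speed_net` maps x -> conduction speed F(x) > 0. Both are simple placeholder MLPs.
activation_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
speed_net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1), nn.Softplus())

def eikonal_residual(x):
    """Per-point residual of F(x) * |grad T(x)| - 1 = 0."""
    x = x.requires_grad_(True)
    T = activation_net(x)                                            # (N, 1) activation times
    grad_T = torch.autograd.grad(T.sum(), x, create_graph=True)[0]   # (N, 3) spatial gradient
    F = speed_net(x)                                                 # (N, 1) conduction speeds
    return F * grad_T.norm(dim=-1, keepdim=True) - 1.0

x_colloc = torch.rand(1024, 3)          # placeholder collocation points
pde_loss = eikonal_residual(x_colloc).pow(2).mean()
```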
Links to Paper and Supplementary Materials
Main Paper (Open Access Version): https://papers.miccai.org/miccai-2025/paper/3583_paper.pdf
SharedIt Link: Not yet available
SpringerLink (DOI): Not yet available
Supplementary Material: Not Submitted
Link to the Code Repository
https://github.com/temporary-repos/MICCAI2025
Link to the Dataset(s)
N/A
BibTex
@InProceedings{TolMar_MetaLearning_MICCAI2025,
author = { Toloubidokhti, Maryam and Missel, Ryan and Lian, Shichang and Wang, Linwei},
title = { { Meta-Learning Physics-Informed Neural Networks for Personalized Cardiac Modeling } },
booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
year = {2025},
publisher = {Springer Nature Switzerland},
volume = {LNCS 15960},
month = {September},
pages = {346--356}
}
Reviews
Review #1
- Please describe the contribution of the paper
This manuscript introduces a novel framework for estimating patient-specific bi-ventricular activation maps and the underlying tissue parameters (i.e., conduction velocities) from sparse measurements, based on physics-informed neural networks. In particular, the authors approach this topic as a learning-to-optimize meta-learning task. Given encodings of a query activation map and a set of context activation maps, two hypernetworks learn to predict the parameters of two implicit neural fields, which should output the conduction velocity field and the full activation map, respectively. Consequently, such a method may predict personalized activation maps and the underlying conduction velocities for new patients/activation maps without the need for iterative optimization schemes.
As a proof-of-concept, the authors evaluated the method for the inhomogeneous isotropic Eikonal model. The method was trained on a set of 200 synthetic simulations and evaluated on an additional 200 synthetic simulations as well as the activation maps of a real animal model. On the synthetic evaluation, the method proved to be substantially more accurate and faster than the optimization of a numerical simulator.
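To make the hypernetwork-conditioned neural-field setup summarized above concrete, here is a minimal sketch of one such hypernetwork/field pair; the layer sizes, the single hidden layer, and the names `HyperField`, `EMB`, and `HID` are placeholder assumptions, not the authors' architecture. In the described method, two such pairs would be used (one producing the activation-time field, one producing the conduction velocity field).

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

# Minimal sketch: a hypernetwork predicts the weights of a small coordinate-based
# field network from a subject/query embedding (all sizes are assumptions).
EMB, HID = 128, 64          # embedding dimension, field hidden width

class HyperField(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypernetwork: embedding -> flat parameter vector of the field MLP
        n_params = (3 * HID + HID) + (HID * 1 + 1)   # one hidden layer, scalar output
        self.hyper = nn.Sequential(nn.Linear(EMB, 256), nn.ReLU(),
                                   nn.Linear(256, n_params))

    def forward(self, x, embedding):
        """x: (N, 3) coordinates; embedding: (EMB,) per-subject/query code."""
        p = self.hyper(embedding)
        w1, b1, w2, b2 = torch.split(p, [3 * HID, HID, HID, 1])
        h = torch.tanh(nnf.linear(x, w1.view(HID, 3), b1))
        return nnf.linear(h, w2.view(1, HID), b2)    # (N, 1) field value, e.g. speed or AT

field = HyperField()
out = field(torch.rand(10, 3), torch.randn(EMB))     # predicted field at 10 query points
```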
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
(1) The proposed meta-learning framework for fast and personalized PINNs of cardiac electrophysiology is novel and the paper was very interesting to read. Despite only showing a proof-of-concept on limited amounts of synthetic data (and one animal case), I believe that there is merit in the presented approach and that the idea would be of interest to the cardiac modeling community.
(2) On the unseen synthetic test data, the proposed approach performed substantially better in terms of accuracy and computational run-time compared to optimizing a traditional physics solver and fitting a multi-PINN to new data.
(3) I appreciate that the authors shared their source code for generating the synthetic training data and for training and evaluating the neural networks.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
(1) Even though the paper was very interesting to read, it is missing some essential information as well as clarity in both the method and experiments sections, which makes it difficult to fully assess the contribution (please see my questions and concerns in the additional comments section).
(2) The variability of the generated synthetic data seems to be too limited, especially with respect to the geometry. In particular, since there exist open-source bi-ventricular shape models (e.g., Schuler & Loewe 2021), it is not clear to me why the authors limited themselves to generating their synthetic data from only four human datasets. Since the authors intend to move to the anisotropic Eikonal model in the future, I recommend having a look at the public synthetic dataset generated by Pilia et al. (2023), which comprises 1.8 million simulations from 1,000 heart models.
(3) The paper lacks a thorough discussion of the results. It would also benefit from additional context with regard to activation map regression given sparse endocardial measurements. In addition to the already cited paper by Costabal et al., it may be valuable to put the proposed method in context with alternative graph-based methods, e.g., as presented by Meister et al. (2021) or Hellar et al. (2022).
(4) There are several spelling and grammar errors. I would kindly ask the authors to run a spell- and grammar-checker over the manuscript.
- Please rate the clarity and organization of this paper
Poor
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
(1) Synthetic data generation: (a) Since the dataset was built from four MRI images, what pre-processing was applied? Were the heart geometries aligned or was a local coordinate system used? (b) What were the five scar locations and how was the scar modeled? (c) Were there more than 2 tissue classes? (d) What conduction velocities were prescribed per tissue class? (e) Since graph convolutions were employed, how were the meshes constructed and how homogeneous were the lengths of the edges? (f) How many activation points were chosen for each simulation? (g) Was the data split into training and testing by geometry, scar distribution, activation position, or just randomly?
(2) Input data to encoders: (a) What were the features used as input to the encoders? Do they only comprise the activation time? (b) On page 5, you mention that “the first 20% activation of Y_i^q” are used as input to the source embedding. Could the authors please provide a more detailed explanation of how the activation times were selected? Does that mean that the velocity encoder uses 100% of the activation map as input? (c) Since only sparse information is given as input, what value was prescribed for the nodes that are not associated with the sparse data? (d) Was the sparse endocardial data sampled on both the left and right ventricle or only on one side? (e) Was the input data normalized? (f) What is the dimensionality of the resulting encoding vector?
(3) Query & Context set: (a) Considering that it is in general very difficult to obtain high-quality and high-resolution activation maps, it is not clear to me which medical question could be answered with the proposed method. Could the authors please share their thoughts and provide their justification for choosing 5 activation maps as the context? (b) It is not clear to me whether the proposed method would work equally well for a varying number of k observations (i.e., k != 5 activation maps in the experiments). Would it be possible for the authors to present quantitative results when using 1 to 5 activation maps?
(4) Meta-learning-based Neural Network Architecture: (a) In general, I find the idea very interesting to have a neural network parameterize a PINN subject to data containing information about the initial condition and tissue properties. However, I would assume that it is extremely tricky for any network to learn changes of the PINN weights that will satisfy the underlying PDE. Could the authors please share their insights into how well this architecture works with regards to satisfying the Eikonal PDE? If possible, could the authors visualize the residual per node? (b) In my opinion, an alternative and more “traditional” approach may be to use the encodings as additional inputs to the PINN and thus condition the neural network (ultimately, this could be an auto-decoder architecture). Do the authors have any idea how such a more traditional approach would compare against the proposed method? (c) How many neurons do the hyper-networks have per fully-connected layer? (d) Could the authors please share their insights and their justification for why the hypernetwork is smaller than the PINN or velocity network?
(5) PINN: (a) Considering that the ground truth data is sitting on a mesh / graph, how were the residual / collocation points chosen during training? (b) How are input coordinates treated that do not belong to the myocardium (e.g., the blood pool)?
(6) Training: (a) Could you please explain what you mean by an “episodic training scheme”? (b) For how many episodes / epochs was the network trained and how long did it take? (c) To which coordinates was the data loss term applied? (d) Why wasn’t the query sample chosen from the context set?
(7) Comparators: (a) Physics-based: Were the initial activation points assumed to be known or were they also optimized? (b) Meta-neural: Was the context set encoding still provided as input to the hyper-network that calibrates the PINN? Do I understand it correctly that the networks were trained fully supervised, i.e., the data loss was applied to all input coordinates? (c) Multi-PINN: Considering that synthetic ground truth for the conduction velocities is available, was this data used as a supervision signal for the velocity network?
(8) Results: (a) Since the method receives sparse measurements as input, I believe that it would be very valuable if the error analysis differentiated between the points that have an activation time associated with them and the ones that do not. Alternatively, to differentiate between the endocardium and the remaining myocardium. (b) Fig. 2, computational cost: It is not possible to identify which bars belong to the physics-based and the multi-PINN model. (c) Fig. 3: I believe that this illustration is a bit difficult to read since there are no 3D cues. Would it be possible to show color-coded surfaces from the top and the side instead? This would then also facilitate spotting regional differences. (d) Fig. 4, Meta-Neural: I am very surprised that the meta-neural approach outputs almost identical activation maps, because it would mean that the hypernetwork is predicting almost identical weights. Do the authors have any idea why this is happening? (e) Fig. 4, Meta-PINN: It may be a visualization problem, but it is not perfectly clear to me in what regard the method produces maps that capture the main activation patterns for the second and third rows. Could the authors please show and explain their reasoning? In addition, it would be very interesting to see whether the PDE loss is even satisfied in any of the presented results.
(9) Real Data: (a) Did the geometry here only comprise the epicardial surface or was an anatomical model of the ventricles available? (b) Was there any information about potential scar locations available and how was it used? (c) How similar is the geometry of the animal model to the synthetic training data? (d) Considering that meta-neural and meta-PINN both produce significantly worse results, one may opt to refine these networks for the given data. Could the authors share their thoughts on whether the networks would converge faster than multi-PINN trained from scratch?
(10) Math notation: (a) Please use a bold font to better differentiate between tensors and scalars. (b) Section 3, learning to identify: It seems like there is a contradiction between the text and the math notation regarding s and c. The text suggests here that s and c are the sources of activation and the tissue properties, respectively. However, the math notation and the following text suggest that s and c are simply the embeddings of the input activation maps. Since it does not seem like there is a supervision signal for the encoders during training, one cannot be sure that the encoders actually learn to extract the source and the spatially-varying tissue properties. Could the authors please clarify this?
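One possible reading of the s/c notation questioned in (10), consistent with the contribution summary above and the authors' rebuttal below, is sketched here. The symbols and the exact conditioning of each hypernetwork (e.g., whether the activation-map hypernetwork also receives c_i) are paraphrased assumptions, not taken verbatim from the paper.

```latex
% Sketch (paraphrased): s_i encodes the activation source from the query map's
% earliest activations; c_i encodes subject-level tissue properties from the
% context set; hypernetworks h_T, h_F map these embeddings to field weights.
\[
\begin{aligned}
  s_i &= \mathrm{Enc}_s\big(Y_i^{q}\big),
  &\qquad
  c_i &= \mathrm{Enc}_c\big(\{Y_i^{k}\}_{k=1}^{K}\big),\\
  \theta_T &= h_T(s_i),
  &\qquad
  \theta_F &= h_F(c_i),\\
  \hat{T}_i(x) &= f_{\theta_T}(x),
  &\qquad
  \hat{F}_i(x) &= g_{\theta_F}(x),
\end{aligned}
\qquad
\text{with residual}\;\; \hat{F}_i(x)\,\big\lVert \nabla_x \hat{T}_i(x) \big\rVert - 1 .
\]
```

Under this reading, s and c are indeed learned embeddings rather than explicit sources and tissue properties; the disentanglement is encouraged only structurally (query-specific vs. context-shared), which is what the rebuttal confirms.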
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(3) Weak Reject — could be rejected, dependent on rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
Even though the proposed method is novel and the paper interesting to read, it is difficult to fully assess the technical contribution and soundness of the experiments, because some essential information is missing. In addition, the paper would greatly benefit from a thorough discussion of the results as well as a clearer description of the method and experiments.
- Reviewer confidence
Very confident (4)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
The authors have effectively addressed the majority of my questions. I strongly believe that this manuscript will be of high interest to the community and that the particularly novel method will spark a lot of fruitful discussions on possible extensions and use-cases.
Review #2
- Please describe the contribution of the paper
The paper presents a new method for personalized modelling of cardiac electrophysiological activity using neural networks. The main novelty is the introduction of a meta-learning approach, in which the personalization step is also done through machine learning. The method is validated quantitatively on synthetic cases, and in a more qualitative way on animal models.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The approach is very interesting and has a good amount of novelty. It could represent an important new tool in bringing personalized cardiac models to clinical practice, which is currently hampered by computational costs of personalization. The methods are sound, and the paper is generally well presented.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
While a good effort has been made to validate the methods, it still falls short of demonstrating clinical applicability, for which further work would be required. While the paper is generally well written, there are quite a few errors in the writing that should be corrected.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission does not provide sufficient information for reproducibility.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
Please carry out a full revision of the paper, looking at typing and syntactic errors.
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(5) Accept — should be accepted, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I believe this would be a novel, solid contribution to the conference programme.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
N/A
- [Post rebuttal] Please justify your final decision from above.
N/A
Review #3
- Please describe the contribution of the paper
This paper proposes a meta-learning physics-informed neural network (PINN) for cardiac electrophysiology personalization. Existing frameworks either use the data-driven approach without physiological constraints, or use PINNs for which each patient needs a separate model. The proposed method addresses these issues and allows rapid personalization of a PINN to new subjects’ data through forward passes without retraining.
- Please list the major strengths of the paper: you should highlight a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- This paper is well-written. The limitations of existing methods are clearly described, and the proposed method can actually address these limitations. The flow of the paper is smooth and clear in general.
- As mentioned by the authors, many existing PINNs train a model for a patient, thus 100 models for 100 patients. The proposed method that enables a single PINN for multiple patients is novel.
- Although the use of hyper-networks is not new, the way that the hyper-networks are used in this paper is reasonable and effective.
- Experiments were performed on both synthetic and real data.
- Please list the major weaknesses of the paper. Please provide details: for instance, if you state that a formulation, way of using data, demonstration of clinical feasibility, or application is not novel, then you must provide specific references to prior work.
- Although the writing is clear in general, there are some issues: (a) In eq (1), the dimensions of the variables are missing. Are they scalars or vectors? I can infer them but they should be clearly described. As F seems to be a scalar in a 3D space, should it be speed rather than velocity? (b) In Fig. 1, what are u and v? Should they be T and F in eq (1), respectively? (c) In Section 3, if Y = T_obs, should eq (3) and (4) just use Y? (d) The sentence right above eq (8) can be rephrased for better clarity.
- The experiments on real data show that further improvements are required for clinical application. Incorporating noninvasive measurements can be an interesting future work.
- Please rate the clarity and organization of this paper
Good
- Please comment on the reproducibility of the paper. Please be aware that providing code and data is a plus, but not a requirement for acceptance.
The submission has provided an anonymized link to the source code, dataset, or any other dependencies.
- Optional: If you have any additional comments to share with the authors, please provide them here. Please also refer to our Reviewer’s guide on what makes a good review and pay specific attention to the different assessment criteria for the different paper categories: https://conferences.miccai.org/2025/en/REVIEWER-GUIDELINES.html
N/A
- Rate the paper on a scale of 1-6, 6 being the strongest (6-4: accept; 3-1: reject). Please use the entire range of the distribution. Spreading the score helps create a distribution for decision-making.
(5) Accept — should be accepted, independent of rebuttal
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
This is a high-quality work. The paper is well-written in general and the framework addresses some limitations of existing models. Experiments were performed on both synthetic and real data.
- Reviewer confidence
Confident but not absolutely certain (3)
- [Post rebuttal] After reading the authors’ rebuttal, please state your final opinion of the paper.
Accept
- [Post rebuttal] Please justify your final decision from above.
This is an interesting and high-quality paper that should be accepted.
Author Feedback
We thank R1/R2 for the appreciation of our work, especially its technical novelty, and R7 for insightful questions.
To all reviewers: Scope: We focus on technical novelty and proof-of-concept. Meta-PINN builds patient-specific models from procedural data (e.g., endocardial maps) and can extend to non-invasive data (e.g., ECG). Full clinical validation, using the suggested datasets and graph-based baselines, is planned as future work. Writing: We will revise the writing and notation (e.g., variable dimensions, scalar/vector clarity). Per R2, F(x) will denote speed, and Fig. 1 and Eqs. 3–4 will be updated for consistency.
To R7: Q1) Geometries were aligned via non-rigid registration. Two tissue classes were used: scar (0.1 m/s) and healthy (0.6 m/s). Scars were formed by merging adjacent regions in the AHA 17-segment model (4 on the LV, 1 on the RV). Graphs were built on point clouds using KNN (k=6) with edge length 0.23 ± 0.04 mm, fairly homogeneous for message passing. Each simulation is run with a single initial activation and, for each task (scar segment per geometry), training and test samples are split by initial activations; all tasks appear in both splits.
Q2) Encoders take LV/RV endocardial AT maps, with unobserved nodes masked. The source encoder uses the earliest 20% of AT values on the query sample, while the velocity encoder sees the endocardial AT maps of the context samples. Inputs are rescaled by 0.001. Encoders output 128-D embeddings.
Q3a, Q10b) Multiple endocardial maps obtained over time for a subject can be used for personalized models. Indeed, the motivation for meta-learning is to address the challenge of separating s and c: source (s) locations vary across maps (encoded from each query), while tissue properties (c) are shared across multiple maps from the same subject (encoded from the context set). This design enables disentanglement without direct supervision on s or c. Q3b) Performance (CC) is robust with smaller k: k=1 → AT: 0.86±0.17, Vel: 0.56±0.26; k=3 → 0.86±0.16, 0.59±0.20; k=5 → 0.87±0.15, 0.60±0.19.
Q4a) The per-node PDE residual was 0.29 ± 0.04 for individually trained PINNs, compared to 0.31 ± 0.18 for the Meta-PINN without retraining. We will add residual heatmaps. Q4b) Meta-PINN as presented is not restricted to using a hypernetwork for generating PINN weights, but can also support conditioning via embeddings. The latter, however, modifies the PDE input space, complicating Eikonal enforcement, while the former preserves the standard PINN form. In a prior project, we compared the two and found the hypernetwork more effective. Q4c,d) Hyperparameters were tuned empirically to avoid complexity and overfitting; hypernetworks use 256-unit hidden layers.
Q5a) The residual loss was applied to all nodes, as memory permitted and the setting remained unsupervised. Q5b) Geometries include myocardial tissue only; the blood pool is excluded.
Q6a,b,d) We trained for 8000 iterations (3 s/iter). Each meta-iteration processes all tasks (scar segments). For each task (episode), context/query sets are resampled per iteration following standard episodic meta-learning. Q6c) The data loss was applied to a random 40% of the query AT nodes.
Q7a) Physics-based assumes a known source. Q7b) Meta-neural uses the same setup as Meta-PINN (data loss + support set). Q7c) Multi-PINN does not use ground-truth velocities, using the standard inverse-PINN loss.
Q8a) Metrics will be reported per surface. Preliminary results show minimal difference (Meta-PINN AT CC: 0.87, Vel CC: 0.57 (endo) vs. 0.87, 0.60 (rest)). Q8b,c) Figs. 2 and 3 will be updated. Q8d,e) Physics guidance aids generalization; in the sim-to-real setting, Meta-neural’s collapsed outputs likely result from the lack of physics constraints, while Meta-PINN better captures the activation patterns.
Q9a–c) The real data comprised the epicardium only, without scar labels, and differs from the synthetic data, causing the performance drop. While fine-tuning is possible, Meta-PINN targets real-time inference, and we will instead retrain on epicardial-only data to close the gap.
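To make the rebuttal's description of the episodic scheme concrete (context/query sets resampled per task each iteration, data loss on a random 40% of observed query nodes, Eikonal residual on all nodes), a generic sketch of one meta-training step is given below. The `model` and `task` interfaces, and all names, are placeholder assumptions rather than the released code.

```python
import random
import torch

# Generic sketch of an episodic meta-training step (assumed interfaces, not the
# authors' code): each task holds multiple activation-map samples; the query is
# kept out of the context set; hypernetworks inside `model` generate the
# PINN / velocity-net weights from the context and query encodings.
def meta_train_step(model, tasks, optimizer, k_context=5, data_frac=0.4):
    optimizer.zero_grad()
    total_loss = 0.0
    for task in tasks:                               # e.g. one task per scar segment/geometry
        samples = random.sample(task.samples, k_context + 1)
        context, query = samples[:k_context], samples[-1]

        # Forward pass: velocity field is predicted too but not directly supervised
        T_pred, _F_pred, residual = model(query.coords, query.sparse_at, context)

        # Supervised data loss on a random ~40% subset of observed query AT nodes
        n_obs = query.obs_idx.numel()
        sub = query.obs_idx[torch.randperm(n_obs)[: int(data_frac * n_obs)]]
        data_loss = (T_pred[sub] - query.at[sub]).pow(2).mean()

        # Eikonal PDE residual penalized on all nodes
        pde_loss = residual.pow(2).mean()
        total_loss = total_loss + data_loss + pde_loss

    total_loss.backward()
    optimizer.step()
    return float(total_loss)
```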
Meta-Review
Meta-review #1
- Your recommendation
Invite for Rebuttal
- If your recommendation is “Provisional Reject”, then summarize the factors that went into this decision. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. You do not need to provide a justification for a recommendation of “Provisional Accept” or “Invite for Rebuttal”.
In addition to the reviewers’ comments, please also explain:
- how the gradient for the PDE loss term is computed using an LV manifold parameterization and not 3D Cartesian coordinates.
- what algorithm was used to estimate AT in the experimental data signals. Were they only endocardial?
- why the LV manifolds have such a coarse appearance in Figs. 3 and 4 and lack a quantitative scale.
- how Meta-PINNs can provide accurate estimates of AT maps despite poor velocity map estimates.
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #2
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A
Meta-review #3
- After you have reviewed the rebuttal and updated reviews, please provide your recommendation based on all reviews and the authors’ rebuttal.
Accept
- Please justify your recommendation. You may optionally write justifications for ‘accepts’, but are expected to write a justification for ‘rejects’
N/A