Jan. 20, 2026
Does estimating a patient's ejection fraction — the measurement of how efficiently the heart is pumping — require watching the heart in action? What if that ratio, which compares the left ventricle's volume when the heart is contracted with its volume when relaxed, could be estimated from a single still image?
It can with the help of artificial intelligence (AI).
Mayo Clinic Cardiovascular Medicine researchers demonstrated this in a multi-institutional, retrospective model development and validation study. The results were published in The Lancet Digital Health.
"We've shown that it's possible to estimate a patient's left ventricular ejection fraction (LVEF) from a single static image of the heart taken from a 2D clip from an echocardiographic exam," says Jeff G. Malins, Ph.D., principal data scientist in Cardiovascular Medicine at Mayo Clinic in Rochester, Minnesota, and co-first author of the study. Dr. Malins works in Mayo's AI in Cardiovascular Medicine specialty group. "The trained eye here is an AI model that uses deep learning and computer vision techniques to estimate a patient's LVEF based on visual features present in the image."
Model performance
This is the first time such findings have been demonstrated. "The study shows that a dynamic variable — left ventricular ejection fraction — can be determined from a single echocardiographic frame using advanced convolutional neural networks, whereas previous AI models to estimate LVEF have typically used video clips as input," says Dr. Malins. Using video can be computationally intensive.
"Estimating LVEF from a single frame is not as accurate as estimating it from a video; however, there are certain situations in which it might be advantageous to estimate LVEF from single frames, such as rapid deployment of AI models in point-of-care settings," says Dr. Malins.
This is why, in addition to validating the model with transthoracic echocardiography (TTE) cohorts from three Mayo Clinic sites, the researchers also examined how the model performed in two prospective cohorts of patients whose data were collected using handheld cardiac ultrasound (HCU), which is typically used in point-of-care settings.
"Across all cohorts, we found that model performance was strong for LVEF estimation, especially when estimates were averaged across more than one clip containing the left ventricle, even if only one frame was taken per clip," says Dr. Malins. "The model performed well with both TTE and HCU data. The area under the curve (AUC) was above 0.90 for all cohorts except for HCU collected by novice users, for which the AUC was above 0.85."
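As a rough illustration of the metric quoted above — a self-contained sketch with made-up scores and labels, not the study's code or data — the area under the ROC curve (AUC) for a binary "reduced LVEF" classifier can be computed directly from the rank-sum (Mann-Whitney) statistic:

```python
# Sketch: computing AUC for a hypothetical "reduced LVEF" classifier.
# Labels and scores are invented for illustration only.

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of positive/negative pairs the classifier ranks correctly,
    # counting ties as half-credit.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: model's probability that LVEF is reduced
labels = [1, 1, 0, 0, 1, 0]
scores = [0.92, 0.55, 0.35, 0.60, 0.75, 0.20]
print(auc(labels, scores))  # 8/9, about 0.889
```

An AUC above 0.90, as reported for most cohorts, means the model ranks a randomly chosen reduced-LVEF patient above a randomly chosen normal-LVEF patient more than 90% of the time.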
Deeper detection
When heat maps were generated to reveal the areas of the image most critical for the model's decision-making, the maps tended to show peaks in left ventricular structures.
"Ejection fraction is a dynamic measurement that looks at how effectively the heart pumps blood with each beat. The observation that AI analysis of a static image could somehow capture the dynamics of blood flow over time is surprising and highlights the potential of AI to detect subtle physiological features from medical images that the human eye cannot see. This opens the door for AI analysis of static images to quantify other measurements that have typically relied on assessment of video data," says D.M. Anisuzzaman, Ph.D., M.S., a senior data science analyst in Cardiovascular Medicine at Mayo Clinic in Rochester, Minnesota, and co-first author of the study. Dr. Anisuzzaman also works in the Mayo Clinic group focusing on AI applications in cardiovascular research.
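Heat maps like those described above can be generated several ways; one generic approach is occlusion sensitivity, which masks each image region in turn and measures how much the model's output changes. The sketch below uses a toy stand-in for the model (an assumption for illustration — the study's specific saliency method is not detailed here):

```python
# Sketch: an occlusion-sensitivity heat map. Regions whose masking changes
# the output most are the ones the model relies on. Illustrative only.

def occlusion_map(image, model, patch=4):
    """Score each patch by how much zeroing it out shifts the model output."""
    base = model(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * (w // patch) for _ in range(h // patch)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = [row[:] for row in image]   # copy the image
            for r in range(i, i + patch):
                for c in range(j, j + patch):
                    masked[r][c] = 0.0           # occlude one patch
            heat[i // patch][j // patch] = abs(base - model(masked))
    return heat

# Toy "model": mean intensity of a central 4x4 region, standing in for a
# CNN's LVEF estimate (purely hypothetical).
def model(img):
    vals = [img[r][c] for r in range(6, 10) for c in range(6, 10)]
    return sum(vals) / len(vals)

img = [[(r * 17 + c * 31) % 97 / 97.0 for c in range(16)] for r in range(16)]
heat = occlusion_map(img, model)
print(len(heat), len(heat[0]))  # 4 4
```

With this toy model, only the central patches score above zero, mirroring how the study's heat maps peaked over left ventricular structures.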
Next steps
This research has important implications. The results suggest that future AI models could:
- Streamline the amount of required data (single frames rather than videos) without sacrificing model performance.
- Shorten time needed to get results in point-of-care settings where rapidly deploying models is key.
- Reduce the need for operator expertise required to obtain specific views of a certain length and quality to compute LVEF.
Though the study sample was compiled at three Mayo Clinic sites, more information is needed. "The handheld-device dataset didn't reflect the heterogeneity typically observed in true point-of-care environments such as emergency departments," says Dr. Anisuzzaman. "The HCU data were collected in controlled settings, and the patient cohorts were not diverse in race and ethnicity. A key priority for this work is validating the model across larger, more diverse patient cohorts and varied point-of-care settings."
For more information
Malins JG, et al. Snapshot artificial intelligence — Determination of ejection fraction from a single frame still image: A multi-institutional, retrospective model development and validation study. The Lancet Digital Health. 2025;7:E255.
Refer a patient to Mayo Clinic.