As medicine continues to test automated machine learning tools, many hope that low-cost support tools will help narrow care gaps in countries with constrained resources. But new research suggests those same countries are the least represented in the data used to design and test most clinical AI, potentially making those gaps even wider.

Researchers have shown that AI tools often fail to perform as expected when deployed in real-world hospitals. It's the problem of transferability: An algorithm trained on one patient population with a particular set of characteristics won't necessarily work well on another. Those failures have motivated a growing call for clinical AI to be both trained and validated on diverse patient data, with representation across sex, age, race, ethnicity, and more.
