Artificial intelligence, or AI, can support health care leaders to a staggering degree, as a recent webinar hosted by the AHA’s Health Forum and sponsored by GE Healthcare illustrates. It can anticipate patient outcomes, help assess and treat nuanced conditions, and give clinicians valuable evidence to aid them in complicated decision-making. On top of that, its ability to catch clinical documentation errors and to mine the sometimes-unstructured electronic health record can save time and money.

[Watch the webinar, AI in Health Care: Keys to A Smarter Future]

Mark Michalski, M.D., executive director, Center for Clinical Data Science at Massachusetts General Hospital and Brigham and Women's Hospital, explained during the webinar how AI, and specifically its subset machine learning, functions.

Machine learning is a set of statistical tools that operates within a neural network – or a computer system similar to the human brain and nervous system – to detect features within images and make predictions, he said. It uses algorithms to help clinicians make sense of the barrage of data that they are otherwise tasked with decoding, helping them to improve care and sharpen prevention efforts. 
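To make the idea concrete, here is a minimal, purely illustrative sketch (not the system Michalski described): a single artificial "neuron," the basic building block of a neural network, learns from labeled examples to separate two classes based on two hypothetical numeric image features.

```python
import math

def sigmoid(z):
    """Squash a score into a 0-1 probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=2000):
    """Fit one neuron (logistic regression) by gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y  # how far the prediction is from the label
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return the model's probability that x belongs to class 1."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Two clearly separated clusters of invented "feature vectors"
data = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
labels = [0, 0, 1, 1]
w, b = train(data, labels)

print(predict(w, b, (0.15, 0.15)))  # low probability: class 0
print(predict(w, b, (0.85, 0.85)))  # high probability: class 1
```

Real clinical models stack many such units into deep networks and train on far richer data, but the underlying mechanism — adjusting weights to reduce prediction error — is the same.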

Michalski shared an example of a machine learning model that can identify the location and exact size of a patient’s stroke, enabling physicians to respond to it in an informed way. “That study shows that we have more time than we thought to treat a stroke,” he said.

He also described how machine learning can, for example, detect, through CT scans, where a spinal image should be cut for diagnostic purposes, or how it can assess patients’ muscle and fat levels to predict how they would fare in certain surgeries. In fact, AI is so adept at reading images that some worry it could displace physicians who specialize in interpreting them, such as radiologists and cardiologists.

But Michalski says that possibility is distant at best, since AI isn’t an autonomous tool. Clinicians need to do a fair amount of testing – specifically, data set validation – before applying AI to particular use cases.

“I’m really excited about this technology and its capabilities,” Michalski said. “But as a community, we still have some work to do before we can start talking about artificial intelligence really changing the landscape of our workforce.”

Plus, human intelligence has a notable advantage over the artificial kind: its ability to read context. For example, machine learning might be able to detect pulmonary nodules in a lung cancer patient, Michalski said, “but I have to somehow encode for the fact that the patient in the electronic record says that they’re not a smoker, but they entered the clinical exam room smelling like tobacco.”

Michalski said that the current dynamic “would have to evolve pretty considerably before you obviate a clinical expert,” but “that being said, you can facilitate those clinical experts to an astounding degree if you do this right.” 

And as AI becomes more enmeshed in health care, it prompts questions around inclusivity and diversity, topics that are already top of mind for health care leaders.

When training AI models to work for specific applications, physicians must be vigilant not to “encode bias” into their training data, Michalski said. For example, if a physician tailored, or “trained,” an AI model on data from the immediate patient population, that model wouldn’t necessarily be replicable everywhere else.
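A toy sketch illustrates the pitfall Michalski describes. All numbers here are invented: a crude diagnostic threshold is "trained" on measurements from one patient population, then applied to a second population whose healthy baseline for the same measurement is simply higher.

```python
def learn_threshold(healthy, sick):
    """Pick the midpoint between the two class means - a crude 'trained' model."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(healthy) + mean(sick)) / 2.0

def accuracy(threshold, healthy, sick):
    """Fraction of patients classified correctly by the threshold."""
    correct = sum(1 for v in healthy if v < threshold)
    correct += sum(1 for v in sick if v >= threshold)
    return correct / (len(healthy) + len(sick))

# Training population A (hypothetical measurements)
a_healthy = [0.1, 0.2, 0.25, 0.3]
a_sick = [0.7, 0.8, 0.85, 0.9]
t = learn_threshold(a_healthy, a_sick)

# Population B: same disease, but a higher healthy baseline
b_healthy = [0.55, 0.6, 0.65, 0.7]
b_sick = [1.0, 1.1, 1.15, 1.2]

print(accuracy(t, a_healthy, a_sick))  # perfect on the training population
print(accuracy(t, b_healthy, b_sick))  # every healthy B patient misflagged
```

The model looks flawless on the population it was trained on, yet flags every healthy patient in population B as sick — a simplified version of why training data must represent the populations the model will actually serve.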

“Are we representing the appropriate people and are we collecting and training data on a representative population?” Michalski said.