Still, the idea of a computer diagnostician has long been compelling. For decades, doctors have tried to build machines that can “think” like a doctor and diagnose patients: a Dr House-style program that can take in a set of disparate symptoms and suggest a unifying diagnosis. But early models were time-consuming to use and not particularly useful in practice; their utility was limited until advances in natural language processing made generative AI — in which a computer can actually create new content in the style of a human — a reality. This is not the same as looking up a set of symptoms on Google; instead, these programs have the ability to synthesise data and “think” much like an expert.
To date, we have not integrated generative AI into our work in the intensive care unit. But it seems clear that we inevitably will. One of the easiest applications to imagine is work that requires pattern recognition, such as reading X-rays. Even the best doctor may be less adept than a machine at recognising complex patterns without bias. There is also a good deal of excitement about the possibility of AI programs writing our daily patient notes for us as a sort of electronic scribe, saving considerable time. As Dr Eric Topol, a cardiologist who has written about the promise of AI in medicine, says, this technology could foster the relationship between patients and doctors. “We’ve got a path to restore the humanity in medicine,” he told me.
Beyond saving us time, the intelligence in AI — if used well — could make us better at our jobs. Dr Francisco Lopez-Jimenez, the co-director of AI in cardiology at the Mayo Clinic, has been studying the use of AI to read electrocardiograms, or ECGs, which are simple recordings of the heart’s electrical activity. An expert cardiologist can glean all sorts of information from an ECG, but a computer can glean more, including an assessment of how well the heart is functioning — which could help determine who would benefit from further testing.
Even more remarkably, Dr Lopez-Jimenez and his team found that when asked to predict age based on an ECG, the AI program would from time to time give an entirely incorrect response. At first, the researchers thought the machine simply wasn’t great at age prediction based on the ECG — until they realised that the machine was offering the “biological” rather than chronological age, explained Dr Lopez-Jimenez. Based on the patterns of the ECG alone, the AI program knew more about a patient’s ageing than a clinician ever could.
And this is just the start. Some studies are using AI to try to diagnose a patient’s condition based on voice alone. Researchers tout AI’s potential to speed drug discovery. But as an intensive care unit doctor, I find the ability of generative AI programs to diagnose a patient the most compelling of all. Imagine it: a pocket expert on rounds with the ability to plumb the depths of existing knowledge in seconds.
What proof do we need before we use any of this? The bar is higher for diagnostic programs than it is for programs that write our notes. But the way we typically test advances in medicine — a rigorously designed randomised clinical trial that takes years — won’t work here. After all, by the time the trial was complete, the technology would have changed. Besides, the reality is that these technologies are going to find their way into our daily practice whether they are tested or not.
Dr Adam Rodman, an internist at Beth Israel Deaconess Hospital in Boston and a historian, found that the majority of his medical students are already using ChatGPT, to help them on rounds or even to predict test questions. Curious about how AI would perform on tough medical cases, Dr Rodman gave the program the notoriously challenging New England Journal of Medicine weekly case — and found that it offered the correct diagnosis in its list of possible diagnoses just over 60 per cent of the time. That performance is most likely better than any individual doctor could accomplish.
How those abilities translate to the real world remains to be seen. But even as he prepares to embrace new technology, Dr Rodman wonders if something will be lost. After all, the training of doctors has long followed a clear process — we see patients, we struggle with their care in a supervised environment and we do it over again until we finish our training. But with AI, there is the real possibility that doctors in training could lean on these programs to do the hard work of generating a diagnosis, rather than learn to do it themselves. If you have never sorted through the mess of seemingly unrelated symptoms to arrive at a potential diagnosis, but instead relied on a computer, how do you learn the thought processes required for excellence as a doctor?
“In the very near future, we’re looking at a time where the new generation coming up are not going to be developing these skills in the same way we did,” Dr Rodman said. Even when it comes to AI writing our notes for us, Dr Rodman sees a trade-off. After all, notes are not simply drudgery; they also represent a time to take stock, to review the data and reflect on what comes next for our patients. If we offload that work, we surely gain time, but maybe we lose something too.
But there is a balance here. Maybe the diagnoses offered by AI will become an adjunct to our own thought processes, not replacing us but giving us all the tools to become better. Particularly for those working in settings with few specialists available for consultation, AI could bring everyone up to the same standard. At the same time, patients will be using these technologies, asking questions and coming to us with potential answers. This democratising of information is already happening and will only increase.
Perhaps being an expert doesn’t mean being a fount of information; it means synthesising and communicating, and using judgment to make hard decisions. AI can be part of that process, just one more tool that we use, but it will never replace a hand at the bedside, eye contact, understanding — what it is to be a doctor.
A few weeks ago, I downloaded the ChatGPT app. I’ve asked it all sorts of questions, from the medical to the personal. And when I am next working in the intensive care unit and find myself faced with a question on rounds, I just might open the app and see what AI has to say.
Daniela J. Lamas, a contributing Opinion writer for The New York Times, is a pulmonary and critical-care physician at Brigham and Women’s Hospital in Boston.
This article originally appeared in The New York Times.
©2023 THE NEW YORK TIMES