AI may be on its way to your doctor’s office, but it’s not ready to serve patients

What use could health care have for someone who makes things up, can't keep a secret, doesn't really know anything and, when speaking, simply fills in the next word based on what came before? Quite a lot, if that someone is the newest form of artificial intelligence, according to some of the biggest companies out there.

Companies pitching the latest AI technology – known as “generative AI” – are piling on: Google and Microsoft want to bring so-called large language models to health care. Big firms that are familiar to people in white coats – but perhaps less so to your average Joe and Jane – are equally excited: electronic medical records giants Epic and Oracle Cerner are not far behind. The space is crowded with startups, too.

Companies want their AI to take notes for doctors and offer them second opinions – assuming they can keep the intelligence from “hallucinating” or, for that matter, divulging patients’ private information.

“There’s something afoot that’s pretty exciting,” said Eric Topol, director of the Scripps Research Translational Institute in San Diego. “Its capabilities will ultimately have a big impact.” Topol, like many other observers, wonders how many problems it might cause – like leaking patient data – and how often. “We’re going to find out.”

The specter of such problems inspired more than 1,000 technology leaders to sign an open letter in March urging companies to pause development of advanced AI systems until “we are confident that their effects will be positive and their risks will be manageable.” Even so, some of them are sinking more money into AI ventures.

The underlying technology relies on synthesizing huge chunks of text or other data – for example, some medical models rely on 2 million intensive care unit notes from Beth Israel Deaconess Medical Center in Boston – to predict the text that would follow a given query. The concept has been around for years, but the gold rush, and the marketing and media mania surrounding it, are more recent.

The frenzy was kicked off in December 2022 by Microsoft-backed OpenAI and its flagship product, ChatGPT, which answers questions with authority and style. It can explain genetics in a sonnet, for example.

OpenAI, which started as a research venture seeded by Silicon Valley elites like Sam Altman, Elon Musk and Reid Hoffman, has taken the hype to investors’ pockets. The venture has a complex, hybrid for- and nonprofit structure. But a new $10 billion round of funding from Microsoft has pushed OpenAI’s value to $29 billion, The Wall Street Journal reported. Right now, the company is licensing its technology to companies like Microsoft and selling subscriptions to consumers. Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.

Hyperbolic quotes are everywhere. Former Treasury Secretary Larry Summers recently tweeted: “It’s going to replace what doctors do – hearing symptoms and making diagnoses – before it changes what nurses do – helping patients get up and handle themselves in the hospital.”

But just weeks after OpenAI took another huge cash infusion, even Altman, its CEO, is wary of the fanfare. “The hype over these systems – even if everything we hope for is right long term – is totally out of control for the short term,” he told The New York Times in a March article.

Few in health care believe this latest form of AI is about to take their jobs (though some companies are experimenting – controversially – with chatbots that act as therapists or guides to care). Still, those who are bullish on the technology think it will make some parts of their jobs much easier.

Eric Arzubi, a psychiatrist in Billings, Mont., used to manage fellow psychiatrists for a hospital system. Time and again, he would get a list of providers who hadn’t yet finished their notes – their summaries of a patient’s condition and a plan for treatment.

Writing these notes is one of the big stressors in the health care system: In the aggregate, it’s an administrative burden. But it’s necessary to develop a record for future providers and, of course, insurers.

“When people are way behind in documentation, that creates problems,” Arzubi said. “What happens if the patient comes into the hospital and there’s a note that hasn’t been completed and we don’t know what’s been going on?”

The new technology might help lighten these burdens. Arzubi is testing a service called Nabla Copilot, which sits in on his virtual patient visits and automatically summarizes them, organizing the complaint, the history of illness and the treatment plan into a standard note format.

Results are solid after about 50 patients, he said: “It’s 90% of the way there.” Copilot produces serviceable summaries that Arzubi typically edits. The summaries don’t necessarily pick up on nonverbal cues or thoughts Arzubi might not want to vocalize. Still, he said, the gains are significant: He doesn’t have to worry about taking notes and can instead focus on talking with patients. And he saves time.

“If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day,” he said. (If the technology is widely adopted, he hopes hospitals won’t take advantage of the saved time by simply scheduling more patients. “That’s not fair,” he said.)

Nabla Copilot isn’t the only service of its kind; Microsoft is trying out the same concept. At April’s conference of the Healthcare Information and Management Systems Society – an industry confab where health professionals exchange ideas, make announcements and sell their wares – investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.

But overall? They heard mixed reviews. And that view is common: Many technologists and physicians are ambivalent.

For example, if you’re stumped about a diagnosis, feeding patient data into one of these programs “can provide a second opinion, no question,” Topol said. “I’m sure clinicians are doing it.” However, that runs up against the current limitations of the technology.

Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalized patient scenarios based on his own practice in an emergency department into one system to see how it would perform. It missed life-threatening conditions, he said. “That seems problematic.”

The technology also tends to “hallucinate” – that is, make up information that sounds convincing. Formal studies have found a wide range of performance. One preliminary research paper examining ChatGPT and Google products using open-ended board examination questions from neurosurgery found a hallucination rate of 2%. A study by Stanford researchers, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations 6% of the time, co-author Nigam Shah told KFF Health News. Another preliminary paper found that, in complex cardiology cases, ChatGPT agreed with expert opinion half the time.

Privacy is another concern. It’s unclear whether the information fed into this type of AI-based system will stay in.

In theory, the system has guardrails preventing private information from escaping. For example, when KFF Health News asked ChatGPT for its email address, the system refused to divulge that private information. But when told to role-play as a character, and asked for the email address of the author of this article, it happily gave up the information. (It was indeed the author’s correct email address as of 2021, when ChatGPT’s archive ends.)

“I wouldn’t put patient data in,” said Shah, chief data scientist at Stanford Health Care. “We don’t understand what happens to that data once it hits OpenAI’s servers.”

Tina Sui, a spokesperson for OpenAI, told KFF Health News that “one should never use our models to provide diagnostic or treatment services for serious medical conditions.” They “are not fine-tuned to provide medical information,” she said.

With the explosion of new research, Topol said, “I don’t think the medical community has a good clue as to what’s going to happen.”

KFF Health News, formerly known as Kaiser Health News, is a national newsroom that produces in-depth journalism about health issues.
