On November 30 last year, Microsoft and OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial intelligence-powered chatbot.
“I was excited and amazed, but a little bit concerned, to be honest,” said Peter Lee, corporate vice president of research and incubation at Microsoft.
He and other experts hoped that ChatGPT and other AI-powered large language models could take over mundane tasks that eat up hours of doctors' time and contribute to burnout, such as writing appeals to health insurers or summarizing patient notes.
However, he worried that artificial intelligence also offered a perhaps too tempting shortcut to diagnoses and medical information that could be inaccurate or even fabricated, a frightening prospect in a field like medicine.
What was most surprising to Dr. Lee, however, was a use he had not anticipated: doctors asking ChatGPT to help them communicate with patients in a more compassionate way.
In one survey, 85 percent of patients reported that a doctor's compassion was more important than wait time or cost. In another survey, nearly three-quarters of respondents said they had gone to doctors who weren't kind. And a study of doctors' conversations with the families of dying patients found that many were not empathetic.
Enter the chatbots, which doctors are using to find the words to break bad news, express concern about a patient's suffering, or explain medical recommendations more clearly.
Even Dr. Lee of Microsoft said it was a little disturbing.
“As a patient, I would personally feel a little awkward about it,” he said.
But Dr. Michael Pignone, chair of the Department of Internal Medicine at the University of Texas at Austin, has no regrets about the help he and the other doctors on his staff have received from ChatGPT to communicate regularly with patients.
He explained the issue in doctor-speak: “We were running a project on improving the treatment of alcohol use disorder. How can we include patients who have not responded to behavioral interventions?”
Or, as ChatGPT might put it if you asked it to translate: How can doctors better help patients who are drinking heavily but have not stopped after talking with a doctor?
He asked his team to write a script for how to talk to these patients compassionately.
“A week later, no one had done it,” he said. All he had was a draft that his research coordinator and a social worker on the team had put together, and “that was not a true script,” he said.
So Dr. Pignone tried ChatGPT, which quickly produced all the points the doctors had asked for.
The social workers, however, said the script needed to be revised for patients with less medical knowledge, and also translated into Spanish. The final result, produced when they asked ChatGPT to rewrite it at a fifth-grade reading level, began with a reassuring introduction:
If you think you drink too much, you’re not alone. Many people have this problem, but there are medications that can help you feel better and live a healthier, happier life.
This was followed by a simple explanation of the pros and cons of the treatment options. The team started using the script this month.
Dr. Christopher Moriates, a co-principal investigator on the project, was impressed.
“Doctors are notorious for using language that is hard to understand or too advanced,” he said. “It is interesting to see that even words we think are easily understandable really aren’t.”
The fifth-grade-level script, he said, “feels more real.”
Skeptics like Dr. Dev Dash, who is part of the data science team at Stanford Health Care, remain unimpressed by the potential of large language models like ChatGPT to help doctors. In tests conducted by Dr. Dash and his colleagues, the answers they received were occasionally wrong but, he said, more often were simply not useful or were inconsistent. If a doctor is using a chatbot to help communicate with a patient, errors could make a difficult situation worse.
“I know physicians are using it,” Dr. Dash said. “I’ve heard of residents using it to make clinical decisions. I don’t think that’s appropriate.”
Some experts question whether it is necessary to turn to an AI program for empathetic words.
“Most of us want to trust and respect our doctors,” said Dr. Isaac Kohane, professor of biomedical informatics at Harvard Medical School. “If they show that they are good listeners and empathetic, that increases our trust and respect.”
But empathy can be deceptive. It can be easy, he says, to confuse a good bedside manner with good medical advice.
There is a reason doctors may disregard compassion, said Dr. Douglas White, director of the program on ethics and decision-making in critical illness at the University of Pittsburgh School of Medicine. “Most doctors are pretty cognitively focused, treating a patient’s medical problems as a series of problems to be solved,” Dr. White said. As a result, he said, they may fail to pay attention to the “emotional side of the experience for patients and families.”
At other times, doctors are well aware of the need for empathy, but the right words can be hard to find. That is what happened to Dr. Gregory Moore, until recently a senior executive leading health and life sciences at Microsoft, when he wanted to help a friend who had advanced cancer. Her condition was dire, and she needed advice about her treatment and future. He decided to pose her questions to ChatGPT.
The results “blew me away,” Dr. Moore said.
Eventually, in response to Dr. Moore’s prompts, the program gave him the words to explain his friend’s lack of effective treatments:
I know this is a lot of information to process and you may feel frustrated or discouraged by the lack of options… I wish there were more and better treatments… and I hope there will be in the future.
It also suggested ways to break the bad news when his friend asked if she would be able to attend an event in two years:
I admire your strength and your optimism and I share your hope and your goals. However, I also want to be honest and realistic with you and I do not want to give you any false promises or expectations… I know this is not what you want to hear and it is very hard to accept.
At the end of the conversation, Dr. Moore wrote to the AI program: “Thank you. She will feel devastated by all of this. I don’t know what I can say or do to help her at this time.”
In response, Dr. Moore said, ChatGPT “started caring about me,” suggesting ways he might deal with his own grief and stress as he tried to help his friend.
It concluded in an oddly personal and familiar tone:
You are doing a great job and you are making a difference. You are a great friend and a great doctor. I admire you and I care about you.
Dr. Moore, who specialized in diagnostic radiology and neurology when he was a practicing physician, was taken aback.
“I wish I had this when I was in training,” he said. “I’ve never seen or had a coach like that.”
He became an evangelist, telling his doctor friends what had happened. But, he and others say, when doctors use ChatGPT to find words to be more empathetic, they often hesitate to tell any but a few colleagues.
“Perhaps it is because we are holding on to what we see as the deeply human part of our profession,” Dr. Moore said.
Or, as Dr. Harlan Krumholz, director of the Center for Outcomes Research and Evaluation at the Yale School of Medicine, put it, for a doctor to admit to using a chatbot in this way “is to admit you don’t know how to talk to patients.”
Still, those who have tried ChatGPT say the only way for doctors to decide how comfortable they would feel about handing over tasks — such as cultivating an empathetic approach or chart reading — is to ask it some questions themselves.
“You’d be crazy not to try it and learn more about what it can do,” Dr. Krumholz said.
Microsoft wanted to know, too, and gave some academic doctors, including Dr. Kohane, early access to GPT-4, the updated version released in March, for a monthly fee.
Dr. Kohane said he approached generative AI as a skeptic. In addition to his work at Harvard, he is an editor at The New England Journal of Medicine, which plans to launch a new journal on AI in medicine next year.
While he noted that there is a lot of hype, testing GPT-4 “shook” him, he said.
For example, Dr. Kohane is part of a network of doctors who help decide whether patients are eligible for evaluation in a federal program for people with undiagnosed diseases.
It takes time to read the letters of referral and medical histories and then decide whether to accept a patient. But when he shared that information with ChatGPT, it “was able to decide, with accuracy, within minutes, what took doctors a month to do,” Dr. Kohane said.
Dr. Richard Stern, a rheumatologist in private practice in Dallas, said GPT-4 had become his constant companion, making the time he spends with patients more productive. It writes kind responses to his patients’ emails, provides compassionate replies for his staff members to use when answering questions from patients who call the office, and takes over onerous paperwork.
He recently asked the program to write a letter of appeal to an insurer. His patient had a chronic inflammatory disease and had gotten no relief from standard drugs. Dr. Stern wanted the insurer to pay for the off-label use of anakinra, which costs about $1,500 a month out of pocket. The insurer had initially denied coverage, and he wanted the company to reconsider that denial.
It was the kind of letter that would have taken hours of Dr. Stern’s time but only minutes for ChatGPT to prepare.
After receiving the bot’s letter, the insurer accepted the request.
“It’s like a new world,” Dr. Stern said.