
ChatGPT or WebMD? Doctors analyze accuracy of artificial intelligence "digital doctors"

ATLANTA — Drs. Sruthi Arepalli and Riley Lyons are no strangers to WebMD diagnoses.

They are both ophthalmologists, physicians who specialize in eye care.


“It is common for people to look up their symptoms, come up with treatment plans, and then ask what our opinions are,” Dr. Arepalli said.

However, those treatment plans and diagnoses found online are often wrong.

With the recent rise of tools like ChatGPT, they know it is only a matter of time before patients turn to the tech to figure out what’s wrong with them.

“People are going to be using ChatGPT for their medical advice whether or not their doctors recommend it,” Dr. Lyons said.

So, doctors at Emory created a study to test the accuracy of AI tools like ChatGPT and Bing Chat.

Using more than 40 different prompts, they found that ChatGPT correctly listed the appropriate diagnosis in its top three suggestions 95 percent of the time. Bing Chat was correct 77 percent of the time, while WebMD was accurate in only 33 percent of cases.

For comparison, physicians given similar prompts were correct 95 percent of the time.

“I was absolutely surprised. I had heard amazing things about ChatGPT, about its ability to answer medical questions, but I was really surprised how it could take these complex situations and give you a really informed answer,” Lyons said.

However, that does not mean ChatGPT will replace doctors.

Researchers found that the less technical the prompt was, the more ChatGPT struggled.

“When we turned them into layman’s terms, the machines didn’t do as well because they weren’t able to use those large language models or read between the lines of what patients were saying,” Arepalli said. “If a patient were to read the symptoms from a textbook, they would probably get it right, but we know patients don’t do that.”

Also, if a patient withholds information or gives incorrect information, the AI will often give wrong answers, something a doctor may be better equipped to catch.


Dr. Arepalli said the research shows AI tools were more likely to recommend that a person see a doctor, even when care is not needed, which could overwhelm the health care system.

Finally, AI tools like ChatGPT rely on medical resources that are constantly being updated, which means some AIs may be drawing on outdated information when making a new diagnosis.

More research is needed, but Dr. Arepalli said there may be a future where AI works alongside doctors to help with the initial diagnosis. Dr. Arepalli said it could be part of the check-in process to help doctors prioritize who needs care.

“It’s opened a door that I didn’t think most of us believed existed ten years ago,” Arepalli said.

You can read the study here.

