AI chatbots like ChatGPT could soon play a significant role in medicine
AI chatbots could soon assist healthcare professionals, according to a report by Scientific American published on Friday.
Systems such as OpenAI’s ChatGPT, the latest version of Microsoft’s Bing search engine, and Google’s Med-PaLM are providing surprisingly accurate medical information, far better than what a simple Google search can surface.
Some experts claim that within the year, a major medical center will enter into a collaboration that will see LLM chatbots interact with patients and diagnose disease.
Benjamin Tolchin, a neurologist and ethicist at Yale University, told Scientific American that at least two patients have already told him they used ChatGPT to self-diagnose symptoms or to look up side effects of medication. “It’s very impressive, very encouraging in terms of future potential,” he said.
But not all is rosy. Chatbots have many issues, including questions about the accuracy of the information they give people, privacy concerns, and racial and gender bias found in the text their algorithms draw from.
Chatbots are not entirely new to medicine. Simpler versions of these systems are already employed by physicians for tasks such as scheduling appointments and providing people with general health information. “It’s a complicated space because it’s evolving so rapidly,” Nina Singh, a medical student at New York University who studies AI in medicine, told Scientific American.
But more advanced LLM chatbots could see doctor-AI collaborations reach new heights. Epidemiologist Andrew Beam of Harvard University and his colleagues conducted a study with OpenAI’s GPT-3 and found that the LLM’s top three potential diagnoses for several diseases included the correct one 88 percent of the time. In comparison, physicians achieved the same result 96 percent of the time.
“It’s crazy surprising to me that these autocomplete things can do the symptom checking so well out of the box,” Beam said.
Concerns abound
Despite these positive results, Beam did express concern that LLM chatbots could be susceptible to misinformation, as their algorithms rely on online text that could grant equal weight to, for example, information from a recognized medical institution and a random thread on Facebook.
One solution is to require medically oriented chatbots to link to the sources of their information. Still, this remains complicated, as LLM chatbots are adept at inventing sources and making them look legitimate, Scientific American reported.
Regardless of these concerns, progress is unlikely to stop, and chatbots will probably play a bigger role in medicine in the near future. When that happens, both patients and doctors will have to proceed with caution, as medical mistakes can cost lives.