With long waiting lists and rising costs in overburdened healthcare systems, many people are turning to AI-powered chatbots like ChatGPT for medical self-diagnosis. According to one recent survey, about one in six American adults use a chatbot for health advice at least monthly.
However, recent Oxford-led research suggests that placing too much trust in a chatbot’s output is risky, in part because people struggle to know what information to give chatbots to get the best possible health recommendations.
“This study revealed a breakdown in two-way communication,” Adam Mahdi, director of graduate research at the Oxford Internet Institute and a co-author of the study, told TechCrunch. “People using [chatbots] did not make better decisions than participants who relied on traditional methods such as online searches or their own judgment.”
For the study, the authors recruited approximately 1,300 people in the U.K. and gave them medical scenarios written by a group of physicians. Participants were tasked with identifying potential health conditions in the scenarios and using chatbots, as well as their own methods, to determine possible courses of action (e.g., seeing a doctor or going to the hospital).
Participants used GPT-4o, the default AI model powering ChatGPT, as well as Cohere’s Command R+ and Meta’s Llama 3. According to the authors, the chatbots not only made participants less likely to identify a relevant health condition, but also more likely to underestimate the severity of the conditions they did identify.
Mahdi said participants often omitted important details when querying the chatbots or received responses that were difficult to interpret.
The findings come as tech companies increasingly promote AI as a way to improve health outcomes. Apple is reportedly developing an AI tool that can dispense advice related to exercise, diet, and sleep. Amazon is exploring an AI-based way to analyze medical databases for “social determinants of health.” And Microsoft is helping build AI to triage messages sent from patients to care providers.
However, as TechCrunch has previously reported, both experts and patients are divided on whether AI is ready for higher-risk health applications. The American Medical Association recommends against physicians using chatbots like ChatGPT to assist with clinical decisions, and major AI companies, including OpenAI, warn against making diagnoses based on their chatbots’ outputs.
“We recommend relying on trusted sources of information for healthcare decisions,” Mahdi said. “Current ways of evaluating [chatbots] do not reflect the complexity of interacting with human users. Like clinical trials for new drugs, [chatbot] systems should be tested in the real world before being deployed.”