By Dr. Joseph P. Nathan, PharmD, MS, and Dr. Sara Grossman, PharmD
It was approximately 30 years ago that the Internet entered our lives and revolutionized the way we obtain information. Search engines such as Yahoo and AltaVista, and now Bing and Google, have put entire libraries at our fingertips. Today, artificial intelligence promises to take the marvels of technology to new heights. Specifically, chatbots such as Bard and ChatGPT, which are trained on vast amounts of text and can communicate in a way that resembles human dialogue, are becoming popular in all walks of life. Chatbots may be used for many things, from obtaining a cake recipe to writing a speech. Healthcare is another area where people looking for information may turn to chatbots. However, unlike a recipe or a speech, where an inaccuracy may result in a half-baked cake or a flubbed line, in healthcare, obtaining and acting on inaccurate or incomplete information may have serious consequences for a person’s health. It is therefore important to ascertain whether chatbots provide health-related information that is accurate and complete. As faculty at Long Island University’s College of Pharmacy, we decided to put ChatGPT, a popular chatbot, to the test.
At the College of Pharmacy, we provide a service to healthcare professionals by researching and answering their questions about medications. In May 2023, we took 39 of the questions received by the service and evaluated ChatGPT’s ability to answer them. We first searched the professional literature and developed answers that were accurate and complete. On average, each question took us approximately 45 minutes to answer, since the questions were somewhat complex and, in most cases, required multiple sources. We then entered the same questions into the free version of ChatGPT, which produced answers in a matter of seconds. We compared ChatGPT’s answers to the ones we had developed from the professional literature and found that ChatGPT answered only 10 of the 39 questions satisfactorily. Of the other 29 responses, some included inaccurate information, some omitted important information, and some did not directly answer the question that was asked. In our assessment, two of ChatGPT’s answers included inaccurate information that could have harmed the patient if acted upon. We concluded that ChatGPT is not ready to serve as a reliable source of information about medications.
Since our study examined only ChatGPT, we do not know how other chatbots would perform on the same test. And as the technology continues to evolve and improve, it is possible that newer, more advanced versions of ChatGPT will provide better responses. For now, however, the take-home message is that ChatGPT is not ready to replace sound professional advice from your pharmacist or doctor.
Drs. Nathan and Grossman are faculty members at Long Island University’s College of Pharmacy.