
AI-powered search engines are unreliable sources of medical information.


A recent study has found that AI-powered search engines and chatbots do not provide reliable information about medications. Researchers writing in the specialist journal "BMJ Quality & Safety" reported that the answers these chatbots give are often inaccurate or incomplete, as well as difficult for users to understand.

The researchers urged caution when dealing with this information and called for clear warnings to be shown to users. Varam Andrikian, lead author of the study from the Institute of Experimental and Clinical Pharmacology and Toxicology at Erlangen University in Germany, said: "Our results show that the quality of chatbot answers is still not adequate for safe use. We must stress that the information provided by these chatbots cannot replace professional advice."

The study set out to examine the information patients receive about their prescribed medications. In April of last year, the researchers posed ten common patient questions about the fifty most frequently prescribed drugs in the United States to the chatbot built into Microsoft's "Bing" search engine. The questions covered how to take the medications, their side effects, and contraindications.

Andrikian noted that the program performed well in some cases but failed to provide accurate answers in others, posing a risk to patients who may lack the medical background needed to judge the accuracy and completeness of AI-generated information.

Despite the rapid development of AI-powered search engines and integrated chatbots since the study was conducted last year, the expert explained that the improvements are still insufficient, meaning the risks to patient safety remain for the time being.
