Scientists asked Bing Copilot – Microsoft’s search engine and chatbot – questions about commonly prescribed drugs. In terms of potential harm to patients, 42% of AI answers were considered to lead to moderate or mild harm, and 22% to death or severe harm.

https://www.scimex.org/newsfeed/dont-ditch-your-human-gp-for-dr-chatbot-quite-yet

4 Comments

  1. I’ve linked to the news release in the post above. In this comment, for those interested, here’s the link to the peer reviewed journal article:

    https://qualitysafety.bmj.com/content/early/2024/09/18/bmjqs-2024-017476

    From the linked article:

    We shouldn’t rely on artificial intelligence (AI) for accurate and safe information about medications, because some of the information AI provides can be wrong or potentially harmful, according to German and Belgian researchers. They asked Bing Copilot – Microsoft’s search engine and chatbot – 10 frequently asked questions about America’s 50 most commonly prescribed drugs, generating 500 answers. They assessed these for readability, completeness, and accuracy.

    The overall average readability score meant a medical degree would be required to understand many of the answers. Even the simplest answers required a secondary school education reading level, the authors say. For completeness of information provided, AI answers had an average score of 77% complete, with the worst only 23% complete. For accuracy, AI answers didn’t match established medical knowledge in 24% of cases, and 3% of answers were completely wrong. Only 54% of answers agreed with the scientific consensus, the experts say.

    In terms of potential harm to patients, 42% of AI answers were considered to lead to moderate or mild harm, and 22% to death or severe harm. Only around a third (36%) were considered harmless, the authors say. Despite the potential of AI, it is still crucial for patients to consult their human healthcare professionals, the experts conclude.

  2. This is why we need regulation on AI right now. Congress is asleep at the wheel and people are going to die. Not to mention the insane influx of people spreading fake AI images during an election cycle.

    As time goes on, this problem will only get worse.

  3. postmodernist1987 on

    So where are all these dead people? Why are their families not suing the owners of the search? Sounds like rubbish to me.

  4. This isn’t exclusive to AI.

    You can go on the internet, especially forum-based social media like Reddit, and find all sorts of dangerous misinformation that can lead to deadly consequences. There’s no shortage of pseudoscientists out there pushing misinformation to market and sell things.

    AI is essentially an aggregation of what’s already available on the internet.