Google’s AI gives wrong health advice that could hurt people


Google’s AI Overviews feature presents false and misleading health information that puts people at risk, according to an investigation by The Guardian. The AI-generated summaries appear at the top of search results and claim to provide reliable snapshots of information. But health groups and charities identified multiple cases of dangerous inaccuracies.

Dangerous Medical Misinformation

The investigation found several alarming examples. Google's AI wrongly advised pancreatic cancer patients to avoid high-fat foods, which is the opposite of what doctors recommend.

Anna Jewell from Pancreatic Cancer UK said following this advice could be really dangerous. Patients who avoid high-fat foods might not take in enough calories, could struggle to put on weight, and may be unable to tolerate chemotherapy or surgery.

A search for liver blood test results also served up misleading information. The AI Overviews showed masses of numbers with little context. They did not account for nationality, sex, ethnicity or age.

Wrong Information About Cancer Tests

Pamela Healy from the British Liver Trust called the summaries alarming. She said people with serious liver disease may think they have normal results and not attend follow-up appointments.

The AI also listed a Pap test as a test for vaginal cancer. This is wrong: the test screens for cervical cancer. Athena Lamnisos from the Eve Appeal cancer charity said getting wrong information could lead someone to ignore vaginal cancer symptoms after a clear cervical screening result.

Growing Concerns About AI Reliability

The Guardian found the AI Overviews changed with repeated searches. People got different answers depending on when they searched. The feature also delivered misleading results for mental health conditions.

Stephen Buckley from Mind said some summaries offered very dangerous advice. Others missed important context or suggested inappropriate information sites.

Sophie Randall from the Patient Information Forum said the examples showed Google’s AI Overviews can put inaccurate health information at the top of searches. This presents a risk to people’s health.

Google said the vast majority of its AI Overviews were factual and helpful and that it continuously made quality improvements. A spokesperson added that the company invests significantly in quality for health topics.
