‘Very dangerous’: a Mind mental health expert on Google’s AI Overviews

Mind has launched a year-long commission to examine AI and mental health after a Guardian investigation exposed how Google’s AI Overviews, which are shown to 2 billion people each month, gave people “very dangerous” mental health advice.

Here, Rosie Weatherley, information content manager at the largest mental health charity in England and Wales, describes the risks posed to people by the AI-generated summaries, which appear above search results on the world’s most visited website.

“Over three decades, Google designed and delivered a search engine where credible and accessible health content could rise to the top of the results.

“Searching online for information wasn’t perfect, but it usually worked well. Users had a good chance of clicking through to a credible health website that answered their query.

“AI Overviews replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness.


“It’s a very seductive swap, but not a responsible one, and it often ends the information-seeking journey prematurely. The user is left with half an answer, at best.

“I set myself and my team of mental health information experts at Mind a task: 20 minutes searching using queries we know people with mental health problems tend to use. None of us needed 20.

“Within two minutes, Google had served AI Overviews that assured me starvation was healthy. It told a colleague mental health problems are caused by chemical imbalances in the brain. Another was told that her imagined stalker was real, and a fourth that 60% of benefit claims for mental health conditions are malingering. It should go without saying that none of the above are true.

Rosie Weatherley said that, during a test conducted by Mind experts, Google served false information in AI Overviews, including that starvation is healthy. Photograph: Jill Mead/The Guardian

“In each of these examples we are seeing how AI Overviews are flattening information about highly sensitive and nuanced areas into neat answers. And when you take out important context and nuance and present it in the way AI Overviews do, almost anything can seem plausible.


“This process is especially harmful for people who are likely to be in some level of distress.

“A multi-billion-dollar company like Google that profits from AI Overviews should dedicate more resources to providing accurate information. The extent of its concern seems limited to reactively retraining or removing AI Overviews when individuals, organisations or indeed journalists flag harmful results. This ‘whack-a-mole’ style of problem-solving feels unserious and not scaled to the size and resources of the company profiting from them.


“Search engines have evolved to make access to the most harmful search results, like suicide methods, less immediately available. But if you search as an unwell person might search, the risk remains that you will be served harmful inaccuracies and half-truths, presented in calm and confident copy as uncontroversial neutral facts with the stamp of approval from the world’s biggest search engine.

“In a search for crisis information, the AI Overview haphazardly collaged various contradictory signposts in long lists.

“Perhaps AI has enormous potential to improve lives, but right now, the risks are really worrying. Google will only protect you from the potential faults of AI Overviews when it thinks you’re in acute distress. People need and deserve access to constructive, empathetic, careful and nuanced information at all times.”
