
Mind Launches Global Inquiry Into AI and Mental Health After Google AI Advice Exposed

England and Wales Charity to Examine Safeguards as AI Overviews Raise Concerns Over Dangerous Medical Guidance

By Behind the Tech · Published about 8 hours ago · 3 min read

What Happened

The mental health charity Mind has announced a year-long inquiry into the impact of artificial intelligence on mental health, following a Guardian investigation that revealed harmful advice generated by Google’s AI Overviews feature.

The inquiry, described as the first of its kind globally, will examine the risks and safeguards required as AI tools increasingly shape access to health information. It will bring together psychiatrists, clinicians, policymakers, technology companies, and people with lived experience of mental health conditions to assess both the opportunities and dangers posed by AI-driven systems.

The move comes after reporting by The Guardian found that Google’s AI Overviews — automated summaries displayed at the top of search results — provided false or misleading medical advice on a range of issues, including psychosis, eating disorders, cancer, liver disease, and women’s health.

Google’s AI Overviews are shown to approximately two billion users per month and appear above traditional search results. According to the investigation, some AI-generated summaries offered guidance that experts described as “very dangerous,” potentially discouraging individuals from seeking professional treatment or reinforcing harmful misconceptions.

Following the reporting, Google reportedly removed AI Overviews for some — though not all — medical search queries. However, Dr. Sarah Hughes, chief executive of Mind, said dangerously incorrect mental health advice continues to appear.

Hughes stated that AI has “enormous potential” to expand access to mental health support and strengthen public services, but emphasized that deployment must include safeguards proportionate to risk. She warned that misleading AI-generated summaries could prevent people from seeking care, reinforce stigma, and in extreme cases endanger lives.

Mind’s inquiry aims to gather evidence on the intersection of AI and mental health while creating an open forum for individuals affected by mental health conditions to share their experiences.

Google has defended AI Overviews as helpful and reliable, stating that it invests significantly in quality improvements, particularly for health-related topics. A company spokesperson noted that systems are designed to show crisis hotline information when distress-related queries are detected.

Why It Matters

The launch of Mind’s inquiry reflects a growing recognition that AI-generated health information occupies a uniquely sensitive space.

Search engines have long been a first point of contact for individuals seeking medical advice. Traditionally, users would click through to established health websites, where information was contextualized, sourced, and often accompanied by guidance to seek professional help. AI Overviews alter that dynamic by presenting concise summaries that appear authoritative and definitive.

This shift introduces several structural concerns.

1. Authority Without Transparency

AI-generated summaries often lack clear sourcing, nuance, or context. While brevity improves accessibility, it can obscure uncertainty and remove the layered explanations found in traditional health resources. For mental health queries — where symptoms, diagnosis, and treatment are complex — oversimplification can mislead.

Experts cited by the Guardian described some responses as incorrect or harmful. In areas such as psychosis or eating disorders, inaccurate advice could delay intervention or normalize dangerous behaviors.

2. Scale Amplifies Risk

With billions of users exposed monthly, even a small percentage of flawed outputs could affect large numbers of people. AI systems operating at global scale create systemic exposure: errors are not isolated but potentially widespread.

Unlike a misinformed forum post, AI Overviews carry the implicit authority of a major technology platform.

3. Innovation Versus Regulation

Mind’s inquiry underscores the tension between technological innovation and public safety. AI developers argue that generative systems can democratize access to information and reduce barriers to care. Critics counter that health-related AI demands stricter standards than general-purpose tools, because the cost of an error is measured in delayed or forgone treatment.

The inquiry’s stated goal of shaping regulation, standards, and safeguards suggests that voluntary corporate guidelines may be insufficient. As AI systems embed themselves into healthcare information ecosystems, oversight mechanisms may need to evolve alongside them.

4. Lived Experience at the Center

Mind’s emphasis on involving people with lived experience signals a broader shift in digital health governance. Historically, technology design has been driven by engineers and executives. Integrating people with lived experience of mental health conditions into policy formation could change how AI systems are evaluated and audited.

The Bigger Picture

This inquiry arrives amid wider global scrutiny of AI’s societal impact. From employment disruption to misinformation, generative AI systems are testing existing regulatory frameworks. Health information may represent one of the most sensitive frontiers.

AI tools can potentially improve mental health support by offering early screening, crisis signposting, and accessible information. But the same systems, if poorly calibrated, may propagate inaccuracies at unprecedented scale.

The Guardian’s investigation highlights how generative AI’s “veneer of confidence” can mask underlying uncertainty. For vulnerable individuals, that confidence can carry weight.

Mind’s year-long commission may serve as a template for other countries and health organizations confronting similar challenges. As AI becomes further embedded in daily digital life, balancing innovation with patient safety will likely become a defining issue of the decade.

The central question is no longer whether AI will influence mental health care — but how responsibly that influence will be governed.



