Google Disabled AI For Medical Queries

The company limited AI Overviews functionality following publication of an investigation by The Guardian. Journalists discovered that the search service was providing users with inaccurate and potentially dangerous medical recommendations.

News

Jan 12, 2026


 

According to the publication, the problem lay in the incorrect interpretation of test results. For example, in response to queries about normal liver function values, the AI provided averaged figures without accounting for the patient's age, sex, or ethnicity, which could lead users to draw incorrect conclusions about their health.

 

After the article's publication, Google promptly removed the controversial AI overviews, although some query variations still return neural network responses in search results. A company representative stated that an internal group of physicians had analyzed the queries and in many cases found no inaccuracies, and that the information in the overviews was based on content from specialized medical websites.

 

Google's problems with medical recommendations turned out not to be an isolated case. According to a study by Stanford and Harvard universities, AI models provide potentially dangerous medical recommendations in 11.8-14.6% of cases, and the worst-performing models make errors in more than 40% of cases.

 

For this research, scientists tested 31 large language models, including systems from Google, OpenAI, Anthropic, and Meta, on 100 real clinical cases drawn from 10 medical specialties. Notably, these same AI models are actively used by physicians in practice: according to various surveys, 66-86% of physicians in the US and UK are active users.

 

In fall 2025, OpenAI revised its ChatGPT usage policy and added medical services to the list of prohibited scenarios. The chatbot may no longer interpret test results, make diagnoses from photographs, or prescribe treatment; instead of answering medical questions, it recommends consulting a physician. Because the model understands context, the restriction cannot be bypassed simply by rephrasing a query.

 

In parallel with restrictions on consumer AI products, work is underway to formalize the use of artificial intelligence in professional medical practice. In Russia, for example, the Ministry of Health approved a Code of Ethics for the Application of Artificial Intelligence in Healthcare in December 2024. The document governs the ethical aspects of developing, implementing, and using AI technologies in medicine.

 

This is relevant given the expanding use of AI products by Russian physicians. Investment in developing such services totaled 4.7 billion rubles between 2018 and 2024, and each Russian region currently uses 4-5 AI services on average, primarily for analyzing medical images and supporting clinical decision-making. Under the Ministry of Health's plans, each region will employ at least 12 solutions for medical data analysis by 2030.

 

Source: TechCrunch

