Google has discontinued a search feature that surfaced AI-organized health advice from anonymous internet users, according to the company and three independent sources. “What People Suggest” was designed to give users access to community-sourced health experiences through AI-curated summaries but was removed without a meaningful public statement from the company. The removal has since drawn renewed attention to the risks of deploying AI in health-sensitive search contexts.
The feature was launched at a Google-hosted health conference in New York, where then-chief health officer Karen DeSalvo explained how it would help users find relatable health perspectives. She described how people with conditions such as arthritis could use the feature to access exercise tips from others managing the same diagnosis. The AI clustered online health discussions and presented them in organized, accessible summaries.
Google confirmed the removal but denied that safety concerns played a role, attributing the decision to an effort to simplify the search interface. The company pointed to a blog post as its public disclosure of the change, yet that post contains no mention of the feature. The gap between Google's stated transparency and the available evidence has drawn sustained criticism.
The story fits within a larger crisis of confidence in Google's AI health tools. An investigation earlier in the year found that AI Overviews were surfacing false medical information to a monthly audience of roughly two billion users. The limited changes Google made in response, removing AI Overviews for some health searches, were described by health advocates as insufficient.
Google’s next “The Check Up” event is expected to showcase new health AI ambitions. But without genuine accountability for the failures of the past year, including the quiet removal of “What People Suggest,” those ambitions will be difficult to take at face value. Responsible health AI requires honesty as much as innovation.