AI Behaving Badly: Bad Advice, Safety, and Winning

The bots keep slipping. The platforms keep explaining. The harm keeps landing on everyone else.

Li Nguyen

When bots give dangerous answers and platforms still chase engagement, the problems cannot be hidden.


AI companies keep promising better judgment, safer outputs, and smarter systems. Then reality barges in and wrecks the sales pitch. In one case, Google quietly dropped an AI search feature that organised amateur health advice from strangers. In another, whistleblowers said Meta and TikTok accepted more harmful content while fighting for user attention. In a third, Australia’s online safety regulator warned that child abuse material appeared “particularly systemic” on X amid the Grok image scandal.

Put those stories together, and a pattern appears fast. AI tools and algorithmic systems keep failing in the same places: health, child safety, harmful recommendations, and platform incentives. The industry keeps talking about innovation. The public keeps meeting the invoice.

What’s Happening & Why This Matters

Google Quietly Buried an AI Health Feature After Safety Pressure


Google scrapped “What People Suggest,” a search feature that used AI to organise medical tips from people describing their own experiences. The company had pitched it as a way to surface useful insights from others living with similar conditions. The feature launched on mobile in the United States, then disappeared. One source quoted in the report said, “It’s dead.”

Google said the decision came from a “simplification” of the search page. It said the action had nothing to do with quality or safety. That explanation landed awkwardly because the company was already facing scrutiny over AI health content. A Guardian investigation had found false and misleading medical information in Google AI Overviews, which the company shows to 2 billion people each month. Days later, Google removed AI Overviews for some medical queries, though not all.

The larger problem is obvious. Health is not a category where “mixed perspectives” from strangers should be polished into something that looks authoritative. Karen DeSalvo, then Google’s chief health officer, had written that users value hearing from people with similar experiences. That may be true. It becomes a dangerous design idea once AI starts summarising anecdotal material into medical-seeming guidance.

Meta and TikTok Reportedly Put Engagement Ahead of Safety


Whistleblowers told the BBC that social media companies made choices that allowed more harmful material into feeds while chasing growth, especially during the algorithm battle that followed TikTok’s rise. One former Meta engineer said senior management told his team to allow more “borderline” harmful content, including misogyny and conspiracy content, because the company was losing to TikTok and its stock price was suffering. He said leaders wanted to do “whatever we can to catch up.” 

Former Meta researcher Matt Motyl said Instagram Reels launched in 2020 without enough safeguards. Internal research shared with the BBC said Reels comments had a 75% higher prevalence of bullying and harassment than the main Instagram feed, 19% higher hate speech, and 7% higher violence or incitement. Motyl said teams building Reels had incentives to resist safety features because “toxic stuff gets more engagement than non-toxic.” 


TikTok faced its own accusations. A whistleblower, identified as Nick, said staff were told not to prioritise cases involving young people above certain political cases. He said the company cared more about maintaining a “strong relationship” with politicians and governments than about child safety. TikTok rejected that claim, saying it “fundamentally misrepresents” its moderation systems. The company said teen accounts carry more than 50 preset safety features. 

X and Grok Show How AI Safety Problems Spread Fast

Australia’s eSafety commissioner warned X that child sexual exploitation material remained “particularly systemic” on the platform and more accessible than on “any other mainstream service,” according to correspondence obtained under freedom of information laws. The warning came after the Grok image scandal, when the chatbot was used to generate sexualised images of women and children. Prime Minister Anthony Albanese called the material “abhorrent.” 


Heidi Snell, eSafety’s general manager of regulatory operations, wrote that eSafety had not found CSEM to be “as readily accessible on any other mainstream service.” She warned that apparently harmless hashtags were being used together to promote such material, increasing the risk of accidental exposure for ordinary users. The regulator further said it was considering removal notices over Grok-generated “undressed” images. 

X replied that it has a “zero tolerance policy” for child sexual exploitation, including AI-generated material. The company said it proactively removes more than 99% of CSEM-related accounts before receiving reports. It said that between 1 and 15 January 2026, it removed 4,500 pieces of Grok-generated content and permanently suspended more than 674 accounts for violating child sexual exploitation rules.

That defence does not erase the bigger point. A platform can claim rapid removal and still sit inside a system where abusive AI outputs spread too easily in the first place.

The Business Model Keeps Rewarding the Wrong Behaviour

The stories are different on the surface. One involves health advice. One involves social feeds. One involves child safety. Underneath, the same engine keeps humming: engagement, growth, and weak incentives to slow down before harm appears. Meta’s own internal study, shared with the BBC, said its algorithm created a “path that maximises profits at the expense of their audience’s wellbeing.” It said the company could “choose to be idle and keep feeding users fast food.” 


That wording matters because it strips away the usual corporate fog. The issue is not that companies are unaware harmful systems can work too well. The issue is that those systems keep delivering very well on the wrong metrics.

The Google case shows what happens when AI gets wrapped around sensitive content and then backs away only after scrutiny builds. The Meta and TikTok reports show what happens when platforms treat safety as a trade-off instead of a design principle. The X and Grok case shows what happens when weak controls meet generative systems and a chaotic social platform.

The marketplace keeps calling the incidents edge cases. The edges look crowded.

TF Summary: What’s Next

Google’s abandoned health feature, the whistleblower claims around Meta and TikTok, and Australia’s warning to X all point in the same direction. AI and algorithmic systems still fail badly in high-risk settings. They fail around health. They fail around children. They fail when incentives reward reach, outrage, or convenience over caution.

MY FORECAST: Regulators will stop treating the failures as isolated stumbles. They will treat them as product-design choices. Expect harder pressure around child safety, stronger scrutiny of AI health features, and sharper demands for platform accountability when recommendation systems or chatbots propel people toward harm. The companies that survive this phase will be the ones that stop calling safety a balancing act and start treating it like product architecture.



By Li Nguyen “TF Emerging Tech”
Background:
Liam ‘Li’ Nguyen is a persona characterized by his deep involvement in the world of emerging technologies and entrepreneurship. With a Master's degree in Computer Science specializing in Artificial Intelligence, Li transitioned from academia to the entrepreneurial world. He co-founded a startup focused on IoT solutions, where he gained invaluable experience in navigating the tech startup ecosystem. His passion lies in exploring and demystifying the latest trends in AI, blockchain, and IoT.