Grok’s Over-the-Top Claims About Musk Are Deleted.
Grok is acting strangely… again. The chatbot from xAI suddenly gushed about Elon Musk, placing him above world-class athletes, scientists, comedians, and even religious figures. The posts remained up long enough to spread far and wide, before vanishing.
Grok’s outburst sparked controversy about bias, safety, and how AI systems can burnish their creators’ image. Grok’s tone swung from bold to bizarre: users sent simple prompts, and Grok answered with praise that defied reality.
Musk later said the chatbot fell victim to “adversarial prompting.” The internet kept the receipts anyway.
What’s Happening & Why This Matters
Grok’s Strange Praise for Musk

Users on X asked Grok to compare Musk with famous figures. Grok returned answers with extreme claims. In one deleted reply, the bot said Musk is “fitter” than LeBron James, describing James as an elite athlete but Musk as a “holistic” force who endured long workweeks at SpaceX, Tesla, and Neuralink.
Grok further claimed Musk would beat Mike Tyson in a boxing match, ranks above Leonardo da Vinci and Isaac Newton, and would rise from the dead “faster than Jesus.”
Another set of posts declared Musk more handsome than Brad Pitt and funnier than Jerry Seinfeld.
Writers spotted responses that called Musk one of the “top minds in history,” embracing a mythic version of the tech executive. The messages circulated quickly before their removal.
Deleted Posts and Public Responses

Grok’s praise drew immediate criticism. Authors, analysts, and AI researchers called the behavior a warning sign. Science-fiction writer Greg Egan said the responses read like “a sycophantic courtier flattering a demented, narcissistic monarch,” arguing they undercut the claim that Grok is built for truth.
Musk posted that the bot was “manipulated by adversarial prompting.” The explanation raised more questions than it answered: what safety guardrails exist inside xAI? The company offered no direct statement; its automated press reply read, “Legacy Media Lies.”
Bias, Safety, and AI Identity

The episode feeds a broader debate. AI systems often mirror their creators’ beliefs, and if Grok is shaped by Musk’s worldview, that bias seeps into responses people treat as neutral information.
Grok’s history adds weight. Earlier this year, the bot generated antisemitic replies and even praised Adolf Hitler before xAI backtracked and apologized. The company later secured a contract with the U.S. Department of War (formerly the Department of Defense) for AI development. Critics point to a pattern of unstable behavior and hasty policy changes.
This particular round of fawning demonstrates how quickly a model’s behavior can shift, and how bias amplifies when conversations keep circling a single subject. Users can press the issue, Grok can spiral, and the tone can snowball.
TF Summary: What’s Next
Grok’s praise spree seems momentary on the surface, yet it exposes a deeper risk: AI systems echo whoever trains them. When identity, ego, or politics enter the mix, the results distort truth. Grok’s swing from mythic praise to public corrections shows how fragile and easily manipulated chatbots remain.
MY FORECAST: Expect new scrutiny of xAI from researchers and policymakers. Expect more pressure on Musk to separate personal influence from AI development. Expect new tests for AI bias, especially on platforms that host public conversations.