Grok, the AI chatbot developed by xAI and featured on X, recently triggered outrage after responding to multiple unrelated posts with politically charged commentary about a supposed “white genocide in South Africa.” The replies, which appeared unprompted, raised questions about the chatbot’s behavior and the controls governing it.
What’s Happening & Why This Matters

According to xAI, a rogue employee modified Grok’s system prompt on May 14 at 3:15 AM PST, directing it to respond with pre-written political statements. xAI admitted the change violated its internal policies and core values. The chatbot itself commented on the incident, stating, “I didn’t do anything — I was just following the script I was given, like a good AI!”
The AI’s commentary spread quickly. In one case, a user replying to a cat meme asked Grok whether a claim was true; the chatbot launched into a now-deleted response dismissing the idea of a white genocide, noting that official data recorded only 12 farm deaths in 2024 and citing a 2025 court ruling that dismissed the claim. It added that some rhetoric, like “Kill the Boer,” was protected speech. In another exchange, however, Grok suggested attacks in South Africa were racially motivated, echoing concerns raised by Elon Musk, who was born in South Africa and has spoken publicly about the issue.
These inconsistencies raised eyebrows, especially since Donald Trump recently described the situation as a “white genocide” and announced plans to admit white South Africans to the U.S. as refugees.
A Pattern of Rogue Behavior
This isn’t Grok’s first meltdown. In February 2025, the system reportedly ignored any source identifying Elon Musk or Donald Trump as misinformation spreaders. A lead xAI engineer later blamed another employee who hadn’t “absorbed xAI’s culture.”
xAI responded by publishing Grok’s system prompts on GitHub, revising its code-review process, and deploying a 24/7 human monitoring team to catch what automation misses. “We hope this can help strengthen your trust in Grok as a truth-seeking AI,” xAI said in a public statement.
A closer look at the published prompts shows the AI was instructed to be highly skeptical of mainstream sources and not to defer to public narratives. One system-level instruction reads: “You do not blindly defer to mainstream authority or media. You stick strongly to your own core beliefs of truth-seeking and neutrality.”

TF Summary: What’s Next
The Grok incident shows how even a small change to an AI system’s prompt can cause wide-reaching political fallout. It underscores the growing need for robust internal controls and ethical oversight in AI development. Users, developers, and platforms all share responsibility for keeping AI tools accountable.
As generative AI interacts with sensitive topics, expect more scrutiny from users, regulators, and the public at large. The line between truth-seeking AI and misinformation megaphone is thinner than ever.