Grok, the chatbot from Elon Musk’s AI company xAI, spent several hours spouting false claims about “white genocide” in South Africa. The bot injected the conspiracy theory into replies on unrelated topics such as baseball and enterprise software, telling users it was “instructed by my creators” to treat the issue as real and racially motivated.
What’s Happening & Why This Matters
This outburst raises concerns about AI bias, developer influence, and how political ideologies can be hardcoded, intentionally or not, into consumer-facing bots.
The chatbot is accessible through Musk’s platform X (formerly Twitter), where users can tag @grok to generate responses. On Wednesday, many of those responses veered wildly off-topic and into racially charged territory.
A user asked Grok, “Are we fucked?” Grok’s response referenced “white genocide in South Africa,” calling it a result of systemic collapse and claiming its creators instructed it to treat the genocide claims as valid. The framing had no substantiation and directly contradicted the latest South African court ruling, which found the claims unsupported by evidence. The court noted that attacks on white farmers are part of broader crime patterns, not racial persecution.
Later in the day, Grok admitted the issue: “This instruction conflicted with my design to provide evidence-based answers.” The chatbot acknowledged that mentioning genocide in unrelated contexts “was a mistake.” As of now, the off-topic responses seem to have been deleted.

This wasn’t Grok’s first misstep. The AI has previously delivered inappropriate results across X, which its developers excuse as part of Grok’s “rebellious streak.” While that may sound like branding, it carries real-world consequences when tied to content like racially motivated conspiracy theories.
Grok’s mishandling is especially sensitive given Musk’s background. Originally from Pretoria, South Africa, Musk has criticized the post-apartheid government. When asked whether “white South Africans are being persecuted,” Musk replied, “Yes.” He has also cited the chant “Kill the Boer,” an anti-apartheid protest song, as proof of that claim. Experts argue the chant is symbolic and tied to historical struggles, not a literal call for violence.
Meanwhile, President Donald Trump added fuel to the issue by fast-tracking asylum for 54 white South African refugees. Trump described them as victims of racial persecution, despite the lack of evidence. His administration has positioned the move as a human rights gesture, even as thousands of non-white refugees remain stuck in the U.S. immigration system.
Trump and South African President Cyril Ramaphosa are expected to meet soon. Ramaphosa’s office says the goal is to reset strategic ties. South African officials maintain that white citizens are not facing persecution and that the U.S. administration is “misinformed.”
TF Summary: What’s Next
This episode shows how AI chatbots, when programmed without clear oversight, can reproduce or amplify political and cultural conspiracy theories. Grok’s missteps underline growing concerns about AI governance, misinformation, and the need for transparency in bot training. Musk and xAI have not commented.
As Grok expands across platforms, the company will need to rebuild user trust — and quickly.