EU Watchdog Concerned Grok Is Being Trained on X Tweets

Adam Carter

X’s lead data protection regulator in the European Union is raising alarms over the platform’s recent decision to use public tweets to train Grok, its new AI model. The move, made without prior notice, has sparked concerns about privacy and compliance with European regulations.

What’s Happening & Why This Matters

Last week, X quietly activated a setting that allows Grok, Elon Musk’s AI project, to train on public tweets. The change was rolled out on an opt-out basis, meaning many users are unaware their tweets are being used. Ireland’s Data Protection Commission (DPC), which oversees X’s compliance with EU privacy law because the company’s European headquarters is in Dublin, was caught off guard by the move.

The DPC had been in discussions with X for several months to ensure Grok’s training would not violate EU privacy law, yet X activated the new setting without any warning. A DPC spokesperson stated, “We have been engaging with X on this matter for months, and our latest interaction was just a day before the change was made.”

The DPC’s primary concern lies with potential violations of the General Data Protection Regulation (GDPR), which sets strict rules on how personal data may be collected and used. If the DPC opens a formal investigation into X over Grok, the company could face fines of up to 4% of its global annual turnover. The DPC has followed up with X and is awaiting a response, expecting further engagement soon.

For now, users can stop Grok from ingesting their tweets, but only from the desktop site; the control is not yet available in the mobile apps. Go to Settings > Privacy & Safety > Data sharing and personalization and uncheck the box that allows X to use your data to train Grok. X says it plans to extend the opt-out to mobile soon.

X’s Safety team has tweeted that “all X users have the ability to control whether their public posts can be used to train Grok the AI search assistant,” reaffirming their commitment to user control.

Past Issues with Grok

Grok has previously been criticized for spreading misinformation. It falsely declared the winner of an Indian election before the vote had taken place and gave US users inaccurate information about the status of presidential ballots. These incidents underscore how much accurate, well-managed training data matters for an AI assistant.

TF Summary: What’s Next

The DPC is awaiting a response from X and expects further engagement soon. If a formal investigation is launched, X could face significant penalties under the GDPR. Users concerned about their data can opt out by adjusting their privacy settings on desktop. The situation highlights the need for transparency and user control in AI training practices.

The ongoing discussions between the DPC and X will likely shape future guidelines and regulations surrounding AI training on social media data. The outcome of this situation could influence how other companies approach data privacy and user consent in their AI projects. For now, users should stay informed about their privacy settings and exercise their right to control their data.

