Dublin — Ireland’s Data Protection Commission (DPC) has launched a formal investigation into X, the platform formerly known as Twitter, over allegations it used publicly available data from users within the European Union and European Economic Area to train its generative artificial intelligence system, Grok. The move intensifies regulatory scrutiny over how big tech companies handle personal data in the AI era.
As the primary EU privacy regulator for X—by virtue of the company’s European headquarters being located in Ireland—the DPC confirmed that it is examining whether the collection and use of EU/EEA users’ data to train Grok violates the General Data Protection Regulation (GDPR). Under the GDPR framework, the DPC holds enforcement powers that include issuing fines of up to 4% of a company’s annual global turnover for breaches.
The investigation focuses on publicly accessible posts by EU-based users, which X allegedly used to train its AI without obtaining valid user consent.
AI Ambitions Collide with EU Privacy Protections
Grok, X’s AI chatbot, is part of the company’s push to compete in the rapidly evolving artificial intelligence sector. Backed by Elon Musk, who acquired the platform in 2022 and integrated it into his broader tech ecosystem under X Corp, the chatbot is intended to offer conversational AI services similar to those developed by other U.S. tech giants.
However, the manner in which Grok is being trained—by utilizing user-generated content—has sparked concerns among European regulators, who argue that such data falls under the protection of GDPR when it originates from identifiable EU citizens. The DPC has indicated it is seeking clarity on the legal basis under which X collected and processed this data, and whether adequate safeguards were put in place.
According to the commission’s official statement, the investigation will assess the platform’s compliance with transparency, accountability, and user consent requirements, all of which are core pillars of the EU’s privacy law.
A History of Tensions Between X and European Regulators
The current investigation comes less than a year after the DPC pursued court proceedings to block X from further processing EU user data for AI training. That case was resolved when the platform agreed to cease using personal data collected from EU users until explicit consent mechanisms were in place. At the time, X committed to making the change permanent, leading the DPC to drop its legal action.
However, this latest inquiry suggests that regulatory doubts persist about the company’s commitment to the terms it agreed upon. The Irish regulator’s decision to re-examine X’s data practices implies concerns that the company may have continued to use or mishandle EU user data after the agreement.
A Broader Regulatory Crackdown
The DPC has played a key role in enforcing GDPR across Europe, levying fines running into the billions of euros against several high-profile tech companies, including Meta, TikTok, and LinkedIn. Since the GDPR became enforceable in 2018, it has emerged as one of the most active privacy watchdogs in the region, reflecting the EU’s growing determination to hold large digital platforms accountable.
Despite the DPC’s aggressive stance in other cases, X has largely flown under the radar since its 2020 fine of €450,000. That penalty followed a data breach that exposed some users’ protected posts and was imposed for the company’s failure to notify and document the breach within the timelines the GDPR requires, marking the first time the DPC fined the company under the regulation.
Now, with the spotlight once again fixed on X, the DPC is expected to delve into the technical and procedural aspects of Grok’s development, including how user data was sourced, whether users were informed, and if they were given an opportunity to opt out.
Musk’s Disdain for EU Regulation
Elon Musk, who is known for his outspoken opposition to what he calls “overreaching bureaucracy”, has frequently criticized EU regulations as being hostile to innovation. A vocal supporter of limited government oversight, Musk has clashed repeatedly with European officials over issues ranging from digital content moderation to data governance.
As a close adviser to U.S. President Donald Trump, Musk has also echoed broader Republican criticisms that EU regulatory policies disproportionately target American firms. The Trump administration has regularly argued that fines imposed by EU regulators amount to de facto taxation of U.S. businesses.
This combative posture from the top has further strained relations between X and European institutions. While Musk and his executive team claim to prioritize user safety and compliance, critics argue that the company’s track record reflects a pattern of resistance when it comes to adhering to EU rules.
Legal and Financial Implications for X
If found to be in violation of GDPR, X could face a significant financial penalty. Under GDPR provisions, the DPC can impose fines based on the company’s total global revenue. Given X’s scale and the prominence of its AI ambitions, the outcome of this investigation could set a powerful precedent for how AI systems are trained using user-generated content.
Furthermore, the DPC’s inquiry might shape broader EU enforcement efforts around AI, including under the AI Act, which entered into force in 2024 and is being phased in across the bloc. As the AI Act’s provisions take effect, investigations like the one targeting X serve as test cases for how regulators will police the training of AI models.
Questions Around Consent and Transparency
At the heart of the DPC’s concerns is whether X sufficiently informed users that their data might be used to train AI systems, and whether it offered a genuine mechanism to withdraw or deny consent. GDPR mandates that such processing activities be lawful, fair, and transparent, with an emphasis on user autonomy and control.
While some tech companies rely on the GDPR’s “legitimate interests” legal basis to justify data processing, this approach has faced increasing legal pushback. For processing tied to AI training, the bar is higher still, especially when potentially sensitive or personally identifiable content is involved.
The outcome of the DPC’s probe will likely hinge on how X documented user consent, whether it maintained internal data audits, and if it honored user preferences when requested.
Industry-Wide Repercussions
The investigation into X is expected to resonate beyond the platform itself. As AI development becomes a central focus for many social media platforms and tech conglomerates, the use of real-world data—particularly from public posts—has emerged as a contested battleground between innovation and privacy rights.
Should the DPC find X in breach, other companies pursuing similar AI initiatives might need to reconsider how they collect and process user content. The inquiry also puts pressure on tech firms to build stronger compliance mechanisms to avoid similar legal scrutiny in the future.
While generative AI tools have shown remarkable potential across industries, the ethical and legal debates surrounding how they’re built—especially when it comes to personal data—are far from settled.
The Irish DPC’s investigation into X’s use of EU user data to train its AI system Grok marks another critical chapter in Europe’s effort to rein in big tech’s data practices. At a time when AI systems are rapidly reshaping the digital landscape, questions of transparency, consent, and legal accountability are becoming more pressing. The outcome of this case could have lasting implications not just for X, but for the future of AI development across the European Union.