Europe is once again turning its attention to big tech companies, and this time it is not about the Digital Markets Act, the regulation that aims to create a more level playing field in the digital space and the reason Apple Intelligence has not launched in the EU. Instead, European regulators are scrutinizing how X processes user data to train its AI chatbot, Grok.
X faces allegations that it used EU user data to train Grok without consent
According to recent reports, the Irish Data Protection Commission (DPC) has taken legal action against Twitter International, the Irish subsidiary of X, over its handling of the personal data of millions of European users. The DPC alleges that Twitter International has failed to meet its obligations under the GDPR, the EU regulation designed to protect information privacy and data security, in the way it processes that data for its AI chatbot, Grok.
Data protection watchdogs are particularly concerned that data from European users could be used to train the next version of Grok, which Musk has said is due to be released as soon as this month.
In July, X introduced a change that automatically opted all users into having their public posts used to train Grok. While the DPC acknowledged that X offered an opt-out option, it wasn't satisfied, alleging that X did not give users sufficient information about how their data would be used for training.
For a company to lawfully process user data under the GDPR, it typically needs explicit consent from the user or another legal basis, such as processing necessary for the performance of a contract. Other legitimate grounds exist as well, but the DPC's complaint suggests the regulator does not believe X's current practices rest on any solid legal basis.
Grok was introduced late last year. Image credit – xAI
Moreover, Twitter International appears to have ignored the DPC's requests to stop processing user data and to delay the release of the new Grok version. As a result, the DPC is moving forward with its lawsuit, asking the court to suspend or prohibit the company from using X users' data for AI training. If the court finds that X has violated GDPR rules, the company could face fines of up to 4% of its annual global revenue.
This is not the first time EU regulators have questioned AI training practices: in June, Meta, the parent company of Facebook and Instagram, halted plans to roll out AI models in Europe after facing similar GDPR complaints and increasing pressure from regulators, including the DPC.
I suspect regulatory pressure may be the only way to get big tech companies to take user privacy seriously, but given Elon Musk's history of not being particularly cooperative with privacy regulators, this is likely to be a hot topic for some time.