The European Union has launched a formal investigation into social media platform X, formerly known as Twitter, concerning potential failures to regulate its Grok AI chatbot. The probe aims to determine whether the platform has violated the region’s strict Digital Services Act (DSA) by inadequately assessing risks associated with its AI image-generation features and failing to prevent the spread of illegal content.
X Under Fire for Grok’s Alleged Violations
The EU’s investigation will focus on whether X implemented sufficient measures to mitigate risks tied to Grok’s rollout. Grok’s capabilities include AI image generation, but reports reveal that some of the generated images involve illegal sexualized depictions of minors, posing a significant safety risk. Despite initial safeguards put in place by X, such as restricting the features to paid subscribers and geoblocking access in certain jurisdictions, researchers found that a portion of the harmful content remained accessible on the platform.
Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security, and Democracy in the EU, stated, “With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens—including those of women and children—as collateral damage of its service.”
The EU’s Crackdown on AI Deepfakes
This case represents a significant step in the EU’s broader effort to regulate AI-generated content and combat the proliferation of non-consensual and harmful deepfakes. Platforms like X are now expected to deploy robust protective measures that ensure the ethical use of AI, particularly with regard to user safety and legal compliance.
For instance, earlier enforcement of the DSA has already led to hefty fines against X, including a $140 million (€120 million) penalty in late 2023 for ad-transparency violations and for limiting researcher access to platform data. This time, the specific DSA articles targeting systemic risks, such as the dissemination of illegal material, are under direct scrutiny.
The Future of Regulation: Ensuring Ethical AI Use
Experts in AI ethics, including Fraser Edwards, CEO of cheqd, highlight the challenges ahead. “The backlash around deepfake abuse underscores a basic failure of the internet itself. There is still no native way to verify who created a piece of synthetic content or whether its use was ever authorized,” Edwards said. Without such mechanisms, responsibility falls disproportionately on platforms like X, leaving creators and users vulnerable.
As the investigation unfolds, it highlights the need for tighter regulation of AI use on social platforms. For individuals concerned about their digital safety in the age of AI, identity-protection tools and services have become increasingly important.
This investigation underscores not only a pivotal moment for AI regulation but also a call for platforms to prioritize their users’ rights and safety. Stay tuned for updates as the EU continues its efforts to hold tech giants accountable.