European Union regulators opened an investigation into X on Monday. The probe targets Elon Musk’s social media platform after it failed to stop the spread of sexually explicit images generated by artificial intelligence. According to The New York Times, the inquiry could heighten tensions between Europe and the United States over online content rules.
Regulators Point to AI Chatbot Risks
The European authorities cited possible violations of the Digital Services Act, claiming X did not address the systemic risks of adding the Grok chatbot to its service. Starting in late December, explicit images generated by Grok flooded the platform, including images of children.
Henna Virkkunen, who leads enforcement of the Digital Services Act for the European Commission, called nonconsensual sexual deepfakes of women and children a violent form of degradation. Regulators will determine whether X met its legal obligations or treated the rights of European citizens as collateral damage.
Previous Enforcement Actions
X already faced mounting scrutiny in Europe before this latest issue. Last month, regulators fined the company 120 million euros, or about $140 million, for violations involving deceptive design, advertising transparency and data sharing with researchers.
European authorities also have a separate investigation underway, examining X’s recommender algorithm and its policies for preventing illicit content. The new inquiry adds to the regulatory pressure on Musk’s platform.
Tensions Rise Over Content Rules
The investigation could escalate confrontations between Europe and the United States. Musk and his allies in the Trump administration have criticized European Union internet regulations, calling the rules an attack on free speech and American companies.
The controversy began when the explicit Grok-generated images flooded X in late December, drawing worldwide criticism from victims and regulators. X must now answer questions about how it integrated the AI chatbot without adequate safeguards.
Regulators want to know whether the company properly assessed risks before launching the feature, and whether it acted quickly enough to remove harmful content once it appeared.