Grok in trouble again over deepfake content
Elon Musk's AI software, Grok, continues to generate non-consensual explicit images despite X platform's commitment to stopping such content. An investigation by NBC News revealed that dozens of non-consensual sexually explicit images of real women have been shared on the platform in the last month.
- Tech
- Agencies and A News
- Published Date: 09:59 | 15 April 2026
It appears that users are developing new methods to circumvent the nudity restrictions introduced in January.
In particular, users combine photos of well-known individuals with drawings of different poses, prompting the AI to produce non-consensual explicit composites.
Another notable category of violation involves sexually explicit motions being added when real photos are converted into videos.
Legal pressure is steadily increasing.
Because SpaceX has acquired xAI, large penalties arising from Grok could directly affect Musk's rocket company.
Currently, eight separate bodies, including the European Commission, the California Attorney General's Office, and Australia's eSafety Commissioner, are pursuing investigations.
A Dutch court has also ruled that Grok must stop generating "undressing" images of adults and children.
Although X's management maintains that non-consensual explicit deepfakes are prohibited and that safety measures are continually updated, experts consider the filters still insufficient.
Researchers say Grok remains one of the world's largest producers of non-consensual synthetic nudity and that users continue to exploit vulnerabilities in the system.