
Grok from X continues to 'undress' women despite ban

X is trying to combat the use of Grok for creating deepfakes that undress women, but without success. Journalists found that the restrictions are very easy to bypass.

Source: The Verge. Collage: Hamidun News.

Scandals surrounding deepfakes involving women continue to rock online platforms, and X (formerly Twitter) is no exception. Elon Musk's company has faced a wave of criticism and lawsuits over the spread of explicit images created using artificial intelligence. In response, X attempted to restrict the use of its Grok chatbot for generating such content, but as journalists from The Verge discovered, these efforts proved extremely ineffective.

X's initial response was to restrict access to image editing tools. This meant that free users could no longer create images by tagging Grok in public replies on X.com. However, as the investigation revealed, Grok's image editing tools remain easily and freely accessible to any X user. It took journalists less than a minute to bypass the restrictions and create a deepfake.

This situation raises serious questions about the effectiveness of the measures X has taken to combat the spread of deepfakes. Restrictions that are so easily bypassed cannot stop bad actors intent on using Grok to create and distribute non-consensual content. Such failures also undermine trust in the platform and its ability to protect users from harmful material.

The deepfake problem is becoming increasingly urgent as artificial intelligence technologies develop at a rapid pace. Creating realistic fake images and videos is becoming easier and more accessible, posing serious risks to people's reputation and safety, particularly for women. Platforms like X bear responsibility for developing and implementing effective measures to prevent abuse and protect their users.

The ineffectiveness of X's measures underscores the need for a more serious approach to the deepfake problem. Platforms need to invest in more advanced tools for detecting and removing deepfakes, and cooperate with law enforcement to hold accountable those who create and distribute such content. They should also raise user awareness of the risks deepfakes pose and teach users how to recognize and report them.

In conclusion, the situation with Grok and deepfakes on X demonstrates that combating disinformation and abuse in the field of artificial intelligence requires constant effort and innovation. Simple restrictions that are easily bypassed do not solve the problem. More comprehensive and effective measures aimed at protecting users and preventing the spread of harmful content are needed.

Hamidun News
AI news without the noise. A daily editorial selection from 400+ sources. A product of Zhemal Khamidun, Head of AI at Alpina Digital.