
Deepfake porn as a weapon of harassment: how AI creates a new form of violence

A woman discovered synthetic porn featuring her face simply by running a professional photo through a facial-recognition search. Her case is far from unique.

Source: MIT Technology Review. Collage: Hamidun News.

When Jennifer started working at an NPO in 2023, she ran her professional photograph through facial recognition—simply wanting to check her visibility online. The system returned a result that shocked her: deepfake porn videos with her face, created from videos she had filmed ten years earlier. Jennifer's story is not an exception, but a warning about the scale of the crisis that AI has created.

Synthetic Content in Minutes

Tools for creating deepfake pornography have long been freely available. No expertise is required: pick an open-source model, upload a few frames of the target's face, and the job is done in minutes. Links to the most popular tools circulate on GitHub, Reddit, and TikTok, and each update makes the process simpler and the output more convincing. The main driver is demand, which appears inexhaustible, especially on niche platforms and the dark web.

Trauma and Scale

The effect on the victim is immediate and traumatic. Synthetic pornography spreads like wildfire: friends find it, colleagues see it, employers may react. In some cases, the content is used for blackmail. Women are overwhelmingly the targets: roughly 90% of deepfake porn victims are women.

  • Psychological trauma and post-traumatic stress in victims
  • Job loss and relationship breakdown due to "leaks"
  • Use of content for blackmail and extortion
  • Reputation damage that cannot be fully undone
  • Loss of control over one's image

Platforms remove content once complaints are filed, but the process is slow. By then the video has already been copied thousands of times, and complete removal is an illusion.

Defense Attempts

Several countermeasures are already underway. Some jurisdictions (South Korea, several US states, the UK) have criminalized deepfake pornography. Platforms and researchers are developing synthetic-content detectors, but these lag behind the quality of 2026-era generative models. Awareness campaigns and victim-support efforts are growing as well. Still, the law trails the technology, and detectors remain unreliable.

What It Means

Deepfake pornography is not just another category of crime; it is a new form of digital abuse. Technology created for creativity has become a tool of violence on an industrial scale. For society, it means the privacy of one's own image has de facto disappeared. For platforms, it means an urgent need for faster, more effective moderation; for lawmakers, regulation that keeps pace with the technology.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.