The censorship paradox: why neural networks were banned from generating the nude human body

The censorship paradox: why neural networks were banned from generating the nude human body
Source: Habr AI. Collage: Hamidun News.
Why are generative neural networks prohibited from reproducing nude human bodies when one can freely see them in museums, private spaces, and educational materials? This question, raised on Habr, opens a profound philosophical paradox about who sets the rules for artificial intelligence systems and why.

The Paradox of Censorship

At first glance, the answer is simple: censorship. But it goes deeper. The nude human body is one of the most classical themes in art history. It is depicted in museums around the world, hangs in private bedrooms, and is used for educational purposes in medicine and anatomy. Art students have been drawing nude models for centuries. No one forbids it. Yet when it comes to generative neural networks, the rules change abruptly: systems like DALL-E, Midjourney, and Stable Diffusion impose strict restrictions on generating any images of nude human bodies. Where did the belief come from that AI must not do what human art has been doing for millennia?

Two Standards for One Object

Here's the heart of the paradox:

  • Michelangelo's sculpture in a museum — recognized art and a masterpiece
  • A classical portrait of a nude model in an artistic context — legitimate art
  • An image of the human body in a medical atlas — science and education
  • A nude body in private space between partners — personal and private
  • AI-generated image of a nude body — categorically forbidden

One and the same visual object receives completely different statuses depending on who created it and in what context. Why can an artist do it, but an algorithm cannot?

The Roots of the Prohibition

Developers of neural networks fear two main things: the potential use for creating deepfake pornographic content without people's consent, and powerful social pressure from activists. This makes sense from a commercial standpoint — it's easier to prohibit completely than to deal with nuances. But it creates a strange hierarchy of morality: a dead artist can, a living person can, but a computer algorithm cannot.

Who Writes the Rules for AI

Here's an interesting point: this is not law in the classical sense. It's a company decision. OpenAI, Google, Anthropic, Meta — they choose these restrictions themselves, based on their own perceptions of risk and reputation. And they choose them not because the restrictions follow logically from deep ethics, but because they fear scandals, lawsuits, and regulation. Meanwhile, thousands of models without these restrictions are openly available: on Hugging Face, and as local open-source solutions anyone can run on their own computer. The major commercial platforms chose the path of maximum conservatism. The result? Users simply switch to the open models.

Ethics or Marketing

This paradox demonstrates something important: rules for AI are often established not on the basis of deep ethical analysis, but out of fear of scandals and reputational risk. Companies create the appearance of moral control, although in reality it is a commercial calculation. Does this do any good? Probably not. Real prevention of undesirable use requires more sophisticated approaches than a blanket ban. And attempts to impose one standard or another through corporate platforms only lead to fragmentation and loss of control.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.