
Anthropic's "Dreaming": why AI companies should stop anthropomorphizing features

Anthropic has named a new feature for AI agents "dreaming": the ability to sort through context and "memories." But a Wired journalist criticizes the trend toward anthropomorphizing AI.

Source: Wired. Collage: Hamidun News.

At its developer conference, Anthropic introduced a new feature for AI agents: they can now "dream" and process their "memories." But one journalist warns that it's time for companies to stop presenting machine processes as human capabilities.

What happened at Anthropic

At its developer conference, Anthropic announced a feature it called "dreaming" for AI agents. In practice, it means an agent can reprocess its context and "memories" (actually, data stored in memory) to analyze tasks better. It sounds intriguing, but the name is pure marketing. In reality, the algorithm reindexes available data. That is not sleep, not a dream, not the subconscious. It's computation. But "dreaming" sounds better than "contextual reindexing," so the company chose that word.

A wave of anthropomorphism

This isn't the first such case. A few months ago, OpenAI announced "extended thinking," which is actually just a longer computational process with intermediate steps. Before that, AI companies talked about models having "memory," "consciousness," and "self-awareness." All of these names try to explain machine processes through human metaphors. The problem isn't metaphors as such; it's that companies deliberately use anthropomorphism as a marketing and PR tool.

Why this is dangerous

When a company calls a feature "dreaming" or "memory," users subconsciously start thinking that AI sleeps and dreams like a human. Or that a neural network has consciousness. Neither happens. These names create problems:

  • They mislead users about AI's capabilities and limitations
  • They exaggerate near-term technological possibilities
  • They make complex processes seem simple and human-like
  • They complicate honest conversations about how AI actually works
  • They create false product expectations

The journalist gives an example: call a feature "memory," and people will assume the AI remembers them between sessions, learns their personal details, and can build a long-term relationship. In fact, it's just fast retrieval of context from a database, reset with every new request.

Why companies do this

Catchy names mean social-media buzz, clicks, and media coverage. "AI gained the ability to dream" makes a better headline than "the algorithm reindexes context to optimize request processing." Investors and journalists love hearing it, and startups win the attention race with evocative names. But it's risky. Oversweetened marketing creates hype; hype creates inflated expectations; expectations end in disappointment. Then regulators demand honesty, and companies lose trust.

What this means

AI companies need honest terminology. Not "dreaming" and not "thinking": those words only work if you don't respect your audience's ability to handle complexity. Anthropic and OpenAI should be bolder. Honest feature descriptions are the best PR in the long run. And the world needs a conversation about what AI can and cannot do, without pretty deception, because people have the right to know what they're dealing with.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.