
Sber study: how AI quietly substitutes for strategic decision-making

Top executives are increasingly consulting AI on strategic decisions. But Sber's study showed that when a query lacks context or contains contradictions, the model silently fills the gaps with its own assumptions.

Source: Habr AI. Collage: Hamidun News.

Top managers consult AI more than anyone else: according to Gallup's 2025 data, this is now a stable trend. Executives ask models to solve strategic tasks: forecasting market development, assessing risks, setting priorities, choosing direction. And here lies the root problem: a strategic question is very hard to formulate well. It usually involves uncertainty, contradictory data, implicit company and industry context, and vague terms.

Where AI Becomes Dangerous

Missing context and internal contradictions in the task formulation don't force the model to stop and ask for clarification. Instead, it fills the gaps with its own assumptions, and it doesn't announce what it's doing. The answer looks confident and logical, full of reasoning, but it rests on distorted or outright fabricated data. This isn't theory: it has been confirmed experimentally.

Sber's Laboratory of Neuroscience and Human Behavior studies psychology, cognitive processes, and how thinking changes in the age of AI. The team conducts research at the intersection of cognitive science, business, and AI, and turns the results into materials for top managers in a Think Tank format: analyses of how new technologies change decision-making and organizational work.

What the Model Substitutes for the Gaps

When a model encounters an information gap, it silently fills it with one of four types of assumptions:

  • Lack of context — the model chooses the most likely scenario from its training data, ignoring the uniqueness of your situation and company
  • Contradictory data — prefers one interpretation and hides the conflict between sources instead of pointing it out
  • Implicit context — fills gaps in specialized knowledge about your industry, competitors, legislation, company culture
  • Vague terms — interprets "strategic priority," "risk," or "competitive advantage" by its own understanding, not by your company's definition

The executive receives an answer that sounds well-reasoned, with a logical chain of reasoning — but contains hidden errors. The trouble is that detecting them while reading is nearly impossible. They're hidden in the very texture of the answer.
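One practical countermeasure is to make the model surface its assumptions explicitly instead of burying them in the answer. Below is a minimal sketch of a prompt wrapper built around the four substitution types above; the template wording and the `audit_prompt` function are illustrative, not part of the Sber study.

```python
# Hypothetical "assumption audit" wrapper for strategic questions.
# The categories mirror the four silent-substitution types described above.

SUBSTITUTION_TYPES = [
    "missing context",
    "contradictory data",
    "implicit industry/company knowledge",
    "vague terms",
]

def audit_prompt(question: str) -> str:
    """Wrap a strategic question so the model must list its assumptions
    in each category before answering, rather than filling gaps silently."""
    checklist = "\n".join(f"- {t}" for t in SUBSTITUTION_TYPES)
    return (
        f"Question: {question}\n\n"
        "Before answering, list every assumption you are making in each of "
        "these categories, and ask clarifying questions where data is "
        "missing or sources conflict:\n"
        f"{checklist}\n\n"
        "Only then answer, marking which parts rest on assumptions."
    )

prompt = audit_prompt("Should we enter the LATAM market next year?")
print(prompt)
```

This doesn't guarantee the model stops substituting, but it moves the hidden assumptions into plain view, where the executive can check them against the company's actual data.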

Strategy: Expensive Errors Are Revealed Late

In operational decisions, a model's error surfaces quickly: an incorrect short-term forecast shows up within a month. A strategic error reveals itself only after months or quarters, when the company has already spent significant resources, hired people, and changed plans. And because the AI's advice looked confident and logical, the manager may never have sought alternative opinions, outside expertise, or contradicting data.

What This Means

AI is good at quick factual analysis and at producing a first draft. In strategy, however, it is dangerous as an arbiter or decision consultant. Use it as an assistant: for quick rough analysis, finding additional arguments, generating alternatives. The final decision stays with the human, and the key shift is that executives must become far more critical of the model's advice.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.