
AI takes hold in finance — through employees, not management

In corporate finance departments, AI is spreading not thanks to management strategy, but in spite of it. Employees are already automating reports, analyzing risks, and processing documents, while formal policies lag behind.

Source: MIT Technology Review. Collage: Hamidun News.

An unusual conflict is unfolding in the finance departments of large companies: employees are adopting AI tools en masse to work faster, while management scrambles to catch up with policies to govern the technology. The result is paradoxical: one of the most heavily regulated sectors of the economy is experiencing a spontaneous AI revolution.

How AI Penetrates Finance

Finance department employees are already using AI for tasks that previously took hours. Analysts upload quarterly reports to ChatGPT and other LLM applications to extract key figures. Risk specialists run AI models to assess credit portfolios. Teams automate document processing: contract scanning, KYC verification, data extraction. All this happens quickly and often without explicit approval from the compliance department. Employees see the efficiency gains: processing a document takes minutes instead of hours, and portfolio calculations accelerate many times over. It works, so people keep doing it.
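The ad-hoc workflow described above usually amounts to little more than a prompt template plus some parsing of the model's answer. A minimal sketch in Python, with the actual model call replaced by a stub (the prompt wording, field names, and `parse_figures` helper are illustrative, not from the article):

```python
import json

# Template an analyst might use to pull key figures out of a quarterly
# report. In practice the filled-in prompt would be sent to a
# chat-completion API; here the response is simulated.
EXTRACTION_PROMPT = (
    "Extract the following figures from the report as JSON: "
    "revenue, net_income, total_assets. Respond with JSON only.\n\n"
    "Report:\n{report}"
)

def build_prompt(report_text: str) -> str:
    """Fill the extraction template with the report text."""
    return EXTRACTION_PROMPT.format(report=report_text)

def parse_figures(llm_response: str) -> dict:
    """Parse the model's JSON answer and check required fields are present."""
    figures = json.loads(llm_response.strip())
    for key in ("revenue", "net_income", "total_assets"):
        if key not in figures:
            raise ValueError(f"missing figure: {key}")
    return figures

# Simulated model response for some report text.
response = '{"revenue": 120.5, "net_income": 14.2, "total_assets": 980.0}'
figures = parse_figures(response)
print(figures["revenue"])
```

The point of the sketch is how little infrastructure this takes: no IT project, no procurement, just a prompt and a few lines of glue code, which is exactly why such tools spread faster than policy can follow.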

Why Management Lags Behind

Finance directors and chief risk officers often realize the problem too late. By the time they start drafting AI usage policies and audit requirements, half the department already depends on ChatGPT for daily work. Management faces several challenges:

  • Lack of visibility — it is difficult to track which tools each employee uses
  • Conflict with regulators — financial regulators demand explanations of the sources and reliability of the algorithms in use
  • Risk of data leaks — employees may upload confidential reports to public services
  • Accountability for errors — if an AI model miscalculates a risk, who is responsible?

Add to this staffing constraints: there are no specialists who understand AI deeply enough for proper assessment.
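The data-leak risk in the list above is the one most amenable to a quick technical stopgap: screening text for obviously confidential markers before it is pasted into a public AI service. A minimal, hypothetical sketch — the patterns below are illustrative examples, not a real DLP policy:

```python
import re

# Illustrative patterns for confidential material. A production data-loss-
# prevention policy would be far broader; these just show the idea.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b[A-Z]{2}\d{2}(?:\s?\d{4}){2,7}\b"),  # IBAN-like account numbers
    re.compile(r"\b\d{16}\b"),                          # bare 16-digit card numbers
    re.compile(r"(?i)\bconfidential\b"),                # explicit document markings
]

def flags_confidential(text: str) -> bool:
    """Return True if the text matches any confidential-data pattern."""
    return any(p.search(text) for p in CONFIDENTIAL_PATTERNS)

print(flags_confidential("Internal - CONFIDENTIAL - Q3 risk memo"))  # True
print(flags_confidential("Public quarterly earnings summary"))       # False
```

Even a crude filter like this gives compliance teams a first measure of control; the harder problems the article lists, such as visibility into which tools are used and accountability for model errors, have no comparably simple fix.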

Regulatory Risks

Regulators are already paying attention to this problem. Financial supervisory authorities are starting to require banks and investment funds to explain: which AI systems are used, how they are tested, who is responsible for errors.

"Financial organizations must have complete visibility in AI usage,

from tool selection to result validation," say regulators, but standards and rules do not yet exist.

This creates a wave of retroactive regulation: companies are forced to document and audit, after the fact, what has already happened. Some financial organizations have already received regulatory warnings for using unverified AI solutions in critical processes.

What This Means

The financial sector demonstrates that regulation always lags behind technology. By the time a governance framework begins to take shape, dozens of unapproved solutions are already in daily use. Ahead lies a long process of restoring order and developing standards that balance innovation with risk management.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.