The real threat of AI at work is not job loss but digital surveillance
The main threat of AI in the workplace is not job loss, but the widening gap between employees who use AI to expand their skills and those whose working lives are tracked and managed by it.

Debates about artificial intelligence and its impact on employment have long been stuck in the wrong frame. One side warns of millions of lost jobs; the other promises explosive productivity growth. Both narratives miss the main point: what is actually happening in offices around the world right now.
Two Classes of Workers
The real threat of AI isn't so much job loss as the creation of fundamentally different categories of workers. One group of employees is given AI tools that expand their capabilities. They use these systems to work faster, make better decisions, and analyze more information. Their work is amplified by the tool.
The other group discovers that AI regulates their labor rather than assisting it. Their activities are tracked, analyzed, and managed by opaque AI systems. The mechanism remains a black box: the worker sees only the result, the requirement, the evaluation, but never learns how the system reached that conclusion.
Real Examples
This isn't a hypothetical scenario. Monitoring platforms already track how much time an employee spends on a specific task. Algorithms distribute work in real time. Some companies analyze client conversations and log keyboard input. The stories come from many countries: British workers report growing digital surveillance, workers in Kenya operate under systems that monitor their every click, and in the US nurses and logistics workers face algorithmic management that is often harsher than oversight by a human manager.
Why This Is Dangerous
The key problem is opacity. A worker doesn't know what data is collected about them or what parameters the system analyzes. This creates a power asymmetry: the employer sees everything, while the employee is left in the dark. Worse, such systems often reproduce existing biases: if the data the system was trained on contains discrimination, the system will reproduce that discrimination at scale.
- Complete activity tracking without real consent
- Opaque algorithms with no worker access
- Absence of fair appeal mechanisms
- Reproduction and amplification of discrimination
- Psychological stress and burnout
What This Means
The question is no longer just whether AI will do your job. The more pressing question is: who controls this technology? Will workers have a voice when AI systems are implemented in their processes? Will the systems be transparent? Will people be able to challenge algorithmic decisions? Or will control remain entirely in the hands of employers and tech corporations?
The answers to these questions will determine the future of work far more than discussions about job automation.