The illusion of mastery: how generative AI makes beginners look like experts
Parkinson's law in the AI era: work expands to whatever volume people can generate. The main danger is not laziness, but that beginners produce expert-looking work they have no way to evaluate.

Parkinson's Law states that work expands to fill the time available for its completion. In the era of generative AI the law has taken on a new dimension: output is no longer bounded by the hours in a day, only by how much content a person can generate.
The Appearance of Expertise Without Foundation
The first warning sign appeared about a year and a half ago: a colleague in a public discussion was responding exclusively with generated text. He was given away by the telltale signs: long dashes in odd places, a rhythmic structure, and confident arguments on topics he clearly didn't understand. Arguing was pointless: there was no meaningful dialogue, only copied model responses.
Here lies the critical problem: generative AI can produce work that looks expert-level even though the AI itself is not an expert. A beginner can quickly reproduce content that resembles the work of senior specialists, but without the experience to judge its quality.
Hallucinations as an Invisible Enemy
But that's not the most dangerous part. As AI has proliferated, two categories of problems have emerged:
- A beginner generates work close in substance to expert work; this at least can be measured and tracked
- People receive hallucinations and artifacts in areas they don't understand; this is the real danger
Researchers have managed to measure the first problem reasonably well. The second they have not, and in my observation it is far more dangerous. When someone doesn't understand a topic, they can't tell where the model is hallucinating and where it's telling the truth. The result: non-existent methods end up in production code, invented statistics in analyses, impossible recommendations in advice.
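To make the first of those failure modes concrete, here is a minimal sketch (the function name, the data, and the "admin" check are all hypothetical, invented for illustration): list.find() reads like a reasonable method, because str.find() and list.index() both exist, but it does not exist, and the error surfaces only when the branch actually runs.

```python
# Hypothetical sketch of a hallucinated API reaching code review.
# Lists have .index() and strings have .find(); a model can confidently
# blend the two into a call that reads fine but does not exist.

def first_admin_index(users: list[str]) -> int:
    # Looks plausible on review, fails only at runtime.
    return users.find("admin")  # AttributeError: 'list' object has no attribute 'find'

try:
    first_admin_index(["alice", "bob", "admin"])
except AttributeError as err:
    print(f"Hallucinated method: {err}")
```

The correct call here would be users.index("admin"); the point is that the wrong version reads just as confidently as the right one, and only someone who already knows the API will catch it before production.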
What This Means
We're entering an era not so much of illusory productivity as of illusory competence. AI gave everyone a tool to scale their output, but not the knowledge to tell the real from the hallucinated. This demands from professionals the opposite of laziness: critical thinking and an honest assessment of their own knowledge in every area they hand to AI.