Anthropic asks applicants not to use AI assistants when submitting resumes
Anthropic, the creator of Claude, has told all applicants, from engineers to marketers, finance professionals, and communications staff, not to use AI when submitting their applications.

In February 2025, Anthropic, the company behind one of the leading AI assistants, Claude, gave job applicants a surprising instruction: do not use AI when submitting an application. The rule applies not only to engineers but to everyone else as well: marketers, finance professionals, communications staff, sales specialists. At first glance it looks like a paradox. At second glance it is a diagnosis of a disease afflicting the entire labor market.
Why Anthropic Is Making This Request
Anthropic did not explain its decision in great detail, but the essence is easy to understand: the company wants to see how a candidate writes on their own, without help from AI assistants. It is a familiar request from any employer: show that the work is yours, that these are your thoughts and your writing style rather than neural-network output styled to sound like you. The requirement has one fundamental problem, however: it is almost impossible to verify.
How exactly do you distinguish a resume a person wrote from scratch from one they drafted roughly, ran through Claude, and then edited in a few places? There is no reliable technical way to check. That makes Anthropic's requirement a polite acknowledgment of a sadder truth: the company has stopped trusting resumes as a source of information about candidates.
When the Hiring Mechanism Broke
For many years, a resume worked as a reliable signal. A prospective employer would look at the text, the project descriptions, the phrasing, the grammar, and from those details form a picture of how well the candidate could structure their thoughts, describe their achievements, and write persuasively. That mechanism is now broken:
- A strong candidate can quickly draft something in Claude, polish it, and the employer won't know the difference
- A weak candidate can send an almost perfect resume, completely written by AI, and no one will expose them
- By some estimates, roughly half of candidates use AI when applying and half do not, but no one knows who is who, and there is no way to check
As a result, resumes have lost their informational value. A resume is now simply a nicely written text of unknown origin. Anthropic's requirement is an attempt to restore an honest comparison of candidates, though it only works if everyone agrees to follow it.
The Problem Affects All Major Companies
Anthropic has stated the problem explicitly, but Google, Meta, OpenAI, Stripe, and AWS face it too: every year they see the share of suspiciously well-written resumes grow. Some companies have responded by intensifying evaluation: instead of relying on resumes, they probe candidates with hard interview questions, run coding challenges, and require case studies completed in real time. That approach is slower and more expensive, but it works, because under those conditions an AI assistant helps less. Anthropic chose a more honest path and simply asked candidates not to use AI. The outcome will be the same either way: people will follow the request or they won't, but at least the issue is now out in the open.
What This Means
This is not a solution; it is a diagnosis. Anthropic is showing that when a technology becomes universally available, the old methods of validation turn into a kind of shadow theater: a resume can look beautiful while being simultaneously perfect and useless as a source of information. In the long term, the hiring market will adapt and new signals will emerge, perhaps real-time video interviews, perhaps quick tests taken with open access to Claude. Anthropic's requirement is just the first step toward making hiring mean something again.