Language models don’t understand time, but they talk about it
An LLM generates answers instantly, but often talks about timeframes as if they were real. In reality, language models have no concept of time — they simply reproduce statistical patterns from their training data.

Language models generate an answer in two seconds, but confidently state: "This task will take two weeks." Behind this strange contradiction lies something more fundamental than just an echo of training data — language models simply don't have what we call time.
How an LLM sees time
Language models work token by token, predicting the next word in a sequence. They have no internal clock and no sense of past and future as a continuum. For them, time is simply words that appear in the training data alongside other words. When a model says "two weeks," it is not assessing the real duration of a task. It produces a statistically probable answer based on how often the phrase "two weeks" appeared in contexts similar to the current one. It's like repeating a phrase you once heard while having forgotten the context it came from.
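A toy sketch of that mechanism, with invented phrases and probabilities standing in for what a real model learns from text: the "estimate" is just the most likely continuation, not a calculation of any duration.

```python
# Toy illustration: next-token choice as a frequency lookup, not time estimation.
# The context, candidate phrases, and probabilities below are invented for this example.
context = "How long will it take to build this feature? It will take"

# Probabilities a model might assign, learned purely from co-occurrence in text
candidate_continuations = {
    " two weeks": 0.41,
    " a few days": 0.27,
    " a month": 0.19,
    " two hours": 0.13,
}

# The model simply picks a statistically likely phrase; no clock or schedule is consulted
answer = max(candidate_continuations, key=candidate_continuations.get)
print(context + answer)  # ... It will take two weeks
```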
The speed and estimation paradox
Here's the crux of the problem: the model generates an answer faster than any human can write a full answer to a complex question. Yet it confidently names time frames that completely don't match its own speed. This isn't a simple mistake. It's a structural feature of how language models work. They don't model the process of solving a task over time — they only predict what words should come next. A system based on this principle physically cannot "understand" time the way humans do.
Why this matters
This reveals several key problems in using LLMs:
- A model cannot honestly assess task complexity; it can only guess based on statistics
- Its answers about timelines are not forecasts but hallucinations: statistically probable patterns
- When planning projects with AI, you must account for the fact that the model has no way to calculate real durations
- For critical assessments, human review is needed, not just model predictions
"A language model doesn't have what we call time," — emphasizing the
fundamental gap between how models work and how people think about them.
What this means
This is the first article in a series about collaborative thinking between humans and LLMs. The conclusion is simple: language models are not mini-humans with a fast processor. They are a completely different kind of system, operating by different rules. Using them without understanding this difference means inviting errors.