OpenAI cut hallucinations in ChatGPT by 52% — new GPT-5.5 Instant model
OpenAI improved ChatGPT’s accuracy. The new GPT-5.5 Instant model produces 52.5% fewer hallucinations on risky prompts (medicine, law, finance) and 37.3% fewer in complex conversations that users flagged as containing errors.

OpenAI introduced an updated default model for ChatGPT, GPT-5.5 Instant, which answers noticeably more truthfully and produces far fewer fabrications.
Improvement Figures
Hallucinations (when a model fabricates false information) have long frustrated ChatGPT users. OpenAI states that in internal testing, GPT-5.5 Instant produced 52.5% fewer fabricated facts than the previous Instant model (GPT-5.3) on high-risk questions in medicine, law, and finance. In complex conversations that users themselves flagged as containing errors, the new model reduced inaccurate statements by 37.3%. This is significant progress for domains where a mistake can cost money or harm someone's health.
Where It Helps Most
Improvements are most noticeable where errors are critical:
- Medicine and diagnosis
- Legal advice and interpretation of laws
- Financial planning and investments
- Complex technical questions
- Fact-checking and information verification
When It Can Still Make Mistakes
OpenAI acknowledges that this isn't a cure-all. Hallucinations have decreased, but they haven't disappeared entirely. The model can still make mistakes on entirely new facts absent from its training data and on highly specialized questions that require rare expertise.
What This Means
Making GPT-5.5 Instant the default is a signal that OpenAI is taking reliability seriously. For users who rely on ChatGPT for critical information work, the improvement will be noticeable. But healthy skepticism is still necessary: verify facts before important decisions and don't trust the answers blindly.