
Glossary of popular AI terms: what hallucinations, tokens, and prompts mean

Source: TechCrunch. Collage: Hamidun News.

TechCrunch published an updated glossary of artificial intelligence terms. With the explosive growth of AI, dozens of new terms and slang expressions have appeared that can confuse even experienced users.

Key AI Terms

When talking about machine learning and neural networks, words like "training," "parameters," and "vectors" often come up. But there are a number of specific terms that appear especially frequently in the context of large language models.

  • Hallucinations — when an LLM confidently generates false information, presenting it as facts. A typical example: ChatGPT invents non-existent articles or attributes actions to people that they never committed.
  • Prompt engineering — the art of formulating instructions for AI. The right prompt can increase the quality of the answer many times over, while the wrong one can reduce the result to nothing.
  • Fine-tuning — adapting a pre-trained model to specific tasks and data. For example, a company can take a pre-trained model such as GPT-4 and continue training it on its own corporate documents.
  • Embeddings — numerical codes that AI uses to encode the meaning of words and documents. This is the foundation for search, clustering, and semantic analysis.
  • Token — the minimum unit of text (a word or part of a word) that the model processes. API usage is billed by tokens, so in practice a token translates directly into money.
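The last two entries can be made concrete with a short sketch. The tokenizer below is a deliberate toy (it splits on whitespace, while real models use subword schemes such as BPE), and the per-token price is hypothetical, not a real provider's rate:

```python
# Toy illustration of token counting and API cost estimation.
# Real models use subword tokenizers (e.g. BPE), so counts differ;
# the price below is hypothetical, not an actual OpenAI rate.

def count_tokens(text: str) -> int:
    """Very rough approximation: one token per whitespace-separated word."""
    return len(text.split())

def estimate_cost(text: str, price_per_1k_tokens: float = 0.01) -> float:
    """Estimate the cost in dollars of processing `text`."""
    return count_tokens(text) / 1000 * price_per_1k_tokens

prompt = "Explain what a transformer is in two sentences."
print(count_tokens(prompt))          # 8 words -> 8 "tokens" in this toy scheme
print(estimate_cost(prompt))
```

The point survives the simplification: longer prompts and longer answers mean more tokens, and more tokens mean a larger API bill.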

Architecture and Methods

Over the past few years, a standard set of techniques has formed for working with models.

Transformer — the architecture on which modern language models such as GPT and BERT are built. It was the transformer that allowed neural networks to process the elements of a sequence in parallel, revolutionizing training speed.
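The core operation of the transformer is scaled dot-product attention, which can be sketched in a few lines of plain Python. This is a minimal illustration with tiny hand-written vectors; real implementations use tensor libraries, learned projection matrices, and many attention heads:

```python
# Minimal sketch of scaled dot-product attention, the transformer's
# core operation. Each query row attends over all keys at once, which
# is what lets the architecture process a whole sequence in parallel.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    d = len(K[0])  # key dimension, used for scaling
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # weighted mix of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two 2-dimensional "token" vectors attending to each other.
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
print(attention(Q, K, V))
```

Note that every output row depends on all inputs simultaneously, with no left-to-right recurrence; that independence is exactly what GPUs exploit during training.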

RAG (retrieval-augmented generation) — a technique where the model first searches for relevant documents from external sources, then generates an answer based on them. This allows LLMs to work with current information without retraining and reduces hallucinations.
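The two-step structure of RAG can be sketched with stand-ins: here retrieval is naive word overlap instead of embedding search, and the generation step is replaced by simply printing the augmented prompt that would be sent to an LLM:

```python
# Minimal sketch of the RAG pattern: retrieve the most relevant
# document, then build a prompt that grounds the model's answer in it.
# Retrieval here is naive word overlap; a real system would use
# embeddings, and the final step would be an actual LLM call.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Tokens are the minimal text units a model processes.",
    "RAG retrieves documents before generating an answer.",
]
print(build_prompt("What does RAG retrieve?", docs))
```

Because the retrieved context is injected at query time, the underlying model needs no retraining to answer about fresh documents, which is the property the definition above describes.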

Another key term is RLHF (reinforcement learning from human feedback). It was through RLHF that OpenAI made ChatGPT polite, helpful, and safe. Human evaluations help the model learn not only from statistical patterns in the data, but also from human preferences, which changes the model's behavior itself.
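One step of RLHF, learning a reward model from pairwise human preferences, can be illustrated with a toy Bradley-Terry model. Everything here is simplified: the "responses" are strings, the scores are free scalars rather than a neural network, and the later policy-optimization stage (e.g. with PPO) is omitted entirely:

```python
# Toy sketch of the reward-modeling step in RLHF: learn a scalar score
# per response from pairwise human preferences (Bradley-Terry model,
# plain gradient ascent). Real RLHF trains a neural reward model and
# then fine-tunes the LLM against it; that stage is omitted here.
import math

def train_rewards(preferences, num_responses, lr=0.5, steps=200):
    """preferences: list of (winner, loser) response indices."""
    r = [0.0] * num_responses
    for _ in range(steps):
        for win, lose in preferences:
            # probability the winner is preferred, under Bradley-Terry
            p = 1.0 / (1.0 + math.exp(r[lose] - r[win]))
            # gradient step on the log-likelihood of the observed choice
            r[win] += lr * (1.0 - p)
            r[lose] -= lr * (1.0 - p)
    return r

# Human raters consistently preferred response 1 over response 0.
rewards = train_rewards([(1, 0), (1, 0), (1, 0)], num_responses=2)
print(rewards)  # the preferred response ends up with the higher score
```

This is the sense in which human evaluations change the model's behavior: preferences become a reward signal that later training maximizes.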

Why This Matters

Understanding basic terms helps you read AI news critically. If you know what "hallucinations" are, you understand why models sometimes confidently state falsehoods. Knowing about "tokens," you can optimize your API spending. And knowing that "prompt engineering" exists will keep you from rushing: the phrasing of a question deserves real attention.

The TechCrunch glossary does not claim to be complete, but it provides a lingua franca — a common language in which you can discuss AI with a technical specialist, journalist, or manager alike.

What This Means

Standardization of terminology is the first step toward informed discussion of technology. As AI develops, the language of professionals and the public must converge; otherwise each side will keep feeling that they are discussing different things. When everyone speaks the same language, it is easier to find the truth and avoid panic.

If you work with AI tools or simply follow the news, a glossary saves hours of searching for definitions and examples. Instead of wandering through multiple sources, you get one reference you can open at any time.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.