Rosenblatt's Perceptron: How the History of AI Began
In 1958, inventor Frank Rosenblatt created the perceptron, the first working device based on the principle of an artificial neuron. It could learn from examples and recognize simple visual patterns.

In 1958, American psychologist Frank Rosenblatt introduced the perceptron — a device that many consider the first practically functioning neural network. This was a decisive step that opened the path to modern artificial intelligence.
What is a perceptron?
A perceptron is an electronic machine that mimicked the work of a biological neuron in the brain. Visual information (for example, an image of an object or a letter) was fed as input and converted into electrical signals by an array of photocells. The device contained electronic components and a system of weights that could be adjusted mechanically; each weight determined how strongly one input signal influenced the machine's output decision.

The main innovation was that the perceptron could learn. The learning rule was simple: when the machine made a mistake, each weight was nudged up or down in the direction that would reduce that error. Over many repetitions, the perceptron became increasingly accurate at recognizing patterns. This resembled how the human brain learns through repetition and error correction.
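The weight-adjustment idea can be sketched in a few lines of code. This is an illustrative modern reconstruction, not Rosenblatt's original procedure: the data, learning rate, and epoch count are arbitrary choices, and logical AND stands in for the shape-recognition tasks the real machine handled.

```python
def predict(weights, bias, x):
    """Fire (1) if the weighted sum of the inputs crosses the threshold."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron rule: adjust the weights only when the prediction is wrong."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            if error != 0:  # a mistake -> nudge each weight toward the answer
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# Logical AND is linearly separable, so the rule converges:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_data)
print([predict(w, b, x) for x, _ in and_data])  # [0, 0, 0, 1]
```

The key property is that no recognition rule is programmed in: the behavior emerges entirely from the sequence of corrections.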
Rosenblatt's history and the birth of neural networks
Frank Rosenblatt (1928–1971) was an American psychologist and computer scientist who worked at the Cornell Aeronautical Laboratory. His project received support from the U.S. Navy, which saw enormous potential in automating the analysis of intelligence information and images. The first perceptron machine, named Mark I, was a tremendous achievement in engineering.
It was presented to the general public in 1958 at a press conference attended by journalists, scientists, and senior Navy officers. Rosenblatt confidently stated that the perceptron was the beginning of a completely new era of machine intelligence that would soon put an end to routine intellectual work. Physically, the device was a cabinet of electronics roughly the size of a refrigerator.
Inside were 400 photocells for image perception and a complex system of electronic circuits with motor-driven potentiometers for adjusting the weights.
Capabilities and limitations
The perceptron could solve tasks that were impressive for its time, although by today's standards they seem elementary:
- Recognize simple geometric shapes (squares, triangles, circles) with accuracy above 90%
- Distinguish Latin alphabet letters with sufficient accuracy for practical application
- Learn from a small set of examples without explicit programming of each recognition rule
- Adapt to minor variations in input data (rotations, shifts, scaling)
- Operate in real time, which was an outstanding technical achievement for electronics in the 1950s
But in 1969, mathematician Marvin Minsky and his colleague Seymour Papert published the book Perceptrons, in which they proved fundamental limitations of the device. They showed that a single-layer perceptron could not compute even a simple logical function like XOR, because its outputs cannot be separated by a single straight line. This criticism led to a sharp decline in interest and funding for neural network research. An era began that would later be called the "AI winter."
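The XOR limitation is easy to demonstrate empirically. In the sketch below (hyperparameters are arbitrary illustrative choices), a single threshold unit is trained on XOR for far longer than a separable task would ever need, yet it can never classify all four cases correctly, since no line separates XOR's outputs.

```python
def predict(weights, bias, x):
    """A single threshold unit: fire (1) if the weighted sum is positive."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# XOR: output 1 exactly when the two inputs differ
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

weights, bias = [0.0, 0.0], 0.0
for _ in range(1000):  # far more training than AND or OR would require
    for x, target in xor_data:
        error = target - predict(weights, bias, x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

correct = sum(predict(weights, bias, x) == t for x, t in xor_data)
print(f"{correct}/4 correct")  # never reaches 4/4: the weights just oscillate
```

Adding a second layer of neurons solves XOR easily, but Rosenblatt's learning rule offered no way to train such hidden layers; that gap was closed only decades later by backpropagation.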
What this means
Rosenblatt's perceptron was not simply an interesting engineering discovery — it was proof that machines could learn in practice. Before the perceptron, all computers were simply calculators that executed programmed instructions. The perceptron showed that an electronic device could change its behavior based on experience.

Today, more than 60 years later, modern neural networks are direct descendants of Rosenblatt's perceptron. Yes, they contain billions of parameters instead of a few hundred, run on graphics processors instead of electromechanical hardware, and use complex backpropagation algorithms instead of the simple perceptron learning rule. But the basic idea remains the same: a system of weights that adapts to data and learns to solve tasks through examples.