
Asimov’s Three Laws Are Becoming Reality for Humanoid Robots

Source: Habr AI. Collage: Hamidun News.

Asimov's Three Laws of Robotics are moving from fiction into reality. More than 80 years ago, the writer formulated simple rules for machines that seemed purely theoretical. But today, as humanoid robots learn to walk, run and grasp objects, those rules have become a concrete engineering problem.

Asimov's Laws: A Reminder

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

"A machine may be more intelligent, but it will remain a slave; a human may be less intelligent, but he will be the master."

Asimov captured the essence of this relationship in those three laws.

Why Theory Became an Engineering Task

Humanoid robot technology is advancing rapidly. Boston Dynamics, Tesla (with Optimus) and Figure AI are shipping machines more agile than any industrial manipulator. They move through human spaces: offices, warehouses, homes. And here a problem emerges that Asimov foresaw:

  • A robot must distinguish human intent from actual danger
  • During a malfunction, code must prevent the machine from attacking
  • The safety system must work without network connection
  • Power failures, sensor failures and memory failures must not trigger unsafe behavior
  • The machine must be able to say "I don't know" instead of making a dangerous assumption

It sounds simple, but in real engineering it is hellishly complex. A robot cannot see through walls. A recognition system makes mistakes. A human may give a contradictory command. And somewhere in all this noise, Asimov's Three Laws must still hold.
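The last requirement in the list above, the ability to say "I don't know," can be sketched as a fail-safe decision policy: any uncertainty resolves to the safest action rather than a guess. This is a minimal illustration, not any vendor's actual code; the `Perception` type, the confidence threshold, and the action names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Hypothetical output of a vision model."""
    human_detected: bool
    confidence: float  # 0.0–1.0, how much the model trusts its own reading

def next_action(p: Perception, threshold: float = 0.9) -> str:
    """Fail-safe policy: uncertainty never becomes a dangerous assumption."""
    if p.confidence < threshold:
        return "stop"            # "I don't know" → halt, do not guess
    if p.human_detected:
        return "slow_and_yield"  # confident a human is near → give way
    return "proceed"             # confident the path is clear
```

Note the ordering: the uncertainty check comes first, so a low-confidence "no human detected" still halts the machine, e.g. `next_action(Perception(False, 0.5))` returns `"stop"`.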

How It's Implemented Now

The developers' approach is pragmatic rather than Asimovian. Boston Dynamics uses physical limiters on speed and force. Tesla builds redundancy and failure monitoring into training. Figure AI works on behavioral models that trade off performance against safety. No one writes simple code like IF robot_sees_human THEN stop(). The reality is millions of conditional statements, learning from data, and probabilistic models that must work across different environments.
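The "physical limiters" idea above can be illustrated as a software analogue: a final clamp layer that caps whatever the upstream planner commands, combined with a watchdog that freezes the actuators if the control loop stops reporting in. A toy sketch, assuming made-up limits; real robots enforce this in hardware and certified firmware, not a Python function.

```python
# Assumed (hypothetical) hard caps, enforced regardless of what the planner asks for.
MAX_SPEED = 0.5   # m/s
MAX_FORCE = 20.0  # N

def clamp_command(speed: float, force: float, heartbeat_ok: bool) -> tuple[float, float]:
    """Last line of defense between the planner and the motors."""
    if not heartbeat_ok:
        # Watchdog tripped: the control loop went silent, so freeze everything.
        return 0.0, 0.0
    # Clamp speed symmetrically and force to a non-negative bounded range.
    safe_speed = max(-MAX_SPEED, min(MAX_SPEED, speed))
    safe_force = max(0.0, min(MAX_FORCE, force))
    return safe_speed, safe_force
```

The point of the design is that the clamp has no knowledge of intent: even a buggy or compromised planner requesting `clamp_command(2.0, 50.0, True)` gets at most `(0.5, 20.0)` out of the actuators.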

What It Means

Asimov was right and wrong at the same time: right that machine safety is a central engineering task, wrong that it can be solved with three simple laws. The real laws for robots are thousands of pages of specifications, hardware failure tests, ethical guidelines and legal precedents still to be written. Yet the first question an engineer asks of a humanoid today sounds exactly like Asimov's First Law.

Hamidun News
AI news without noise. Daily editorial selection from 400+ sources. A product by Zhemal Khamidun, Head of AI at Alpina Digital.