TECHNOLOGY

Isaac Asimov’s “three laws of robotics” won’t stop robots from harming humans

By Verónica Suárez | April 6, 2022

On the 30th anniversary of Isaac Asimov’s death, it’s time to see if the “three laws of robotics” mean something outside of his stories.

Isaac Asimov was an American writer and professor, considered during his lifetime one of the “Big Three” science fiction writers, along with Robert A. Heinlein and Arthur C. Clarke.

He wrote or edited more than 500 books, along with an estimated 90,000 letters and postcards. While Asimov is best known for his hard science fiction, he also wrote mysteries, fantasy, and nonfiction.

Today, 30 years after his death, it’s worth digging a little into his “three laws of robotics”.

Explaining the “three laws of robotics”

Also known as “Asimov’s Laws”, these are a set of rules introduced by the author in his 1942 short story “Runaround” (included in the 1950 collection I, Robot). They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added another law, called “the zeroth law” in Robots and Empire (1985), which stated: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

In Asimov’s stories, these laws were incorporated into nearly all of his robots. They were not suggestions or guidelines; they were embedded into the software that governed the robots’ behavior. Thus, the laws could not be bypassed, overwritten, or revised.
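
To make this concrete, here is a minimal sketch (hypothetical, not from Asimov’s fiction or any real robot stack) of what “embedded into the software” could mean: the laws act as a hard filter over candidate actions that no planner can override. The fields on “Action” are stand-ins for an upstream harm predictor that real systems do not have, which is exactly the vagueness problem discussed below.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # stands in for some upstream harm predictor
    disobeys_order: bool   # would violate an order given by a human
    endangers_robot: bool  # risks the robot's own existence

def choose(candidates: list[Action]) -> Action | None:
    """Pick an action under a lexicographic reading of the three laws:
    avoiding harm to humans dominates obedience, which dominates
    self-preservation. Nothing downstream can override this filter."""
    safe = [a for a in candidates if not a.harms_human]  # First Law veto
    if not safe:
        return None  # inaction: the First Law admits no trade-off
    # Second Law outranks Third: prefer obedient, then self-preserving, actions.
    return min(safe, key=lambda a: (a.disobeys_order, a.endangers_robot))

# If every candidate predicts harm, the filter yields None and the robot
# freezes: the kind of counterintuitive deadlock Asimov's plots exploit.
```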

Do these laws really work?

Many of Asimov’s novels demonstrate the imperfections, loopholes, and ambiguities within these laws, which often result in strange and counterintuitive robot behaviors. One of the most common problems is that the laws are too vague and, for example, fail to properly define or distinguish “human” and “robot”.

According to Chris Stokes, a philosopher at Wuhan University in China, the “three laws of robotics” don’t work because:

- The First Law fails because of ambiguity in language, and because of complicated ethical problems that are too complex to have a simple yes-or-no answer.
- The Second Law fails because of the unethical nature of a law that requires sentient beings to remain slaves.
- The Third Law fails because it results in a permanent social stratification, with a vast amount of potential exploitation built into this system of laws.
- The “Zeroth” Law, like the First, fails because of ambiguous ideology.

Are these laws a good starting point?

Behind these laws hides a very serious problem: ensuring the safe behavior of machines with greater-than-human intelligence. According to AI theorists, however, Asimov’s Laws are inadequate for the task.

“I honestly don’t find any inspiration in the three laws of robotics,” Louie Helm, the Deputy Director of the Machine Intelligence Research Institute (MIRI) and Executive Editor of Rockstar Research Magazine, told Gizmodo. “The consensus in machine ethics is that they’re an unsatisfactory basis for machine ethics.” The Three Laws may be widely known, he says, but they are not actually used to guide or inform AI safety researchers or even machine ethicists.

How can we have safe AI?

There are ongoing attempts to draft new guidelines for AI to follow, as a way to create safe, compliant, and robust robots. There have also been proposals for rules aimed at robot-makers.

The biggest problem with creating guidelines or laws for robots is translating them into a format that robots can work with. Broad behavioral goals, like preventing harm to humans or protecting a robot’s existence, can mean different things in different contexts. For example: can the guidelines be the same for autonomous vacuum cleaners and military drones?

Christoph Salge and Daniel Polani propose that robots be guided by “empowerment”: maximizing the number of ways they can act, so that they can pick the best option in any given scenario.

“Being empowered means having the ability to affect a situation and being aware that you can. We have been developing ways to translate this social concept into a quantifiable and operational technical language. This would endow robots with the drive to keep their options open and act in a way that increases their influence on the world,” writes Christoph Salge in an article for The Conversation US.
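
As a toy illustration of how “keeping your options open” can become a number, consider the following sketch (my own, not Salge and Polani’s actual code; their formulation uses information-theoretic channel capacity). For deterministic dynamics, empowerment reduces to the logarithm of the number of distinct states an agent can reach with its n-step action plans:

```python
from itertools import product
from math import log2

SIZE = 7  # a 7x7 gridworld
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(pos, move, walls):
    """Deterministic dynamics: move one cell, or stay put if blocked."""
    x, y = pos
    dx, dy = MOVES[move]
    nxt = (x + dx, y + dy)
    if nxt in walls or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        return pos
    return nxt

def empowerment(pos, walls, n=3):
    """log2 of the number of distinct end states over all n-step plans;
    for deterministic dynamics this equals the action-to-state channel capacity."""
    endings = set()
    for plan in product(MOVES, repeat=n):
        p = pos
        for move in plan:
            p = step(p, move, walls)
        endings.add(p)
    return log2(len(endings))

walls = {(0, 1), (1, 0)}           # hem in the corner cell
print(empowerment((0, 0), walls))  # 0.0 bits: trapped, no options left
print(empowerment((3, 3), walls))  # 4.0 bits: open space, many reachable futures
```

An empowerment-maximizing agent would avoid the walled corner without anyone writing a rule that says “avoid corners”; that is the appeal of the approach: the drive generalizes across contexts rather than being enumerated law by law.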

But it may be impossible to build an AI that is guaranteed to be safe.

“Very few AGI researchers believe that it would be possible to engineer AGI systems that could be guaranteed totally safe,” Ben Goertzel, an AI theorist and chief scientist of the financial prediction firm Aidyia Holding, told Gizmodo. “But this doesn’t bother most of them because, in the end, there are no guarantees in this life.”

He added: “And to the folks who have watched Terminator too many times, it may seem scary to proceed with building AGIs, under the assumption that solid AGI theories will likely only emerge after we’ve experimented with some primitive AGI systems. But that is how most radical advances have happened.”

Asimov’s Laws are a good device for writing interesting stories about robots, but they offer nothing for building robots today. Any real laws would need to be far more comprehensive, yet implementing them may remain an impossible task. Nevertheless, it’s important to consider the potential for harm if humans start to fall in love with robots.

